Merge pull request #120 from yangsoon/fix-error-delete

recover deleted documents
This commit is contained in:
Wei (段少) 2021-06-15 14:30:53 +08:00 committed by GitHub
commit 251155f105
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
151 changed files with 11650 additions and 0 deletions

View File

@ -0,0 +1,27 @@
![alt](resources/KubeVela-03.png)
*Make shipping applications more enjoyable.*
# KubeVela
KubeVela is a modern application platform that makes deploying and managing applications across today's hybrid, multi-cloud environments easier and faster.
## Community
- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel
- Gitter: [Discussion](https://gitter.im/oam-dev/community)
- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs)
## Installation
Installation guide is available on [this section](./install).
## Quick Start
Quick start is available on [this section](./quick-start).
## Contributing
Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela.
## Code of Conduct
KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

View File

@ -0,0 +1,123 @@
---
title: Other Install Topics
---
## Install KubeVela with cert-manager
KubeVela can use cert-manager to generate certs for your application if it's available. Note that you need to install cert-manager **before** the KubeVela chart.
```shell script
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --create-namespace --set installCRDs=true
```
Install KubeVela with cert-manager enabled:
```shell script
helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core
```
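To verify the installation, you can check that the cert-manager pods are running (a quick sanity check; pod names will vary by version):
```shell script
kubectl get pods -n cert-manager
```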
## Install Pre-release
Add the `--devel` flag to the `helm search` command to choose a pre-release
version in the format `<next_version>-rc-master`. It means a release candidate version built on the `master` branch,
such as `0.4.0-rc-master`.
```shell script
helm search repo kubevela/vela-core -l --devel
```
```console
NAME CHART VERSION APP VERSION DESCRIPTION
kubevela/vela-core 0.4.0-rc-master 0.4.0-rc-master A Helm chart for KubeVela core
kubevela/vela-core 0.3.2 0.3.2 A Helm chart for KubeVela core
kubevela/vela-core 0.3.1 0.3.1 A Helm chart for KubeVela core
```
Then install it with the following command:
```shell script
helm install --create-namespace -n vela-system kubevela kubevela/vela-core --version <next_version>-rc-master
```
```console
NAME: kubevela
LAST DEPLOYED: Thu Apr 1 19:41:30 2021
NAMESPACE: vela-system
STATUS: deployed
REVISION: 1
NOTES:
Welcome to use the KubeVela! Enjoy your shipping application journey!
```
## Upgrade
### Step 1. Update Helm repo
You can explore the newly released chart versions of KubeVela by running:
```shell
helm repo update
helm search repo kubevela/vela-core -l
```
### Step 2. Upgrade KubeVela CRDs
```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_approllouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
```
> Tips: If you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable`, please delete the CRDs that report the error and re-apply the KubeVela CRDs.
```shell
kubectl delete crd \
scopedefinitions.core.oam.dev \
traitdefinitions.core.oam.dev \
workloaddefinitions.core.oam.dev
```
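Then re-apply the deleted CRDs by re-running the corresponding `kubectl apply` commands from above, for example:
```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
```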
### Step 3. Upgrade KubeVela Helm chart
```shell
helm upgrade --install --create-namespace --namespace vela-system kubevela kubevela/vela-core --version <the_new_version>
```
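You can confirm the upgrade by listing the release; the chart version column should show the new version:
```shell
helm ls -n vela-system
```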
## Clean Up
Run:
```shell script
helm uninstall -n vela-system kubevela
rm -r ~/.vela
```
This will uninstall the KubeVela server component and its dependency components.
It also cleans up the local CLI cache.
Then clean up the CRDs (CRDs are not removed by Helm by default):
```shell script
kubectl delete crd \
appdeployments.core.oam.dev \
applicationconfigurations.core.oam.dev \
applicationcontexts.core.oam.dev \
applicationrevisions.core.oam.dev \
applications.core.oam.dev \
approllouts.core.oam.dev \
componentdefinitions.core.oam.dev \
components.core.oam.dev \
containerizedworkloads.core.oam.dev \
healthscopes.core.oam.dev \
manualscalertraits.core.oam.dev \
podspecworkloads.standard.oam.dev \
scopedefinitions.core.oam.dev \
traitdefinitions.core.oam.dev \
workloaddefinitions.core.oam.dev
```
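To verify that the cleanup succeeded, list the remaining CRDs and filter on the OAM API groups; the command should return nothing:
```shell script
kubectl get crd | grep oam.dev
```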

View File

@ -0,0 +1,41 @@
---
title: vela
---
```
vela [flags]
```
### Options
```
-e, --env string specify environment name for application
-h, --help help for vela
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
* [vela config](vela_config) - Manage configurations
* [vela delete](vela_delete) - Delete an application
* [vela env](vela_env) - Manage environments
* [vela exec](vela_exec) - Execute command in a container
* [vela export](vela_export) - Export deploy manifests from appfile
* [vela help](vela_help) - Help about any command
* [vela init](vela_init) - Create scaffold for an application
* [vela logs](vela_logs) - Tail logs for application
* [vela ls](vela_ls) - List applications
* [vela port-forward](vela_port-forward) - Forward local ports to services in an application
* [vela show](vela_show) - Show the reference doc for a workload type or trait
* [vela status](vela_status) - Show status of an application
* [vela system](vela_system) - System management utilities
* [vela template](vela_template) - Manage templates
* [vela traits](vela_traits) - List traits
* [vela up](vela_up) - Apply an appfile
* [vela version](vela_version) - Prints out build version information
* [vela workloads](vela_workloads) - List workloads
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela cap
---
Manage capability centers and installing/uninstalling capabilities
### Synopsis
Manage capability centers and installing/uninstalling capabilities
### Options
```
-h, --help help for cap
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela cap center](vela_cap_center) - Manage Capability Center
* [vela cap install](vela_cap_install) - Install capability into cluster
* [vela cap ls](vela_cap_ls) - List capabilities from cap-center
* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela cap center
---
Manage Capability Center
### Synopsis
Manage Capability Center with config, sync, list
### Options
```
-h, --help help for center
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities)
* [vela cap center ls](vela_cap_center_ls) - List all capability centers
* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center
* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela cap center config
---
Configure (add if not exist) a capability center, default is local (built-in capabilities)
### Synopsis
Configure (add if not exist) a capability center, default is local (built-in capabilities)
```
vela cap center config <centerName> <centerURL> [flags]
```
### Examples
```
vela cap center config mycenter https://github.com/oam-dev/catalog/tree/master/registry
```
### Options
```
-h, --help help for config
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 6-May-2021

View File

@ -0,0 +1,37 @@
---
title: vela cap center ls
---
List all capability centers
### Synopsis
List all configured capability centers
```
vela cap center ls [flags]
```
### Examples
```
vela cap center ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela cap center remove
---
Remove specified capability center
### Synopsis
Remove specified capability center
```
vela cap center remove <centerName> [flags]
```
### Examples
```
vela cap center remove mycenter
```
### Options
```
-h, --help help for remove
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela cap center sync
---
Sync capabilities from remote center, default to sync all centers
### Synopsis
Sync capabilities from remote center, default to sync all centers
```
vela cap center sync [centerName] [flags]
```
### Examples
```
vela cap center sync mycenter
```
### Options
```
-h, --help help for sync
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela cap install
---
Install capability into cluster
### Synopsis
Install capability into cluster
```
vela cap install <center>/<name> [flags]
```
### Examples
```
vela cap install mycenter/route
```
### Options
```
-h, --help help for install
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela cap ls
---
List capabilities from cap-center
### Synopsis
List capabilities from cap-center
```
vela cap ls [cap-center] [flags]
```
### Examples
```
vela cap ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela cap uninstall
---
Uninstall capability from cluster
### Synopsis
Uninstall capability from cluster
```
vela cap uninstall <name> [flags]
```
### Examples
```
vela cap uninstall route
```
### Options
```
-h, --help help for uninstall
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,32 @@
---
title: vela completion
---
Output shell completion code for the specified shell (bash or zsh)
### Synopsis
Output shell completion code for the specified shell (bash or zsh).
The shell code must be evaluated to provide interactive completion
of vela commands.
### Options
```
-h, --help help for completion
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash
* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,41 @@
---
title: vela completion bash
---
generate autocompletions script for bash
### Synopsis
Generate the autocompletion script for Vela for the bash shell.
To load completions in your current shell session:
$ source <(vela completion bash)
To load completions for every new session, execute once:
Linux:
$ vela completion bash > /etc/bash_completion.d/vela
MacOS:
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
```
vela completion bash
```
### Options
```
-h, --help help for bash
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela completion zsh
---
generate autocompletions script for zsh
### Synopsis
Generate the autocompletion script for Vela for the zsh shell.
To load completions in your current shell session:
$ source <(vela completion zsh)
To load completions for every new session, execute once:
$ vela completion zsh > "${fpath[1]}/_vela"
```
vela completion zsh
```
### Options
```
-h, --help help for zsh
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela components
---
List components
### Synopsis
List components
```
vela components
```
### Examples
```
vela components
```
### Options
```
  -h, --help   help for components
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela config
---
Manage configurations
### Synopsis
Manage configurations
### Options
```
-h, --help help for config
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela config del](vela_config_del) - Delete config
* [vela config get](vela_config_get) - Get data for a config
* [vela config ls](vela_config_ls) - List configs
* [vela config set](vela_config_set) - Set data for a config
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela config del
---
Delete config
### Synopsis
Delete config
```
vela config del
```
### Examples
```
vela config del <config-name>
```
### Options
```
-h, --help help for del
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela config get
---
Get data for a config
### Synopsis
Get data for a config
```
vela config get
```
### Examples
```
vela config get <config-name>
```
### Options
```
-h, --help help for get
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela config ls
---
List configs
### Synopsis
List all configs
```
vela config ls
```
### Examples
```
vela config ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela config set
---
Set data for a config
### Synopsis
Set data for a config
```
vela config set
```
### Examples
```
vela config set <config-name> KEY=VALUE K2=V2
```
### Options
```
-h, --help help for set
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela delete
---
Delete an application
### Synopsis
Delete an application
```
vela delete APP_NAME
```
### Examples
```
vela delete frontend
```
### Options
```
-h, --help help for delete
--svc string delete only the specified service in this app
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela env
---
Manage environments
### Synopsis
Manage environments
### Options
```
-h, --help help for env
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela env delete](vela_env_delete) - Delete environment
* [vela env init](vela_env_init) - Create environments
* [vela env ls](vela_env_ls) - List environments
* [vela env set](vela_env_set) - Set an environment
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela env delete
---
Delete environment
### Synopsis
Delete environment
```
vela env delete
```
### Examples
```
vela env delete test
```
### Options
```
-h, --help help for delete
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,40 @@
---
title: vela env init
---
Create environments
### Synopsis
Create environment and set the currently using environment
```
vela env init <envName>
```
### Examples
```
vela env init test --namespace test --email my@email.com
```
### Options
```
      --domain string      specify domain for your applications
--email string specify email for production TLS Certificate notification
-h, --help help for init
--namespace string specify K8s namespace for env
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela env ls
---
List environments
### Synopsis
List all environments
```
vela env ls
```
### Examples
```
vela env ls [env-name]
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela env set
---
Set an environment
### Synopsis
Set an environment as the current using one
```
vela env set
```
### Examples
```
vela env set test
```
### Options
```
-h, --help help for set
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,35 @@
---
title: vela exec
---
Execute command in a container
### Synopsis
Execute command in a container
```
vela exec [flags] APP_NAME -- COMMAND [args...]
```
### Options
```
-h, --help help for exec
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
-i, --stdin Pass stdin to the container (default true)
-s, --svc string service name
-t, --tty Stdin is a TTY (default true)
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,32 @@
---
title: vela export
---
Export deploy manifests from appfile
### Synopsis
Export deploy manifests from appfile
```
vela export
```
### Options
```
  -f string   specify file path for appfile
-h, --help help for export
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,27 @@
---
title: vela help
---
Help about any command
```
vela help [command]
```
### Options
```
-h, --help help for help
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela init
---
Create scaffold for an application
### Synopsis
Create scaffold for an application
```
vela init
```
### Examples
```
vela init
```
### Options
```
-h, --help help for init
--render-only Rendering vela.yaml in current dir and do not deploy
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,32 @@
---
title: vela logs
---
Tail logs for application
### Synopsis
Tail logs for application
```
vela logs [flags]
```
### Options
```
-h, --help help for logs
-o, --output string output format for logs, support: [default, raw, json] (default "default")
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela ls
---
List applications
### Synopsis
List all applications in cluster
```
vela ls
```
### Examples
```
vela ls
```
### Options
```
-h, --help help for ls
-n, --namespace string specify the namespace the application want to list, default is the current env namespace
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,40 @@
---
title: vela port-forward
---
Forward local ports to services in an application
### Synopsis
Forward local ports to services in an application
```
vela port-forward APP_NAME [flags]
```
### Examples
```
port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
```
### Options
```
--address strings Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, vela will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind. (default [localhost])
-h, --help help for port-forward
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
--route forward ports from route trait service
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela show
---
Show the reference doc for a workload type or trait
### Synopsis
Show the reference doc for a workload type or trait
```
vela show [flags]
```
### Examples
```
show webservice
```
### Options
```
-h, --help help for show
--web start web doc site
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,38 @@
---
title: vela status
---
Show status of an application
### Synopsis
Show status of an application, including workloads and traits of each service.
```
vela status APP_NAME [flags]
```
### Examples
```
vela status APP_NAME
```
### Options
```
-h, --help help for status
-s, --svc string service name
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,29 @@
---
title: vela system
---
System management utilities
### Synopsis
System management utilities
### Options
```
-h, --help help for system
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the conversion result to stdout
* [vela system info](vela_system_info) - Show vela client and cluster chartPath
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,35 @@
## vela system cue-packages
List cue package
### Synopsis
List cue package
```
vela system cue-packages
```
### Examples
```
vela system cue-packages
```
### Options
```
-h, --help help for cue-packages
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 2-May-2021

View File

@ -0,0 +1,38 @@
---
title: vela system dry-run
---
Dry Run an application, and output the conversion result to stdout
### Synopsis
Dry Run an application, and output the conversion result to stdout
```
vela system dry-run
```
### Examples
```
vela dry-run
```
### Options
```
-f, --file string application file name (default "./app.yaml")
-h, --help help for dry-run
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela system info
---
Show vela client and cluster chartPath
### Synopsis
Show vela client and cluster chartPath
```
vela system info [flags]
```
### Options
```
-h, --help help for info
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,39 @@
## vela system live-diff
Dry-run an application, and do diff on a specific app revision
### Synopsis
Dry-run an application, and do diff on a specific app revision. The provided capability definitions will be used during the dry-run. If any capabilities used in the app are not found in the provided ones, it will try to find them in the cluster.
```
vela system live-diff
```
### Examples
```
vela live-diff -f app-v2.yaml -r app-v1 --context 10
```
### Options
```
-r, --Revision string specify an application Revision name, by default, it will compare with the latest Revision
-c, --context int output number lines of context around changes, by default show all unchanged lines (default -1)
-d, --definition string specify a file or directory containing capability definitions, they will only be used in dry-run rather than applied to K8s cluster
-f, --file string application file name (default "./app.yaml")
-h, --help help for live-diff
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 2-May-2021

View File

@ -0,0 +1,28 @@
---
title: vela template
---
Manage templates
### Synopsis
Manage templates
### Options
```
-h, --help help for template
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela template context](vela_template_context) - Show context parameters
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela template context
---
Show context parameters
### Synopsis
Show context parameter
```
vela template context
```
### Examples
```
vela template context
```
### Options
```
-h, --help help for context
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela template](vela_template) - Manage templates
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela traits
---
List traits
### Synopsis
List traits
```
vela traits
```
### Examples
```
vela traits
```
### Options
```
-h, --help help for traits
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,32 @@
---
title: vela up
---
Apply an appfile
### Synopsis
Apply an appfile
```
vela up
```
### Options
```
  -f string   specify file path for appfile
-h, --help help for up
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela version
---
Prints out build version information
### Synopsis
Prints out build version information
```
vela version [flags]
```
### Options
```
-h, --help help for version
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,37 @@
---
title: vela workloads
---
List workloads
### Synopsis
List workloads
```
vela workloads
```
### Examples
```
vela workloads
```
### Options
```
-h, --help help for workloads
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,68 @@
---
title: How it Works
---
In this documentation, we will explain the core idea of KubeVela and clarify some technical terms that are widely used in the project.
## Architecture
The overall architecture of KubeVela is shown as below:
![alt](resources/arch.png)
### Control Plane Cluster
The main part of KubeVela. It is the component that users interact with and where the KubeVela controllers run. The control plane cluster deploys applications to multiple *runtime clusters*.
### Runtime Clusters
Runtime clusters are where the applications actually run. The needed addons in a runtime cluster are automatically discovered and installed by leveraging [CRD Lifecycle Management (CLM)](https://github.com/cloudnativeapp/CLM).
## API
On the control plane cluster, KubeVela introduces [Open Application Model (OAM)](https://oam.dev) as the higher-level API. Hence, users of KubeVela work only at the application level, with a consistent experience regardless of the complexity of heterogeneous runtime environments.
This API can be explained as below:
![alt](resources/concepts.png)
In detail:
- Components - deployable/provisionable entities that compose your application,
  - e.g. a Helm chart, a Kubernetes workload, a CUE or Terraform module, or a cloud database instance.
- Traits - attachable features that *overlay* a given component with operational behaviors,
  - e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies.
- Application - the full description of your application deployment, assembled from components and traits.
- Environment - the target environments to deploy this application to.
We also refer to components and traits as *"capabilities"* in KubeVela.
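To make these terms concrete, below is a minimal sketch of an `Application` resource assembling one component with one trait. This is an illustration only: the names are hypothetical, and it assumes a `webservice` component type and a `scaler` trait are registered in your cluster.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app        # hypothetical name, for illustration only
spec:
  components:
    - name: frontend
      type: webservice     # a component type assumed to be registered
      properties:
        image: oamdev/testapp:v1
        port: 8080
      traits:
        - type: scaler     # a trait assumed to be installed, overlaying replica count
          properties:
            replicas: 2
EOF
```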
## Workflow
To ensure a simple yet consistent user experience across hybrid environments, KubeVela introduces a workflow with separation of concerns, as below:
- **Platform Team**
- Model and manage platform capabilities as components or traits, together with target environment specifications.
- **Application Team**
- Choose an environment, assemble the application with components and traits per your needs, and deploy it to the target environment.
> Note that both the platform team and the application team only interact with the control plane cluster. KubeVela is designed to hide the details of runtime clusters except for debugging or verification purposes.
Below is what this workflow looks like:
![alt](resources/how-it-works.png)
All the capability building blocks can be updated or extended easily at any time, since they are fully programmable (currently via CUE, Terraform, and Helm).
## Environment
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "environments". Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
Currently, a KubeVela `environment` only maps to a Kubernetes namespace. In the future, an `environment` will be able to contain multiple clusters.
## What's Next
Here are some recommended next steps:
- Learn how to [deploy an application](end-user/application) and understand how it works.
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!

View File

@ -0,0 +1,133 @@
---
title: Managing Capabilities
---
In KubeVela, developers can install more capabilities (i.e. new component types and traits) from any GitHub repo that contains OAM definition files. We call these GitHub repos _Capability Centers_.
KubeVela can automatically discover OAM definition files in such a repo and sync them to your own KubeVela platform.
## Add a capability center
Add and sync a capability center in KubeVela:
```bash
$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
successfully sync 1/1 from my-center remote center
Successfully configured capability center my-center and sync from remote
$ vela cap center sync my-center
successfully sync 1/1 from my-center remote center
sync finished
```
Now, this capability center `my-center` is ready to use.
## List capability centers
You can add more capability centers and list them.
```bash
$ vela cap center ls
NAME ADDRESS
my-center https://github.com/oam-dev/catalog/tree/master/registry
```
## [Optional] Remove a capability center
Or, remove one.
```bash
$ vela cap center remove my-center
```
## List all available capabilities in capability center
Or, list all available capabilities in a given center.
```bash
$ vela cap ls my-center
NAME CENTER TYPE DEFINITION STATUS APPLIES-TO
clonesetservice my-center componentDefinition clonesets.apps.kruise.io uninstalled []
```
## Install a capability from capability center
Now let's try to install the new component named `clonesetservice` from `my-center` to your own KubeVela platform.
You need to install OpenKruise first.
```shell
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
```
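Optionally, confirm that the OpenKruise controller is running before installing the capability (assuming the chart's default `kruise-system` namespace):
```shell
kubectl get pods -n kruise-system
```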
Install `clonesetservice` component from `my-center`.
```bash
$ vela cap install my-center/clonesetservice
Installing component capability clonesetservice
Successfully installed capability clonesetservice from my-center
```
## Use the newly installed capability
Let's first check that `clonesetservice` appears in your platform:
```bash
$ vela components
NAME NAMESPACE WORKLOAD DESCRIPTION
clonesetservice vela-system clonesets.apps.kruise.io Describes long-running, scalable, containerized services
that have a stable network endpoint to receive external
network traffic from customers. If workload type is skipped
for any service defined in Appfile, it will be defaulted to
`webservice` type.
```
Great! Now let's deploy an app via Appfile.
```bash
$ cat << EOF > vela.yaml
name: testapp
services:
testsvc:
type: clonesetservice
image: crccheck/hello-world
port: 8000
EOF
```
```bash
$ vela up
Parsing vela appfile ...
Load Template ...
Rendering configs for service (testsvc)...
Writing deploy config to (.vela/deploy.yaml)
Applying application ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
Updating: core.oam.dev/v1alpha2, Kind=HealthScope in default
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc testsvc
```
Then you can get a CloneSet in your environment:
```shell
$ kubectl get clonesets.apps.kruise.io
NAME DESIRED UPDATED UPDATED_READY READY TOTAL AGE
testsvc 1 1 1 1 1 46s
```
## Uninstall a capability
> NOTE: make sure no apps are using the capability before uninstalling.
```bash
$ vela cap uninstall my-center/clonesetservice
Successfully uninstalled capability clonesetservice
```

View File

@ -0,0 +1,9 @@
---
title: Check Application Logs
---
```bash
$ vela logs testapp
```
It will let you select the container to get logs from. If there is only one container, it will be selected automatically.
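You can also switch the output format with the `-o/--output` flag documented in the CLI reference, e.g. to get JSON-formatted log lines:
```bash
$ vela logs testapp -o json
```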

View File

@ -0,0 +1,103 @@
---
title: The Reference Documentation Guide of Capabilities
---
In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones); also, it's by design that KubeVela allows platform administrators to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that this documentation stays up to date?
## Using Browser
Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on the template in its definition. This feature works for any capability: either built-in ones or your own workload types/traits.
Thus, as an end user, the only thing you need to do is:
```console
$ vela show WORKLOAD_TYPE or TRAIT --web
```
This command will automatically open the reference documentation for the given workload type or trait in your default browser.
### For Workload Types
Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` workload type will show up immediately, as below:
![](../resources/vela_show_webservice.jpg)
Note that in the section named `Specification`, it even provides you with a full sample for the usage of this workload type, using a placeholder name `my-service-name`.
### For Traits
Similarly, we can also do `$ vela show autoscale --web`:
![](../resources/vela_show_autoscale.jpg)
With these auto-generated reference docs, we can easily complete the application description by simple copy-paste, for example:
```yaml
name: helloworld
services:
backend: # copy-paste from the webservice ref doc above
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
autoscale: # copy-paste and modify from autoscaler ref doc above
min: 1
max: 8
cron:
startAt: "19:00"
duration: "2h"
days: "Friday"
replicas: 4
timezone: "America/Los_Angeles"
```
## Using Terminal
This reference doc feature also works in terminal-only environments. For example:
```shell
$ vela show webservice
# Properties
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
## env
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
### valueFrom
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
#### secretKeyRef
+------+------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------+------------------------------------------------------------------+--------+----------+---------+
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
+------+------------------------------------------------------------------+--------+----------+---------+
```
> Note that for all the built-in capabilities, we already published their reference docs [here](https://kubevela.io/#/en/developers/references/) based on the same doc generation mechanism.

View File

@ -0,0 +1,85 @@
---
title: Configuring data/env in Application
---
`vela` provides a `config` command to manage config data.
## `vela config set`
```bash
$ vela config set test a=b c=d
reading existing config data and merging with user input
config data saved successfully ✅
```
## `vela config get`
```bash
$ vela config get test
Data:
a: b
c: d
```
## `vela config del`
```bash
$ vela config del test
config (test) deleted successfully
```
## `vela config ls`
```bash
$ vela config set test a=b
$ vela config set test2 c=d
$ vela config ls
NAME
test
test2
```
## Configure env in application
The config data can be set as the env in applications.
```bash
$ vela config set demo DEMO_HELLO=helloworld
```
Save the following to `vela.yaml` in current directory:
```yaml
name: testapp
services:
env-config-demo:
image: heroku/nodejs-hello-world
config: demo
```
Then run:
```bash
$ vela up
Parsing vela.yaml ...
Loading templates ...
Rendering configs for service (env-config-demo)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc env-config-demo
```
Check env var:
```
$ vela exec testapp -- printenv | grep DEMO_HELLO
DEMO_HELLO=helloworld
```

View File

@ -0,0 +1,93 @@
---
title: Setting Up Deployment Environment
---
A deployment environment is where you configure the workspace, the contact email, and the domain for your applications globally.
A typical set of deployment environments is `test`, `staging`, `prod`, etc.
## Create environment
```bash
$ vela env init demo --email my@email.com
environment demo created, Namespace: default, Email: my@email.com
```
## Check the deployment environment metadata
```bash
$ vela env ls
NAME CURRENT NAMESPACE EMAIL DOMAIN
default default
demo * default my@email.com
```
By default, the environment will use `default` namespace in K8s.
## Configure changes
You can change the config by running `vela env init` again.
```bash
$ vela env init demo --namespace demo
environment demo created, Namespace: demo, Email: my@email.com
```
```bash
$ vela env ls
NAME CURRENT NAMESPACE EMAIL DOMAIN
default default
demo * demo my@email.com
```
**Note that already-created apps won't be affected; only newly created apps will use the updated info.**
## [Optional] Configure Domain if you have public IP
If your K8s cluster is provisioned by a cloud provider and has a public IP for ingress,
you can configure your domain in the environment, and then you'll be able to visit
your app by this domain, with mTLS supported automatically.
For example, you can get the public IP from the ingress service.
```bash
$ kubectl get svc -A | grep LoadBalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-lb LoadBalancer 172.21.2.174 123.57.10.233 80:32740/TCP,443:32086/TCP 41d
```
The fourth column is the public IP. Configure an 'A' record for your custom domain:
```
*.your.domain => 123.57.10.233
```
You can also use `123.57.10.233.xip.io` as your domain if you don't have a custom one;
`xip.io` automatically resolves it to the prefixed IP `123.57.10.233`.
```bash
$ vela env init demo --domain 123.57.10.233.xip.io
environment demo updated, Namespace: demo, Email: my@email.com
```
### Using domain in Appfile
Since you now have the domain configured globally in the deployment environment, you don't need to specify the domain in the route configuration anymore.
```yaml
# in demo environment
services:
express-server:
...
route:
rules:
- path: /testapp
rewriteTarget: /
```
```
$ curl http://123.57.10.233.xip.io/testapp
Hello World
```

View File

@ -0,0 +1,10 @@
---
title: Execute Commands in Container
---
Run:
```
$ vela exec testapp -- /bin/sh
```
This opens a shell within the container of testapp.
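Besides an interactive shell, you can run a one-off command in the container the same way, for example:
```
$ vela exec testapp -- printenv
```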

View File

@ -0,0 +1,238 @@
---
title: Automatically scale workloads by resource utilization metrics and cron
---
## Prerequisite
Make sure the auto-scaler trait controller is installed in your cluster.
Install the auto-scaler trait controller with Helm:
1. Add helm chart repo for autoscaler trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install autoscaler trait controller
```shell script
helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait
```
Autoscale depends on the metrics server, so please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) first.
> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
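Before moving on, you can sanity-check that the metrics server is working; `kubectl top` returns node metrics only when the metrics API is available:
```shell script
kubectl top nodes
```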
## Setting cron auto-scaling policy
This section shows how to automatically scale workloads by cron.
1. Prepare Appfile
```yaml
name: testapp
services:
express-server:
# this image will be used in both build and deploy steps
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
autoscale:
min: 1
max: 4
cron:
startAt: "14:00"
duration: "2h"
days: "Monday, Thursday"
replicas: 2
timezone: "America/Los_Angeles"
```
> The full specification of `autoscale` can be shown by `$ vela show autoscale`.
2. Deploy an application
```
$ vela up
Parsing vela.yaml ...
Loading templates ...
Rendering configs for service (express-server)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc express-server
```
3. Check the replicas and wait for the scaling to take effect
Check the replicas of the application; initially there is one replica.
```
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-05 17:09:02.426632 +0800 CST
Updated at: 2020-11-05 17:09:02.426632 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/1
Last Deployment:
Created at: 2020-11-05 17:09:03 +0800 CST
Updated at: 2020-11-05T17:09:02+08:00
```
Wait until the time reaches `startAt`, and check again. The replicas become two, as specified by
`replicas` in `vela.yaml`.
```
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-10 10:18:59.498079 +0800 CST
Updated at: 2020-11-10 10:18:59.49808 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 2/2
Traits:
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/2
Last Deployment:
Created at: 2020-11-10 10:18:59 +0800 CST
Updated at: 2020-11-10T10:18:59+08:00
```
After the period ends, the replicas will eventually return to one.
## Setting auto-scaling policy of CPU resource utilization
This section shows how to automatically scale workloads by CPU utilization.
1. Prepare Appfile
Modify `vela.yaml` as below. We add the field `services.express-server.cpu` and change the auto-scaling policy
from cron to CPU utilization by updating the field `services.express-server.autoscale`.
```yaml
name: testapp
services:
express-server:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.01"
autoscale:
min: 1
max: 5
cpuPercent: 10
```
2. Deploy an application
```bash
$ vela up
```
3. Expose the service entrypoint of the application
```
$ vela port-forward helloworld 80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Forward successfully! Opening browser ...
Handling connection for 80
Handling connection for 80
Handling connection for 80
Handling connection for 80
```
On macOS, you might need to add `sudo` ahead of the command.
4. Monitor the replicas changing
Continue to monitor the replicas changing as the application becomes overloaded. You can use the Apache HTTP server
benchmarking tool `ab` to mock many requests to the application.
```
$ ab -n 10000 -c 200 http://127.0.0.1/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
```
The replicas gradually increase from one to four.
```
$ vela status helloworld --svc frontend
About:
Name: helloworld
Namespace: default
Created at: 2020-11-05 20:07:21.830118 +0800 CST
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
Services:
- Name: frontend
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/10% replicas(min/max/current): 1/5/2
Last Deployment:
Created at: 2020-11-05 20:07:23 +0800 CST
Updated at: 2020-11-05T20:50:42+08:00
```
```
$ vela status helloworld --svc frontend
About:
Name: helloworld
Namespace: default
Created at: 2020-11-05 20:07:21.830118 +0800 CST
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
Services:
- Name: frontend
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/14% replicas(min/max/current): 1/5/4
Last Deployment:
Created at: 2020-11-05 20:07:23 +0800 CST
Updated at: 2020-11-05T20:50:42+08:00
```
Stop the `ab` tool, and the replicas will eventually decrease to one.

View File

@ -0,0 +1,107 @@
---
title: Monitoring Application
---
If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with `metrics` capability.
## Prerequisite
Make sure the metrics trait controller is installed in your cluster.
Install the metrics trait controller with Helm:
1. Add helm chart repo for metrics trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install metrics trait controller
```shell script
helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
```
> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting metrics policy
Let's run [`christianhxc/gorandom:1.0`](https://github.com/christianhxc/prometheus-tutorial) as an example app.
The app will emit random latencies as metrics.
1. Prepare Appfile:
```bash
$ cat <<EOF > vela.yaml
name: metricapp
services:
metricapp:
type: webservice
image: christianhxc/gorandom:1.0
port: 8080
metrics:
enabled: true
format: prometheus
path: /metrics
port: 0
scheme: http
EOF
```
> The full specification of `metrics` can be shown by `$ vela show metrics`.
2. Deploy the application:
```bash
$ vela up
```
3. Check status:
```bash
$ vela status metricapp
About:
Name: metricapp
Namespace: default
Created at: 2020-11-11 17:00:59.436347573 -0800 PST
Updated at: 2020-11-11 17:01:06.511064661 -0800 PST
Services:
- Name: metricapp
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ metrics: Monitoring port: 8080, path: /metrics, format: prometheus, schema: http.
Last Deployment:
Created at: 2020-11-11 17:00:59 -0800 PST
Updated at: 2020-11-11T17:01:06-08:00
```
The metrics trait will automatically discover the port and label to monitor if no parameters are specified.
If more than one port is found, it will choose the first one by default.
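If auto-discovery picks the wrong port, you can pin the scrape port explicitly instead of using `0`. A sketch based on the Appfile above (pinning `8080` here is just an example):
```bash
$ cat <<EOF > vela.yaml
name: metricapp
services:
  metricapp:
    type: webservice
    image: christianhxc/gorandom:1.0
    port: 8080
    metrics:
      enabled: true
      format: prometheus
      path: /metrics
      port: 8080   # pin the scrape port rather than auto-discovering it
      scheme: http
EOF
```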
**(Optional) Verify that the metrics are collected on Prometheus**
<details>
Expose the port of Prometheus dashboard:
```bash
kubectl --namespace monitoring port-forward `kubectl -n monitoring get pods -l prometheus=oam -o name` 9090
```
Then access the Prometheus dashboard via http://localhost:9090/targets
![Prometheus Dashboard](../../resources/metrics.jpg)
</details>

View File

@ -0,0 +1,163 @@
---
title: Setting Rollout Strategy
---
> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
The `rollout` section is used to configure a Canary strategy to release your app.
Add a rollout config under `express-server`, along with a `route`.
```yaml
name: testapp
services:
express-server:
type: webservice
image: oamdev/testapp:rolling01
port: 80
rollout:
replicas: 5
stepWeight: 20
interval: "30s"
route:
domain: "example.com"
```
> The full specification of `rollout` can be shown by `$ vela show rollout`.
Apply this `appfile.yaml`:
```bash
$ vela up
```
You could check the status by:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: myenv
Created at: 2020-11-09 17:34:38.064006 +0800 CST
Updated at: 2020-11-10 17:05:53.903168 +0800 CST
Services:
- Name: testapp
Type: webservice
HEALTHY Ready: 5/5
Traits:
- ✅ rollout: interval=5s
replicas=5
stepWeight=20
- ✅ route: Visiting URL: http://example.com IP: <your-ingress-IP-address>
Last Deployment:
Created at: 2020-11-09 17:34:38 +0800 CST
Updated at: 2020-11-10T17:05:53+08:00
```
Visiting this app by:
```bash
$ curl -H "Host:example.com" http://<your-ingress-IP-address>/
Hello World -- Rolling 01
```
On day 2, assume we have made some changes to our app, built a new image, and tagged it `oamdev/testapp:rolling02`.
Let's update the Appfile:
```yaml
name: testapp
services:
express-server:
type: webservice
- image: oamdev/testapp:rolling01
+ image: oamdev/testapp:rolling02
port: 80
rollout:
replicas: 5
stepWeight: 20
interval: "30s"
route:
domain: example.com
```
Apply this Appfile again:
```bash
$ vela up
```
You could run `vela status` several times to see the instance rolling:
```shell script
$ vela status testapp
About:
Name: testapp
Namespace: myenv
Created at: 2020-11-12 19:02:40.353693 +0800 CST
Updated at: 2020-11-12 19:02:40.353693 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY express-server-v2:Ready: 1/1 express-server-v1:Ready: 4/4
Traits:
- ✅ rollout: interval=30s
replicas=5
stepWeight=20
- ✅ route: Visiting by using 'vela port-forward testapp --route'
Last Deployment:
Created at: 2020-11-12 17:20:46 +0800 CST
Updated at: 2020-11-12T19:02:40+08:00
```
You could then `curl` your app multiple times and see how the app is rolled out following the Canary strategy:
```bash
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
```
**How does `Rollout` work?**
<details>
The `Rollout` trait implements a progressive release process to roll out your app following the [Canary strategy](https://martinfowler.com/bliki/CanaryRelease.html).
In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators such as the HTTP request success rate.
![alt](../../resources/traffic-shifting-analysis.png)
In this sample, every `10s`, `5%` of traffic will be shifted to the canary from the primary, until the traffic on the canary reaches `50%`. Meanwhile, the number of canary instances will automatically scale to `replicas: 2` as configured in the Appfile.
Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. On promotion, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion finishes.
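Expressed in the Appfile's `rollout` fields, the sample described here would look roughly like the sketch below (the mapping is inferred from the config used earlier in this doc):
```yaml
rollout:
  replicas: 2       # canary scales to 2 instances (per the description above)
  stepWeight: 5     # shift 5% of traffic per step
  interval: "10s"   # wait 10s between steps
```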
![alt](../../resources/promotion.png)
> Note: KubeVela's `Rollout` trait is implemented with [Weaveworks Flagger](https://flagger.app/) operator.
</details>

View File

@ -0,0 +1,82 @@
---
title: Setting Routes
---
The `route` section is used to configure access to your app.
## Prerequisite
Make sure the route trait controller is installed in your cluster.
Install the route trait controller with helm:
1. Add helm chart repo for route trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install route trait controller
```shell script
helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait
```
> Note: route is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting route policy
Add routing config under `express-server`:
```yaml
services:
express-server:
...
route:
domain: example.com
rules:
- path: /testapp
rewriteTarget: /
```
> The full specification of `route` can be viewed with `$ vela show route`.
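Multiple rules can be listed to route different paths. A minimal sketch reusing the same fields as above (the `/api` path is purely illustrative):
```yaml
route:
  domain: example.com
  rules:
    - path: /testapp
      rewriteTarget: /
    - path: /api       # hypothetical second path, for illustration only
      rewriteTarget: /
```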
Apply again:
```bash
$ vela up
```
Check the status until we see route is ready:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-04 16:34:43.762730145 -0800 PST
Updated at: 2020-11-11 16:21:37.761158941 -0800 PST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Last Deployment:
Created at: 2020-11-11 16:21:37 -0800 PST
Updated at: 2020-11-11T16:21:37-08:00
Routes:
- route: Visiting URL: http://example.com IP: <ingress-IP-address>
```
**In a [kind cluster setup](../../install#kind)**, you can visit the service via localhost:
> If not in a kind cluster, replace 'localhost' with the ingress address.
```
$ curl -H "Host:example.com" http://localhost/testapp
Hello World
```

View File

@ -0,0 +1,251 @@
---
title: Learning Appfile
---
A sample `Appfile` is as below:
```yaml
name: testapp
services:
frontend: # 1st service
image: oamdev/testapp:v1
build:
docker:
file: Dockerfile
context: .
cmd: ["node", "server.js"]
port: 8080
route: # trait
domain: example.com
rules:
- path: /testapp
rewriteTarget: /
backend: # 2nd service
type: task # workload type
image: perl
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```
Under the hood, `Appfile` will build the image from source code, and then generate the `Application` resource with the image name.
## Schema
> Before learning about the Appfile's detailed schema, we recommend getting familiar with the [core concepts](../concepts) of KubeVela.
```yaml
name: _app-name_
services:
_service-name_:
# If `build` section exists, this field will be used as the name to build image. Otherwise, KubeVela will try to pull the image with given name directly.
image: oamdev/testapp:v1
build:
docker:
file: _Dockerfile_path_ # relative path is supported, e.g. "./Dockerfile"
context: _build_context_path_ # relative path is supported, e.g. "."
push:
local: kind # optionally push to local KinD cluster instead of remote registry
type: webservice (default) | worker | task
# detailed configurations of workload
... properties of the specified workload ...
_trait_1_:
# properties of trait 1
_trait_2_:
# properties of trait 2
... more traits and their properties ...
_another_service_name_: # more services can be defined
...
```
> To learn about how to set the properties of specific workload type or trait, please use `vela show <TYPE | TRAIT>`.
## Example Workflow
In the following workflow, we will build and deploy an example NodeJS app under [examples/testapp/](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp).
### Prerequisites
- [Docker](https://docs.docker.com/get-docker/) installed on the host
- [KubeVela](../install) installed and configured
### 1. Download test app code
Clone the repository and go to the testapp directory:
```bash
$ git clone https://github.com/oam-dev/kubevela.git
$ cd kubevela/docs/examples/testapp
```
The example contains the NodeJS app code and a Dockerfile to build the app.
### 2. Deploy app in one command
In the directory there is a [vela.yaml](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp/vela.yaml) which follows the Appfile format supported by Vela.
We are going to use it to build and deploy the app.
> NOTE: please change `oamdev` to your own registry account so you can push. Alternatively, try the approach in the `Local testing without pushing image remotely` section.
```yaml
image: oamdev/testapp:v1 # change this to your image
```
Run the following command:
```bash
$ vela up
Parsing vela.yaml ...
Loading templates ...
Building service (express-server)...
Sending build context to Docker daemon 71.68kB
Step 1/10 : FROM mhart/alpine-node:12
---> 9d88359808c3
...
pushing image (oamdev/testapp:v1)...
...
Rendering configs for service (express-server)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc express-server
```
Check the status of the service:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-02 11:08:32.138484 +0800 CST
Updated at: 2020-11-02 11:08:32.138485 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Last Deployment:
Created at: 2020-11-02 11:08:33 +0800 CST
Updated at: 2020-11-02T11:08:32+08:00
Routes:
```
#### Alternative: Local testing without pushing image remotely
If you have a local [kind](../install) cluster running, you may try the local push option. No remote container registry is needed in this case.
Add local option to `build`:
```yaml
build:
# push image into local kind cluster without remote transfer
push:
local: kind
docker:
file: Dockerfile
context: .
```
Then deploy the app to kind:
```bash
$ vela up
```
<details><summary>(Advanced) Check rendered manifests</summary>
By default, Vela renders the final manifests in `.vela/deploy.yaml`:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
name: testapp
namespace: default
spec:
components:
- componentName: express-server
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: express-server
namespace: default
spec:
workload:
apiVersion: apps/v1
kind: Deployment
metadata:
name: express-server
...
---
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
name: testapp-default-health
namespace: default
spec:
...
```
</details>
### [Optional] Configure another workload type
By now we have deployed a *[Web Service](../end-user/components/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](../end-user/components/task)* type in the same app:
```yaml
services:
pi:
type: task
image: perl
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
express-server:
...
```
Then deploy Appfile again to update the application:
```bash
$ vela up
```
Congratulations! You have just deployed an app using `Appfile`.
## What's Next?
Play more with your app:
- [Check Application Logs](./check-logs)
- [Execute Commands in Application Container](./exec-cmd)
- [Access Application via Route](./port-forward)

View File

@ -0,0 +1,23 @@
---
title: Port Forwarding
---
Once the web services of your application are deployed, you can access them locally via `port-forward`.
```bash
$ vela ls
NAME APP WORKLOAD TRAITS STATUS CREATED-TIME
express-server testapp webservice Deployed 2020-09-18 22:42:04 +0800 CST
```
It will directly open the browser for you.
```bash
$ vela port-forward testapp
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Forward successfully! Opening browser ...
Handling connection for 8080
Handling connection for 8080
```

View File

@ -0,0 +1,28 @@
---
title: KubeVela CLI
---
### Auto-completion
#### bash
```bash
To load completions in your current shell session:
$ source <(vela completion bash)
To load completions for every new session, execute once:
Linux:
$ vela completion bash > /etc/bash_completion.d/vela
MacOS:
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
```
#### zsh
```bash
To load completions in your current shell session:
$ source <(vela completion zsh)
To load completions for every new session, execute once:
$ vela completion zsh > "${fpath[1]}/_vela"
```

View File

@ -0,0 +1,10 @@
# KubeVela Dashboard (WIP)
KubeVela has a simple client-side dashboard for you to interact with. The functionality is equivalent to the `vela` CLI.
```bash
$ vela dashboard
```
> NOTE: this feature is still under development.

View File

@ -0,0 +1,304 @@
---
title: FAQ
---
- [Compare to X](#Compare-to-X)
* [What is the difference between KubeVela and Helm?](#What-is-the-difference-between-KubeVela-and-Helm?)
- [Issues](#issues)
* [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated)
* [Error: ScopeDefinition exists](#error-scopedefinition-exists)
* [You have reached your pull rate limit](#You-have-reached-your-pull-rate-limit)
* [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists)
* [How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#how-to-fix-issue-mutatingwebhookconfiguration-mutating-webhook-configuration-exists)
- [Operating](#operating)
* [Autoscale: how to enable metrics server in various Kubernetes clusters?](#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters)
## Compare to X
### What is the difference between KubeVela and Helm?
KubeVela is a platform builder tool to create easy-to-use yet extensible app delivery/management systems with Kubernetes. KubeVela relies on Helm as a templating engine and package format for apps. But Helm is not the only templating module that KubeVela supports; another first-class supported approach is CUE.
Also, KubeVela is by design a Kubernetes controller (i.e. it works on the server side); even for its Helm part, a Helm operator will be installed.
## Issues
### Error: unable to create new content in namespace cert-manager because it is being terminated
Occasionally you might hit the issue below. It happens when the deletion of the last KubeVela release hasn't completed.
```
$ vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
failed to create resource
helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
...
Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
```
Take a break and try again in a few seconds.
```
$ vela install
- Installing Vela Core Chart:
Vela system along with OAM runtime already exist.
Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8)
TYPE CATEGORY DESCRIPTION
-task workload One-off task to run a piece of code or script to completion
-webservice workload Long-running scalable service with stable endpoint to receive external traffic
-worker workload Long-running scalable backend worker without network endpoint
-autoscale trait Automatically scale the app following certain triggers or metrics
-metrics trait Configure metrics targets to be monitored for the app
-rollout trait Configure canary deployment strategy to release the app
-route trait Configure route policy to the app
-scaler trait Manually scale the app
- Finished successfully.
```
Then manually apply all WorkloadDefinition and TraitDefinition manifests to get all capabilities back.
```
$ kubectl apply -f charts/vela-core/templates/defwithtemplate
traitdefinition.core.oam.dev/autoscale created
traitdefinition.core.oam.dev/scaler created
traitdefinition.core.oam.dev/metrics created
traitdefinition.core.oam.dev/rollout created
traitdefinition.core.oam.dev/route created
workloaddefinition.core.oam.dev/task created
workloaddefinition.core.oam.dev/webservice created
workloaddefinition.core.oam.dev/worker created
$ vela workloads
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)
TYPE CATEGORY DESCRIPTION
+task workload One-off task to run a piece of code or script to completion
+webservice workload Long-running scalable service with stable endpoint to receive external traffic
+worker workload Long-running scalable backend worker without network endpoint
+autoscale trait Automatically scale the app following certain triggers or metrics
+metrics trait Configure metrics targets to be monitored for the app
+rollout trait Configure canary deployment strategy to release the app
+route trait Configure route policy to the app
+scaler trait Manually scale the app
NAME DESCRIPTION
task One-off task to run a piece of code or script to completion
webservice Long-running scalable service with stable endpoint to receive external traffic
worker Long-running scalable backend worker without network endpoint
```
### Error: ScopeDefinition exists
Occasionally you might hit the issue below. It happens when there is an old OAM Kubernetes Runtime release, or you have applied `ScopeDefinition` before.
```
$ vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
...
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
```
Delete `ScopeDefinition` "healthscopes.core.oam.dev" and try again.
```
$ kubectl delete ScopeDefinition "healthscopes.core.oam.dev"
scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted
$ vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452
WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capabilityWARN: handle trait template `manualscalertraits.core.oam.dev` failed
: no template found, you will unable to use this trait capabilityAutomatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)
TYPE CATEGORY DESCRIPTION
+task workload One-off task to run a piece of code or script to completion
+webservice workload Long-running scalable service with stable endpoint to receive external traffic
+worker workload Long-running scalable backend worker without network endpoint
+autoscale trait Automatically scale the app following certain triggers or metrics
+metrics trait Configure metrics targets to be monitored for the app
+rollout trait Configure canary deployment strategy to release the app
+route trait Configure route policy to the app
+scaler trait Manually scale the app
- Finished successfully.
```
### You have reached your pull rate limit
You may look into the logs of the Pod kubevela-vela-core and find the issue below.
```
$ kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core
NAME READY STATUS RESTARTS AGE
kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m
```
>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by
>authenticating and upgrading: https://www.docker.com/increase-rate-limit
You can use the GitHub container registry instead.
```
$ docker pull ghcr.io/oam-dev/kubevela/vela-core:latest
```
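If you install via Helm, you could also point the chart at the mirror registry with a values override. This is only a sketch: it assumes the chart exposes the usual `image.repository`/`image.tag` values, so check the chart's values.yaml before relying on these keys.
```yaml
# values-ghcr.yaml (hypothetical override file; key names are an assumption)
image:
  repository: ghcr.io/oam-dev/kubevela/vela-core
  tag: latest
```
You would then pass it to Helm with `-f values-ghcr.yaml` when installing or upgrading vela-core.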
### Warning: Namespace cert-manager exists
If you hit the issue below, a `cert-manager` release might exist whose namespace and RBAC-related resources conflict
with KubeVela.
```
$ vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
...
/opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
```
Try these steps to fix the problem.
- Delete release `cert-manager`
- Delete namespace `cert-manager`
- Install KubeVela again
```
$ helm delete cert-manager -n cert-manager
release "cert-manager" uninstalled
$ kubectl delete ns cert-manager
namespace "cert-manager" deleted
$ vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379
Automatically discover capabilities successfully ✅ (no changes)
TYPE CATEGORY DESCRIPTION
task workload One-off task to run a piece of code or script to completion
webservice workload Long-running scalable service with stable endpoint to receive external traffic
worker workload Long-running scalable backend worker without network endpoint
autoscale trait Automatically scale the app following certain triggers or metrics
metrics trait Configure metrics targets to be monitored for the app
rollout trait Configure canary deployment strategy to release the app
route trait Configure route policy to the app
scaler trait Manually scale the app
- Finished successfully.
```
### How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?
If you have deployed other services that applied the MutatingWebhookConfiguration `mutating-webhook-configuration`, installing
KubeVela will hit the issue below.
```shell
- Installing Vela Core Chart:
install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
github.com/spf13/cobra.(*Command).execute
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
/home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
runtime.main
/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
runtime.goexit
/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
```
To fix this issue, please upgrade the KubeVela CLI `vela` to a version higher than `v0.2.2` from the [KubeVela releases](https://github.com/oam-dev/kubevela/releases).
## Operating
### Autoscale: how to enable metrics server in various Kubernetes clusters?
Autoscale depends on the metrics server, so it has to be enabled in your cluster. Please check whether the metrics server
is enabled with `kubectl top nodes` or `kubectl top pods`.
If the output is similar to the below, the metrics server is enabled.
```shell
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
cn-hongkong.10.0.1.237 288m 7% 5378Mi 78%
cn-hongkong.10.0.1.238 351m 8% 5113Mi 74%
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
php-apache-65f444bf84-cjbs5 0m 1Mi
wordpress-55c59ccdd5-lf59d 1m 66Mi
```
Otherwise, you have to manually enable the metrics server in your Kubernetes cluster.
- ACK (Alibaba Cloud Container Service for Kubernetes)
Metrics server is already enabled.
- ASK (Alibaba Cloud Serverless Kubernetes)
The metrics server has to be enabled in the `Operations/Add-ons` section of the [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.
![](../../../resources/install-metrics-server-in-ASK.jpg)
Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.
- Kind
Install the metrics server as below, or install the [latest version](https://github.com/kubernetes-sigs/metrics-server#installation).
```shell
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```
Also add the following under `.spec.template.spec.containers` in the yaml loaded by `kubectl edit deploy -n kube-system metrics-server`.
Note: this is just a workaround, not for production-level use.
```
command:
- /metrics-server
- --kubelet-insecure-tls
```
- MiniKube
Enable it with the following command.
```shell
$ minikube addons enable metrics-server
```
Have fun [setting autoscale](../../extensions/set-autoscale) on your application.

View File

@ -0,0 +1,10 @@
---
title: Restful API
---
import useBaseUrl from '@docusaurus/useBaseUrl';
<a
target="_blank"
href={useBaseUrl('/restful-api')}>
KubeVela Restful API
</a>

View File

@ -0,0 +1,265 @@
---
title: Application
---
This documentation will walk through how to use KubeVela to design a simple application without any placement rule.
> Note: since you didn't declare a placement rule, KubeVela will deploy this application directly to the control plane cluster (i.e. the cluster your `kubectl` is talking to). This is also the case if you are using a local cluster such as KinD or MiniKube to try out KubeVela.
## Step 1: Check Available Components
Components are deployable or provisionable entities that compose your application. A component could be a Helm chart, a simple Kubernetes workload, a CUE or Terraform module, a cloud database, etc.
Let's check the available components in a fresh KubeVela installation.
```shell
kubectl get comp -n vela-system
NAME WORKLOAD-KIND DESCRIPTION
task Job Describes jobs that run code or a script to completion.
webservice Deployment Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
worker Deployment Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic.
```
To show the specification for a given component, use `vela show`.
```shell
$ kubectl vela show webservice
# Properties
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| addRevisionLabel | | bool | true | false |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
... // skip other fields
```
> Tips: `vela show xxx --web` will open its capability reference documentation in your default browser.
You could always [add more components](components/more) to the platform at any time.
## Step 2: Declare an Application
An Application is the full description of a deployment. Let's define an application that deploys a *Web Service* and a *Worker* component.
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
## Step 3: Attach Traits
Traits are platform-provided features that *overlay* a given component with extra operational behaviors.
```shell
$ kubectl get trait -n vela-system
NAME APPLIES-TO DESCRIPTION
cpuscaler [webservice worker] configure k8s HPA with CPU metrics for Deployment
ingress [webservice worker] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
scaler [webservice worker] Configures replicas for your service.
sidecar [webservice worker] inject a sidecar container into your app
```
Let's check the specification of `sidecar` trait.
```shell
$ kubectl vela show sidecar
# Properties
+---------+-----------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+-----------------------------------------+----------+----------+---------+
| name | Specify the name of sidecar container | string | true | |
| image | Specify the image of sidecar container | string | true | |
| command | Specify the commands run in the sidecar | []string | false | |
+---------+-----------------------------------------+----------+----------+---------+
```
Note that traits are designed to be *overlays*.
This means for the `sidecar` trait, your `frontend` component doesn't need to have a sidecar template or bring a webhook to enable sidecar injection. Instead, KubeVela is able to patch a sidecar to its workload instance after it is generated by the component (no matter whether it's a Helm chart or a CUE module) but before it is applied to the runtime cluster.
Similarly, the system will assign an HPA instance based on the properties you set and "link" it to the target workload instance; the component itself is untouched.
Now let's attach `sidecar` and `cpuscaler` traits to the `frontend` component.
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend # This is the component I want to deploy
type: webservice
properties:
image: nginx
traits:
- type: cpuscaler # Assign a HPA to scale the component by CPU usage
properties:
min: 1
max: 10
cpuPercent: 60
- type: sidecar # Inject a fluentd sidecar before applying the component to runtime cluster
properties:
name: "sidecar-test"
image: "fluentd"
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
## Step 4: Deploy the Application
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/enduser/sample.yaml
application.core.oam.dev/website created
```
You'll see the application become `running`.
```shell
$ kubectl get application
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
website frontend webservice running true 4m54s
```
Check the details of the application.
```shell
$ kubectl get app website -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
generation: 1
name: website
namespace: default
spec:
components:
- name: frontend
properties:
image: nginx
traits:
- properties:
cpuPercent: 60
max: 10
min: 1
type: cpuscaler
- properties:
image: fluentd
name: sidecar-test
type: sidecar
type: webservice
- name: backend
properties:
cmd:
- sleep
- "1000"
image: busybox
type: worker
status:
...
latestRevision:
name: website-v1
revision: 1
revisionHash: e9e062e2cddfe5fb
services:
- healthy: true
name: frontend
traits:
- healthy: true
type: cpuscaler
- healthy: true
type: sidecar
- healthy: true
name: backend
status: running
```
Specifically:
1. `status.latestRevision` declares current revision of this deployment.
2. `status.services` declares the component created by this deployment and the healthy state.
3. `status.status` declares the global state of this deployment.
### List Revisions
When updating an application entity, KubeVela will create a new revision for this change.
```shell
$ kubectl get apprev -l app.oam.dev/name=website
NAME AGE
website-v1 35m
```
Furthermore, the system will decide whether and how to roll out the application based on the attached [rollout plan](scopes/rollout-plan).
### Verify
<details>
On the runtime cluster, you can see a Kubernetes Deployment named `frontend` running, with the port exposed and a `fluentd` container injected.
```shell
$ kubectl get deploy frontend
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 1/1 1 1 97s
```
```shell
$ kubectl get deploy frontend -o yaml
...
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: frontend
ports:
- containerPort: 80
protocol: TCP
- image: fluentd
imagePullPolicy: Always
name: sidecar-test
...
```
Another Deployment is also running named `backend`.
```shell
$ kubectl get deploy backend
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 111s
```
An HPA was also created by the `cpuscaler` trait.
```shell
$ kubectl get HorizontalPodAutoscaler frontend
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
frontend Deployment/frontend <unknown>/50% 1 10 1 101m
```
</details>

View File

@ -0,0 +1,186 @@
---
title: Cloud Services
---
KubeVela allows you to declare the cloud services your application needs with a consistent API. Currently, we support both Terraform and Crossplane.
> Please check [the platform team guide for cloud services](../../platform-engineers/cloud-services) if you are interested in how these capabilities are maintained in KubeVela.
The cloud services will be consumed by the application via [Service Binding Trait](../traits/service-binding).
## Terraform
> ⚠️ This section assumes [Terraform related capabilities](../../platform-engineers/terraform) have been installed in your platform.
Check the parameters of the cloud resource components and trait:
```shell
$ kubectl vela show alibaba-oss
# Properties
+----------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+----------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
| bucket | OSS bucket name | string | true | |
| acl | OSS bucket ACL, supported 'private', 'public-read', 'public-read-write' | string | true | |
| writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | |
+----------------------------+-------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
## writeConnectionSecretToRef
+-----------+-----------------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-----------------------------------------------------------------------------+--------+----------+---------+
| name | The secret name which the cloud resource connection will be written to | string | true | |
| namespace | The secret namespace which the cloud resource connection will be written to | string | false | |
+-----------+-----------------------------------------------------------------------------+--------+----------+---------+
$ kubectl vela show service-binding
# Properties
+-------------+------------------------------------------------+------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------------+------------------------------------------------+------------------+----------+---------+
| envMappings | The mapping of environment variables to secret | map[string]{...} | true | |
+-------------+------------------------------------------------+------------------+----------+---------+
```
### Alibaba Cloud RDS and OSS
A sample [application](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml) is shown below.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn # 1) If the env name is the same as the secret key, secret key can be omitted.
endpoint:
secret: db-conn
key: DB_HOST # 2) If the env name is different from secret key, secret key has to be set.
username:
secret: db-conn
key: DB_USER
# environments refer to oss-conn secret
BUCKET_NAME:
secret: oss-conn
- name: sample-db
type: alibaba-rds
properties:
instance_name: sample-db
account_name: oamtest
password: U34rfwefwefffaked
writeConnectionSecretToRef:
name: db-conn
- name: sample-oss
type: alibaba-oss
properties:
bucket: vela-website
acl: private
writeConnectionSecretToRef:
name: oss-conn
```
## Crossplane
> ⚠️ This section assumes [Crossplane related capabilities](../../platform-engineers/crossplane) have been installed in your platform.
### Alibaba Cloud RDS and OSS
Check the parameters of the cloud service component:
```shell
$ kubectl vela show alibaba-rds
# Properties
+---------------+------------------------------------------------+--------+----------+--------------------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------------+------------------------------------------------+--------+----------+--------------------+
| engine | RDS engine | string | true | mysql |
| engineVersion | The version of RDS engine | string | true | 8.0 |
| instanceClass | The instance class for the RDS | string | true | rds.mysql.c1.large |
| username | RDS username | string | true | |
| secretName | Secret name which RDS connection will write to | string | true | |
+---------------+------------------------------------------------+--------+----------+--------------------+
```
A sample application is shown below.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
# environments refer to oss-conn secret
BUCKET_NAME:
secret: oss-conn
key: Bucket
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
- name: sample-oss
type: alibaba-oss
properties:
name: velaweb
secretName: oss-conn
```
## Verify
Deploy and verify the application (either provider is OK).
```shell
$ kubectl get application
NAME AGE
webapp 46m
$ kubectl port-forward deployment/express-server 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Handling connection for 80
Handling connection for 80
```
![](../../resources/crossplane-visit-application.jpg)

View File

@ -0,0 +1,12 @@
---
title: Want More?
---
Components in KubeVela are designed to be brought by users.
Check the documentation below on how to bring your own components to the system via various approaches.
- [Helm](../../platform-engineers/helm/component) - Helm chart is a natural form of component, note that you need to have a valid Helm repository (e.g. GitHub repo or a Helm hub) to host the chart in this case.
- [CUE](../../platform-engineers/cue/component) - CUE is a powerful approach to encapsulate a component, and it doesn't require any repository.
- [Simple Template](../../platform-engineers/kube/component) - Not a Helm or CUE expert? A simple template approach is also provided to define any Kubernetes API resource as a component. Note that only key-value style parameters are supported in this case.
- [Cloud Services](../../platform-engineers/cloud-services) - KubeVela allows you to declare cloud services as part of the application and provision them in consistent API.

View File

@ -0,0 +1,38 @@
---
title: Task
---
## Description
Describes jobs that run code or a script to completion.
## Samples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: mytask
type: task
properties:
image: perl
count: 10
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```
## Specification
```console
# Properties
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| count | specify number of tasks to run in parallel | int | true | 1 |
| restart | Define the job restart policy, the value can only be Never or OnFailure. By default, it's Never. | string | true | Never |
| image | Which image would you like to use for your service | string | true | |
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
```
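For example, to retry the task's pods on failure instead of giving up, set `restart` explicitly. A minimal sketch reusing the sample above (only the `restart` line is new; `OnFailure` is one of the two documented values):
```yaml
properties:
  image: perl
  count: 10
  restart: OnFailure   # retry failed pods; the default is Never
  cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```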

View File

@ -0,0 +1,129 @@
---
title: Web Service
---
## Description
Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
## Samples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
env:
- name: FOO
value: bar
- name: FOO
valueFrom:
secretKeyRef:
name: bar
key: bar
```
### Declare Volumes
The `Web Service` component exposes configurations for certain volume types including `PersistentVolumeClaim`, `ConfigMap`, `Secret`, and `EmptyDir`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
volumes:
- name: "my-pvc"
mountPath: "/var/www/html1"
type: "pvc" # PersistenVolumeClaim volume
claimName: "myclaim"
- name: "my-cm"
mountPath: "/var/www/html2"
type: "configMap" # ConfigMap volume (specifying items)
cmName: "myCmName"
items:
- key: "k1"
path: "./a1"
- key: "k2"
path: "./a2"
- name: "my-cm-noitems"
mountPath: "/var/www/html22"
type: "configMap" # ConfigMap volume (not specifying items)
cmName: "myCmName2"
- name: "mysecret"
type: "secret" # Secret volume
mountPath: "/var/www/html3"
secretName: "mysecret"
- name: "my-empty-dir"
type: "emptyDir" # EmptyDir volume
mountPath: "/var/www/html4"
```
## Specification
```console
# Properties
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| addRevisionLabel | | bool | true | false |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
##### volumes
+-----------+---------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+---------------------------------------------------------------------+--------+----------+---------+
| name | | string | true | |
| mountPath | | string | true | |
| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |
+-----------+---------------------------------------------------------------------+--------+----------+---------+
## env
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
### valueFrom
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
#### secretKeyRef
+------+------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------+------------------------------------------------------------------+--------+----------+---------+
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
+------+------------------------------------------------------------------+--------+----------+---------+
```

View File

@ -0,0 +1,37 @@
---
title: Worker
---
## Description
Describes long-running, scalable, containerized services that run in the backend. They do NOT have a network endpoint to receive external network traffic.
## Samples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: myworker
type: worker
properties:
image: "busybox"
cmd:
- sleep
- "1000"
```
## Specification
```console
# Properties
+-------+----------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| image | Which image would you like to use for your service | string | true | |
+-------+----------------------------------------------------+----------+----------+---------+
```

View File

@ -0,0 +1,376 @@
---
title: Dry-Run and Live-Diff
---
KubeVela allows you to dry-run and live-diff your application.
## Dry-Run the `Application`
Dry-run helps you understand the actual resources that will be expanded and deployed
to the Kubernetes cluster. In other words, it mocks running the same logic as KubeVela's controller
and outputs the results locally.
For example, let's dry-run the following application:
```yaml
# app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
"/": 8000
```
```shell
kubectl vela dry-run -f app.yaml
---
# Application(vela-app) -- Component(express-server)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: express-server
app.oam.dev/name: vela-app
workload.oam.dev/type: webservice
spec:
selector:
matchLabels:
app.oam.dev/component: express-server
template:
metadata:
labels:
app.oam.dev/component: express-server
spec:
containers:
- image: crccheck/hello-world
name: express-server
ports:
- containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: express-server
app.oam.dev/name: vela-app
trait.oam.dev/resource: service
trait.oam.dev/type: ingress
name: express-server
spec:
ports:
- port: 8000
targetPort: 8000
selector:
app.oam.dev/component: express-server
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: express-server
app.oam.dev/name: vela-app
trait.oam.dev/resource: ingress
trait.oam.dev/type: ingress
name: express-server
spec:
rules:
- host: testsvc.example.com
http:
paths:
- backend:
serviceName: express-server
servicePort: 8000
path: /
---
```
In this example, the definitions (`webservice` and `ingress`) that `vela-app` depends on are built-in
components and traits of KubeVela. You can also use `-d` or `--definitions` to provide capability definitions used in the application from local files.
The `dry-run` command will prioritize the provided capabilities over the ones living in the cluster.
## Live-Diff the `Application`
Live-diff helps you preview what would change if you upgrade an application, without making any changes
to the living cluster.
This feature is extremely useful for serious production deployments, keeping the upgrade under control.
It generates a diff between the specific revision of the running instance and the local candidate application.
The result shows the changes (added/modified/removed/no_change) to the application as well as its sub-resources,
such as components and traits.
Assume you have just deployed the application from the dry-run section.
Then you can list the revisions of the Application.
```shell
$ kubectl get apprev -l app.oam.dev/name=vela-app
NAME AGE
vela-app-v1 50s
```
Assume we're going to upgrade the application like below.
```yaml
# new-app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8080 # change port
cpu: 0.5 # add requests cpu units
- name: my-task # add a component
type: task
properties:
image: busybox
cmd: ["sleep", "1000"]
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
"/": 8080 # change port
```
Run live-diff like this:
```shell
kubectl vela live-diff -f new-app.yaml -r vela-app-v1
```
`-r` or `--revision` is a flag that specifies the name of a living ApplicationRevision with which you want to compare the updated application.
`-c` or `--context` is a flag that specifies the number of lines shown around a change. The unchanged lines
outside the context of a change will be omitted. This is useful if the diff result contains a lot of unchanged content
while you just want to focus on the changes.
<details><summary> Click to view the details of diff result </summary>
```bash
---
# Application (vela-app) has been modified(*)
---
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
creationTimestamp: null
name: vela-app
namespace: default
spec:
components:
- name: express-server
properties:
+ cpu: 0.5
image: crccheck/hello-world
- port: 8000
+ port: 8080
+ type: webservice
+ - name: my-task
+ properties:
+ cmd:
+ - sleep
+ - "1000"
+ image: busybox
traits:
- properties:
domain: testsvc.example.com
http:
- /: 8000
+ /: 8080
type: ingress
- type: webservice
+ type: task
status:
batchRollingState: ""
currentBatch: 0
rollingState: ""
upgradedReadyReplicas: 0
upgradedReplicas: 0
---
## Component (express-server) has been modified(*)
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
creationTimestamp: null
labels:
app.oam.dev/name: vela-app
name: express-server
spec:
workload:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: express-server
app.oam.dev/name: vela-app
workload.oam.dev/type: webservice
spec:
selector:
matchLabels:
app.oam.dev/component: express-server
template:
metadata:
labels:
app.oam.dev/component: express-server
spec:
containers:
- image: crccheck/hello-world
name: express-server
ports:
- - containerPort: 8000
+ - containerPort: 8080
status:
observedGeneration: 0
---
### Component (express-server) / Trait (ingress/service) has been removed(-)
---
- apiVersion: v1
- kind: Service
- metadata:
- labels:
- app.oam.dev/appRevision: ""
- app.oam.dev/component: express-server
- app.oam.dev/name: vela-app
- trait.oam.dev/resource: service
- trait.oam.dev/type: ingress
- name: express-server
- spec:
- ports:
- - port: 8000
- targetPort: 8000
- selector:
- app.oam.dev/component: express-server
---
### Component (express-server) / Trait (ingress/ingress) has been removed(-)
---
- apiVersion: networking.k8s.io/v1beta1
- kind: Ingress
- metadata:
- labels:
- app.oam.dev/appRevision: ""
- app.oam.dev/component: express-server
- app.oam.dev/name: vela-app
- trait.oam.dev/resource: ingress
- trait.oam.dev/type: ingress
- name: express-server
- spec:
- rules:
- - host: testsvc.example.com
- http:
- paths:
- - backend:
- serviceName: express-server
- servicePort: 8000
- path: /
---
## Component (my-task) has been added(+)
---
+ apiVersion: core.oam.dev/v1alpha2
+ kind: Component
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.oam.dev/name: vela-app
+ name: my-task
+ spec:
+ workload:
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ labels:
+ app.oam.dev/appRevision: ""
+ app.oam.dev/component: my-task
+ app.oam.dev/name: vela-app
+ workload.oam.dev/type: task
+ spec:
+ completions: 1
+ parallelism: 1
+ template:
+ spec:
+ containers:
+ - command:
+ - sleep
+ - "1000"
+ image: busybox
+ name: my-task
+ restartPolicy: Never
+ status:
+ observedGeneration: 0
---
### Component (my-task) / Trait (ingress/service) has been added(+)
---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ labels:
+ app.oam.dev/appRevision: ""
+ app.oam.dev/component: my-task
+ app.oam.dev/name: vela-app
+ trait.oam.dev/resource: service
+ trait.oam.dev/type: ingress
+ name: my-task
+ spec:
+ ports:
+ - port: 8080
+ targetPort: 8080
+ selector:
+ app.oam.dev/component: my-task
---
### Component (my-task) / Trait (ingress/ingress) has been added(+)
---
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ metadata:
+ labels:
+ app.oam.dev/appRevision: ""
+ app.oam.dev/component: my-task
+ app.oam.dev/name: vela-app
+ trait.oam.dev/resource: ingress
+ trait.oam.dev/type: ingress
+ name: my-task
+ spec:
+ rules:
+ - host: testsvc.example.com
+ http:
+ paths:
+ - backend:
+ serviceName: my-task
+ servicePort: 8080
+ path: /
```
</details>

View File

@ -0,0 +1,9 @@
---
title: Monitoring
---
TBD, Content Overview
1. We will move all installation scripts to a separate doc, possibly named "Install Capability Providers" (e.g. https://knative.dev/docs/install/install-extensions/). Install the monitoring trait (along with the prometheus/grafana controller).
2. Add monitoring trait into Application.
3. View it with grafana.

View File

@ -0,0 +1,8 @@
---
title: Build CI/CD Pipeline
---
TBD, Content Overview
1. Install Argo/Tekton.
2. run the pipeline example: https://github.com/oam-dev/kubevela/tree/master/docs/examples/argo

View File

@ -0,0 +1,238 @@
---
title: Advanced Rollout Plan
---
The rollout plan feature in KubeVela is essentially provided by the `AppRollout` API.
## AppRollout
Below is an example of rolling update of an application from v1 to v2 in three batches. The
first batch contains only 1 pod, while the remaining batches split the rest evenly.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 50%
- replicas: 50%
batchPartition: 1
```
## Basic Usage
1. Deploy application
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:4.0.6
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v1` has been generated:
```shell
$ kubectl get apprev test-rolling-v1
NAME AGE
test-rolling-v1 9s
```
2. Attach the following rollout plan to roll out the application to v1
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 10%
- replicas: 40%
- replicas: 50%
targetSize: 5
```
You can check the status of the AppRollout and wait for the rollout to complete.
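For example, assuming the AppRollout CRD is registered under the resource name `approllout` in your cluster, a quick check could be:
```shell
kubectl get approllout rolling-example -o yaml
```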
3. Modify the application image tag and apply it again. This will generate a new AppRevision `test-rolling-v2`
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.0.2
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v2` has been generated:
```shell
$ kubectl get apprev test-rolling-v2
NAME AGE
test-rolling-v2 7s
```
4. Apply the application rollout that upgrades the application from v1 to v2
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
You can check the status of the AppRollout and see the rollout complete; the
AppRollout's "Rolling State" becomes `rolloutSucceed`.
## Advanced Usage
Using `AppRollout` separately enables some advanced use cases.
### Revert
5. Apply the application rollout that reverts the application from v2 to v1
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v2
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
### Skip Revision Rollout
6. Modify the application image tag again and apply the YAML below. This will generate a new AppRevision `test-rolling-v3`
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.2.0
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v3` has been generated:
```shell
$ kubectl get apprev test-rolling-v3
NAME AGE
test-rolling-v3 7s
```
7. Apply the application rollout that rolls out the application from v1 to v3
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v3
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
## More Details About `AppRollout`
### Design Principles and Goals
There have been several attempts at solving the rollout problem in the cloud-native community. However, none
of them provides a true rolling-style upgrade; for example, Flagger supports Blue/Green, Canary
and A/B testing. Therefore, we decided to add support for batch-based rolling upgrade as
the first style supported in KubeVela.
We design KubeVela rollout solutions with the following principles in mind:
- First, we want all flavors of rollout controllers to share the same core rollout logic. The trait- and application-related logic can then be easily encapsulated into its own package.
- Second, the core rollout logic should be easily extensible to support different types of workloads, i.e. Deployment, CloneSet, StatefulSet, DaemonSet or even customized workloads.
- Third, the core rollout logic should have a well-documented state machine that performs state transitions explicitly.
- Finally, the controllers should be able to support all the rollout/upgrade needs of an application running in a production environment, including Blue/Green, Canary and A/B testing.
### State Transition
Here is the high level state transition graph
![](../../resources/approllout-status-transition.jpg)
### Roadmap
Our recent roadmap for rollout plan is [here](./roadmap).

View File

@ -0,0 +1,230 @@
---
title: Placement
---
## Introduction
In this section, we will introduce how to use KubeVela to place an application across multiple clusters with traffic management enabled. For traffic management, KubeVela currently allows you to split the traffic onto both the old and new revisions during a rolling update and verify the new version while preserving service availability.
### AppDeployment
The `AppDeployment` API in KubeVela is provided to satisfy such requirements. Here's an overview of the API:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppDeployment
metadata:
name: sample-appdeploy
spec:
traffic:
hosts:
- example.com
http:
- match:
# match any requests to 'example.com/example-app'
- uri:
prefix: "/example-app"
# split traffic 50/50 on v1/v2 versions of the app
weightedTargets:
- revisionName: example-app-v1
componentName: testsvc
port: 80
weight: 50
- revisionName: example-app-v2
componentName: testsvc
port: 80
weight: 50
appRevisions:
- # Name of the AppRevision.
# Each modification to Application would generate a new AppRevision.
revisionName: example-app-v1
# Cluster specific workload placement config
placement:
- clusterSelector:
# You can select Clusters by name or labels.
        # If multiple clusters are selected, one will be picked via a unique hashing algorithm.
labels:
tier: production
name: prod-cluster-1
distribution:
replicas: 5
- # If no clusterSelector is given, it will use the host cluster in which this CR exists
distribution:
replicas: 5
- revisionName: example-app-v2
placement:
- clusterSelector:
labels:
tier: production
name: prod-cluster-1
distribution:
replicas: 5
- distribution:
replicas: 5
```
### Cluster
The clusters selected in the `placement` section above are defined by the Cluster CRD. Here's what it looks like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Cluster
metadata:
name: prod-cluster-1
labels:
tier: production
spec:
kubeconfigSecretRef:
name: kubeconfig-cluster-1 # the secret name
```
The secret must contain the kubeconfig credentials in the `config` field:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: kubeconfig-cluster-1
data:
config: ... # kubeconfig data
```
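For example, a sketch of creating such a secret from a local kubeconfig file (the file path here is illustrative):
```shell
kubectl create secret generic kubeconfig-cluster-1 \
  --from-file=config=./prod-cluster-1.kubeconfig
```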
## Quickstart
Here's a step-by-step tutorial for you to try out. All of the yaml files are from [`docs/examples/appdeployment/`](https://github.com/oam-dev/kubevela/tree/master/docs/examples/appdeployment).
You must run all commands in that directory.
1. Create an Application
```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: example-app
annotations:
app.oam.dev/revision-only: "true"
spec:
components:
- name: testsvc
type: webservice
properties:
addRevisionLabel: true
image: crccheck/hello-world
port: 8000
EOF
```
This will create `example-app-v1` AppRevision. Check it:
```bash
$ kubectl get applicationrevisions.core.oam.dev
NAME AGE
example-app-v1 116s
```
> Note: with the `app.oam.dev/revision-only: "true"` annotation, the above `Application` resource won't create any pod instances and leaves the real deployment process to `AppDeployment`.
1. Then use the above AppRevision to create an AppDeployment.
```bash
$ kubectl apply -f appdeployment-1.yaml
```
> Note: in order for AppDeployment to work, your workload object must have a `spec.replicas` field for scaling.
1. Now you can check that there are 1 deployment and 2 pod instances deployed
```bash
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
testsvc-v1 2/2 2 0 27s
```
1. Update Application properties:
```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: example-app
annotations:
app.oam.dev/revision-only: "true"
spec:
components:
- name: testsvc
type: webservice
properties:
addRevisionLabel: true
image: nginx
port: 80
EOF
```
This will create a new `example-app-v2` AppRevision. Check it:
```bash
$ kubectl get applicationrevisions.core.oam.dev
NAME
example-app-v1
example-app-v2
```
1. Then use the two AppRevisions to update the AppDeployment:
```bash
$ kubectl apply -f appdeployment-2.yaml
```
(Optional) If you have Istio installed, you can apply the AppDeployment with traffic split:
```bash
# set up gateway if not yet
$ kubectl apply -f gateway.yaml
$ kubectl apply -f appdeployment-2-traffic.yaml
```
Note that for traffic split to work, you must set the following pod labels in workload cue templates (see [webservice.cue](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/cue/webservice.cue)):
```shell
"app.oam.dev/component": context.name
"app.oam.dev/appRevision": context.appRevision
```
1. Now you can check that there are 1 deployment and 1 pod per revision.
```bash
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
testsvc-v1 1/1 1 1 2m14s
testsvc-v2 1/1 1 1 8s
```
(Optional) To verify traffic split:
```bash
# run this in another terminal
$ kubectl -n istio-system port-forward service/istio-ingressgateway 8080:80
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
# The command should return pages of either docker whale or nginx in 50/50
$ curl -H "Host: example-app.example.com" http://localhost:8080/
```
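To observe the rough 50/50 split, you can sample the endpoint repeatedly; a minimal sketch:
```bash
# fetch the first line of 10 responses; roughly half should come from each version
for i in $(seq 1 10); do
  curl -s -H "Host: example-app.example.com" http://localhost:8080/ | head -n 1
done
```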
1. Cleanup:
```bash
kubectl delete appdeployments.core.oam.dev --all
kubectl delete applications.core.oam.dev --all
```

View File

@ -0,0 +1,35 @@
---
title: Canary
---
## Description
Configures Canary deployment strategy for your application.
## Specification
List of all configuration options for a `Rollout` trait.
```yaml
...
rollout:
replicas: 2
stepWeight: 50
interval: "10s"
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
interval | Schedule interval time | string | true | 30s
stepWeight | Weight percent of every step in rolling update | int | true | 50
replicas | Total replicas of the workload | int | true | 2
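As a sketch, the `rollout` section sits alongside the other service fields in an Appfile; the service name and image below are illustrative:
```yaml
name: testapp
services:
  testsvc:
    type: webservice
    image: crccheck/hello-world
    rollout:
      replicas: 2
      stepWeight: 50
      interval: "10s"
```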
## Conflicts With
### `Autoscale`
When `Rollout` and `Autoscale` traits are attached to the same service, the two will fight over the number of instances during rollout. Thus, by design, `Rollout` takes over replicas control (specified by the `.replicas` field) during rollout.
> Note: in upcoming releases, KubeVela will introduce a separate section in Appfile to define release phase configurations such as `Rollout`.

View File

@ -0,0 +1,89 @@
---
title: Aggregated Health Probe
---
The `HealthScope` allows you to define an aggregated health probe for all components in the same application.
1. Create a health scope instance.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
name: health-check
namespace: default
spec:
probe-interval: 60
workloadRefs:
- apiVersion: apps/v1
kind: Deployment
name: express-server
```
2. Create an application that drops in this health scope.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8080 # change port
cpu: 0.5 # add requests cpu units
scopes:
healthscopes.core.oam.dev: health-check
```
3. Check the reference of the aggregated health probe (`status.services.scopes`).
```shell
$ kubectl get app vela-app -o yaml
```
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app
...
status:
...
services:
- healthy: true
name: express-server
scopes:
- apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
name: health-check
```
4. Check the health scope details.
```shell
$ kubectl get healthscope health-check -o yaml
```
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
name: health-check
...
spec:
probe-interval: 60
workloadRefs:
- apiVersion: apps/v1
kind: Deployment
name: express-server
status:
healthConditions:
- componentName: express-server
diagnosis: 'Ready:1/1 '
healthStatus: HEALTHY
targetWorkload:
apiVersion: apps/v1
kind: Deployment
name: express-server
scopeHealthCondition:
healthStatus: HEALTHY
healthyWorkloads: 1
total: 1
```
It shows the aggregated health status for all components in this application.
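To script a quick check, you can read just the aggregated field shown above:
```shell
kubectl get healthscope health-check -o jsonpath='{.status.scopeHealthCondition.healthStatus}'
```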

View File

@ -0,0 +1,48 @@
---
title: Progressive Rollout RoadMap
---
Here are some working items on the roadmap
## Embed rollout in an application
We will support embedded rollout settings in an application. In this way, any changes to the
application will naturally roll out in a controlled manner instead of a sudden replace.
## Add support to trait upgrade
There are three trait-related work items that complement each other:
- we need to make sure that traits that work on the previous application still work on the new
application
- traits themselves also need a controlled way to upgrade instead of replacing the old in one shot
- rollout controller should suppress conflicting traits (like HPA/Scaler) during the rollout process
## Add metrics based rolling checking
We will integrate with prometheus and use the metrics generated by the application to control the
flow of the rollout. This part will be very similar to flagger.
## Add traffic shifting support
We will add traffic shifting based upgrading strategy like canary, A/B testing. We plan to support
Istio in our first version. This part will be very similar to flagger.
## Support upgrading more than one component
Currently, we can only upgrade one component at a time. We will support upgrading more components in
one application at once.
## Support Helm Rollout strategy
Currently, we only support upgrading k8s resources. We will support helm based workload in the
future.
## Add more restrictions on what part of the rollout plan can be changed during rolling
Here are some examples
- the BatchPartition field cannot decrease beyond the current batch
- the RolloutBatches field can only change the part after the current batch
- the ComponentList field cannot be changed after rolling starts
- the RolloutStrategy/TargetSize/NumBatches cannot be changed

View File

@ -0,0 +1,71 @@
---
title: Rollout Plan
---
In this documentation, we will show how to use the rollout plan to do a rolling update of an application.
## Overview
By default, when we update the properties of an application, KubeVela updates the underlying instances directly. The availability of the application is guaranteed by rollout traits (if any).
KubeVela also provides a rolling-style update mechanism: specify `spec.rolloutPlan` in the application to use it.
## Example
1. Deploy application to the cluster
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
2. Modify the application container command and apply the application again:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=2.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
Users can check the status of the application and see the rollout complete: the
application's `status.rollout.rollingState` becomes `rolloutSucceed`.
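For example:
```shell
$ kubectl get application test-rolling -o jsonpath='{.status.rollout.rollingState}'
rolloutSucceed
```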
## Advanced Usage
If you want to control and rollout the specific application revisions, or do revert, please refer to [Advanced Usage](advanced-rollout) to learn more details.

View File

@ -0,0 +1,58 @@
---
title: Labels and Annotations
---
The `labels` and `annotations` traits allow you to append labels and annotations to the component.
```yaml
# myapp.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: labels
properties:
"release": "stable"
- type: annotations
properties:
"description": "web application"
```
Deploy this application.
```shell
kubectl apply -f myapp.yaml
```
On the runtime cluster, check that the workload has been created successfully.
```bash
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 15s
```
Check the `labels`.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.labels}'
{"app.oam.dev/component":"express-server","release": "stable"}
```
Check the `annotations`.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.annotations}'
{"description":"web application"}
```

View File

@ -0,0 +1,96 @@
---
title: Ingress
---
> ⚠️ This section requires your runtime cluster has a working ingress controller.
The `ingress` trait exposes a component to the public Internet via a valid domain.
```shell
$ kubectl vela show ingress
# Properties
+--------+------------------------------------------------------------------------------+----------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+--------+------------------------------------------------------------------------------+----------------+----------+---------+
| http | Specify the mapping relationship between the http path and the workload port | map[string]int | true | |
| domain | Specify the domain you want to expose | string | true | |
+--------+------------------------------------------------------------------------------+----------------+----------+---------+
```
Attach an `ingress` trait to the component you want to expose and deploy.
```yaml
# vela-app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-app
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
"/": 8000
```
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
application.core.oam.dev/first-vela-app created
```
Check the status until we see `status` is `running` and services are `healthy`:
```bash
$ kubectl get application first-vela-app -w
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
first-vela-app express-server webservice healthChecking 14s
first-vela-app express-server webservice running true 42s
```
Check the trait details for its visiting URL:
```shell
$ kubectl get application first-vela-app -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-app
namespace: default
spec:
...
services:
- healthy: true
name: express-server
traits:
- healthy: true
message: 'Visiting URL: testsvc.example.com, IP: 47.111.233.220'
type: ingress
status: running
...
```
Then you will be able to visit this application via its domain.
```
$ curl -H "Host:testsvc.example.com" http://<your ip address>/
<xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
```

View File

@ -0,0 +1,31 @@
---
title: Metrics
---
## Description
Configures monitoring metrics for your service.
## Specification
List of all configuration options for a `Metrics` trait.
```yaml
...
format: "prometheus"
port: 8080
path: "/metrics"
scheme: "http"
enabled: true
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
path | The metrics path of the service | string | true | /metrics
format | Format of the metrics, defaults to prometheus | string | true | prometheus
scheme | The way to retrieve data, which can take the values `http` or `https` | string | true | http
enabled | | bool | true | true
port | The port for metrics, discovered automatically by default | int | true | 0
selector | The label selector for the pods, discovered automatically by default | map[string]string | false |
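As a sketch, these fields would be attached to a service in an Appfile; the service name, image and port below are illustrative:
```yaml
name: myapp
services:
  express-server:
    type: webservice
    image: crccheck/hello-world
    metrics:
      format: "prometheus"
      port: 8080
      path: "/metrics"
      scheme: "http"
      enabled: true
```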

View File

@ -0,0 +1,7 @@
---
title: Want More?
---
Traits in KubeVela are designed as modularized building blocks; they are fully customizable and pluggable.
Check [this documentation](../../platform-engineers/cue/trait) about how to design and enable your own traits in KubeVela platform.

View File

@ -0,0 +1,38 @@
---
title: Route
---
## Description
Configures external access to your service.
## Specification
List of all configuration options for a `Route` trait.
```yaml
...
domain: example.com
issuer: tls
rules:
- path: /testapp
rewriteTarget: /
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
domain | Domain name | string | true | empty
issuer | | string | true | empty
rules | | [[]rules](#rules) | false |
provider | | string | false |
ingressClass | | string | false |
### rules
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
path | | string | true |
rewriteTarget | | string | true | empty
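As a sketch, the `route` section would be attached to a service in an Appfile; the service name and image below are illustrative:
```yaml
name: myapp
services:
  express-server:
    type: webservice
    image: crccheck/hello-world
    route:
      domain: example.com
      issuer: tls
      rules:
        - path: /testapp
          rewriteTarget: /
```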

View File

@ -0,0 +1,64 @@
---
title: Manual Scaling
---
The `scaler` trait allows you to scale your component instance manually.
```shell
$ kubectl vela show scaler
# Properties
+----------+--------------------------------+------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+----------+--------------------------------+------+----------+---------+
| replicas | Specify replicas of workload | int | true | 1 |
+----------+--------------------------------+------+----------+---------+
```
Declare an application with scaler trait.
```yaml
# sample-manual.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
traits:
- type: scaler
properties:
replicas: 2
- type: sidecar
properties:
name: "sidecar-test"
image: "fluentd"
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
Apply the sample application:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/enduser/sample-manual.yaml
application.core.oam.dev/website configured
```
In the runtime cluster, you can see that the underlying deployment of the `frontend` component now has 2 replicas.
```shell
$ kubectl get deploy -l app.oam.dev/name=website
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 19h
frontend 2/2 2 2 19h
```
To scale up or scale down, you just need to modify the `replicas` field of `scaler` trait and re-apply the YAML.
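For example, to scale the `frontend` component to 5 replicas, edit the same file and apply it again with `kubectl apply -f sample-manual.yaml`:
```yaml
    traits:
    - type: scaler
      properties:
        replicas: 5 # changed from 2
```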

View File

@ -0,0 +1,92 @@
---
title: Service Binding
---
The service binding trait binds data from a Kubernetes `Secret` into the application container's environment variables.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "binding cloud resource secrets to pod env"
name: service-binding
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for envName, v in parameter.envMappings {
name: envName
valueFrom: {
secretKeyRef: {
name: v.secret
if v["key"] != _|_ {
key: v.key
}
if v["key"] == _|_ {
key: envName
}
}
}
},
]
}]
}
}
parameter: {
// +usage=The mapping of environment variables to secret
envMappings: [string]: [string]: string
}
```
With the help of this `service-binding` trait, you can explicitly set the `envMappings` parameter to map
environment variable names to secret keys. Here is an example.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
        port: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
```
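Once the application is running, a quick way to verify the binding (assuming the workload is a Deployment named after the component) is:
```shell
kubectl exec deploy/express-server -- printenv DB_PASSWORD endpoint username
```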

View File

@ -0,0 +1,102 @@
---
title: Attaching Sidecar
---
The `sidecar` trait allows you to attach a sidecar container to the component.
## Show the Usage of Sidecar
```shell
$ kubectl vela show sidecar
# Properties
+---------+-----------------------------------------+-----------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+-----------------------------------------+-----------------------+----------+---------+
| name | Specify the name of sidecar container | string | true | |
| cmd | Specify the commands run in the sidecar | []string | false | |
| image | Specify the image of sidecar container | string | true | |
| volumes | Specify the shared volume path | [[]volumes](#volumes) | false | |
+---------+-----------------------------------------+-----------------------+----------+---------+
## volumes
+-----------+-------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-------------+--------+----------+---------+
| name | | string | true | |
| path | | string | true | |
+-----------+-------------+--------+----------+---------+
```
## Deploy the Application
In this Application, the component `log-gen-worker` and the sidecar share a data volume where the logs are saved.
The sidecar re-outputs the log to stdout.
```yaml
# app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: vela-app-with-sidecar
spec:
components:
- name: log-gen-worker
type: worker
properties:
image: busybox
cmd:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/date.log;
i=$((i+1));
sleep 1;
done
volumes:
- name: varlog
mountPath: /var/log
type: emptyDir
traits:
- type: sidecar
properties:
name: count-log
image: busybox
cmd: [ /bin/sh, -c, 'tail -n+1 -f /var/log/date.log']
volumes:
- name: varlog
path: /var/log
```
Deploy this Application.
```shell
kubectl apply -f app.yaml
```
On the runtime cluster, check the name of the running pod.
```shell
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
log-gen-worker-76945f458b-k7n9k 2/2 Running 0 90s
```
And check the logging output of sidecar.
```shell
$ kubectl logs -f log-gen-worker-76945f458b-k7n9k count-log
0: Fri Apr 16 11:08:45 UTC 2021
1: Fri Apr 16 11:08:46 UTC 2021
2: Fri Apr 16 11:08:47 UTC 2021
3: Fri Apr 16 11:08:48 UTC 2021
4: Fri Apr 16 11:08:49 UTC 2021
5: Fri Apr 16 11:08:50 UTC 2021
6: Fri Apr 16 11:08:51 UTC 2021
7: Fri Apr 16 11:08:52 UTC 2021
8: Fri Apr 16 11:08:53 UTC 2021
9: Fri Apr 16 11:08:54 UTC 2021
```

View File

@ -0,0 +1,51 @@
---
title: Cloud Volumes
---
This section introduces how to attach cloud volumes to the component. For example, AWS ElasticBlockStore,
Azure Disk, Alibaba Cloud OSS, etc.
Cloud volumes are not built-in capabilities in KubeVela, so you need to enable these traits first. Let's use AWS EBS as an example.
Install and check the `TraitDefinition` for AWS EBS volume trait.
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/app-with-volumes/td-awsEBS.yaml
```
```shell
$ kubectl vela show aws-ebs-volume
+-----------+----------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+----------------------------------------------------------------+--------+----------+---------+
| name | The name of volume. | string | true | |
| mountPath | | string | true | |
| volumeID | Unique id of the persistent disk resource. | string | true | |
| fsType | Filesystem type to mount. | string | true | ext4 |
| partition | Partition on the disk to mount. | int | false | |
| readOnly | ReadOnly here will force the ReadOnly setting in VolumeMounts. | bool | true | false |
+-----------+----------------------------------------------------------------+--------+----------+---------+
```
We can now attach an `aws-ebs` volume to a component.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: myworker
type: worker
properties:
image: "busybox"
cmd:
- sleep
- "1000"
traits:
- type: aws-ebs-volume
properties:
name: "my-ebs"
mountPath: "/myebs"
volumeID: "my-ebs-id"
```

View File

@ -0,0 +1,226 @@
---
title: Installation
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
> For upgrading existing KubeVela, please read the [upgrade guide](./advanced-install#upgrade).
## 1. Choose Kubernetes Cluster
Requirements:
- Kubernetes cluster >= v1.15.0
- `kubectl` installed and configured
KubeVela is a simple custom controller that can be installed on any Kubernetes cluster, including managed offerings or your own clusters. The only requirement is to ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
For local deployment and testing, you could use `minikube` or `kind`.
<Tabs
className="unique-tabs"
defaultValue="minikube"
values={[
{label: 'Minikube', value: 'minikube'},
{label: 'KinD', value: 'kind'},
]}>
<TabItem value="minikube">
Follow the minikube [installation guide](https://minikube.sigs.k8s.io/docs/start/).
Then spin up a minikube cluster:
```shell script
minikube start
```
Install ingress:
```shell script
minikube addons enable ingress
```
</TabItem>
<TabItem value="kind">
Follow [this guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) to install kind.
Then spin up a kind cluster:
```shell script
cat <<EOF | kind create cluster --image=kindest/node:v1.18.15 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
EOF
```
Then install [ingress for kind](https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx):
```shell script
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
```
</TabItem>
</Tabs>
## 2. Install KubeVela
1. Add helm chart repo for KubeVela
```shell script
helm repo add kubevela https://kubevelacharts.oss-accelerate.aliyuncs.com/core
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install KubeVela
```shell script
helm install --create-namespace -n vela-system kubevela kubevela/vela-core
```
By default, it will enable the webhook with a self-signed certificate provided by [kube-webhook-certgen](https://github.com/jet/kube-webhook-certgen).
You can also [install it with `cert-manager`](./advanced-install#install-kubevela-with-cert-manager).
4. Verify the chart installed successfully
```shell script
helm test kubevela -n vela-system
```
<details> <summary> Click to see the expected output of helm test </summary>
```shell
Pod kubevela-application-test pending
Pod kubevela-application-test pending
Pod kubevela-application-test running
Pod kubevela-application-test succeeded
NAME: kubevela
LAST DEPLOYED: Tue Apr 13 18:42:20 2021
NAMESPACE: vela-system
STATUS: deployed
REVISION: 1
TEST SUITE: kubevela-application-test
Last Started: Fri Apr 16 20:49:10 2021
Last Completed: Fri Apr 16 20:50:04 2021
Phase: Succeeded
TEST SUITE: first-vela-app
Last Started: Fri Apr 16 20:49:10 2021
Last Completed: Fri Apr 16 20:49:10 2021
Phase: Succeeded
NOTES:
Welcome to use the KubeVela! Enjoy your shipping application journey!
```
</details>
## 3. Get KubeVela CLI
KubeVela CLI gives you a simplified workflow to manage applications with optimized output. It is not mandatory though.
The KubeVela CLI can be [installed as a kubectl plugin](./kubectl-plugin.mdx) or as a standalone binary.
<Tabs
className="unique-tabs"
defaultValue="script"
values={[
{label: 'Script', value: 'script'},
{label: 'Homebrew', value: 'homebrew'},
{label: 'Download directly from releases', value: 'download'},
]}>
<TabItem value="script">
** macOS/Linux **
```shell script
curl -fsSl https://kubevela.io/script/install.sh | bash
```
**Windows**
```shell script
powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
```
</TabItem>
<TabItem value="homebrew">
**macOS/Linux**
First, update your brew.
```shell script
brew update
```
Then install kubevela client.
```shell script
brew install kubevela
```
</TabItem>
<TabItem value="download">
- Download the latest `vela` binary from the [releases page](https://github.com/oam-dev/kubevela/releases).
- Unpack the `vela` binary and add it to `$PATH` to get started.
```shell script
sudo mv ./vela /usr/local/bin/vela
```
> Known Issue (https://github.com/oam-dev/kubevela/issues/625):
> If you're using a Mac, it may report that "vela" cannot be opened because the developer cannot be verified.
>
> Newer versions of macOS are stricter about running software you've downloaded that isn't signed with an Apple developer key, and we haven't supported that for KubeVela yet.
> You can open 'System Preferences' -> 'Security & Privacy' -> 'General' and click 'Allow Anyway' to temporarily fix it.
</TabItem>
</Tabs>
## 4. Enable Helm Support
KubeVela leverages Helm controller from [Flux v2](https://github.com/fluxcd/flux2) to deploy [Helm](https://helm.sh/) based components.
You can enable this feature by installing a minimal Flux v2 chart as below:
```shell
helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
```
Or you could install full Flux v2 following its own guide of course.
## 5. Verify
Check the available application components and traits with the `vela` CLI tool:
```shell script
vela components
```
```console
NAME NAMESPACE WORKLOAD DESCRIPTION
task vela-system jobs.batch Describes jobs that run code or a script to completion.
webservice vela-system deployments.apps Describes long-running, scalable, containerized services
that have a stable network endpoint to receive external
network traffic from customers.
worker vela-system deployments.apps Describes long-running, scalable, containerized services
that running at backend. They do NOT have network endpoint
to receive external network traffic.
```
These capabilities are built in, so they are ready to use once they show up. KubeVela is designed to be programmable and fully self-service, so the assumption is that more capabilities will be added later per your own needs.
Also, whenever new capabilities are added to the platform, you will immediately see them in the above output.
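Similarly, you can list the available traits:
```shell script
vela traits
```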
> See the [advanced installation guide](./advanced-install) to learn more about installation details.

View File

@ -0,0 +1,73 @@
---
title: Introduction
slug: /
---
![alt](resources/KubeVela-01.png)
## Motivation
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common abstraction layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does introduce extra complexity to application developers, namely understanding the concepts of pods, port exposing, privilege escalation, resource claims, CRD, and so on. We've seen the nontrivial learning curve and the lack of developer-facing abstraction have impacted user experiences, slowed down productivity, led to unexpected errors or misconfigurations in production. People start to question the value of this revolution: "why am I bothered with all these details?".
On the other hand, abstracting Kubernetes to serve developers' requirements is a highly opinionated process, and the resultant abstractions would only make sense had the decision makers been the platform team. Unfortunately, the platform team today face the following dilemma:
*There is no tool or framework for them to build user friendly yet highly extensible abstractions for application management*.
Thus, many application platforms today are essentially restricted abstractions with in-house add-on mechanisms despite the extensibility of Kubernetes. This makes extending such platforms for developers' requirements or to wider scenarios almost impossible, not to mention taking full advantage of the rich Kubernetes ecosystem.
In the end, developers complain those platforms are too rigid and slow in response to feature requests or improvements. The platform team do want to help but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
## What is KubeVela?
For platform team, KubeVela serves as a framework that relieves the pains of building modern application platforms by doing the following:
**Application Centric** - KubeVela introduces consistent yet application centric API to capture a full deployment of microservices on top of hybrid environments. No infrastructure level concern, simply deploy.
**Natively Extensible** - KubeVela uses CUE to glue capabilities provided by runtime infrastructure and expose them to users via self-service API. When users' needs grow, these API can naturally expand in programmable approach.
**Runtime Agnostic** - KubeVela is built with Kubernetes as control plane but adaptable to any runtime as data-plane. It can deploy (and manage) diverse workload types such as container, cloud functions, databases, or even EC2 instances across hybrid environments.
With KubeVela, the platform team finally has the tooling support to design an easy-to-use application platform with high confidence and low turnaround time.
For end-users (e.g. the application team), this platform enables them to design and ship applications to hybrid environments with minimal effort; instead of managing a handful of infrastructure details, a simple application definition that can be easily integrated with any CI/CD pipeline is all they need.
## Comparisons
### KubeVela vs. Platform-as-a-Service (PaaS)
The typical examples are Heroku and Cloud Foundry. They provide full application management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
Though the biggest difference lies in **flexibility**.
KubeVela enables you to serve end users with programmable building blocks which are fully flexible and coded by yourself. Comparing to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints in the type of supported applications and capabilities, and as application needs grows, you always outgrow the capabilities of the PaaS system - this will never happen in KubeVela platform.
So think of KubeVela as a Heroku that is fully extensible to serve your needs as you grow.
### KubeVela vs. Serverless
Serverless platform such as AWS Lambda provides extraordinary user experience and agility to deploy serverless applications. However, those platforms impose even more constraints in extensibility. They are arguably "hard-coded" PaaS.
KubeVela can easily deploy Kubernetes based serverless workloads such as Knative, OpenFaaS by referencing them as new components. Even for AWS Lambda, KubeVela can also deploy such workload leveraging Terraform based component.
### KubeVela vs. Platform agnostic developer tools
The typical example is Hashicorp's Waypoint. Waypoint is a developer facing tool which introduces a consistent workflow (i.e., build, deploy, release) to ship applications on top of different platforms.
KubeVela can be integrated with such tools seamlessly. In this case, developers would use the Waypoint workflow as the UI to deploy and manage applications with KubeVela's abstractions (e.g. applications, components, traits etc).
### KubeVela vs. Helm
Helm is a package manager for Kubernetes that provides package, install, and upgrade a set of YAML files for Kubernetes as a unit.
KubeVela as a modern deployment system can naturally deploys Helm charts. A common example is you could easily use KubeVela to declare and deploy an application which is composed by a WordPress Helm chart and a AWS RDS instance defined by Terraform, or distribute the Helm chart to multiple clusters.
KubeVela also leverages Helm to manage the capability addons in runtime clusters.
### KubeVela vs. Kubernetes
KubeVela is a Kubernetes add-on for building developer-centric deployment system. It leverages [Open Application Model](https://github.com/oam-dev/spec) and the native Kubernetes extensibility to resolve a hard problem - making shipping applications enjoyable on Kubernetes.
## Getting Started
Now let's [get started](./quick-start) with KubeVela!

View File

@ -0,0 +1,75 @@
---
title: Install kubectl plugin
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Installing the vela kubectl plugin can help you ship applications more easily!
## Installation
You can install kubectl plugin `kubectl vela` by:
<Tabs
className="unique-tabs"
defaultValue="krew"
values={[
{label: 'Krew', value: 'krew'},
{label: 'Script', value: 'script'},
]}>
<TabItem value="krew">
1. [Install and set up](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew on your machine.
2. Discover plugins available on Krew:
```shell
kubectl krew update
```
3. Install kubectl vela:
```shell script
kubectl krew install vela
```
</TabItem>
<TabItem value="script">
**macOS/Linux**
```shell script
curl -fsSl https://kubevela.io/script/install-kubectl-vela.sh | bash
```
You can also download the binary from [release pages ( >= v1.0.3)](https://github.com/oam-dev/kubevela/releases) manually.
Kubectl will discover it from your system path automatically.
</TabItem>
</Tabs>
## Usage
```shell
$ kubectl vela -h
A Highly Extensible Platform Engine based on Kubernetes and Open Application Model.
Usage:
kubectl vela [flags]
kubectl vela [command]
Available Commands:
  dry-run     Dry Run an application, and output the K8s resources as
              result to stdout, only CUE template supported for now
  live-diff   Dry-run an application, and do diff on a specific app
              revision. The provided capability definitions will be used
              during Dry-run. If any capabilities used in the app are not
              found in the provided ones, it will try to find from
              cluster.
  show        Show the reference doc for a workload type or trait
  version     Prints out build version information
Flags:
  -h, --help   help for vela
Use "kubectl vela [command] --help" for more information about a command.
```

View File

@ -0,0 +1,92 @@
---
title: Extend CRD Operator as Component Type
---
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD operator as a KubeVela component.
**The mechanism works for all CRD Operators**.
### Step 1: Install the CRD controller
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
### Step 2: Create Component Definition
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
#### 1. Describe The Workload Type
```yaml
...
annotations:
definition.oam.dev/description: "OpenKruise cloneset"
...
```
A one-line description of this component type. It will be shown in helper commands such as `$ vela components`.
#### 2. Register Its Underlying CRD
```yaml
...
workload:
definition:
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
...
```
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
#### 3. Define Template
```yaml
...
schematic:
cue:
template: |
output: {
apiVersion: "apps.kruise.io/v1alpha1"
kind: "CloneSet"
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
replicas: parameter.replicas
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Number of pods in the cloneset
replicas: *5 | int
}
```
### Step 3: Register New Component Type to KubeVela
As long as the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
```
And the new component type will immediately become available for developers to use in KubeVela.
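A sketch of referencing it from an Application (assuming the `ComponentDefinition` in cloneset.yaml is named `cloneset`; check the actual `metadata.name` in that file):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-with-cloneset
spec:
  components:
    - name: mycomp
      type: cloneset # must match the ComponentDefinition's metadata.name
      properties:
        image: nginx
        replicas: 3
```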

View File

@ -0,0 +1,22 @@
---
title: Overview
---
Cloud services are important components of your application, and KubeVela allows you to provision and consume them in a consistent experience.
## How Does KubeVela Manage Cloud Services?
In KubeVela, the needed cloud services are claimed as *components* in an application, and consumed via *Service Binding Trait* by other components.
## Does KubeVela Talk to the Clouds?
KubeVela relies on [Terraform Controller](https://github.com/oam-dev/terraform-controller) or [Crossplane](http://crossplane.io/) as providers to talk to the clouds. Please check the documentations below for detailed steps.
- [Terraform](./terraform)
- [Crossplane](./crossplane)
## Can an Instance of a Cloud Service be Shared by Multiple Applications?
Yes, though we currently defer this to providers, so by default cloud service instances are not shared and are dedicated per `Application`. A workaround for now is to use a separate `Application` that declares the cloud service only; other `Application`s can then consume it via the service binding trait in a shared approach.
In the future, we are considering making this part as a standard feature of KubeVela so you could claim whether a given cloud service component should be shared or not.

View File

@ -0,0 +1,155 @@
---
title: Crossplane
---
In this documentation, we will use Alibaba Cloud's RDS (Relational Database Service), and Alibaba Cloud's OSS (Object Storage System) as examples to show how to enable cloud services as part of the application deployment.
These cloud services are provided by Crossplane.
## Prepare Crossplane
<details>
Please refer to the [Installation guide](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0)
to install Crossplane Alibaba provider v0.5.0.
> If you'd like to configure any other Crossplane providers, please refer to [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration).
```
$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0
# Note the xxx and yyy here is your own AccessKey and SecretKey to the cloud resources.
$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy
$ kubectl apply -f provider.yaml
```
`provider.yaml` is as below.
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: crossplane-system
---
apiVersion: alibaba.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
name: default
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: alibaba-account-creds
key: credentials
region: cn-beijing
```
</details>
## Register `alibaba-rds` Component
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-rds
namespace: vela-system
annotations:
definition.oam.dev/description: "Alibaba Cloud RDS Resource"
spec:
workload:
definition:
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
schematic:
cue:
template: |
output: {
apiVersion: "database.alibaba.crossplane.io/v1alpha1"
kind: "RDSInstance"
spec: {
forProvider: {
engine: parameter.engine
engineVersion: parameter.engineVersion
dbInstanceClass: parameter.instanceClass
dbInstanceStorageInGB: 20
securityIPList: "0.0.0.0/0"
masterUsername: parameter.username
}
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=RDS engine
engine: *"mysql" | string
// +usage=The version of RDS engine
engineVersion: *"8.0" | string
// +usage=The instance class for the RDS
instanceClass: *"rds.mysql.c1.large" | string
// +usage=RDS username
username: string
// +usage=Secret name which RDS connection will write to
secretName: string
}
```
## Register `alibaba-oss` Component
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-oss
namespace: vela-system
annotations:
    definition.oam.dev/description: "Alibaba Cloud OSS Resource"
spec:
workload:
definition:
apiVersion: oss.alibaba.crossplane.io/v1alpha1
kind: Bucket
schematic:
cue:
template: |
output: {
apiVersion: "oss.alibaba.crossplane.io/v1alpha1"
kind: "Bucket"
spec: {
name: parameter.name
acl: parameter.acl
storageClass: parameter.storageClass
dataRedundancyType: parameter.dataRedundancyType
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=OSS bucket name
name: string
// +usage=The access control list of the OSS bucket
acl: *"private" | string
// +usage=The storage type of OSS bucket
storageClass: *"Standard" | string
// +usage=The data Redundancy type of OSS bucket
dataRedundancyType: *"LRS" | string
          // +usage=Secret name which OSS connection will write to
secretName: string
}
```

View File

@ -0,0 +1,245 @@
---
title: Advanced Features
---
As a Data Configuration Language, CUE allows you to do some advanced templating magic in definition objects.
## Render Multiple Resources With a Loop
You can define the for-loop inside the `outputs`.
> Note that in this case the type of `parameter` field used in the for-loop must be a map.
Below is an example that will render multiple Kubernetes Services in one trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: expose
spec:
schematic:
cue:
template: |
parameter: {
http: [string]: int
}
outputs: {
for k, v in parameter.http {
"\(k)": {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: v
targetPort: v
}]
}
}
}
}
```
The usage of this trait could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
...
traits:
- type: expose
properties:
http:
myservice1: 8080
myservice2: 8081
```
## Execute HTTP Request in Trait Definition
The trait definition can send an HTTP request and capture the response to help you render the resource, using the keyword `processing`.
You can define HTTP request `method`, `url`, `body`, `header` and `trailer` in the `processing.http` section, and the returned data will be stored in `processing.output`.
> Please ensure the target HTTP server returns **JSON data**.
Then you can reference the returned data from `processing.output` in `patch` or `output/outputs`.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: auth-service
spec:
schematic:
cue:
template: |
parameter: {
serviceURL: string
}
processing: {
output: {
token?: string
}
// The target server will return a JSON data with `token` as key.
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
body?: bytes
header: {}
trailer: {}
}
}
}
patch: {
data: token: processing.output.token
}
```
In the above example, this trait definition will send a request to get the `token` data, and then patch it to the given component instance.
## Data Passing
A trait definition can read the generated API resources (rendered from `output` and `outputs`) of given component definition.
> KubeVela will ensure the component definitions are always rendered before traits definitions.
Specifically, `context.output` contains the rendered workload API resource (whose GVK is indicated by `spec.workload` in the component definition), while `context.outputs.<xx>` contains all the other rendered API resources.
Below is an example for data passing:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
ports: [{containerPort: parameter.port}]
envFrom: [{
configMapRef: name: context.name + "game-config"
}]
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
outputs: gameconfig: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: context.name + "game-config"
}
data: {
enemies: parameter.enemies
lives: parameter.lives
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
lives: string
enemies: string
port: int
}
---
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
path: string
exposePort: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: parameter.exposePort
targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
}]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
labels: config: context.outputs.gameconfig.data.enemies
spec: {
rules: [{
host: parameter.domain
http: {
paths: [{
path: parameter.path
backend: {
serviceName: context.name
servicePort: parameter.exposePort
}
}]
}
}]
}
}
```
In detail, when rendering the `worker` `ComponentDefinition`:
1. the rendered Kubernetes Deployment resource will be stored in `context.output`,
2. all other rendered resources will be stored in `context.outputs.<xx>`, where `<xx>` is the unique name in every `template.outputs`.
Thus, the `TraitDefinition` can read the rendered API resources (e.g. `context.outputs.gameconfig.data.enemies`) from the `context`.

View File

@ -0,0 +1,548 @@
---
title: Learning CUE
---
This document explains how to use CUE to encapsulate and abstract a given capability in Kubernetes.
> Please make sure you have already learned about `Application` custom resource before reading the following guide.
## Overview
The reasons why KubeVela supports CUE as a first-class solution for designing abstractions can be summarized as below:
- **CUE is designed for large scale configuration.** CUE has the ability to understand a
configuration worked on by engineers across a whole company and to safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal of defining and shipping production level applications at web scale.
- **CUE supports first-class code generation and automation.** CUE can integrate with existing tools and workflows naturally, while other tools would have to build complex custom solutions. For example, it can generate OpenAPI schemas with Go code. This is how KubeVela builds developer tools and GUI interfaces based on the CUE templates.
- **CUE integrates very well with Go.**
KubeVela is built with Go, just like most projects in the Kubernetes ecosystem. CUE is also implemented in Go and exposes a rich Go API. KubeVela integrates CUE as its core library and works as a Kubernetes controller. With the help of CUE, KubeVela can easily handle data constraint problems.
> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
## Prerequisites
Please make sure the following CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](../../install#3-get-kubevela-cli)
## CUE CLI Basic
Below is some basic CUE data; you can define both schema and values in the same file with almost the same format:
```
a: 1.5
a: float
b: 1
b: int
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
CUE is a superset of JSON; we can use it like JSON with the following conveniences (see the short example after this list):
* C style comments,
* quotes may be omitted from field names without special characters,
* commas at the end of fields are optional,
* a comma after the last element in a list is allowed,
* outer curly braces are optional.
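For instance, the JSON-style fields below are already valid CUE, and the same data can be rewritten more concisely using these rules:
```
// JSON style, valid CUE as-is:
"g": {
    "h": "abc",
}
// idiomatic CUE: quotes, commas and inner braces dropped
g: h: "abc"
```
Both forms describe the same value, so CUE unifies them without conflict.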
CUE has powerful CLI commands. Let's keep the data in a file named `first.cue` and try them out.
* Format the CUE file. If you're using GoLand or a similar JetBrains IDE,
you can [configure save on format](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
This command will not only format the CUE, but also point out schema errors, which is very useful.
```shell
cue fmt first.cue
```
* Schema check. Besides `cue fmt`, you can also use `cue vet` to check the schema.
```shell
cue vet first.cue
```
* Calculate/render the result. `cue eval` will evaluate the CUE file and render the result.
You can see the results no longer contain `a: float` and `b: int`, because these two variables have been resolved to concrete values,
while `e: string` has no concrete value yet, so it is kept as-is.
```shell
$ cue eval first.cue
a: 1.5
b: 1
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
* Render a specified result. For example, if we only want to know the result of `b` in the file, we can specify it with the flag `-e`.
```shell
$ cue eval -e b first.cue
1
```
* Export the result. `cue export` will export the result with final values. It will report an error if some variables are not concrete.
```shell
$ cue export first.cue
e: cannot convert incomplete value "string" to JSON:
./first.cue:9:4
```
We can complete it by giving `e` a value, for example:
```shell
echo "e: \"abc\"" >> first.cue
```
Then, the command will work. By default, the result will be rendered in JSON format.
```shell
$ cue export first.cue
{
"a": 1.5,
"b": 1,
"d": [
1,
2,
3
],
"g": {
"h": "abc"
},
"e": "abc"
}
```
* Export the result in YAML format.
```shell
$ cue export first.cue --out yaml
a: 1.5
b: 1
d:
- 1
- 2
- 3
g:
h: abc
e: abc
```
* Export the result of a specified variable.
```shell
$ cue export -e g first.cue
{
"h": "abc"
}
```
You have now learned the most useful CUE CLI operations.
## CUE Language Basic
* Data structures: below are the basic data structures of CUE.
```
// float
a: 1.5
// int
b: 1
// string
c: "blahblahblah"
// array
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
// bool
e: true
// struct
f: {
a: 1.5
b: 1
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
g: {
h: "abc"
}
}
// null
j: null
```
* Define a custom CUE type. You can use the `#` symbol to declare that a variable represents a CUE type.
```
#abc: string
```
Let's name it `second.cue`. Then `cue export` won't complain, as `#abc` is a type, not an incomplete value.
```shell
$ cue export second.cue
{}
```
You can also define a more complex custom struct, such as:
```
#abc: {
x: int
y: string
z: {
a: float
b: bool
}
}
```
It's widely used in KubeVela to define templates and do validation.
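For example, unifying concrete data with a type validates the data against it; below is a minimal sketch with hypothetical values:
```
#abc: {
    x: int
    y: string
}
// `data` must conform to `#abc`; a wrong type here would fail `cue vet`.
data: #abc & {
    x: 1
    y: "hello"
}
```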
## CUE Templating and References
Let's try to define a CUE template with the knowledge just learned.
1. Define a struct variable `parameter`.
```
parameter: {
name: string
image: string
}
```
Let's save it in a file called `deployment.cue`.
2. Define a more complex struct variable `template` and reference the variable `parameter`.
```
template: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
People who are familiar with Kubernetes may recognize that this is a template of a Kubernetes Deployment. The `parameter` part
defines the parameters of the template.
Add it into `deployment.cue`.
3. Then, let's fill in the values by adding the following code block:
```
parameter:{
name: "mytest"
image: "nginx:v1"
}
```
4. Finally, let's export it in YAML:
```shell
$ cue export deployment.cue -e template --out yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: mytest
image: nginx:v1
metadata:
labels:
app.oam.dev/component: mytest
selector:
matchLabels:
app.oam.dev/component: mytest
```
## Advanced CUE Schematic
* Open struct and list. Using `...` in a list or struct means the object is open.
- A list like `[...string]` means it can hold multiple string elements.
If we don't add `...`, then `[string]` means the list can only have one `string` element in it.
- A struct like below means the struct can contain unknown fields.
```
{
abc: string
...
}
```
* The operator `|` represents a disjunction: a value could be either case. Below is an example where the variable `a` could be of type string or int.
```
a: string | int
```
* Default values. We can use the `*` symbol to mark a default value for a variable. It's usually used with `|`,
which represents a default value for some type. Below is an example where the variable `a` is an `int` whose default value is `1`.
```
a: *1 | int
```
* Optional variables. In some cases, a variable might not be used; these are optional variables, which we can define with `?:`.
In the example below, `a` is an optional variable, `x` and `z` in `#my` are optional, while `y` is required.
```
a?: int
#my: {
    x?: string
    y:  int
    z?: float
}
```
Optional variables can be skipped, which usually works together with conditional logic.
Specifically, to check whether some field exists, the CUE grammar is `if _variable_ != _|_`; an example is below:
```
parameter: {
name: string
image: string
config?: [...#Config]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.config != _|_ {
config: parameter.config
}
}]
}
...
}
```
* The operator `&` unifies two values.
```
a: *1 | int
b: 3
c: a & b
```
Save it in a file named `third.cue`, then evaluate the result using `cue eval`:
```shell
$ cue eval third.cue
a: 1
b: 3
c: 3
```
* Conditional statements are really useful when you have cascading operations, where different values lead to different results.
You can express such `if` logic in the template.
```
price: number
feel: *"good" | string
// Feel bad if price is too high
if price > 100 {
feel: "bad"
}
price: 200
```
Save it in a file named `fourth.cue`, then evaluate the result using `cue eval`:
```shell
$ cue eval fourth.cue
price: 200
feel: "bad"
```
Another example is to use a bool type as a parameter.
```
parameter: {
name: string
image: string
useENV: bool
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
}
```
* For loops: if you want to avoid duplication, you may want to use a for loop.
- Loop for Map
```cue
parameter: {
name: string
image: string
env: [string]: string
}
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
```
- Loop for Type
```
#a: {
"hello": "Barcelona"
"nihao": "Shanghai"
}
for k, v in #a {
"\(k)": {
nameLen: len(v)
value: v
}
}
```
- Loop for Slice
```cue
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
}
}
```
Note that we use `"\( _my-statement_ )"` for inline evaluation inside a string.
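A minimal illustration with hypothetical values:
```
name:     "vela"
greeting: "hello \(name)" // evaluates to "hello vela"
size:     "\(len(name))"  // expressions work too; this yields "4"
```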
## Import CUE Internal Packages
CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which can also be used in KubeVela.
Below is an example that uses `strings.Join` to concatenate a string list into one string.
```cue
import ("strings")
parameter: {
outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
}
output: {
spec: {
if len(parameter.outputs) > 0 {
_x: [ for _, v in parameter.outputs {
"\(v.ip) \(v.hostname)"
}]
message: "Visiting URL: " + strings.Join(_x, "")
}
}
}
```
## Import Kube Package
KubeVela automatically generates internal packages for all K8s resources by reading the K8s OpenAPI schemas from the
installed K8s cluster.
You can use these packages with the format `kube/<apiVersion>` in the CUE template of KubeVela, in the same way
as the CUE internal packages.
For example, `Deployment` can be used as:
```cue
import (
apps "kube/apps/v1"
)
parameter: {
name: string
}
output: apps.#Deployment
output: {
metadata: name: parameter.name
}
```
A Service can be used as below (importing the package with an alias is not necessary):
```cue
import ("kube/v1")
output: v1.#Service
output: {
metadata: {
"name": parameter.name
}
spec: type: "ClusterIP",
}
parameter: {
name: "myapp"
}
```
Even installed CRDs work:
```
import (
oam "kube/core.oam.dev/v1alpha2"
)
output: oam.#Application
output: {
metadata: {
"name": parameter.name
}
}
parameter: {
name: "myapp"
}
```

View File

@ -0,0 +1,368 @@
---
title: How-to
---
This section introduces how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned the [Definition CRD](../definition-and-templates) in KubeVela.
## Declare `ComponentDefinition`
Here is a CUE-based `ComponentDefinition` example which provides an abstraction for the stateless workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: stateless
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
}
}
```
In detail:
- `.spec.workload` is required to indicate the workload type of this component.
- `.spec.schematic.cue.template` is a CUE template, specifically:
* The `output` field defines the template for the abstraction.
* The `parameter` field defines the template parameters, i.e. the configurable properties exposed in the `Application` abstraction (a JSON schema will be automatically generated based on them).
Let's declare another component named `task`, i.e. an abstraction for run-to-completion workloads.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: task
annotations:
definition.oam.dev/description: "Describes jobs that run code or a script to completion."
spec:
workload:
definition:
apiVersion: batch/v1
kind: Job
schematic:
cue:
template: |
output: {
apiVersion: "batch/v1"
kind: "Job"
spec: {
parallelism: parameter.count
completions: parameter.count
template: spec: {
restartPolicy: parameter.restart
containers: [{
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
parameter: {
count: *1 | int
image: string
restart: *"Never" | string
cmd?: [...string]
}
```
Save the above `ComponentDefinition` objects to files and install them to your Kubernetes cluster via `$ kubectl apply -f stateless-def.yaml -f task-def.yaml`.
## Declare an `Application`
The `ComponentDefinition` can be instantiated in the `Application` abstraction as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: website
spec:
components:
- name: hello
type: stateless
properties:
image: crccheck/hello-world
name: mysvc
- name: countdown
type: task
properties:
image: centos:7
cmd:
- "bin/bash"
- "-c"
- "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
```
### Under The Hood
<details>
The above application resource will generate and manage the following Kubernetes resources in your target cluster, based on the `output` in the CUE template and the user input in the `Application` properties.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
... # skip tons of metadata info
spec:
template:
spec:
containers:
- name: mysvc
image: crccheck/hello-world
metadata:
labels:
app.oam.dev/component: mysvc
selector:
matchLabels:
app.oam.dev/component: mysvc
---
apiVersion: batch/v1
kind: Job
metadata:
name: countdown
... # skip tons of metadata info
spec:
parallelism: 1
completions: 1
template:
metadata:
name: countdown
spec:
containers:
- name: countdown
image: 'centos:7'
command:
- bin/bash
- '-c'
- for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
restartPolicy: Never
```
</details>
## CUE `Context`
KubeVela allows you to reference the runtime information of your application via the `context` keyword.
The most widely used context variables are the application name (`context.appName`) and the component name (`context.name`).
```cue
context: {
appName: string
name: string
}
```
For example, let's say you want to use the component name filled in by users as the container name in the workload instance:
```cue
parameter: {
image: string
}
output: {
...
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
...
}
```
> Note that the `context` information is auto-injected before resources are applied to the target cluster.
### Full available information in CUE `context`
| Context Variable | Description |
| :--: | :---------: |
| `context.appRevision` | The revision of the application |
| `context.appRevisionNum` | The revision number (`int` type) of the application, e.g., `context.appRevisionNum` will be `1` if `context.appRevision` is `app-v1` |
| `context.appName` | The name of the application |
| `context.name` | The name of the component of the application |
| `context.namespace` | The namespace of the application |
| `context.output` | The rendered workload API resource of the component; this is usually used in traits |
| `context.outputs.<resourceName>` | The rendered trait API resource of the component; this is usually used in traits |
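For example, a component template could stamp several of these context variables into the rendered resource; below is a minimal sketch:
```cue
output: {
    apiVersion: "v1"
    kind:       "ConfigMap"
    metadata: {
        name:      context.name + "-config"
        namespace: context.namespace
        labels: "app.oam.dev/app-name": context.appName
    }
}
```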
## Composition
It's common that a component definition is composed of multiple API resources, for example, a `webserver` component that is composed of a Deployment and a Service. CUE is a great solution to achieve this with simple primitives.
> Another approach to do composition in KubeVela is of course [using Helm](../helm/component).
## How-to
KubeVela requires you to define the template of the workload type in the `output` section, and leave all the other resource templates in the `outputs` section, with the format below:
```cue
outputs: <unique-name>:
<full template data>
```
> The reason for this requirement is that KubeVela needs to know it is currently rendering a workload, so it can do some "magic" like patching annotations/labels or other data during rendering.
Below is an example of the `webserver` definition:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webserver
annotations:
definition.oam.dev/description: "webserver is a combo of Deployment + Service"
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["env"] != _|_ {
env: parameter.env
}
if context["config"] != _|_ {
env: context.config
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}
}
}]
}
}
}
}
// an extra template
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector: {
"app.oam.dev/component": context.name
}
ports: [
{
port: parameter.port
targetPort: parameter.port
},
]
}
}
parameter: {
image: string
cmd?: [...string]
port: *80 | int
env?: [...{
name: string
value?: string
valueFrom?: {
secretKeyRef: {
name: string
key: string
}
}
}]
cpu?: string
}
```
The user could now declare an `Application` with it:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webserver-demo
namespace: default
spec:
components:
- name: hello-world
type: webserver
properties:
image: crccheck/hello-world
port: 8000
env:
- name: "foo"
value: "bar"
cpu: "100m"
```
It will generate and manage the following API resources in the target cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
hello-world-v1 1/1 1 1 15s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
```
## What's Next
Please check the [Learning CUE](./basic) documentation to learn why we support CUE as the first-class templating solution and for more details about using CUE efficiently.

View File

@ -0,0 +1,53 @@
---
title: Render Resource to Other Namespaces
---
In this section, we will introduce how to use a CUE template to create resources in a different namespace from the application.
By default, the `metadata.namespace` of a K8s resource in the CUE template is automatically filled with the same namespace as the application.
If you want to create K8s resources running in a specific namespace which is different from the application's, you can set the `metadata.namespace` field.
KubeVela will create the resources in the specified namespace, and create a resourceTracker object as the owner of those resources.
## Usage
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
spec:
definitionRef:
name: deployments.apps
schematic:
cue:
template: |
parameter: {
name: string
image: string
namespace: string // the parameter "namespace" indicates the resource may be located in a different namespace from the application
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
namespace: parameter.namespace
}
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
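A hypothetical `Application` using this definition could then pass the target namespace through `properties`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: backend
      type: worker
      properties:
        name: backend
        image: nginx
        namespace: my-namespace
```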

View File

@ -0,0 +1,517 @@
---
title: Patch Traits
---
**Patch** is a very common pattern of trait definitions, i.e. the app operators can amend/patch attributes of the component instance (normally the workload) to enable certain operational features such as sidecars or node affinity rules (and this should be done **before** the resources are applied to the target cluster).
This pattern is extremely useful when the component definition is provided by a third-party component provider (e.g. a software distributor), so app operators do not have the privilege to change its template.
> Note that even though a patch trait itself is defined in CUE, it can patch any component regardless of how its schematic is defined (i.e. CUE, Helm, or any other supported schematic approach).
Below is an example of a `node-affinity` trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "affinity specify node affinity and toleration"
name: node-affinity
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {
spec: template: spec: {
if parameter.affinity != _|_ {
affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
matchExpressions: [
for k, v in parameter.affinity {
key: k
operator: "In"
values: v
},
]}]
}
if parameter.tolerations != _|_ {
tolerations: [
for k, v in parameter.tolerations {
effect: "NoSchedule"
key: k
operator: "Equal"
value: v
}]
}
}
}
parameter: {
affinity?: [string]: [...string]
tolerations?: [string]: string
}
```
The patch trait above assumes the target component instance has the `spec.template.spec.affinity` field.
Hence, we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
Another important field is `podDisruptive`: this patch trait patches the pod template field,
so changes to any field of this trait will cause the pod to restart. We should set `podDisruptive` to true
to tell users that applying this trait will cause the pod to restart.
Now users can declare they want to add node affinity rules to the component instance as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
image: oamdev/testapp:v1
traits:
- type: "node-affinity"
properties:
affinity:
server-owner: ["owner1","owner2"]
resource-pool: ["pool1","pool2","pool3"]
tolerations:
resource-pool: "broken-pool1"
server-owner: "old-owner"
```
### Known Limitations
By default, a patch trait in KubeVela leverages the CUE `merge` operation. It has the following known constraints:
- It cannot handle conflicts (see the sketch after this list).
- For example, if a component instance has already been set with value `replicas=5`, then any patch trait that patches the `replicas` field will fail, i.e. you should not expose the `replicas` field in its component definition schematic.
- Array lists in the patch will be merged following the order of index. It cannot handle duplication of array list members. This can be fixed by the feature below.
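The conflict limitation can be reproduced in plain CUE, since unification cannot override a value that is already concrete; below is a minimal sketch:
```cue
deploy: spec: replicas: 5
// Unifying a different concrete value evaluates to _|_ (bottom), i.e. an error:
deploy: spec: replicas: 3
```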
### Strategy Patch
Strategy patch works by adding annotations, and supports the following two ways.
> Note that this is not a standard CUE feature; KubeVela enhanced CUE for this case.
#### 1. With `+patchKey=<key_name>` annotation
This is useful for patching array lists. The merge logic of two array lists will not follow the CUE behavior. Instead, it will treat the list as an object and use a strategic merge approach:
- if a duplicated key is found, the patch data will be merged with the existing values;
- if no duplication is found, the patch will be appended into the array list.
An example of a strategy patch trait with `patchKey` looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
In the above example we defined `patchKey` as `name`, which is the parameter key of the container name. In this case, if the workload doesn't have a container with the same name, it will be appended as a sidecar container into the `spec.template.spec.containers` array list. If the workload already has a container with the same name as this `sidecar` trait, then a merge operation will happen instead of an append (which would otherwise lead to duplicated containers).
If `patch` and `outputs` both exist in one trait definition, the `patch` operation will be handled first, and then the `outputs` will be rendered.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "expose the app"
name: expose
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {spec: template: metadata: labels: app: context.name}
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: name: context.name
spec: {
selector: app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
},
]
}
}
parameter: {
http: [string]: int
}
```
So the above trait, which attaches a Service to the given component instance, will first patch a corresponding label onto the workload and then render the Service resource based on the template in `outputs`.
#### 2. With `+patchStrategy=retainKeys` annotation
This is similar to the [retainKeys](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy) strategy in the K8s strategic merge patch.
In scenarios where the entire object needs to be replaced, the retainKeys strategy is the best choice. An example follows.
Assume the Deployment below is the base resource:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: retainkeys-demo
spec:
selector:
matchLabels:
app: nginx
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 30%
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: retainkeys-demo-ctr
image: nginx
```
Now, to replace the rollingUpdate strategy with a new strategy, you can write the patch trait like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: recreate
spec:
appliesToWorkloads:
- deployments.apps
extension:
template: |-
patch: {
spec: {
// +patchStrategy=retainKeys
strategy: type: "Recreate"
}
}
```
Then the base resource becomes as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: retainkeys-demo
spec:
selector:
matchLabels:
app: nginx
strategy:
type: Recreate
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: retainkeys-demo-ctr
image: nginx
```
## More Use Cases of Patch Trait
The patch trait is in general pretty useful for separating operational concerns from the component definition; here are some more examples.
### Add Labels
For example, patch a common label (virtual group) to the component instance.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Add virtual group labels"
name: virtualgroup
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {
spec: template: {
metadata: labels: {
if parameter.scope == "namespace" {
"app.namespace.virtual.group": parameter.group
}
if parameter.scope == "cluster" {
"app.cluster.virtual.group": parameter.group
}
}
}
}
parameter: {
group: *"default" | string
scope: *"namespace" | string
}
```
Then it could be used like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
traits:
- type: virtualgroup
properties:
group: "my-group1"
scope: "cluster"
```
### Add Annotations
Similar to common labels, you could also patch the component instance with annotations. The annotation value should be a JSON string.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Specify auto scale by annotation"
name: kautoscale
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: false
schematic:
cue:
template: |
import "encoding/json"
patch: {
metadata: annotations: {
"my.custom.autoscale.annotation": json.Marshal({
"minReplicas": parameter.min
"maxReplicas": parameter.max
})
}
}
parameter: {
min: *1 | int
max: *3 | int
}
```
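It could then be attached to a component like below (the values are hypothetical):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
  ...
  traits:
    - type: kautoscale
      properties:
        min: 2
        max: 5
```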
### Add Pod Environments
Injecting system environment variables into a Pod is also a very common use case.
> This case relies on the strategy merge patch, so don't forget to add `+patchKey=name` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add env into your pods"
name: env
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
parameter: {
env: [string]: string
}
```
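A hypothetical usage with placeholder environment values:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
  ...
  traits:
    - type: env
      properties:
        env:
          DB_HOST: "mysql.default.svc" # placeholder value
          LOG_LEVEL: "debug"
```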
### Inject `ServiceAccount` Based on External Auth Service
In this example, the service account is dynamically requested from an authentication service and patched into the workload.
This example puts a UID token in the HTTP header, but you can also use the request body if you prefer.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "dynamically specify service account"
name: service-account
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
processing: {
output: {
credentials?: string
}
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
header: {
"authorization.token": parameter.uidtoken
}
}
}
}
patch: {
spec: template: spec: serviceAccountName: processing.output.credentials
}
parameter: {
uidtoken: string
serviceURL: string
}
```
The `processing.http` section is an advanced feature that allows a trait definition to send an HTTP request while rendering the resource. Please refer to the [Execute HTTP Request in Trait Definition](#Processing-Trait) section for more details.
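A hypothetical usage, where `uidtoken` and `serviceURL` are placeholders for your own authentication service:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
  ...
  traits:
    - type: service-account
      properties:
        uidtoken: "example-uid-token"
        serviceURL: "https://auth.example.com/api/v1/serviceaccount"
```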
### Add `InitContainer`
[`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) is useful to pre-define operations in an image and run them before the app container starts.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add an init container and use shared volume with pod"
name: init-container
spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.appMountPath
}]
}]
initContainers: [{
name: parameter.name
image: parameter.image
if parameter.command != _|_ {
command: parameter.command
}
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}]
// +patchKey=name
volumes: [{
name: parameter.mountName
emptyDir: {}
}]
}
}
parameter: {
name: string
image: string
command?: [...string]
mountName: *"workdir" | string
appMountPath: string
initMountPath: string
}
```
The usage could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
image: oamdev/testapp:v1
traits:
- type: "init-container"
properties:
name: "install-container"
image: "busybox"
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://info.cern.ch
mountName: "workdir"
appMountPath: "/usr/share/nginx/html"
initMountPath: "/work-dir"
```

View File

@ -0,0 +1,137 @@
---
title: Status Write Back
---
This documentation will explain how to achieve status write back by using CUE templates in definition objects.
## Health Check
The spec for health check is `spec.status.healthPolicy`; it is the same for both workload types and traits.
If not defined, the health result will always be `true`.
The keyword in CUE is `isHealth`; the result of the CUE expression must be of `bool` type.
The KubeVela runtime will evaluate the CUE expression periodically until it becomes healthy. Each time, the controller will fetch all the Kubernetes resources and fill them into the `context` field.
So the context will contain following information:
```cue
context:{
name: <component name>
appName: <app name>
output: <K8s workload resource>
outputs: {
<resource1>: <K8s trait resource1>
<resource2>: <K8s trait resource2>
}
}
```
Traits will not have `context.output`; the other fields are the same.
An example of a health check looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
spec:
status:
healthPolicy: |
isHealth: (context.output.status.readyReplicas > 0) && (context.output.status.readyReplicas == context.output.status.replicas)
...
```
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
spec:
status:
healthPolicy: |
isHealth: len(context.outputs.service.spec.clusterIP) > 0
...
```
> Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example.
The health check result will be recorded into the `Application` resource.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
components:
- name: myweb
type: worker
properties:
cmd:
- sleep
- "1000"
enemies: alien
image: busybox
lives: "3"
traits:
- type: ingress
properties:
domain: www.example.com
http:
/: 80
status:
...
services:
- healthy: true
message: "type: busybox,\t enemies:alien"
name: myweb
traits:
- healthy: true
message: 'Visiting URL: www.example.com, IP: 47.111.233.220'
type: ingress
status: running
```
## Custom Status
The spec for custom status is `spec.status.customStatus`; it is the same for both workload types and traits.
The keyword in CUE is `message`; the result of the CUE expression must be of `string` type.
The custom status has the same mechanism as the health check.
The Application CRD controller will evaluate the CUE expression after the health check succeeds.
The context will contain following information:
```cue
context:{
name: <component name>
appName: <app name>
output: <K8s workload resource>
outputs: {
<resource1>: <K8s trait resource1>
<resource2>: <K8s trait resource2>
}
}
```
Traits will not have `context.output`; the other fields are the same.
Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
spec:
status:
customStatus: |-
message: "type: " + context.output.spec.template.spec.containers[0].image + ",\t enemies:" + context.outputs.gameconfig.data.enemies
...
```
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
spec:
status:
customStatus: |-
message: "type: "+ context.outputs.service.spec.type +",\t clusterIP:"+ context.outputs.service.spec.clusterIP+",\t ports:"+ "\(context.outputs.service.spec.ports[0].port)"+",\t domain"+context.outputs.ingress.spec.rules[0].host
...
```
