sync commit 3800a700b012954f6c07d79d3838d826df7f4064 from kubevela-v1.0.3
This commit is contained in:
parent
285478220d
commit
00fb222258
|
|
@ -1,27 +0,0 @@
|
|||

|
||||
|
||||
*Make shipping applications more enjoyable.*
|
||||
|
||||
# KubeVela
|
||||
|
||||
KubeVela is a modern application engine that adapts to your application's needs, not the other way around.
|
||||
|
||||
## Community
|
||||
|
||||
- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel
|
||||
- Gitter: [Discussion](https://gitter.im/oam-dev/community)
|
||||
- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs)
|
||||
|
||||
## Installation
|
||||
|
||||
Installation guide is available on [this section](./install).
|
||||
|
||||
## Quick Start
|
||||
|
||||
Quick start is available on [this section](./quick-start).
|
||||
|
||||
## Contributing
|
||||
Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela.
|
||||
|
||||
## Code of Conduct
|
||||
KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
|
||||
|
|
@ -1,126 +0,0 @@
|
|||
---
|
||||
title: Advanced Topics for Installation
|
||||
---
|
||||
|
||||
## Install KubeVela with cert-manager
|
||||
|
||||
KubeVela can use cert-manager generate certs for your application if it's available. Note that you need to install cert-manager **before** the KubeVela chart.
|
||||
|
||||
```shell script
|
||||
helm repo add jetstack https://charts.jetstack.io
|
||||
helm repo update
|
||||
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --create-namespace --set installCRDs=true
|
||||
```
|
||||
|
||||
Install kubevela with enabled certmanager:
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core
|
||||
```
|
||||
|
||||
## Install Pre-release
|
||||
|
||||
Add flag `--devel` in command `helm search` to choose a pre-release
|
||||
version in format `<next_version>-rc-master`. It means a release candidate version build on `master` branch,
|
||||
such as `0.4.0-rc-master`.
|
||||
|
||||
```shell script
|
||||
helm search repo kubevela/vela-core -l --devel
|
||||
```
|
||||
```console
|
||||
NAME CHART VERSION APP VERSION DESCRIPTION
|
||||
kubevela/vela-core 0.4.0-rc-master 0.4.0-rc-master A Helm chart for KubeVela core
|
||||
kubevela/vela-core 0.3.2 0.3.2 A Helm chart for KubeVela core
|
||||
kubevela/vela-core 0.3.1 0.3.1 A Helm chart for KubeVela core
|
||||
```
|
||||
|
||||
And try the following command to install it.
|
||||
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system kubevela kubevela/vela-core --version <next_version>-rc-master
|
||||
```
|
||||
```console
|
||||
NAME: kubevela
|
||||
LAST DEPLOYED: Thu Apr 1 19:41:30 2021
|
||||
NAMESPACE: vela-system
|
||||
STATUS: deployed
|
||||
REVISION: 1
|
||||
NOTES:
|
||||
Welcome to use the KubeVela! Enjoy your shipping application journey!
|
||||
```
|
||||
|
||||
## Upgrade
|
||||
|
||||
### Step 1. Update Helm repo
|
||||
|
||||
```shell
|
||||
helm repo update
|
||||
```
|
||||
|
||||
you can get the new version kubevela chart by run:
|
||||
|
||||
```shell
|
||||
helm search repo kubevela/vela-core -l
|
||||
```
|
||||
|
||||
### Step 2. Upgrade KubeVela CRDs
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applications.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_approllouts.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
|
||||
```
|
||||
|
||||
> Tips: If you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": filed is immutable`. Please delete the CRD which reports error and re-apply the kubevela crds.
|
||||
|
||||
```shell
|
||||
kubectl delete crd \
|
||||
scopedefinitions.core.oam.dev \
|
||||
traitdefinitions.core.oam.dev \
|
||||
workloaddefinitions.core.oam.dev
|
||||
```
|
||||
|
||||
### Step 3. Upgrade KubeVela Helm chart
|
||||
|
||||
```shell
|
||||
helm upgrade --install --create-namespace --namespace vela-system kubevela kubevela/vela-core --version <the_new_version>
|
||||
```
|
||||
|
||||
## Clean Up
|
||||
|
||||
Run:
|
||||
|
||||
```shell script
|
||||
helm uninstall -n vela-system kubevela
|
||||
rm -r ~/.vela
|
||||
```
|
||||
|
||||
This will uninstall KubeVela server component and its dependency components.
|
||||
This also cleans up local CLI cache.
|
||||
|
||||
Then clean up CRDs (CRDs are not removed via helm by default):
|
||||
|
||||
```shell script
|
||||
kubectl delete crd \
|
||||
appdeployments.core.oam.dev \
|
||||
applicationconfigurations.core.oam.dev \
|
||||
applicationcontexts.core.oam.dev \
|
||||
applicationdeployments.core.oam.dev \
|
||||
applicationrevisions.core.oam.dev \
|
||||
applications.core.oam.dev \
|
||||
approllouts.core.oam.dev \
|
||||
componentdefinitions.core.oam.dev \
|
||||
components.core.oam.dev \
|
||||
containerizedworkloads.core.oam.dev \
|
||||
healthscopes.core.oam.dev \
|
||||
manualscalertraits.core.oam.dev \
|
||||
podspecworkloads.standard.oam.dev \
|
||||
scopedefinitions.core.oam.dev \
|
||||
traitdefinitions.core.oam.dev \
|
||||
workloaddefinitions.core.oam.dev
|
||||
```
|
||||
|
|
@ -1,122 +0,0 @@
|
|||
---
|
||||
title: Deploy Application
|
||||
---
|
||||
|
||||
This documentation will walk through a full application deployment workflow on KubeVela platform.
|
||||
|
||||
## Introduction
|
||||
|
||||
KubeVela is a fully self-service platform. All capabilities an application deployment needs are maintained as building block modules in this platform. Specifically:
|
||||
- Components - deployable/provisionable entities that composed your application deployment
|
||||
- e.g. a Kubernetes workload, a MySQL database, or a AWS OSS bucket
|
||||
- Traits - attachable operational features per your needs
|
||||
- e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies etc
|
||||
|
||||
## Step 1: Check Capabilities in the Platform
|
||||
|
||||
As user of this platform, you could check available components you can deploy, and available traits you can attach.
|
||||
|
||||
```console
|
||||
kubectl get componentdefinitions -n vela-system
|
||||
NAME WORKLOAD-KIND DESCRIPTION AGE
|
||||
task Job Describes jobs that run code or a script to completion. 5h52m
|
||||
webservice Deployment Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers. 5h52m
|
||||
worker Deployment Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic. 5h52m
|
||||
```
|
||||
|
||||
```console
|
||||
kubectl get traitdefinitions -n vela-system
|
||||
NAME APPLIES-TO DESCRIPTION AGE
|
||||
ingress ["webservice","worker"] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage. 6h8m
|
||||
scaler ["webservice","worker"] Configures replicas for your service. 6h8m
|
||||
```
|
||||
|
||||
To show the specification for given capability, you could use `vela` CLI. For example, `vela show webservice` will return full schema of *Web Service* component and `vela show webservice --web` will open its capability reference documentation in your browser.
|
||||
|
||||
## Step 2: Design and Deploy Application
|
||||
|
||||
In KubeVela, `Application` is the main API to define your application deployment based on available capabilities. Every `Application` could contain multiple components, each of them can be attached with a number of traits per needs.
|
||||
|
||||
Now let's define an application composed by *Web Service* and *Worker* components.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: website
|
||||
spec:
|
||||
components:
|
||||
- name: frontend
|
||||
type: webservice
|
||||
properties:
|
||||
image: nginx
|
||||
traits:
|
||||
- type: autoscaler
|
||||
properties:
|
||||
min: 1
|
||||
max: 10
|
||||
cpuPercent: 60
|
||||
- type: sidecar
|
||||
properties:
|
||||
name: "sidecar-test"
|
||||
image: "fluentd"
|
||||
- name: backend
|
||||
type: worker
|
||||
properties:
|
||||
image: busybox
|
||||
cmd:
|
||||
- sleep
|
||||
- '1000'
|
||||
```
|
||||
|
||||
In this sample, we also attached `sidecar` and `autoscaler` traits to the `frontend` component. So after deployed, the `frontend` component instance (a Kubernetes Deployment workload) will be automatically injected with a `fluentd` sidecar and automatically scale from 1-100 replicas based on CPU usage.
|
||||
|
||||
### Deploy the Application
|
||||
|
||||
Apply application YAML to Kubernetes, you'll get the application becomes `running`.
|
||||
|
||||
```shell
|
||||
$ kubectl get application -o yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: website
|
||||
....
|
||||
status:
|
||||
components:
|
||||
- apiVersion: core.oam.dev/v1alpha2
|
||||
kind: Component
|
||||
name: backend
|
||||
- apiVersion: core.oam.dev/v1alpha2
|
||||
kind: Component
|
||||
name: frontend
|
||||
....
|
||||
status: running
|
||||
|
||||
```
|
||||
|
||||
### Verify the Deployment
|
||||
|
||||
You could see a Deployment named `frontend` is running, with port exposed, and with a container `fluentd` injected.
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy frontend
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
frontend 1/1 1 1 100m
|
||||
```
|
||||
|
||||
Another Deployment is also running named `backend`.
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy backend
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
backend 1/1 1 1 100m
|
||||
```
|
||||
|
||||
An HPA was also created by the `autoscaler` trait.
|
||||
|
||||
```shell
|
||||
$ kubectl get HorizontalPodAutoscaler frontend
|
||||
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
|
||||
frontend Deployment/frontend <unknown>/50% 1 10 1 101m
|
||||
```
|
||||
|
|
@ -1,41 +0,0 @@
|
|||
---
|
||||
title: vela
|
||||
---
|
||||
|
||||
|
||||
|
||||
```
|
||||
vela [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
-h, --help help for vela
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
|
||||
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
|
||||
* [vela config](vela_config) - Manage configurations
|
||||
* [vela delete](vela_delete) - Delete an application
|
||||
* [vela env](vela_env) - Manage environments
|
||||
* [vela exec](vela_exec) - Execute command in a container
|
||||
* [vela export](vela_export) - Export deploy manifests from appfile
|
||||
* [vela help](vela_help) - Help about any command
|
||||
* [vela init](vela_init) - Create scaffold for an application
|
||||
* [vela logs](vela_logs) - Tail logs for application
|
||||
* [vela ls](vela_ls) - List applications
|
||||
* [vela port-forward](vela_port-forward) - Forward local ports to services in an application
|
||||
* [vela show](vela_show) - Show the reference doc for a workload type or trait
|
||||
* [vela status](vela_status) - Show status of an application
|
||||
* [vela system](vela_system) - System management utilities
|
||||
* [vela template](vela_template) - Manage templates
|
||||
* [vela traits](vela_traits) - List traits
|
||||
* [vela up](vela_up) - Apply an appfile
|
||||
* [vela version](vela_version) - Prints out build version information
|
||||
* [vela workloads](vela_workloads) - List workloads
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela cap
|
||||
---
|
||||
|
||||
Manage capability centers and installing/uninstalling capabilities
|
||||
|
||||
### Synopsis
|
||||
|
||||
Manage capability centers and installing/uninstalling capabilities
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for cap
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela cap center](vela_cap_center) - Manage Capability Center
|
||||
* [vela cap install](vela_cap_install) - Install capability into cluster
|
||||
* [vela cap ls](vela_cap_ls) - List capabilities from cap-center
|
||||
* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela cap center
|
||||
---
|
||||
|
||||
Manage Capability Center
|
||||
|
||||
### Synopsis
|
||||
|
||||
Manage Capability Center with config, sync, list
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for center
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
|
||||
* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities)
|
||||
* [vela cap center ls](vela_cap_center_ls) - List all capability centers
|
||||
* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center
|
||||
* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela cap center config
|
||||
---
|
||||
|
||||
Configure (add if not exist) a capability center, default is local (built-in capabilities)
|
||||
|
||||
### Synopsis
|
||||
|
||||
Configure (add if not exist) a capability center, default is local (built-in capabilities)
|
||||
|
||||
```
|
||||
vela cap center config <centerName> <centerURL> [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap center config mycenter https://github.com/oam-dev/catalog/cap-center
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for config
|
||||
-t, --token string Github Repo token
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap center](vela_cap_center) - Manage Capability Center
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela cap center ls
|
||||
---
|
||||
|
||||
List all capability centers
|
||||
|
||||
### Synopsis
|
||||
|
||||
List all configured capability centers
|
||||
|
||||
```
|
||||
vela cap center ls [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap center ls
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for ls
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap center](vela_cap_center) - Manage Capability Center
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela cap center remove
|
||||
---
|
||||
|
||||
Remove specified capability center
|
||||
|
||||
### Synopsis
|
||||
|
||||
Remove specified capability center
|
||||
|
||||
```
|
||||
vela cap center remove <centerName> [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap center remove mycenter
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for remove
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap center](vela_cap_center) - Manage Capability Center
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela cap center sync
|
||||
---
|
||||
|
||||
Sync capabilities from remote center, default to sync all centers
|
||||
|
||||
### Synopsis
|
||||
|
||||
Sync capabilities from remote center, default to sync all centers
|
||||
|
||||
```
|
||||
vela cap center sync [centerName] [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap center sync mycenter
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for sync
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap center](vela_cap_center) - Manage Capability Center
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela cap install
|
||||
---
|
||||
|
||||
Install capability into cluster
|
||||
|
||||
### Synopsis
|
||||
|
||||
Install capability into cluster
|
||||
|
||||
```
|
||||
vela cap install <center>/<name> [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap install mycenter/route
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for install
|
||||
-t, --token string Github Repo token
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela cap ls
|
||||
---
|
||||
|
||||
List capabilities from cap-center
|
||||
|
||||
### Synopsis
|
||||
|
||||
List capabilities from cap-center
|
||||
|
||||
```
|
||||
vela cap ls [cap-center] [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap ls
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for ls
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela cap uninstall
|
||||
---
|
||||
|
||||
Uninstall capability from cluster
|
||||
|
||||
### Synopsis
|
||||
|
||||
Uninstall capability from cluster
|
||||
|
||||
```
|
||||
vela cap uninstall <name> [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela cap uninstall route
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for uninstall
|
||||
-t, --token string Github Repo token
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
---
|
||||
title: vela completion
|
||||
---
|
||||
|
||||
Output shell completion code for the specified shell (bash or zsh)
|
||||
|
||||
### Synopsis
|
||||
|
||||
Output shell completion code for the specified shell (bash or zsh).
|
||||
The shell code must be evaluated to provide interactive completion
|
||||
of vela commands.
|
||||
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for completion
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash
|
||||
* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,41 +0,0 @@
|
|||
---
|
||||
title: vela completion bash
|
||||
---
|
||||
|
||||
generate autocompletions script for bash
|
||||
|
||||
### Synopsis
|
||||
|
||||
Generate the autocompletion script for Vela for the bash shell.
|
||||
|
||||
To load completions in your current shell session:
|
||||
$ source <(vela completion bash)
|
||||
|
||||
To load completions for every new session, execute once:
|
||||
Linux:
|
||||
$ vela completion bash > /etc/bash_completion.d/vela
|
||||
MacOS:
|
||||
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
|
||||
|
||||
|
||||
```
|
||||
vela completion bash
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for bash
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela completion zsh
|
||||
---
|
||||
|
||||
generate autocompletions script for zsh
|
||||
|
||||
### Synopsis
|
||||
|
||||
Generate the autocompletion script for Vela for the zsh shell.
|
||||
|
||||
To load completions in your current shell session:
|
||||
$ source <(vela completion zsh)
|
||||
|
||||
To load completions for every new session, execute once:
|
||||
$ vela completion zsh > "${fpath[1]}/_vela"
|
||||
|
||||
|
||||
```
|
||||
vela completion zsh
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for zsh
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela components
|
||||
---
|
||||
|
||||
List components
|
||||
|
||||
### Synopsis
|
||||
|
||||
List components
|
||||
|
||||
```
|
||||
vela components
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela components
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for workloads
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela config
|
||||
---
|
||||
|
||||
Manage configurations
|
||||
|
||||
### Synopsis
|
||||
|
||||
Manage configurations
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for config
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela config del](vela_config_del) - Delete config
|
||||
* [vela config get](vela_config_get) - Get data for a config
|
||||
* [vela config ls](vela_config_ls) - List configs
|
||||
* [vela config set](vela_config_set) - Set data for a config
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela config del
|
||||
---
|
||||
|
||||
Delete config
|
||||
|
||||
### Synopsis
|
||||
|
||||
Delete config
|
||||
|
||||
```
|
||||
vela config del
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela config del <config-name>
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for del
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela config](vela_config) - Manage configurations
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela config get
|
||||
---
|
||||
|
||||
Get data for a config
|
||||
|
||||
### Synopsis
|
||||
|
||||
Get data for a config
|
||||
|
||||
```
|
||||
vela config get
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela config get <config-name>
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for get
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela config](vela_config) - Manage configurations
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela config ls
|
||||
---
|
||||
|
||||
List configs
|
||||
|
||||
### Synopsis
|
||||
|
||||
List all configs
|
||||
|
||||
```
|
||||
vela config ls
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela config ls
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for ls
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela config](vela_config) - Manage configurations
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela config set
|
||||
---
|
||||
|
||||
Set data for a config
|
||||
|
||||
### Synopsis
|
||||
|
||||
Set data for a config
|
||||
|
||||
```
|
||||
vela config set
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela config set <config-name> KEY=VALUE K2=V2
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for set
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela config](vela_config) - Manage configurations
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela delete
|
||||
---
|
||||
|
||||
Delete an application
|
||||
|
||||
### Synopsis
|
||||
|
||||
Delete an application
|
||||
|
||||
```
|
||||
vela delete APP_NAME
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela delete frontend
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for delete
|
||||
--svc string delete only the specified service in this app
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela env
|
||||
---
|
||||
|
||||
Manage environments
|
||||
|
||||
### Synopsis
|
||||
|
||||
Manage environments
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for env
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela env delete](vela_env_delete) - Delete environment
|
||||
* [vela env init](vela_env_init) - Create environments
|
||||
* [vela env ls](vela_env_ls) - List environments
|
||||
* [vela env set](vela_env_set) - Set an environment
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela env delete
|
||||
---
|
||||
|
||||
Delete environment
|
||||
|
||||
### Synopsis
|
||||
|
||||
Delete environment
|
||||
|
||||
```
|
||||
vela env delete
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela env delete test
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for delete
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela env](vela_env) - Manage environments
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,40 +0,0 @@
|
|||
---
|
||||
title: vela env init
|
||||
---
|
||||
|
||||
Create environments
|
||||
|
||||
### Synopsis
|
||||
|
||||
Create environment and set the currently using environment
|
||||
|
||||
```
|
||||
vela env init <envName>
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela env init test --namespace test --email my@email.com
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--domain string specify domain your applications
|
||||
--email string specify email for production TLS Certificate notification
|
||||
-h, --help help for init
|
||||
--namespace string specify K8s namespace for env
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela env](vela_env) - Manage environments
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela env ls
|
||||
---
|
||||
|
||||
List environments
|
||||
|
||||
### Synopsis
|
||||
|
||||
List all environments
|
||||
|
||||
```
|
||||
vela env ls
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela env ls [env-name]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for ls
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela env](vela_env) - Manage environments
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela env set
|
||||
---
|
||||
|
||||
Set an environment
|
||||
|
||||
### Synopsis
|
||||
|
||||
Set an environment as the current using one
|
||||
|
||||
```
|
||||
vela env set
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela env set test
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for set
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela env](vela_env) - Manage environments
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,35 +0,0 @@
|
|||
---
|
||||
title: vela exec
|
||||
---
|
||||
|
||||
Execute command in a container
|
||||
|
||||
### Synopsis
|
||||
|
||||
Execute command in a container
|
||||
|
||||
```
|
||||
vela exec [flags] APP_NAME -- COMMAND [args...]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for exec
|
||||
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
|
||||
-i, --stdin Pass stdin to the container (default true)
|
||||
-s, --svc string service name
|
||||
-t, --tty Stdin is a TTY (default true)
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
---
|
||||
title: vela export
|
||||
---
|
||||
|
||||
Export deploy manifests from appfile
|
||||
|
||||
### Synopsis
|
||||
|
||||
Export deploy manifests from appfile
|
||||
|
||||
```
|
||||
vela export
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-f, -- string specify file path for appfile
|
||||
-h, --help help for export
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,27 +0,0 @@
|
|||
---
|
||||
title: vela help
|
||||
---
|
||||
|
||||
Help about any command
|
||||
|
||||
```
|
||||
vela help [command]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for help
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela init
|
||||
---
|
||||
|
||||
Create scaffold for an application
|
||||
|
||||
### Synopsis
|
||||
|
||||
Create scaffold for an application
|
||||
|
||||
```
|
||||
vela init
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela init
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for init
|
||||
--render-only Rendering vela.yaml in current dir and do not deploy
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
---
|
||||
title: vela logs
|
||||
---
|
||||
|
||||
Tail logs for application
|
||||
|
||||
### Synopsis
|
||||
|
||||
Tail logs for application
|
||||
|
||||
```
|
||||
vela logs [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for logs
|
||||
-o, --output string output format for logs, support: [default, raw, json] (default "default")
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela ls
|
||||
---
|
||||
|
||||
List applications
|
||||
|
||||
### Synopsis
|
||||
|
||||
List all applications in cluster
|
||||
|
||||
```
|
||||
vela ls
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela ls
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for ls
|
||||
-n, --namespace string specify the namespace the application want to list, default is the current env namespace
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,40 +0,0 @@
|
|||
---
|
||||
title: vela port-forward
|
||||
---
|
||||
|
||||
Forward local ports to services in an application
|
||||
|
||||
### Synopsis
|
||||
|
||||
Forward local ports to services in an application
|
||||
|
||||
```
|
||||
vela port-forward APP_NAME [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--address strings Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, vela will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind. (default [localhost])
|
||||
-h, --help help for port-forward
|
||||
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
|
||||
--route forward ports from route trait service
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela show
|
||||
---
|
||||
|
||||
Show the reference doc for a workload type or trait
|
||||
|
||||
### Synopsis
|
||||
|
||||
Show the reference doc for a workload type or trait
|
||||
|
||||
```
|
||||
vela show [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
show webservice
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for show
|
||||
--web start web doc site
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela status
|
||||
---
|
||||
|
||||
Show status of an application
|
||||
|
||||
### Synopsis
|
||||
|
||||
Show status of an application, including workloads and traits of each service.
|
||||
|
||||
```
|
||||
vela status APP_NAME [flags]
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela status APP_NAME
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for status
|
||||
-s, --svc string service name
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,29 +0,0 @@
|
|||
---
|
||||
title: vela system
|
||||
---
|
||||
|
||||
System management utilities
|
||||
|
||||
### Synopsis
|
||||
|
||||
System management utilities
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for system
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the conversion result to stdout
|
||||
* [vela system info](vela_system_info) - Show vela client and cluster chartPath
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: vela system dry-run
|
||||
---
|
||||
|
||||
Dry Run an application, and output the conversion result to stdout
|
||||
|
||||
### Synopsis
|
||||
|
||||
Dry Run an application, and output the conversion result to stdout
|
||||
|
||||
```
|
||||
vela system dry-run
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela dry-run
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-f, --file string application file name (default "./app.yaml")
|
||||
-h, --help help for dry-run
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela system](vela_system) - System management utilities
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela system info
|
||||
---
|
||||
|
||||
Show vela client and cluster chartPath
|
||||
|
||||
### Synopsis
|
||||
|
||||
Show vela client and cluster chartPath
|
||||
|
||||
```
|
||||
vela system info [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for info
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela system](vela_system) - System management utilities
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,28 +0,0 @@
|
|||
---
|
||||
title: vela template
|
||||
---
|
||||
|
||||
Manage templates
|
||||
|
||||
### Synopsis
|
||||
|
||||
Manage templates
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for template
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
* [vela template context](vela_template_context) - Show context parameters
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela template context
|
||||
---
|
||||
|
||||
Show context parameters
|
||||
|
||||
### Synopsis
|
||||
|
||||
Show context parameter
|
||||
|
||||
```
|
||||
vela template context
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela template context
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for context
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela template](vela_template) - Manage templates
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela traits
|
||||
---
|
||||
|
||||
List traits
|
||||
|
||||
### Synopsis
|
||||
|
||||
List traits
|
||||
|
||||
```
|
||||
vela traits
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela traits
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for traits
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
---
|
||||
title: vela up
|
||||
---
|
||||
|
||||
Apply an appfile
|
||||
|
||||
### Synopsis
|
||||
|
||||
Apply an appfile
|
||||
|
||||
```
|
||||
vela up
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-f, -- string specify file path for appfile
|
||||
-h, --help help for up
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: vela version
|
||||
---
|
||||
|
||||
Prints out build version information
|
||||
|
||||
### Synopsis
|
||||
|
||||
Prints out build version information
|
||||
|
||||
```
|
||||
vela version [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for version
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,37 +0,0 @@
|
|||
---
|
||||
title: vela workloads
|
||||
---
|
||||
|
||||
List workloads
|
||||
|
||||
### Synopsis
|
||||
|
||||
List workloads
|
||||
|
||||
```
|
||||
vela workloads
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
```
|
||||
vela workloads
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for workloads
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
-e, --env string specify environment name for application
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [vela](vela) -
|
||||
|
||||
###### Auto generated by spf13/cobra on 20-Mar-2021
|
||||
|
|
@ -1,104 +0,0 @@
|
|||
---
|
||||
title: How it Works
|
||||
---
|
||||
|
||||
*"KubeVela is a scalable way to create PaaS-like experience on Kubernetes"*
|
||||
|
||||
In this documentation, we will explain the core idea of KubeVela and clarify some technical terms that are widely used in the project.
|
||||
|
||||
## Overview
|
||||
|
||||
First of all, KubeVela introduces a workflow with separate of concerns as below:
|
||||
- **Platform Team**
|
||||
- Defining templates for deployment environments and reusable capability modules to compose an application, and registering them into the cluster.
|
||||
- **End Users**
|
||||
- Choose a deployment environment, model and assemble the app with available modules, and deploy the app to target environment.
|
||||
|
||||
Below is how this workflow looks like:
|
||||
|
||||

|
||||
|
||||
This template based workflow make it possible for platform team enforce best practices and deployment confidence with a set of Kubernetes CRDs, and give end users a *PaaS-like* experience (*i.e. app-centric, higher level abstractions, self-service operations etc*) by natural.
|
||||
|
||||

|
||||
|
||||
Below are the core concepts in KubeVela that make this happen.
|
||||
|
||||
## `Application`
|
||||
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application deployment with simplified primitives.
|
||||
|
||||
In application delivery platform, having an "application" concept is important to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, it provides a much simpler path for on-boarding Kubernetes capabilities to application delivery process without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
|
||||
|
||||
### Example
|
||||
|
||||
An example of `website` application with two components (i.e. `frontend` and `backend`) could be modeled as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: website
|
||||
spec:
|
||||
components:
|
||||
- name: backend
|
||||
type: worker
|
||||
properties:
|
||||
image: busybox
|
||||
cmd:
|
||||
- sleep
|
||||
- '1000'
|
||||
- name: frontend
|
||||
type: webservice
|
||||
properties:
|
||||
image: nginx
|
||||
traits:
|
||||
- type: autoscaler
|
||||
properties:
|
||||
min: 1
|
||||
max: 10
|
||||
- type: sidecar
|
||||
properties:
|
||||
name: "sidecar-test"
|
||||
image: "fluentd"
|
||||
```
|
||||
|
||||
## Building the Abstraction
|
||||
|
||||
The `Application` resource in KubeVela is a LEGO-style object and does not even have fixed schema. Instead, it is composed by building blocks (app components and traits etc.) that allow you to on-board platform capabilities to this application definition via your own abstractions.
|
||||
|
||||
The building blocks to abstraction and model platform capabilities named `ComponentDefinition` and `TraitDefinition`.
|
||||
|
||||
### ComponentDefinition
|
||||
|
||||
`ComponentDefinition` is a pre-defined *template* for the deployable workload. It contains template, parametering and workload characteristic information as a declarative API resource.
|
||||
|
||||
Hence, the `Application` abstraction essentially declares how the user want to **instantiate** given component definitions in target cluster. Specifically, the `.type` field references the name of installed `ComponentDefinition` and `.properties` are the user set values to instantiate it.
|
||||
|
||||
Some typical component definitions are *Long Running Web Service*, *One-time Off Task* or *Redis Database*. All component definitions expected to be pre-installed in the platform, or provided by component providers such as 3rd-party software vendors.
|
||||
|
||||
### TraitDefinition
|
||||
|
||||
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
|
||||
|
||||
Traits are operational features provided by the platform. To attach a trait to component instance, the user will declare `.type` field to reference the specific `TraitDefinition`, and `.properties` field to set property values of the given trait. Similarly, `TraitDefiniton` also allows you to define *template* for operational features.
|
||||
|
||||
We also reference component definitions and trait definitions as *"capability definitions"* in KubeVela.
|
||||
|
||||
## Environment
|
||||
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "deployment environments" or "environments" for short. Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow user to create different deployment environments such as "test" and "production".
|
||||
|
||||
Currently, a KubeVela `environment` only maps to a Kubernetes namespace, while the cluster level environment is work in progress.
|
||||
|
||||
### Summary
|
||||
|
||||
The main concepts of KubeVela could be shown as below:
|
||||
|
||||

|
||||
|
||||
## Architecture
|
||||
|
||||
The overall architecture of KubeVela is shown as below:
|
||||
|
||||

|
||||
|
||||
Specifically, the application controller is responsible for application abstraction and encapsulation (i.e. the controller for `Application` and `Definition`). The rollout controller will handle progressive rollout strategy with the whole application as a unit. The multi-cluster deployment engine is responsible for deploying the application across multiple clusters and environments with traffic shifting and rollout features supported.
|
||||
|
|
@ -1,245 +0,0 @@
|
|||
---
|
||||
title: Advanced Features
|
||||
---
|
||||
|
||||
As a Data Configuration Language, CUE allows you to do some advanced templating magic in definition objects.
|
||||
|
||||
## Render Multiple Resources With a Loop
|
||||
|
||||
You can define the for-loop inside the `outputs`.
|
||||
|
||||
> Note that in this case the type of `parameter` field used in the for-loop must be a map.
|
||||
|
||||
Below is an example that will render multiple Kubernetes Services in one trait:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: expose
|
||||
spec:
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
http: [string]: int
|
||||
}
|
||||
|
||||
outputs: {
|
||||
for k, v in parameter.http {
|
||||
"\(k)": {
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
spec: {
|
||||
selector:
|
||||
app: context.name
|
||||
ports: [{
|
||||
port: v
|
||||
targetPort: v
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The usage of this trait could be:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: testapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
...
|
||||
traits:
|
||||
- type: expose
|
||||
properties:
|
||||
http:
|
||||
myservice1: 8080
|
||||
myservice2: 8081
|
||||
```
|
||||
|
||||
## Execute HTTP Request in Trait Definition
|
||||
|
||||
The trait definition can send a HTTP request and capture the response to help you rendering the resource with keyword `processing`.
|
||||
|
||||
You can define HTTP request `method`, `url`, `body`, `header` and `trailer` in the `processing.http` section, and the returned data will be stored in `processing.output`.
|
||||
|
||||
> Please ensure the target HTTP server returns a **JSON data**. `output`.
|
||||
|
||||
Then you can reference the returned data from `processing.output` in `patch` or `output/outputs`.
|
||||
|
||||
Below is an example:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: auth-service
|
||||
spec:
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
serviceURL: string
|
||||
}
|
||||
|
||||
processing: {
|
||||
output: {
|
||||
token?: string
|
||||
}
|
||||
// The target server will return a JSON data with `token` as key.
|
||||
http: {
|
||||
method: *"GET" | string
|
||||
url: parameter.serviceURL
|
||||
request: {
|
||||
body?: bytes
|
||||
header: {}
|
||||
trailer: {}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
patch: {
|
||||
data: token: processing.output.token
|
||||
}
|
||||
```
|
||||
|
||||
In above example, this trait definition will send request to get the `token` data, and then patch the data to given component instance.
|
||||
|
||||
## Data Passing
|
||||
|
||||
A trait definition can read the generated API resources (rendered from `output` and `outputs`) of given component definition.
|
||||
|
||||
> KubeVela will ensure the component definitions are always rendered before traits definitions.
|
||||
|
||||
Specifically, the `context.output` contains the rendered workload API resource (whose GVK is indicated by `spec.workload`in component definition), and use `context.outputs.<xx>` to contain all the other rendered API resources.
|
||||
|
||||
Below is an example for data passing:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: worker
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
ports: [{containerPort: parameter.port}]
|
||||
envFrom: [{
|
||||
configMapRef: name: context.name + "game-config"
|
||||
}]
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
outputs: gameconfig: {
|
||||
apiVersion: "v1"
|
||||
kind: "ConfigMap"
|
||||
metadata: {
|
||||
name: context.name + "game-config"
|
||||
}
|
||||
data: {
|
||||
enemies: parameter.enemies
|
||||
lives: parameter.lives
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
// +usage=Which image would you like to use for your service
|
||||
// +short=i
|
||||
image: string
|
||||
// +usage=Commands to run in the container
|
||||
cmd?: [...string]
|
||||
lives: string
|
||||
enemies: string
|
||||
port: int
|
||||
}
|
||||
|
||||
|
||||
---
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: ingress
|
||||
spec:
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
domain: string
|
||||
path: string
|
||||
exposePort: int
|
||||
}
|
||||
// trait template can have multiple outputs in one trait
|
||||
outputs: service: {
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
spec: {
|
||||
selector:
|
||||
app: context.name
|
||||
ports: [{
|
||||
port: parameter.exposePort
|
||||
targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
|
||||
}]
|
||||
}
|
||||
}
|
||||
outputs: ingress: {
|
||||
apiVersion: "networking.k8s.io/v1beta1"
|
||||
kind: "Ingress"
|
||||
metadata:
|
||||
name: context.name
|
||||
labels: config: context.outputs.gameconfig.data.enemies
|
||||
spec: {
|
||||
rules: [{
|
||||
host: parameter.domain
|
||||
http: {
|
||||
paths: [{
|
||||
path: parameter.path
|
||||
backend: {
|
||||
serviceName: context.name
|
||||
servicePort: parameter.exposePort
|
||||
}
|
||||
}]
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In detail, when the `worker` `ComponentDefinition` is rendered:
|
||||
1. the rendered Kubernetes Deployment resource will be stored in the `context.output`,
|
||||
2. all other rendered resources will be stored in `context.outputs.<xx>`, where `<xx>` is the unique name of each entry in `template.outputs`.
|
||||
|
||||
Thus, the `TraitDefinition` can read these rendered API resources (e.g. `context.outputs.gameconfig.data.enemies`) from the `context`.
|
||||
|
|
@ -1,548 +0,0 @@
|
|||
---
|
||||
title: Learning CUE
|
||||
---
|
||||
|
||||
This document explains in detail how to use CUE to encapsulate and abstract a given capability in Kubernetes.
|
||||
|
||||
> Please make sure you have already learned about the `Application` custom resource before reading the following guide.
|
||||
|
||||
## Overview
|
||||
|
||||
The reasons why KubeVela supports CUE as a first-class solution for designing abstractions can be summarized as below:
|
||||
|
||||
- **CUE is designed for large scale configuration.** CUE has the ability to understand a
|
||||
configuration worked on by engineers across a whole company and to safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal to define and ship production level applications at web scale.
|
||||
- **CUE supports first-class code generation and automation.** CUE can integrate with existing tools and workflows naturally while other tools would have to build complex custom solutions. For example, it can generate OpenAPI schemas with Go code. This is how KubeVela builds developer tools and GUI interfaces based on the CUE templates.
|
||||
- **CUE integrates very well with Go.**
|
||||
KubeVela is built with Go just like most projects in the Kubernetes ecosystem. CUE is also implemented in Go and exposes a rich Go API. KubeVela integrates CUE as a core library and works as a Kubernetes controller. With the help of CUE, KubeVela can easily handle data constraint problems.
|
||||
|
||||
> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Please make sure the CLIs below are present in your environment:
|
||||
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
|
||||
* [`vela` (>v1.0.0)](../install#4-optional-get-kubevela-cli)
|
||||
|
||||
## CUE CLI Basic
|
||||
|
||||
Below is some basic CUE data; you can define both schema and values in the same file, in almost the same format:
|
||||
|
||||
```
|
||||
a: 1.5
|
||||
a: float
|
||||
b: 1
|
||||
b: int
|
||||
d: [1, 2, 3]
|
||||
g: {
|
||||
h: "abc"
|
||||
}
|
||||
e: string
|
||||
```
|
||||
|
||||
CUE is a superset of JSON; we can use it like JSON with the following conveniences, all illustrated in the short sketch after this list:
|
||||
|
||||
* C style comments,
|
||||
* quotes may be omitted from field names without special characters,
|
||||
* commas at the end of fields are optional,
|
||||
* comma after last element in list is allowed,
|
||||
* outer curly braces are optional.
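
As a small illustration (all field names and values below are arbitrary), the following snippet uses every convenience from the list above and is still valid CUE:

```cue
// C style line comment; the outer curly braces are omitted
name: "kubevela"                      // unquoted field name
"app.oam.dev/component": "web",       // quotes kept because of special characters; trailing comma is optional
ports: [8080, 9090,]                  // comma after the last list element is allowed
```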
|
||||
|
||||
CUE has powerful CLI commands. Let's keep the data in a file named `first.cue` and try them out.
|
||||
|
||||
* Format the CUE file. If you're using GoLand or a similar JetBrains IDE,
|
||||
you can [configure save on format](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
|
||||
This command will not only format the CUE, but also point out any wrong schema. That's very useful.
|
||||
```shell
|
||||
cue fmt first.cue
|
||||
```
|
||||
|
||||
* Schema check: besides `cue fmt`, you can also use `cue vet` to check the schema.
|
||||
```shell
|
||||
cue vet first.cue
|
||||
```
|
||||
|
||||
* Calculate/Render the result. `cue eval` will calculate the CUE file and render out the result.
|
||||
You can see the results don't contain `a: float` and `b: int`, because these two variables have been resolved to concrete values. The `e: string` doesn't have a definite value, so it is kept as is.
|
||||
```shell
|
||||
$ cue eval first.cue
|
||||
a: 1.5
|
||||
b: 1
|
||||
d: [1, 2, 3]
|
||||
g: {
|
||||
h: "abc"
|
||||
}
|
||||
e: string
|
||||
```
|
||||
|
||||
* Render a specific result. For example, if we only want to know the result of `b` in the file, we can specify it with the `-e` flag.
|
||||
```shell
|
||||
$ cue eval -e b first.cue
|
||||
1
|
||||
```
|
||||
|
||||
* Export the result. `cue export` will export the result with concrete final values. It will report an error if some variables are not concrete.
|
||||
```shell
|
||||
$ cue export first.cue
|
||||
e: cannot convert incomplete value "string" to JSON:
|
||||
./first.cue:9:4
|
||||
```
|
||||
We can complete the value by giving a value to `e`, for example:
|
||||
```shell
|
||||
echo "e: \"abc\"" >> first.cue
|
||||
```
|
||||
Then the command will work. By default, the result will be rendered in JSON format.
|
||||
```shell
|
||||
$ cue export first.cue
|
||||
{
|
||||
"a": 1.5,
|
||||
"b": 1,
|
||||
"d": [
|
||||
1,
|
||||
2,
|
||||
3
|
||||
],
|
||||
"g": {
|
||||
"h": "abc"
|
||||
},
|
||||
"e": "abc"
|
||||
}
|
||||
```
|
||||
|
||||
* Export the result in YAML format.
|
||||
```shell
|
||||
$ cue export first.cue --out yaml
|
||||
a: 1.5
|
||||
b: 1
|
||||
d:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
g:
|
||||
h: abc
|
||||
e: abc
|
||||
```
|
||||
|
||||
* Export the result for a specified variable.
|
||||
```shell
|
||||
$ cue export -e g first.cue
|
||||
{
|
||||
"h": "abc"
|
||||
}
|
||||
```
|
||||
|
||||
By now, you have learned the most useful CUE CLI operations.
|
||||
|
||||
## CUE Language Basic
|
||||
|
||||
* Data structures: below are the basic data structures of CUE.
|
||||
|
||||
```shell
|
||||
// float
|
||||
a: 1.5
|
||||
|
||||
// int
|
||||
b: 1
|
||||
|
||||
// string
|
||||
c: "blahblahblah"
|
||||
|
||||
// array
|
||||
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
|
||||
|
||||
// bool
|
||||
e: true
|
||||
|
||||
// struct
|
||||
f: {
|
||||
a: 1.5
|
||||
b: 1
|
||||
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
|
||||
g: {
|
||||
h: "abc"
|
||||
}
|
||||
}
|
||||
|
||||
// null
|
||||
j: null
|
||||
```
|
||||
|
||||
* Define a custom CUE type. You can use the `#` symbol to indicate that a name represents a CUE type (a definition).
|
||||
|
||||
```
|
||||
#abc: string
|
||||
```
|
||||
|
||||
Let's name it `second.cue`. Then `cue export` won't complain, as `#abc` is a type, not an incomplete value.
|
||||
|
||||
```shell
|
||||
$ cue export second.cue
|
||||
{}
|
||||
```
|
||||
|
||||
You can also define a more complex custom struct, such as:
|
||||
|
||||
```
|
||||
#abc: {
|
||||
x: int
|
||||
y: string
|
||||
z: {
|
||||
a: float
|
||||
b: bool
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
It's widely used in KubeVela to define templates and do validation.
|
||||
|
||||
## CUE Templating and References
|
||||
|
||||
Let's try to define a CUE template with the knowledge just learned.
|
||||
|
||||
1. Define a struct variable `parameter`.
|
||||
|
||||
```shell
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
}
|
||||
```
|
||||
|
||||
Let's save it in a file called `deployment.cue`.
|
||||
|
||||
2. Define a more complex struct variable `template` and reference the variable `parameter`.
|
||||
|
||||
```
|
||||
template: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
}]
|
||||
}}}
|
||||
}
|
||||
```
|
||||
|
||||
People who are familiar with Kubernetes may have recognized that this is a template for a K8s Deployment. The `parameter` part defines the parameters of the template.
|
||||
|
||||
3. Add it to `deployment.cue`.
|
||||
|
||||
4. Then, let's fill in the values by adding the following code block:
|
||||
|
||||
```
|
||||
parameter: {
|
||||
name: "mytest"
|
||||
image: "nginx:v1"
|
||||
}
|
||||
```
|
||||
|
||||
5. Finally, let's export it in YAML:
|
||||
|
||||
```shell
|
||||
$ cue export deployment.cue -e template --out yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
spec:
|
||||
template:
|
||||
spec:
|
||||
containers:
|
||||
- name: mytest
|
||||
image: nginx:v1
|
||||
metadata:
|
||||
labels:
|
||||
app.oam.dev/component: mytest
|
||||
selector:
|
||||
matchLabels:
|
||||
app.oam.dev/component: mytest
|
||||
```
|
||||
|
||||
## Advanced CUE Schematic
|
||||
|
||||
* Open struct and list. Using `...` in a list or struct means the object is open.
|
||||
|
||||
- A list like `[...string]` means it can hold multiple string elements.
|
||||
If we don't add `...`, then `[string]` means the list can only have one `string` element in it.
|
||||
- A struct like below means the struct can contain unknown fields.
|
||||
```
|
||||
{
|
||||
abc: string
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
* Operator `|` represents a disjunction: the value could be either case. Below is an example where the variable `a` could be either a string or an int.
|
||||
|
||||
```shell
|
||||
a: string | int
|
||||
```
|
||||
|
||||
* Default value: we can use the `*` symbol to mark a default value for a variable. It is usually used together with `|`. Below is an example where the variable `a` is an `int` with a default value of `1`.
|
||||
|
||||
```shell
|
||||
a: *1 | int
|
||||
```
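
A minimal sketch (the field names are made up) of how such a default behaves: if nothing else constrains the field, exporting yields the default; a concrete value simply wins over it.

```cue
replicas: *1 | int   // nothing else constrains it, so `cue export` renders replicas: 1
count:    *1 | int
count:    5          // a concrete value unifies with the disjunction, so count becomes 5
```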
|
||||
|
||||
* Optional fields. In some cases, a field may be left unset; these are optional fields, and we can define them with `?:`. In the example below, `a` is optional, `x` and `z` in `#my` are optional, while `y` is required.
|
||||
|
||||
```
|
||||
a ?: int
|
||||
|
||||
#my: {
|
||||
x ?: string
|
||||
y : int
|
||||
z ?:float
|
||||
}
|
||||
```
|
||||
|
||||
Optional fields can be skipped, which usually works together with conditional logic. Specifically, to check whether a field exists, the CUE syntax is `if _variable_ != _|_`; an example is below:
|
||||
|
||||
```
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
config?: [...#Config]
|
||||
}
|
||||
output: {
|
||||
...
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
if parameter.config != _|_ {
|
||||
config: parameter.config
|
||||
}
|
||||
}]
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
* Operator `&` unifies two values.
|
||||
|
||||
```shell
|
||||
a: *1 | int
|
||||
b: 3
|
||||
c: a & b
|
||||
```
|
||||
|
||||
Save it in a file named `third.cue`.
|
||||
|
||||
You can evaluate the result by using `cue eval`:
|
||||
|
||||
```shell
|
||||
$ cue eval third.cue
|
||||
a: 1
|
||||
b: 3
|
||||
c: 3
|
||||
```
|
||||
|
||||
* Conditional statements are really useful when you have cascading operations where different values lead to different results. They let you express `if..else` logic in the template.
|
||||
|
||||
```shell
|
||||
price: number
|
||||
feel: *"good" | string
|
||||
// Feel bad if price is too high
|
||||
if price > 100 {
|
||||
feel: "bad"
|
||||
}
|
||||
price: 200
|
||||
```
|
||||
|
||||
Save it in a file named `fourth.cue`.
|
||||
|
||||
You can evaluate the result by using `cue eval`:
|
||||
|
||||
```shell
|
||||
$ cue eval fourth.cue
|
||||
price: 200
|
||||
feel: "bad"
|
||||
```
|
||||
|
||||
Another example is to use a bool type as a parameter.
|
||||
|
||||
```
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
useENV: bool
|
||||
}
|
||||
output: {
|
||||
...
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
if parameter.useENV == true {
|
||||
env: [{name: "my-env", value: "my-value"}]
|
||||
}
|
||||
}]
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
* For loops: if you want to avoid duplication, you may want to use a for loop.
|
||||
- Loop for Map
|
||||
```cue
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
env: [string]: string
|
||||
}
|
||||
output: {
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
env: [
|
||||
for k, v in parameter.env {
|
||||
name: k
|
||||
value: v
|
||||
},
|
||||
]
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
- Loop for type
|
||||
```
|
||||
#a: {
|
||||
"hello": "Barcelona"
|
||||
"nihao": "Shanghai"
|
||||
}
|
||||
|
||||
for k, v in #a {
|
||||
"\(k)": {
|
||||
nameLen: len(v)
|
||||
value: v
|
||||
}
|
||||
}
|
||||
```
|
||||
- Loop for Slice
|
||||
```cue
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
env: [...{name:string,value:string}]
|
||||
}
|
||||
output: {
|
||||
...
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
env: [
|
||||
for _, v in parameter.env {
|
||||
name: v.name
|
||||
value: v.value
|
||||
},
|
||||
]
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that we use `"\( _my-statement_ )"` to evaluate an expression inside a string (string interpolation).
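
A standalone sketch of that interpolation syntax (the fields here are arbitrary):

```cue
host:    "example.com"
port:    8080
address: "\(host):\(port)"   // evaluates to "example.com:8080"
```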
|
||||
|
||||
## Import CUE Internal Packages
|
||||
|
||||
CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which also can be used in KubeVela.
|
||||
|
||||
Below is an example that uses `strings.Join` to concatenate a string list into one string.
|
||||
|
||||
```cue
|
||||
import ("strings")
|
||||
|
||||
parameter: {
|
||||
outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
|
||||
}
|
||||
output: {
|
||||
spec: {
|
||||
if len(parameter.outputs) > 0 {
|
||||
_x: [ for _, v in parameter.outputs {
|
||||
"\(v.ip) \(v.hostname)"
|
||||
}]
|
||||
message: "Visiting URL: " + strings.Join(_x, "")
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Import Kube Package
|
||||
|
||||
KubeVela automatically generates internal packages for all K8s API resources (including installed CRDs) by reading the K8s OpenAPI schema from the installed K8s cluster.
|
||||
|
||||
You can use these packages in KubeVela's CUE templates with the format `kube/<apiVersion>`, in the same way as the CUE internal packages.
|
||||
|
||||
For example, `Deployment` can be used as:
|
||||
|
||||
```cue
|
||||
import (
|
||||
apps "kube/apps/v1"
|
||||
)
|
||||
|
||||
parameter: {
|
||||
name: string
|
||||
}
|
||||
|
||||
output: apps.#Deployment
|
||||
output: {
|
||||
metadata: name: parameter.name
|
||||
}
|
||||
```
|
||||
|
||||
`Service` can be used as below (importing the package with an alias is not necessary):
|
||||
|
||||
```cue
|
||||
import ("kube/v1")
|
||||
|
||||
output: v1.#Service
|
||||
output: {
|
||||
metadata: {
|
||||
"name": parameter.name
|
||||
}
|
||||
spec: type: "ClusterIP",
|
||||
}
|
||||
|
||||
parameter: {
|
||||
name: "myapp"
|
||||
}
|
||||
```
|
||||
|
||||
Even installed CRDs work:
|
||||
|
||||
```
|
||||
import (
|
||||
oam "kube/core.oam.dev/v1alpha2"
|
||||
)
|
||||
|
||||
output: oam.#Application
|
||||
output: {
|
||||
metadata: {
|
||||
"name": parameter.name
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
name: "myapp"
|
||||
}
|
||||
```
|
||||
|
|
@ -1,368 +0,0 @@
|
|||
---
|
||||
title: How-to
|
||||
---
|
||||
|
||||
This section introduces how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
|
||||
|
||||
> Before reading this part, please make sure you've learned the [Definition CRD](../platform-engineers/definition-and-templates) in KubeVela.
|
||||
|
||||
## Declare `ComponentDefinition`
|
||||
|
||||
Here is a CUE based `ComponentDefinition` example which provides an abstraction for the stateless workload type:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: stateless
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
}
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
In detail:
|
||||
- `.spec.workload` is required to indicate the workload type of this component.
|
||||
- `.spec.schematic.cue.template` is a CUE template, specifically:
|
||||
* The `output` field defines the template for the abstraction.
|
||||
* The `parameter` field defines the template parameters, i.e. the configurable properties exposed in the `Application` abstraction (a JSON schema will be automatically generated based on them).
|
||||
|
||||
Let's declare another component named `task`, i.e. an abstraction for a run-to-completion workload.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: task
|
||||
annotations:
|
||||
definition.oam.dev/description: "Describes jobs that run code or a script to completion."
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "batch/v1"
|
||||
kind: "Job"
|
||||
spec: {
|
||||
parallelism: parameter.count
|
||||
completions: parameter.count
|
||||
template: spec: {
|
||||
restartPolicy: parameter.restart
|
||||
containers: [{
|
||||
image: parameter.image
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
count: *1 | int
|
||||
image: string
|
||||
restart: *"Never" | string
|
||||
cmd?: [...string]
|
||||
}
|
||||
```
|
||||
|
||||
Save the above `ComponentDefinition` objects to files and install them into your Kubernetes cluster with `$ kubectl apply -f stateless-def.yaml -f task-def.yaml`.
|
||||
|
||||
## Declare an `Application`
|
||||
|
||||
The `ComponentDefinition` can be instantiated in `Application` abstraction as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: Application
|
||||
metadata:
|
||||
name: website
|
||||
spec:
|
||||
components:
|
||||
- name: hello
|
||||
type: stateless
|
||||
properties:
|
||||
image: crccheck/hello-world
|
||||
name: mysvc
|
||||
- name: countdown
|
||||
type: task
|
||||
properties:
|
||||
image: centos:7
|
||||
cmd:
|
||||
- "bin/bash"
|
||||
- "-c"
|
||||
- "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
|
||||
```
|
||||
|
||||
### Under The Hood
|
||||
<details>
|
||||
|
||||
The above application resource will generate and manage the following Kubernetes resources in your target cluster, based on the `output` in the CUE template and the user input in the `Application` properties.
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: backend
|
||||
... # skip tons of metadata info
|
||||
spec:
|
||||
template:
|
||||
spec:
|
||||
containers:
|
||||
- name: mysvc
|
||||
image: crccheck/hello-world
|
||||
metadata:
|
||||
labels:
|
||||
app.oam.dev/component: mysvc
|
||||
selector:
|
||||
matchLabels:
|
||||
app.oam.dev/component: mysvc
|
||||
---
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: countdown
|
||||
... # skip tons of metadata info
|
||||
spec:
|
||||
parallelism: 1
|
||||
completions: 1
|
||||
template:
|
||||
metadata:
|
||||
name: countdown
|
||||
spec:
|
||||
containers:
|
||||
- name: countdown
|
||||
image: 'centos:7'
|
||||
command:
|
||||
- bin/bash
|
||||
- '-c'
|
||||
- for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
|
||||
restartPolicy: Never
|
||||
```
|
||||
</details>
|
||||
|
||||
## CUE `Context`
|
||||
|
||||
KubeVela allows you to reference the runtime information of your application via the `context` keyword.
|
||||
|
||||
The most widely used context values are the application name (`context.appName`) and the component name (`context.name`).
|
||||
|
||||
```cue
|
||||
context: {
|
||||
appName: string
|
||||
name: string
|
||||
}
|
||||
```
|
||||
|
||||
For example, let's say you want to use the component name filled in by users as the container name in the workload instance:
|
||||
|
||||
```cue
|
||||
parameter: {
|
||||
image: string
|
||||
}
|
||||
output: {
|
||||
...
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
}]
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
> Note that the `context` information is auto-injected before resources are applied to the target cluster.
|
||||
|
||||
### Full available information in CUE `context`
|
||||
|
||||
| Context Variable | Description |
|
||||
| :--: | :---------: |
|
||||
| `context.appRevision` | The revision of the application |
|
||||
| `context.appRevisionNum` | The revision number (`int` type) of the application, e.g. `context.appRevisionNum` will be `1` if `context.appRevision` is `app-v1` |
|
||||
| `context.appName` | The name of the application |
|
||||
| `context.name` | The name of the component of the application |
|
||||
| `context.namespace` | The namespace of the application |
|
||||
| `context.output` | The rendered workload API resource of the component; this is usually used in traits |
|
||||
| `context.outputs.<resourceName>` | The rendered trait API resources of the component; this is usually used in traits |
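
As a rough sketch (the label keys below are illustrative, not required by KubeVela), any of these variables can be referenced inside a template, for example to derive metadata:

```cue
output: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name:      context.name
		namespace: context.namespace
		labels: {
			"example.dev/app":      context.appName
			"example.dev/revision": context.appRevision
		}
	}
}
```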
|
||||
|
||||
|
||||
## Composition
|
||||
|
||||
It's common for a component definition to be composed of multiple API resources, for example, a `webserver` component composed of a Deployment and a Service. CUE is a great solution for achieving this with simple primitives.
|
||||
|
||||
> Another approach to composition in KubeVela, of course, is [using Helm](/docs/helm/component).
|
||||
|
||||
## How-to
|
||||
|
||||
KubeVela requires you to define the template of the workload type in the `output` section, and leave all the other resource templates in the `outputs` section, with the format below:
|
||||
|
||||
```cue
|
||||
outputs: <unique-name>:
|
||||
<full template data>
|
||||
```
|
||||
|
||||
> The reason for this requirement is that KubeVela needs to know it is currently rendering a workload, so it can do some "magic" such as patching annotations/labels or other data during rendering.
|
||||
|
||||
Below is the example of the `webserver` definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webserver
|
||||
annotations:
|
||||
definition.oam.dev/description: "webserver is a combo of Deployment + Service"
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
|
||||
if parameter["env"] != _|_ {
|
||||
env: parameter.env
|
||||
}
|
||||
|
||||
if context["config"] != _|_ {
|
||||
env: context.config
|
||||
}
|
||||
|
||||
ports: [{
|
||||
containerPort: parameter.port
|
||||
}]
|
||||
|
||||
if parameter["cpu"] != _|_ {
|
||||
resources: {
|
||||
limits:
|
||||
cpu: parameter.cpu
|
||||
requests:
|
||||
cpu: parameter.cpu
|
||||
}
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// an extra template
|
||||
outputs: service: {
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
spec: {
|
||||
selector: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
ports: [
|
||||
{
|
||||
port: parameter.port
|
||||
targetPort: parameter.port
|
||||
},
|
||||
]
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
image: string
|
||||
cmd?: [...string]
|
||||
port: *80 | int
|
||||
env?: [...{
|
||||
name: string
|
||||
value?: string
|
||||
valueFrom?: {
|
||||
secretKeyRef: {
|
||||
name: string
|
||||
key: string
|
||||
}
|
||||
}
|
||||
}]
|
||||
cpu?: string
|
||||
}
|
||||
```
|
||||
|
||||
The user could now declare an `Application` with it:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: webserver-demo
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: hello-world
|
||||
type: webserver
|
||||
properties:
|
||||
image: crccheck/hello-world
|
||||
port: 8000
|
||||
env:
|
||||
- name: "foo"
|
||||
value: "bar"
|
||||
cpu: "100m"
|
||||
```
|
||||
|
||||
It will generate and manage the API resources below in the target cluster:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployment
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
hello-world-v1 1/1 1 1 15s
|
||||
|
||||
$ kubectl get svc
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
|
||||
```
|
||||
|
||||
## What's Next
|
||||
|
||||
Please check the [Learning CUE](./basic) documentation for why we support CUE as a first-class templating solution and more details about using CUE efficiently.
|
||||
|
|
@ -1,53 +0,0 @@
|
|||
---
|
||||
title: Define resources located in a different namespace than the application
|
||||
---
|
||||
|
||||
In this section, we will introduce how to use a CUE template to create resources (workload/trait) in a namespace different from the application's.
|
||||
|
||||
By default, the `metadata.namespace` of a K8s resource in the CUE template is automatically filled with the same namespace as the application.
|
||||
|
||||
If you want to create K8s resources running in a specific namespace which is different from the application's, you can set the `metadata.namespace` field. KubeVela will create the resources in the specified namespace, and create a resourceTracker object as the owner of those resources.
|
||||
|
||||
|
||||
## Usage
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: worker
|
||||
spec:
|
||||
definitionRef:
|
||||
name: deployments.apps
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
namespace: string // the `namespace` parameter indicates that the resource may be located in a different namespace than the application
|
||||
}
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
metadata: {
|
||||
namespace: parameter.namespace
|
||||
}
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": parameter.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
}]
|
||||
}}}
|
||||
}
|
||||
```
|
||||
|
||||
|
|
@ -1,445 +0,0 @@
|
|||
---
|
||||
title: Patch Traits
|
||||
---
|
||||
|
||||
**Patch** is a very common pattern for trait definitions, i.e. the app operators can amend/patch attributes of the component instance (normally the workload) to enable certain operational features such as sidecars or node affinity rules (and this should be done **before** the resources are applied to the target cluster).
|
||||
|
||||
This pattern is extremely useful when the component definition is provided by a third-party component provider (e.g. a software distributor), so app operators do not have the privilege to change its template.
|
||||
|
||||
> Note that even though the patch trait itself is defined with CUE, it can patch any component regardless of how its schematic is defined (i.e. CUE, Helm, or any other supported schematic approach).
|
||||
|
||||
Below is an example for `node-affinity` trait:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "affinity specify node affinity and toleration"
|
||||
name: node-affinity
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
spec: template: spec: {
|
||||
if parameter.affinity != _|_ {
|
||||
affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
|
||||
matchExpressions: [
|
||||
for k, v in parameter.affinity {
|
||||
key: k
|
||||
operator: "In"
|
||||
values: v
|
||||
},
|
||||
]}]
|
||||
}
|
||||
if parameter.tolerations != _|_ {
|
||||
tolerations: [
|
||||
for k, v in parameter.tolerations {
|
||||
effect: "NoSchedule"
|
||||
key: k
|
||||
operator: "Equal"
|
||||
value: v
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
affinity?: [string]: [...string]
|
||||
tolerations?: [string]: string
|
||||
}
|
||||
```
|
||||
|
||||
The patch trait above assumes the target component instance has a `spec.template.spec.affinity` field. Hence, we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
|
||||
|
||||
Another important field is `podDisruptive`. This patch trait patches the pod template, so a change to any field of this trait will cause the pod to restart. We should set `podDisruptive` to `true` to tell users that applying this trait will cause the pod to restart.
|
||||
|
||||
|
||||
Now users can declare that they want to add node affinity rules to the component instance as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: Application
|
||||
metadata:
|
||||
name: testapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
image: oamdev/testapp:v1
|
||||
traits:
|
||||
- type: "node-affinity"
|
||||
properties:
|
||||
affinity:
|
||||
server-owner: ["owner1","owner2"]
|
||||
resource-pool: ["pool1","pool2","pool3"]
|
||||
tolerations:
|
||||
resource-pool: "broken-pool1"
|
||||
server-owner: "old-owner"
|
||||
```
|
||||
|
||||
### Known Limitations
|
||||
|
||||
By default, a patch trait in KubeVela leverages the CUE `merge` operation. It has the following known constraints though:
|
||||
|
||||
- It cannot handle conflicts (see the sketch after this list).
  - For example, if a component instance has already been set with a concrete value `replicas=5`, then any patch trait that patches the `replicas` field will fail, i.e. you should not expose the `replicas` field in its component definition schematic.
|
||||
- Array lists in the patch will be merged following index order. It cannot handle duplicated array list members. This can be addressed by the strategy patch feature below.
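
A minimal CUE sketch of the conflict case from the first bullet (the field names are illustrative): unifying two different concrete values yields an error, which is exactly what happens when a patch collides with a value the component template has already fixed.

```cue
// the component template already fixes the value
output: spec: replicas: 5

// a patch that tries to set a different concrete value
patch: spec: replicas: 3

// unification fails: 5 & 3 is _|_ (bottom), i.e. a conflict error on `cue eval`
merged: output & patch
```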
|
||||
|
||||
### Strategy Patch
|
||||
|
||||
The `strategy patch` is useful for patching array lists.
|
||||
|
||||
> Note that this is not a standard CUE feature; KubeVela enhances CUE in this case.
|
||||
|
||||
With the `//+patchKey=<key_name>` annotation, the merging logic of two array lists will not follow the default CUE behavior. Instead, the list is treated as an object and a strategic merge approach is used:
|
||||
- if a duplicated key is found, the patch data will be merged with the existing values;
|
||||
- if no duplicate is found, the patch will be appended to the array list.
|
||||
|
||||
An example of a strategy patch trait looks like below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "add sidecar to the app"
|
||||
name: sidecar
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
// +patchKey=name
|
||||
spec: template: spec: containers: [parameter]
|
||||
}
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
command?: [...string]
|
||||
}
|
||||
```
|
||||
|
||||
In the above example we set `patchKey` to `name`, which is the parameter key of the container name. In this case, if the workload doesn't have a container with the same name, the patch will append a sidecar container to the `spec.template.spec.containers` array list. If the workload already has a container with the same name as this `sidecar` trait, a merge operation will happen instead of an append (which would otherwise lead to duplicated containers).
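
To illustrate the expected behavior only (the container names and images below are made up): with `patchKey=name`, a patch entry whose `name` matches an existing container is merged into that container, while a non-matching entry is appended.

```cue
// containers already rendered by the component
original: [{name: "main", image: "app:v1"}]

// sidecar patch declared with // +patchKey=name
patchData: [{name: "logger", image: "fluentd:v1"}]

// conceptual result: "logger" has no name collision, so it is appended;
// a patch entry named "main" would instead be merged into the existing container
result: [
	{name: "main", image: "app:v1"},
	{name: "logger", image: "fluentd:v1"},
]
```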
|
||||
|
||||
If both `patch` and `outputs` exist in one trait definition, the `patch` operation will be handled first and then the `outputs` will be rendered.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "expose the app"
|
||||
name: expose
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {spec: template: metadata: labels: app: context.name}
|
||||
outputs: service: {
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
metadata: name: context.name
|
||||
spec: {
|
||||
selector: app: context.name
|
||||
ports: [
|
||||
for k, v in parameter.http {
|
||||
port: v
|
||||
targetPort: v
|
||||
},
|
||||
]
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
http: [string]: int
|
||||
}
|
||||
```
|
||||
|
||||
So the above trait, which attaches a Service to a given component instance, will first patch a corresponding label onto the workload and then render the Service resource based on the template in `outputs`.
|
||||
|
||||
## More Use Cases of Patch Trait
|
||||
|
||||
Patch traits are in general pretty useful for separating operational concerns from the component definition; here are some more examples.
|
||||
|
||||
### Add Labels
|
||||
|
||||
For example, patch a common label (virtual group) onto the component instance.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "Add virtual group labels"
|
||||
name: virtualgroup
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
spec: template: {
|
||||
metadata: labels: {
|
||||
if parameter.scope == "namespace" {
|
||||
"app.namespace.virtual.group": parameter.group
|
||||
}
|
||||
if parameter.scope == "cluster" {
|
||||
"app.cluster.virtual.group": parameter.group
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
group: *"default" | string
|
||||
scope: *"namespace" | string
|
||||
}
|
||||
```
|
||||
|
||||
Then it could be used like:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
spec:
|
||||
...
|
||||
traits:
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group1"
|
||||
scope: "cluster"
|
||||
```
|
||||
|
||||
### Add Annotations
|
||||
|
||||
Similar to common labels, you could also patch the component instance with annotations. The annotation value should be a JSON string.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "Specify auto scale by annotation"
|
||||
name: kautoscale
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: false
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
import "encoding/json"
|
||||
|
||||
patch: {
|
||||
metadata: annotations: {
|
||||
"my.custom.autoscale.annotation": json.Marshal({
|
||||
"minReplicas": parameter.min
|
||||
"maxReplicas": parameter.max
|
||||
})
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
min: *1 | int
|
||||
max: *3 | int
|
||||
}
|
||||
```
|
||||
|
||||
### Add Pod Environments
|
||||
|
||||
Injecting system environment variables into the Pod is also a very common use case.
|
||||
|
||||
> This case relies on strategy merge patch, so don't forget to add `+patchKey=name` as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "add env into your pods"
|
||||
name: env
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
spec: template: spec: {
|
||||
// +patchKey=name
|
||||
containers: [{
|
||||
name: context.name
|
||||
// +patchKey=name
|
||||
env: [
|
||||
for k, v in parameter.env {
|
||||
name: k
|
||||
value: v
|
||||
},
|
||||
]
|
||||
}]
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
env: [string]: string
|
||||
}
|
||||
```
|
||||
|
||||
### Inject `ServiceAccount` Based on External Auth Service
|
||||
|
||||
In this example, the service account is dynamically requested from an authentication service and patched into the component instance.
|
||||
|
||||
This example puts the UID token in the HTTP header, but you can also use the request body if you prefer.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "dynamically specify service account"
|
||||
name: service-account
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
processing: {
|
||||
output: {
|
||||
credentials?: string
|
||||
}
|
||||
http: {
|
||||
method: *"GET" | string
|
||||
url: parameter.serviceURL
|
||||
request: {
|
||||
header: {
|
||||
"authorization.token": parameter.uidtoken
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
patch: {
|
||||
spec: template: spec: serviceAccountName: processing.output.credentials
|
||||
}
|
||||
|
||||
parameter: {
|
||||
uidtoken: string
|
||||
serviceURL: string
|
||||
}
|
||||
```
|
||||
|
||||
The `processing.http` section is an advanced feature that allows a trait definition to send an HTTP request while rendering the resource. Please refer to the [Execute HTTP Request in Trait Definition](#Processing-Trait) section for more details.
|
||||
|
||||
### Add `InitContainer`
|
||||
|
||||
[`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) is useful to pre-define operations in an image and run them before the app container starts.
|
||||
|
||||
Below is an example:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "add an init container and use shared volume with pod"
|
||||
name: init-container
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
podDisruptive: true
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
spec: template: spec: {
|
||||
// +patchKey=name
|
||||
containers: [{
|
||||
name: context.name
|
||||
// +patchKey=name
|
||||
volumeMounts: [{
|
||||
name: parameter.mountName
|
||||
mountPath: parameter.appMountPath
|
||||
}]
|
||||
}]
|
||||
initContainers: [{
|
||||
name: parameter.name
|
||||
image: parameter.image
|
||||
if parameter.command != _|_ {
|
||||
command: parameter.command
|
||||
}
|
||||
|
||||
// +patchKey=name
|
||||
volumeMounts: [{
|
||||
name: parameter.mountName
|
||||
mountPath: parameter.initMountPath
|
||||
}]
|
||||
}]
|
||||
// +patchKey=name
|
||||
volumes: [{
|
||||
name: parameter.mountName
|
||||
emptyDir: {}
|
||||
}]
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
command?: [...string]
|
||||
mountName: *"workdir" | string
|
||||
appMountPath: string
|
||||
initMountPath: string
|
||||
}
|
||||
```
|
||||
|
||||
The usage could be:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: testapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
image: oamdev/testapp:v1
|
||||
traits:
|
||||
- type: "init-container"
|
||||
properties:
|
||||
name: "install-container"
|
||||
image: "busybox"
|
||||
command:
|
||||
- wget
|
||||
- "-O"
|
||||
- "/work-dir/index.html"
|
||||
- http://info.cern.ch
|
||||
mountName: "workdir"
|
||||
appMountPath: "/usr/share/nginx/html"
|
||||
initMountPath: "/work-dir"
|
||||
```
|
||||
|
|
@ -1,137 +0,0 @@
|
|||
---
|
||||
title: Status Write Back
|
||||
---
|
||||
|
||||
This documentation will explain how to achieve status write back by using CUE templates in definition objects.
|
||||
|
||||
## Health Check
|
||||
|
||||
The spec for health check is `spec.status.healthPolicy`; it is the same for both workload types and traits.
|
||||
|
||||
If not defined, the health result will always be `true`.
|
||||
|
||||
The keyword in CUE is `isHealth`; the result of the CUE expression must be of `bool` type. The KubeVela runtime will evaluate the CUE expression periodically until it becomes healthy. Each time, the controller will get all the Kubernetes resources and fill them into the `context` field.
|
||||
|
||||
So the context will contain the following information:
|
||||
|
||||
```cue
|
||||
context:{
|
||||
name: <component name>
|
||||
appName: <app name>
|
||||
output: <K8s workload resource>
|
||||
outputs: {
|
||||
<resource1>: <K8s trait resource1>
|
||||
<resource2>: <K8s trait resource2>
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
A trait will not have `context.output`; the other fields are the same.
|
||||
|
||||
An example of a health check looks like below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
spec:
|
||||
status:
|
||||
healthPolicy: |
|
||||
isHealth: (context.output.status.readyReplicas > 0) && (context.output.status.readyReplicas == context.output.status.replicas)
|
||||
...
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
spec:
|
||||
status:
|
||||
healthPolicy: |
|
||||
isHealth: len(context.outputs.service.spec.clusterIP) > 0
|
||||
...
|
||||
```
|
||||
|
||||
> Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example.
|
||||
|
||||
The health check result will be recorded into the `Application` resource.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
spec:
|
||||
components:
|
||||
- name: myweb
|
||||
type: worker
|
||||
properties:
|
||||
cmd:
|
||||
- sleep
|
||||
- "1000"
|
||||
enemies: alien
|
||||
image: busybox
|
||||
lives: "3"
|
||||
traits:
|
||||
- type: ingress
|
||||
properties:
|
||||
domain: www.example.com
|
||||
http:
|
||||
/: 80
|
||||
status:
|
||||
...
|
||||
services:
|
||||
- healthy: true
|
||||
message: "type: busybox,\t enemies:alien"
|
||||
name: myweb
|
||||
traits:
|
||||
- healthy: true
|
||||
message: 'Visiting URL: www.example.com, IP: 47.111.233.220'
|
||||
type: ingress
|
||||
status: running
|
||||
```
|
||||
|
||||
## Custom Status
|
||||
|
||||
The spec for custom status is `spec.status.customStatus`; it is the same for both workload types and traits.
|
||||
|
||||
The keyword in CUE is `message`; the result of the CUE expression must be of `string` type.
|
||||
|
||||
The custom status has the same mechanism as the health check. The Application CRD controller will evaluate the CUE expression after the health check succeeds.
|
||||
|
||||
The context will contain the following information:
|
||||
|
||||
```cue
|
||||
context:{
|
||||
name: <component name>
|
||||
appName: <app name>
|
||||
output: <K8s workload resource>
|
||||
outputs: {
|
||||
<resource1>: <K8s trait resource1>
|
||||
<resource2>: <K8s trait resource2>
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
A trait will not have `context.output`; the other fields are the same.
|
||||
|
||||
|
||||
Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
spec:
|
||||
status:
|
||||
customStatus: |-
|
||||
message: "type: " + context.output.spec.template.spec.containers[0].image + ",\t enemies:" + context.outputs.gameconfig.data.enemies
|
||||
...
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
spec:
|
||||
status:
|
||||
customStatus: |-
|
||||
message: "type: "+ context.outputs.service.spec.type +",\t clusterIP:"+ context.outputs.service.spec.clusterIP+",\t ports:"+ "\(context.outputs.service.spec.ports[0].port)"+",\t domain"+context.outputs.ingress.spec.rules[0].host
|
||||
...
|
||||
```
|
||||
|
|
@ -1,145 +0,0 @@
|
|||
---
|
||||
title: How-to
|
||||
---
|
||||
|
||||
In this section we will introduce how to define a trait.
|
||||
|
||||
## Simple Trait
|
||||
|
||||
A trait in KubeVela can be defined by simply referencing an existing Kubernetes API resource.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: ingress
|
||||
spec:
|
||||
definitionRef:
|
||||
name: ingresses.networking.k8s.io
|
||||
```
|
||||
Let's attach this trait to a component instance in `Application`:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: testapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
cmd:
|
||||
- node
|
||||
- server.js
|
||||
image: oamdev/testapp:v1
|
||||
port: 8080
|
||||
traits:
|
||||
- type: ingress
|
||||
properties:
|
||||
rules:
|
||||
- http:
|
||||
paths:
|
||||
- path: /testpath
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: test
|
||||
port:
|
||||
number: 80
|
||||
```
|
||||
|
||||
Note that in this case, all fields in the referenced resource's `spec` will be exposed to the end user, and no metadata (e.g. `annotations`, etc.) is allowed to be set in trait properties. Hence this approach is normally used when you want to bring your own CRD and controller as a trait, and it does not rely on `annotations`, etc. as tuning knobs.
|
||||
|
||||
## Using CUE as Trait Schematic
|
||||
|
||||
The recommended approach is to define a CUE based schematic for the trait as well. In this case, it comes with abstraction and you have full flexibility to template any resources and fields you want. Note that KubeVela requires all traits to be defined in the `outputs` section (not `output`) of the CUE template, with the format below:
|
||||
|
||||
```cue
|
||||
outputs: <unique-name>:
|
||||
<full template data>
|
||||
```
|
||||
|
||||
Below is an example of the `ingress` trait.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: ingress
|
||||
spec:
|
||||
podDisruptive: false
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
parameter: {
|
||||
domain: string
|
||||
http: [string]: int
|
||||
}
|
||||
|
||||
// trait template can have multiple outputs in one trait
|
||||
outputs: service: {
|
||||
apiVersion: "v1"
|
||||
kind: "Service"
|
||||
spec: {
|
||||
selector:
|
||||
app: context.name
|
||||
ports: [
|
||||
for k, v in parameter.http {
|
||||
port: v
|
||||
targetPort: v
|
||||
},
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
outputs: ingress: {
|
||||
apiVersion: "networking.k8s.io/v1beta1"
|
||||
kind: "Ingress"
|
||||
metadata:
|
||||
name: context.name
|
||||
spec: {
|
||||
rules: [{
|
||||
host: parameter.domain
|
||||
http: {
|
||||
paths: [
|
||||
for k, v in parameter.http {
|
||||
path: k
|
||||
backend: {
|
||||
serviceName: context.name
|
||||
servicePort: v
|
||||
}
|
||||
},
|
||||
]
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Let's attach this trait to a component instance in `Application`:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: testapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
cmd:
|
||||
- node
|
||||
- server.js
|
||||
image: oamdev/testapp:v1
|
||||
port: 8080
|
||||
traits:
|
||||
- type: ingress
|
||||
properties:
|
||||
domain: test.my.domain
|
||||
http:
|
||||
"/api": 8080
|
||||
```
|
||||
|
||||
CUE based trait definitions also enable many other advanced scenarios such as patching and data passing. They will be explained in detail in the following documentation.
|
||||
|
|
@ -1,133 +0,0 @@
|
|||
---
|
||||
title: Managing Capabilities
|
||||
---
|
||||
|
||||
In KubeVela, developers can install more capabilities (i.e. new component types and traits) from any GitHub repo that contains OAM definition files. We call these GitHub repos _Capability Centers_.
|
||||
|
||||
KubeVela is able to discover OAM definition files in such a repo automatically and sync them to your own KubeVela platform.
|
||||
|
||||
## Add a capability center
|
||||
|
||||
Add and sync a capability center in KubeVela:
|
||||
|
||||
```bash
|
||||
$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
|
||||
successfully sync 1/1 from my-center remote center
|
||||
Successfully configured capability center my-center and sync from remote
|
||||
|
||||
$ vela cap center sync my-center
|
||||
successfully sync 1/1 from my-center remote center
|
||||
sync finished
|
||||
```
|
||||
|
||||
Now, this capability center `my-center` is ready to use.
|
||||
|
||||
## List capability centers
|
||||
|
||||
You are allowed to add more capability centers and list them.
|
||||
|
||||
```bash
|
||||
$ vela cap center ls
|
||||
NAME ADDRESS
|
||||
my-center https://github.com/oam-dev/catalog/tree/master/registry
|
||||
```
|
||||
|
||||
## [Optional] Remove a capability center
|
||||
|
||||
Or, remove one.
|
||||
|
||||
```bash
|
||||
$ vela cap center remove my-center
|
||||
```
|
||||
|
||||
## List all available capabilities in capability center
|
||||
|
||||
Or, list all available capabilities in a certain center.
|
||||
|
||||
```bash
|
||||
$ vela cap ls my-center
|
||||
NAME CENTER TYPE DEFINITION STATUS APPLIES-TO
|
||||
clonesetservice my-center componentDefinition clonesets.apps.kruise.io uninstalled []
|
||||
```
|
||||
|
||||
## Install a capability from capability center
|
||||
|
||||
Now let's try to install the new component named `clonesetservice` from `my-center` to your own KubeVela platform.
|
||||
|
||||
You need to install OpenKruise first.
|
||||
|
||||
```shell
|
||||
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
|
||||
```
|
||||
|
||||
Install the `clonesetservice` component from `my-center`.
|
||||
|
||||
```bash
|
||||
$ vela cap install my-center/clonesetservice
|
||||
Installing component capability clonesetservice
|
||||
Successfully installed capability clonesetservice from my-center
|
||||
```
|
||||
|
||||
## Use the newly installed capability
|
||||
|
||||
Let's first check that `clonesetservice` appears in your platform:
|
||||
|
||||
```bash
|
||||
$ vela components
|
||||
NAME NAMESPACE WORKLOAD DESCRIPTION
|
||||
clonesetservice vela-system clonesets.apps.kruise.io Describes long-running, scalable, containerized services
|
||||
that have a stable network endpoint to receive external
|
||||
network traffic from customers. If workload type is skipped
|
||||
for any service defined in Appfile, it will be defaulted to
|
||||
`webservice` type.
|
||||
```
|
||||
|
||||
Great! Now let's deploy an app via Appfile.
|
||||
|
||||
```bash
|
||||
$ cat << EOF > vela.yaml
|
||||
name: testapp
|
||||
services:
|
||||
testsvc:
|
||||
type: clonesetservice
|
||||
image: crccheck/hello-world
|
||||
port: 8000
|
||||
EOF
|
||||
```
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
Parsing vela appfile ...
|
||||
Load Template ...
|
||||
|
||||
Rendering configs for service (testsvc)...
|
||||
Writing deploy config to (.vela/deploy.yaml)
|
||||
|
||||
Applying application ...
|
||||
Checking if app has been deployed...
|
||||
App has not been deployed, creating a new deployment...
|
||||
Updating: core.oam.dev/v1alpha2, Kind=HealthScope in default
|
||||
✅ App has been deployed 🚀🚀🚀
|
||||
Port forward: vela port-forward testapp
|
||||
SSH: vela exec testapp
|
||||
Logging: vela logs testapp
|
||||
App status: vela status testapp
|
||||
Service status: vela status testapp --svc testsvc
|
||||
```
|
||||
|
||||
Then you can get a CloneSet in your environment.
|
||||
|
||||
```shell
|
||||
$ kubectl get clonesets.apps.kruise.io
|
||||
NAME DESIRED UPDATED UPDATED_READY READY TOTAL AGE
|
||||
testsvc 1 1 1 1 1 46s
|
||||
```
|
||||
|
||||
## Uninstall a capability
|
||||
|
||||
> NOTE: make sure no apps are using the capability before uninstalling.
|
||||
|
||||
```bash
|
||||
$ vela cap uninstall my-center/clonesetservice
|
||||
Successfully uninstalled capability clonesetservice
|
||||
```
|
||||
|
|
@ -1,9 +0,0 @@
|
|||
---
|
||||
title: Check Application Logs
|
||||
---
|
||||
|
||||
```bash
|
||||
$ vela logs testapp
|
||||
```
|
||||
|
||||
It will let you select the container to get logs from. If there is only one container, it will be selected automatically.
|
||||
|
|
@ -1,103 +0,0 @@
|
|||
---
|
||||
title: The Reference Documentation Guide of Capabilities
|
||||
---
|
||||
|
||||
In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
|
||||
|
||||
This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones); also, by design, KubeVela allows platform administrators to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that documentation stays up-to-date?
|
||||
|
||||
## Using Browser
|
||||
|
||||
Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on its template in the definition. This feature works for any capability: either the built-in ones or your own workload types/traits.
|
||||
|
||||
Thus, as an end user, the only thing you need to do is:
|
||||
|
||||
```console
|
||||
$ vela show WORKLOAD_TYPE or TRAIT --web
|
||||
```
|
||||
|
||||
This command will automatically open the reference documentation for the given workload type or trait in your default browser.
|
||||
|
||||
### For Workload Types
|
||||
|
||||
Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` workload type will show up immediately as below:
|
||||
|
||||

|
||||
|
||||
Note that in the section named `Specification`, it even provides you with a full usage sample of this workload type, under a fake name `my-service-name`.
|
||||
|
||||
### For Traits
|
||||
|
||||
Similarly, we can also do `$ vela show autoscale --web`:
|
||||
|
||||

|
||||
|
||||
With these auto-generated reference docs, we could easily complete the application description by simple copy-paste, for example:
|
||||
|
||||
```yaml
|
||||
name: helloworld
|
||||
|
||||
services:
|
||||
backend: # copy-paste from the webservice ref doc above
|
||||
image: oamdev/testapp:v1
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
cpu: "0.1"
|
||||
|
||||
autoscale: # copy-paste and modify from autoscaler ref doc above
|
||||
min: 1
|
||||
max: 8
|
||||
cron:
|
||||
startAt: "19:00"
|
||||
duration: "2h"
|
||||
days: "Friday"
|
||||
replicas: 4
|
||||
timezone: "America/Los_Angeles"
|
||||
```
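
For instance, once the copy-pasted snippet above is saved as `vela.yaml` in the current directory, a minimal way to try it is:

```bash
# Deploy the copy-pasted description and check the result
vela up
vela status helloworld
```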
|
||||
|
||||
## Using Terminal
|
||||
|
||||
This reference doc feature also works for terminal-only case. For example:
|
||||
|
||||
```shell
|
||||
$ vela show webservice
|
||||
# Properties
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
| cmd | Commands to run in the container | []string | false | |
|
||||
| env | Define arguments by using environment variables | [[]env](#env) | false | |
|
||||
| image | Which image would you like to use for your service | string | true | |
|
||||
| port | Which port do you want customer traffic sent to | int | true | 80 |
|
||||
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
|
||||
|
||||
## env
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
| name | Environment variable name | string | true | |
|
||||
| value | The value of the environment variable | string | false | |
|
||||
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
|
||||
|
||||
### valueFrom
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
|
||||
|
||||
#### secretKeyRef
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
| name | The name of the secret in the pod's namespace to select from | string | true | |
|
||||
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
```
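
The same works for traits in the terminal; for example (output omitted here):

```bash
# Show the schema of a trait in the terminal as well
vela show autoscale
```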
|
||||
|
||||
> Note that for all the built-in capabilities, we already published their reference docs [here](https://kubevela.io/#/en/developers/references/) based on the same doc generation mechanism.
|
||||
|
|
@ -1,85 +0,0 @@
|
|||
---
|
||||
title: Configuring data/env in Application
|
||||
---
|
||||
|
||||
`vela` provides a `config` command to manage config data.
|
||||
|
||||
## `vela config set`
|
||||
|
||||
```bash
|
||||
$ vela config set test a=b c=d
|
||||
reading existing config data and merging with user input
|
||||
config data saved successfully ✅
|
||||
```
|
||||
|
||||
## `vela config get`
|
||||
|
||||
```bash
|
||||
$ vela config get test
|
||||
Data:
|
||||
a: b
|
||||
c: d
|
||||
```
|
||||
|
||||
## `vela config del`
|
||||
|
||||
```bash
|
||||
$ vela config del test
|
||||
config (test) deleted successfully
|
||||
```
|
||||
|
||||
## `vela config ls`
|
||||
|
||||
```bash
|
||||
$ vela config set test a=b
|
||||
$ vela config set test2 c=d
|
||||
$ vela config ls
|
||||
NAME
|
||||
test
|
||||
test2
|
||||
```
|
||||
|
||||
## Configure env in application
|
||||
|
||||
The config data can be set as environment variables in applications.
|
||||
|
||||
```bash
|
||||
$ vela config set demo DEMO_HELLO=helloworld
|
||||
```
|
||||
|
||||
Save the following to `vela.yaml` in the current directory:
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
services:
|
||||
env-config-demo:
|
||||
image: heroku/nodejs-hello-world
|
||||
config: demo
|
||||
```
|
||||
|
||||
Then run:
|
||||
```bash
|
||||
$ vela up
|
||||
Parsing vela.yaml ...
|
||||
Loading templates ...
|
||||
|
||||
Rendering configs for service (env-config-demo)...
|
||||
Writing deploy config to (.vela/deploy.yaml)
|
||||
|
||||
Applying deploy configs ...
|
||||
Checking if app has been deployed...
|
||||
App has not been deployed, creating a new deployment...
|
||||
✅ App has been deployed 🚀🚀🚀
|
||||
Port forward: vela port-forward testapp
|
||||
SSH: vela exec testapp
|
||||
Logging: vela logs testapp
|
||||
App status: vela status testapp
|
||||
Service status: vela status testapp --svc env-config-demo
|
||||
```
|
||||
|
||||
Check env var:
|
||||
|
||||
```
|
||||
$ vela exec testapp -- printenv | grep DEMO_HELLO
|
||||
DEMO_HELLO=helloworld
|
||||
```
|
||||
|
|
@ -1,93 +0,0 @@
|
|||
---
|
||||
title: Setting Up Deployment Environment
|
||||
---
|
||||
|
||||
A deployment environment is where you configure the workspace, contact email, and domain for your applications globally.
A typical set of deployment environments is `test`, `staging`, `prod`, etc.
|
||||
|
||||
## Create environment
|
||||
|
||||
```bash
|
||||
$ vela env init demo --email my@email.com
|
||||
environment demo created, Namespace: default, Email: my@email.com
|
||||
```
|
||||
|
||||
## Check the deployment environment metadata
|
||||
|
||||
```bash
|
||||
$ vela env ls
|
||||
NAME CURRENT NAMESPACE EMAIL DOMAIN
|
||||
default default
|
||||
demo * default my@email.com
|
||||
```
|
||||
|
||||
By default, the environment will use `default` namespace in K8s.
|
||||
|
||||
## Configure changes
|
||||
|
||||
You could change the config by running `vela env init` again.
|
||||
|
||||
```bash
|
||||
$ vela env init demo --namespace demo
|
||||
environment demo created, Namespace: demo, Email: my@email.com
|
||||
```
|
||||
|
||||
```bash
|
||||
$ vela env ls
|
||||
NAME CURRENT NAMESPACE EMAIL DOMAIN
|
||||
default default
|
||||
demo * demo my@email.com
|
||||
```
|
||||
|
||||
**Note that the created apps won't be affected, only newly created apps will use the updated info.**
|
||||
|
||||
## [Optional] Configure Domain if you have public IP
|
||||
|
||||
If your K8s cluster is provisioned by a cloud provider and has a public IP for ingress,
you could configure your domain in the environment. Then you'll be able to visit
your app via this domain, with mTLS supported automatically.
|
||||
|
||||
For example, you could get the public IP from the ingress service.
|
||||
|
||||
```bash
|
||||
$ kubectl get svc -A | grep LoadBalancer
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
nginx-ingress-lb LoadBalancer 172.21.2.174 123.57.10.233 80:32740/TCP,443:32086/TCP 41d
|
||||
```
|
||||
|
||||
The fourth column is the public IP. Configure an 'A' record for your custom domain.
|
||||
|
||||
```
|
||||
*.your.domain => 123.57.10.233
|
||||
```
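
Once the record has propagated, you can verify the wildcard resolution with standard DNS tooling (the hostname below is illustrative):

```bash
# Any subdomain should resolve to the ingress IP configured above
nslookup test.your.domain
```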
|
||||
|
||||
You could also use `123.57.10.233.xip.io` as your domain, if you don't have a custom one.
`xip.io` will automatically resolve the domain to the embedded IP `123.57.10.233`.
|
||||
|
||||
|
||||
```bash
|
||||
$ vela env init demo --domain 123.57.10.233.xip.io
|
||||
environment demo updated, Namespace: demo, Email: my@email.com
|
||||
```
|
||||
|
||||
### Using domain in Appfile
|
||||
|
||||
Since the domain is now configured globally in the deployment environment, you don't need to specify it in the route configuration anymore.
|
||||
|
||||
```yaml
|
||||
# in demo environment
|
||||
services:
|
||||
express-server:
|
||||
...
|
||||
|
||||
route:
|
||||
rules:
|
||||
- path: /testapp
|
||||
rewriteTarget: /
|
||||
```
|
||||
|
||||
```
|
||||
$ curl http://123.57.10.233.xip.io/testapp
|
||||
Hello World
|
||||
```
|
||||
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
---
|
||||
title: Execute Commands in Container
|
||||
---
|
||||
|
||||
Run:
|
||||
```
|
||||
$ vela exec testapp -- /bin/sh
|
||||
```
|
||||
|
||||
This opens a shell within the container of testapp.
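
You can also run a one-off command instead of an interactive shell; a small sketch (the `ls` target is illustrative):

```bash
# Run a single command in the container and exit
vela exec testapp -- ls /
```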
|
||||
|
|
@ -1,238 +0,0 @@
|
|||
---
|
||||
title: Automatically scale workloads by resource utilization metrics and cron
|
||||
---
|
||||
|
||||
|
||||
|
||||
## Prerequisite
|
||||
Make sure auto-scaler trait controller is installed in your cluster
|
||||
|
||||
Install auto-scaler trait controller with helm
|
||||
|
||||
1. Add helm chart repo for autoscaler trait
|
||||
```shell script
|
||||
helm repo add oam.catalog http://oam.dev/catalog/
|
||||
```
|
||||
|
||||
2. Update the chart repo
|
||||
```shell script
|
||||
helm repo update
|
||||
```
|
||||
|
||||
3. Install autoscaler trait controller
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait
```
|
||||
|
||||
Autoscale depends on metrics server, please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) at the beginning.
|
||||
|
||||
> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center),
|
||||
> please install it if you can't find it in `vela traits`.
|
||||
|
||||
## Setting cron auto-scaling policy
|
||||
This section introduces how to automatically scale workloads by cron.
|
||||
|
||||
1. Prepare Appfile
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
|
||||
services:
|
||||
express-server:
|
||||
# this image will be used in both build and deploy steps
|
||||
image: oamdev/testapp:v1
|
||||
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
|
||||
autoscale:
|
||||
min: 1
|
||||
max: 4
|
||||
cron:
|
||||
startAt: "14:00"
|
||||
duration: "2h"
|
||||
days: "Monday, Thursday"
|
||||
replicas: 2
|
||||
timezone: "America/Los_Angeles"
|
||||
```
|
||||
|
||||
> The full specification of `autoscale` could show up by `$ vela show autoscale` or be found on [its reference documentation](../references/traits/autoscale)
|
||||
|
||||
2. Deploy an application
|
||||
|
||||
```
|
||||
$ vela up
|
||||
Parsing vela.yaml ...
|
||||
Loading templates ...
|
||||
|
||||
Rendering configs for service (express-server)...
|
||||
Writing deploy config to (.vela/deploy.yaml)
|
||||
|
||||
Applying deploy configs ...
|
||||
Checking if app has been deployed...
|
||||
App has not been deployed, creating a new deployment...
|
||||
✅ App has been deployed 🚀🚀🚀
|
||||
Port forward: vela port-forward testapp
|
||||
SSH: vela exec testapp
|
||||
Logging: vela logs testapp
|
||||
App status: vela status testapp
|
||||
Service status: vela status testapp --svc express-server
|
||||
```
|
||||
|
||||
3. Check the replicas and wait for the scaling to take effect
|
||||
|
||||
Check the replicas of the application, there is one replica.
|
||||
|
||||
```
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: default
|
||||
Created at: 2020-11-05 17:09:02.426632 +0800 CST
|
||||
Updated at: 2020-11-05 17:09:02.426632 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Traits:
|
||||
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/1
|
||||
Last Deployment:
|
||||
Created at: 2020-11-05 17:09:03 +0800 CST
|
||||
Updated at: 2020-11-05T17:09:02+08:00
|
||||
```
|
||||
|
||||
Wait until the time reaches `startAt`, and check again. The replicas become two, as specified by
`replicas` in `vela.yaml`.
|
||||
|
||||
```
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: default
|
||||
Created at: 2020-11-10 10:18:59.498079 +0800 CST
|
||||
Updated at: 2020-11-10 10:18:59.49808 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Type: webservice
|
||||
HEALTHY Ready: 2/2
|
||||
Traits:
|
||||
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/2
|
||||
Last Deployment:
|
||||
Created at: 2020-11-10 10:18:59 +0800 CST
|
||||
Updated at: 2020-11-10T10:18:59+08:00
|
||||
```
|
||||
|
||||
After the period ends, the replicas will eventually scale back to one, which you can confirm as shown below.
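
A minimal way to confirm the scale-down after the cron window closes:

```bash
# Re-check the service after the scheduled window; the current replica count should be back to 1
vela status testapp --svc express-server
```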
|
||||
|
||||
## Setting auto-scaling policy of CPU resource utilization
|
||||
This section introduces how to automatically scale workloads by CPU resource utilization.
|
||||
|
||||
1. Prepare Appfile
|
||||
|
||||
Modify `vela.yaml` as below. We add the field `services.express-server.cpu` and change the auto-scaling policy
from cron to CPU utilization by updating the field `services.express-server.autoscale`.
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
|
||||
services:
|
||||
express-server:
|
||||
image: oamdev/testapp:v1
|
||||
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
cpu: "0.01"
|
||||
|
||||
autoscale:
|
||||
min: 1
|
||||
max: 5
|
||||
cpuPercent: 10
|
||||
```
|
||||
|
||||
2. Deploy an application
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
3. Expose the service entrypoint of the application
|
||||
|
||||
```
|
||||
$ vela port-forward helloworld 80
|
||||
Forwarding from 127.0.0.1:80 -> 80
|
||||
Forwarding from [::1]:80 -> 80
|
||||
|
||||
Forward successfully! Opening browser ...
|
||||
Handling connection for 80
|
||||
Handling connection for 80
|
||||
Handling connection for 80
|
||||
Handling connection for 80
|
||||
```
|
||||
|
||||
On macOS, you might need to prefix the command with `sudo`.
|
||||
|
||||
4. Monitor the replicas changing
|
||||
|
||||
Continue to monitor how the replicas change as the application becomes overloaded. You can use the Apache HTTP server
benchmarking tool `ab` to send many requests to the application.
|
||||
|
||||
```
|
||||
$ ab -n 10000 -c 200 http://127.0.0.1/
|
||||
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
|
||||
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
|
||||
Licensed to The Apache Software Foundation, http://www.apache.org/
|
||||
|
||||
Benchmarking 127.0.0.1 (be patient)
|
||||
Completed 1000 requests
|
||||
```
|
||||
|
||||
The replicas gradually increase from one to four.
|
||||
|
||||
```
|
||||
$ vela status helloworld --svc frontend
|
||||
About:
|
||||
|
||||
Name: helloworld
|
||||
Namespace: default
|
||||
Created at: 2020-11-05 20:07:21.830118 +0800 CST
|
||||
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: frontend
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Traits:
|
||||
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/10% replicas(min/max/current): 1/5/2
|
||||
Last Deployment:
|
||||
Created at: 2020-11-05 20:07:23 +0800 CST
|
||||
Updated at: 2020-11-05T20:50:42+08:00
|
||||
```
|
||||
|
||||
```
|
||||
$ vela status helloworld --svc frontend
|
||||
About:
|
||||
|
||||
Name: helloworld
|
||||
Namespace: default
|
||||
Created at: 2020-11-05 20:07:21.830118 +0800 CST
|
||||
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: frontend
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Traits:
|
||||
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/14% replicas(min/max/current): 1/5/4
|
||||
Last Deployment:
|
||||
Created at: 2020-11-05 20:07:23 +0800 CST
|
||||
Updated at: 2020-11-05T20:50:42+08:00
|
||||
```
|
||||
|
||||
Stop the `ab` tool, and the replicas will eventually decrease to one.
|
||||
|
|
@ -1,107 +0,0 @@
|
|||
---
|
||||
title: Monitoring Application
|
||||
---
|
||||
|
||||
|
||||
If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with `metrics` capability.
|
||||
|
||||
## Prerequisite
|
||||
Make sure metrics trait controller is installed in your cluster
|
||||
|
||||
Install metrics trait controller with helm
|
||||
|
||||
1. Add helm chart repo for metrics trait
|
||||
```shell script
|
||||
helm repo add oam.catalog http://oam.dev/catalog/
|
||||
```
|
||||
|
||||
2. Update the chart repo
|
||||
```shell script
|
||||
helm repo update
|
||||
```
|
||||
|
||||
3. Install metrics trait controller
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
```
|
||||
|
||||
|
||||
> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
|
||||
> please install it if you can't find it in `vela traits`.
|
||||
|
||||
## Setting metrics policy
|
||||
Let's run [`christianhxc/gorandom:1.0`](https://github.com/christianhxc/prometheus-tutorial) as an example app.
|
||||
The app will emit random latencies as metrics.
|
||||
|
||||
|
||||
|
||||
|
||||
1. Prepare Appfile:
|
||||
|
||||
```bash
|
||||
$ cat <<EOF > vela.yaml
|
||||
name: metricapp
|
||||
services:
|
||||
metricapp:
|
||||
type: webservice
|
||||
image: christianhxc/gorandom:1.0
|
||||
port: 8080
|
||||
|
||||
metrics:
|
||||
enabled: true
|
||||
format: prometheus
|
||||
path: /metrics
|
||||
port: 0
|
||||
scheme: http
|
||||
EOF
|
||||
```
|
||||
|
||||
> The full specification of `metrics` could show up by `$ vela show metrics` or be found on [its reference documentation](../references/traits/metrics)
|
||||
|
||||
2. Deploy the application:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
3. Check status:
|
||||
|
||||
```bash
|
||||
$ vela status metricapp
|
||||
About:
|
||||
|
||||
Name: metricapp
|
||||
Namespace: default
|
||||
Created at: 2020-11-11 17:00:59.436347573 -0800 PST
|
||||
Updated at: 2020-11-11 17:01:06.511064661 -0800 PST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: metricapp
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Traits:
|
||||
- ✅ metrics: Monitoring port: 8080, path: /metrics, format: prometheus, schema: http.
|
||||
Last Deployment:
|
||||
Created at: 2020-11-11 17:00:59 -0800 PST
|
||||
Updated at: 2020-11-11T17:01:06-08:00
|
||||
```
|
||||
|
||||
The metrics trait will automatically discover the port and label to monitor if no parameters are specified.
If more than one port is found, it will choose the first one by default.
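
If you want to confirm the endpoint yourself, you can port-forward the app and curl the metrics path; a sketch (the local port mapping may differ from what is shown):

```bash
# Forward the app port, then fetch the Prometheus-format metrics declared in the Appfile
vela port-forward metricapp 8080 &
sleep 2
curl http://127.0.0.1:8080/metrics
```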
|
||||
|
||||
|
||||
**(Optional) Verify that the metrics are collected on Prometheus**
|
||||
|
||||
<details>
|
||||
|
||||
Expose the port of Prometheus dashboard:
|
||||
|
||||
```bash
|
||||
kubectl --namespace monitoring port-forward `kubectl -n monitoring get pods -l prometheus=oam -o name` 9090
|
||||
```
|
||||
|
||||
Then access the Prometheus dashboard via http://localhost:9090/targets
|
||||
|
||||

|
||||
|
||||
</details>
|
||||
|
|
@ -1,163 +0,0 @@
|
|||
---
|
||||
title: Setting Rollout Strategy
|
||||
---
|
||||
|
||||
> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
|
||||
> please install it if you can't find it in `vela traits`.
|
||||
|
||||
The `rollout` section is used to configure a Canary strategy to release your app.
|
||||
|
||||
Add rollout config under `express-server` along with a `route`.
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
services:
|
||||
express-server:
|
||||
type: webservice
|
||||
image: oamdev/testapp:rolling01
|
||||
port: 80
|
||||
|
||||
rollout:
|
||||
replicas: 5
|
||||
stepWeight: 20
|
||||
interval: "30s"
|
||||
|
||||
route:
|
||||
domain: "example.com"
|
||||
```
|
||||
|
||||
> The full specification of `rollout` could show up by `$ vela show rollout` or be found on [its reference documentation](../references/traits/rollout)
|
||||
|
||||
Apply this `appfile.yaml`:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
You could check the status by:
|
||||
|
||||
```bash
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: myenv
|
||||
Created at: 2020-11-09 17:34:38.064006 +0800 CST
|
||||
Updated at: 2020-11-10 17:05:53.903168 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: testapp
|
||||
Type: webservice
|
||||
HEALTHY Ready: 5/5
|
||||
Traits:
|
||||
- ✅ rollout: interval=5s
|
||||
replicas=5
|
||||
stepWeight=20
|
||||
- ✅ route: Visiting URL: http://example.com IP: <your-ingress-IP-address>
|
||||
|
||||
Last Deployment:
|
||||
Created at: 2020-11-09 17:34:38 +0800 CST
|
||||
Updated at: 2020-11-10T17:05:53+08:00
|
||||
```
|
||||
|
||||
Visit this app by:
|
||||
|
||||
```bash
|
||||
$ curl -H "Host:example.com" http://<your-ingress-IP-address>/
|
||||
Hello World -- Rolling 01
|
||||
```
|
||||
|
||||
On day 2, assume we have made some changes to our app, built a new image, and tagged it `oamdev/testapp:rolling02`.
|
||||
|
||||
Let's update the appfile by:
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
services:
|
||||
express-server:
|
||||
type: webservice
|
||||
- image: oamdev/testapp:rolling01
|
||||
+ image: oamdev/testapp:rolling02
|
||||
port: 80
|
||||
rollout:
|
||||
replicas: 5
|
||||
stepWeight: 20
|
||||
interval: "30s"
|
||||
route:
|
||||
domain: example.com
|
||||
```
|
||||
|
||||
Apply this `appfile.yaml` again:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
You could run `vela status` several times to see the instances rolling:
|
||||
|
||||
```shell script
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: myenv
|
||||
Created at: 2020-11-12 19:02:40.353693 +0800 CST
|
||||
Updated at: 2020-11-12 19:02:40.353693 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Type: webservice
|
||||
HEALTHY express-server-v2:Ready: 1/1 express-server-v1:Ready: 4/4
|
||||
Traits:
|
||||
- ✅ rollout: interval=30s
|
||||
replicas=5
|
||||
stepWeight=20
|
||||
- ✅ route: Visiting by using 'vela port-forward testapp --route'
|
||||
|
||||
Last Deployment:
|
||||
Created at: 2020-11-12 17:20:46 +0800 CST
|
||||
Updated at: 2020-11-12T19:02:40+08:00
|
||||
```
|
||||
|
||||
You could then `curl` your app multiple times and see how the app is being rolled out following the Canary strategy:
|
||||
|
||||
|
||||
```bash
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- This is rolling 02
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- Rolling 01
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- Rolling 01
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- This is rolling 02
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- Rolling 01
|
||||
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
|
||||
Hello World -- This is rolling 02
|
||||
```
|
||||
|
||||
|
||||
**How does `Rollout` work?**
|
||||
|
||||
<details>
|
||||
|
||||
`Rollout` trait implements progressive release process to rollout your app following [Canary strategy](https://martinfowler.com/bliki/CanaryRelease.html).
|
||||
|
||||
In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators such as the HTTP request success rate.
|
||||
|
||||
|
||||

|
||||
|
||||
In this sample, every `10s`, `5%` of the traffic will be shifted from the primary to the canary, until the traffic on the canary reaches `50%`. In the meantime, the number of canary instances will automatically scale to `replicas: 2` as configured in the Appfile.
|
||||
|
||||
|
||||
Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. When promoting, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion finishes.
|
||||
|
||||

|
||||
|
||||
> Note: KubeVela's `Rollout` trait is implemented with [Weaveworks Flagger](https://flagger.app/) operator.
|
||||
|
||||
</details>
|
||||
|
|
@ -1,82 +0,0 @@
|
|||
---
|
||||
title: Setting Routes
|
||||
---
|
||||
|
||||
The `route` section is used to configure the access to your app.
|
||||
|
||||
## Prerequisite
|
||||
Make sure route trait controller is installed in your cluster
|
||||
|
||||
Install route trait controller with helm
|
||||
|
||||
1. Add helm chart repo for route trait
|
||||
```shell script
|
||||
helm repo add oam.catalog http://oam.dev/catalog/
|
||||
```
|
||||
|
||||
2. Update the chart repo
|
||||
```shell script
|
||||
helm repo update
|
||||
```
|
||||
|
||||
3. Install route trait controller
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait
```
|
||||
|
||||
|
||||
> Note: route is one of the extension capabilities [installed from cap center](../cap-center),
|
||||
> please install it if you can't find it in `vela traits`.
|
||||
|
||||
## Setting route policy
|
||||
Add routing config under `express-server`:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
express-server:
|
||||
...
|
||||
|
||||
route:
|
||||
domain: example.com
|
||||
rules:
|
||||
- path: /testapp
|
||||
rewriteTarget: /
|
||||
```
|
||||
|
||||
> The full specification of `route` could show up by `$ vela show route` or be found on [its reference documentation](../references/traits/route)
|
||||
|
||||
Apply again:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
Check the status until we see the route is ready:
|
||||
```bash
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: default
|
||||
Created at: 2020-11-04 16:34:43.762730145 -0800 PST
|
||||
Updated at: 2020-11-11 16:21:37.761158941 -0800 PST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Last Deployment:
|
||||
Created at: 2020-11-11 16:21:37 -0800 PST
|
||||
Updated at: 2020-11-11T16:21:37-08:00
|
||||
Routes:
|
||||
- route: Visiting URL: http://example.com IP: <ingress-IP-address>
|
||||
```
|
||||
|
||||
**In [kind cluster setup](../../install#kind)**, you can visit the service via localhost:
|
||||
|
||||
> If not in a kind cluster, replace 'localhost' with the ingress address.
|
||||
|
||||
```
|
||||
$ curl -H "Host:example.com" http://localhost/testapp
|
||||
Hello World
|
||||
```
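
If you want to inspect what the route trait generated for the service, a hedged sketch (assuming the trait is backed by a Kubernetes Ingress in your cluster, which may differ per installation):

```bash
# Assumption: the route trait created an Ingress object; adjust if your setup differs
kubectl get ingress -o wide
```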
|
||||
|
|
@ -1,251 +0,0 @@
|
|||
---
|
||||
title: Learning Appfile
|
||||
---
|
||||
|
||||
A sample `Appfile` is as below:
|
||||
|
||||
```yaml
|
||||
name: testapp
|
||||
|
||||
services:
|
||||
frontend: # 1st service
|
||||
|
||||
image: oamdev/testapp:v1
|
||||
build:
|
||||
docker:
|
||||
file: Dockerfile
|
||||
context: .
|
||||
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
|
||||
route: # trait
|
||||
domain: example.com
|
||||
rules:
|
||||
- path: /testapp
|
||||
rewriteTarget: /
|
||||
|
||||
backend: # 2nd service
|
||||
type: task # workload type
|
||||
image: perl
|
||||
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
|
||||
```
|
||||
|
||||
Under the hood, `Appfile` will build the image from source code, and then generate the `Application` resource with the image name.
|
||||
|
||||
## Schema
|
||||
|
||||
> Before learning about Appfile's detailed schema, we recommend you to get familiar with [core concepts](../concepts) in KubeVela.
|
||||
|
||||
|
||||
```yaml
|
||||
name: _app-name_
|
||||
|
||||
services:
|
||||
_service-name_:
|
||||
# If `build` section exists, this field will be used as the name to build image. Otherwise, KubeVela will try to pull the image with given name directly.
|
||||
image: oamdev/testapp:v1
|
||||
|
||||
build:
|
||||
docker:
|
||||
file: _Dockerfile_path_ # relative path is supported, e.g. "./Dockerfile"
|
||||
context: _build_context_path_ # relative path is supported, e.g. "."
|
||||
|
||||
push:
|
||||
local: kind # optionally push to local KinD cluster instead of remote registry
|
||||
|
||||
type: webservice (default) | worker | task
|
||||
|
||||
# detailed configurations of workload
|
||||
... properties of the specified workload ...
|
||||
|
||||
_trait_1_:
|
||||
# properties of trait 1
|
||||
|
||||
_trait_2_:
|
||||
# properties of trait 2
|
||||
|
||||
... more traits and their properties ...
|
||||
|
||||
_another_service_name_: # more services can be defined
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
> To learn about how to set the properties of specific workload type or trait, please check the [reference documentation guide](./check-ref-doc).
|
||||
|
||||
## Example Workflow
|
||||
|
||||
In the following workflow, we will build and deploy an example NodeJS app under [examples/testapp/](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp).
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- [Docker](https://docs.docker.com/get-docker/) installed on the host
|
||||
- [KubeVela](../install) installed and configured
|
||||
|
||||
### 1. Download test app code
|
||||
|
||||
Clone the repository and go to the testapp directory:
|
||||
|
||||
```bash
|
||||
$ git clone https://github.com/oam-dev/kubevela.git
|
||||
$ cd kubevela/docs/examples/testapp
|
||||
```
|
||||
|
||||
The example contains the NodeJS app code and a Dockerfile to build the app.
|
||||
|
||||
### 2. Deploy app in one command
|
||||
|
||||
In the directory there is a [vela.yaml](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp/vela.yaml) which follows Appfile format supported by Vela.
|
||||
We are going to use it to build and deploy the app.
|
||||
|
||||
> NOTE: please change `oamdev` to your own registry account so you can push. Or, you could try the alternative approach described in the `Local testing without pushing image remotely` section.
|
||||
|
||||
```yaml
|
||||
image: oamdev/testapp:v1 # change this to your image
|
||||
```
|
||||
|
||||
Run the following command:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
Parsing vela.yaml ...
|
||||
Loading templates ...
|
||||
|
||||
Building service (express-server)...
|
||||
Sending build context to Docker daemon 71.68kB
|
||||
Step 1/10 : FROM mhart/alpine-node:12
|
||||
---> 9d88359808c3
|
||||
...
|
||||
|
||||
pushing image (oamdev/testapp:v1)...
|
||||
...
|
||||
|
||||
Rendering configs for service (express-server)...
|
||||
Writing deploy config to (.vela/deploy.yaml)
|
||||
|
||||
Applying deploy configs ...
|
||||
Checking if app has been deployed...
|
||||
App has not been deployed, creating a new deployment...
|
||||
✅ App has been deployed 🚀🚀🚀
|
||||
Port forward: vela port-forward testapp
|
||||
SSH: vela exec testapp
|
||||
Logging: vela logs testapp
|
||||
App status: vela status testapp
|
||||
Service status: vela status testapp --svc express-server
|
||||
```
|
||||
|
||||
|
||||
Check the status of the service:
|
||||
|
||||
```bash
|
||||
$ vela status testapp
|
||||
About:
|
||||
|
||||
Name: testapp
|
||||
Namespace: default
|
||||
Created at: 2020-11-02 11:08:32.138484 +0800 CST
|
||||
Updated at: 2020-11-02 11:08:32.138485 +0800 CST
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Last Deployment:
|
||||
Created at: 2020-11-02 11:08:33 +0800 CST
|
||||
Updated at: 2020-11-02T11:08:32+08:00
|
||||
Routes:
|
||||
|
||||
```
|
||||
|
||||
#### Alternative: Local testing without pushing image remotely
|
||||
|
||||
If you have a local [kind](../install) cluster running, you may try the local push option. No remote container registry is needed in this case.
|
||||
|
||||
Add local option to `build`:
|
||||
|
||||
```yaml
|
||||
build:
|
||||
# push image into local kind cluster without remote transfer
|
||||
push:
|
||||
local: kind
|
||||
|
||||
docker:
|
||||
file: Dockerfile
|
||||
context: .
|
||||
```
|
||||
|
||||
Then deploy the app to kind:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
<details><summary>(Advanced) Check rendered manifests</summary>
|
||||
|
||||
By default, Vela renders the final manifests in `.vela/deploy.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: ApplicationConfiguration
|
||||
metadata:
|
||||
name: testapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- componentName: express-server
|
||||
---
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: Component
|
||||
metadata:
|
||||
name: express-server
|
||||
namespace: default
|
||||
spec:
|
||||
workload:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: express-server
|
||||
...
|
||||
---
|
||||
apiVersion: core.oam.dev/v1alpha2
|
||||
kind: HealthScope
|
||||
metadata:
|
||||
name: testapp-default-health
|
||||
namespace: default
|
||||
spec:
|
||||
...
|
||||
```
|
||||
</details>
|
||||
|
||||
### [Optional] Configure another workload type
|
||||
|
||||
By now we have deployed a *[Web Service](references/component-types/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](references/component-types/task)* type in the same app:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
pi:
|
||||
type: task
|
||||
image: perl
|
||||
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
|
||||
|
||||
express-server:
|
||||
...
|
||||
```
|
||||
|
||||
Then deploy Appfile again to update the application:
|
||||
|
||||
```bash
|
||||
$ vela up
|
||||
```
|
||||
|
||||
Congratulations! You have just deployed an app using `Appfile`.
|
||||
|
||||
## What's Next?
|
||||
|
||||
Play more with your app:
|
||||
- [Check Application Logs](./check-logs)
|
||||
- [Execute Commands in Application Container](./exec-cmd)
|
||||
- [Access Application via Route](./port-forward)
|
||||
|
||||
|
|
@ -1,23 +0,0 @@
|
|||
---
|
||||
title: Port Forwarding
|
||||
---
|
||||
|
||||
Once the web services of your application are deployed, you can access them locally via `port-forward`.
|
||||
|
||||
```bash
|
||||
$ vela ls
|
||||
NAME APP WORKLOAD TRAITS STATUS CREATED-TIME
|
||||
express-server testapp webservice Deployed 2020-09-18 22:42:04 +0800 CST
|
||||
```
|
||||
|
||||
It will directly open the browser for you.
|
||||
|
||||
```bash
|
||||
$ vela port-forward testapp
|
||||
Forwarding from 127.0.0.1:8080 -> 80
|
||||
Forwarding from [::1]:8080 -> 80
|
||||
|
||||
Forward successfully! Opening browser ...
|
||||
Handling connection for 8080
|
||||
Handling connection for 8080
|
||||
```
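
With the forward running, the service is reachable on the local port printed above; for example:

```bash
# Request the forwarded service on the local port (8080 in the output above)
curl http://127.0.0.1:8080/
```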
|
||||
|
|
@ -1,113 +0,0 @@
|
|||
---
|
||||
title: Overview
|
||||
---
|
||||
|
||||
In this documentation, we will show how to check the detailed schema of a given capability (i.e. component type or trait).
|
||||
|
||||
This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones); also, KubeVela by design allows platform administrators to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that the documentation for the system stays up-to-date?
|
||||
|
||||
## Using Browser
|
||||
|
||||
Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on the template in its definition. This feature works for any capability: either built-in ones or your own workload types/traits.
|
||||
|
||||
Thus, as an end user, the only thing you need to do is:
|
||||
|
||||
```console
|
||||
$ vela show COMPONENT_TYPE or TRAIT --web
|
||||
```
|
||||
|
||||
This command will automatically open the reference documentation for given component type or trait in your default browser.
|
||||
|
||||
Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` component type will show up immediately as below:
|
||||
|
||||

|
||||
|
||||
Note that in the section named `Specification`, it even provides you with a full usage sample for this workload type, under a placeholder name `my-service-name`.
|
||||
|
||||
Similarly, we can also do `$ vela show autoscale`:
|
||||
|
||||

|
||||
|
||||
With these auto-generated reference docs, we could easily complete the application description by simple copy-paste, for example:
|
||||
|
||||
```yaml
|
||||
name: helloworld
|
||||
|
||||
services:
|
||||
backend: # copy-paste from the webservice ref doc above
|
||||
image: oamdev/testapp:v1
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
cpu: "0.1"
|
||||
|
||||
autoscale: # copy-paste and modify from autoscaler ref doc above
|
||||
min: 1
|
||||
max: 8
|
||||
cron:
|
||||
startAt: "19:00"
|
||||
duration: "2h"
|
||||
days: "Friday"
|
||||
replicas: 4
|
||||
timezone: "America/Los_Angeles"
|
||||
```
|
||||
|
||||
## Using Terminal
|
||||
|
||||
This reference doc feature also works for terminal-only case. For example:
|
||||
|
||||
```shell
|
||||
$ vela show webservice
|
||||
# Properties
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
| cmd | Commands to run in the container | []string | false | |
|
||||
| env | Define arguments by using environment variables | [[]env](#env) | false | |
|
||||
| image | Which image would you like to use for your service | string | true | |
|
||||
| port | Which port do you want customer traffic sent to | int | true | 80 |
|
||||
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
|
||||
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
|
||||
|
||||
|
||||
## env
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
| name | Environment variable name | string | true | |
|
||||
| value | The value of the environment variable | string | false | |
|
||||
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
|
||||
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
|
||||
|
||||
|
||||
### valueFrom
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
|
||||
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
|
||||
|
||||
|
||||
#### secretKeyRef
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
| name | The name of the secret in the pod's namespace to select from | string | true | |
|
||||
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
|
||||
+------+------------------------------------------------------------------+--------+----------+---------+
|
||||
```
|
||||
|
||||
## For Built-in Capabilities
|
||||
|
||||
Note that for all the built-in capabilities, we already published their reference docs below based on the same doc generation mechanism.
|
||||
|
||||
|
||||
- Workload Types
|
||||
- [webservice](component-types/webservice)
|
||||
- [task](component-types/task)
|
||||
- [worker](component-types/worker)
|
||||
- Traits
|
||||
- [route](traits/route)
|
||||
- [autoscale](traits/autoscale)
|
||||
- [rollout](traits/rollout)
|
||||
- [metrics](traits/metrics)
|
||||
- [scaler](traits/scaler)
|
||||
|
|
@ -1,27 +0,0 @@
|
|||
---
|
||||
title: Task
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Describes jobs that run code or a script to completion.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Task` workload type.
|
||||
|
||||
```yaml
|
||||
...
|
||||
image: perl
|
||||
count: 10
|
||||
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
cmd | Commands to run in the container | []string | false |
|
||||
count | Specify the number of tasks to run in parallel | int | true | 1
|
||||
restart | Define the job restart policy, the value can only be Never or OnFailure. By default, it's Never. | string | true | Never
|
||||
image | Which image would you like to use for your service | string | true |
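
Putting the properties together, a minimal Appfile sketch for a `Task` service (the app and service names are illustrative) could look like this:

```bash
# Deploy a one-off task that computes pi to 2000 digits
cat << EOF > vela.yaml
name: pi-app
services:
  pi:
    type: task
    image: perl
    count: 1
    cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
EOF
vela up
```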
|
||||
|
|
@ -1,61 +0,0 @@
|
|||
---
|
||||
title: Webservice
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers. If the workload type is skipped for any service defined in the Appfile, it defaults to the `webservice` type.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Webservice` workload type.
|
||||
|
||||
```yaml
|
||||
...
|
||||
image: oamdev/testapp:v1
|
||||
cmd: ["node", "server.js"]
|
||||
port: 8080
|
||||
cpu: "0.1"
|
||||
env:
|
||||
- name: FOO
|
||||
value: bar
|
||||
- name: FOO
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: bar
|
||||
key: bar
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
cmd | Commands to run in the container | []string | false |
|
||||
env | Define arguments by using environment variables | [[]env](#env) | false |
|
||||
image | Which image would you like to use for your service | string | true |
|
||||
port | Which port do you want customer traffic sent to | int | true | 80
|
||||
cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false |
|
||||
|
||||
|
||||
### env
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
name | Environment variable name | string | true |
|
||||
value | The value of the environment variable | string | false |
|
||||
valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false |
|
||||
|
||||
|
||||
#### valueFrom
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true |
|
||||
|
||||
|
||||
##### secretKeyRef
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
name | The name of the secret in the pod's namespace to select from | string | true |
|
||||
key | The key of the secret to select from. Must be a valid secret key | string | true |
|
||||
|
|
@ -1,24 +0,0 @@
|
|||
---
|
||||
title: Worker
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Describes long-running, scalable, containerized services that run in the backend. They do NOT have a network endpoint to receive external network traffic.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Worker` workload type.
|
||||
|
||||
```yaml
|
||||
...
|
||||
image: oamdev/testapp:v1
|
||||
cmd: ["node", "server.js"]
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
cmd | Commands to run in the container | []string | false |
|
||||
image | Which image would you like to use for your service | string | true |
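
Similarly, a minimal Appfile sketch for a `Worker` service (the app and service names are illustrative) could be:

```bash
# Deploy a backend worker with no network endpoint
cat << EOF > vela.yaml
name: backend-app
services:
  worker-svc:
    type: worker
    image: oamdev/testapp:v1
    cmd: ["node", "server.js"]
EOF
vela up
```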
|
||||
|
|
@ -1,28 +0,0 @@
|
|||
---
|
||||
title: KubeVela CLI
|
||||
---
|
||||
|
||||
### Auto-completion
|
||||
|
||||
#### bash
|
||||
|
||||
```bash
|
||||
To load completions in your current shell session:
|
||||
$ source <(vela completion bash)
|
||||
|
||||
To load completions for every new session, execute once:
|
||||
Linux:
|
||||
$ vela completion bash > /etc/bash_completion.d/vela
|
||||
MacOS:
|
||||
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
|
||||
```
|
||||
|
||||
#### zsh
|
||||
|
||||
```bash
|
||||
To load completions in your current shell session:
|
||||
$ source <(vela completion zsh)
|
||||
|
||||
To load completions for every new session, execute once:
|
||||
$ vela completion zsh > "${fpath[1]}/_vela"
|
||||
```
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
|
||||
# KubeVela Dashboard (WIP)
|
||||
|
||||
KubeVela has a simple client-side dashboard for you to interact with. The functionality is equivalent to the vela CLI.
|
||||
|
||||
```bash
|
||||
$ vela dashboard
|
||||
```
|
||||
|
||||
> NOTE: this feature is still under development.
|
||||
|
|
@ -1,304 +0,0 @@
|
|||
---
|
||||
title: FAQ
|
||||
---
|
||||
|
||||
- [Compare to X](#Compare-to-X)
|
||||
* [What is the difference between KubeVela and Helm?](#What-is-the-difference-between-KubeVela-and-Helm?)
|
||||
|
||||
- [Issues](#issues)
|
||||
* [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated)
|
||||
* [Error: ScopeDefinition exists](#error-scopedefinition-exists)
|
||||
* [You have reached your pull rate limit](#You-have-reached-your-pull-rate-limit)
|
||||
* [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists)
|
||||
* [How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#how-to-fix-issue-mutatingwebhookconfiguration-mutating-webhook-configuration-exists)
|
||||
|
||||
- [Operating](#operating)
|
||||
* [Autoscale: how to enable metrics server in various Kubernetes clusters?](#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters)
|
||||
|
||||
## Compare to X
|
||||
|
||||
### What is the difference between KubeVela and Helm?
|
||||
|
||||
KubeVela is a platform builder tool to create easy-to-use yet extensible app delivery/management systems with Kubernetes. KubeVela relies on Helm as the templating engine and package format for apps. But Helm is not the only templating module that KubeVela supports; another first-class supported approach is CUE.
|
||||
|
||||
Also, KubeVela is by design a Kubernetes controller (i.e. it works on the server side); even for its Helm part, a Helm operator will be installed.
|
||||
|
||||
## Issues
|
||||
|
||||
### Error: unable to create new content in namespace cert-manager because it is being terminated
|
||||
|
||||
Occasionally you might hit the issue below. It happens when the deletion of the last KubeVela release hasn't completed.
|
||||
|
||||
```
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
|
||||
failed to create resource
|
||||
helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
|
||||
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
|
||||
...
|
||||
Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
|
||||
```
|
||||
|
||||
Take a break and try again in a few seconds.
|
||||
|
||||
```
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
Vela system along with OAM runtime already exist.
|
||||
Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8)
|
||||
|
||||
TYPE CATEGORY DESCRIPTION
|
||||
-task workload One-off task to run a piece of code or script to completion
|
||||
-webservice workload Long-running scalable service with stable endpoint to receive external traffic
|
||||
-worker workload Long-running scalable backend worker without network endpoint
|
||||
-autoscale trait Automatically scale the app following certain triggers or metrics
|
||||
-metrics trait Configure metrics targets to be monitored for the app
|
||||
-rollout trait Configure canary deployment strategy to release the app
|
||||
-route trait Configure route policy to the app
|
||||
-scaler trait Manually scale the app
|
||||
|
||||
- Finished successfully.
|
||||
```
|
||||
|
||||
Then manually apply all WorkloadDefinition and TraitDefinition manifests to get all capabilities back.
|
||||
|
||||
```
|
||||
$ kubectl apply -f charts/vela-core/templates/defwithtemplate
|
||||
traitdefinition.core.oam.dev/autoscale created
|
||||
traitdefinition.core.oam.dev/scaler created
|
||||
traitdefinition.core.oam.dev/metrics created
|
||||
traitdefinition.core.oam.dev/rollout created
|
||||
traitdefinition.core.oam.dev/route created
|
||||
workloaddefinition.core.oam.dev/task created
|
||||
workloaddefinition.core.oam.dev/webservice created
|
||||
workloaddefinition.core.oam.dev/worker created
|
||||
|
||||
$ vela workloads
|
||||
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)
|
||||
|
||||
TYPE CATEGORY DESCRIPTION
|
||||
+task workload One-off task to run a piece of code or script to completion
|
||||
+webservice workload Long-running scalable service with stable endpoint to receive external traffic
|
||||
+worker workload Long-running scalable backend worker without network endpoint
|
||||
+autoscale trait Automatically scale the app following certain triggers or metrics
|
||||
+metrics trait Configure metrics targets to be monitored for the app
|
||||
+rollout trait Configure canary deployment strategy to release the app
|
||||
+route trait Configure route policy to the app
|
||||
+scaler trait Manually scale the app
|
||||
|
||||
NAME DESCRIPTION
|
||||
task One-off task to run a piece of code or script to completion
|
||||
webservice Long-running scalable service with stable endpoint to receive external traffic
|
||||
worker Long-running scalable backend worker without network endpoint
|
||||
```
|
||||
|
||||
### Error: ScopeDefinition exists
|
||||
|
||||
Occasionally you might hit the issue below. It happens when there is an old OAM Kubernetes Runtime release, or you have applied a `ScopeDefinition` before.
|
||||
|
||||
```
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
|
||||
rendered manifests contain a resource that already exists. Unable to continue with install
|
||||
helm.sh/helm/v3/pkg/action.(*Install).Run
|
||||
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
|
||||
...
|
||||
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
|
||||
```
|
||||
|
||||
Delete `ScopeDefinition` "healthscopes.core.oam.dev" and try again.
|
||||
|
||||
```
|
||||
$ kubectl delete ScopeDefinition "healthscopes.core.oam.dev"
|
||||
scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted
|
||||
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452
|
||||
WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capabilityWARN: handle trait template `manualscalertraits.core.oam.dev` failed
|
||||
: no template found, you will unable to use this trait capabilityAutomatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)
|
||||
|
||||
TYPE CATEGORY DESCRIPTION
|
||||
+task workload One-off task to run a piece of code or script to completion
|
||||
+webservice workload Long-running scalable service with stable endpoint to receive external traffic
|
||||
+worker workload Long-running scalable backend worker without network endpoint
|
||||
+autoscale trait Automatically scale the app following certain triggers or metrics
|
||||
+metrics trait Configure metrics targets to be monitored for the app
|
||||
+rollout trait Configure canary deployment strategy to release the app
|
||||
+route trait Configure route policy to the app
|
||||
+scaler trait Manually scale the app
|
||||
|
||||
- Finished successfully.
|
||||
```
|
||||
|
||||
### You have reached your pull rate limit
|
||||
|
||||
You may look into the logs of the Pod kubevela-vela-core and find the issue below.
|
||||
|
||||
```
|
||||
$ kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m
|
||||
```
|
||||
|
||||
>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by
|
||||
>authenticating and upgrading: https://www.docker.com/increase-rate-limit
|
||||
|
||||
You can use the GitHub container registry instead.
|
||||
|
||||
```
|
||||
$ docker pull ghcr.io/oam-dev/kubevela/vela-core:latest
|
||||
```
|
||||
|
||||
### Warning: Namespace cert-manager exists
|
||||
|
||||
If you hit the issue below, a `cert-manager` release might exist whose namespace and RBAC-related resources conflict
with KubeVela.
|
||||
|
||||
```
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
rendered manifests contain a resource that already exists. Unable to continue with install
|
||||
helm.sh/helm/v3/pkg/action.(*Install).Run
|
||||
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
|
||||
...
|
||||
/opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
|
||||
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
```
|
||||
|
||||
Try these steps to fix the problem.
|
||||
|
||||
- Delete release `cert-manager`
|
||||
- Delete namespace `cert-manager`
|
||||
- Install KubeVela again
|
||||
|
||||
```
|
||||
$ helm delete cert-manager -n cert-manager
|
||||
release "cert-manager" uninstalled
|
||||
|
||||
$ kubectl delete ns cert-manager
|
||||
namespace "cert-manager" deleted
|
||||
|
||||
$ vela install
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379
|
||||
Automatically discover capabilities successfully ✅ (no changes)
|
||||
|
||||
TYPE CATEGORY DESCRIPTION
|
||||
task workload One-off task to run a piece of code or script to completion
|
||||
webservice workload Long-running scalable service with stable endpoint to receive external traffic
|
||||
worker workload Long-running scalable backend worker without network endpoint
|
||||
autoscale trait Automatically scale the app following certain triggers or metrics
|
||||
metrics trait Configure metrics targets to be monitored for the app
|
||||
rollout trait Configure canary deployment strategy to release the app
|
||||
route trait Configure route policy to the app
|
||||
scaler trait Manually scale the app
|
||||
- Finished successfully.
|
||||
```
|
||||
|
||||
### How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?
|
||||
|
||||
If you have deployed other services that apply a MutatingWebhookConfiguration named `mutating-webhook-configuration`, installing KubeVela will hit the issue below.
|
||||
|
||||
```shell
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
|
||||
Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
rendered manifests contain a resource that already exists. Unable to continue with install
|
||||
helm.sh/helm/v3/pkg/action.(*Install).Run
|
||||
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
|
||||
github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime
|
||||
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
|
||||
github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run
|
||||
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
|
||||
github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2
|
||||
/home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
|
||||
github.com/spf13/cobra.(*Command).execute
|
||||
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
|
||||
github.com/spf13/cobra.(*Command).ExecuteC
|
||||
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
|
||||
github.com/spf13/cobra.(*Command).Execute
|
||||
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
|
||||
main.main
|
||||
/home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
|
||||
runtime.main
|
||||
/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
|
||||
runtime.goexit
|
||||
/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
|
||||
Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
```
|
||||
|
||||
To fix this issue, please upgrade the KubeVela CLI `vela` to a version higher than `v0.2.2` from [KubeVela releases](https://github.com/oam-dev/kubevela/releases).
|
||||
|
||||
## Operating
|
||||
|
||||
### Autoscale: how to enable metrics server in various Kubernetes clusters?
|
||||
|
||||
The Autoscale trait depends on the metrics server, so it has to be enabled in your cluster. Please check whether the metrics server is enabled with `kubectl top nodes` or `kubectl top pods`.
|
||||
|
||||
If the output is similar to the following, the metrics server is enabled.
|
||||
|
||||
```shell
|
||||
$ kubectl top nodes
|
||||
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
|
||||
cn-hongkong.10.0.1.237 288m 7% 5378Mi 78%
|
||||
cn-hongkong.10.0.1.238 351m 8% 5113Mi 74%
|
||||
|
||||
$ kubectl top pods
|
||||
NAME CPU(cores) MEMORY(bytes)
|
||||
php-apache-65f444bf84-cjbs5 0m 1Mi
|
||||
wordpress-55c59ccdd5-lf59d 1m 66Mi
|
||||
```
|
||||
|
||||
Otherwise, you have to manually enable the metrics server in your Kubernetes cluster.
|
||||
|
||||
- ACK (Alibaba Cloud Container Service for Kubernetes)
|
||||
|
||||
Metrics server is already enabled.
|
||||
|
||||
- ASK (Alibaba Cloud Serverless Kubernetes)
|
||||
|
||||
The metrics server has to be enabled in the `Operations/Add-ons` section of the [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.
|
||||
|
||||

|
||||
|
||||
Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.
|
||||
|
||||
- Kind
|
||||
|
||||
Install the metrics server as below, or install the [latest version](https://github.com/kubernetes-sigs/metrics-server#installation).
|
||||
|
||||
```shell
|
||||
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
|
||||
```
|
||||
|
||||
Also add the following arguments under `.spec.template.spec.containers` in the YAML loaded by `kubectl edit deploy -n kube-system metrics-server`.
|
||||
|
||||
Note: this is just a workaround, not for production-level use.
|
||||
|
||||
```
|
||||
command:
|
||||
- /metrics-server
|
||||
- --kubelet-insecure-tls
|
||||
```
|
||||
|
||||
- MiniKube
|
||||
|
||||
Enable it with the following command.
|
||||
|
||||
```shell
|
||||
$ minikube addons enable metrics-server
|
||||
```
|
||||
|
||||
|
||||
Have fun [setting autoscale](../../extensions/set-autoscale) on your application.
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
---
|
||||
title: Restful API
|
||||
---
|
||||
import useBaseUrl from '@docusaurus/useBaseUrl';
|
||||
|
||||
<a
|
||||
target="_blank"
|
||||
href={useBaseUrl('/restful-api')}>
|
||||
KubeVela Restful API
|
||||
</a>
|
||||
|
|
@ -1,44 +0,0 @@
|
|||
---
|
||||
title: Autoscale
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Automatically scales workloads by resource utilization metrics or cron triggers.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for an `Autoscale` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
min: 1
|
||||
max: 4
|
||||
cron:
|
||||
startAt: "14:00"
|
||||
duration: "2h"
|
||||
days: "Monday, Thursday"
|
||||
replicas: 2
|
||||
timezone: "America/Los_Angeles"
|
||||
cpuPercent: 10
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
min | Minimal replicas of the workload | int | true |
|
||||
max | Maximal replicas of the workload | int | true |
|
||||
cpuPercent | Specify the value for CPU utilization, like 80, which means 80% | int | false |
|
||||
cron | Cron-type auto-scaling. Only for `appfile`, not available for CLI usage | [cron](#cron) | false |
|
||||
|
||||
|
||||
### cron
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
startAt | The time to start scaling, like `08:00` | string | true |
|
||||
duration | For how long the scaling will last | string | true |
|
||||
days | Several workdays or weekends, like "Monday, Tuesday" | string | true |
|
||||
replicas | The target replicas to be scaled to | int | true |
|
||||
timezone | Timezone, like "America/Los_Angeles" | string | true |
|
||||
|
|
@ -1,25 +0,0 @@
|
|||
---
|
||||
title: Ingress
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Configures a K8s ingress and service to enable web traffic for your service. Please use the route trait in the cap center for advanced usage.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for an `Ingress` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
domain: testsvc.example.com
|
||||
http:
|
||||
/: 8000
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
domain | | string | true |
|
||||
http | | map[string]int | true |
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
---
|
||||
title: Metrics
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Configures monitoring metrics for your service.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Metrics` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
format: "prometheus"
|
||||
port: 8080
|
||||
path: "/metrics"
|
||||
scheme: "http"
|
||||
enabled: true
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
path | The metrics path of the service | string | true | /metrics
|
||||
format | Format of the metrics, default as prometheus | string | true | prometheus
|
||||
scheme | The way to retrieve data which can take the values `http` or `https` | string | true | http
|
||||
enabled | | bool | true | true
|
||||
port | The port for metrics, will discovery automatically by default | int | true | 0
|
||||
selector | The label selector for the pods, will discovery automatically by default | map[string]string | false |
|
||||
|
|
@ -1,35 +0,0 @@
|
|||
---
|
||||
title: Rollout
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Configures Canary deployment strategy for your application.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Rollout` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
rollout:
|
||||
replicas: 2
|
||||
stepWeight: 50
|
||||
interval: "10s"
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
interval | Schedule interval time | string | true | 30s
|
||||
stepWeight | Weight percent of every step in rolling update | int | true | 50
|
||||
replicas | Total replicas of the workload | int | true | 2
|
||||
|
||||
## Conflicts With
|
||||
|
||||
### `Autoscale`
|
||||
|
||||
When the `Rollout` and `Autoscale` traits are attached to the same service, the two will fight over the number of instances during rollout. Thus, it's by design that `Rollout` takes over replicas control (specified by the `.replicas` field) during rollout; a sketch follows the note below.
|
||||
|
||||
> Note: in upcoming releases, KubeVela will introduce a separate section in Appfile to define release phase configurations such as `Rollout`.
|
||||
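For illustration, here is a minimal sketch (not from the original docs) of a service carrying both traits in an `Application`. During rollout, the `Rollout` trait's `replicas` is the source of truth, while the `Autoscale` range is expected to apply once the rollout finishes:

```yaml
# Sketch only: trait names and properties follow the tables in this reference.
traits:
  - type: rollout
    properties:
      replicas: 2
      stepWeight: 50
      interval: "30s"
  - type: autoscale
    properties:
      min: 1
      max: 4
      cpuPercent: 60
```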
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: Route
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Configures external access to your service.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Route` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
domain: example.com
|
||||
issuer: tls
|
||||
rules:
|
||||
- path: /testapp
|
||||
rewriteTarget: /
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
domain | Domain name | string | true | empty
|
||||
issuer | | string | true | empty
|
||||
rules | | [[]rules](#rules) | false |
|
||||
provider | | string | false |
|
||||
ingressClass | | string | false |
|
||||
|
||||
|
||||
### rules
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
path | | string | true |
|
||||
rewriteTarget | | string | true | empty
|
||||
|
|
@ -1,23 +0,0 @@
|
|||
---
|
||||
title: Scaler
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Configures replicas for your service.
|
||||
|
||||
## Specification
|
||||
|
||||
List of all configuration options for a `Scaler` trait.
|
||||
|
||||
```yaml
|
||||
...
|
||||
scaler:
|
||||
replicas: 100
|
||||
```
|
||||
|
||||
## Properties
|
||||
|
||||
Name | Description | Type | Required | Default
|
||||
------------ | ------------- | ------------- | ------------- | -------------
|
||||
replicas | Replicas of the workload | int | true | 1
|
||||
|
|
@ -1,89 +0,0 @@
|
|||
---
|
||||
title: How-to
|
||||
---
|
||||
|
||||
This section introduces how to declare Helm charts as app components via `ComponentDefinition`.
|
||||
|
||||
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
|
||||
|
||||
## Prerequisite
|
||||
|
||||
* Make sure you have enabled Helm support in the [installation guide](/docs/install).
|
||||
|
||||
## Declare `ComponentDefinition`
|
||||
|
||||
Here is an example `ComponentDefinition` that uses Helm as the schematic module.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webapp-chart
|
||||
annotations:
|
||||
definition.oam.dev/description: helm chart for webapp
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
helm:
|
||||
release:
|
||||
chart:
|
||||
spec:
|
||||
chart: "podinfo"
|
||||
version: "5.1.4"
|
||||
repository:
|
||||
url: "http://oam.dev/catalog/"
|
||||
```
|
||||
|
||||
In detail:
|
||||
- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check for [Known Limitations](/docs/helm/known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
|
||||
- `.spec.schematic.helm` contains information of Helm `release` and `repository` which leverages `fluxcd/flux2`.
|
||||
- i.e. the spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and the spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
|
||||
|
||||
## Declare an `Application`
|
||||
|
||||
Here is an example `Application`.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: demo-podinfo
|
||||
type: webapp-chart
|
||||
properties:
|
||||
image:
|
||||
tag: "5.1.2"
|
||||
```
|
||||
|
||||
The component `properties` are exactly the [overlay values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml) of the Helm chart, as sketched below.
|
||||
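In other words, the component above effectively supplies the following values overlay to the chart:

```yaml
# Values overlay derived from the component properties above.
image:
  tag: "5.1.2"
```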
|
||||
Deploy the application, and after several minutes (it may take time to fetch the Helm chart), you can check that the Helm release is installed.
|
||||
```shell
|
||||
$ helm ls -A
|
||||
myapp-demo-podinfo default 1 2021-03-05 02:02:18.692317102 +0000 UTC deployed podinfo-5.1.4 5.1.4
|
||||
```
|
||||
Check that the workload defined in the chart has been created successfully.
|
||||
```shell
|
||||
$ kubectl get deploy
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
myapp-demo-podinfo 1/1 1 1 66m
|
||||
```
|
||||
|
||||
Check that the values (`image.tag = 5.1.2`) from the application's `properties` are assigned to the chart.
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
|
||||
"ghcr.io/stefanprodan/podinfo:5.1.2"
|
||||
```
|
||||
|
||||
|
||||
### Generate Form from Helm Based Components
|
||||
|
||||
KubeVela will automatically generate an OpenAPI v3 JSON schema based on [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in a `ConfigMap` in the same `namespace` as the definition object. Furthermore, if `values.schema.json` is not provided by the chart author, KubeVela will generate the OpenAPI v3 JSON schema based on its `values.yaml` file automatically.
|
||||
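A quick way to inspect the generated schema is to look up the ConfigMap next to the definition. The name below is an assumption based on the `schema-<definition-name>` convention; list the ConfigMaps in the definition's namespace if yours differs.

```shell
# Hypothetical ConfigMap name following the schema-<definition-name> convention;
# the ConfigMap lives in the same namespace as the ComponentDefinition.
kubectl get configmap -n default | grep webapp-chart
kubectl get configmap schema-webapp-chart -n default -o yaml
```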
|
||||
Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) guide for more details on using this schema to render GUI forms.
|
||||
|
|
@ -1,82 +0,0 @@
|
|||
---
|
||||
title: Known Limitations
|
||||
---
|
||||
|
||||
## Limitations
|
||||
|
||||
Here are some known limitations for using Helm chart as application component.
|
||||
|
||||
### Workload Type Indicator
|
||||
|
||||
Following microservice best practices, KubeVela recommends having only one workload resource in one Helm chart. Please split your "super" Helm chart into multiple charts (i.e. components). Essentially, KubeVela relies on the `workload` field in the component definition to indicate the workload type it needs to take care of, for example:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
...
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
```
|
||||
```yaml
|
||||
...
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps.kruise.io/v1alpha1
|
||||
kind: CloneSet
|
||||
```
|
||||
|
||||
Note that KubeVela won't fail if multiple workload types are packaged in one chart; the issue is that further operational behaviors such as rollout, revisions, and traffic management can only take effect on the indicated workload type.
|
||||
|
||||
### Always Use Full Qualified Name
|
||||
|
||||
The name of the workload should be templated with the [fully qualified application name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415), and please do NOT assign any value to `.Values.fullnameOverride`. As a best practice, Helm also highly recommends that new charts be created via the `helm create` command so the template names are automatically defined per this convention, as sketched below.
|
||||
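For reference, this is roughly what the scaffolding generated by `helm create` looks like (a sketch; `mychart` is a placeholder chart name):

```yaml
# templates/deployment.yaml in a chart scaffolded by `helm create mychart`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
```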
|
||||
### Control the Application Upgrade
|
||||
|
||||
Changes made to the component `properties` will trigger a Helm release upgrade. This process is handled by Flux v2 Helm controller, hence you can define remediation
|
||||
strategies in the schematic based on [Helm Release
|
||||
documentation](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
|
||||
and [specification](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)
|
||||
in case failure happens during this upgrade.
|
||||
|
||||
For example:
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webapp-chart
|
||||
spec:
|
||||
...
|
||||
schematic:
|
||||
helm:
|
||||
release:
|
||||
chart:
|
||||
spec:
|
||||
chart: "podinfo"
|
||||
version: "5.1.4"
|
||||
upgrade:
|
||||
remediation:
|
||||
retries: 3
|
||||
remediationStrategy: rollback
|
||||
repository:
|
||||
url: "http://oam.dev/catalog/"
|
||||
|
||||
```
|
||||
|
||||
One issue, though, is that for now it's hard to get helpful information about a live Helm release to figure out what happened if the upgrade fails. We will enhance observability to help users track the status of the Helm release at the application level.
|
||||
|
||||
## Issues
|
||||
|
||||
The known issues below will be fixed in the following releases.
|
||||
|
||||
### Rollout Strategy
|
||||
|
||||
For now, Helm based components cannot benefit from the [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait#update-an-applicatiion), if the application is updated, it can only be rolled out directly without a canary or blue-green approach.
|
||||
|
||||
### Updating Trait Properties May Also Lead to Pod Restarts
|
||||
|
||||
Changes to trait properties may impact the component instance, and the Pods belonging to this workload instance will restart. In CUE based components this is avoidable because KubeVela has full control over the rendering process of the resources, while in Helm based components it's currently deferred to the Flux v2 controller.
|
||||
|
|
@ -1,153 +0,0 @@
|
|||
---
|
||||
title: Attach Traits
|
||||
---
|
||||
|
||||
Traits in KubeVela can be attached to Helm based component seamlessly.
|
||||
|
||||
In this sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
|
||||
and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml) to a Helm based component.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: demo-podinfo
|
||||
type: webapp-chart
|
||||
properties:
|
||||
image:
|
||||
tag: "5.1.2"
|
||||
traits:
|
||||
- type: scaler
|
||||
properties:
|
||||
replicas: 4
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group1"
|
||||
type: "cluster"
|
||||
```
|
||||
|
||||
> Note: when using traits with a Helm based component, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of the [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
|
||||
|
||||
> This is because KubeVela relies on the name to discover the workload; otherwise it cannot apply traits to it. KubeVela will generate a release name based on your `Application` name and component name automatically, so make sure never to override the full name template in your Helm chart.
|
||||
|
||||
## Verify traits work correctly
|
||||
|
||||
> You may need to wait a few seconds to check the trait attached because of reconciliation interval.
|
||||
|
||||
Check the `scaler` trait takes effect.
|
||||
```shell
|
||||
$ kubectl get manualscalertrait
|
||||
NAME AGE
|
||||
demo-podinfo-scaler-d8f78c6fc 13m
|
||||
```
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
|
||||
4
|
||||
```
|
||||
|
||||
Check the `virtualgroup` trait.
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
|
||||
{
|
||||
"app.cluster.virtual.group": "my-group1",
|
||||
"app.kubernetes.io/name": "myapp-demo-podinfo"
|
||||
}
|
||||
```
|
||||
|
||||
## Update Application
|
||||
|
||||
After the application is deployed and workloads/traits are created successfully,
|
||||
you can update the application, and corresponding changes will be applied to the
|
||||
workload instances.
|
||||
|
||||
Let's make several changes on the configuration of the sample application.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: demo-podinfo
|
||||
type: webapp-chart
|
||||
properties:
|
||||
image:
|
||||
tag: "5.1.3" # 5.1.2 => 5.1.3
|
||||
traits:
|
||||
- type: scaler
|
||||
properties:
|
||||
replicas: 2 # 4 => 2
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group2" # my-group1 => my-group2
|
||||
type: "cluster"
|
||||
```
|
||||
|
||||
Apply the new configuration and check the results after several minutes.
|
||||
|
||||
Check that the new values (`image.tag = 5.1.3`) from the application's `properties` are assigned to the chart.
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
|
||||
"ghcr.io/stefanprodan/podinfo:5.1.3"
|
||||
```
|
||||
Under the hood, Helm makes an upgrade to the release (revision 1 => 2).
|
||||
```shell
|
||||
$ helm ls -A
|
||||
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
|
||||
myapp-demo-podinfo default 2 2021-03-15 08:52:00.037690148 +0000 UTC deployed podinfo-5.1.4 5.1.4
|
||||
```
|
||||
|
||||
Check the `scaler` trait.
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
|
||||
2
|
||||
```
|
||||
|
||||
Check the `virtualgroup` trait.
|
||||
```shell
|
||||
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
|
||||
{
|
||||
"app.cluster.virtual.group": "my-group2",
|
||||
"app.kubernetes.io/name": "myapp-demo-podinfo"
|
||||
}
|
||||
```
|
||||
|
||||
## Detach Trait
|
||||
|
||||
Let's try detaching a trait from the application.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: demo-podinfo
|
||||
type: webapp-chart
|
||||
properties:
|
||||
image:
|
||||
tag: "5.1.3"
|
||||
traits:
|
||||
# - type: scaler
|
||||
# properties:
|
||||
# replicas: 2
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group2"
|
||||
type: "cluster"
|
||||
```
|
||||
|
||||
Apply the application and check that the `manualscalertrait` has been deleted.
|
||||
```shell
|
||||
$ kubectl get manualscalertrait
|
||||
No resources found
|
||||
```
|
||||
|
||||
|
|
@ -1,193 +0,0 @@
|
|||
---
|
||||
title: Installation
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
> For upgrading existing KubeVela, please read the [upgrade guide](./advanced-install#upgrade).
|
||||
|
||||
## 1. Choose Kubernetes Cluster
|
||||
|
||||
Requirements:
|
||||
- Kubernetes cluster >= v1.15.0
|
||||
- `kubectl` installed and configured
|
||||
|
||||
KubeVela is a simple custom controller that can be installed on any Kubernetes cluster, including managed offerings or your own clusters. The only requirement is to ensure that [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
|
||||
|
||||
For local deployment and testing, you could use `minikube` or `kind`.
|
||||
|
||||
<Tabs
|
||||
className="unique-tabs"
|
||||
defaultValue="minikube"
|
||||
values={[
|
||||
{label: 'Minikube', value: 'minikube'},
|
||||
{label: 'KinD', value: 'kind'},
|
||||
]}>
|
||||
<TabItem value="minikube">
|
||||
|
||||
Follow the minikube [installation guide](https://minikube.sigs.k8s.io/docs/start/).
|
||||
|
||||
Then spin up a minikube cluster:
|
||||
|
||||
```shell script
|
||||
minikube start
|
||||
```
|
||||
|
||||
Install ingress:
|
||||
|
||||
```shell script
|
||||
minikube addons enable ingress
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="kind">
|
||||
|
||||
Follow [this guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) to install kind.
|
||||
|
||||
Then spin up a kind cluster:
|
||||
|
||||
```shell script
|
||||
cat <<EOF | kind create cluster --image=kindest/node:v1.18.15 --config=-
|
||||
kind: Cluster
|
||||
apiVersion: kind.x-k8s.io/v1alpha4
|
||||
nodes:
|
||||
- role: control-plane
|
||||
kubeadmConfigPatches:
|
||||
- |
|
||||
kind: InitConfiguration
|
||||
nodeRegistration:
|
||||
kubeletExtraArgs:
|
||||
node-labels: "ingress-ready=true"
|
||||
extraPortMappings:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
protocol: TCP
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
protocol: TCP
|
||||
EOF
|
||||
```
|
||||
|
||||
Then install [ingress for kind](https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx):
|
||||
```shell script
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
||||
## 2. Install KubeVela Controller
|
||||
|
||||
1. Add helm chart repo for KubeVela
|
||||
```shell script
|
||||
helm repo add kubevela https://kubevelacharts.oss-cn-hangzhou.aliyuncs.com/core
|
||||
```
|
||||
|
||||
2. Update the chart repo
|
||||
```shell script
|
||||
helm repo update
|
||||
```
|
||||
|
||||
3. Install KubeVela
|
||||
```shell script
|
||||
helm install --create-namespace -n vela-system kubevela kubevela/vela-core
|
||||
```
|
||||
By default, it will enable the webhook with a self-signed certificate provided by [kube-webhook-certgen](https://github.com/jet/kube-webhook-certgen).
|
||||
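To sanity-check the installation, you can verify that the controller Pod and the webhook configuration are present (a sketch; exact resource names may differ slightly between versions):

```shell
kubectl get pods -n vela-system
kubectl get mutatingwebhookconfigurations | grep -i vela
```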
|
||||
## 3. Get KubeVela CLI
|
||||
|
||||
Using the KubeVela CLI gives you a simplified workflow with optimized output compared to using `kubectl`. It is not mandatory, though.
|
||||
|
||||
Here are three ways to get the KubeVela CLI:
|
||||
|
||||
<Tabs
|
||||
className="unique-tabs"
|
||||
defaultValue="script"
|
||||
values={[
|
||||
{label: 'Script', value: 'script'},
|
||||
{label: 'Homebrew', value: 'homebrew'},
|
||||
{label: 'Download directly from releases', value: 'download'},
|
||||
]}>
|
||||
<TabItem value="script">
|
||||
|
||||
** macOS/Linux **
|
||||
|
||||
```shell script
|
||||
curl -fsSl https://kubevela.io/script/install.sh | bash
|
||||
```
|
||||
|
||||
**Windows**
|
||||
|
||||
```shell script
|
||||
powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="homebrew">
|
||||
|
||||
**macOS/Linux**
|
||||
|
||||
First, update your brew.
|
||||
```shell script
|
||||
brew update
|
||||
```
|
||||
Then install the KubeVela client.
|
||||
|
||||
```shell script
|
||||
brew install kubevela
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="download">
|
||||
|
||||
- Download the latest `vela` binary from the [releases page](https://github.com/oam-dev/kubevela/releases).
|
||||
- Unpack the `vela` binary and add it to `$PATH` to get started.
|
||||
|
||||
```shell script
|
||||
sudo mv ./vela /usr/local/bin/vela
|
||||
```
|
||||
|
||||
> Known Issue(https://github.com/oam-dev/kubevela/issues/625):
|
||||
> If you're using a Mac, it may report that “vela” cannot be opened because the developer cannot be verified.
|
||||
>
|
||||
> Newer versions of macOS are stricter about running downloaded software that isn't signed with an Apple developer key, and we haven't supported that for KubeVela yet.
|
||||
> You can open 'System Preferences' -> 'Security & Privacy' -> 'General' and click 'Allow Anyway' to temporarily fix it.
|
||||
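Alternatively, a common macOS workaround (assuming Gatekeeper quarantined the downloaded binary) is to clear the quarantine attribute:

```shell
# Remove the quarantine flag so the unsigned binary can run; use at your own discretion.
xattr -d com.apple.quarantine ./vela
```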
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
||||
## 4. Enable Helm Support
|
||||
|
||||
KubeVela leverages Helm controller from [Flux v2](https://github.com/fluxcd/flux2) to deploy [Helm](https://helm.sh/) based components.
|
||||
|
||||
You can enable this feature by installing a minimal Flux v2 chart as below:
|
||||
|
||||
```shell
|
||||
$ helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
|
||||
```
|
||||
|
||||
Or you could install full Flux v2 following its own guide of course.
|
||||
|
||||
|
||||
## 5. Verify
|
||||
|
||||
Check the available application components and traits with the `vela` CLI tool:
|
||||
|
||||
```shell script
|
||||
vela components
|
||||
```
|
||||
```console
|
||||
NAME NAMESPACE WORKLOAD DESCRIPTION
|
||||
task vela-system jobs.batch Describes jobs that run code or a script to completion.
|
||||
webservice vela-system deployments.apps Describes long-running, scalable, containerized services
|
||||
that have a stable network endpoint to receive external
|
||||
network traffic from customers.
|
||||
worker vela-system deployments.apps Describes long-running, scalable, containerized services
|
||||
that running at backend. They do NOT have network endpoint
|
||||
to receive external network traffic.
|
||||
```
|
||||
|
||||
These capabilities are built in, so they are ready to use if they show up. KubeVela is designed to be programmable and fully self-service, so the assumption is that more capabilities will be added later per your own needs.
|
||||
|
||||
Also, whenever new capabilities are added to the platform, you will immediately see them in the above output.
|
||||
|
|
@ -1,71 +0,0 @@
|
|||
---
|
||||
title: Introduction
|
||||
slug: /
|
||||
|
||||
---
|
||||
|
||||

|
||||
|
||||
## Motivation
|
||||
|
||||
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common abstraction layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does introduce extra complexity to application developers, namely understanding the concepts of pods, port exposing, privilege escalation, resource claims, CRD, and so on. We’ve seen how the nontrivial learning curve and the lack of developer-facing abstraction have impacted user experiences, slowed down productivity, and led to unexpected errors or misconfigurations in production. People start to question the value of this revolution: "why am I bothered with all these details?".
|
||||
|
||||
On the other hand, abstracting Kubernetes to serve developers' requirements is a highly opinionated process, and the resultant abstractions would only make sense had the decision makers been the platform builders. Unfortunately, the platform builders today face the following dilemma:
|
||||
|
||||
*There is no tool or framework for them to easily build user friendly yet highly extensible abstractions*.
|
||||
|
||||
Thus, many platforms today are essentially restricted abstractions with in-house add-on mechanisms despite the extensibility of Kubernetes. This makes extending such platforms for developers' requirements or to wider scenarios almost impossible, not to mention taking the full advantage of the rich Kubernetes ecosystems.
|
||||
|
||||
In the end, developers complain those platforms are too rigid and slow in response to feature requests or improvements. The platform builders do want to help but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
|
||||
|
||||
## What is KubeVela?
|
||||
|
||||
For platform builders, KubeVela serves as a framework that relieves the pains of building developer focused platforms by doing the following:
|
||||
|
||||
- Developer Centric. KubeVela introduces the Application as the main API to capture a full deployment of microservices, and builds features around the application's needs only. Progressive rollout and multi-cluster deployment are provided out of the box. No infrastructure level concerns, simply deploy.
|
||||
|
||||
- Extending Natively. In KubeVela, all platform features (such as workloads, operational behaviors, and cloud services) are defined as reusable [CUE](https://github.com/cuelang/cue) and/or [Helm](https://helm.sh) components, per the needs of the application deployment. And as the application's needs grow, your platform capabilities expand naturally in a programmable approach.
|
||||
|
||||
- Simple yet Reliable. Though perfectly flexible, X-as-Code may lead to configuration drift (i.e. the running instances are not in line with the expected configuration). KubeVela solves this by modeling its capabilities as code but enforcing them via the Kubernetes control loop, which will never leave inconsistency in your clusters. This also makes KubeVela work with any CI/CD or GitOps tools via a declarative API without integration burden.
|
||||
|
||||
With KubeVela, platform builders finally have the tooling support to design easy-to-use abstractions and ship them to end users with high confidence and low turnaround time.
|
||||
|
||||
For end users (e.g. app developers and operators), these abstractions will enable them to design and ship applications to Kubernetes clusters with minimal effort; instead of managing a handful of infrastructure details, a simple application definition that can be easily integrated with any CI/CD pipeline is all they need.
|
||||
|
||||
## Comparisons
|
||||
|
||||
### KubeVela vs. Platform-as-a-Service (PaaS)
|
||||
|
||||
The typical examples are Heroku and Cloud Foundry. They provide full application management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
|
||||
|
||||
Though the biggest difference lies in **flexibility**.
|
||||
|
||||
KubeVela is a Kubernetes add-on that enables you to serve end users with programmable building blocks which are fully flexible and coded by yourself. Compared to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints on the type of supported applications and capabilities, and as application needs grow, you always outgrow the capabilities of the PaaS system - this will never happen in the KubeVela platform.
|
||||
|
||||
So think of KubeVela as a Heroku that is fully extensible to serve your needs as you grow.
|
||||
|
||||
### KubeVela vs. Serverless
|
||||
|
||||
Serverless platforms such as AWS Lambda provide an extraordinary user experience and agility to deploy serverless applications. However, those platforms impose even more constraints on extensibility. They are arguably "hard-coded" PaaS.
|
||||
|
||||
Kubernetes based serverless platforms such as Knative and OpenFaaS can be easily integrated with KubeVela by registering themselves as new workload types and traits. Even for AWS Lambda, there is a success story of integrating it with KubeVela via the tools developed by Crossplane.
|
||||
|
||||
### KubeVela vs. Platform agnostic developer tools
|
||||
|
||||
The typical example is Hashicorp's Waypoint. Waypoint is a developer facing tool which introduces a consistent workflow (i.e., build, deploy, release) to ship applications on top of different platforms.
|
||||
|
||||
KubeVela can be integrated into such tools as an application platform. In this case, developers could use the Waypoint workflow to manage applications by leveraging the abstractions (e.g. application, rollout, ingress, autoscaling, etc.) you built via KubeVela.
|
||||
|
||||
### KubeVela vs. Helm
|
||||
|
||||
Helm is a package manager for Kubernetes that can package, install, and upgrade a set of YAML files for Kubernetes as a unit. KubeVela can patch, deploy and roll out Helm packaged application components, and it also leverages Helm to manage capability dependencies at the system level.
|
||||
|
||||
Though KubeVela itself is not a package manager, it's a core engine for platform builders to create developer-centric deployment systems in an easy and repeatable approach.
|
||||
|
||||
### KubeVela vs. Kubernetes
|
||||
|
||||
KubeVela is a Kubernetes add-on for building developer-centric deployment system. It leverages [Open Application Model](https://github.com/oam-dev/spec) and the native Kubernetes extensibility to resolve a hard problem - making shipping applications enjoyable on Kubernetes.
|
||||
|
||||
## Getting Started
|
||||
|
||||
Now let's [get started](./quick-start.md) with KubeVela!
|
||||
|
|
@ -1,91 +0,0 @@
|
|||
---
|
||||
title: How-to
|
||||
---
|
||||
|
||||
This section introduces how to use a raw K8s object template to declare app components via `ComponentDefinition`.
|
||||
|
||||
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
|
||||
|
||||
## Declare `ComponentDefinition`
|
||||
|
||||
Here is a raw template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: kube-worker
|
||||
namespace: default
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
kube:
|
||||
template:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
parameters:
|
||||
- name: image
|
||||
required: true
|
||||
type: string
|
||||
fieldPaths:
|
||||
- "spec.template.spec.containers[0].image"
|
||||
```
|
||||
|
||||
In detail, the `.spec.schematic.kube` contains template of a workload resource and
|
||||
configurable parameters.
|
||||
- `.spec.schematic.kube.template` is the raw template in YAML format.
|
||||
- `.spec.schematic.kube.parameters` contains a set of configurable parameters. The `name`, `type`, and `fieldPaths` are required fields, `description` and `required` are optional fields.
|
||||
- The parameter `name` must be unique in a `ComponentDefinition`.
|
||||
- `type` indicates the data type of the value set to the field. This is a required field which helps KubeVela generate an OpenAPI JSON schema for the parameters automatically. In the raw template, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
|
||||
- `fieldPaths` in the parameter specifies an array of fields within the template that will be overwritten by the value of this parameter. Fields are specified as JSON field paths without a leading dot, for example `spec.replicas` or `spec.containers[0].image`; a sketch follows this list.
|
||||
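A minimal sketch of how multiple parameters and their `fieldPaths` could look in the definition above (the `image` parameter mirrors the definition; `replicas` is a hypothetical addition for illustration):

```yaml
# Sketch only; not part of the original definition.
parameters:
  - name: image
    required: true
    type: string
    fieldPaths:
      - "spec.template.spec.containers[0].image"
  - name: replicas
    required: false
    type: number
    fieldPaths:
      - "spec.replicas"
```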
|
||||
## Declare an `Application`
|
||||
|
||||
Here is an example `Application`.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: mycomp
|
||||
type: kube-worker
|
||||
properties:
|
||||
image: nginx:1.14.0
|
||||
```
|
||||
|
||||
Since parameters only support basic data types, values in `properties` should be simple key-value pairs, `<parameterName>: <parameterValue>`.
|
||||
|
||||
Deploy the `Application` and verify the running workload instance.
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
mycomp 1/1 1 1 66m
|
||||
```
|
||||
And check that the parameter works.
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
|
||||
"nginx:1.14.0"
|
||||
```
|
||||
|
||||
|
|
@ -1,111 +0,0 @@
|
|||
---
|
||||
title: Attach Traits
|
||||
---
|
||||
|
||||
All traits in the KubeVela system work well with raw K8s object template based components.
|
||||
|
||||
In this sample, we will attach two traits,
|
||||
[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
|
||||
and
|
||||
[virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml) to a component
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: mycomp
|
||||
type: kube-worker
|
||||
properties:
|
||||
image: nginx:1.14.0
|
||||
traits:
|
||||
- type: scaler
|
||||
properties:
|
||||
replicas: 2
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group1"
|
||||
type: "cluster"
|
||||
```
|
||||
|
||||
## Verify
|
||||
|
||||
Deploy the application and verify traits work.
|
||||
|
||||
Check the `scaler` trait.
|
||||
```shell
|
||||
$ kubectl get manualscalertrait
|
||||
NAME AGE
|
||||
demo-podinfo-scaler-3x1sfcd34 2m
|
||||
```
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq .spec.replicas
|
||||
2
|
||||
```
|
||||
|
||||
Check the `virtualgroup` trait.
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
|
||||
{
|
||||
"app.cluster.virtual.group": "my-group1",
|
||||
"app.kubernetes.io/name": "myapp"
|
||||
}
|
||||
```
|
||||
|
||||
## Update an Application
|
||||
|
||||
After the application is deployed and workloads/traits are created successfully,
|
||||
you can update the application, and corresponding changes will be applied to the
|
||||
workload.
|
||||
|
||||
Let's make several changes on the configuration of the sample application.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: myapp
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: mycomp
|
||||
type: kube-worker
|
||||
properties:
|
||||
image: nginx:1.14.1 # 1.14.0 => 1.14.1
|
||||
traits:
|
||||
- type: scaler
|
||||
properties:
|
||||
replicas: 4 # 2 => 4
|
||||
- type: virtualgroup
|
||||
properties:
|
||||
group: "my-group2" # my-group1 => my-group2
|
||||
type: "cluster"
|
||||
```
|
||||
|
||||
Apply the new configuration and check the results after several seconds.
|
||||
|
||||
> After updating, the workload instance name will be updated from `mycomp-v1` to `mycomp-v2`.
|
||||
|
||||
Check the new property value.
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
|
||||
"nginx:1.14.1"
|
||||
```
|
||||
|
||||
Check the `scaler` trait.
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq .spec.replicas
|
||||
4
|
||||
```
|
||||
|
||||
Check the `virtualgroup` trait.
|
||||
```shell
|
||||
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
|
||||
{
|
||||
"app.cluster.virtual.group": "my-group2",
|
||||
"app.kubernetes.io/name": "myapp"
|
||||
}
|
||||
```
|
||||
|
|
@ -1,92 +0,0 @@
|
|||
---
|
||||
title: Extend CRD Operator as Component Type
|
||||
---
|
||||
|
||||
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD operator as a KubeVela component.
**The mechanism works for all CRD Operators**.
|
||||
|
||||
### Step 1: Install the CRD controller
|
||||
|
||||
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
|
||||
|
||||
### Step 2: Create Component Definition
|
||||
|
||||
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
|
||||
|
||||
#### 1. Describe The Workload Type
|
||||
|
||||
```yaml
|
||||
...
|
||||
annotations:
|
||||
definition.oam.dev/description: "OpenKruise cloneset"
|
||||
...
|
||||
```
|
||||
|
||||
A one-line description of this component type. It will be shown in helper commands such as `vela components`.
|
||||
|
||||
#### 2. Register its underlying CRD
|
||||
|
||||
```yaml
|
||||
...
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps.kruise.io/v1alpha1
|
||||
kind: CloneSet
|
||||
...
|
||||
```
|
||||
|
||||
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
|
||||
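Before registering, you can confirm the CRD is actually discoverable by the API server (plain `kubectl`, nothing KubeVela-specific):

```shell
# The CloneSet CRD should show up under the apps.kruise.io group once OpenKruise is installed.
kubectl api-resources --api-group=apps.kruise.io
```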
|
||||
#### 3. Define Template
|
||||
|
||||
```yaml
|
||||
...
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps.kruise.io/v1alpha1"
|
||||
kind: "CloneSet"
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
spec: {
|
||||
replicas: parameter.replicas
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
// +usage=Which image would you like to use for your service
|
||||
// +short=i
|
||||
image: string
|
||||
|
||||
// +usage=Number of pods in the cloneset
|
||||
replicas: *5 | int
|
||||
}
|
||||
```
|
||||
|
||||
### Step 3: Register New Component Type to KubeVela
|
||||
|
||||
As long as the definition file is ready, you just need to apply it to Kubernetes.
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
|
||||
```
|
||||
|
||||
And the new component type will immediately become available for developers to use in KubeVela.
|
||||
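You can confirm the registration with the same helper command mentioned earlier; the exact columns depend on your CLI version, so treat the comments below as a sketch of what to expect:

```shell
vela components
# The new `cloneset` component type should now appear in the list,
# with the description "OpenKruise cloneset" from the definition's annotation.
```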
|
|
@ -1,555 +0,0 @@
|
|||
---
|
||||
title: Crossplane
|
||||
---
|
||||
|
||||
Cloud services are also part of your application deployment.
|
||||
|
||||
## Should a Cloud Service be a Component or Trait?
|
||||
|
||||
The following practice could be considered:
|
||||
- Use `ComponentDefinition` if:
|
||||
- you want to allow your end users to explicitly claim an "instance" of the cloud service and consume it, and release the "instance" when the application is deleted.
|
||||
- Use `TraitDefinition` if:
|
||||
- you don't want to give your end users any control/workflow for claiming or releasing the cloud service; you only want to give them a way to consume a cloud service, which could even be managed by some other system. A `Service Binding` trait is widely used in this case.
|
||||
|
||||
In this documentation, we will define Alibaba Cloud's RDS (Relational Database Service) and Alibaba Cloud's OSS (Object Storage Service) as examples. This mechanism works the same with other cloud providers.
In a single application, they are in the form of traits; across multiple applications, they are in the form of components.
|
||||
|
||||
## Install and Configure Crossplane
|
||||
|
||||
This guide will use [Crossplane](https://crossplane.io/) as the cloud service provider. Please refer to [Installation](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0) to install the Crossplane Alibaba provider v0.5.0.
|
||||
|
||||
If you'd like to configure any other Crossplane providers, please refer to [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration).
|
||||
|
||||
```
|
||||
$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0
|
||||
|
||||
# Note the xxx and yyy here is your own AccessKey and SecretKey to the cloud resources.
|
||||
$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy
|
||||
|
||||
$ kubectl apply -f provider.yaml
|
||||
```
|
||||
|
||||
`provider.yaml` is as below.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: crossplane-system
|
||||
|
||||
---
|
||||
apiVersion: alibaba.crossplane.io/v1alpha1
|
||||
kind: ProviderConfig
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
credentials:
|
||||
source: Secret
|
||||
secretRef:
|
||||
namespace: crossplane-system
|
||||
name: alibaba-account-creds
|
||||
key: credentials
|
||||
region: cn-beijing
|
||||
```
|
||||
|
||||
Note: we currently just use the Crossplane Alibaba provider, but we are about to use [Crossplane](https://crossplane.io/) as the general cloud resource operator for Kubernetes in the near future.
|
||||
|
||||
## Provisioning and consuming cloud resource in a single application v1 (one cloud resource)
|
||||
|
||||
### Step 1: Register ComponentDefinition `alibaba-rds` as RDS cloud resource producer
|
||||
|
||||
First, register the `alibaba-rds` workload type to KubeVela.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: alibaba-rds
|
||||
namespace: vela-system
|
||||
annotations:
|
||||
definition.oam.dev/description: "Alibaba Cloud RDS Resource"
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: database.alibaba.crossplane.io/v1alpha1
|
||||
kind: RDSInstance
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "database.alibaba.crossplane.io/v1alpha1"
|
||||
kind: "RDSInstance"
|
||||
spec: {
|
||||
forProvider: {
|
||||
engine: parameter.engine
|
||||
engineVersion: parameter.engineVersion
|
||||
dbInstanceClass: parameter.instanceClass
|
||||
dbInstanceStorageInGB: 20
|
||||
securityIPList: "0.0.0.0/0"
|
||||
masterUsername: parameter.username
|
||||
}
|
||||
writeConnectionSecretToRef: {
|
||||
namespace: context.namespace
|
||||
name: parameter.secretName
|
||||
}
|
||||
providerConfigRef: {
|
||||
name: "default"
|
||||
}
|
||||
deletionPolicy: "Delete"
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
engine: *"mysql" | string
|
||||
engineVersion: *"8.0" | string
|
||||
instanceClass: *"rds.mysql.c1.large" | string
|
||||
username: string
|
||||
secretName: string
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
### Step 2: Prepare TraitDefinition `service-binding` to do env-secret mapping
|
||||
|
||||
As for data binding in an Application, KubeVela recommends defining a trait to finish the job. We have prepared a common trait for convenience. This trait works well for binding resources' info into the pod spec's environment variables.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "binding cloud resource secrets to pod env"
|
||||
name: service-binding
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
patch: {
|
||||
spec: template: spec: {
|
||||
// +patchKey=name
|
||||
containers: [{
|
||||
name: context.name
|
||||
// +patchKey=name
|
||||
env: [
|
||||
for envName, v in parameter.envMappings {
|
||||
name: envName
|
||||
valueFrom: {
|
||||
secretKeyRef: {
|
||||
name: v.secret
|
||||
if v["key"] != _|_ {
|
||||
key: v.key
|
||||
}
|
||||
if v["key"] == _|_ {
|
||||
key: envName
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
]
|
||||
}]
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
envMappings: [string]: [string]: string
|
||||
}
|
||||
```
|
||||
|
||||
With the help of this `service-binding` trait, developers can explicitly set the parameter `envMappings` to map environment variable names to secret keys. Here is an example.
|
||||
|
||||
```yaml
|
||||
...
|
||||
traits:
|
||||
- type: service-binding
|
||||
properties:
|
||||
envMappings:
|
||||
# environments refer to db-conn secret
|
||||
DB_PASSWORD:
|
||||
secret: db-conn
|
||||
key: password # 1) If the env name is different from secret key, secret key has to be set.
|
||||
endpoint:
|
||||
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
|
||||
username:
|
||||
secret: db-conn
|
||||
# environments refer to oss-conn secret
|
||||
BUCKET_NAME:
|
||||
secret: oss-conn
|
||||
key: Bucket
|
||||
...
|
||||
```
|
||||
|
||||
### Step 3: Create an application to provision and consume cloud resource
|
||||
|
||||
Create an application with a cloud resource provisioning component and a consuming component as below.
|
||||
|
||||
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: webapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: zzxwill/flask-web-application:v0.3.1-crossplane
        port: 80
      traits:
        - type: service-binding
          properties:
            envMappings:
              # environments refer to db-conn secret
              DB_PASSWORD:
                secret: db-conn
                key: password # 1) If the env name is different from the secret key, the secret key has to be set.
              endpoint:
                secret: db-conn # 2) If the env name is the same as the secret key, the secret key can be omitted.
              username:
                secret: db-conn

    - name: sample-db
      type: alibaba-rds
      properties:
        name: sample-db
        engine: mysql
        engineVersion: "8.0"
        instanceClass: rds.mysql.c1.large
        username: oamtest
        secretName: db-conn
```

Apply it and verify the application.

```shell
$ kubectl get application
NAME     AGE
webapp   46m

$ kubectl port-forward deployment/express-server 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Handling connection for 80
Handling connection for 80
```
|
||||
|
||||

|
||||
|
||||
## Provisioning and consuming cloud resource in a single application v2 (two cloud resources)

Based on the section `Provisioning and consuming cloud resource in a single application v1 (one cloud resource)`, register
one more cloud resource workload type `alibaba-oss` with KubeVela.

```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: alibaba-oss
|
||||
namespace: vela-system
|
||||
annotations:
|
||||
definition.oam.dev/description: "Alibaba Cloud OSS Resource"
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: oss.alibaba.crossplane.io/v1alpha1
|
||||
kind: Bucket
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "oss.alibaba.crossplane.io/v1alpha1"
|
||||
kind: "Bucket"
|
||||
spec: {
|
||||
name: parameter.name
|
||||
acl: parameter.acl
|
||||
storageClass: parameter.storageClass
|
||||
dataRedundancyType: parameter.dataRedundancyType
|
||||
writeConnectionSecretToRef: {
|
||||
namespace: context.namespace
|
||||
name: parameter.secretName
|
||||
}
|
||||
providerConfigRef: {
|
||||
name: "default"
|
||||
}
|
||||
deletionPolicy: "Delete"
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
name: string
|
||||
acl: *"private" | string
|
||||
storageClass: *"Standard" | string
|
||||
dataRedundancyType: *"LRS" | string
|
||||
secretName: string
|
||||
}
|
||||
```

Update the application to also consume the OSS cloud resource.

```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: webapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
image: zzxwill/flask-web-application:v0.3.1-crossplane
|
||||
port: 80
|
||||
traits:
|
||||
- type: service-binding
|
||||
properties:
|
||||
envMappings:
|
||||
# environments refer to db-conn secret
|
||||
DB_PASSWORD:
|
||||
secret: db-conn
|
||||
key: password # 1) If the env name is different from secret key, secret key has to be set.
|
||||
endpoint:
|
||||
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
|
||||
username:
|
||||
secret: db-conn
|
||||
# environments refer to oss-conn secret
|
||||
BUCKET_NAME:
|
||||
secret: oss-conn
|
||||
key: Bucket
|
||||
|
||||
- name: sample-db
|
||||
type: alibaba-rds
|
||||
properties:
|
||||
name: sample-db
|
||||
engine: mysql
|
||||
engineVersion: "8.0"
|
||||
instanceClass: rds.mysql.c1.large
|
||||
username: oamtest
|
||||
secretName: db-conn
|
||||
|
||||
- name: sample-oss
|
||||
type: alibaba-oss
|
||||
properties:
|
||||
name: velaweb
|
||||
secretName: oss-conn
|
||||
```
|
||||
|
||||
Apply it and verify the application.
|
||||
|
||||
```shell
|
||||
$ kubectl port-forward deployment/express-server 80:80
|
||||
Forwarding from 127.0.0.1:80 -> 80
|
||||
Forwarding from [::1]:80 -> 80
|
||||
Handling connection for 80
|
||||
Handling connection for 80
|
||||
```
|
||||
|
||||

|
||||
|
||||
## Provisioning and consuming cloud resource in different applications

In this section, the cloud resource will be provisioned in one application and consumed in another application.

### Provision Cloud Resource

Instantiate an RDS component with the `alibaba-rds` workload type in an [Application](../application.md) to provide the cloud resource.

We have already registered the RDS instance type under the ComponentDefinition name `alibaba-rds`, so the component in this application simply refers to that type.

```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: baas-rds
|
||||
spec:
|
||||
components:
|
||||
- name: sample-db
|
||||
type: alibaba-rds
|
||||
properties:
|
||||
name: sample-db
|
||||
engine: mysql
|
||||
engineVersion: "8.0"
|
||||
instanceClass: rds.mysql.c1.large
|
||||
username: oamtest
|
||||
secretName: db-conn
|
||||
```
|
||||
|
||||
Apply the application to Kubernetes, and an RDS instance will be provisioned automatically (this may take some time, roughly 2 minutes).

A secret `db-conn` will also be created in the same namespace as the application.

```shell
|
||||
$ kubectl get application
|
||||
NAME AGE
|
||||
baas-rds 9h
|
||||
|
||||
$ kubectl get rdsinstance
|
||||
NAME READY SYNCED STATE ENGINE VERSION AGE
|
||||
sample-db-v1 True True Running mysql 8.0 9h
|
||||
|
||||
$ kubectl get secret
|
||||
NAME TYPE DATA AGE
|
||||
db-conn connection.crossplane.io/v1alpha1 4 9h
|
||||
|
||||
$ kubectl get secret db-conn -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
endpoint: xxx==
|
||||
password: yyy
|
||||
port: MzMwNg==
|
||||
username: b2FtdGVzdA==
|
||||
kind: Secret
|
||||
```
|
||||
|
||||
### Consuming the Cloud Resource

In this section, we will show how another component consumes the RDS instance.

> Note: we recommend claiming the cloud resource in an independent application if that cloud resource has a
> standalone lifecycle.

#### Step 1: Define a ComponentDefinition with Secret Reference

```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webconsumer
|
||||
annotations:
|
||||
definition.oam.dev/description: A Deployment provides declarative updates for Pods and ReplicaSets
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
|
||||
if parameter["dbSecret"] != _|_ {
|
||||
env: [
|
||||
{
|
||||
name: "username"
|
||||
value: dbConn.username
|
||||
},
|
||||
{
|
||||
name: "endpoint"
|
||||
value: dbConn.endpoint
|
||||
},
|
||||
{
|
||||
name: "DB_PASSWORD"
|
||||
value: dbConn.password
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
ports: [{
|
||||
containerPort: parameter.port
|
||||
}]
|
||||
|
||||
if parameter["cpu"] != _|_ {
|
||||
resources: {
|
||||
limits:
|
||||
cpu: parameter.cpu
|
||||
requests:
|
||||
cpu: parameter.cpu
|
||||
}
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
parameter: {
|
||||
// +usage=Which image would you like to use for your service
|
||||
// +short=i
|
||||
image: string
|
||||
|
||||
// +usage=Commands to run in the container
|
||||
cmd?: [...string]
|
||||
|
||||
// +usage=Which port do you want customer traffic sent to
|
||||
// +short=p
|
||||
port: *80 | int
|
||||
|
||||
// +usage=Referred db secret
|
||||
// +insertSecretTo=dbConn
|
||||
dbSecret?: string
|
||||
|
||||
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
|
||||
cpu?: string
|
||||
}
|
||||
|
||||
dbConn: {
|
||||
username: string
|
||||
endpoint: string
|
||||
password: string
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
The key point is the comment annotation `// +insertSecretTo=dbConn`: it tells KubeVela that this parameter holds the name of a Kubernetes secret, so KubeVela will parse the secret and bind its data into the CUE struct `dbConn`.

The `output` can then reference the `dbConn` struct for the data values. The name `dbConn` is arbitrary; it is just an example here. The `+insertSecretTo` keyword is what defines the data-binding mechanism.

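To make the mechanism concrete, here is a minimal sketch that isolates just the binding-related pieces of the definition above:

```yaml
# A minimal sketch of the data-binding pieces only (not a complete ComponentDefinition).
schematic:
  cue:
    template: |
      parameter: {
        // +usage=Referred db secret
        // +insertSecretTo=dbConn
        dbSecret?: string
      }

      // KubeVela reads the secret named by `dbSecret` and fills this struct.
      dbConn: {
        username: string
        endpoint: string
        password: string
      }

      // The `output` template can then use the values, for example:
      //   env: [{name: "DB_PASSWORD", value: dbConn.password}]
```
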
Now create the Application to consume the data.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: webapp
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webconsumer
|
||||
properties:
|
||||
image: zzxwill/flask-web-application:v0.3.1-crossplane
|
||||
port: 80
|
||||
dbSecret: db-conn
|
||||
```
|
||||
|
||||
```shell
|
||||
$ kubectl get application
|
||||
NAME AGE
|
||||
baas-rds 10h
|
||||
webapp 14h
|
||||
|
||||
$ kubectl get deployment
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
express-server-v1 1/1 1 1 9h
|
||||
|
||||
$ kubectl port-forward deployment/express-server 80:80
|
||||
```
|
||||
|
||||
We can see the cloud resource is successfully consumed by the application.
|
||||
|
||||

|
||||
---
|
||||
title: Programmable Building Blocks
|
||||
---
|
||||
|
||||
This documentation explains `ComponentDefinition` and `TraitDefinition` in detail.
|
||||
|
||||
## Overview
|
||||
|
||||
Essentially, a definition object in KubeVela is a programmable building block. A definition object normally includes several pieces of information to model a certain platform capability that will be used in further application deployments:
- **Capability Indicator**
  - `ComponentDefinition` uses `spec.workload` to indicate the workload type of this component.
  - `TraitDefinition` uses `spec.definitionRef` to indicate the provider of this trait.
- **Interoperability Fields**
  - These allow the platform to ensure a trait can work with a given workload type. Hence only `TraitDefinition` has these fields.
- **Capability Encapsulation and Abstraction** defined by `spec.schematic`
  - This defines the **templating and parameterizing** (i.e. encapsulation) of this capability.

Hence, the basic structure of a definition object is as below:

```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: XxxDefinition
|
||||
metadata:
|
||||
name: <definition name>
|
||||
spec:
|
||||
...
|
||||
schematic:
|
||||
cue:
|
||||
# cue template ...
|
||||
helm:
|
||||
# Helm chart ...
|
||||
# ... interoperability fields
|
||||
```
|
||||
|
||||
Let's explain these fields one by one.
|
||||
|
||||
### Capability Indicator
|
||||
|
||||
In `ComponentDefinition`, the indicator of workload type is declared as `spec.workload`.
|
||||
|
||||
Below is a definition for *Web Service* in KubeVela:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webservice
|
||||
namespace: default
|
||||
annotations:
|
||||
definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
...
|
||||
```
|
||||
|
||||
In the above example, it claims to leverage Kubernetes Deployment (`apiVersion: apps/v1`, `kind: Deployment`) as the workload type of the component.
|
||||
|
||||
### Interoperability Fields
|
||||
|
||||
The interoperability fields are **trait only**. An overall view of the interoperability fields in a `TraitDefinition` is shown below.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
name: ingress
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- deployments.apps
|
||||
- webservice
|
||||
conflictsWith:
|
||||
- service
|
||||
workloadRefPath: spec.workloadRef
|
||||
podDisruptive: false
|
||||
```
|
||||
|
||||
Let's explain them in detail.
|
||||
|
||||
#### `.spec.appliesToWorkloads`
|
||||
|
||||
This field defines the constraints on what kinds of workloads this trait is allowed to apply to.
- It accepts an array of strings as value.
- Each item in the array refers to one or a group of workload types to which this trait is allowed to apply.

There are four approaches to denote one or a group of workload types; see the example after this list.

- `ComponentDefinition` name, e.g., `webservice`, `worker`
- `ComponentDefinition` definition reference (CRD name), e.g., `deployments.apps`
- Resource group of the `ComponentDefinition` definition reference prefixed with `*.`, e.g., `*.apps`, `*.oam.dev`. This means the trait is allowed to apply to any workloads in this group.
- `*` means this trait is allowed to apply to any workloads.

If this field is omitted, it means this trait is allowed to apply to any workload types.

KubeVela will raise an error if a trait is applied to a workload which is NOT included in `appliesToWorkloads`.

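For instance, a hypothetical trait definition could combine these forms as follows:

```yaml
# Hypothetical TraitDefinition fragment combining the forms above
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: my-scaler
spec:
  appliesToWorkloads:
    - webservice        # a ComponentDefinition name
    - deployments.apps  # a CRD name
    - "*.apps"          # any workload in the apps group
```
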
##### `.spec.conflictsWith`

This field defines the constraints on what kinds of traits conflict with this trait if they are applied to the same workload.
- It accepts an array of strings as value.
- Each item in the array refers to one or a group of traits.

There are three approaches to denote one or a group of traits.

- `TraitDefinition` name, e.g., `ingress`
- Resource group of the `TraitDefinition` definition reference prefixed with `*.`, e.g., `*.networking.k8s.io`. This means the trait conflicts with any traits in this group.
- `*` means this trait conflicts with any other trait.

If this field is omitted, it means this trait does NOT conflict with any traits.

##### `.spec.workloadRefPath`

This field defines the field path in the trait that is used to store the reference to the workload to which the trait is applied.
- It accepts a string as value, e.g., `spec.workloadRef`.

If this field is set, KubeVela core will automatically fill the workload reference into the target field of the trait. The trait controller can then read the workload reference from the trait later. Hence this field usually accompanies traits whose controllers rely on the workload reference at runtime.

Please check the [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) trait as a demonstration of how to set this field.
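For illustration, assuming a definition sets `workloadRefPath: spec.workloadRef`, a rendered trait instance would end up looking roughly like this (resource kind and names are illustrative):

```yaml
# Illustrative trait instance; KubeVela fills in spec.workloadRef automatically
apiVersion: core.oam.dev/v1alpha2
kind: ManualScalerTrait
metadata:
  name: frontend-scaler
spec:
  replicaCount: 3
  workloadRef:          # injected by KubeVela core at the declared path
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
```
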
|
||||
|
||||
##### `.spec.podDisruptive`

This field specifies whether adding or updating the trait will disrupt the pod.
In this example the answer is no, so the field is `false`: the pod will not be affected when the trait is added or updated.
If the field is `true`, adding or updating the trait will disrupt the pod and cause it to restart.
By default, the value is `false`, which means the trait will not affect the pod.
Please set this field carefully; it is really important for large-scale production scenarios.
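For example, a hypothetical trait that injects a sidecar container rewrites the pod template and therefore should declare the disruption explicitly:

```yaml
# Hypothetical fragment: sidecar injection rewrites the pod template,
# so the definition must declare that it is pod-disruptive.
spec:
  podDisruptive: true
```
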
|
||||
|
||||
### Capability Encapsulation and Abstraction

The programmable template of a given capability is defined in the `spec.schematic` field. For example, below is the full definition of the *Web Service* type in KubeVela:

<details>
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: webservice
|
||||
namespace: default
|
||||
annotations:
|
||||
definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
|
||||
if parameter["env"] != _|_ {
|
||||
env: parameter.env
|
||||
}
|
||||
|
||||
if context["config"] != _|_ {
|
||||
env: context.config
|
||||
}
|
||||
|
||||
ports: [{
|
||||
containerPort: parameter.port
|
||||
}]
|
||||
|
||||
if parameter["cpu"] != _|_ {
|
||||
resources: {
|
||||
limits:
|
||||
cpu: parameter.cpu
|
||||
requests:
|
||||
cpu: parameter.cpu
|
||||
}
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
// +usage=Which image would you like to use for your service
|
||||
// +short=i
|
||||
image: string
|
||||
|
||||
// +usage=Commands to run in the container
|
||||
cmd?: [...string]
|
||||
|
||||
// +usage=Which port do you want customer traffic sent to
|
||||
// +short=p
|
||||
port: *80 | int
|
||||
// +usage=Define arguments by using environment variables
|
||||
env?: [...{
|
||||
// +usage=Environment variable name
|
||||
name: string
|
||||
// +usage=The value of the environment variable
|
||||
value?: string
|
||||
// +usage=Specifies a source the value of this var should come from
|
||||
valueFrom?: {
|
||||
// +usage=Selects a key of a secret in the pod's namespace
|
||||
secretKeyRef: {
|
||||
// +usage=The name of the secret in the pod's namespace to select from
|
||||
name: string
|
||||
// +usage=The key of the secret to select from. Must be a valid secret key
|
||||
key: string
|
||||
}
|
||||
}
|
||||
}]
|
||||
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
|
||||
cpu?: string
|
||||
}
|
||||
```
|
||||
</details>
|
||||
|
||||
The specification of `schematic` is explained in the following CUE- and Helm-specific documentation.

Also, the `schematic` field enables you to render UI forms directly based on it; please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section to learn how.
|
||||
|
|
|
|||
---
|
||||
title: Defining KEDA as Autoscaling Trait
|
||||
---
|
||||
|
||||
> Before continuing, make sure you have learned about the concepts of [Definition Objects](definition-and-templates) and the [Defining Traits with CUE](/docs/cue/trait) section.
|
||||
|
||||
In the following tutorial, you will learn to add [KEDA](https://keda.sh/) as a new autoscaling trait to your KubeVela based platform.
|
||||
|
||||
> KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container based on resource metrics or the number of events needing to be processed.
|
||||
|
||||
## Step 1: Install KEDA controller
|
||||
|
||||
[Install the KEDA controller](https://keda.sh/docs/2.2/deploy/) into your K8s system.
|
||||
|
||||
## Step 2: Create Trait Definition
|
||||
|
||||
To register KEDA as a new capability (i.e. trait) in KubeVela, the only thing needed is to create a `TraitDefinition` object for it.

A full example can be found in this [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda-scaler.yaml).
Several highlights are listed below.
|
||||
|
||||
### 1. Describe The Trait
|
||||
|
||||
```yaml
|
||||
...
|
||||
name: keda-scaler
|
||||
annotations:
|
||||
definition.oam.dev/description: "keda supports multiple event to elastically scale applications, this scaler only applies to deployment as example"
|
||||
...
|
||||
```
|
||||
|
||||
We use the annotation `definition.oam.dev/description` to add a one-line description for this trait.
It will be shown in helper commands such as `$ vela traits`.
|
||||
|
||||
### 2. Register API Resource
|
||||
|
||||
```yaml
|
||||
...
|
||||
spec:
|
||||
definitionRef:
|
||||
name: scaledobjects.keda.sh
|
||||
...
|
||||
```
|
||||
|
||||
This is how you claim and register KEDA `ScaledObject`'s API resource (`scaledobjects.keda.sh`) as a trait definition.
|
||||
|
||||
### 3. Define `appliesToWorkloads`
|
||||
|
||||
A trait can be attached to specified workload types or all (i.e. `"*"` means your trait can work with any workload type).
|
||||
|
||||
For the case of KEDA, we will only allow users to attach it to the Kubernetes Deployment workload type. So we claim it as below:
|
||||
|
||||
```yaml
|
||||
...
|
||||
spec:
|
||||
...
|
||||
appliesToWorkloads:
|
||||
- "deployments.apps" # claim KEDA based autoscaling trait can only attach to Kubernetes Deployment workload type.
|
||||
...
|
||||
```
|
||||
|
||||
### 4. Define Schematic
|
||||
|
||||
In this step, we will define the schematic of the KEDA based autoscaling trait, i.e. we will create an abstraction for KEDA `ScaledObject` with simplified primitives, so end users of this platform don't really need to know what KEDA is at all.
|
||||
|
||||
|
||||
```yaml
|
||||
...
|
||||
schematic:
|
||||
cue:
|
||||
template: |-
|
||||
outputs: kedaScaler: {
|
||||
apiVersion: "keda.sh/v1alpha1"
|
||||
kind: "ScaledObject"
|
||||
metadata: {
|
||||
name: context.name
|
||||
}
|
||||
spec: {
|
||||
scaleTargetRef: {
|
||||
name: context.name
|
||||
}
|
||||
triggers: [{
|
||||
type: parameter.triggerType
|
||||
metadata: {
|
||||
type: "Utilization"
|
||||
value: parameter.value
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
// +usage=Types of triggering application elastic scaling, Optional: cpu, memory
|
||||
triggerType: string
|
||||
// +usage=Value to trigger scaling actions, represented as a percentage of the requested value of the resource for the pods. like: "60"(60%)
|
||||
value: string
|
||||
}
|
||||
```
|
||||
|
||||
This is a CUE based template which only exposes `triggerType` and `value` as trait properties for users to set.
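For example, an end user could attach it to a component in an `Application` like below (values are illustrative):

```yaml
# Illustrative usage of the keda-scaler trait in an Application component
traits:
  - type: keda-scaler
    properties:
      triggerType: cpu
      value: "60"   # scale when average CPU utilization exceeds 60%
```
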
|
||||
|
||||
> Please check the [Defining Trait with CUE](../cue/trait) section for more details regarding to CUE templating.
|
||||
|
||||
## Step 3: Register New Trait to KubeVela
|
||||
|
||||
As long as the definition file is ready, you just need to apply it to Kubernetes.
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/keda-scaler.yaml
|
||||
```
|
||||
|
||||
And the new trait will immediately become available for end users to use in the `Application` resource.
|
||||
|
||||
|
|
|
|||
---
|
||||
title: Generating UI Forms
|
||||
---
|
||||
|
||||
For any capability installed via [Definition Objects](./definition-and-templates),
KubeVela will automatically generate an OpenAPI v3 JSON schema based on its parameter list, and store it in a `ConfigMap` in the same `namespace` as the definition object.

> The default KubeVela system `namespace` is `vela-system`; the built-in capabilities and their schemas live there.
|
||||
|
||||
|
||||
## List Schema
|
||||
|
||||
This `ConfigMap` will have a common label `definition.oam.dev=schema`, so you can find it easily by:
|
||||
|
||||
```shell
|
||||
$ kubectl get configmap -n vela-system -l definition.oam.dev=schema
|
||||
NAME DATA AGE
|
||||
schema-ingress 1 19s
|
||||
schema-scaler 1 19s
|
||||
schema-task 1 19s
|
||||
schema-webservice 1 19s
|
||||
schema-worker 1 20s
|
||||
```
|
||||
|
||||
The `ConfigMap` name is in the format of `schema-<your-definition-name>`,
|
||||
and the data key is `openapi-v3-json-schema`.
|
||||
|
||||
For example, we can use the following command to get the JSON schema of `webservice`.
|
||||
|
||||
```shell
|
||||
$ kubectl get configmap schema-webservice -n vela-system -o yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: schema-webservice
|
||||
namespace: vela-system
|
||||
data:
|
||||
openapi-v3-json-schema: '{"properties":{"cmd":{"description":"Commands to run in
|
||||
the container","items":{"type":"string"},"title":"cmd","type":"array"},"cpu":{"description":"Number
|
||||
of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)","title":"cpu","type":"string"},"env":{"description":"Define
|
||||
arguments by using environment variables","items":{"properties":{"name":{"description":"Environment
|
||||
variable name","title":"name","type":"string"},"value":{"description":"The value
|
||||
of the environment variable","title":"value","type":"string"},"valueFrom":{"description":"Specifies
|
||||
a source the value of this var should come from","properties":{"secretKeyRef":{"description":"Selects
|
||||
a key of a secret in the pod''s namespace","properties":{"key":{"description":"The
|
||||
key of the secret to select from. Must be a valid secret key","title":"key","type":"string"},"name":{"description":"The
|
||||
name of the secret in the pod''s namespace to select from","title":"name","type":"string"}},"required":["name","key"],"title":"secretKeyRef","type":"object"}},"required":["secretKeyRef"],"title":"valueFrom","type":"object"}},"required":["name"],"type":"object"},"title":"env","type":"array"},"image":{"description":"Which
|
||||
image would you like to use for your service","title":"image","type":"string"},"port":{"default":80,"description":"Which
|
||||
port do you want customer traffic sent to","title":"port","type":"integer"}},"required":["image","port"],"type":"object"}'
|
||||
```
|
||||
|
||||
Specifically, this schema is generated based on the `parameter` section in the capability definition:

* For a CUE based definition: [`parameter`](../cue/component#Write-ComponentDefinition) is a keyword in the CUE template.
* For a Helm based definition: the [`parameter`](../helm/component#Write-ComponentDefinition) is generated from `values.yaml` in the Helm chart.
|
||||
|
||||
## Render Form
|
||||
|
||||
You can render the above schema into a form with [form-render](https://github.com/alibaba/form-render) or [React JSON Schema Form](https://github.com/rjsf-team/react-jsonschema-form) and integrate it with your dashboard easily.
|
||||
|
||||
Below is a form rendered with `form-render`:
|
||||
|
||||

|
||||
|
||||
## What's Next

It's by design that KubeVela supports multiple ways to define the schematic. Hence, we will explain the `.schematic` field in detail in the following guides.
|
||||
|
|
|
|||
---
|
||||
title: Resource Model
|
||||
---
|
||||
|
||||
This documentation explains the core resource model of KubeVela, which is fully powered by the Open Application Model (OAM).
|
||||
|
||||
## Application
|
||||
|
||||
KubeVela introduces an `Application` CRD as its main API that captures a full application deployment. Every application is composed of multiple components with attachable operational behaviors (traits). For example:
|
||||
|
||||
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: application-sample
spec:
  components:
    - name: foo
      type: webservice
      properties:
        image: crccheck/hello-world
        port: 8000
      traits:
        - type: ingress
          properties:
            domain: testsvc.example.com
            http:
              "/": 8000
        - type: sidecar
          properties:
            name: "logging"
            image: "fluentd"
    - name: bar
      type: aliyun-oss # cloud service
      properties:
        bucket: "my-bucket"
```
|
||||
|
||||
Though the application object doesn't have a fixed schema, it is a composition object assembled from several *programmable building blocks* as shown below.
|
||||
|
||||
## Component
|
||||
|
||||
The component model in KubeVela is designed to allow *component providers* to encapsulate deployable/provisionable entities by leveraging widely adopted tools such as CUE and Helm, and to give developers an easier path to deploy complicated microservices.

Template-based encapsulation is probably the most widely used approach to enable efficient application deployment and expose easier interfaces to end users. For example, many tools today encapsulate Kubernetes *Deployment* and *Service* into a *Web Service* module, and then instantiate this module by simply providing parameters such as *image=foo* and *ports=80*. This pattern can be found in cdk8s (e.g. [`web-service.ts` ](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (e.g. [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)), and many widely used Helm charts (e.g. [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/)).

> Hence, a component provider could be anyone who packages software components in the form of Helm charts or CUE modules. Think of a 3rd-party software distributor, a DevOps team, or even your CI pipeline.

The above example describes an application composed of a Kubernetes stateless workload (component `foo`) and an Alibaba Cloud OSS bucket (component `bar`).
|
||||
|
||||
### How it Works?

A component's `type` refers to a `ComponentDefinition` object. For example, `type: worker` means the specification of this component (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker`, as below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: ComponentDefinition
|
||||
metadata:
|
||||
name: worker
|
||||
annotations:
|
||||
definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
|
||||
spec:
|
||||
workload:
|
||||
definition:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
output: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
spec: {
|
||||
selector: matchLabels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
template: {
|
||||
metadata: labels: {
|
||||
"app.oam.dev/component": context.name
|
||||
}
|
||||
spec: {
|
||||
containers: [{
|
||||
name: context.name
|
||||
image: parameter.image
|
||||
|
||||
if parameter["cmd"] != _|_ {
|
||||
command: parameter.cmd
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
image: string
|
||||
cmd?: [...string]
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
Hence, the `properties` section of such a `worker` component only exposes two parameters to fill: `image` and `cmd`; this is enforced by the `parameter` list in the definition's template.
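As an illustration, a hypothetical `backend` component of this type would look like:

```yaml
# Hypothetical component using the `worker` definition above;
# only `image` and `cmd` are available to fill in.
components:
  - name: backend
    type: worker
    properties:
      image: busybox
      cmd: ["sleep", "86400"]
```
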
|
||||
|
||||
|
||||
## Traits
|
||||
|
||||
Traits are operational features that can be attached to a component per needs. Traits are normally considered platform features and are maintained by the platform team. Attaching a trait of a given `type` to a component means its specification (i.e. the `properties` section)
will be enforced by a `TraitDefinition` object of that name, for example the `hpa` trait below:
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "configure k8s HPA for Deployment"
|
||||
name: hpa
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
schematic:
|
||||
cue:
|
||||
template: |
|
||||
outputs: hpa: {
|
||||
apiVersion: "autoscaling/v2beta2"
|
||||
kind: "HorizontalPodAutoscaler"
|
||||
metadata: name: context.name
|
||||
spec: {
|
||||
scaleTargetRef: {
|
||||
apiVersion: "apps/v1"
|
||||
kind: "Deployment"
|
||||
name: context.name
|
||||
}
|
||||
minReplicas: parameter.min
|
||||
maxReplicas: parameter.max
|
||||
metrics: [{
|
||||
type: "Resource"
|
||||
resource: {
|
||||
name: "cpu"
|
||||
target: {
|
||||
type: "Utilization"
|
||||
averageUtilization: parameter.cpuUtil
|
||||
}
|
||||
}
|
||||
}]
|
||||
}
|
||||
}
|
||||
parameter: {
|
||||
min: *1 | int
|
||||
max: *10 | int
|
||||
cpuUtil: *50 | int
|
||||
}
|
||||
```
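An end user would then attach this trait to a component like below (values are illustrative):

```yaml
# Illustrative usage of the `hpa` trait defined above
traits:
  - type: hpa
    properties:
      min: 1
      max: 10
      cpuUtil: 60
```
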
|
||||
|
||||
The application example above also has a `sidecar` trait.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: TraitDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
definition.oam.dev/description: "add sidecar to the app"
|
||||
name: sidecar
|
||||
spec:
|
||||
appliesToWorkloads:
|
||||
- webservice
|
||||
- worker
|
||||
schematic:
|
||||
cue:
|
||||
template: |-
|
||||
patch: {
|
||||
// +patchKey=name
|
||||
spec: template: spec: containers: [parameter]
|
||||
}
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
command?: [...string]
|
||||
}
|
||||
```
|
||||
|
||||
Please note that end users of KubeVela do NOT need to know about definition objects; they learn how to use a given capability through visualized forms (or the JSON schema of parameters if they prefer). Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section for how this is achieved.
|
||||
|
||||
## Standard Contract Behind The Abstractions
|
||||
|
||||
Once the application is deployed, KubeVela will automatically index and manage the underlying instances with names, revisions, labels, selectors, etc. These metadata are shown below.
|
||||
|
||||
| Label | Description |
|
||||
| :--: | :---------: |
|
||||
|`workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
|
||||
|`trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
|
||||
|`app.oam.dev/name=<app name>` | The name of the application it belongs to |
|
||||
|`app.oam.dev/component=<component name>` | The name of the component it belongs to |
|
||||
|`trait.oam.dev/resource=<name of trait resource instance>` | The name of trait resource instance |
|
||||
|`app.oam.dev/appRevision=<name of app revision>` | The name of the application revision it belongs to |
|
||||
|
||||
|
||||
Consider these metadata as a standard contract for any "day 2" operation controller, such as a rollout controller, to work on KubeVela-deployed applications. This is also key to ensuring interoperability for a KubeVela-based platform.
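For instance, the Deployment generated for component `foo` of the sample application above would carry labels along these lines (values are illustrative):

```yaml
# Illustrative labels on a workload instance generated by KubeVela
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-v1
  labels:
    workload.oam.dev/type: webservice
    app.oam.dev/name: application-sample
    app.oam.dev/component: foo
    app.oam.dev/appRevision: application-sample-v1
```
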
|
||||
|
||||
## No Configuration Drift
|
||||
|
||||
Despite their efficiency and extensibility in abstracting application deployment, IaC (Infrastructure-as-Code) tools may lead to an issue called *Infrastructure/Configuration Drift*, i.e. the generated component instances are not in line with the expected configuration. This can be caused by incomplete coverage, less-than-perfect processes, or emergency changes, and it makes such tools hard to use as platform-level building blocks.

Hence, KubeVela is designed to maintain all these programmable capabilities with the [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) and leverages the Kubernetes control plane to eliminate configuration drift, while still keeping the flexibility and velocity enabled by IaC.
|
||||
|
|
|
|||
---
|
||||
title: Overview
|
||||
---
|
||||
|
||||
To achieve the best user experience for your platform, we recommend platform builders to create a simple and user-friendly UI for end users instead of exposing full platform-level details to them. Some common practices include building a GUI console, adopting a DSL, or creating a user-friendly command line tool.

As a proof of concept of building developer experience with KubeVela, we developed a client-side tool named `Appfile` as well. This tool enables developers to deploy any application with a single file and a single command: `vela up`.
|
||||
|
||||
Now let's walk through its experience.
|
||||
|
||||
## Step 1: Install
|
||||
|
||||
Make sure you have finished and verified the installation following [this guide](./install).
|
||||
|
||||
## Step 2: Deploy Your First Application
|
||||
|
||||
```bash
|
||||
$ vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela.yaml
|
||||
Parsing vela.yaml ...
|
||||
Loading templates ...
|
||||
|
||||
Rendering configs for service (testsvc)...
|
||||
Writing deploy config to (.vela/deploy.yaml)
|
||||
|
||||
Applying deploy configs ...
|
||||
Checking if app has been deployed...
|
||||
App has not been deployed, creating a new deployment...
|
||||
✅ App has been deployed 🚀🚀🚀
|
||||
Port forward: vela port-forward first-vela-app
|
||||
SSH: vela exec first-vela-app
|
||||
Logging: vela logs first-vela-app
|
||||
App status: vela status first-vela-app
|
||||
Service status: vela status first-vela-app --svc testsvc
|
||||
```
|
||||
|
||||
Check the status until we see `Routes` are ready:
|
||||
```bash
|
||||
$ vela status first-vela-app
|
||||
About:
|
||||
|
||||
Name: first-vela-app
|
||||
Namespace: default
|
||||
Created at: ...
|
||||
Updated at: ...
|
||||
|
||||
Services:
|
||||
|
||||
- Name: testsvc
|
||||
Type: webservice
|
||||
HEALTHY Ready: 1/1
|
||||
Last Deployment:
|
||||
Created at: ...
|
||||
Updated at: ...
|
||||
Traits:
|
||||
- ✅ ingress: Visiting URL: testsvc.example.com, IP: <your IP address>
|
||||
```
|
||||
|
||||
**In a [kind cluster setup](./install#kind)**, you can visit the service via localhost. In other setups, replace localhost with the ingress address accordingly.
|
||||
|
||||
```
|
||||
$ curl -H "Host:testsvc.example.com" http://localhost/
|
||||
<xmp>
|
||||
Hello World
|
||||
|
||||
|
||||
## .
|
||||
## ## ## ==
|
||||
## ## ## ## ## ===
|
||||
/""""""""""""""""\___/ ===
|
||||
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
|
||||
\______ o _,/
|
||||
\ \ _,'
|
||||
`'--.._\..--''
|
||||
</xmp>
|
||||
```
|
||||
**Voila!** You are all set to go.
|
||||
|
||||
## What's Next
|
||||
|
||||
- Learn more about [`Appfile`](./developers/learn-appfile) and how it works.
|
||||
|
|
|
|||
---
|
||||
title: Quick Start
|
||||
---
|
||||
|
||||
Welcome to KubeVela! In this guide, we'll walk you through how to install KubeVela, and deploy your first simple application.
|
||||
|
||||
## Step 1: Install
|
||||
|
||||
Make sure you have finished and verified the installation following [this guide](./install).
|
||||
|
||||
## Step 2: Deploy Your First Application
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
|
||||
application.core.oam.dev/first-vela-app created
|
||||
```
|
||||
|
||||
Check the status until we see `status` is `running` and services are `healthy`:
|
||||
|
||||
```bash
|
||||
$ kubectl get application first-vela-app -o yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
generation: 1
|
||||
name: first-vela-app
|
||||
...
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
properties:
|
||||
image: crccheck/hello-world
|
||||
port: 8000
|
||||
traits:
|
||||
- type: ingress
|
||||
properties:
|
||||
domain: testsvc.example.com
|
||||
http:
|
||||
/: 8000
|
||||
status:
|
||||
...
|
||||
services:
|
||||
- healthy: true
|
||||
name: express-server
|
||||
traits:
|
||||
- healthy: true
|
||||
message: 'Visiting URL: testsvc.example.com, IP: your ip address'
|
||||
type: ingress
|
||||
status: running
|
||||
```
|
||||
|
||||
Under the hood, the following Kubernetes resources were created:
|
||||
|
||||
```bash
|
||||
$ kubectl get deployment
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
express-server-v1 1/1 1 1 8m
|
||||
$ kubectl get svc
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
express-server ClusterIP 172.21.11.152 <none> 8000/TCP 7m43s
|
||||
kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116d
|
||||
$ kubectl get ingress
|
||||
NAME CLASS HOSTS ADDRESS PORTS AGE
|
||||
express-server <none> testsvc.example.com <your ip address> 80 7m47s
|
||||
```
|
||||
|
||||
If your cluster has a working ingress controller, you can visit the service.
|
||||
|
||||
```
|
||||
$ curl -H "Host:testsvc.example.com" http://<your ip address>/
|
||||
<xmp>
|
||||
Hello World
|
||||
|
||||
|
||||
## .
|
||||
## ## ## ==
|
||||
## ## ## ## ## ===
|
||||
/""""""""""""""""\___/ ===
|
||||
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
|
||||
\______ o _,/
|
||||
\ \ _,'
|
||||
`'--.._\..--''
|
||||
</xmp>
|
||||
```
|
||||
**Voila!** You are all set to go.
|
||||
|
||||
## What's Next
|
||||
|
||||
Here are some recommended next steps:
|
||||
|
||||
- Learn KubeVela starting from its [core concepts](./concepts)
|
||||
- Learn more details about [`Application`](./application) and understand how it works.
|
||||
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
|
||||
|
||||
Welcome onboard and sail Vela!
|
||||