This commit is contained in:
yangsoon 2021-03-27 19:40:52 +08:00
parent 03c46c3e32
commit ff024ca886
144 changed files with 10580 additions and 2444 deletions

docs/README.md Normal file
View File

@ -0,0 +1,27 @@
![alt](../resources/KubeVela-03.png)
*Make shipping applications more enjoyable.*
# KubeVela
KubeVela is the platform engine to create a *PaaS-like* experience on Kubernetes, in a scalable approach.
## Community
- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel
- Gitter: [Discussion](https://gitter.im/oam-dev/community)
- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs)
## Installation
The installation guide is available in [this section](./install).
## Quick Start
The quick start guide is available in [this section](./quick-start).
## Contributing
Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela.
## Code of Conduct
KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

View File

@ -1,9 +1,7 @@
---
title: Application CRD
title: Introduction of Application CRD
---
## Using `Application` to Describe Your App
This documentation will walk through how to use the `Application` object to define your apps with corresponding operational behaviors in a declarative approach.
## Example
@ -13,7 +11,7 @@ The sample application below claimed a `backend` component with *Worker* workloa
Moreover, the `frontend` component claimed `sidecar` and `autoscaler` traits, which means the workload will be automatically injected with a `fluentd` sidecar and scaled from 1 to 10 replicas triggered by CPU usage.
```yaml
apiVersion: core.oam.dev/v1alpha2
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website
@ -21,37 +19,41 @@ spec:
  components:
    - name: backend
      type: worker
      settings:
      properties:
        image: busybox
        cmd:
          - sleep
          - '1000'
    - name: frontend
      type: webservice
      settings:
      properties:
        image: nginx
      traits:
        - name: autoscaler
        - type: autoscaler
          properties:
            min: 1
            max: 10
            cpuPercent: 60
        - name: sidecar
        - type: sidecar
          properties:
            name: "sidecar-test"
            image: "fluentd"
```
The `type: worker` means the specification of this workload (claimed in following `settings` section) will be enforced by a `WorkloadDefinition` object named `worker` as below:
The `type: worker` means the specification of this component (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: worker
  annotations:
    definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
@ -86,13 +88,56 @@ spec:
```
Hence, the `settings` section of `backend` only supports two parameters: `image` and `cmd`, this is enforced by the `parameter` list of the `.spec.template` field of the definition.
Hence, the `properties` section of `backend` only supports two parameters: `image` and `cmd`; this is enforced by the `parameter` list in the `.spec.schematic.cue.template` field of the definition.
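For reference, since the diff elides the template body here, a `worker` CUE template enforcing exactly these two parameters could look roughly like the sketch below. The Deployment wiring and label keys are assumptions for illustration, not the exact content of this commit:

```
output: {
    apiVersion: "apps/v1"
    kind:       "Deployment"
    spec: {
        selector: matchLabels: "app.oam.dev/component": context.name
        template: {
            metadata: labels: "app.oam.dev/component": context.name
            spec: containers: [{
                name:  context.name
                image: parameter.image
                // only render the command when the optional `cmd` is set
                if parameter["cmd"] != _|_ {
                    command: parameter.cmd
                }
            }]
        }
    }
}
parameter: {
    image: string
    cmd?: [...string]
}
```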
A similar extensible abstraction mechanism also applies to traits. For example, `type: autoscaler` in `frontend` means its trait specification (i.e. the `properties` section) will be enforced by a `TraitDefinition` object named `autoscaler` as below:
> TBD: an autoscaler TraitDefinition (HPA)
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "configure k8s HPA for Deployment"
  name: hpa
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        outputs: hpa: {
          apiVersion: "autoscaling/v2beta2"
          kind:       "HorizontalPodAutoscaler"
          metadata: name: context.name
          spec: {
            scaleTargetRef: {
              apiVersion: "apps/v1"
              kind:       "Deployment"
              name:       context.name
            }
            minReplicas: parameter.min
            maxReplicas: parameter.max
            metrics: [{
              type: "Resource"
              resource: {
                name: "cpu"
                target: {
                  type:               "Utilization"
                  averageUtilization: parameter.cpuUtil
                }
              }
            }]
          }
        }
        parameter: {
          min:     *1 | int
          max:     *10 | int
          cpuUtil: *50 | int
        }
```
All the definition objects are expected to be defined and installed by platform team. The end users will only focus on `Application` resource (either render it by tools or author it manually).
All the definition objects are expected to be defined and installed by platform team. The end users will only focus on `Application` resource.
## Conventions and "Standard Contract"
@ -101,7 +146,7 @@ After the `Application` resource is applied to Kubernetes cluster, the KubeVela
| Label | Description |
| :--: | :---------: |
|`workload.oam.dev/type=<workload definition name>` | The name of its corresponding `WorkloadDefinition` |
|`workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
|`trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
|`app.oam.dev/name=<app name>` | The name of the application it belongs to |
|`app.oam.dev/component=<component name>` | The name of the component it belongs to |

View File

@ -1,10 +0,0 @@
---
title: Restful API
---
import useBaseUrl from '@docusaurus/useBaseUrl';
<a
target="_blank"
href={useBaseUrl('/restful-api/index.html')}>
KubeVela Restful API
</a>

View File

@ -1,39 +0,0 @@
## vela
```
vela [flags]
```
### Options
```
-e, --env string specify environment name for application
-h, --help help for vela
```
### SEE ALSO
* [vela cap](vela_cap.md) - Manage capability centers and installing/uninstalling capabilities
* [vela completion](vela_completion.md) - Output shell completion code for the specified shell (bash or zsh)
* [vela config](vela_config.md) - Manage configurations
* [vela delete](vela_delete.md) - Delete an application
* [vela env](vela_env.md) - Manage environments
* [vela exec](vela_exec.md) - Execute command in a container
* [vela export](vela_export.md) - Export deploy manifests from appfile
* [vela help](vela_help.md) - Help about any command
* [vela init](vela_init.md) - Create scaffold for an application
* [vela logs](vela_logs.md) - Tail logs for application
* [vela ls](vela_ls.md) - List applications
* [vela port-forward](vela_port-forward.md) - Forward local ports to services in an application
* [vela show](vela_show.md) - Show the reference doc for a workload type or trait
* [vela status](vela_status.md) - Show status of an application
* [vela system](vela_system.md) - System management utilities
* [vela template](vela_template.md) - Manage templates
* [vela traits](vela_traits.md) - List traits
* [vela up](vela_up.md) - Apply an appfile
* [vela version](vela_version.md) - Prints out build version information
* [vela workloads](vela_workloads.md) - List workloads
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,29 +0,0 @@
## vela cap center
Manage Capability Center
### Synopsis
Manage Capability Center with config, sync, list
### Options
```
-h, --help help for center
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap.md) - Manage capability centers and installing/uninstalling capabilities
* [vela cap center config](vela_cap_center_config.md) - Configure (add if not exist) a capability center, default is local (built-in capabilities)
* [vela cap center ls](vela_cap_center_ls.md) - List all capability centers
* [vela cap center remove](vela_cap_center_remove.md) - Remove specified capability center
* [vela cap center sync](vela_cap_center_sync.md) - Sync capabilities from remote center, default to sync all centers
###### Auto generated by spf13/cobra on 20-Mar-2021

docs/cli/vela.md Normal file
View File

@ -0,0 +1,41 @@
---
title: vela
---
```
vela [flags]
```
### Options
```
-e, --env string specify environment name for application
-h, --help help for vela
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
* [vela config](vela_config) - Manage configurations
* [vela delete](vela_delete) - Delete an application
* [vela env](vela_env) - Manage environments
* [vela exec](vela_exec) - Execute command in a container
* [vela export](vela_export) - Export deploy manifests from appfile
* [vela help](vela_help) - Help about any command
* [vela init](vela_init) - Create scaffold for an application
* [vela logs](vela_logs) - Tail logs for application
* [vela ls](vela_ls) - List applications
* [vela port-forward](vela_port-forward) - Forward local ports to services in an application
* [vela show](vela_show) - Show the reference doc for a workload type or trait
* [vela status](vela_status) - Show status of an application
* [vela system](vela_system) - System management utilities
* [vela template](vela_template) - Manage templates
* [vela traits](vela_traits) - List traits
* [vela up](vela_up) - Apply an appfile
* [vela version](vela_version) - Prints out build version information
* [vela workloads](vela_workloads) - List workloads
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap
---
title: vela cap
---
Manage capability centers and installing/uninstalling capabilities
@ -20,10 +22,10 @@ Manage capability centers and installing/uninstalling capabilities
### SEE ALSO
* [vela](vela.md) -
* [vela cap center](vela_cap_center.md) - Manage Capability Center
* [vela cap install](vela_cap_install.md) - Install capability into cluster
* [vela cap ls](vela_cap_ls.md) - List capabilities from cap-center
* [vela cap uninstall](vela_cap_uninstall.md) - Uninstall capability from cluster
* [vela](vela) -
* [vela cap center](vela_cap_center) - Manage Capability Center
* [vela cap install](vela_cap_install) - Install capability into cluster
* [vela cap ls](vela_cap_ls) - List capabilities from cap-center
* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -0,0 +1,31 @@
---
title: vela cap center
---
Manage Capability Center
### Synopsis
Manage Capability Center with config, sync, list
### Options
```
-h, --help help for center
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities)
* [vela cap center ls](vela_cap_center_ls) - List all capability centers
* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center
* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap center config
---
title: vela cap center config
---
Configure (add if not exist) a capability center, default is local (built-in capabilities)
@ -31,6 +33,6 @@ vela cap center config mycenter https://github.com/oam-dev/catalog/cap-center
### SEE ALSO
* [vela cap center](vela_cap_center.md) - Manage Capability Center
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap center ls
---
title: vela cap center ls
---
List all capability centers
@ -30,6 +32,6 @@ vela cap center ls
### SEE ALSO
* [vela cap center](vela_cap_center.md) - Manage Capability Center
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap center remove
---
title: vela cap center remove
---
Remove specified capability center
@ -30,6 +32,6 @@ vela cap center remove mycenter
### SEE ALSO
* [vela cap center](vela_cap_center.md) - Manage Capability Center
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap center sync
---
title: vela cap center sync
---
Sync capabilities from remote center, default to sync all centers
@ -30,6 +32,6 @@ vela cap center sync mycenter
### SEE ALSO
* [vela cap center](vela_cap_center.md) - Manage Capability Center
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap install
---
title: vela cap install
---
Install capability into cluster
@ -31,6 +33,6 @@ vela cap install mycenter/route
### SEE ALSO
* [vela cap](vela_cap.md) - Manage capability centers and installing/uninstalling capabilities
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap ls
---
title: vela cap ls
---
List capabilities from cap-center
@ -30,6 +32,6 @@ vela cap ls
### SEE ALSO
* [vela cap](vela_cap.md) - Manage capability centers and installing/uninstalling capabilities
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela cap uninstall
---
title: vela cap uninstall
---
Uninstall capability from cluster
@ -31,6 +33,6 @@ vela cap uninstall route
### SEE ALSO
* [vela cap](vela_cap.md) - Manage capability centers and installing/uninstalling capabilities
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela completion
---
title: vela completion
---
Output shell completion code for the specified shell (bash or zsh)
@ -23,8 +25,8 @@ of vela commands.
### SEE ALSO
* [vela](vela.md) -
* [vela completion bash](vela_completion_bash.md) - generate autocompletions script for bash
* [vela completion zsh](vela_completion_zsh.md) - generate autocompletions script for zsh
* [vela](vela) -
* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash
* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela completion bash
---
title: vela completion bash
---
generate autocompletions script for bash
@ -34,6 +36,6 @@ vela completion bash
### SEE ALSO
* [vela completion](vela_completion.md) - Output shell completion code for the specified shell (bash or zsh)
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela completion zsh
---
title: vela completion zsh
---
generate autocompletions script for zsh
@ -31,6 +33,6 @@ vela completion zsh
### SEE ALSO
* [vela completion](vela_completion.md) - Output shell completion code for the specified shell (bash or zsh)
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela config
---
title: vela config
---
Manage configurations
@ -20,10 +22,10 @@ Manage configurations
### SEE ALSO
* [vela](vela.md) -
* [vela config del](vela_config_del.md) - Delete config
* [vela config get](vela_config_get.md) - Get data for a config
* [vela config ls](vela_config_ls.md) - List configs
* [vela config set](vela_config_set.md) - Set data for a config
* [vela](vela) -
* [vela config del](vela_config_del) - Delete config
* [vela config get](vela_config_get) - Get data for a config
* [vela config ls](vela_config_ls) - List configs
* [vela config set](vela_config_set) - Set data for a config
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela config del
---
title: vela config del
---
Delete config
@ -30,6 +32,6 @@ vela config del <config-name>
### SEE ALSO
* [vela config](vela_config.md) - Manage configurations
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela config get
---
title: vela config get
---
Get data for a config
@ -30,6 +32,6 @@ vela config get <config-name>
### SEE ALSO
* [vela config](vela_config.md) - Manage configurations
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela config ls
---
title: vela config ls
---
List configs
@ -30,6 +32,6 @@ vela config ls
### SEE ALSO
* [vela config](vela_config.md) - Manage configurations
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela config set
---
title: vela config set
---
Set data for a config
@ -30,6 +32,6 @@ vela config set <config-name> KEY=VALUE K2=V2
### SEE ALSO
* [vela config](vela_config.md) - Manage configurations
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela delete
---
title: vela delete
---
Delete an application
@ -31,6 +33,6 @@ vela delete frontend
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela env
---
title: vela env
---
Manage environments
@ -20,10 +22,10 @@ Manage environments
### SEE ALSO
* [vela](vela.md) -
* [vela env delete](vela_env_delete.md) - Delete environment
* [vela env init](vela_env_init.md) - Create environments
* [vela env ls](vela_env_ls.md) - List environments
* [vela env set](vela_env_set.md) - Set an environment
* [vela](vela) -
* [vela env delete](vela_env_delete) - Delete environment
* [vela env init](vela_env_init) - Create environments
* [vela env ls](vela_env_ls) - List environments
* [vela env set](vela_env_set) - Set an environment
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela env delete
---
title: vela env delete
---
Delete environment
@ -30,6 +32,6 @@ vela env delete test
### SEE ALSO
* [vela env](vela_env.md) - Manage environments
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela env init
---
title: vela env init
---
Create environments
@ -33,6 +35,6 @@ vela env init test --namespace test --email my@email.com
### SEE ALSO
* [vela env](vela_env.md) - Manage environments
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela env ls
---
title: vela env ls
---
List environments
@ -30,6 +32,6 @@ vela env ls [env-name]
### SEE ALSO
* [vela env](vela_env.md) - Manage environments
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela env set
---
title: vela env set
---
Set an environment
@ -30,6 +32,6 @@ vela env set test
### SEE ALSO
* [vela env](vela_env.md) - Manage environments
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela exec
---
title: vela exec
---
Execute command in a container
@ -28,6 +30,6 @@ vela exec [flags] APP_NAME -- COMMAND [args...]
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela export
---
title: vela export
---
Export deploy manifests from appfile
@ -25,6 +27,6 @@ vela export
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela help
---
title: vela help
---
Help about any command
@ -20,6 +22,6 @@ vela help [command]
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela init
---
title: vela init
---
Create scaffold for an application
@ -31,6 +33,6 @@ vela init
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela logs
---
title: vela logs
---
Tail logs for application
@ -25,6 +27,6 @@ vela logs [flags]
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela ls
---
title: vela ls
---
List applications
@ -31,6 +33,6 @@ vela ls
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela port-forward
---
title: vela port-forward
---
Forward local ports to services in an application
@ -33,6 +35,6 @@ port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMO
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela show
---
title: vela show
---
Show the reference doc for a workload type or trait
@ -31,6 +33,6 @@ show webservice
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela status
---
title: vela status
---
Show status of an application
@ -31,6 +33,6 @@ vela status APP_NAME
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela system
---
title: vela system
---
System management utilities
@ -20,8 +22,8 @@ System management utilities
### SEE ALSO
* [vela](vela.md) -
* [vela system dry-run](vela_system_dry-run.md) - Dry Run an application, and output the conversion result to stdout
* [vela system info](vela_system_info.md) - Show vela client and cluster chartPath
* [vela](vela) -
* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the conversion result to stdout
* [vela system info](vela_system_info) - Show vela client and cluster chartPath
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela system dry-run
---
title: vela system dry-run
---
Dry Run an application, and output the conversion result to stdout
@ -31,6 +33,6 @@ vela dry-run
### SEE ALSO
* [vela system](vela_system.md) - System management utilities
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela system info
---
title: vela system info
---
Show vela client and cluster chartPath
@ -24,6 +26,6 @@ vela system info [flags]
### SEE ALSO
* [vela system](vela_system.md) - System management utilities
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela template
---
title: vela template
---
Manage templates
@ -20,7 +22,7 @@ Manage templates
### SEE ALSO
* [vela](vela.md) -
* [vela template context](vela_template_context.md) - Show context parameters
* [vela](vela) -
* [vela template context](vela_template_context) - Show context parameters
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela template context
---
title: vela template context
---
Show context parameters
@ -30,6 +32,6 @@ vela template context
### SEE ALSO
* [vela template](vela_template.md) - Manage templates
* [vela template](vela_template) - Manage templates
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela traits
---
title: vela traits
---
List traits
@ -30,6 +32,6 @@ vela traits
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela up
---
title: vela up
---
Apply an appfile
@ -25,6 +27,6 @@ vela up
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela version
---
title: vela version
---
Prints out build version information
@ -24,6 +26,6 @@ vela version [flags]
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,4 +1,6 @@
## vela workloads
---
title: vela workloads
---
List workloads
@ -30,6 +32,6 @@ vela workloads
### SEE ALSO
* [vela](vela.md) -
* [vela](vela) -
###### Auto generated by spf13/cobra on 20-Mar-2021

View File

@ -1,5 +1,5 @@
---
title: Core Concepts
title: Core Concepts of KubeVela
---
*"KubeVela is a scalable way to create PaaS-like experience on Kubernetes"*
@ -24,15 +24,19 @@ This template based workflow make it possible for platform team enforce best pra
Below are the core building blocks in KubeVela that make this happen.
## Application
## `Application`
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application definition with simplified primitives.
Having an "application" concept is important to for any app-centric platform to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, as an abstraction object, `Application` provides a much simpler path for on-boarding Kubernetes capabilities without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Why Choose `Application` as the Main Abstraction
Having an "application" concept is important to any developer-centric platform to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, as an abstraction object, `Application` provides a much simpler path for on-boarding Kubernetes capabilities without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Example
An example of `website` application with two components (i.e. `frontend` and `backend`) could be modeled as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website
@ -40,45 +44,47 @@ spec:
  components:
    - name: backend
      type: worker
      settings:
      properties:
        image: busybox
        cmd:
          - sleep
          - '1000'
    - name: frontend
      type: webservice
      settings:
      properties:
        image: nginx
      traits:
        - name: autoscaler
        - type: autoscaler
          properties:
            min: 1
            max: 10
        - name: sidecar
        - type: sidecar
          properties:
            name: "sidecar-test"
            image: "fluentd"
```
### Components
## Building the Abstraction
For each of the components in `Application`, its `.type` field references the detailed definition of this component (such as its workload type, template, parameters, etc.), and `.settings` are the user input values to instantiate it. Some typical component types are *Long Running Web Service*, *One-time Off Task* or *Redis Database*.
Unlike most of the higher level platforms, the `Application` abstraction in KubeVela is fully extensible and does not even have a fixed schema. Instead, it is composed of building blocks (app components and traits etc.) that allow you to onboard platform capabilities to this application definition with your own abstractions.
All supported component types expected to be pre-installed in the platform, or provided by component providers such as 3rd-party software vendors.
The building blocks used to abstract and model platform capabilities are named `ComponentDefinition` and `TraitDefinition`.
### Traits
### ComponentDefinition
Optionally, each component has a `.traits` section that augments its component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
You can think of `ComponentDefinition` as a *template* for a workload type. It carries templating, parameter, and workload characteristic information as a declarative API resource.
Essentially, traits are operational features provided by the platform, note that KubeVela allows users bring their own traits as well. To attach a trait, use `.name` field to reference the specific trait definition, and `.properties` field to set detailed configuration values of the given trait.
Hence, the `Application` abstraction essentially declares how users want to **instantiate** given component definitions. Specifically, the `.type` field references the name of an installed `ComponentDefinition` and `.properties` are the user-set values to instantiate it.
We also reference component types and traits as *"capabilities"* in KubeVela.
Some typical component definitions are *Long Running Web Service*, *One-time Off Task* or *Redis Database*. All component definitions are expected to be pre-installed in the platform, or provided by component providers such as 3rd-party software vendors.
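The shape of such a definition object, as a minimal skeleton mirroring the `worker` example elsewhere in these docs, is:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: worker
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        // CUE template and parameter list go here
```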
## Definitions
### TraitDefinition
Both the schemas of workload settings and trait properties in `Application` are enforced by a set of definition objects. The platform teams or component providers are responsible for registering and managing definition objects in target cluster following [workload definition](https://github.com/oam-dev/spec/blob/master/4.workload_types.md) and [trait definition](https://github.com/oam-dev/spec/blob/master/6.traits.md) specifications in Open Application Model (OAM).
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
Specifically, definition object carries the templating information of this capability. Currently, KubeVela supports [Helm](http://helm.sh/) charts and [CUE](https://github.com/cuelang/cue) modules as definitions which means you could use KubeVela to deploy Helm charts and CUE modules as application components, or claim them as traits. More capability types support such as [Terraform](https://www.terraform.io/) is also work in progress.
You can think of traits as operational features provided by the platform. To attach a trait to a component instance, the user uses the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
We also reference component definitions and trait definitions as *"capability definitions"* in KubeVela.
## Environment
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "deployment environments" or "environments" for short. Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
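For example, creating and switching to such an environment with the `vela` CLI looks like this (commands as documented in the CLI reference in this changeset; the environment name is illustrative):

```
# create a "test" environment backed by its own namespace
vela env init test --namespace test --email my@email.com

# switch the current environment
vela env set test

# list environments
vela env ls
```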

docs/cue/advanced.md Normal file
View File

@ -0,0 +1,245 @@
---
title: Advanced Features
---
As a Data Configuration Language, CUE allows you to do some advanced templating magic in definition objects.
## Render Multiple Resources With a Loop
You can define the for-loop inside the `outputs`.
> Note that in this case the type of the `parameter` field used in the for-loop must be a map.
Below is an example that will render multiple Kubernetes Services in one trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: expose
spec:
  schematic:
    cue:
      template: |
        parameter: {
          http: [string]: int
        }
        outputs: {
          for k, v in parameter.http {
            "\(k)": {
              apiVersion: "v1"
              kind:       "Service"
              spec: {
                selector: app: context.name
                ports: [{
                  port:       v
                  targetPort: v
                }]
              }
            }
          }
        }
```
The usage of this trait could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
...
traits:
- type: expose
properties:
http:
myservice1: 8080
myservice2: 8081
```
## Execute HTTP Request in Trait Definition
The trait definition can send an HTTP request and capture the response to help you render the resource, with the keyword `processing`.
You can define the HTTP request `method`, `url`, `body`, `header` and `trailer` in the `processing.http` section, and the returned data will be stored in `processing.output`.
> Please ensure the target HTTP server returns **JSON data**.
Then you can reference the returned data from `processing.output` in `patch` or `output/outputs`.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: auth-service
spec:
schematic:
cue:
template: |
parameter: {
serviceURL: string
}
processing: {
output: {
token?: string
}
// The target server will return a JSON data with `token` as key.
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
body?: bytes
header: {}
trailer: {}
}
}
}
patch: {
data: token: processing.output.token
}
```
In the above example, this trait definition will send a request to get the `token` data, and then patch the data to the given component instance.
## Data Passing
A trait definition can read the generated API resources (rendered from `output` and `outputs`) of a given component definition.
> KubeVela will ensure the component definitions are always rendered before trait definitions.
Specifically, `context.output` contains the rendered workload API resource (whose GVK is indicated by `spec.workload` in the component definition), and `context.outputs.<xx>` contains all the other rendered API resources.
Below is an example for data passing:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
ports: [{containerPort: parameter.port}]
envFrom: [{
configMapRef: name: context.name + "game-config"
}]
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
outputs: gameconfig: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: context.name + "game-config"
}
data: {
enemies: parameter.enemies
lives: parameter.lives
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
lives: string
enemies: string
port: int
}
---
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
path: string
exposePort: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: parameter.exposePort
targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
}]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
labels: config: context.outputs.gameconfig.data.enemies
spec: {
rules: [{
host: parameter.domain
http: {
paths: [{
path: parameter.path
backend: {
serviceName: context.name
servicePort: parameter.exposePort
}
}]
}
}]
}
}
```
In detail, when rendering the `worker` `ComponentDefinition`:
1. the rendered Kubernetes Deployment resource is stored in `context.output`,
2. all other rendered resources are stored in `context.outputs.<xx>`, where `<xx>` is the unique name in each `template.outputs`.
Thus, the `TraitDefinition` can read the rendered API resources (e.g. `context.outputs.gameconfig.data.enemies`) from the `context`.

docs/cue/basic.md Normal file

@ -0,0 +1,548 @@
---
title: Learning CUE
---
This document explains in detail how to use CUE to encapsulate and abstract a given capability in Kubernetes.
> Please make sure you have already learned about the `Application` custom resource before reading the following guide.
## Overview
The reasons why KubeVela supports CUE as a first-class solution to design abstractions can be summarized as below:
- **CUE is designed for large scale configuration.** CUE can understand a
configuration worked on by engineers across a whole company and safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal to define and ship production level applications at web scale.
- **CUE supports first-class code generation and automation.** CUE can integrate with existing tools and workflows naturally, while other tools would have to build complex custom solutions. For example, it can generate OpenAPI schemas with Go code. This is how KubeVela builds developer tools and GUI interfaces based on CUE templates.
- **CUE integrates very well with Go.**
KubeVela is built with Go, just like most projects in the Kubernetes ecosystem. CUE is itself implemented in Go and exposes a rich Go API. KubeVela integrates CUE as its core library and works as a Kubernetes controller. With the help of CUE, KubeVela can easily handle data constraint problems.
> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
## Prerequisites
Please make sure below CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
## CUE CLI Basic
Below is some basic CUE data; you can define both schema and values in the same file, in almost the same format:
```
a: 1.5
a: float
b: 1
b: int
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
CUE is a superset of JSON; we can use it like JSON with the following conveniences:
* C style comments,
* quotes may be omitted from field names without special characters,
* commas at the end of fields are optional,
* a comma after the last element in a list is allowed,
* outer curly braces are optional.
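For instance, the JSON object `{"a": 1.5, "d": [1, 2, 3]}` can be written as plain CUE:

```
// field names need no quotes; outer braces and trailing commas are optional
a: 1.5
d: [1, 2, 3]
```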
CUE has powerful CLI commands. Let's keep the data in a file named `first.cue` and try them out.
* Format the CUE file. If you're using GoLand or a similar JetBrains IDE,
you can [configure format on save](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
This command will not only format the CUE, but also point out invalid schema. That's very useful.
```shell
cue fmt first.cue
```
* Schema check. Besides `cue fmt`, you can also use `cue vet` to check the schema.
```shell
cue vet first.cue
```
* Calculate/render the result. `cue eval` will evaluate the CUE file and render the result.
You can see the result doesn't contain `a: float` and `b: int`, because these two constraints have been resolved,
while `e: string` has no definite value, so it is kept as-is.
```shell
$ cue eval first.cue
a: 1.5
b: 1
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
* Render a specified result. For example, if we only want to know the result of `b` in the file, we can specify it with the `-e` flag.
```shell
$ cue eval -e b first.cue
1
```
* Export the result. `cue export` will export the result with final value. It will report an error if some variables are not definitive.
```shell
$ cue export first.cue
e: cannot convert incomplete value "string" to JSON:
./first.cue:9:4
```
We can complete the value by giving a value to `e`, for example:
```shell
echo "e: \"abc\"" >> first.cue
```
Then, the command will work. By default, the result will be rendered in JSON format.
```shell
$ cue export first.cue
{
"a": 1.5,
"b": 1,
"d": [
1,
2,
3
],
"g": {
"h": "abc"
},
"e": "abc"
}
```
* Export the result in YAML format.
```shell
$ cue export first.cue --out yaml
a: 1.5
b: 1
d:
- 1
- 2
- 3
g:
h: abc
e: abc
```
* Export the result for a specified variable.
```shell
$ cue export -e g first.cue
{
"h": "abc"
}
```
For now, you have learned the most useful CUE CLI operations.
## CUE Language Basic
* Data structure: Below is the basic data structure of CUE.
```shell
// float
a: 1.5
// int
b: 1
// string
c: "blahblahblah"
// array
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
// bool
e: true
// struct
f: {
a: 1.5
b: 1
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
g: {
h: "abc"
}
}
// null
j: null
```
* Define a custom CUE type. You can use the `#` symbol to declare that a variable represents a CUE type.
```
#abc: string
```
Let's name it `second.cue`. Then `cue export` won't complain, as `#abc` is a type, not an incomplete value.
```shell
$ cue export second.cue
{}
```
You can also define a more complex custom struct, such as:
```
#abc: {
x: int
y: string
z: {
a: float
b: bool
}
}
```
It's widely used in KubeVela to define templates and do validation.
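For example, here is a small sketch of using such a type for validation; unifying a value with `#abc` makes `cue vet` reject any field that violates the schema:

```
#abc: {
	x: int
	y: string
}
// data must satisfy #abc; e.g. `y: 1` here would fail `cue vet`
data: #abc & {
	x: 1
	y: "hello"
}
```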
## CUE Templating and References
Let's try to define a CUE template with the knowledge just learned.
1. Define a struct variable `parameter`.
```shell
parameter: {
name: string
image: string
}
```
Let's save it in a file called `deployment.cue`.
2. Define a more complex struct variable `template` and reference the variable `parameter`.
```
template: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
People who are familiar with Kubernetes may have recognized that this is a template for a Kubernetes Deployment. The `parameter` part
holds the parameters of the template.
Add it into `deployment.cue`.
3. Then, let's supply the values by adding the following code block:
```
parameter: {
    name: "mytest"
    image: "nginx:v1"
}
```
4. Finally, let's export it in YAML:
```shell
$ cue export deployment.cue -e template --out yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: mytest
image: nginx:v1
metadata:
labels:
app.oam.dev/component: mytest
selector:
matchLabels:
app.oam.dev/component: mytest
```
## Advanced CUE Schematic
* Open structs and lists. Using `...` in a list or struct means the object is open.
   - A list like `[...string]` means it can hold multiple string elements.
If we don't add `...`, then `[string]` means the list can only have one `string` element in it.
   - A struct like the one below means the struct can contain unknown fields.
```
{
abc: string
...
}
```
* Operator `|`: it represents a disjunction, i.e. a value could be either case. Below is an example where the variable `a` could be a string or an int.
```shell
a: string | int
```
* Default value: we can use the `*` symbol to mark a default value for a variable. It is usually used together with `|`,
which represents a default for some type. Below is an example where the variable `a` is an `int` whose default value is `1`.
```shell
a: *1 | int
```
* Optional variable. In some cases, a variable may not be used; such optional variables are defined with `?:`.
In the example below, `a` is an optional variable, `x` and `z` in `#my` are optional, while `y` is required.
```
a ?: int
#my: {
x ?: string
y : int
z ?:float
}
```
Optional variables can be skipped; that usually works together with conditional logic.
Specifically, if some field does not exist, the CUE grammar is `if _variable_ != _|_`, for example:
```
parameter: {
name: string
image: string
config?: [...#Config]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.config != _|_ {
config: parameter.config
}
}]
}
...
}
```
* Operator `&`: it is used to unify two variables.
```shell
a: *1 | int
b: 3
c: a & b
```
Save it in a file named `third.cue`.
You can evaluate the result by using `cue eval`:
```shell
$ cue eval third.cue
a: 1
b: 3
c: 3
```
* Conditional statements: these are really useful when you have cascading operations, where different values affect different results.
So you can do `if..else` logic in the template.
```shell
price: number
feel: *"good" | string
// Feel bad if price is too high
if price > 100 {
feel: "bad"
}
price: 200
```
Save it in a file named `fourth.cue`.
You can evaluate the result by using `cue eval`:
```shell
$ cue eval fourth.cue
price: 200
feel: "bad"
```
Another example uses a bool type as a parameter.
```
parameter: {
name: string
image: string
useENV: bool
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
}
```
* For loops: if you want to avoid duplication, you may want to use a for loop.
- Loop for Map
```cue
parameter: {
name: string
image: string
env: [string]: string
}
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
```
- Loop for type
```
#a: {
"hello": "Barcelona"
"nihao": "Shanghai"
}
for k, v in #a {
"\(k)": {
nameLen: len(v)
value: v
}
}
```
- Loop for Slice
```cue
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
}
}
```
Note that we use `"\( _my-statement_ )"` for interpolation inside a string.
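A tiny self-contained sketch of such interpolation:

```
who: "vela"
// len(who) is evaluated inside the string, yielding "hello vela (4)"
msg: "hello \(who) (\(len(who)))"
```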
## Import CUE Internal Packages
CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which also can be used in KubeVela.
Below is an example that uses `strings.Join` to concatenate a string list into one string.
```cue
import ("strings")
parameter: {
outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
}
output: {
spec: {
if len(parameter.outputs) > 0 {
_x: [ for _, v in parameter.outputs {
"\(v.ip) \(v.hostname)"
}]
message: "Visiting URL: " + strings.Join(_x, "")
}
}
}
```
## Import Kube Package
KubeVela automatically generates all K8s resources as internal packages by reading the K8s OpenAPI schema from the
installed K8s cluster.
You can use these packages with the format `kube/<apiVersion>` in KubeVela's CUE templates, in the same way
as the CUE internal packages.
For example, `Deployment` can be used as:
```cue
import (
apps "kube/apps/v1"
)
parameter: {
name: string
}
output: apps.#Deployment
output: {
metadata: name: parameter.name
}
```
Service can be used as below (importing the package with an alias is not necessary):
```cue
import ("kube/v1")
output: v1.#Service
output: {
metadata: {
"name": parameter.name
}
spec: type: "ClusterIP",
}
parameter: {
name: "myapp"
}
```
Even installed CRDs work:
```
import (
oam "kube/core.oam.dev/v1alpha2"
)
output: oam.#Application
output: {
metadata: {
"name": parameter.name
}
}
parameter: {
name: "myapp"
}
```

docs/cue/component.md Normal file

@ -0,0 +1,357 @@
---
title: Defining Components with CUE
---
This section introduces how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned about the [Definition CRD](../platform-engineers/definition-and-templates) in KubeVela.
## Declare `ComponentDefinition`
Here is a CUE based `ComponentDefinition` example which provides an abstraction for a stateless workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: stateless
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
}
}
```
In detail:
- `.spec.workload` is required to indicate the workload type of this component.
- `.spec.schematic.cue.template` is a CUE template, specifically:
* The `output` field defines the template for the abstraction.
* The `parameter` field defines the template parameters, i.e. the configurable properties exposed in the `Application` abstraction (a JSON schema will be automatically generated based on them).
Let's declare another component named `task`, i.e. an abstraction for run-to-completion workloads.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: task
annotations:
definition.oam.dev/description: "Describes jobs that run code or a script to completion."
spec:
workload:
definition:
apiVersion: batch/v1
kind: Job
schematic:
cue:
template: |
output: {
apiVersion: "batch/v1"
kind: "Job"
spec: {
parallelism: parameter.count
completions: parameter.count
template: spec: {
restartPolicy: parameter.restart
containers: [{
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
parameter: {
count: *1 | int
image: string
restart: *"Never" | string
cmd?: [...string]
}
```
Save the above `ComponentDefinition` objects to files and install them to your Kubernetes cluster with `$ kubectl apply -f stateless-def.yaml -f task-def.yaml`
## Declare an `Application`
The `ComponentDefinition` can be instantiated in `Application` abstraction as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: hello
type: stateless
properties:
image: crccheck/hello-world
name: mysvc
- name: countdown
type: task
properties:
image: centos:7
cmd:
- "bin/bash"
- "-c"
- "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
```
### Under The Hood
<details>
The above application resource will generate and manage the following Kubernetes resources in your target cluster, based on the `output` in the CUE template and the user input in the `Application` properties.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
... # skip tons of metadata info
spec:
template:
spec:
containers:
- name: mysvc
image: crccheck/hello-world
metadata:
labels:
app.oam.dev/component: mysvc
selector:
matchLabels:
app.oam.dev/component: mysvc
---
apiVersion: batch/v1
kind: Job
metadata:
name: countdown
... # skip tons of metadata info
spec:
parallelism: 1
completions: 1
template:
metadata:
name: countdown
spec:
containers:
- name: countdown
image: 'centos:7'
command:
- bin/bash
- '-c'
- for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
restartPolicy: Never
```
</details>
## CUE `Context`
KubeVela allows you to reference the runtime information of your application via the `context` keyword.
The most widely used context values are the application name (`context.appName`) and the component name (`context.name`).
```cue
context: {
appName: string
name: string
}
```
For example, let's say you want to use the component name filled in by users as the container name in the workload instance:
```cue
parameter: {
image: string
}
output: {
...
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
...
}
```
> Note that the `context` information is auto-injected before resources are applied to the target cluster.
> TBD: full available information in CUE `context`.
## Composition
It's common that a component definition is composed of multiple API resources, for example, a `webserver` component composed of a Deployment and a Service. CUE is a great solution to achieve this in simplified primitives.
> Another approach to composition in KubeVela is, of course, [using Helm](/docs/helm/component).
## How-to
KubeVela requires you to define the template of the workload type in the `output` section, and leave all the other resource templates in the `outputs` section, with the format below:
```cue
outputs: <unique-name>:
<full template data>
```
> The reason for this requirement is that KubeVela needs to know which resource it is currently rendering as the workload, so it can do some "magic" such as patching annotations/labels or other data during rendering.
Below is the example for `webserver` definition:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webserver
annotations:
definition.oam.dev/description: "webserver is a combo of Deployment + Service"
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["env"] != _|_ {
env: parameter.env
}
if context["config"] != _|_ {
env: context.config
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}
}
}]
}
}
}
}
// an extra template
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector: {
"app.oam.dev/component": context.name
}
ports: [
{
port: parameter.port
targetPort: parameter.port
},
]
}
}
parameter: {
image: string
cmd?: [...string]
port: *80 | int
env?: [...{
name: string
value?: string
valueFrom?: {
secretKeyRef: {
name: string
key: string
}
}
}]
cpu?: string
}
```
The user could now declare an `Application` with it:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webserver-demo
namespace: default
spec:
components:
- name: hello-world
type: webserver
properties:
image: crccheck/hello-world
port: 8000
env:
- name: "foo"
value: "bar"
cpu: "100m"
```
It will generate and manage the API resources below in the target cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
hello-world-v1 1/1 1 1 15s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
```
## What's Next
Please check the [Learning CUE](./basic) documentation about why we support CUE as first-class templating solution and more details about using CUE efficiently.


@ -0,0 +1,56 @@
---
title: Define Resources in a Different Namespace from the Application
---
In this section, we will introduce how to use CUE templates to create resources (workloads/traits) in a different namespace from the application.
By default, the `metadata.namespace` of a K8s resource in the CUE template is automatically filled with the same namespace as the application.
If you want to create K8s resources running in a specific namespace which is different from the application's, you can set the `metadata.namespace` field.
KubeVela will create the resources in the specified namespace, and create a `ResourceTracker` object as the owner of those resources.
## Usage
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
spec:
definitionRef:
name: deployments.apps
schematic:
cue:
template: |
parameter: {
name: string
image: string
namespace: string // the `namespace` parameter indicates the resource may be located in a different namespace from the application
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
namespace: parameter.namespace
}
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
## Limitations
If you update a definition by changing the `metadata.namespace` field, KubeVela will create new resources in the new namespace but will not delete the old resources.
We will fix this limitation in the near future.

docs/cue/patch-trait.md Normal file

@ -0,0 +1,432 @@
---
title: Patch Trait
---
**Patch** is a very common pattern for trait definitions, i.e. the app operators can amend/patch attributes of the component instance (normally the workload) to enable certain operational features such as sidecars or node affinity rules (and this is done **before** the resources are applied to the target cluster).
This pattern is extremely useful when the component definition is provided by a third-party component provider (e.g. a software distributor), so app operators do not have the privilege to change its template.
> Note that even though a patch trait itself is defined in CUE, it can patch any component regardless of how its schematic is defined (i.e. CUE, Helm, or any other supported schematic approach).
Below is an example for `node-affinity` trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "affinity specify node affinity and toleration"
name: node-affinity
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
if parameter.affinity != _|_ {
affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
matchExpressions: [
for k, v in parameter.affinity {
key: k
operator: "In"
values: v
},
]}]
}
if parameter.tolerations != _|_ {
tolerations: [
for k, v in parameter.tolerations {
effect: "NoSchedule"
key: k
operator: "Equal"
value: v
}]
}
}
}
parameter: {
affinity?: [string]: [...string]
tolerations?: [string]: string
}
```
The patch trait above assumes the target component instance has the `spec.template.spec.affinity` field. Hence, we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
Now the users could declare they want to add node affinity rules to the component instance as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
image: oamdev/testapp:v1
traits:
- type: "node-affinity"
properties:
affinity:
server-owner: ["owner1","owner2"]
resource-pool: ["pool1","pool2","pool3"]
tolerations:
resource-pool: "broken-pool1"
server-owner: "old-owner"
```
### Known Limitations
By default, a patch trait in KubeVela leverages the CUE `merge` operation. It has the following known constraints:
- It cannot handle conflicts.
  - For example, if a component instance has already been set with the value `replicas=5`, then any patch trait that patches the `replicas` field will fail, i.e. you should not expose the `replicas` field in its component definition schematic.
- Array lists in the patch will be merged following the order of index. It cannot handle duplication of array list members. This can be fixed by the feature below.
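As a minimal sketch of why conflicts fail: unifying two different concrete values in CUE evaluates to bottom (`_|_`), which is an error:

```cue
replicas: 5
replicas: 3 // 5 & 3 have no common value, so `replicas` is _|_ (a conflict error)
```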
### Strategy Patch
The `strategy patch` is useful for patching array lists.
> Note that this is not a standard CUE feature; KubeVela enhanced CUE for this case.
With the `//+patchKey=<key_name>` annotation, the merging logic of two array lists will not follow the default CUE behavior. Instead, the list is treated as an object and merged with a strategic merge approach:
- if a duplicated key is found, the patch data will be merged with the existing values;
- if no duplication is found, the patch will be appended to the array list.
An example of a strategy patch trait looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
In the above example we defined the `patchKey` as `name`, which is the parameter key of the container name. In this case, if the workload doesn't have a container with the same name, the sidecar container will be appended to the `spec.template.spec.containers` array list. If the workload already has a container with the same name as this `sidecar` trait, then a merge operation will happen instead of an append (which would lead to duplicated containers).
If `patch` and `outputs` both exist in one trait definition, the `patch` operation will be handled first, and then the `outputs` will be rendered.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "expose the app"
name: expose
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {spec: template: metadata: labels: app: context.name}
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: name: context.name
spec: {
selector: app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
},
]
}
}
parameter: {
http: [string]: int
}
```
So the above trait, which attaches a Service to the given component instance, will first patch a corresponding label to the workload and then render the Service resource based on the template in `outputs`.
## More Use Cases of Patch Trait
Patch traits are in general pretty useful for separating operational concerns from the component definition; here are some more examples.
### Add Labels
For example, patch a common label (virtual group) to the component instance.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Add virtual group labels"
name: virtualgroup
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: {
metadata: labels: {
if parameter.scope == "namespace" {
"app.namespace.virtual.group": parameter.group
}
if parameter.scope == "cluster" {
"app.cluster.virtual.group": parameter.group
}
}
}
}
parameter: {
group: *"default" | string
scope: *"namespace" | string
}
```
Then it could be used like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
traits:
- type: virtualgroup
properties:
group: "my-group1"
scope: "cluster"
```
### Add Annotations
Similar to common labels, you could also patch the component instance with annotations. The annotation value should be a JSON string.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Specify auto scale by annotation"
name: kautoscale
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
import "encoding/json"
patch: {
metadata: annotations: {
"my.custom.autoscale.annotation": json.Marshal({
"minReplicas": parameter.min
"maxReplicas": parameter.max
})
}
}
parameter: {
min: *1 | int
max: *3 | int
}
```
### Add Pod Environments
Injecting system environment variables into Pods is also a very common use case.
> This case relies on the strategy merge patch, so don't forget to add `+patchKey=name` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add env into your pods"
name: env
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
parameter: {
env: [string]: string
}
```
### Inject `ServiceAccount` Based on External Auth Service
In this example, the service account is dynamically requested from an authentication service and patched into the component instance.
This example puts the UID token in the HTTP header, but you can also use the request body if you prefer.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "dynamically specify service account"
name: service-account
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
processing: {
output: {
credentials?: string
}
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
header: {
"authorization.token": parameter.uidtoken
}
}
}
}
patch: {
spec: template: spec: serviceAccountName: processing.output.credentials
}
parameter: {
uidtoken: string
serviceURL: string
}
```
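For illustration only (the token value and service URL below are placeholders), attaching this trait could look like:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: service-account
          properties:
            uidtoken: "<your-uid-token>"                  # placeholder
            serviceURL: "https://auth.example.com/token"  # placeholder
```

During rendering, KubeVela sends a GET request to `serviceURL` with the token in the `authorization.token` header, and patches the returned `credentials` in as the Pod's `serviceAccountName`.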
The `processing.http` section is an advanced feature that allows a trait definition to send an HTTP request while rendering the resource. Please refer to the [Execute HTTP Request in Trait Definition](#Processing-Trait) section for more details.
### Add `InitContainer`
An [`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) is useful for pre-defining operations in an image and running them before the app container starts.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add an init container and use shared volume with pod"
name: init-container
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.appMountPath
}]
}]
initContainers: [{
name: parameter.name
image: parameter.image
if parameter.command != _|_ {
command: parameter.command
}
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}]
// +patchKey=name
volumes: [{
name: parameter.mountName
emptyDir: {}
}]
}
}
parameter: {
name: string
image: string
command?: [...string]
mountName: *"workdir" | string
appMountPath: string
initMountPath: string
}
```
The usage could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
image: oamdev/testapp:v1
traits:
- type: "init-container"
properties:
name: "install-container"
image: "busybox"
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://info.cern.ch
mountName: "workdir"
appMountPath: "/usr/share/nginx/html"
initMountPath: "/work-dir"
```
---
title: Status Write Back
---
This documentation will explain how to achieve status write back by using CUE templates in definition objects.
## Health Check
Trait will not have the `context.output`, other fields are the same.
An example of health check is shown below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
spec:
status:
healthPolicy: |
```
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
spec:
status:
The health check result will be recorded into the `Application` resource.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
components:
- name: myweb
type: worker
properties:
cmd:
- sleep
- "1000"
image: busybox
lives: "3"
traits:
- type: ingress
properties:
domain: www.example.com
http:
/: 80
status:
...
services:
Trait will not have the `context.output`, other fields are the same.
Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
spec:
status:
customStatus: |-
```
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
spec:
status:
docs/cue/trait.md Normal file
---
title: Defining Traits
---
In this section we will introduce how to define a trait.
## Simple Trait
A trait in KubeVela can be defined by simply referencing an existing Kubernetes API resource.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
definitionRef:
name: ingresses.networking.k8s.io
```
Let's attach this trait to a component instance in `Application`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
cmd:
- node
- server.js
image: oamdev/testapp:v1
port: 8080
traits:
- type: ingress
properties:
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: test
port:
number: 80
```
Note that in this case, all fields in the referenced resource's `spec` will be exposed to end users, and no metadata (e.g. `annotations` etc.) is allowed to be set as trait properties. Hence this approach is normally used when you want to bring your own CRD and controller as a trait, and it does not rely on `annotations` etc. as tuning knobs.
## Using CUE as Trait Schematic
The recommended approach is to define a CUE based schematic for the trait as well. In this case, it comes with abstraction and you have full flexibility to template any resources and fields as you want. Note that KubeVela requires all traits MUST be defined in the `outputs` section (not `output`) of the CUE template, with the format as below:
```cue
outputs: <unique-name>:
<full template data>
```
Below is an example for `ingress` trait.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
http: [string]: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
},
]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
spec: {
rules: [{
host: parameter.domain
http: {
paths: [
for k, v in parameter.http {
path: k
backend: {
serviceName: context.name
servicePort: v
}
},
]
}
}]
}
}
```
Let's attach this trait to a component instance in `Application`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
cmd:
- node
- server.js
image: oamdev/testapp:v1
port: 8080
traits:
- type: ingress
properties:
domain: test.my.domain
http:
"/api": 8080
```
CUE based trait definitions can also enable many other advanced scenarios such as patching and data passing. They will be explained in detail in the following documentation.
---
title: Managing Capabilities
---
## Managing Capabilities
In KubeVela, developers can install more capabilities (i.e. new workload types and traits) from any GitHub repo that contains OAM definition files. We call these GitHub repos _Capability Centers_.
KubeVela is able to discover OAM definition files in these repos automatically and sync them to your own KubeVela platform.
## Add a capability center
Add and sync a capability center in KubeVela:
```bash
$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
successfully sync 1/1 from my-center remote center
Successfully configured capability center my-center and sync from remote
$ vela cap center sync my-center
successfully sync 1/1 from my-center remote center
sync finished
```
Now, this capability center `my-center` is ready to use.
## List capability centers
You are allowed to add more capability centers and list them.
```bash
$ vela cap center ls
NAME ADDRESS
my-center https://github.com/oam-dev/catalog/tree/master/registry
```
## [Optional] Remove a capability center
Or, remove one.
```bash
$ vela cap center remove my-center
```
## List all available capabilities in capability center
Or, list all available capabilities in a certain center.
```bash
$ vela cap ls my-center
NAME CENTER TYPE DEFINITION STATUS APPLIES-TO
kubewatch my-center trait kubewatches.labs.bitnami.com uninstalled []
```
## Install a capability from capability center
Now let's try to install the new trait named `kubewatch` from `my-center` to your own KubeVela platform.
> [KubeWatch](https://github.com/bitnami-labs/kubewatch) is a Kubernetes plugin that watches events and publishes notifications to Slack channel etc. We can use it as a trait to watch important changes of your app and notify the platform administrators via Slack.
Install `kubewatch` trait from `my-center`.
```bash
$ vela cap install my-center/kubewatch
Installing trait capability kubewatch
"my-repo" has been added to your repositories
2020/11/06 16:19:30 [debug] creating 1 resource(s)
2020/11/06 16:19:30 [debug] CRD kubewatches.labs.bitnami.com is already present. Skipping.
2020/11/06 16:19:37 [debug] creating 3 resource(s)
Successfully installed chart (kubewatch) with release name (kubewatch)
Successfully installed capability kubewatch from my-center
```
## Use the newly installed capability
Let's first check that the `kubewatch` trait appears on your platform:
```bash
$ vela traits
Synchronizing capabilities from cluster⌛ ...
Sync capabilities successfully ✅ (no changes)
TYPE CATEGORY DESCRIPTION
kubewatch trait Add a watch for resource
...
```
Great! Now let's deploy an app via Appfile.
```bash
$ cat << EOF > vela.yaml
name: testapp
services:
testsvc:
type: webservice
image: crccheck/hello-world
port: 8000
route:
domain: testsvc.example.com
EOF
```
```bash
$ vela up
```
Then let's add `kubewatch` as a trait in this Appfile.
```bash
$ cat << EOF >> vela.yaml
kubewatch:
webhook: https://hooks.slack.com/<your-token>
EOF
```
> The `https://hooks.slack.com/<your-token>` is the Slack channel that your platform administrators are keeping an eye on.
Update the deployment:
```
$ vela up
```
Now, your platform administrators should receive notifications whenever important changes happen to your app. For example, a fresh new deployment.
![Image of Kubewatch](../../resources/kubewatch-notif.jpg)
## Uninstall a capability
> NOTE: make sure no apps are using the capability before uninstalling.
```bash
$ vela cap uninstall my-center/kubewatch
Successfully removed chart (kubewatch) with release name (kubewatch)
```
---
title: Managing Capabilities
---
In KubeVela, developers can install more capabilities (i.e. new component types and traits) from any GitHub repo that contains OAM definition files. We call these GitHub repos _Capability Centers_.
KubeVela is able to discover OAM definition files in these repos automatically and sync them to your own KubeVela platform.
## Add a capability center
Add and sync a capability center in KubeVela:
```bash
$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
successfully sync 1/1 from my-center remote center
Successfully configured capability center my-center and sync from remote
$ vela cap center sync my-center
successfully sync 1/1 from my-center remote center
sync finished
```
Now, this capability center `my-center` is ready to use.
## List capability centers
You are allowed to add more capability centers and list them.
```bash
$ vela cap center ls
NAME ADDRESS
my-center https://github.com/oam-dev/catalog/tree/master/registry
```
## [Optional] Remove a capability center
Or, remove one.
```bash
$ vela cap center remove my-center
```
## List all available capabilities in capability center
Or, list all available capabilities in certain center.
```bash
$ vela cap ls my-center
NAME CENTER TYPE DEFINITION STATUS APPLIES-TO
clonesetservice my-center componentDefinition clonesets.apps.kruise.io uninstalled []
```
## Install a capability from capability center
Now let's try to install the new component named `clonesetservice` from `my-center` to your own KubeVela platform.
You need to install OpenKruise first.
```shell
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
```
Install `clonesetservice` component from `my-center`.
```bash
$ vela cap install my-center/clonesetservice
Installing component capability clonesetservice
Successfully installed capability clonesetservice from my-center
```
## Use the newly installed capability
Let's first check that `clonesetservice` appears on your platform:
```bash
$ vela components
NAME NAMESPACE WORKLOAD DESCRIPTION
clonesetservice vela-system clonesets.apps.kruise.io Describes long-running, scalable, containerized services
that have a stable network endpoint to receive external
network traffic from customers. If workload type is skipped
for any service defined in Appfile, it will be defaulted to
`webservice` type.
```
Great! Now let's deploy an app via Appfile.
```bash
$ cat << EOF > vela.yaml
name: testapp
services:
testsvc:
type: clonesetservice
image: crccheck/hello-world
port: 8000
EOF
```
```bash
$ vela up
Parsing vela appfile ...
Load Template ...
Rendering configs for service (testsvc)...
Writing deploy config to (.vela/deploy.yaml)
Applying application ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
Updating: core.oam.dev/v1alpha2, Kind=HealthScope in default
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc testsvc
```
Then you can get the CloneSet in your environment.
```shell
$ kubectl get clonesets.apps.kruise.io
NAME DESIRED UPDATED UPDATED_READY READY TOTAL AGE
testsvc 1 1 1 1 1 46s
```
## Uninstall a capability
> NOTE: make sure no apps are using the capability before uninstalling.
```bash
$ vela cap uninstall my-center/clonesetservice
Successfully uninstalled capability clonesetservice
```
---
title: Check Logs of Container
---
---
title: The Reference Documentation Guide of Capabilities
---
In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
---
title: Configuring Data/Env in Application
---
`vela` provides a `config` command to manage config data.
---
title: Setting Up Deployment Environment
---
A deployment environment is where you configure the workspace, the email for the certificate issuer, and the domain for your applications globally. A typical set of deployment environments is `test`, `staging`, `prod`, etc.
---
title: Execute Commands in Container
---
Run:
---
title: Automatically scale workloads by resource utilization metrics and cron
---
Install auto-scaler trait controller with helm
```shell script
helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait
```
Autoscale depends on metrics server, please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) at the beginning.
> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting cron auto-scaling policy
timezone: "America/Los_Angeles"
```
> The full specification of `autoscale` could show up by `$ vela show autoscale` or be found on [its reference documentation](../references/traits/autoscale)
2. Deploy an application
```
$ vela up
```
---
title: Monitoring Application
---
If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with `metrics` capability.
Install metrics trait controller with helm

```shell
helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
```
> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting metrics policy
The app will emit random latencies as metrics.
> The full specification of `metrics` could show up by `$ vela show metrics` or be found on [its reference documentation](../references/traits/metrics)
2. Deploy the application:
Then access the Prometheus dashboard via http://localhost:9090/targets
![Prometheus Dashboard](../../../resources/metrics.jpg)
</details>
---
title: Setting Rollout Strategy
---
> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
The `rollout` section is used to configure a canary strategy to release your app.
domain: "example.com"
```
> The full specification of `rollout` could show up by `$ vela show rollout` or be found on [its reference documentation](../references/traits/rollout)
Apply this `appfile.yaml`:
In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators such as the HTTP request success rate.
![alt](../../../resources/traffic-shifting-analysis.png)
In this sample, every `10s`, `5%` of the traffic will be shifted to the canary from the primary, until the traffic on the canary reaches `50%`. In the meantime, the number of canary instances will automatically scale to `replicas: 2`, as configured in the Appfile.
Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. When promoting, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion is finished.
![alt](../../../resources/promotion.png)
> Note: KubeVela's `Rollout` trait is implemented with [Weaveworks Flagger](https://flagger.app/) operator.
---
title: Setting Routes
---
The `route` section is used to configure the access to your app.
Install route trait controller with helm

```shell
helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait
```
> Note: route is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting route policy
rewriteTarget: /
```
> The full specification of `route` could show up by `$ vela show route` or be found on [its reference documentation](../references/traits/route)
Apply again:
- route: Visiting URL: http://example.com IP: <ingress-IP-address>
```
**In [kind cluster setup](../../install#kind)**, you can visit the service via localhost:
> If not in kind cluster, replace 'localhost' with ingress address
---
title: Appfile
---
A sample `Appfile` is as below:
## Schema
> Before learning about Appfile's detailed schema, we recommend you to get familiar with [core concepts](../concepts) in KubeVela.
### Prerequisites
- [Docker](https://docs.docker.com/get-docker/) installed on the host
- [KubeVela](../install) installed and configured
### 1. Download test app code
#### Alternative: Local testing without pushing image remotely
If you have local [kind](../install) cluster running, you may try the local push option. No remote container registry is needed in this case.
Add local option to `build`:
### [Optional] Configure another workload type
By now we have deployed a *[Web Service](references/workload-types/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](references/workload-types/task)* type in the same app:
Congratulations! You have just deployed an app using `Appfile`.
## What's Next?
Play more with your app:
- [Check Application Logs](./check-logs)
- [Execute Commands in Application Container](./exec-cmd)
- [Access Application via Route](./port-forward)
---
title: Port Forwarding
---
Once the web services of your application are deployed, you can access them locally via `port-forward`.
---
title: The Reference Documentation of Capabilities
---
In this documentation, we will show how to check the detailed schema of a given capability (i.e. component type or trait).
This may sound challenging, because every capability is a "plug-in" in KubeVela (even the built-in ones are). Also, it's by design that KubeVela allows platform administrators to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure those documentations for the system are up-to-date?
Thus, as an end user, the only thing you need to do is:
```console
$ vela show COMPONENT_TYPE or TRAIT --web
```
This command will automatically open the reference documentation for given component type or trait in your default browser.
Let's take `$ vela show webservice --web` as example. The detailed schema documentation for `Web Service` component type will show up immediately as below:
![](../../../resources/vela_show_webservice.jpg)
Note that the section named `Specification` even provides you with a full sample for the usage of this workload type, with a fake name `my-service-name`.
Similarly, we can also do `$ vela show autoscale`:
![](../../../resources/vela_show_autoscale.jpg)
With these auto-generated reference documentations, we could easily complete the application description by simply copying and pasting, for example:
This reference doc feature also works for the terminal-only case. For example:
```shell
$ vela show webservice
# Properties
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
```

Note that for all the built-in capabilities, we already published their reference documentation below:
- Workload Types
- [webservice](workload-types/webservice)
- [task](workload-types/task)
- [worker](workload-types/worker)
- Traits
- [route](traits/route)
- [autoscale](traits/autoscale)
- [rollout](traits/rollout)
- [metrics](traits/metrics)
- [scaler](traits/scaler)
---
title: KubeVela CLI
---
### Auto-completion
#### bash
```bash
To load completions in your current shell session:
$ source <(vela completion bash)
To load completions for every new session, execute once:
Linux:
$ vela completion bash > /etc/bash_completion.d/vela
MacOS:
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
```
#### zsh
```bash
To load completions in your current shell session:
$ source <(vela completion zsh)
To load completions for every new session, execute once:
$ vela completion zsh > "${fpath[1]}/_vela"
```
# KubeVela Dashboard (WIP)
KubeVela has a simple client side dashboard for you to interact with. The functionality is equivalent to the vela cli.
```bash
$ vela dashboard
```
> NOTE: this feature is still under development.
---
title: FAQ
---
- [Compare to X](#compare-to-x)
  - [What is the difference between KubeVela and Helm?](#what-is-the-difference-between-kubevela-and-helm)
- [Issues](#issues)
  - [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated)
  - [Error: ScopeDefinition exists](#error-scopedefinition-exists)
  - [You have reached your pull rate limit](#you-have-reached-your-pull-rate-limit)
  - [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists)
  - [How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#how-to-fix-issue-mutatingwebhookconfiguration-mutating-webhook-configuration-exists)
- [Operating](#operating)
  - [Autoscale: how to enable metrics server in various Kubernetes clusters?](#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters)
## Compare to X
Metrics server has to be enabled in `Operations/Add-ons` section of [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.
![](../../../../resources/install-metrics-server-in-ASK.jpg)
Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.
Have fun to [set autoscale](../../extensions/set-autoscale) on your application.
---
title: Autoscale
---
## Description
---
title: Ingress
---
## Description
Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
## Specification

List of all configuration options for an `Ingress` trait.
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
domain | | string | true |
http | | map[string]int | true |
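A minimal usage sketch of this trait in an `Application` (the component name, image, and domain are illustrative placeholders):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
        port: 8080
      traits:
        - type: ingress
          properties:
            domain: test.my.domain
            # map of URL path to service port
            http:
              "/": 8080
```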
---
title: Metrics
---
## Description
---
title: Rollout
---
## Description
---
title: Route
---
## Description
---
title: Scaler
---
## Description
---
title: Task
---
## Description
---
title: Webservice
---
## Description
---
title: Worker
---
## Description
docs/helm/component.md Normal file
---
title: Defining Components with Helm
---
In this section, we will introduce how to declare Helm charts as app components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
## Prerequisite
* [fluxcd/flux2](../install#3-optional-install-flux2): make sure you have installed flux2 per the [installation guide](https://kubevela.io/#/en/install).
## Declare `ComponentDefinition`
Here is an example `ComponentDefinition` that uses Helm as its schematic module.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webapp-chart
annotations:
definition.oam.dev/description: helm chart for webapp
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
helm:
release:
chart:
spec:
chart: "podinfo"
version: "5.1.4"
repository:
url: "http://oam.dev/catalog/"
```
In detail:
- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check for [Known Limitations](/docs/helm/known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
- `.spec.schematic.helm` contains information of Helm `release` and `repository` which leverages `fluxcd/flux2`.
  - i.e. the spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and the spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
## Declare an `Application`
Here is an example `Application`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
namespace: default
spec:
components:
- name: demo-podinfo
type: webapp-chart
properties:
image:
tag: "5.1.2"
```
The component `properties` is exactly the [overlay values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml) of the Helm chart.
Deploy the application, and after several minutes (it may take time to fetch the Helm chart), you can check that the Helm release is installed.
```shell
$ helm ls -A
myapp-demo-podinfo default 1 2021-03-05 02:02:18.692317102 +0000 UTC deployed podinfo-5.1.4 5.1.4
```
Check that the workload defined in the chart has been created successfully.
```shell
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
myapp-demo-podinfo 1/1 1 1 66m
```
Check that the values (`image.tag = 5.1.2`) from the application's `properties` are assigned to the chart.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.2"
```
### Generate Form from Helm Based Components
KubeVela will automatically generate an OpenAPI v3 JSON schema based on [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in a `ConfigMap` in the same `namespace` as the definition object. Furthermore, if `values.schema.json` is not provided by the chart author, KubeVela will generate the OpenAPI v3 JSON schema from its `values.yaml` file automatically.
Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) guide for more details on using this schema to render GUI forms.
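As a quick check, the generated schema ConfigMaps can be listed by label (KubeVela stores them in the same namespace as the definition, `vela-system` by default; the exact ConfigMap names may vary in your setup):

```shell
kubectl get configmap -n vela-system -l definition.oam.dev=schema
```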

82
docs/helm/known-issues.md Normal file
View File

@ -0,0 +1,82 @@
---
title: Known Limitations and Issues
---
## Limitations
Here are some known limitations for using a Helm chart as an application component.
### Workload Type Indicator
Following the best practices of microservices, KubeVela recommends that only one workload resource be present in one Helm chart. Please split your "super" Helm chart into multiple charts (i.e. components). Essentially, KubeVela relies on the `workload` field in the component definition to indicate the workload type it needs to take care of, for example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
...
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
```
```yaml
...
spec:
workload:
definition:
apiVersion: apps.kruise.io/v1alpha1
kind: Cloneset
```
Note that KubeVela won't fail if multiple workload types are packaged in one chart; the issue is that further operational behaviors such as rollout, revisions, and traffic management can only take effect on the indicated workload type.
### Always Use Full Qualified Name
The name of the workload should be templated with the [fully qualified application name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415), and please do NOT assign any value to `.Values.fullnameOverride`. As a best practice, Helm also highly recommends that new charts be created via the `$ helm create` command so the template names are automatically defined following this convention.
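For reference, a chart scaffolded by `$ helm create` names its workload via the fullname helper. Below is a sketch of the relevant lines of `templates/deployment.yaml`; the helper name `mychart.fullname` is a placeholder that depends on your chart's name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # resolves to "<release name>-<chart name>" unless fullnameOverride is set
  name: {{ include "mychart.fullname" . }}
```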
### Control the Application Upgrade
Changes made to the component `properties` will trigger a Helm release upgrade. This process is handled by Flux v2 Helm controller, hence you can define remediation
strategies in the schematic based on [Helm Release
documentation](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
and [specification](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)
in case failure happens during this upgrade.
For example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webapp-chart
spec:
...
schematic:
helm:
release:
chart:
spec:
chart: "podinfo"
version: "5.1.4"
upgrade:
remediation:
retries: 3
remediationStrategy: rollback
repository:
url: "http://oam.dev/catalog/"
```
One remaining issue is that for now it's hard to get helpful information from a live Helm release to figure out what happened if an upgrade failed. We will enhance observability to help users track the status of a Helm release at the application level.
## Issues
The known issues below will be fixed in upcoming releases.
### Rollout Strategy
For now, Helm based components cannot benefit from the [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait#update-application), if the application is updated, it can only be rolled out directly, without a canary or blue-green approach.
### Updating Trait Properties May Also Lead to Pod Restarts
Changes to trait properties may impact the component instance, and the Pods belonging to this workload instance will restart. In CUE based components this is avoidable since KubeVela has full control over the rendering process of the resources, but in Helm based components it's currently deferred to the Flux v2 controller.

View File

@ -1,15 +1,14 @@
---
title: Attach Traits
title: Attach Traits to Helm Based Components
---
## Attach Traits to Helm Based Components
Traits in KubeVela can be attached to Helm based component seamlessly.
Most traits in KubeVela can be attached to a Helm based component seamlessly. In the sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml),
to a Helm based component.
In this sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml) to a Helm based component.
```yaml
apiVersion: core.oam.dev/v1alpha2
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
@ -17,30 +16,29 @@ metadata:
spec:
components:
- name: demo-podinfo
type: webapp-chart
settings:
type: webapp-chart
properties:
image:
tag: "5.1.2"
traits:
- name: scaler
- type: scaler
properties:
replicas: 4
- name: virtualgroup
- type: virtualgroup
properties:
group: "my-group1"
type: "cluster"
```
> Note: when use Trait system with Helm module workload, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
> Note: when using traits with a Helm based component, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
> This is because KubeVela relies on the name to discover the workload; otherwise it cannot apply traits to it. KubeVela automatically generates a release name based on your `Application` name and component name, so make sure you never override the full name template in your Helm chart.
## Verify traits work correctly
You may wait a bit more time to check the trait works after deploying the application.
Because KubeVela may not discovery the target workload immediately when it's created because of reconciliation interval.
> You may need to wait a few seconds to check the trait attached because of reconciliation interval.
Check the scaler trait.
Check the `scaler` trait takes effect.
```shell
$ kubectl get manualscalertrait
NAME AGE
@ -51,7 +49,7 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
4
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
{
@ -60,16 +58,16 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata
}
```
## Update an application
## Update Application
After the application is deployed and workloads/traits are created successfully,
you can update the application, and corresponding changes will be applied to the
workload.
workload instances.
Let's make several changes to the configuration of the sample application.
```yaml
apiVersion: core.oam.dev/v1alpha2
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
@ -77,15 +75,15 @@ metadata:
spec:
components:
- name: demo-podinfo
type: webapp-chart
settings:
type: webapp-chart
properties:
image:
tag: "5.1.3" # 5.1.2 => 5.1.3
traits:
- name: scaler
- type: scaler
properties:
replicas: 2 # 4 => 2
- name: virtualgroup
- type: virtualgroup
properties:
group: "my-group2" # my-group1 => my-group2
type: "cluster"
@ -93,7 +91,7 @@ spec:
Apply the new configuration and check the results after several minutes.
Check the new values(`image.tag = 5.1.3`) from application's `settings` are assigned to the chart.
Check the new values (`image.tag = 5.1.3`) from application's `properties` are assigned to the chart.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.3"
@ -105,13 +103,13 @@ NAME NAMESPACE REVISION UPDATED ST
myapp-demo-podinfo default 2 2021-03-15 08:52:00.037690148 +0000 UTC deployed podinfo-5.1.4 5.1.4
```
Check the scaler trait.
Check the `scaler` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
2
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
{
@ -120,9 +118,9 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata
}
```
## Delete a trait
## Detach Trait
Let's have a try removing a trait from the application.
Let's try detaching a trait from the application.
```yaml
apiVersion: core.oam.dev/v1alpha2
@ -147,7 +145,7 @@ spec:
type: "cluster"
```
Apply the configuration and check `manualscalertrait` has been deleted.
Apply the application and check `manualscalertrait` has been deleted.
```shell
$ kubectl get manualscalertrait
No resources found

View File

@ -1,6 +1,7 @@
---
title: Install
title: Install KubeVela
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@ -14,15 +15,14 @@ If you don't have K8s cluster from Cloud Provider, you may pick either Minikube
> NOTE: If you are not using minikube or kind, please make sure to [install or enable ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) by yourself.
<Tabs
className="unique-tabs"
defaultValue="minikube"
values={[
{label: 'Minikube', value: 'minikube'},
{label: 'KinD', value: 'kind'},
]}>
<TabItem value="minikube">
className="unique-tabs"
defaultValue="minikube"
values={[
{label: 'Minikube', value: 'minikube'},
{label: 'KinD', value: 'kind'},
]}>
<TabItem value="minikube">
Follow the minikube [installation guide](https://minikube.sigs.k8s.io/docs/start/).
Once minikube is installed, create a cluster:
@ -36,6 +36,7 @@ Install ingress:
```shell script
minikube addons enable ingress
```
</TabItem>
<TabItem value="kind">
@ -69,6 +70,7 @@ Then install [ingress for kind](https://kind.sigs.k8s.io/docs/user/ingress/#ingr
```shell script
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
```
</TabItem>
</Tabs>
@ -141,21 +143,42 @@ These steps will install KubeVela controller and its dependency.
helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core
```
## 3. (Optional) Get KubeVela CLI
## 3. (Optional) Install flux2
This installation step is optional; it's only required if you want to register [Helm Chart](https://helm.sh/) based capabilities in KubeVela.
KubeVela relies on several CRDs and controllers from [fluxcd/flux2](https://github.com/fluxcd/flux2).
| CRD | Controller Image |
| ----------- | ----------- |
| helmrepositories.source.toolkit.fluxcd.io | fluxcd/source-controller:v0.9.0 |
| helmcharts.source.toolkit.fluxcd.io | - |
| buckets.source.toolkit.fluxcd.io | - |
| gitrepositories.source.toolkit.fluxcd.io | - |
| helmreleases.helm.toolkit.fluxcd.io | fluxcd/helm-controller:v0.8.0 |
You can install the whole flux2 suite from the [official website](https://github.com/fluxcd/flux2),
or install a minimal chart provided by KubeVela that contains only the required parts:
```shell
$ helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
```
## 4. (Optional) Get KubeVela CLI
Here are three ways to get KubeVela Cli:
<Tabs
className="unique-tabs"
defaultValue="script"
values={[
{label: 'Script', value: 'script'},
{label: 'Homebrew', value: 'homebrew'},
{label: 'Download directly from releases', value: 'download'},
]}>
<TabItem value="script">
className="unique-tabs"
defaultValue="script"
values={[
{label: 'Script', value: 'script'},
{label: 'Homebrew', value: 'homebrew'},
{label: 'Download directly from releases', value: 'download'},
]}>
<TabItem value="script">
**macOS/Linux**
** macOS/Linux **
```shell script
curl -fsSl https://kubevela.io/install.sh | bash
@ -166,16 +189,15 @@ curl -fsSl https://kubevela.io/install.sh | bash
```shell script
powershell -Command "iwr -useb https://kubevela.io/install.ps1 | iex"
```
</TabItem>
<TabItem value="homebrew">
</TabItem>
<TabItem value="homebrew">
**macOS/Linux**
```shell script
brew install kubevela
```
</TabItem>
<TabItem value="download">
</TabItem>
<TabItem value="download">
- Download the latest `vela` binary from the [releases page](https://github.com/oam-dev/kubevela/releases).
- Unpack the `vela` binary and add it to `$PATH` to get started.
@ -189,16 +211,16 @@ sudo mv ./vela /usr/local/bin/vela
>
> Newer versions of macOS are stricter about running downloaded software that isn't signed with an Apple developer key, and KubeVela doesn't support that yet.
> You can open your 'System Preference' -> 'Security & Privacy' -> General, click the 'Allow Anyway' to temporarily fix it.
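Alternatively, you can clear the quarantine attribute from the terminal (assuming the downloaded binary sits at `./vela`):

```shell script
xattr -d com.apple.quarantine ./vela
```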
</TabItem>
</TabItem>
</Tabs>
## 4. (Optional) Sync Capability from Cluster
## 5. (Optional) Sync Capability from Cluster
If you want to run applications from the `vela` CLI, you should sync capabilities first as below:
```shell script
vela workloads
vela components
```
```console
Automatically discover capabilities successfully ✅ Add(5) Update(0) Delete(0)
@ -225,7 +247,7 @@ worker Describes long-running, scalable, containerized services that running
receive external network traffic.
```
## 5. (Optional) Clean Up
## 6. (Optional) Clean Up
<details>
Run:
@ -244,15 +266,11 @@ Then clean up CRDs (CRDs are not removed via helm by default):
kubectl delete crd \
applicationconfigurations.core.oam.dev \
applicationdeployments.core.oam.dev \
autoscalers.standard.oam.dev \
components.core.oam.dev \
containerizedworkloads.core.oam.dev \
healthscopes.core.oam.dev \
issuers.cert-manager.io \
manualscalertraits.core.oam.dev \
metricstraits.standard.oam.dev \
podspecworkloads.standard.oam.dev \
routes.standard.oam.dev \
scopedefinitions.core.oam.dev \
traitdefinitions.core.oam.dev \
workloaddefinitions.core.oam.dev

View File

@ -1,6 +1,7 @@
---
title: Introduction
slug: /
title: Introduction to KubeVela
slug: /
---
![alt](../resources/KubeVela-01.png)
@ -65,4 +66,4 @@ KubeVela is a Kubernetes plugin for building upper layer platforms. It leverages
## Getting Started
[Install KubeVela](./getting-started/install) into any Kubernetes cluster to get started.
[Install KubeVela](./install) into any Kubernetes cluster to get started.

91
docs/kube/component.md Normal file
View File

@ -0,0 +1,91 @@
---
title: Defining Components with Raw Template
---
This section introduces how to use a raw template to declare app components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
## Declare `ComponentDefinition`
Here is a raw template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: kube-worker
namespace: default
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
kube:
template:
apiVersion: apps/v1
kind: Deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
ports:
- containerPort: 80
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
```
In detail, the `.spec.schematic.kube` field contains the template of a workload resource and
configurable parameters.
- `.spec.schematic.kube.template` is the raw template in YAML format.
- `.spec.schematic.kube.parameters` contains a set of configurable parameters. The `name`, `type`, and `fieldPaths` are required fields, `description` and `required` are optional fields.
- The parameter `name` must be unique in a `ComponentDefinition`.
  - `type` indicates the data type of the value set to the field. This is a required field which helps KubeVela generate an OpenAPI JSON schema for the parameters automatically. In a raw template, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
- `fieldPaths` in the parameter specifies an array of fields within the template that will be overwritten by the value of this parameter. Fields are specified as JSON field paths without a leading dot, for example
`spec.replicas`, `spec.containers[0].image`.
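Since `fieldPaths` is an array, a single parameter can overwrite several fields with the same value. A hypothetical `appName` parameter, for illustration only, might look like:

```yaml
parameters:
- name: appName
  required: true
  type: string
  fieldPaths:          # both fields receive the same value
  - "spec.template.metadata.labels.app"
  - "spec.selector.matchLabels.app"
```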
## Declare an `Application`
Here is an example `Application`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
namespace: default
spec:
components:
- name: mycomp
type: kube-worker
properties:
image: nginx:1.14.0
```
Since parameters only support basic data types, values in `properties` should be simple key-value pairs: `<parameterName>: <parameterValue>`.
Deploy the `Application` and verify the running workload instance.
```shell
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
mycomp 1/1 1 1 66m
```
And check that the parameter works.
```shell
$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
"nginx:1.14.0"
```

109
docs/kube/trait.md Normal file
View File

@ -0,0 +1,109 @@
---
title: Attach Traits to Raw Template Based Components
---
In this sample, we will attach two traits,
[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and
[virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml) to a component:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
namespace: default
spec:
components:
- name: mycomp
type: kube-worker
properties:
image: nginx:1.14.0
traits:
- type: scaler
properties:
replicas: 2
- type: virtualgroup
properties:
group: "my-group1"
type: "cluster"
```
## Verify
Deploy the application and verify traits work.
Check the `scaler` trait.
```shell
$ kubectl get manualscalertrait
NAME AGE
demo-podinfo-scaler-3x1sfcd34 2m
```
```shell
$ kubectl get deployment mycomp -o json | jq .spec.replicas
2
```
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
{
"app.cluster.virtual.group": "my-group1",
"app.kubernetes.io/name": "myapp"
}
```
## Update an Application
After the application is deployed and workloads/traits are created successfully,
you can update the application, and corresponding changes will be applied to the
workload.
Let's make several changes to the configuration of the sample application.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
namespace: default
spec:
components:
- name: mycomp
type: kube-worker
properties:
image: nginx:1.14.1 # 1.14.0 => 1.14.1
traits:
- type: scaler
properties:
replicas: 4 # 2 => 4
- type: virtualgroup
properties:
group: "my-group2" # my-group1 => my-group2
type: "cluster"
```
Apply the new configuration and check the results after several seconds.
> After updating, the workload instance name will be updated from `mycomp-v1` to `mycomp-v2`.
Check the new property value.
```shell
$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
"nginx:1.14.1"
```
Check the `scaler` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.replicas
4
```
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
{
"app.cluster.virtual.group": "my-group2",
"app.kubernetes.io/name": "myapp"
}
```

View File

@ -1,443 +0,0 @@
---
title: Learning CUE
---
This document will explain how to use [CUE](https://cuelang.org/) to encapsulate a given capability in Kubernetes and make it available for end users to consume in the `Application` CRD. Please make sure you have already learned about the `Application` custom resource before reading the following guide.
## Overview
The reasons why KubeVela supports CUE as its first-class templating solution can be summarized as below:
- **CUE is designed for large scale configuration.** CUE has the ability to understand a
configuration worked on by engineers across a whole company and to safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal to define and ship production level applications at web scale.
- **CUE supports first-class code generation and automation.** CUE can integrate with existing tools and workflows naturally, while other tools would have to build complex custom solutions. For example, generating OpenAPI schemas with Go code. This is how KubeVela builds developer tools and GUI interfaces based on the CUE templates.
- **CUE integrates very well with Go.**
KubeVela is built with Go just like most projects in the Kubernetes ecosystem. CUE is also implemented in Go and exposes a rich API. KubeVela integrates CUE as its core library and works as a Kubernetes controller. With the help of CUE, KubeVela can easily handle data constraint problems.
> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
## Parameter and Template
A very simple `WorkloadDefinition` is like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: mydeploy
spec:
definitionRef:
name: deployments.apps
schematic:
cue:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
}
}
```
The `template` field in this definition is a CUE module; it defines two keywords for KubeVela to build the application abstraction:
- The `parameter` defines the input parameters from the end user, i.e. the configurable fields in the abstraction.
- The `output` defines the template for the abstraction.
## CUE Template Step by Step
Let's say that as the platform team, we only want to allow end users to configure the `image` and `name` fields in the `Application` abstraction, and automatically generate all the rest of the fields. How can we use CUE to achieve this?
We can start from the final resource we envision the platform will generate based on user inputs, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mytest # user inputs
spec:
template:
spec:
containers:
- name: mytest # user inputs
env:
- name: a
value: b
image: nginx:v1 # user inputs
metadata:
labels:
app.oam.dev/component: mytest # generate by user inputs
selector:
matchLabels:
app.oam.dev/component: mytest # generate by user inputs
```
Then we can just convert this YAML to JSON and put the whole JSON object into the `output` keyword field:
```cue
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: name: "mytest"
spec: {
selector: matchLabels: {
"app.oam.dev/component": "mytest"
}
template: {
metadata: labels: {
"app.oam.dev/component": "mytest"
}
spec: {
containers: [{
name: "mytest"
image: "nginx:v1"
env: [{name:"a",value:"b"}]
}]
}
}
}
}
```
Since CUE is a superset of JSON, we can use:
* C style comments,
* quotes may be omitted from field names without special characters,
* commas at the end of fields are optional,
* comma after last element in list is allowed,
* outer curly braces are optional.
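A small snippet illustrating those conveniences:

```cue
// C-style comments are allowed
output: {
	apiVersion: "apps/v1" // field names without special characters need no quotes
	"app.oam.dev/component": "mytest" // names with special characters keep their quotes
	env: [
		{name: "a", value: "b"}, // a comma after the last list element is allowed
	]
}
```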
After that, we can add the `parameter` keyword and use it as a variable reference; this is the most basic CUE feature for templating.
```cue
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
}
}
```
Finally, you can put the above CUE module in the `template` field of a `WorkloadDefinition` object and give it a name. End users can then author an `Application` resource referencing this definition as the workload type, with only `name` and `image` as configurable parameters.
## Advanced CUE Templating
In this section, we will introduce the advanced CUE templating features supported in KubeVela.
### Structural Parameter
This is the most commonly used feature. It enables us to expose complex data structures to end users, for example, an environment variable list.
A simple guide is as below:
1. Define a type in the CUE template, it includes a struct (`other`), a string and an integer.
```
#Config: {
name: string
value: int
other: {
key: string
value: string
}
}
```
2. In the `parameter` section, reference the above type and define it as `[...#Config]`. Then it can accept inputs from end users as an array list.
```
parameter: {
name: string
image: string
configSingle: #Config
config: [...#Config] # array list parameter
}
```
3. In the `output` section, simply do templating as other parameters.
```
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: parameter.config
}]
}
...
}
```
4. As long as you install a workload definition object (e.g. `mydeploy`) with the above template in the system, a new field `config` will be available to use as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: website
spec:
components:
- name: backend
type: mydeploy
settings:
image: crccheck/hello-world
name: mysvc
config: # a complex parameter
- name: a
value: 1
other:
key: mykey
value: myvalue
```
### Conditional Parameter
Conditional parameters can be used to express `if..else` logic in a template.
Below is an example: when `useENV=true`, it renders the env section; otherwise, it does not.
```
parameter: {
name: string
image: string
useENV: bool
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
}
```
### Optional and Default Value
Optional parameters can be skipped; they usually work together with conditional logic.
Specifically, to check whether a field exists, the CUE grammar is `if _variable_ != _|_`; an example is below:
```
parameter: {
name: string
image: string
config?: [...#Config]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.config != _|_ {
config: parameter.config
}
}]
}
...
}
```
A default value is marked with a `*` prefix. It's used like this:
```
parameter: {
name: string
image: *"nginx:v1" | string
port: *80 | int
number: *123.4 | float
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
...
}
```
So if a parameter field neither has a default value nor is marked optional, it is a required field.
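Putting the three cases together, a sketch:

```cue
parameter: {
	name:    string               // required: no default value and not optional
	image:   *"nginx:v1" | string // has a default, so the user may omit it
	config?: [...string]          // optional: may be absent, check with `!= _|_`
}
```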
### Loop
#### Loop for Map
```cue
parameter: {
name: string
image: string
env: [string]: string
}
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
```
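For instance (a sketch, assuming a single `DEBUG` entry), the map comprehension above turns each key/value pair into one list element:

```cue
parameter: env: {DEBUG: "true"}
// The comprehension in the template above then evaluates to:
// env: [{name: "DEBUG", value: "true"}]
```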
#### Loop for Slice
```cue
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
}
}
```
### Import CUE Internal Packages
CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which can also be used in KubeVela.
Below is an example that uses `strings.Join` to concatenate a string list into one string.
```cue
import ("strings")
parameter: {
outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
}
output: {
spec: {
if len(parameter.outputs) > 0 {
_x: [ for _, v in parameter.outputs {
"\(v.ip) \(v.hostname)"
}]
message: "Visiting URL: " + strings.Join(_x, "")
}
}
}
```
### Import Kube Package
KubeVela automatically generates internal packages for all K8s resources by reading the K8s OpenAPI schema from the
installed K8s cluster.
You can use these packages in the format `kube/<apiVersion>` in KubeVela's CUE templates, in the same way
as the CUE internal packages.
For example, `Deployment` can be used as:
```cue
import (
apps "kube/apps/v1"
)
parameter: {
name: string
}
output: apps.#Deployment
output: {
metadata: name: parameter.name
}
```
`Service` can be used as below (importing the package with an alias is not necessary):
```cue
import ("kube/v1")
output: v1.#Service
output: {
metadata: {
"name": parameter.name
}
spec: type: "ClusterIP",
}
parameter: {
name: "myapp"
}
```
Even installed CRDs work:
```
import (
oam "kube/core.oam.dev/v1alpha2"
)
output: oam.#Application
output: {
metadata: {
"name": parameter.name
}
}
parameter: {
name: "myapp"
}
```

View File

@ -1,141 +0,0 @@
---
title: Auto-generated Schema
---
## Auto-generated OpenAPI v3 JSON Schema for Capability
For any installed capability, KubeVela will automatically generate OpenAPI v3 JSON Schema for it.
## Why?
In definition objects, the [parameter](https://kubevela.io/#/en/platform-engineers/workload-type?id=_4-define-template) section is expected to be set by developers when creating an `Application` object.
There is also a GUI-friendly way for developers to input all parameter fields, by rendering the `parameter` section to an [OpenAPI v3 Specification](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#format) schema.
For example, after you create a definition object, KubeVela will generate an OpenAPI v3 JSON Schema from the `parameter` section of the Workload Type [webservice](https://kubevela.io/#/en/developers/references/workload-types/webservice).
```json
{
"properties": {
"cmd": {
"description": "Commands to run in the container",
"items": {
"type": "string"
},
"title": "cmd",
"type": "array"
},
"cpu": {
"description": "Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)",
"title": "cpu",
"type": "string"
},
"env": {
"description": "Define arguments by using environment variables",
"items": {
"properties": {
"name": {
"description": "Environment variable name",
"title": "name",
"type": "string"
},
"value": {
"description": "The value of the environment variable",
"title": "value",
"type": "string"
},
"valueFrom": {
"description": "Specifies a source the value of this var should come from",
"properties": {
"secretKeyRef": {
"description": "Selects a key of a secret in the pod's namespace",
"properties": {
"key": {
"description": "The key of the secret to select from. Must be a valid secret key",
"title": "key",
"type": "string"
},
"name": {
"description": "The name of the secret in the pod's namespace to select from",
"title": "name",
"type": "string"
}
},
"required": ["name", "key"],
"title": "secretKeyRef",
"type": "object"
}
},
"required": ["secretKeyRef"],
"title": "valueFrom",
"type": "object"
}
},
"required": ["name"],
"type": "object"
},
"title": "env",
"type": "array"
},
"image": {
"description": "Which image would you like to use for your service",
"title": "image",
"type": "string"
},
"port": {
"default": 80,
"description": "Which port do you want customer traffic sent to",
"title": "port",
"type": "integer"
}
},
"required": ["image", "port"],
"type": "object"
}
```
You can render the schema with [form-render](https://github.com/alibaba/form-render) or [React JSON Schema form](https://github.com/rjsf-team/react-jsonschema-form). A rendered web form looks like this:
![](../../resources/json-schema-render-example.jpg)
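Beyond form rendering, a consumer of this schema can pre-validate user input before submitting an `Application`. The sketch below is a hypothetical stdlib-only helper that checks input against the `required` and `type` constraints of the generated schema; real UIs would typically use a full JSON Schema validator instead.

```python
import json

# A trimmed copy of the generated schema for the "webservice" workload type.
SCHEMA = json.loads("""
{
  "properties": {
    "image": {"type": "string"},
    "port": {"type": "integer", "default": 80}
  },
  "required": ["image", "port"],
  "type": "object"
}
""")

TYPE_MAP = {"string": str, "integer": int, "array": list, "object": dict}

def validate(settings: dict, schema: dict) -> list:
    """Return a list of human-readable problems (empty means valid)."""
    problems = []
    for field in schema.get("required", []):
        if field not in settings:
            problems.append(f"missing required field: {field}")
    for field, value in settings.items():
        expected = schema["properties"].get(field, {}).get("type")
        if expected and not isinstance(value, TYPE_MAP[expected]):
            problems.append(f"{field}: expected {expected}")
    return problems

print(validate({"image": "nginx"}, SCHEMA))              # ['missing required field: port']
print(validate({"image": "nginx", "port": 80}, SCHEMA))  # []
```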
## How to use the generated JSON Schema of definition
When a platform builder applies a WorkloadDefinition or TraitDefinition, a ConfigMap will be created in the same namespace
as the definition (by default, the `vela-system` namespace) and labeled with `definition.oam.dev=schema`.
```shell
$ kubectl get cm -n vela-system -l definition.oam.dev=schema
NAME DATA AGE
schema-ingress 1 19s
schema-scaler 1 19s
schema-task 1 19s
schema-webservice 1 19s
schema-worker 1 20s
```
The ConfigMap name is in the format `schema-$definitionName`, and the key of ConfigMap data is `openapi-v3-json-schema`.
For example, we can use the following command to get the JSON Schema of `webservice`.
```shell
$ kubectl get cm schema-webservice -n vela-system -o yaml
apiVersion: v1
data:
openapi-v3-json-schema: '{"properties":{"cmd":{"description":"Commands to run in
the container","items":{"type":"string"},"title":"cmd","type":"array"},"cpu":{"description":"Number
of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)","title":"cpu","type":"string"},"env":{"description":"Define
arguments by using environment variables","items":{"properties":{"name":{"description":"Environment
variable name","title":"name","type":"string"},"value":{"description":"The value
of the environment variable","title":"value","type":"string"},"valueFrom":{"description":"Specifies
a source the value of this var should come from","properties":{"secretKeyRef":{"description":"Selects
a key of a secret in the pod''s namespace","properties":{"key":{"description":"The
key of the secret to select from. Must be a valid secret key","title":"key","type":"string"},"name":{"description":"The
name of the secret in the pod''s namespace to select from","title":"name","type":"string"}},"required":["name","key"],"title":"secretKeyRef","type":"object"}},"required":["secretKeyRef"],"title":"valueFrom","type":"object"}},"required":["name"],"type":"object"},"title":"env","type":"array"},"image":{"description":"Which
image would you like to use for your service","title":"image","type":"string"},"port":{"default":80,"description":"Which
port do you want customer traffic sent to","title":"port","type":"integer"}},"required":["image","port"],"type":"object"}'
kind: ConfigMap
metadata:
name: schema-webservice
namespace: vela-system
```
If you adopt the [KubeVela API Server](https://github.com/oam-dev/kubevela/tree/master/references/apiserver), you can get the
schema via the API [/api/definitions/{definitionName}](https://kubevela.io/en/developers/references/restful-api/index.html#api-Definitions-getDefinition).
---
title: Defining Traits
---
In this section, we will introduce how to define a trait with a CUE template.
## Composition
Defining a *Trait* with a CUE template is a bit different from a *Workload Type*: a trait MUST use the `outputs` keyword instead of `output` in its template.
With CUE templates, it is very natural to compose multiple Kubernetes resources in one trait.
Similarly, the format MUST be `outputs:<unique-name>:<full template>`.
Below is an example for `ingress` trait.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: ingress
spec:
extension:
template: |
parameter: {
domain: string
http: [string]: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
}
]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
spec: {
rules: [{
host: parameter.domain
http: {
paths: [
for k, v in parameter.http {
path: k
backend: {
serviceName: context.name
servicePort: v
}
}
]
}
}]
}
}
```
It can be used in the application object like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
settings:
cmd:
- node
- server.js
image: oamdev/testapp:v1
port: 8080
traits:
- name: ingress
properties:
domain: test.my.domain
http:
"/api": 8080
```
### Generate Multiple Resources with Loop
You can define a for-loop inside `outputs`; the `parameter` field used in the for-loop must be of map type.
Below is an example that will generate multiple Kubernetes Services in one trait:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: expose
spec:
extension:
template: |
parameter: {
http: [string]: int
}
outputs: {
for k, v in parameter.http {
"\(k)": {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: v
targetPort: v
}]
}
}
}
}
```
The usage of this trait could be:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
settings:
...
traits:
- name: expose
properties:
http:
myservice1: 8080
myservice2: 8081
```
## Patch Trait
You can also use the `patch` keyword to patch data onto the component instance (before the resource is applied) and package this behavior as a trait.
Below is an example for `node-affinity` trait:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "affinity specify node affinity and toleration"
name: node-affinity
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {
spec: template: spec: {
if parameter.affinity != _|_ {
affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
matchExpressions: [
for k, v in parameter.affinity {
key: k
operator: "In"
values: v
},
]}]
}
if parameter.tolerations != _|_ {
tolerations: [
for k, v in parameter.tolerations {
effect: "NoSchedule"
key: k
operator: "Equal"
value: v
}]
}
}
}
parameter: {
affinity?: [string]: [...string]
tolerations?: [string]: string
}
```
You can use it like:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
settings:
image: oamdev/testapp:v1
traits:
- name: "node-affinity"
properties:
affinity:
server-owner: ["owner1","owner2"]
resource-pool: ["pool1","pool2","pool3"]
tolerations:
resource-pool: "broken-pool1"
server-owner: "old-owner"
```
The patch trait above assumes the component instance has the `spec.template.spec.affinity` field in its schema. Hence, we need to use the `appliesToWorkloads` field, which enforces that the trait can only be used by the specified workload types.
By default, the patch trait in KubeVela relies on the CUE `merge` operation. It has the following known constraints:
* It cannot handle conflicts. For example, if a field already has a final value `replicas=5`, then the patch trait will conflict and fail when it patches `replicas=1`. The patch only works when `replicas` is not finalized before the patch.
* Array lists in the patch are merged by index order. Duplicated array-list members cannot be handled.
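These two constraints can be illustrated with a small model of the merge semantics (a toy Python sketch, not KubeVela code):

```python
def cue_merge(base, patch):
    """Toy model of a CUE-style merge: structs merge per key, lists merge
    positionally by index, and conflicting scalar values fail."""
    if isinstance(base, dict) and isinstance(patch, dict):
        out = dict(base)
        for key, value in patch.items():
            out[key] = cue_merge(base[key], value) if key in base else value
        return out
    if isinstance(base, list) and isinstance(patch, list):
        # Constraint 2: lists merge by index; duplicates are not detected.
        merged = [cue_merge(b, p) for b, p in zip(base, patch)]
        tail = (base if len(base) > len(patch) else patch)[len(merged):]
        return merged + tail
    if base != patch:
        # Constraint 1: a finalized value cannot be overridden.
        raise ValueError(f"conflict: {base!r} vs {patch!r}")
    return base

# Patching replicas=1 over a finalized replicas=5 fails:
try:
    cue_merge({"replicas": 5}, {"replicas": 1})
except ValueError as err:
    print(err)  # conflict: 5 vs 1

# Lists merge element-by-element at matching indices:
print(cue_merge([{"port": 80}], [{"port": 80}, {"port": 443}]))
# [{'port': 80}, {'port': 443}]
```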
### Strategy Patch Trait
The `strategy patch` is a special patch logic for patching array lists. It is supported **only** in KubeVela (i.e., it is not a standard CUE feature).
In order to make it work, you need to use the annotation `// +patchKey=<key_name>` in the template.
With this annotation, the merging of two array lists does not follow the default CUE behavior. Instead, the list is treated as a set of objects keyed by `<key_name>` and merged with a strategy-merge approach: if an entry in the patch has the same key value as an existing entry, the patch data is merged into it; otherwise, the entry is appended to the array list.
An example of a strategy patch trait looks like this:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
The patch key is `name`, which represents the container name in this example. If the workload already has a container with the same name as the one defined by this `sidecar` trait, the patch is a merge operation. If the workload doesn't have a container with that name, the container is appended to the `spec.template.spec.containers` array list as a sidecar.
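The key-based merge semantics described above can be modeled in a few lines (an illustration only, not KubeVela's implementation):

```python
def strategy_merge(base, patch, patch_key="name"):
    """Merge two lists of dicts the way `// +patchKey=<key>` does:
    entries whose key values match are merged; the rest are appended."""
    out = [dict(item) for item in base]
    index = {item[patch_key]: i for i, item in enumerate(out)}
    for item in patch:
        i = index.get(item[patch_key])
        if i is None:
            out.append(dict(item))   # no match: append as a new entry
        else:
            out[i].update(item)      # match: merge into the existing entry
    return out

containers = [{"name": "app", "image": "oamdev/testapp:v1"}]
# No container named "fluentd" exists, so the sidecar is appended:
print(strategy_merge(containers, [{"name": "fluentd", "image": "fluentd"}]))
```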
### Patch The Trait
If `patch` and `outputs` both exist in one trait, the `patch` part executes first, and then the output objects are rendered.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "service the app"
name: kservice
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {spec: template: metadata: labels: app: context.name}
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: name: context.name
spec: {
selector: app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
}
]
}
}
parameter: {
http: [string]: int
}
```
## Processing Trait
A trait can also help you do some processing work. Currently, HTTP requests are supported.
The keyword is `processing`; inside `processing`, there are two sub-sections, `output` and `http`.
You can define the HTTP request `method`, `url`, `body`, `header`, and `trailer` in the `http` section.
KubeVela will send a request using this information; the requested server shall return a **JSON result**.
The `output` section is matched against the JSON result: fields that correlate by name are automatically filled in.
You can then use the fetched data from `processing.output` in `patch` or `output/outputs`.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: auth-service
spec:
schematic:
cue:
template: |
parameter: {
serviceURL: string
}
processing: {
output: {
token?: string
}
// task shall output a json result and output will correlate fields by name.
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
body?: bytes
header: {}
trailer: {}
}
}
}
patch: {
data: token: processing.output.token
}
```
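The correlate-by-name step above can be sketched in a few lines (a simulation using an in-memory JSON string instead of a live auth service; `token` is the field declared in the example's `output`):

```python
import json

def correlate(declared_output: dict, json_result: str) -> dict:
    """Fill the declared `output` struct with same-named fields
    from the JSON body returned by the processing HTTP call."""
    body = json.loads(json_result)
    return {k: body[k] for k in declared_output if k in body}

# The server's JSON result; only `token` matches the declared output,
# so `expires_in` is dropped.
response = '{"token": "abc123", "expires_in": 3600}'
print(correlate({"token": None}, response))  # {'token': 'abc123'}
```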
## Simple data passing
A trait can fill itself with data from the workload's `output` and `outputs`.
There are two keywords, `output` and `outputs`, in the rendering context.
You can use `context.output` to refer to the workload object, and `context.outputs.<xx>` to refer to a trait object.
Please make sure the trait resource names are unique, or the earlier data will be overwritten by the later one.
Below is an example:
1. The main workload object (a Deployment in this example) is rendered into `context.output` before the traits are rendered.
2. `context.outputs.<xx>` keeps all rendered trait data, which can be used by the traits that follow.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: worker
spec:
definitionRef:
name: deployments.apps
extension:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
ports: [{containerPort: parameter.port}]
envFrom: [{
configMapRef: name: context.name + "game-config"
}]
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
outputs: gameconfig: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: context.name + "game-config"
}
data: {
enemies: parameter.enemies
lives: parameter.lives
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
lives: string
enemies: string
port: int
}
---
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: ingress
spec:
extension:
template: |
parameter: {
domain: string
path: string
exposePort: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: parameter.exposePort
targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
}]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
labels: config: context.outputs.gameconfig.data.enemies
spec: {
rules: [{
host: parameter.domain
http: {
paths: [{
path: parameter.path
backend: {
serviceName: context.name
servicePort: parameter.exposePort
}
}]
}
}]
}
}
```
## More Use Cases for Patch Trait
Patch traits can be very powerful; here are some more advanced use cases.
### Add Labels
For example, patch a common label (virtual group) onto the component workload.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Add virtual group labels"
name: virtualgroup
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {
spec: template: {
metadata: labels: {
if parameter.type == "namespace" {
"app.namespace.virtual.group": parameter.group
}
if parameter.type == "cluster" {
"app.cluster.virtual.group": parameter.group
}
}
}
}
parameter: {
group: *"default" | string
type: *"namespace" | string
}
```
Then it could be used like:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
spec:
...
traits:
- name: virtualgroup
properties:
group: "my-group1"
type: "cluster"
```
In this example, a different `type` uses a different label key.
### Add Annotations
Similar to common labels, you could also patch the component workload with annotations. The annotation value will be a JSON string.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Specify auto scale by annotation"
name: kautoscale
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
import "encoding/json"
patch: {
metadata: annotations: {
"my.custom.autoscale.annotation": json.Marshal({
"minReplicas": parameter.min
"maxReplicas": parameter.max
})
}
}
parameter: {
min: *1 | int
max: *3 | int
}
```
### Add Pod ENV
Injecting system environment variables into the Pod is also a very common use case.
This case relies on the strategy merge patch, so don't forget to add `// +patchKey=name` as in the example below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add env into your pods"
name: env
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
parameter: {
env: [string]: string
}
```
### Dynamically Pod Service Account
In this example, the service account is dynamically requested from an authentication service and patched into the workload.
This example puts the UID token in the HTTP header, but you can also use the request body. Refer to the [processing](#Processing-Trait) section for more details.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "dynamically specify service account"
name: service-account
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
processing: {
output: {
credentials?: string
}
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
header: {
"authorization.token": parameter.uidtoken
}
}
}
}
patch: {
spec: template: spec: serviceAccountName: processing.output.credentials
}
parameter: {
uidtoken: string
serviceURL: string
}
```
### Add Init Container
An init container is useful for packaging operations in an image and running them before the app container starts.
> Please check [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) for more detail about Init Container.
Below is an example of init container trait:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add an init container and use shared volume with pod"
name: init-container
spec:
appliesToWorkloads:
- webservice
- worker
extension:
template: |-
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.appMountPath
}]
}]
initContainers: [{
name: parameter.name
image: parameter.image
if parameter.command != _|_ {
command: parameter.command
}
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}]
// +patchKey=name
volumes: [{
name: parameter.mountName
emptyDir: {}
}]
}
}
parameter: {
name: string
image: string
command?: [...string]
mountName: *"workdir" | string
appMountPath: string
initMountPath: string
}
```
This case relies on the strategy merge patch: for every array list, we add a `// +patchKey=name` annotation to avoid conflicts.
The usage could be:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
settings:
image: oamdev/testapp:v1
traits:
- name: "init-container"
properties:
name: "install-container"
image: "busybox"
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://info.cern.ch
mountName: "workdir"
appMountPath: "/usr/share/nginx/html"
initMountPath: "/work-dir"
```
---
title: Defining Workload Types
---
In this section, we will introduce more examples of using CUE to define workload types.
## Basic Usage
The most basic usage of CUE in a workload is to extend a Kubernetes resource as a workload type (via `WorkloadDefinition`) and expose configurable parameters to users.
A Deployment as workload type:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: worker
spec:
definitionRef:
name: deployments.apps
schematic:
cue:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
A Job as workload type:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: task
annotations:
definition.oam.dev/description: "Describes jobs that run code or a script to completion."
spec:
definitionRef:
name: jobs.batch
schematic:
cue:
template: |
output: {
apiVersion: "batch/v1"
kind: "Job"
spec: {
parallelism: parameter.count
completions: parameter.count
template: spec: {
restartPolicy: parameter.restart
containers: [{
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
parameter: {
count: *1 | int
image: string
restart: *"Never" | string
cmd?: [...string]
}
```
## Context
When you want to reference the runtime instance name of an app, you can use the `context` keyword instead of defining a `parameter` for it.
The KubeVela runtime provides a `context` struct that includes the app name (`context.appName`) and the component name (`context.name`).
```cue
context: {
appName: string
name: string
}
```
The values of `context` are automatically generated before the underlying resources are applied.
This is why you can reference context variables as values in the template.
```cue
parameter: {
image: string
}
output: {
...
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
...
}
```
## Composition
A workload type can contain multiple Kubernetes resources; for example, we can define a `webserver` workload type that is composed of a Deployment and a Service.
Note that in this case, you MUST define the template of the component instance in the `output` section and put all other templates in `outputs`, each with a claimed resource name. The format MUST be `outputs:<unique-name>:<full template>`.
> This is how KubeVela knows which resource is the running instance of the application component.
Below is the example:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: webserver
annotations:
definition.oam.dev/description: "webserver is a combo of Deployment + Service"
spec:
definitionRef:
name: deployments.apps
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["env"] != _|_ {
env: parameter.env
}
if context["config"] != _|_ {
env: context.config
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}
}
}]
}
}
}
}
// an extra template
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector: {
"app.oam.dev/component": context.name
}
ports: [
{
port: parameter.port
targetPort: parameter.port
},
]
}
}
parameter: {
image: string
cmd?: [...string]
port: *80 | int
env?: [...{
name: string
value?: string
valueFrom?: {
secretKeyRef: {
name: string
key: string
}
}
}]
cpu?: string
}
```
Please save the example as `webserver.yaml`, then register the new workload type with KubeVela.
```shell
$ kubectl apply -f webserver.yaml
```
Next, we can use the `webserver` workload type in our application. Below is an example:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: webserver-demo
namespace: default
spec:
components:
- name: hello-world
type: webserver
settings:
image: crccheck/hello-world
port: 8000
env:
- name: "PORT"
value: "8000"
cpu: "100m"
```
Please save the Application example as `app.yaml`, then create the Application.
```shell
kubectl apply -f app.yaml
```
Wait until the status of the Application becomes `running`.
```shell
$ kubectl get application webserver-demo -o yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: webserver-demo
namespace: default
...
spec:
components:
- name: hello-world
settings:
cpu: 100m
env:
- name: PORT
value: "8000"
image: crccheck/hello-world
port: 8000
type: webserver
status:
components:
- apiVersion: core.oam.dev/v1alpha2
kind: Component
name: hello-world
...
services:
- healthy: true
name: hello-world
status: running
```
In the K8s cluster, you will see the following resources are created:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
hello-world-v1 1/1 1 1 15s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
```
---
title: Define Components
---
## Use Helm To Define a Component
This documentation explains how to use Helm chart to define an application component.
## Install fluxcd/flux2 as dependencies
Using Helm as a workload type depends on several CRDs and controllers from [fluxcd/flux2](https://github.com/fluxcd/flux2); make sure you have installed them before continuing.
It's worth noting that flux2 doesn't offer an official Helm chart,
so as an alternative we provide a chart that only includes the minimal dependencies KubeVela relies on.
Install the minimal flux2 chart provided by KubeVela:
```shell
$ helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
```
## Write WorkloadDefinition
Here is an example `WorkloadDefinition` that uses Helm as the schematic module.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: webapp-chart
annotations:
definition.oam.dev/description: helm chart for webapp
spec:
definitionRef:
name: deployments.apps
version: v1
schematic:
helm:
release:
chart:
spec:
chart: "podinfo"
version: "5.1.4"
repository:
url: "http://oam.dev/catalog/"
```
Just like using CUE as the schematic module, there are some rules and contracts for using a Helm chart as the schematic module.
- `.spec.definitionRef` is required to indicate the main workload (Group/Version/Kind) in your Helm chart.
Only one workload is allowed in one Helm chart.
For example, in our sample chart, the core workload is `deployments.apps/v1`; other resources will also be deployed, but KubeVela's mechanisms won't work for them.
- `.spec.schematic.helm` contains information of Helm release & repository.
There are two fields `release` and `repository` in the `.spec.schematic.helm` section, these two fields align with the APIs of `fluxcd/flux2`. Spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
In short, as the fields in the sample show, the Helm schematic module describes a specific Helm chart release and its repository.
## Create an Application using the Helm-based WorkloadDefinition
Here is an example `Application`.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: myapp
namespace: default
spec:
components:
- name: demo-podinfo
type: webapp-chart
settings:
image:
tag: "5.1.2"
```
A Helm module workload uses the data in `settings` as [Helm chart values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml).
You can learn the schema of `settings` by reading the `README.md` of the Helm
chart; the schema fully aligns with the chart's
[`values.yaml`](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml).
Helm v3 [supports validating
values](https://helm.sh/docs/topics/charts/#schema-files) in a chart's
`values.yaml` file with JSON schemas.
Vela will try to fetch the `values.schema.json` file from the chart archive and
[save the schema into a
ConfigMap](https://kubevela.io/#/en/platform-engineers/openapi-v3-json-schema.md)
which can be consumed later through the UI or CLI.
If `values.schema.json` is not provided by the chart author, Vela will automatically generate an
OpenAPI v3 JSON schema based on the `values.yaml` file.
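That fallback generation can be sketched roughly as follows (a stdlib-only illustration of type inference over parsed values, not KubeVela's actual generator):

```python
def infer_schema(value):
    """Infer a minimal OpenAPI v3 schema fragment from a default value."""
    if isinstance(value, bool):      # check bool before int (bool is an int subclass)
        return {"type": "boolean", "default": value}
    if isinstance(value, int):
        return {"type": "integer", "default": value}
    if isinstance(value, str):
        return {"type": "string", "default": value}
    if isinstance(value, list):
        items = infer_schema(value[0]) if value else {}
        return {"type": "array", "items": items}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    return {}

# Parsed from a chart's values.yaml (shown inline to stay self-contained):
values = {"image": {"repository": "podinfo", "tag": "5.1.4"}, "replicaCount": 1}
print(infer_schema(values)["properties"]["replicaCount"])
# {'type': 'integer', 'default': 1}
```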
Deploy the application, and after several minutes (it takes time to fetch the Helm chart from the repo, render it, and install it), you can check that the Helm release is installed.
```shell
$ helm ls -A
myapp-demo-podinfo default 1 2021-03-05 02:02:18.692317102 +0000 UTC deployed podinfo-5.1.4 5.1.4
```
Check that the deployment defined in the chart has been created successfully.
```shell
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
myapp-demo-podinfo 1/1 1 1 66m
```
Check that the values (`image.tag = 5.1.2`) from the application's `settings` are assigned to the chart.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.2"
```
---
title: Known Limitations
---
## Limitations and Known Issues
Here are some known issues with using a Helm chart as an application component. Please note that most of these restrictions will be fixed over time.
## Only one main workload in the chart
The chart must have exactly one workload being regarded as the **main** workload. In this context, `main workload` means the workload that will be tracked by KubeVela controllers, applied with traits and added into scopes. Only the `main workload` will benefit from KubeVela with rollout, revision, traffic management, etc.
To tell KubeVela which one is the main workload, you must follow these two steps:
#### 1. Declare main workload's resource definition
The field `.spec.definitionRef` in `WorkloadDefinition` is used to record the
resource definition of the main workload.
The name should be in the format: `<resource>.<group>`.
For example, the Deployment resource should be defined as:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
...
spec:
definitionRef:
name: deployments.apps
version: v1
```
The CloneSet workload resource should be defined as:
```yaml
...
spec:
definitionRef:
name: clonesets.apps.kruise.io
version: v1alpha1
```
#### 2. Qualified full name of the main workload
The name of the main workload should be templated with [a default fully
qualified app
name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415). DO NOT assign any value to `.Values.fullnameOverride`.
> Also, Helm highly recommends creating new charts via the `$ helm create` command so that the template names are automatically defined per this best practice.
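For reference, the default fully qualified name helper behaves roughly like this sketch (modeled on the `_helpers.tpl` that `helm create` generates; an approximation, not Helm source):

```python
def helm_fullname(release_name, chart_name,
                  fullname_override=None, name_override=None):
    """Approximate Helm's default fully qualified app name helper."""
    def trunc(s):
        return s[:63].rstrip("-")   # trunc 63 | trimSuffix "-"
    if fullname_override:           # KubeVela requires this to stay unset
        return trunc(fullname_override)
    name = name_override or chart_name
    if name in release_name:        # release name already contains chart name
        return trunc(release_name)
    return trunc(f"{release_name}-{name}")

# KubeVela deploys the release as "<app>-<component>", so for the sample app:
print(helm_fullname("myapp-demo-podinfo", "podinfo"))  # myapp-demo-podinfo
```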
## Upgrade the application
#### Rollout strategy
For now, Helm based components cannot benefit from [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow).
So currently, an in-place upgrade by modifying the application specification directly is the only way to upgrade Helm-based components; no advanced rollout strategy can be assigned. Please check [this sample](./trait#update-an-applicatiion).
#### Changing `settings` will trigger Helm release upgrade
For a Helm-based component, `.spec.components.settings` is how users override the default values of the chart, so any change applied to `settings` will trigger a Helm release upgrade.
This process is handled by Helm and `Flux2/helm-controller`; hence, in case a failure happens
during the upgrade, you can define remediation strategies in the schematic according to the
[fluxcd/helmrelease API doc](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
and [spec doc](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation).
For example
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: webapp-chart
spec:
...
schematic:
helm:
release:
chart:
spec:
chart: "podinfo"
version: "5.1.4"
upgrade:
remediation:
retries: 3
remediationStrategy: rollback
repository:
url: "http://oam.dev/catalog/"
```
> Note: currently, it's hard to get helpful information from a live Helm release to figure out what happened if an upgrade failed. We will enhance observability to help users track the state of a Helm release at the application level.
#### Changing `traits` may make Pods restart
Traits work on a Helm based component in the same way as on a CUE based component, i.e. changes to traits may impact the main workload instance. Hence, the Pods belonging to this workload instance may restart twice during an upgrade: once by the Helm upgrade, and once more caused by the trait changes.
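As a sketch, attaching a trait to a Helm based component looks the same as for CUE based components; here the `scaler` trait name and its `replicas` field are assumptions for illustration:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: demo-podinfo      # hypothetical component name
      type: webapp-chart
      settings:
        image:
          tag: "5.1.4"        # a Helm upgrade may restart Pods once
      traits:
        - type: scaler        # hypothetical trait; changing it may restart Pods again
          properties:
            replicas: 3
```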

---
title: Extend CRD Operator as Component Type
---
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD operator as a KubeVela component.
**The mechanism works for all CRD Operators**.
### Step 1: Install the CRD controller
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
### Step 2: Create Component Definition
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
#### 1. Describe The Workload Type
```yaml
...
annotations:
definition.oam.dev/description: "OpenKruise cloneset"
...
```
A one-line description of this component type. It will be shown in helper commands such as `$ vela components`.
#### 2. Register Its Underlying CRD
```yaml
...
workload:
definition:
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
...
```
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses Kubernetes API resource discovery mechanism to manage all registered capabilities.
#### 3. Define Template
```yaml
...
schematic:
cue:
template: |
output: {
apiVersion: "apps.kruise.io/v1alpha1"
kind: "CloneSet"
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
replicas: parameter.replicas
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Number of pods in the cloneset
replicas: *5 | int
}
```
### Step 3: Register New Component Type to KubeVela
Once the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
```
And the new component type will immediately become available for developers to use in KubeVela.
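Once applied, developers can reference the new `cloneset` type in an `Application`. A minimal sketch (names and image are made up; the fields under `properties` map to the `parameter` section of the template):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-with-cloneset     # hypothetical application
spec:
  components:
    - name: mycomp
      type: cloneset
      properties:
        image: nginx:1.14.2   # maps to parameter.image
        replicas: 3           # overrides the default of 5
```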

---
title: Defining Cloud Database as Component
---
KubeVela provides unified abstraction even for cloud services.
## Should a Cloud Service be a Component or Trait?
The following practice could be considered:
- Use `ComponentDefinition` if:
- you want to allow your end users to explicitly claim an "instance" of the cloud service and consume it, and to release the "instance" when the application is deleted.
- Use `TraitDefinition` if:
- you don't want to give your end users any control over claiming or releasing the cloud service; you only want to give them a way to consume a cloud service, which could even be managed by some other system. A `Service Binding` trait is widely used in this case.
In this documentation, we will add Alibaba Cloud's RDS (Relational Database Service) as a component.
## Step 1: Install and Configure Crossplane
KubeVela uses [Crossplane](https://crossplane.io/) as the cloud service operator.
> This tutorial has been tested with Crossplane version `0.14`. Please follow the [Crossplane documentation](https://crossplane.io/docs/), especially the `Install & Configure` and `Compose Infrastructure` sections to configure
Crossplane with your cloud account.
**Note: When installing Crossplane via Helm chart, please DON'T set `alpha.oam.enabled=true` as all OAM features are already installed by KubeVela.**
## Step 2: Add Component Definition
Register the `rds` component to KubeVela.
```bash
$ cat << EOF | kubectl apply -f -
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: rds
annotations:
definition.oam.dev/apiVersion: "database.example.org/v1alpha1"
definition.oam.dev/kind: "PostgreSQLInstance"
definition.oam.dev/description: "RDS on Ali Cloud"
spec:
workload:
definition:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
schematic:
cue:
template: |
output: {
apiVersion: "database.example.org/v1alpha1"
kind: "PostgreSQLInstance"
metadata:
name: context.name
spec: {
parameters:
storageGB: parameter.storage
compositionSelector: {
matchLabels:
provider: parameter.provider
}
writeConnectionSecretToRef:
name: parameter.secretname
}
}
parameter: {
secretname: *"db-conn" | string
provider: *"alibaba" | string
storage: *20 | int
}
EOF
```
## Step 3: Verify
Instantiate RDS component in an [Application](../application) to provide cloud resources.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: mydatabase
spec:
components:
- name: myrds
type: rds
properties:
name: "alibaba-rds"
storage: 20
secretname: "myrds-conn"
```
Apply the above application to Kubernetes and an RDS instance will be automatically provisioned (this may take some time, ~5 minutes).
> TBD: add status check, show database creation result.
## Step 4: Consuming The Cloud Service
In this section, we will show how another component consumes the RDS instance.
> Note: we recommend defining the cloud resource claim in an independent application if the cloud resource has a standalone lifecycle. Otherwise, it could be defined in the same application as the consumer component.
### `ComponentDefinition` With Secret Reference
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webserver
annotations:
definition.oam.dev/description: "webserver to consume cloud resources"
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
env: [{
name: "DB_NAME"
value: mySecret.dbName
}, {
name: "DB_PASSWORD"
value: mySecret.password
}]
}]
}
}
}
}
mySecret: {
dbName: string
password: string
}
parameter: {
image: string
//+InsertSecretTo=mySecret
dbConnection: string
cmd?: [...string]
}
```
With the `//+InsertSecretTo=mySecret` annotation, KubeVela knows this parameter value comes from a Kubernetes Secret (whose name is set by the user), so it will inject its data into `mySecret`, which is referenced as environment variables in the template.
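For illustration, the connection Secret referenced by `dbConnection` is expected to carry keys matching the `mySecret` fields. A sketch of its shape (name and values are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mydb-outputs          # the name users pass via dbConnection
stringData:
  dbName: mydb                # injected as mySecret.dbName   -> DB_NAME
  password: s3cr3tpassw0rd    # injected as mySecret.password -> DB_PASSWORD
```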
Then declare an application to consume the RDS instance.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: data-consumer
spec:
components:
- name: myweb
type: webserver
properties:
image: "nginx"
dbConnection: "mydb-outputs"
```
// TBD show the result

---
title: Debug, Test and Dry-run
---
With flexibility in defining abstractions, it's important to be able to debug, test and dry-run the CUE based definitions. This tutorial will show this step by step.
## Prerequisites
Please make sure the following CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
## Define Definition and Template
We recommend defining the `Definition Object` in two separate parts: the CRD part and the CUE template. This enables us to debug, test and dry-run the CUE template.
Let's name the CRD part as `def.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: microservice
annotations:
definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
```
And the CUE template part as `def.cue`, then we can use CUE commands such as `cue fmt` / `cue vet` to format and validate the CUE file.
```
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
After everything is done, there's a script [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh) to merge `def.yaml` and `def.cue` into a complete Definition object.
```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > microservice-def.yaml
```
## Debug CUE template
### Use `cue vet` to Validate
```shell
$ cue vet def.cue
output.metadata.name: reference "context" not found:
./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
./def.cue:70:11
```
The `reference "context" not found` is a common error in this step, as [`context`](/docs/cue/component?id=cue-context) is runtime information that only exists in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in `def.cue`.
> Note that you need to remove all mock data when you finish the validation.
```CUE
... // existing template data
context: {
name: string
}
```
Then execute the command:
```shell
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```
The `reference "context" not found` error is gone, but `cue vet` only validates the data types, which is not enough to ensure the logic in the template is correct. Hence we need to use `cue vet -c` for complete validation:
```shell
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
It now complains that some runtime data is incomplete (because `context` and `parameter` have no values). Let's fill in more mock data in the `def.cue` file:
```CUE
context: {
name: "test-app"
}
parameter: {
version: "v2"
image: "image-address"
servicePort: 80
containerPort: 8000
env: {"PORT": "8000"}
cpu: "500m"
memory: "128Mi"
}
```
It won't complain now, which means the validation has passed:
```shell
cue vet def.cue -c
```
#### Use `cue export` to Check the Rendered Resources
`cue export` can export the rendered result in YAML format:
```shell
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
namespace: default
spec:
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
version: v2
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 30
containers:
- name: test-app
image: image-address
```
```shell
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
selector:
app: test-app
type: ClusterIP
```
## Dry-Run the `Application`
When the CUE template is ready, we can use `vela system dry-run` to dry-run the application and check the rendered resources against a real Kubernetes cluster. This command executes exactly the same render logic as KubeVela's `Application` controller and outputs the result for you.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
```
### Import `kube` Package
KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, so you can import them in the CUE template. This can simplify how you write the template because some default values are already there, and the imported packages will help you validate the template.
Let's try to define a template with help of `kube` package:
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
)
// output is validated by Deployment.
output: apps.#Deployment
output: {
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
outputs: {
service: corev1.#Service
}
// Service
outputs: service: {
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
//type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
Then merge them.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
```
And dry run to see the rendered resources:
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
```

---
title: Introduction of Definition CRD
---
This documentation explains how to register and manage available *components* and *traits* in your platform with
`ComponentDefinition` and `TraitDefinition`, so end users could instantiate and "assemble" them into an `Application`.
> All definition objects are expected to be maintained and installed by the platform team; think of them as *capability providers* in your platform.
## Overview
Essentially, a definition object in KubeVela consists of three sections:
- **Capability Indicator**
- `ComponentDefinition` uses `spec.workload` to indicate the workload type of this component.
- `TraitDefinition` uses `spec.definitionRef` to indicate the provider of this trait.
- **Interoperability Fields**
- they are for the platform to ensure a trait can work with given workload type. Hence only `TraitDefinition` has these fields.
- **Capability Encapsulation and Abstraction** defined by `spec.schematic`
- this defines the **templating and parameterizing** (i.e. encapsulation) of this capability.
Hence, the basic structure of definition object is as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: XxxDefinition
metadata:
name: <definition name>
spec:
definitionRef:
name: <resources>.<api-group>
...
schematic:
cue:
# cue template ...
```
Let's explain these fields one by one.
### Capability Indicator
In `ComponentDefinition`, the indicator of workload type is declared as `spec.workload`.
Below is a definition for *Web Service* in KubeVela:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webservice
namespace: default
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
...
```
In the above example, it claims to leverage Kubernetes Deployment (`apiVersion: apps/v1`, `kind: Deployment`) as the workload type for the component.
### Interoperability Fields
The interoperability fields are **trait only**. An overall view of interoperability fields in a `TraitDefinition` is shown below.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
definitionRef:
name: ingresses.networking.k8s.io
appliesToWorkloads:
- deployments.apps
- webservice
#### `appliesToWorkloads`
This field defines the constraints on what kinds of workloads this trait is allowed to apply to.
There are four approaches to denote one or a group of workload types.
- `ComponentDefinition` name, e.g., `webservice`, `worker`
- `ComponentDefinition` definition reference (CRD name), e.g., `deployments.apps`
- Resource group of `ComponentDefinition` definition reference prefixed with `*.`, e.g., `*.apps`, `*.oam.dev`. This means the trait is allowed to apply to any workloads in this group.
- `*` means this trait is allowed to apply to any workloads
If this field is omitted, it means this trait is allowed to apply to any workload types.
#### `conflictsWith`
This field defines the constraints on what kinds of traits are conflicting with this trait.
There are four approaches to denote one or a group of traits.
- `TraitDefinition` name, e.g., `ingress`
- `TraitDefinition` definition reference (CRD name), e.g., `ingresses.networking.k8s.io`
- Resource group of `TraitDefinition` definition reference prefixed with `*.`, e.g., `*.networking.k8s.io`. This means the trait is conflicting with any traits in this group.
- `*` means this trait is conflicting with any other trait.
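For instance, a hypothetical routing trait that must not co-exist with ingress-style traits could declare:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: my-route              # hypothetical trait
spec:
  conflictsWith:
    - ingress                 # a TraitDefinition name
    - "*.networking.k8s.io"   # any trait in this API group
  ...
```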
#### `workloadRefPath`
If this field is set, KubeVela core will automatically fill the workload reference into the given field of the trait.
Please check [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) trait as a demonstration of how to set this field.
### Capability Encapsulation and Abstraction
The templating and parameterizing of a given capability are defined in the `spec.schematic` field. For example, below is the full definition of the *Web Service* type in KubeVela:
<details>
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webservice
namespace: default
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
...
```
</details>
The specification of `schematic` is explained in the following CUE and Helm specific documentation.
Also, the `schematic` field enables you to render UI forms directly based on the definitions; please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section to learn how.
