Feat: release v1.1 docs (#281)

This commit is contained in:
yangsoon 2021-09-15 15:24:32 +08:00 committed by GitHub
parent a800eed791
commit e45f87198d
469 changed files with 35511 additions and 887 deletions


@ -1,734 +0,0 @@
# How to Ship a New Capability for Your K8s PaaS in 20 Minutes
>December 14, 2020, 19:33, by @wonderflow

Last month, [KubeVela was officially released](https://kubevela.io/#/blog/zh/kubevela-the-extensible-app-platform-based-on-open-application-model-and-kubernetes).
As a simple, easy-to-use, and highly extensible application management platform and core engine, it is a powerful tool for platform engineers building their own cloud-native PaaS.
This article walks through a concrete example of how to "ship" a new capability on your KubeVela-based PaaS in 20 minutes.
Before starting this tutorial, please make sure you have correctly [installed KubeVela](https://kubevela.io/#/en/install) and the K8s environment it depends on.
# The Basic Structure of a KubeVela Extension
The overall architecture of KubeVela is shown below:
![image](https://kubevela-docs.oss-cn-beijing.aliyuncs.com/kubevela-extend.jpg)
In short, KubeVela extends its capabilities for users by adding **Workload Types** and **Traits**: the platform's service providers register and extend capabilities through Definition files, and the new capabilities are exposed upward through the Appfile. The official documentation walks through the basic authoring flow for both, with two Workload extension examples and one Trait extension example:
- [Workload Type extension, using OpenFaaS as an example](https://kubevela.io/#/en/platform-engineers/workload-type)
- [Workload Type extension, using a cloud RDS resource as an example](https://kubevela.io/#/en/platform-engineers/cloud-services)
- [Trait extension, using KubeWatch as an example](https://kubevela.io/#/en/platform-engineers/trait)
Let's use a built-in WorkloadDefinition as an example to look at the basic structure of a Definition file:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: webservice
annotations:
definition.oam.dev/description: "`Webservice` is a workload type to describe long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
If workload type is skipped for any service defined in Appfile, it will be defaulted to `Web Service` type."
spec:
definitionRef:
name: deployments.apps
extension:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["env"] != _|_ {
env: parameter.env
}
if context["config"] != _|_ {
env: context.config
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}}
}]
}}}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
// +usage=Which port do you want customer traffic sent to
// +short=p
port: *80 | int
// +usage=Define arguments by using environment variables
env?: [...{
// +usage=Environment variable name
name: string
// +usage=The value of the environment variable
value?: string
// +usage=Specifies a source the value of this var should come from
valueFrom?: {
// +usage=Selects a key of a secret in the pod's namespace
secretKeyRef: {
// +usage=The name of the secret in the pod's namespace to select from
name: string
// +usage=The key of the secret to select from. Must be a valid secret key
key: string
}
}
}]
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
}
```
At first glance it looks long and complicated, but don't worry: on closer inspection it splits into two parts:
* The Definition registration part, without the extension fields.
* The extension template (CUE Template) part consumed by the Appfile.
Let's take them apart one by one; it is actually quite easy to learn.
# The Definition Registration Part, Without Extension Fields
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: webservice
annotations:
definition.oam.dev/description: "`Webservice` is a workload type to describe long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
If workload type is skipped for any service defined in Appfile, it will be defaulted to `Web Service` type."
spec:
definitionRef:
name: deployments.apps
```
This part is 11 lines in total: 3 of them describe what `webservice` does, and 5 are fixed boilerplate. Only 2 lines carry specific information:
```yaml
definitionRef:
name: deployments.apps
```
These two lines state which CRD backs this Definition, in the format `<resources>.<api-group>`. Readers familiar with K8s will know that resources are usually located by `api-group`, `version`, and `kind`, and that `kind` maps to `resources` in the K8s RESTful API. Taking the familiar `Deployment` and `Ingress` as examples, the mapping looks like this:
| api-group | kind | version | resources |
| -------- | -------- | -------- | -------- |
| apps | Deployment | v1 | deployments |
| networking.k8s.io | Ingress | v1 | ingresses |
> A side note: why is the concept of resources needed when we already have kind?
> Because besides the kind itself, a CRD has fields such as status and replica that should be decoupled from the spec and updated separately through the RESTful API,
> so in addition to the one resource corresponding to the kind there are some extra resources, e.g. a Deployment's status is addressed as `deployments/status`.
So you have surely worked out how to write a Definition without the extension part: at its simplest, just assemble the pieces according to how K8s composes resources and fill in the three angle-bracket blanks below.
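If you have access to a cluster, you can look this mapping up yourself. A quick way (assuming a configured `kubectl`; output columns vary slightly by kubectl version) is:

```shell script
# list the resources / kind mapping for the apps API group
kubectl api-resources --api-group=apps
```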
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: <your name here>
spec:
definitionRef:
name: <resources here>.<api-group here>
```
Registering an operational trait (TraitDefinition) works exactly the same way.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: <your name here>
spec:
definitionRef:
name: <resources here>.<api-group here>
```
So registering `Ingress` as a KubeVela extension is simply:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
name: ingress
spec:
definitionRef:
name: ingresses.networking.k8s.io
```
Beyond this, TraitDefinition also adds some model-level features, such as:
* `appliesToWorkloads`: which Workload types this trait can be applied to.
* `conflictsWith`: which other trait types this trait conflicts with.
* `workloadRefPath`: which field of this trait holds the workload reference; KubeVela fills it in automatically when generating the trait object.
...
These features are all optional and are not used in this article; we will introduce them in detail in later posts.
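As an illustration of where these fields live (the value chosen here is hypothetical), they sit alongside `definitionRef` in the Definition's spec:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: ingress
spec:
  # restrict which workload types this trait may attach to (example value)
  appliesToWorkloads:
    - webservice
  definitionRef:
    name: ingresses.networking.k8s.io
```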
So by now you have mastered the basic extension pattern without extensions; the remaining part is the abstraction template built around [CUE](https://cuelang.org/).
# The Extension Template (CUE Template) Part Consumed by the Appfile
If you are interested in CUE itself, this [introduction to CUE](https://wonderflow.info/posts/2020-12-15-cuelang-template/) provides more background; for reasons of space, this article does not expand on CUE itself.
As you know, KubeVela's Appfile is very concise to write, whereas a K8s object is a relatively complex YAML. To stay concise without losing extensibility, KubeVela provides a bridge from complex to simple:
this is the role of the CUE Template in a Definition.
## The CUE-Format Template
Let's first look at a Deployment YAML file, shown below. Much of it is fixed scaffolding (the template part); what the user actually needs to fill in is only a handful of fields (the parameter part).
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mytest
spec:
template:
spec:
containers:
- name: mytest
env:
- name: a
value: b
image: nginx:v1
metadata:
labels:
app.oam.dev/component: mytest
selector:
matchLabels:
app.oam.dev/component: mytest
```
In KubeVela, a Definition file always has two fixed sections: `output` and `parameter`. The content of `output` is the "template part", and `parameter` is the parameter part.
Now let's rewrite the Deployment YAML above into the template format used in a Definition.
```cue
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: name: "mytest"
spec: {
selector: matchLabels: {
"app.oam.dev/component": "mytest"
}
template: {
metadata: labels: {
"app.oam.dev/component": "mytest"
}
spec: {
containers: [{
name: "mytest"
image: "nginx:v1"
env: [{name:"a",value:"b"}]
}]
}}}
}
```
This format looks a lot like JSON; in fact it is CUE, and CUE itself is a superset of JSON. That is, CUE keeps all of JSON's rules and adds a few conveniences that make it easier to read and write:
* C-style comments.
* Quotes around field names can be omitted when the name contains no special characters.
* The comma at the end of a field value can be omitted, and a trailing comma after the last field is not an error.
* The outermost braces can be omitted.
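For example, the JSON object `{"name": "mytest", "image": "nginx:v1"}` can be written in CUE as:

```cue
// C-style comments are allowed
name:  "mytest"   // no quotes around the field name
image: "nginx:v1" // no trailing comma needed, no outer braces
```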
## Template Parameters in the CUE Format: Variable References
With the template part written, let's build the parameter part; these parameters are essentially variable references.
```
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
As the example above shows, template parameters in KubeVela are provided through the `parameter` section, and `parameter` essentially acts as a set of references that replace certain fields in `output`.
## The Complete Definition and Its Use in an Appfile
In fact, combining the two parts above already gives us a complete Definition file:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: mydeploy
spec:
definitionRef:
name: deployments.apps
extension:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
For easier debugging, it is common to split this into two files up front: one holds the YAML part above, named for example `def.yaml`:
```shell script
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: mydeploy
spec:
definitionRef:
name: deployments.apps
extension:
template: |
```
The other holds the CUE part, named for example `def.cue`:
```shell script
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
First, format `def.cue`; while formatting, the cue tool also performs some validation. You can also go deeper and [debug with the cue command](https://wonderflow.info/posts/2020-12-15-cuelang-template/):
```shell script
cue fmt def.cue
```
Once debugging is done, assemble the two files into one YAML with a script:
```shell script
./hack/vela-templates/mergedef.sh def.yaml def.cue > mydeploy.yaml
```
Then apply this YAML file to the K8s cluster.
```shell script
$ kubectl apply -f mydeploy.yaml
workloaddefinition.core.oam.dev/mydeploy created
```
Once the new capability is `kubectl apply`ed to Kubernetes, with no restart and no upgrade, KubeVela users can immediately see the new capability and start using it:
```shell script
$ vela workloads
Automatically discover capabilities successfully ✅ Add(1) Update(0) Delete(0)
TYPE CATEGORY DESCRIPTION
+mydeploy workload description not defined
NAME DESCRIPTION
mydeploy description not defined
```
It is used in an Appfile like this:
```yaml
name: my-extend-app
services:
mysvc:
type: mydeploy
image: crccheck/hello-world
name: mysvc
```
Run `vela up` to get it running:
```shell script
$ vela up -f docs/examples/blog-extension/my-extend-app.yaml
Parsing vela appfile ...
Loading templates ...
Rendering configs for service (mysvc)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward my-extend-app
SSH: vela exec my-extend-app
Logging: vela logs my-extend-app
App status: vela status my-extend-app
Service status: vela status my-extend-app --svc mysvc
```
Checking the application status shows it is up and running (`HEALTHY Ready: 1/1`):
```shell script
$ vela status my-extend-app
About:
Name: my-extend-app
Namespace: env-application
Created at: 2020-12-15 16:32:25.08233 +0800 CST
Updated at: 2020-12-15 16:32:25.08233 +0800 CST
Services:
- Name: mysvc
Type: mydeploy
HEALTHY Ready: 1/1
```
# Advanced Usage in Definition Templates
Above, we experienced the whole extension flow through the most basic feature, template substitution. Beyond that, you may have more complex needs, such as conditionals, loops, and complex types, which call for some advanced usage.
## Struct Parameters
If some template parameters have complex types, containing a struct or multiple nested structs, you can define a struct type.
1. Define a struct type with one string member, one int member, and one struct member.
```
#Config: {
name: string
value: int
other: {
key: string
value: string
}
}
```
2. Use this struct type in the parameters, here as an array.
```
parameter: {
name: string
image: string
config: [...#Config]
}
```
3. In `output`, it is likewise consumed via a variable reference.
```shell script
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: parameter.config
}]
}
...
}
```
4. In the Appfile, you simply write values following the structure defined in `parameter`.
```
name: my-extend-app
services:
mysvc:
type: mydeploy
image: crccheck/hello-world
name: mysvc
config:
- name: a
value: 1
other:
key: mykey
value: myvalue
```
## Conditionals
Sometimes whether a parameter is applied depends on a condition:
```shell script
parameter: {
name: string
image: string
useENV: bool
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
}
```
In the Appfile, you just write the value.
```
name: my-extend-app
services:
mysvc:
type: mydeploy
image: crccheck/hello-world
name: mysvc
useENV: true
```
## Optional Parameters
Sometimes a parameter may or may not be present, i.e. it is not required. This is usually combined with a conditional; for a possibly-absent field, the condition is `_variable != _|_`:
```shell script
parameter: {
name: string
image: string
config?: [...#Config]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.config != _|_ {
config: parameter.config
}
}]
}
...
}
```
In that case, `config` in the Appfile becomes optional: if it is provided it is rendered, and if it is omitted it is not.
## Default Values
If you want a parameter to have a default value, use this form.
```shell script
parameter: {
name: string
image: *"nginx:v1" | string
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
...
}
```
Now the Appfile can omit the image parameter, and "nginx:v1" is used by default:
```
name: my-extend-app
services:
mysvc:
type: mydeploy
name: mysvc
```
## Loops
### Looping over a Map
```shell script
parameter: {
name: string
image: string
env: [string]: string
}
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
```
In the Appfile:
```
name: my-extend-app
services:
mysvc:
type: mydeploy
name: "mysvc"
image: "nginx"
env:
env1: value1
env2: value2
```
### Looping over an Array
```shell script
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
}
}
```
In the Appfile:
```
name: my-extend-app
services:
mysvc:
type: mydeploy
name: "mysvc"
image: "nginx"
env:
- name: env1
value: value1
- name: env2
value: value2
```
## KubeVela's Built-in `context` Variable
You may have noticed that the name we defined in `parameter` is actually written twice in the Appfile: once as the key under services (each service is identified by its name),
and once again in the `name` parameter itself. The user should not have to repeat it, so KubeVela also defines a built-in `context` that carries common environmental context such as the application name and secrets.
Using `context` directly in the template removes the need for an extra `name` parameter; KubeVela injects it automatically while rendering the template.
```shell script
parameter: {
image: string
}
output: {
...
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
...
}
```
## KubeVela's Comment Enhancements
KubeVela also extends cuelang's comments to make it easy to generate documentation automatically and to be consumed by the CLI.
```
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
...
}
```
Here, a comment beginning with `+usage` becomes the parameter's description, and the text after `+short` is the shorthand used in the CLI.
# Summary
Through a concrete case and a detailed walkthrough, this article introduced the full process and principles of adding a new capability in KubeVela, as well as how to write capability templates.
You may still have one question: once a platform administrator adds a new capability this way, how do the platform's users find out how to use it? In fact, KubeVela not only makes it easy to add new capabilities, **it can also automatically generate Markdown usage documentation for each capability!** If you don't believe it, take a look at KubeVela's own official website: every capability document under the `References/Capabilities` directory (for example, [this one](https://kubevela.io/#/en/developers/references/workload-types/webservice)) is generated automatically from that capability's template.
Finally, you are welcome to write interesting extensions and submit them to KubeVela's [community catalog](https://github.com/oam-dev/catalog/tree/master/registry).


@ -1,95 +0,0 @@
---
title: KubeVela Official Documentation Translation Event
tags: [ documentation ]
description: KubeVela Official Documentation Translation Event
---
## Background
KubeVela v1.0 adopted a new website architecture and documentation workflow, adding documentation versioning, i18n internationalization, and automation. However, the official KubeVela documentation is currently English-only, which raises the barrier to learning and using KubeVela and hinders the project's spread and growth. Translation work also noticeably improves language skills and broadens the range of technical material we can read, hence this event.
## Event Workflow
This event takes place mainly in the [kubevela.io](https://github.com/oam-dev/kubevela.io) repo; sign-up and task claiming both happen in the [KubeVela documentation translation registry](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) (**please make sure to register your information in the sheet**).
### Getting Started
![Translation workflow](https://tvax1.sinaimg.cn/large/ad5fbf65gy1gpdbriuraij20k20l6dhm.jpg)
The basic flow for participating is:
- Claim a task: register and claim a task in the [KubeVela documentation translation registry](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC);
- Submit: submit a PR and wait for review;
- Review: a maintainer reviews the PR;
- Final review: the reviewed content gets a final confirmation;
- Merge: the PR is merged into the master branch and the task is complete.
### Participation Guide
The following describes the translation work in detail.
#### Prerequisites
- Account: you need a GitHub account; tasks are claimed and PRs submitted through GitHub.
- Repository and branch management:
  - fork the [kubevela.io](https://github.com/oam-dev/kubevela.io) repository and add it as the upstream of your own repo: `git remote add upstream https://github.com/oam-dev/kubevela.io.git`
  - do the translation in your own repository, i.e. on origin;
  - create a new branch for each task.
- Node.js version >= 12.13.0 (check with `node -v`)
- Yarn version >= 1.5 (check with `yarn --version`)
#### Steps
**Step 1: Browse the tasks**
Register in the [KubeVela documentation translation registry](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) and browse the tasks available for claiming.
**Step 2: Claim a task**
Edit the [KubeVela documentation translation registry](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) sheet to claim a task. Note: to ensure quality, a translator may hold at most three tasks at a time, and may claim more only after finishing them.
**Step 3: Build and preview locally**
```shell
# install dependencies
$ yarn install
# run the Chinese docs locally
$ yarn run start -- --locale zh
yarn run v1.22.10
warning From Yarn 1.0 onwards, scripts don't require "--" for options to be forwarded. In a future version, any explicit "--" will be forwarded as-is to the scripts.
$ docusaurus start --locale zh
Starting the development server...
Docusaurus website is running at: http://localhost:3000/zh/
✔ Client
Compiled successfully in 7.54s
「wds」: Project is running at http://localhost:3000/
「wds」: webpack output is served from /zh/
「wds」: Content not from webpack is served from /Users/saybot/own/kubevela.io
「wds」: 404s will fallback to /index.html
✔ Client
Compiled successfully in 137.94ms
```
Do not modify the content under `/docs`; the Chinese docs live in `/i18n/zh/docusaurus-plugin-content-docs`. After that, you can preview the result at http://localhost:3000/zh/.
**Step 4: Submit a PR**
Once the translation is complete, submit a PR. Note: to make review easier, each translated article should be **one PR**; if you translate several articles, check out **multiple branches** and submit multiple PRs.
**Step 5: Review**
A maintainer reviews the PR.
**Step 6: Task complete**
Qualified translations are merged into the master branch of [kubevela.io](https://github.com/oam-dev/kubevela.io) and published.
### Translation Requirements
- Add a space wherever digits or English words border Chinese text.
- Write KubeVela consistently, with K and V capitalized.
- After translating, read the article through once: no paragraphs should be missing, and the text should read smoothly and idiomatically in Chinese. Strict literalness is not required; free translation is fine. This is also checked during review.
- Do not mix 你 and 您; use **你** consistently.
- Terms you cannot translate may be left untranslated; note them in the PR and they will be checked during review.
- Do not translate concepts defined by OAM/KubeVela such as Component, Workload, and Trait; we want to reinforce awareness of these terms. You may add the Chinese translation in parentheses at their first occurrence in a new article.
- Mind the difference between Chinese and English punctuation.
- PR naming convention: `Translate <relative path of the translated file>`, e.g. `Translate i18n/zh/docusaurus-plugin-content-docs/current/introduction.md`.


@ -67,7 +67,7 @@ Finally, go to *Dashboard > Manage Jenkins > Configure System > GitHub* in Jenki
### KubeVela
You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to [official doc](/docs/advanced-install#install-kubevela-with-apiserver) for details.
You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to [official doc](/docs/platform-engineers/advanced-install#install-kubevela-with-apiserver) for details.
## Composing Applications


@ -103,28 +103,26 @@ helm search repo kubevela/vela-core -l
### Step 2. Upgrade KubeVela CRDs
```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_approllouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_clusters.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_envbindings.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_initializers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_podspecworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouttraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
```
> Tips: If you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable`, please delete the CRD that reports the error and re-apply the KubeVela CRDs.


@ -76,11 +76,11 @@ module.exports = {
},
{
label: 'User Manuals',
to: '/docs/end-user/application',
to: '/docs/end-user/components/helm',
},
{
label: 'Administrator Manuals',
to: '/docs/platform-engineers/overview',
to: '/docs/platform-engineers/oam/oam-model',
},
],
},
@ -146,7 +146,7 @@ module.exports = {
showLastUpdateAuthor: true,
showLastUpdateTime: true,
includeCurrentVersion: true,
lastVersion: 'v1.0',
lastVersion: 'v1.1',
},
blog: {
showReadingTime: true,


@ -33,7 +33,7 @@ KubeVela removes the barriers between application delivery and infrastructure management; compared
> This article uses Jenkins as the continuous-integration tool; developers can also use other CI tools such as TravisCI or GitHub Actions.
First, prepare a Jenkins environment to run the CI pipeline. See the [official doc](https://www.jenkins.io/doc/book/installing/linux/) for installing and initializing Jenkins.
First, prepare a Jenkins environment to run the CI pipeline. See the [official doc](https://www.jenkins.io/doc/book/installing/linux/) for installing and initializing Jenkins.
Note that since the CI pipeline in this article is based on Docker and GitHub, after installing Jenkins you also need to install the related plugins (Dashboard > Manage Jenkins > Manage Plugins), including Pipeline, HTTP Request Plugin, Docker Pipeline, and Docker Plugin.
@ -67,7 +67,7 @@ KubeVela removes the barriers between application delivery and infrastructure management; compared
### KubeVela Environment
You need to install KubeVela in your Kubernetes cluster and enable the apiserver function; see the [official doc](/docs/advanced-install#install-kubevela-with-apiserver).
You need to install KubeVela in your Kubernetes cluster and enable the apiserver function; see the [official doc](/docs/platform-engineers/advanced-install#install-kubevela-with-apiserver).
## Composing Applications


@ -16,7 +16,7 @@ title: Multi-Cluster Application Delivery
## Prerequisites
Before using multi-cluster application deployment, you need to join the sub-clusters into KubeVela's management via their KubeConfig. The Vela CLI can do this for you.
Before using multi-cluster application deployment, you need to join the sub-clusters into KubeVela's management via their KubeConfig. The Vela CLI can do this for you.
```shell script
vela cluster join <your kubeconfig path>


@ -2,7 +2,7 @@
title: Learn More
---
Modules in KubeVela are fully customizable and pluggable, so besides the built-in components, you can also add more component types of your own in the following ways.
Modules in KubeVela are fully customizable and pluggable, so besides the built-in components, you can also add more component types of your own in the following ways.
## 1. Get modular capabilities from the official or third-party capability registries


@ -2,7 +2,7 @@
title: Learn More
---
Modules in KubeVela are fully customizable and pluggable, so besides the built-in operational capabilities, you can also add more of your own in the following ways.
Modules in KubeVela are fully customizable and pluggable, so besides the built-in operational capabilities, you can also add more of your own in the following ways.
## 1. Get modular capabilities from the official or third-party capability registries


@ -199,7 +199,7 @@ sudo mv ./vela /usr/local/bin/vela
## 4. [Optional] Install Addons
KubeVela supports a series of [out-of-the-box addons](./platform-engineers/advanced-install#插件列表); enabling at least the following addons is recommended:
KubeVela supports a series of [out-of-the-box addons](./platform-engineers/advanced-install#插件列表); enabling at least the following addons is recommended:
* The Helm and Kustomize component addons
```shell


@ -105,28 +105,26 @@ helm search repo kubevela/vela-core -l
### Step 2: Upgrade KubeVela CRDs
```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_approllouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_clusters.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_envbindings.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_initializers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_podspecworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouttraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
```
> Tip: if you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable`, delete the CRD that reports the error and then re-apply the KubeVela CRDs.


@ -92,7 +92,7 @@ The size of the PVC needed by the Prometheus server; same as `alertmanager-pvc-size`.
- grafana-domain
The domain name for Grafana. You can use a custom domain, or the cluster-level wildcard domain provided by ACK: `*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`
The domain name for Grafana. You can use a custom domain, or the cluster-level wildcard domain provided by ACK: `*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`
For example, here we use `grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`.
#### Kubernetes clusters from other cloud providers


@ -4,7 +4,7 @@ title: 性能调优
### 推荐配置
在集群规模变大,应用数量变多时,可能会因为 KubeVela 的控制器性能跟不上需求导致 KubeVela 系统内的应用运维出现问题,这可能是由于的 KubeVela 控制器参数不当所致。
在 KubeVela 的性能测试中KubeVela 团队验证了在各种不同规模的场景下 KubeVela 控制器的运维能力。并给出了以下的推荐配置:
@ -14,7 +14,7 @@ title: 性能调优
| 中 | < 500 | < 5,000 | < 30,000 | 4 | 500 | 800 | 1 | 2Gi |
| 大 | < 1,000 | < 12,000 | < 72,000 | 4 | 800 | 1,000 | 2 | 4Gi |
> 上述配置中,单一应用的规模应在 23 个组件56 个资源左右。如果你的场景下,应用普遍较大,如单个应用需要对应 20 个资源,那么你可以按照比例相应提高各项配置。
### 调优方法


@ -0,0 +1,166 @@
{
"version.label": {
"message": "Next",
"description": "The label for version current"
},
"sidebar.docs.category.Getting Started": {
"message": "快速开始",
"description": "The label for category Getting Started in sidebar docs"
},
"sidebar.docs.category.Core Concepts": {
"message": "核心概念",
"description": "The label for Core Concepts in sidebar docs"
},
"sidebar.docs.category.Learning CUE": {
"message": "CUE 语言",
"description": "The label for category Learning CUE in sidebar docs"
},
"sidebar.docs.category.Helm": {
"message": "Helm",
"description": "The label for category Helm in sidebar docs"
},
"sidebar.docs.category.Raw Template": {
"message": "Raw Template",
"description": "The label for category Raw Template in sidebar docs"
},
"sidebar.docs.category.Traits System": {
"message": "运维特征系统",
"description": "The label for category Traits System in sidebar docs"
},
"sidebar.docs.category.Defining Cloud Service": {
"message": "定义 Cloud Service",
"description": "The label for category Defining Cloud Service in sidebar docs"
},
"sidebar.docs.category.Hands-on Lab": {
"message": "实践实验室",
"description": "The label for category Hands-on Lab in sidebar docs"
},
"sidebar.docs.category.Appfile": {
"message": "Appfile",
"description": "The label for category Appfile in sidebar docs"
},
"sidebar.docs.category.Roadmap": {
"message": "Roadmap",
"description": "The label for category Roadmap in sidebar docs"
},
"sidebar.docs.category.Application Deployment": {
"message": "Application Deployment",
"description": "The label for category Application Deployment in sidebar docs"
},
"sidebar.docs.category.More Operations": {
"message": "更多操作",
"description": "The label for category More Operations in sidebar docs"
},
"sidebar.docs.category.Platform Operation Guide": {
"message": "Platform Operation Guide",
"description": "The label for category Platform Operation Guide in sidebar docs"
},
"sidebar.docs.category.Using KubeVela CLI": {
"message": "使用命令行工具",
"description": "The label for category Using KubeVela CLI in sidebar docs"
},
"sidebar.docs.category.Managing Applications": {
"message": "管理应用",
"description": "The label for category Managing Applications in sidebar docs"
},
"sidebar.docs.category.References": {
"message": "参考",
"description": "The label for category References in sidebar docs"
},
"sidebar.docs.category.Learning OAM": {
"message": "开放应用模型",
"description": "The label for category Learning OAM in sidebar docs"
},
"sidebar.docs.category.Environment System": {
"message": "交付环境系统",
"description": "The label for category Environment System in sidebar docs"
},
"sidebar.docs.category.Workflow": {
"message": "定义交付工作流",
"description": "The label for category Workflow End User in sidebar docs"
},
"sidebar.docs.category.Workflow System": {
"message": "工作流系统",
"description": "The label for category Workflow System in sidebar docs"
},
"sidebar.docs.category.System Operation": {
"message": "系统运维",
"description": "The label for category System Operation in sidebar docs"
},
"sidebar.docs.category.Customize Traits": {
"message": "自定义运维特征",
"description": "The label for category Customize Traits in sidebar docs"
},
"sidebar.docs.category.Customize Components": {
"message": "自定义组件",
"description": "The label for category Customize Components in sidebar docs"
},
"sidebar.docs.category.CLI": {
"message": "CLI 命令行工具",
"description": "The label for category CLI in sidebar docs"
},
"sidebar.docs.category.Capabilities": {
"message": "Capabilities",
"description": "The label for category Capabilities in sidebar docs"
},
"sidebar.docs.category.Appendix": {
"message": "附录",
"description": "The label for category Appendix in sidebar docs"
},
"sidebar.docs.category.Component System": {
"message": "组件系统",
"description": "The label for category Component System in sidebar docs"
},
"sidebar.docs.category.User Manuals": {
"message": "用户手册",
"description": "The label for category User Guide in sidebar docs"
},
"sidebar.docs.category.Components": {
"message": "选择待交付组件",
"description": "The label for category Components in sidebar docs"
},
"sidebar.docs.category.Traits": {
"message": "绑定运维特征",
"description": "The label for category Traits in sidebar docs"
},
"sidebar.docs.category.Policies": {
"message": "设定应用策略",
"description": "The label for category Policies in sidebar docs"
},
"sidebar.docs.category.Case Studies": {
"message": "实践案例",
"description": "The label for category case studies in sidebar docs"
},
"sidebar.docs.category.Observability": {
"message": "新增可观测性",
"description": "The label for category Observability in sidebar docs"
},
"sidebar.docs.category.Scaling": {
"message": "扩缩容",
"description": "The label for category Scaler in sidebar docs"
},
"sidebar.docs.category.Debugging": {
"message": "调试指南",
"description": "The label for category Debugging in sidebar docs"
},
"sidebar.docs.category.Administrator Manuals": {
"message": "管理员手册",
"description": "The label for category Administrator Manuals in sidebar docs"
},
"sidebar.docs.category.Simple Template": {
"message": "Simple Template",
"description": "The label for category Simple Template in sidebar docs"
},
"sidebar.docs.category.Cloud Services": {
"message": "云服务组件",
"description": "The label for category Cloud Services in sidebar docs"
},
"sidebar.docs.category.CUE Component": {
"message": "CUE 组件",
"description": "The label for category CUE Components in sidebar docs"
},
"sidebar.docs.category.Addons": {
"message": "插件系统",
"description": "The extended add-ons"
}
}


@ -0,0 +1,27 @@
![alt](resources/KubeVela-03.png)
*Make shipping applications more enjoyable.*
# KubeVela
KubeVela is a modern application engine that adapts to your application's needs, not the other way around.
## Community
- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel
- Gitter: [Discussion](https://gitter.im/oam-dev/community)
- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs)
## Installation
Installation guide is available on [this section](install).
## Quick Start
Quick start is available on [this section](quick-start).
## Contributing
Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela.
## Code of Conduct
KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@ -0,0 +1,235 @@
---
title: Application CRD
---
本部分将逐步介绍如何使用 `Application` 对象来定义你的应用,并以声明式的方式进行相应的操作。
## 示例
下面的示例应用声明了一个具有 *Worker* 工作负载类型的 `backend` 组件和具有 *Web Service* 工作负载类型的 `frontend` 组件。
此外,`frontend` 组件声明了 `sidecar` 和 `autoscaler` 两个 trait 运维能力,这意味着该工作负载会被自动注入一个 `fluentd` sidecar并可根据 CPU 使用率在 1 到 10 个副本之间自动扩缩容。
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
- name: frontend
type: webservice
properties:
image: nginx
traits:
- type: autoscaler
properties:
min: 1
max: 10
cpuUtil: 60
- type: sidecar
properties:
name: "sidecar-test"
image: "fluentd"
```
### 部署应用
部署上述的 Application YAML 文件之后,应用即启动:
```shell
$ kubectl get application -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
....
status:
components:
- apiVersion: core.oam.dev/v1alpha2
kind: Component
name: backend
- apiVersion: core.oam.dev/v1alpha2
kind: Component
name: frontend
....
status: running
```
你可以看到一个命名为 `frontend` 并带有被注入的容器 `fluentd` 的 Deployment 正在运行。
```shell
$ kubectl get deploy frontend
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 1/1 1 1 100m
```
另一个命名为 `backend` 的 Deployment 也在运行。
```shell
$ kubectl get deploy backend
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 100m
```
同样被 `autoscaler` trait 创建出来的还有一个 HPA 。
```shell
$ kubectl get HorizontalPodAutoscaler frontend
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
frontend Deployment/frontend <unknown>/50% 1 10 1 101m
```
## 背后的原理
在上面的示例中, `type: worker` 指的是该组件的字段内容(即下面的 `properties` 字段中的内容)将遵从名为 `worker``ComponentDefinition` 对象中的规范定义,如下所示:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
parameter: {
image: string
cmd?: [...string]
}
```
因此,`backend` 的 `properties` 部分仅支持两个参数:`image` 和 `cmd`。这是由该定义 `.spec.schematic.cue.template` 字段中的 `parameter` 列表约束的。
类似的可扩展抽象机制也同样适用于 traits(运维能力)。
例如,`frontend` 中的 `type: autoscaler` 指的是该组件对应 trait 的字段规范(即 trait 的 `properties` 部分)
将由名为 `autoscaler``TraitDefinition` 对象约束,如下所示:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "configure k8s HPA for Deployment"
name: autoscaler
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
outputs: hpa: {
apiVersion: "autoscaling/v2beta2"
kind: "HorizontalPodAutoscaler"
metadata: name: context.name
spec: {
scaleTargetRef: {
apiVersion: "apps/v1"
kind: "Deployment"
name: context.name
}
minReplicas: parameter.min
maxReplicas: parameter.max
metrics: [{
type: "Resource"
resource: {
name: "cpu"
target: {
type: "Utilization"
averageUtilization: parameter.cpuUtil
}
}
}]
}
}
parameter: {
min: *1 | int
max: *10 | int
cpuUtil: *50 | int
}
```
该应用还使用了一个 `sidecar` 运维能力,其定义如下:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |-
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
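按照上面的 patch 模板,`frontend` 组件最终渲染出的 Deployment 会在 containers 列表中多出一个 sidecar 容器(`+patchKey=name` 表示按容器名合并)。以前文 Application 中的参数为例,效果大致如下(仅为示意片段):

```yaml
spec:
  template:
    spec:
      containers:
      - name: frontend       # 组件本身的容器
        image: nginx
      - name: sidecar-test   # 由 sidecar trait patch 进来的容器
        image: fluentd
```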
在业务用户使用之前所有这些定义对象Definition Object都应当已经由平台团队声明并安装完毕。因此业务用户只需要专注于应用`Application`)本身。
请注意KubeVela 的终端用户(业务研发)不需要了解定义对象,他们只需要学习如何使用平台已经安装的能力,这些能力通常还可以被可视化的表单展示出来(或者通过 JSON schema 对接其他方式)。请从[由定义生成前端表单](/docs/platform-engineers/openapi-v3-json-schema)部分的文档了解如何实现。
### 惯例和"标准协议"
在应用(`Application` 资源)部署到 Kubernetes 集群后KubeVela 运行时将遵循以下 “标准协议”和惯例来生成和管理底层资源实例。
| Label | 描述 |
| :--: | :---------: |
|`workload.oam.dev/type=<component definition name>` | 其对应 `ComponentDefinition` 的名称 |
|`trait.oam.dev/type=<trait definition name>` | 其对应 `TraitDefinition` 的名称 |
|`app.oam.dev/name=<app name>` | 它所属的应用的名称 |
|`app.oam.dev/component=<component name>` | 它所属的组件的名称 |
|`trait.oam.dev/resource=<name of trait resource instance>` | 运维能力资源实例的名称 |
|`app.oam.dev/appRevision=<name of app revision>` | 它所属的应用revision的名称 |
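借助这些标准 Label你可以直接用 kubectl 按应用或组件维度检索底层资源。以前文的 `website` 应用为例(示意):

```shell
# 列出属于 website 应用的所有 Deployment
kubectl get deployment -l app.oam.dev/name=website
# 仅查看 frontend 组件对应的资源
kubectl get deployment -l app.oam.dev/component=frontend
```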


@ -0,0 +1,229 @@
---
title: 基于 Istio 的渐进式发布
---
## 简介
KubeVela 背后的应用交付模型OAM是一个从设计与实现上都高度可扩展的模型。因此KubeVela 不需要任何“脏乱差”的胶水代码或者脚本就可以同任何云原生技术和工具(比如 Service Mesh实现集成让社区生态中各种先进技术立刻为你的应用交付助上“一臂之力”。
本文将会介绍如何使用 KubeVela 结合 [Istio](https://istio.io/latest/) 进行复杂的金丝雀发布流程。在这个过程中KubeVela 会帮助你:
- 将 Istio 的能力进行封装和抽象后再交付给用户使用,使得用户无需成为 Istio 专家就可以直接使用这个金丝雀发布的场景KubeVela 会为你提供一个封装好的 Rollout 运维特征)
- 通过声明式工作流来设计金丝雀发布的步骤,以及执行发布/回滚,而无需通过“脏乱差”的脚本或者人工的方式管理这个过程。
本案例中,我们将使用经典的微服务应用 [bookinfo](https://istio.io/latest/docs/examples/bookinfo/?ie=utf-8&hl=en&docs-search=Canary) 来展示上述金丝雀发布过程。
## 准备工作
开启 Istio 集群插件
```shell
vela addon enable istio
```
因为后面的例子运行在 default namespace需要为 default namespace 打上 Istio 自动注入 sidecar 的标签。
```shell
kubectl label namespace default istio-injection=enabled
```
## 初次部署
执行下面的命令,部署 bookinfo 应用。
```shell
kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/first-deploy.yaml
```
该应用的组件架构和访问关系如下所示:
![book-info-struct](../resources/book-info-struct.jpg)
该应用包含四个组件,其中组件 productpage、details、ratings 均配置了一个暴露端口 (expose) 运维特征,用来在集群内暴露服务。
组件 reviews 配置了一个金丝雀流量发布 (canary-traffic) 的运维特征。
productpage 组件还配置了一个 网关入口 (istio-gateway) 的运维特征,从而让该组件接收进入集群的流量。这个运维特征通过设置 `gateway:ingressgateway` 来使用 Istio 的默认网关实现,设置 `hosts: "*"` 来指定携带任意 host 信息的请求均可进入网关。
```shell
...
- name: productpage
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
port: 9080
traits:
- type: expose
properties:
port:
- 9080
- type: istio-gateway
properties:
hosts:
- "*"
gateway: ingressgateway
match:
- exact: /productpage
- prefix: /static
- exact: /login
- prefix: /api/v1/products
port: 9080
...
```
你可以通过执行下面的命令将网关的端口映射到本地。
```shell
kubectl port-forward service/istio-ingressgateway -n istio-system 19082:80
```
通过浏览器访问 `127.0.0.1:19082` 将会看到下面的页面。
![pic-v2](../resources/canary-pic-v2.jpg)
## 金丝雀发布
接下来我们以 `reviews` 组件为例,模拟一次金丝雀发布的完整过程,即先升级一部分组件实例,同时调整流量,以此达到渐进式灰度发布的目的。
执行下面的命令,来更新应用。
```shell
kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/rollout-v2.yaml
```
这次操作更新了 reviews 组件的镜像,从之前的 v2 升级到了 v3。同时 reviews 组件的灰度发布 (Rollout) 运维特征指定了,升级的目标实例个数为 2 个,分两个批次升级,每批升级 1 个实例。
```shell
...
- name: reviews
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
port: 9080
volumes:
- name: wlp-output
type: emptyDir
mountPath: /opt/ibm/wlp/output
- name: tmp
type: emptyDir
mountPath: /tmp
traits:
- type: expose
properties:
port:
- 9080
- type: rollout
properties:
targetSize: 2
rolloutBatches:
- replicas: 1
- replicas: 1
- type: canary-traffic
properties:
port: 9080
...
```
这次更新还为应用新增了一个升级的执行工作流,该工作流包含三个步骤。
第一步通过指定 `batchPartition` 等于 0 设置只升级第一批次的实例。并通过 `traffic.weightedTargets` 将 10% 的流量切换到新版本的实例上面。
完成第一步之后,执行第二步工作流会进入暂停状态,等待用户校验服务状态。
工作流的第三步是完成剩下实例的升级,并将全部流量切换至新的组件版本上。
```shell
...
workflow:
steps:
- name: rollout-1st-batch
type: canary-rollout
properties:
# just upgrade first batch of component
batchPartition: 0
traffic:
weightedTargets:
- revision: reviews-v1
weight: 90 # 90% 的流量保持在旧版本 reviews-v1
- revision: reviews-v2
weight: 10 # 10% shift to new version
# give user time to verify part of traffic shifting to newRevision
- name: manual-approval
type: suspend
- name: rollout-rest
type: canary-rollout
properties:
# upgrade all batches of component
batchPartition: 1
traffic:
weightedTargets:
- revision: reviews-v2
weight: 100 # 100% shift to new version
...
```
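第一步中 90/10 的流量切分逻辑,可以用下面这段 Go 小示例直观理解:按权重把一个 0 到 99 的数映射到目标版本。注意这只是对 weightedTargets 语义的示意,并非 Istio 的实际实现:

```go
package main

import "fmt"

// chooseRevision 按 weightedTargets 的权重把一个 0-99 的数映射到某个版本,
// 模拟 90% 流量到 reviews-v1、10% 流量到 reviews-v2 的切分效果
func chooseRevision(r int) string {
	targets := []struct {
		revision string
		weight   int
	}{
		{"reviews-v1", 90},
		{"reviews-v2", 10},
	}
	acc := 0
	for _, t := range targets {
		acc += t.weight
		if r < acc {
			return t.revision
		}
	}
	return targets[len(targets)-1].revision
}

func main() {
	fmt.Println(chooseRevision(42)) // 落在前 90%,命中 reviews-v1
	fmt.Println(chooseRevision(95)) // 落在后 10%,命中 reviews-v2
}
```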
更新完成之后,在浏览器中多次访问之前的网址,会发现大约有 10% 的概率看到下面这个新的页面:
![pic-v3](../resources/canary-pic-v3.jpg)
可见新版本的页面已经由之前的黑色五角星变成了红色五角星。
### 继续完成全量发布
如果在人工校验时,发现服务符合预期,需要继续执行工作流,完成全量发布。你可以通过执行下面的命令完成这一操作。
```shell
vela workflow resume book-info
```
在浏览器上继续多次访问网页,会发现五角星将一直是红色的。
### 终止发布工作流并回滚
如果在人工校验时,发现服务不符合预期,需要终止预先定义好的发布工作流,并将流量和实例切换回之前的版本。你可以通过执行下面的命令完成这一操作。
```shell
kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/revert-in-middle.yaml
```
这次更新删除了之前定义好的工作流,从而终止正在执行的发布流程,并通过修改灰度发布运维特征的 `targetRevision`,将其指向之前的组件版本 `reviews-v1`。此外,这次更新还删除了组件的金丝雀流量发布 (canary-traffic) 运维特征,将全部流量打回同一个组件版本 `reviews-v1`
```shell
...
- name: reviews
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
port: 9080
volumes:
- name: wlp-output
type: emptyDir
mountPath: /opt/ibm/wlp/output
- name: tmp
type: emptyDir
mountPath: /tmp
traits:
- type: expose
properties:
port:
- 9080
- type: rollout
properties:
targetRevision: reviews-v1
batchPartition: 1
targetSize: 2
# This means to rollout two more replicas in two batches.
rolloutBatches:
- replicas: 2
...
```
在浏览器上继续访问网址,会发现五角星又变回到了黑色。


@ -0,0 +1,177 @@
---
title: 基于工作流的 GitOps
---
本案例将介绍如何在 GitOps 场景下使用 KubeVela以及这样做的好处是什么。
## 简介
GitOps 是一种现代化的持续交付手段,它允许开发人员通过直接更改 Git 仓库中的代码和配置来自动部署应用,在提高部署生产力的同时也通过分支回滚等能力提高了可靠性。其具体的好处可以查看[这篇文章](https://www.weave.works/blog/what-is-gitops-really),本文将不再赘述。
KubeVela 作为一个声明式的应用交付控制平面,天然就可以以 GitOps 的方式进行使用,并且这样做会在 GitOps 的基础上为用户提供更多的益处和端到端的体验,包括:
- 应用交付工作流CD 流水线)
- 即KubeVela 支持在 GitOps 模式中描述过程式的应用交付,而不只是简单的声明终态;
- 处理部署过程中的各种依赖关系和拓扑结构;
- 在现有各种 GitOps 工具的语义之上提供统一的上层抽象,简化应用交付与管理过程;
- 统一进行云服务的声明、部署和服务绑定;
- 提供开箱即用的交付策略(金丝雀、蓝绿发布等);
- 提供开箱即用的混合云/多云部署策略(放置规则、集群过滤规则等);
- 在多环境交付中提供 Kustomize 风格的 Patch 来描述部署差异,而无需学习任何 Kustomize 本身的细节;
- …… 以及更多。
在本文中,我们主要讲解直接使用 KubeVela 在 GitOps 模式下进行交付的步骤。
> 提示:你也可以通过类似的步骤使用 ArgoCD 等 GitOps 工具来间接使用 KubeVela细节的操作文档我们会在后续发布中提供。
## 准备代码仓库
首先,准备一个 Git 仓库,里面含有一个 `Application` 配置文件,一些源代码以及对应的 Dockerfile。
代码的实现逻辑非常简单,会启动一个服务,并显示对应的 Version 版本。而在 `Application` 当中,我们会通过一个 `webservice` 类型的组件启动该服务,并添加一个 `ingress` 的运维特征以方便访问:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-workflow
namespace: default
spec:
components:
- name: test-server
type: webservice
properties:
# 在创建完自动部署文件后,将 `default:gitops` 替换为其 namespace 和 name
image: <your image> # {"$imagepolicy": "default:gitops"}
port: 8088
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8088
```
我们希望用户改动代码进行提交后,自动构建出镜像并推送到镜像仓库。这一步 CI 可以通过集成 GitHub Actions、Jenkins 或者其他 CI 工具来实现。在本例中,我们通过借助 GitHub Actions 来完成持续集成。具体的代码文件及配置可参考 [示例仓库](https://github.com/oam-dev/samples/tree/master/9.GitOps_Demo)。
## 配置密钥信息
在新的镜像推送到镜像仓库后KubeVela 会识别到新的镜像,并更新仓库及集群中的 `Application` 配置文件。因此,我们需要一个含有 Git 信息的 Secret使 KubeVela 向 Git 仓库进行提交。
```yaml
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: kubernetes.io/basic-auth
stringData:
username: <your username>
password: <your password>
```
## 编写自动部署配置文件
完成了上述基础配置后,我们可以在本地新建一个自动部署配置文件,关联对应的 Git 仓库以及镜像仓库信息:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: git-app
spec:
components:
- name: gitops
type: kustomize
properties:
repoType: git
# 将此处替换成你的 git 仓库地址
url: <your github repo address>
# 关联 git secret
secretRef: my-secret
# 自动拉取配置的时间间隔
pullInterval: 1m
git:
# 指定监听的 branch
branch: master
# 指定监听的路径
path: .
imageRepository:
# 镜像地址
image: <your image>
# 如果这是一个私有的镜像仓库,可以通过 `kubectl create secret docker-registry` 创建对应的镜像密钥并相关联
# secretRef: imagesecret
filterTags:
# 可对镜像 tag 进行过滤
pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
extract: '$ts'
# 通过 policy 筛选出最新的镜像 Tag 并用于更新
policy:
numerical:
order: asc
# 追加提交信息
commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"
```
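上面 `filterTags``policy` 的筛选过程,可以用下面这段 Go 小示例直观理解:用同一个正则过滤 tag、提取命名分组 `ts`,再按 `numerical/asc` 取时间戳最大的 tag示例中的 tag 均为假设数据):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

// 与 filterTags.pattern 相同的正则,命名分组 ts 即 extract 中的 $ts
var tagPattern = regexp.MustCompile(`^master-[a-f0-9]+-(?P<ts>[0-9]+)`)

// latestTag 模拟 policy.numerical.order: asc 的筛选过程:
// 过滤出匹配的 tag按提取的时间戳升序排序取最大者作为最新镜像
func latestTag(tags []string) string {
	type candidate struct {
		tag string
		ts  int
	}
	var matched []candidate
	for _, t := range tags {
		if m := tagPattern.FindStringSubmatch(t); m != nil {
			ts, _ := strconv.Atoi(m[1])
			matched = append(matched, candidate{tag: t, ts: ts})
		}
	}
	if len(matched) == 0 {
		return ""
	}
	sort.Slice(matched, func(i, j int) bool { return matched[i].ts < matched[j].ts })
	return matched[len(matched)-1].tag
}

func main() {
	tags := []string{
		"master-a1b2c3d-1631241000",
		"dev-xyz", // 不匹配 pattern会被过滤掉
		"master-ffee990-1631250000",
	}
	fmt.Println(latestTag(tags)) // master-ffee990-1631250000
}
```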
将上述文件部署到集群中后,查看集群中的应用,可以看到,应用 `git-app` 自动拉取了 Git 仓库中的应用配置并部署到了集群中:
```shell
$ vela ls
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
first-vela-workflow test-server webservice ingress running healthy 2021-09-10 11:23:34 +0800 CST
git-app gitops kustomize running healthy 2021-09-10 11:23:32 +0800 CST
```
通过 `curl` 对应的 `Ingress`,可以看到目前的版本是 0.1.5
```shell
$ curl -H "Host:testsvc.example.com" http://<your-ip>
Version: 0.1.5
```
## 修改代码
完成首次部署后,我们可以通过修改 Git 仓库中的代码,来完成自动部署。
将代码文件中的 `Version` 改为 `0.1.6`:
```go
package main

import (
	"fmt"
	"net/http"
)

const VERSION = "0.1.6"

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		_, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
	})
	if err := http.ListenAndServe(":8088", nil); err != nil {
		println(err.Error())
	}
}
```
提交该改动至代码仓库,可以看到,我们配置的 CI 流水线开始构建镜像并推送至镜像仓库。
而 KubeVela 会通过监听镜像仓库,根据最新的镜像 Tag 来更新代码仓库中的 `Application`。此时,可以看到代码仓库中有一条来自 `kubevelabot` 的提交,提交信息均带有 `Update image automatically.` 前缀。你也可以通过 `{{range .Updated.Images}}{{println .}}{{end}}``commitMessage` 字段中追加你所需要的信息。
![alt](../resources/gitops-commit.png)
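其中 `commitMessage` 使用的是 Go `text/template` 的语法。下面这段 Go 小示例演示了该模板的渲染效果(数据结构的字段名与镜像名均为示意用的假设):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// update 模拟渲染提交信息时传入的数据结构(字段名为假设)
type update struct {
	Images []string
}

// render 用文档中 commitMessage 的模板渲染一段提交信息
func render() string {
	tmpl := template.Must(template.New("msg").Parse(
		"Image: {{range .Updated.Images}}{{println .}}{{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]update{
		"Updated": {Images: []string{"repo/app:v1", "repo/app:v2"}},
	}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render())
}
```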
> 值得注意的是,来自 `kubevelabot` 的提交不会再次触发流水线导致重复构建,因为我们在 CI 配置的时候,将来自 KubeVela 的提交过滤掉了。
>
> ```shell
> jobs:
> publish:
> if: "!contains(github.event.head_commit.message, 'Update image automatically')"
> ```
重新查看集群中的应用,可以看到经过一段时间后,`Application` 的镜像已经被更新。通过 `curl` 对应的 `Ingress` 查看当前版本:
```shell
$ curl -H "Host:testsvc.example.com" http://<your-ip>
Version: 0.1.6
```
版本已被成功更新!至此,我们完成了从变更代码,到自动部署至集群的全部操作。
KubeVela 会通过你配置的 `interval` 时间间隔,来每隔一段时间分别从代码仓库及镜像仓库中获取最新信息:
* 当 Git 仓库中的配置文件被更新时KubeVela 将根据最新的配置更新集群中的应用。
* 当镜像仓库中多了新的 Tag 时KubeVela 将根据你配置的 policy 规则,筛选出最新的镜像 Tag并更新到 Git 仓库中。而当代码仓库中的文件被更新后KubeVela 将重复第一步,更新集群中的文件,从而达到了自动部署的效果。
通过与 GitOps 的集成KubeVela 可以帮助用户加速部署应用,更为简洁地完成持续部署。


@ -0,0 +1,149 @@
---
title: Jenkins CI/CD
---
本文将介绍如何使用 KubeVela 同已有的 CI/CD 工具(比如 Jenkins共同协作来进行应用的持续交付并解释这样集成的好处是什么。
## 简介
KubeVela 作为一个普适的应用交付控制平面,只需要一点点集成工作就可以同任何现有的 CI/CD 系统对接起来,并且为它们带来一系列现代云原生应用交付的能力,比如:
- 混合云/多云应用交付;
- 跨环境发布Promotion
- 基于 Service Mesh 的发布与回滚;
- 处理部署过程中的各种依赖关系和拓扑结构;
- 统一进行云服务的声明、部署和服务绑定;
- 无需强迫你的团队采纳完整的 GitOps 协作方式即可享受 GitOps 技术本身的[一系列好处](https://www.weave.works/blog/what-is-gitops-really)
- …… 以及更多。
接下来,本文将会以一个 HTTP 服务的开发部署为例,介绍 KubeVela + Jenkins 方式下应用的持续集成与持续交付步骤。这个应用的具体代码在[这个 GitHub 库中](https://github.com/Somefive/KubeVela-demo-CICD-app)。
## 准备工作
在对接之前,用户首先需要确保以下环境。
1. 已部署好 Jenkins 服务并配置了 Docker 在 Jenkins 中的环境,包括相关插件及镜像仓库的访问权限。
2. 已配置好的 Git 仓库并开启 Webhook。确保 Git 仓库对应分支的变化能够通过 Webhook 触发 Jenkins 流水线的运行。
3. 准备好需要部署的 Kubernetes 集群环境,并在环境中安装 KubeVela 基础组件及 apiserver确保 KubeVela apiserver 能够从公网访问到。
## 对接 Jenkins 与 KubeVela apiserver
在 Jenkins 中以下面的 Groovy 脚本为例设置部署流水线。可以将流水线中的 Git 地址、镜像地址、apiserver 的地址、应用命名空间及应用替换成自己的配置,同时在自己的代码仓库中存放 Dockerfile 及 app.yaml用来构建及部署 KubeVela 应用。
```groovy
pipeline {
agent any
environment {
GIT_BRANCH = 'prod'
GIT_URL = 'https://github.com/Somefive/KubeVela-demo-CICD-app.git'
DOCKER_REGISTRY = 'https://registry.hub.docker.com'
DOCKER_CREDENTIAL = 'DockerHubCredential'
DOCKER_IMAGE = 'somefive/kubevela-demo-cicd-app'
APISERVER_URL = 'http://47.88.24.19'
APPLICATION_YAML = 'app.yaml'
APPLICATION_NAMESPACE = 'kubevela-demo-namespace'
APPLICATION_NAME = 'cicd-demo-app'
}
stages {
stage('Prepare') {
steps {
script {
def checkout = git branch: env.GIT_BRANCH, url: env.GIT_URL
env.GIT_COMMIT = checkout.GIT_COMMIT
env.GIT_BRANCH = checkout.GIT_BRANCH
echo "env.GIT_BRANCH=${env.GIT_BRANCH},env.GIT_COMMIT=${env.GIT_COMMIT}"
}
}
}
stage('Build') {
steps {
script {
docker.withRegistry(env.DOCKER_REGISTRY, env.DOCKER_CREDENTIAL) {
def customImage = docker.build(env.DOCKER_IMAGE)
customImage.push()
}
}
}
}
stage('Deploy') {
steps {
sh 'wget -q "https://github.com/mikefarah/yq/releases/download/v4.12.1/yq_linux_amd64"'
sh 'chmod +x yq_linux_amd64'
script {
def app = sh (
script: "./yq_linux_amd64 eval -o=json '.spec' ${env.APPLICATION_YAML} | sed -e 's/GIT_COMMIT/$GIT_COMMIT/g'",
returnStdout: true
)
echo "app: ${app}"
def response = httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', httpMode: 'POST', requestBody: app, url: "${env.APISERVER_URL}/v1/namespaces/${env.APPLICATION_NAMESPACE}/applications/${env.APPLICATION_NAME}"
println('Status: '+response.status)
println('Response: '+response.content)
}
}
}
}
}
```
之后向流水线中使用的 Git 仓库的分支推送代码变更Git 仓库的 Webhook 会触发 Jenkins 中新创建的流水线。该流水线会自动构建代码镜像并推送至镜像仓库,然后对 KubeVela apiserver 发送 POST 请求,将仓库中的应用配置文件部署到 Kubernetes 集群中。其中 app.yaml 可以参照以下样例。
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: kubevela-demo-app
spec:
components:
- name: kubevela-demo-app-web
type: webservice
properties:
image: somefive/kubevela-demo-cicd-app
imagePullPolicy: Always
port: 8080
traits:
- type: rollout
properties:
rolloutBatches:
- replicas: 2
- replicas: 3
batchPartition: 0
targetSize: 5
- type: labels
properties:
jenkins-build-commit: GIT_COMMIT
- type: ingress
properties:
domain: <your domain>
http:
"/": 8080
```
其中 GIT_COMMIT 会在 Jenkins 流水线中被替换为当前的 git commit id。这时可以通过 kubectl 命令查看 Kubernetes 集群中应用的部署情况。
```bash
$ kubectl get app -n kubevela-demo-namespace
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
cicd-demo-app kubevela-demo-app-web webservice running true 102s
$ kubectl get deployment -n kubevela-demo-namespace
NAME READY UP-TO-DATE AVAILABLE AGE
kubevela-demo-app-web-v1 2/2 2 2 111s
$ kubectl get ingress -n kubevela-demo-namespace
NAME CLASS HOSTS ADDRESS PORTS AGE
kubevela-demo-app-web <none> <your domain> 198.11.175.125 80 117s
```
在部署的应用文件中我们使用了灰度发布Rollout的特性应用初始发布时会先创建 2 个 Pod以便进行金丝雀验证。待验证完毕你可以将应用配置中 Rollout 特性的 `batchPartition: 0` 删去,以便完成剩余实例的更新发布。这个机制大大提高了发布的安全性和稳定性,同时你也可以根据实际需要,调整 Rollout 发布策略。
```bash
$ kubectl edit app -n kubevela-demo-namespace
application.core.oam.dev/cicd-demo-app edited
$ kubectl get deployment -n kubevela-demo-namespace
NAME READY UP-TO-DATE AVAILABLE AGE
kubevela-demo-app-web-v1 5/5 5 5 4m16s
$ curl http://<your domain>/
Version: 0.1.2
```
## 更多
详细的环境部署流程以及更加完整的应用滚动更新可以参考[博客](/blog/2021/09/02/kubevela-jenkins-cicd)。


@ -0,0 +1,572 @@
---
title: 实践案例-理想汽车
---
## 背景
理想汽车后台服务采用的是微服务架构,虽然借助 Kubernetes 进行部署,但运维工作依然很复杂,并具有以下特点:
- 一个应用能运行起来并对外提供服务,正常情况下都需要配套的 DB 实例以及 Redis 集群支撑。
- 应用之间存在依赖关系,对于部署顺序有比较强的诉求。
- 应用部署流程中需要和外围系统(如:配置中心)交互。
下面以一个理想汽车的经典场景为例,介绍如何借助 KubeVela 实现以上诉求。
## 典型场景介绍
![场景架构](../resources/li-auto-inc.jpg)
这里面包含两个应用,分别是 `base-server``proxy-server`, 整体应用部署需要满足以下条件:
- base-server 成功启动(状态 ready后需要往配置中心apollo注册信息。
- base-server 需要绑定到 service 和 ingress 进行负载均衡。
- proxy-server 需要在 base-server 成功运行后启动,并需要获取到 base-server 对应的 service 的 clusterIP。
- proxy-server 依赖 redis 中间件,需要在 redis 成功运行后启动。
- proxy-server 需要从配置中心apollo读取 base-server 的相关注册信息。
可见整个部署过程,如果人为操作,会变得异常困难以及容易出错。在借助 KubeVela 后,可以轻松实现场景的自动化和一键式运维。
## 解决方案
在 KubeVela 上,以上诉求可以拆解为以下 KubeVela 的模型:
- 组件部分: 包含三个分别是 base-server 、redis 、proxy-server。
- 运维特征: ingress (包括 service) 作为一个通用的负载均衡运维特征。
- 工作流: 实现组件按照依赖进行部署,并实现和配置中心的交互。
- 应用部署计划: 理想汽车的开发者可以通过 KubeVela 的应用部署计划完成应用发布。
详细过程如下:
## 平台的功能定制
理想汽车的平台工程师通过以下步骤完成方案中所涉及的能力,并向开发者用户透出(通过编写 definition 的方式实现)。
### 1.定义组件
- 编写 base-service 的组件定义,使用 `Deployment` 作为工作负载,并向终端用户透出参数 `image``cluster`。对于终端用户来说,之后在发布时只需要关注镜像以及部署的集群名称。
- 编写 proxy-service 的组件定义,使用 `argo rollout` 作为工作负载,同样向终端用户透出参数 `image``cluster`
如下所示:
```
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: base-service
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
kube:
template:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
appId: BASE-SERVICE
appName: base-service
version: 0.0.1
name: base-service
spec:
replicas: 2
revisionHistoryLimit: 5
selector:
matchLabels:
app: base-service
template:
metadata:
labels:
antiAffinity: none
app: base-service
appId: BASE-SERVICE
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- base-service
- key: antiAffinity
operator: In
values:
- none
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: base-service
- name: LOG_BASE
value: /data/log
- name: RUNTIME_CLUSTER
value: default
image: base-service
imagePullPolicy: Always
name: base-service
ports:
- containerPort: 11223
protocol: TCP
- containerPort: 11224
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: base-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /logs/dev/base-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[6].value"
- "spec.template.metadata.labels.cluster"
---
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: proxy-service
spec:
workload:
definition:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
schematic:
kube:
template:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
labels:
appId: PROXY-SERVICE
appName: proxy-service
version: 0.0.0
name: proxy-service
spec:
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: proxy-service
strategy:
canary:
steps:
- setWeight: 50
- pause: {}
template:
metadata:
labels:
app: proxy-service
appId: PROXY-SERVICE
cluster: default
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- proxy-service
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: proxy-service
- name: LOG_BASE
value: /app/data/log
- name: RUNTIME_CLUSTER
value: default
image: proxy-service:0.1
imagePullPolicy: Always
name: proxy-service
ports:
- containerPort: 11024
protocol: TCP
- containerPort: 11025
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /app/data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: proxy-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /app/logs/dev/proxy-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[5].value"
- "spec.template.metadata.labels.cluster"
```
### 2.定义运维特征
编写用于负载均衡的运维特征的定义,其通过生成 Kubernetes 中的原生资源 `Service``Ingress` 实现负载均衡。
向终端用户透出的参数包括 domain 和 http ,其中 domain 可以指定域名http 用来设定路由,具体将部署服务的端口映射为不同的 url path。
如下所示:
```
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
http: [string]: int
}
outputs: {
"service": {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: app: context.name
ports: [for ph, pt in parameter.http{
protocol: "TCP"
port: pt
targetPort: pt
}]
}
}
"ingress": {
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata: {
name: "\(context.name)-ingress"
namespace: context.namespace
}
spec: rules: [{
host: parameter.domain
http: paths: [for ph, pt in parameter.http {
path: ph
pathType: "Prefix"
backend: service: {
name: context.name
port: number: pt
}
}]
}]
}
}
```
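举例来说,假设某个名为 `demo` 的组件绑定该 trait 并填写 `domain: example.com``http: {"/api": 8080}`,按上述 CUE 模板渲染出的资源大致如下(仅为示意):

```
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 8080
```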
### 3.定义工作流的步骤
- 定义 apply-base 工作流步骤: 完成部署 base-server等待组件成功启动后往注册中心注册信息。透出参数为 component终端用户在流水线中使用步骤 apply-base 时只需要指定组件名称。
- 定义 apply-helm 工作流步骤: 完成部署 redis helm chart并等待 redis 成功启动。透出参数为 component终端用户在流水线中使用步骤 apply-helm 时只需要指定组件名称。
- 定义 apply-proxy 工作流步骤: 完成部署 proxy-server并等待组件成功启动。透出参数为 component 和 backendIP其中 component 为组件名称backendIP 为 proxy-server 服务依赖组件的 IP。
如下所示:
```
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-base
namespace: vela-system
spec:
schematic:
cue:
template: |-
            import (
              "vela/op"
              "encoding/json"
            )
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
          // wait until the deployment becomes available
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
message: {...}
          // write the configuration to the third-party configuration center (Apollo)
notify: op.#HTTPPost & {
url: "appolo-address"
request: body: json.Marshal(message)
}
          // expose the ClusterIP of the service
clusterIP: apply.traits["service"].value.spec.clusterIP
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-helm
namespace: vela-system
spec:
schematic:
cue:
template: |-
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
chart: op.#Read & {
value: {
              // metadata of the redis release
...
}
}
          // wait until redis is available
wait: op.#ConditionalWait & {
// todo
continue: chart.value.status.phase=="ready"
}
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-proxy
namespace: vela-system
spec:
schematic:
cue:
template: |-
import (
"vela/op"
"encoding/json"
)
parameter: {
component: string
backendIP: string
}
          // read the configuration from the third-party configuration center (Apollo)
// config: op.#HTTPGet
apply: op.#ApplyComponent & {
component: parameter.component
            // inject BackendIP into the environment variables
workload: patch: spec: template: spec: {
containers: [{
// patchKey=name
env: [{name: "BackendIP",value: parameter.backendIP}]
},...]
}
}
          // wait until the argo.rollout becomes available
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
```
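The readiness predicate that `op.#ConditionalWait` evaluates in the apply-base and apply-proxy steps above can be sketched in plain Python (a toy illustration, not vela/op internals): the step only continues once every replica is ready and the controller has observed the latest spec generation.

```python
# Toy sketch of the ConditionalWait `continue` expression used above.
def deployment_ready(workload):
    status, meta = workload["status"], workload["metadata"]
    return (status.get("readyReplicas") == status.get("replicas")
            and status.get("observedGeneration") == meta.get("generation"))

rolling = {"metadata": {"generation": 2},
           "status": {"replicas": 3, "readyReplicas": 1, "observedGeneration": 2}}
done = {"metadata": {"generation": 2},
        "status": {"replicas": 3, "readyReplicas": 3, "observedGeneration": 2}}
print(deployment_ready(rolling), deployment_ready(done))  # False True
```

Checking `observedGeneration` as well as `readyReplicas` avoids a race where the old replicas are still all ready but the controller has not yet acted on the new spec.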
### End user usage
Li Auto's development engineers can now use an Application to release their services.
They can directly use the generic capabilities that the platform engineers customized on KubeVela above, and easily write the application deployment plan.
> In the example below, the workflow's input/output data-passing mechanism passes the clusterIP of base-server to proxy-server.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: lixiang-app
spec:
components:
- name: base-service
type: base-service
properties:
image: nginx:1.14.2
        # used to distinguish Apollo environments
cluster: default
traits:
- type: ingress
properties:
domain: base-service.dev.example.com
http:
"/": 11001
    # redis starts without dependencies; after startup, the service endpoints need to be written to Apollo via its HTTP API
- name: "redis"
type: helm
properties:
chart: "redis-cluster"
version: "6.2.7"
repoUrl: "https://charts.bitnami.com/bitnami"
repoType: helm
- name: proxy-service
type: proxy-service
properties:
image: nginx:1.14.2
        # used to distinguish Apollo environments
cluster: default
traits:
- type: ingress
properties:
domain: proxy-service.dev.example.com
http:
"/": 11002
workflow:
steps:
- name: apply-base-service
type: apply-base
outputs:
- name: baseIP
exportKey: clusterIP
properties:
component: base-service
- name: apply-redis
type: apply-helm
properties:
component: redis
- name: apply-proxy-service
type: apply-proxy
inputs:
- from: baseIP
parameterKey: backendIP
properties:
component: proxy-service
```
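The input/output wiring in the workflow above can be sketched as a tiny step runner (a toy model, not KubeVela's workflow engine; the step `run` callables here are stand-ins for the real apply logic): `apply-base-service` exports `clusterIP` under the name `baseIP`, and `apply-proxy-service` consumes it as its `backendIP` parameter.

```python
# Toy sketch of workflow input/output data passing between steps.
def run_workflow(steps):
    data_bus = {}  # named outputs shared between steps
    for step in steps:
        params = dict(step.get("properties", {}))
        for inp in step.get("inputs", []):
            params[inp["parameterKey"]] = data_bus[inp["from"]]
        result = step["run"](params)
        for out in step.get("outputs", []):
            data_bus[out["name"]] = result[out["exportKey"]]
    return data_bus

steps = [
    {"name": "apply-base-service",
     "properties": {"component": "base-service"},
     "outputs": [{"name": "baseIP", "exportKey": "clusterIP"}],
     # stand-in for deploying base-server and reading its Service
     "run": lambda p: {"clusterIP": "10.96.0.10"}},
    {"name": "apply-proxy-service",
     "properties": {"component": "proxy-service"},
     "inputs": [{"from": "baseIP", "parameterKey": "backendIP"}],
     # stand-in that just echoes the injected BackendIP value
     "run": lambda p: {"BackendIP": p["backendIP"]}},
]
print(run_workflow(steps))  # {'baseIP': '10.96.0.10'}
```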

@ -0,0 +1,302 @@
---
title: Multi-Cluster Application Delivery
---
This section introduces how to deliver applications across multiple clusters with KubeVela.
## Introduction
Nowadays, more and more enterprises and developers deliver applications across multiple clusters for various reasons:
* A single Kubernetes cluster has limited scale (roughly 5k nodes at most), so multi-cluster techniques are needed to deploy and manage applications at massive scale.
* For stability and high availability, the same application can be deployed in multiple clusters to achieve disaster recovery, geo-redundancy, and similar requirements.
* Applications may need to be deployed in different regions to meet different governments' data-security policies.
The following shows how to use KubeVela's multi-cluster capabilities to quickly deploy applications into a multi-cluster environment.
## Preparation
Before deploying a multi-cluster application, you need to join the sub-clusters into KubeVela's control plane via their KubeConfig. The Vela CLI can do this for you.
```shell script
vela cluster join <your kubeconfig path>
```
This command uses the `context.cluster` field of the KubeConfig as the cluster name by default; you can also specify the name with the `--name` flag, for example:
```shell
vela cluster join stage-cluster.kubeconfig --name cluster-staging
vela cluster join prod-cluster.kubeconfig --name cluster-prod
```
After the sub-clusters are joined, you can also use the CLI to list all clusters currently managed by KubeVela.
```bash
$ vela cluster list
CLUSTER TYPE ENDPOINT
cluster-prod tls https://47.88.4.97:6443
cluster-staging tls https://47.88.7.230:6443
```
If you no longer need a sub-cluster, you can detach it from KubeVela.
```shell script
$ vela cluster detach cluster-prod
```
Note that KubeVela will reject this command if any application is still running in that cluster.
## Deploy a multi-cluster application
KubeVela treats a Kubernetes cluster as an environment, and a single application can be deployed into multiple environments.
The example below first deploys the application into a staging environment and then, once you confirm it runs correctly, into the production environment.
KubeVela supports differentiated configuration per environment. In this example, the staging environment uses only the webservice component (not the worker component), and the webservice runs a single replica; the production environment uses both components, and the webservice runs three replicas.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: example-app
namespace: default
spec:
components:
- name: hello-world-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: scaler
properties:
replicas: 1
- name: data-worker
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000000'
policies:
- name: example-multi-env-policy
type: env-binding
properties:
envs:
- name: staging
            placement: # select the clusters to deploy to
clusterSelector:
name: cluster-staging
            selector: # select the components to use
components:
- hello-world-server
- name: prod
placement:
clusterSelector:
name: cluster-prod
            patch: # apply differentiated configuration to the components
components:
- name: hello-world-server
type: webservice
traits:
- type: scaler
properties:
replicas: 3
- name: health-policy-demo
type: health
properties:
probeInterval: 5
probeTimeout: 10
workflow:
steps:
      # deploy to the staging environment
- name: deploy-staging
type: deploy2env
properties:
policy: example-multi-env-policy
env: staging
      # manual approval
- name: manual-approval
type: suspend
      # deploy to the production environment
- name: deploy-prod
type: deploy2env
properties:
policy: example-multi-env-policy
env: prod
```
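How the env-binding policy above resolves per-environment components can be sketched as follows (a simplified toy model of the semantics, not KubeVela's implementation): `selector` picks a subset of components, and `patch` overlays trait properties onto the selected components.

```python
# Toy sketch of env-binding resolution: selector filters components,
# patch merges trait properties by trait type.
import copy

def resolve_env(components, env):
    picked = [c for c in components
              if "selector" not in env or c["name"] in env["selector"]["components"]]
    result = copy.deepcopy(picked)  # leave the base spec untouched
    for patch in env.get("patch", {}).get("components", []):
        target = next(c for c in result if c["name"] == patch["name"])
        for pt in patch.get("traits", []):
            slot = next((t for t in target.setdefault("traits", [])
                         if t["type"] == pt["type"]), None)
            if slot is None:
                target["traits"].append(pt)
            else:
                slot["properties"].update(pt["properties"])
    return result

components = [
    {"name": "hello-world-server", "type": "webservice",
     "traits": [{"type": "scaler", "properties": {"replicas": 1}}]},
    {"name": "data-worker", "type": "worker"},
]
staging = {"selector": {"components": ["hello-world-server"]}}
prod = {"patch": {"components": [{"name": "hello-world-server", "type": "webservice",
                                  "traits": [{"type": "scaler",
                                              "properties": {"replicas": 3}}]}]}}
print(len(resolve_env(components, staging)),
      resolve_env(components, prod)[0]["traits"][0]["properties"]["replicas"])  # 1 3
```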
After the application is created, it is deployed through the KubeVela workflow.
> See the user manuals for [multi-environment deployment](../end-user/policies/envbinding) and [health check](../end-user/policies/health) for more parameter details.
First, the application is deployed into the staging environment. You can run the command below to check its status.
```shell
> kubectl get application example-app
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
example-app hello-world-server webservice workflowSuspending true Ready:1/1 10s
```
As shown, the deployment workflow is currently paused at the `manual-approval` step.
```yaml
...
status:
workflow:
appRevision: example-app-v1:44a6447e3653bcc2
contextBackend:
apiVersion: v1
kind: ConfigMap
name: workflow-example-app-context
uid: 56ddcde6-8a83-4ac3-bf94-d19f8f55eb3d
mode: StepByStep
steps:
- id: wek2b31nai
name: deploy-staging
phase: succeeded
type: deploy2env
- id: 7j5eb764mk
name: manual-approval
phase: succeeded
type: suspend
suspend: true
terminated: false
waitCount: 0
```
You can also inspect the `status.services` field to check the application's health status.
```yaml
...
status:
services:
- env: staging
healthy: true
message: 'Ready:1/1 '
name: hello-world-server
scopes:
- apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
name: health-policy-demo
namespace: test
uid: 6e6230a3-93f3-4dba-ba09-dd863b6c4a88
traits:
- healthy: true
type: scaler
workloadDefinition:
apiVersion: apps/v1
kind: Deployment
```
After confirming that the current deployment works as expected, you can continue deploying the application to the production environment with the workflow `resume` command.
```shell
> vela workflow resume example-app
Successfully resume workflow: example-app
```
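The suspend/resume behavior seen in the status blocks above can be modeled as a tiny StepByStep runner (a toy sketch of the control flow, not the actual workflow engine): execution stops at the `suspend` step and only proceeds past it after a resume.

```python
# Toy sketch of StepByStep execution with a suspend step.
def run(steps, resumed=False):
    executed = []
    for step in steps:
        if step["type"] == "suspend" and not resumed:
            return executed, True          # workflow is suspended here
        executed.append(step["name"])
    return executed, False

steps = [{"name": "deploy-staging", "type": "deploy2env"},
         {"name": "manual-approval", "type": "suspend"},
         {"name": "deploy-prod", "type": "deploy2env"}]

print(run(steps))                # (['deploy-staging'], True)
print(run(steps, resumed=True))  # (['deploy-staging', 'manual-approval', 'deploy-prod'], False)
```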
Check the application status again:
```shell
> kubectl get application example-app
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
example-app hello-world-server webservice running true Ready:1/1 62s
```
```yaml
status:
services:
- env: staging
healthy: true
message: 'Ready:1/1 '
name: hello-world-server
scopes:
- apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
name: health-policy-demo
namespace: default
uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
traits:
- healthy: true
type: scaler
workloadDefinition:
apiVersion: apps/v1
kind: Deployment
- env: prod
healthy: true
message: 'Ready:3/3 '
name: hello-world-server
scopes:
- apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
name: health-policy-demo
namespace: default
uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
traits:
- healthy: true
type: scaler
workloadDefinition:
apiVersion: apps/v1
kind: Deployment
- env: prod
healthy: true
message: 'Ready:1/1 '
name: data-worker
scopes:
- apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
name: health-policy-demo
namespace: default
uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
workloadDefinition:
apiVersion: apps/v1
kind: Deployment
```
Now all steps in the workflow have completed.
```yaml
...
status:
workflow:
appRevision: example-app-v1:44a6447e3653bcc2
contextBackend:
apiVersion: v1
kind: ConfigMap
name: workflow-example-app-context
uid: e1e7bd2d-8743-4239-9de7-55a0dd76e5d3
mode: StepByStep
steps:
- id: q8yx7pr8wb
name: deploy-staging
phase: succeeded
type: deploy2env
- id: 6oxrtvki9o
name: manual-approval
phase: succeeded
type: suspend
- id: uk287p8c31
name: deploy-prod
phase: succeeded
type: deploy2env
suspend: false
terminated: false
waitCount: 0
```
## More use cases
KubeVela supports more multi-cluster deployment strategies, such as deploying different components of a single application into different environments, or deploying across the control-plane cluster and sub-clusters in a hybrid fashion.
The diagram below gives an overview of how workflows and multi-cluster deployment work together.
![workflow-multi-env](../resources/workflow-multi-env.png)
More use cases of multi-cluster application delivery will be added to the documentation soon.

@ -0,0 +1,174 @@
---
title: Multi-Cluster Deployment with Workflow
---
This guide shows how to use KubeVela for multi-cluster application deployment, covering the complete flow from cluster creation, cluster registration, environment initialization, and multi-cluster scheduling to deploying an application across clusters.
- With KubeVela's environment initialization (Initializer) capability, we can create a Kubernetes cluster and register it with the central control-plane cluster; the same capability can also install the system dependencies needed for application management.
- With KubeVela's multi-cluster, multi-environment deployment (EnvBinding) capability, we can apply differentiated configuration to the application and choose which clusters the resources are dispatched to.
## Before starting
- You need a Kubernetes cluster of version 1.20+ as the control-plane cluster with KubeVela installed, and its APIServer must be reachable at a public address. Unless otherwise noted, all steps in this guide are performed on the control-plane cluster.
- In this scenario, KubeVela uses [OCM (open-cluster-management)](https://open-cluster-management.io/getting-started/quick-start/) under the hood for the actual multi-cluster resource distribution.
- The YAML manifests and shell scripts for this guide live under [docs/examples/workflow-with-ocm](https://github.com/oam-dev/kubevela/tree/master/docs/examples/workflow-with-ocm) in the KubeVela repository; please download them and run the terminal commands below from that directory.
- This guide uses an Alibaba Cloud ACK cluster as the example. Creating Alibaba Cloud resources requires credentials, so save your account's AK/SK in a Secret in the control-plane cluster.
```shell
export ALICLOUD_ACCESS_KEY=xxx; export ALICLOUD_SECRET_KEY=yyy
```
```shell
# If you want to use Alibaba Cloud Security Token Service, also export the ALICLOUD_SECURITY_TOKEN environment variable.
export ALICLOUD_SECURITY_TOKEN=zzz
```
```shell
# The prepare-alibaba-credentials.sh script reads the environment variables and creates the secret in the current cluster.
sh hack/prepare-alibaba-credentials.sh
```
```shell
$ kubectl get secret -n vela-system
NAME TYPE DATA AGE
alibaba-account-creds Opaque 1 11s
```
## Initialize the Alibaba Cloud resource-creation capability
We can use KubeVela's environment initialization capability to enable the creation of Alibaba Cloud resources as a system capability. This initialization mainly exposes the credentials configured above and sets up the Terraform system addon. We name this initializer `terraform-alibaba` and deploy it:
```shell
kubectl apply -f initializers/init-terraform-alibaba.yaml
```
When the `PHASE` field of the `terraform-alibaba` initializer turns `success`, the environment initialization has succeeded; this may take about one minute.
```shell
$ kubectl get initializers.core.oam.dev -n vela-system
NAMESPACE NAME PHASE AGE
vela-system terraform-alibaba success 94s
```
## Initialize the multi-cluster scheduling capability
We use KubeVela's environment initialization capability to enable multi-cluster scheduling. This initialization mainly creates a new ACK cluster and manages it with the OCM multi-cluster management solution. We name this initializer `managed-cluster` and deploy it:
```shell
kubectl apply -f initializers/init-managed-cluster.yaml
```
In addition, for the newly created cluster to be usable by the control-plane cluster, we need to register it there. We define one workflow step to pass the new cluster's certificate information, and another workflow step to complete the cluster registration.
**Define the workflow step that creates the cluster, named `create-ack`**, and deploy it:
```shell
kubectl apply -f definitions/create-ack.yaml
```
**Define the workflow step that registers the cluster, named `register-cluster`**, and deploy it:
```shell
kubectl apply -f definitions/register-cluster.yaml
```
### Create the environment initializer
1. Install the workflow step definitions `create-ack` and `register-cluster`:
```shell
kubectl apply -f definitions/create-ack.yaml
kubectl apply -f definitions/register-cluster.yaml
```
2. In the workflow step `register-ack`, set the `hubAPIServer` parameter to the public address of the control-plane cluster's APIServer.
```yaml
- name: register-ack
type: register-cluster
inputs:
- from: connInfo
parameterKey: connInfo
properties:
    # fill in the public address of the control-plane cluster's APIServer
hubAPIServer: {{ public network address of APIServer }}
env: prod
initNameSpace: default
patchLabels:
purpose: test
```
3. Create the environment initializer `managed-cluster`:
```shell
kubectl apply -f initializers/init-managed-cluster.yaml
```
当环境初始化 `managed-cluster``PHASE` 字段为 `success` 表示环境初始化成功,你可能需要等待 15-20 分钟左右的时间阿里云创建一个ack集群需要 15 分钟左右)。
```shell
$ kubectl get initializers.core.oam.dev -n vela-system
NAMESPACE NAME PHASE AGE
vela-system managed-cluster success 20m
```
After the `managed-cluster` initializer succeeds, you can see that the new cluster `poc-01` has been registered to the control-plane cluster.
```shell
$ kubectl get managedclusters.cluster.open-cluster-management.io
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
poc-01 true {{ APIServer address }} True True 30s
```
## Deploy the application to a specified cluster
After the administrator finishes registering the clusters, users can specify in the application deployment plan which cluster the resources are deployed to.
```shell
kubectl apply -f app.yaml
```
Check that the application deployment plan `workflow-demo` has been created successfully.
```shell
$ kubectl get app workflow-demo
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
workflow-demo podinfo-server webservice running true 7s
```
You can switch to the newly created ACK cluster and verify that the resources were deployed successfully.
```shell
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
podinfo-server 1/1 1 1 40s
```
```shell
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
podinfo-server-auxiliaryworkload-85d7b756f9 LoadBalancer 192.168.57.21 < EIP > 9898:31132/TCP 50s
```
The Service `podinfo-server` is bound to an EXTERNAL-IP, so the application can be reached over the public internet: open `http://<EIP>:9898` in a browser to access the application you just created.
![workflow-with-ocm-demo](../resources/workflow-with-ocm-demo.png)
The application deployment plan `workflow-demo` above uses the built-in `env-binding` policy to apply differentiated configuration: it changes the image of the component `podinfo-server` and the type of the `expose` trait so that requests from outside the cluster are allowed. The policy also specifies the scheduling rule that dispatches the resources to the newly registered ACK cluster.
The delivery workflow of the application deployment plan uses the built-in [`multi-env`](../end-user/workflow/multi-env) workflow definition to specify which configured component gets deployed into the cluster.

@ -0,0 +1,45 @@
---
title: vela
---
```
vela [flags]
```
### Options
```
-e, --env string specify environment name for application
-h, --help help for vela
```
### SEE ALSO
* [vela addon](vela_addon) - List and get addon in KubeVela
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
* [vela components](vela_components) - List components
* [vela config](vela_config) - Manage configurations
* [vela def](vela_def) - Manage Definitions
* [vela delete](vela_delete) - Delete an application
* [vela env](vela_env) - Manage environments
* [vela exec](vela_exec) - Execute command in a container
* [vela export](vela_export) - Export deploy manifests from appfile
* [vela help](vela_help) - Help about any command
* [vela init](vela_init) - Create scaffold for an application
* [vela logs](vela_logs) - Tail logs for application
* [vela ls](vela_ls) - List applications
* [vela port-forward](vela_port-forward) - Forward local ports to services in an application
* [vela show](vela_show) - Show the reference doc for a workload type or trait
* [vela status](vela_status) - Show status of an application
* [vela system](vela_system) - System management utilities
* [vela template](vela_template) - Manage templates
* [vela traits](vela_traits) - List traits
* [vela up](vela_up) - Apply an appfile
* [vela version](vela_version) - Prints out build version information
* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
* [vela workloads](vela_workloads) - List workloads
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,30 @@
---
title: vela addon
---
List and get addon in KubeVela
### Synopsis
List and get addon in KubeVela
### Options
```
-h, --help help for addon
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela addon disable](vela_addon_disable) - disable an addon
* [vela addon enable](vela_addon_enable) - enable an addon
* [vela addon list](vela_addon_list) - List addons
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela addon disable
---
disable an addon
### Synopsis
disable an addon in cluster
```
vela addon disable [flags]
```
### Examples
```
vela addon disable <addon-name>
```
### Options
```
-h, --help help for disable
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela addon](vela_addon) - List and get addon in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela addon enable
---
enable an addon
### Synopsis
enable an addon in cluster
```
vela addon enable [flags]
```
### Examples
```
vela addon enable <addon-name>
```
### Options
```
-h, --help help for enable
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela addon](vela_addon) - List and get addon in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,31 @@
---
title: vela addon list
---
List addons
### Synopsis
List addons in KubeVela
```
vela addon list [flags]
```
### Options
```
-h, --help help for list
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela addon](vela_addon) - List and get addon in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,31 @@
---
title: vela cap
---
Manage capability centers and installing/uninstalling capabilities
### Synopsis
Manage capability centers and installing/uninstalling capabilities
### Options
```
-h, --help help for cap
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela cap center](vela_cap_center) - Manage Capability Center
* [vela cap install](vela_cap_install) - Install capability into cluster
* [vela cap ls](vela_cap_ls) - List capabilities from cap-center
* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,31 @@
---
title: vela cap center
---
Manage Capability Center
### Synopsis
Manage Capability Center with config, sync, list
### Options
```
-h, --help help for center
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities)
* [vela cap center ls](vela_cap_center_ls) - List all capability centers
* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center
* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,38 @@
---
title: vela cap center config
---
Configure (add if not exist) a capability center, default is local (built-in capabilities)
### Synopsis
Configure (add if not exist) a capability center, default is local (built-in capabilities)
```
vela cap center config <centerName> <centerURL> [flags]
```
### Examples
```
vela cap center config mycenter https://github.com/oam-dev/catalog/tree/master/registry
```
### Options
```
-h, --help help for config
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela cap center ls
---
List all capability centers
### Synopsis
List all configured capability centers
```
vela cap center ls [flags]
```
### Examples
```
vela cap center ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela cap center remove
---
Remove specified capability center
### Synopsis
Remove specified capability center
```
vela cap center remove <centerName> [flags]
```
### Examples
```
vela cap center remove mycenter
```
### Options
```
-h, --help help for remove
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela cap center sync
---
Sync capabilities from remote center, default to sync all centers
### Synopsis
Sync capabilities from remote center, default to sync all centers
```
vela cap center sync [centerName] [flags]
```
### Examples
```
vela cap center sync mycenter
```
### Options
```
-h, --help help for sync
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap center](vela_cap_center) - Manage Capability Center
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,38 @@
---
title: vela cap install
---
Install capability into cluster
### Synopsis
Install capability into cluster
```
vela cap install <center>/<name> [flags]
```
### Examples
```
vela cap install mycenter/route
```
### Options
```
-h, --help help for install
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela cap ls
---
List capabilities from cap-center
### Synopsis
List capabilities from cap-center
```
vela cap ls [cap-center] [flags]
```
### Examples
```
vela cap ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,38 @@
---
title: vela cap uninstall
---
Uninstall capability from cluster
### Synopsis
Uninstall capability from cluster
```
vela cap uninstall <name> [flags]
```
### Examples
```
vela cap uninstall route
```
### Options
```
-h, --help help for uninstall
-t, --token string Github Repo token
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,32 @@
---
title: vela completion
---
Output shell completion code for the specified shell (bash or zsh)
### Synopsis
Output shell completion code for the specified shell (bash or zsh).
The shell code must be evaluated to provide interactive completion
of vela commands.
### Options
```
-h, --help help for completion
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash
* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,41 @@
---
title: vela completion bash
---
generate autocompletions script for bash
### Synopsis
Generate the autocompletion script for Vela for the bash shell.
To load completions in your current shell session:
$ source <(vela completion bash)
To load completions for every new session, execute once:
Linux:
$ vela completion bash > /etc/bash_completion.d/vela
MacOS:
$ vela completion bash > /usr/local/etc/bash_completion.d/vela
```
vela completion bash
```
### Options
```
-h, --help help for bash
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,38 @@
---
title: vela completion zsh
---
generate autocompletions script for zsh
### Synopsis
Generate the autocompletion script for Vela for the zsh shell.
To load completions in your current shell session:
$ source <(vela completion zsh)
To load completions for every new session, execute once:
$ vela completion zsh > "${fpath[1]}/_vela"
```
vela completion zsh
```
### Options
```
-h, --help help for zsh
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,38 @@
---
title: vela components
---
List components
### Synopsis
List components
```
vela components
```
### Examples
```
vela components
```
### Options
```
--discover discover traits in capability centers
-h, --help help for components
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,31 @@
---
title: vela config
---
Manage configurations
### Synopsis
Manage configurations
### Options
```
-h, --help help for config
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela config del](vela_config_del) - Delete config
* [vela config get](vela_config_get) - Get data for a config
* [vela config ls](vela_config_ls) - List configs
* [vela config set](vela_config_set) - Set data for a config
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela config del
---
Delete config
### Synopsis
Delete config
```
vela config del
```
### Examples
```
vela config del <config-name>
```
### Options
```
-h, --help help for del
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela config get
---
Get data for a config
### Synopsis
Get data for a config
```
vela config get
```
### Examples
```
vela config get <config-name>
```
### Options
```
-h, --help help for get
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela config ls
---
List configs
### Synopsis
List all configs
```
vela config ls
```
### Examples
```
vela config ls
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,37 @@
---
title: vela config set
---
Set data for a config
### Synopsis
Set data for a config
```
vela config set
```
### Examples
```
vela config set <config-name> KEY=VALUE K2=V2
```
### Options
```
-h, --help help for set
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela config](vela_config) - Manage configurations
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,35 @@
---
title: vela def
---
Manage Definitions
### Synopsis
Manage Definitions
### Options
```
-h, --help help for def
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela def apply](vela_def_apply) - Apply definition
* [vela def del](vela_def_del) - Delete definition
* [vela def edit](vela_def_edit) - Edit definition
* [vela def get](vela_def_get) - Get definition
* [vela def init](vela_def_init) - Init a new definition
* [vela def list](vela_def_list) - List definitions
* [vela def render](vela_def_render) - Render definition
* [vela def vet](vela_def_vet) - Validate definition
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,43 @@
---
title: vela def apply
---
Apply definition
### Synopsis
Apply a definition from local storage to the Kubernetes cluster. It will apply the file to the vela-system namespace by default.
```
vela def apply DEFINITION.cue [flags]
```
### Examples
```
# Command below will apply the local my-webservice.cue file to kubernetes vela-system namespace
> vela def apply my-webservice.cue
# Command below will apply the ./defs/my-trait.cue file to kubernetes default namespace
> vela def apply ./defs/my-trait.cue --namespace default
# Command below will convert the ./defs/my-trait.cue file to kubernetes CRD object and print it without applying it to kubernetes
> vela def apply ./defs/my-trait.cue --dry-run
```
### Options
```
      --dry-run            only build definition from CUE into CRD object without applying it to kubernetes clusters
-h, --help help for apply
-n, --namespace string Specify which namespace to apply. (default "vela-system")
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,40 @@
---
title: vela def del
---
Delete definition
### Synopsis
Delete definition in kubernetes cluster.
```
vela def del DEFINITION_NAME [flags]
```
### Examples
```
# Command below will delete TraitDefinition of annotations in default namespace
> vela def del annotations -t trait -n default
```
### Options
```
-h, --help help for del
-n, --namespace string Specify which namespace the definition locates.
-t, --type string Specify the definition type of target. Valid types: component, trait, policy, workload, scope, workflow-step
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,43 @@
---
title: vela def edit
---
Edit definition
### Synopsis
Edit definition in kubernetes. If type and namespace are not specified, the command will automatically search all possible results.
By default, this command will use the vi editor and can be altered by setting EDITOR environment variable.
```
vela def edit NAME [flags]
```
### Examples
```
# Command below will edit the ComponentDefinition (and other definitions if exists) of webservice in kubernetes
> vela def edit webservice
# Command below will edit the TraitDefinition of ingress in vela-system namespace
> vela def edit ingress --type trait --namespace vela-system
```
### Options
```
-h, --help help for edit
-n, --namespace string Specify which namespace to get. If empty, all namespaces will be searched.
-t, --type string Specify which definition type to get. If empty, all types will be searched. Valid types: scope, workflow-step, component, trait, policy, workload
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

@ -0,0 +1,42 @@
---
title: vela def get
---
Get definition
### Synopsis
Get definition from kubernetes cluster
```
vela def get NAME [flags]
```
### Examples
```
# Command below will get the ComponentDefinition(or other definitions if exists) of webservice in all namespaces
> vela def get webservice
# Command below will get the TraitDefinition of annotations in namespace vela-system
> vela def get annotations --type trait --namespace vela-system
```
### Options
```
-h, --help help for get
-n, --namespace string Specify which namespace to get. If empty, all namespaces will be searched.
-t, --type string Specify which definition type to get. If empty, all types will be searched. Valid types: component, trait, policy, workload, scope, workflow-step
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela def init
---
Init a new definition
### Synopsis
Init a new definition with given arguments or interactively
* We support parsing a single YAML file (like a Kubernetes object) into the cue-style template. However, variables in the YAML file are not supported yet, which prevents users from directly feeding in files like Helm charts. We may introduce such features in the future.
```
vela def init DEF_NAME [flags]
```
### Examples
```
# Command below initiates an empty TraitDefinition named my-ingress
> vela def init my-ingress -t trait --desc "My ingress trait definition." > ./my-ingress.cue
# Command below initiates a definition named my-def interactively and saves it to ./my-def.cue
> vela def init my-def -i --output ./my-def.cue
# Command below initiates a ComponentDefinition named my-webservice with the template parsed from ./template.yaml.
> vela def init my-webservice -i --template-yaml ./template.yaml
```
### Options
```
-d, --desc string Specify the description of the new definition.
-h, --help help for init
-i, --interactive Specify whether use interactive process to help generate definitions.
-o, --output string Specify the output path of the generated definition. If empty, the definition will be printed in the console.
-y, --template-yaml string Specify the template yaml file that definition will use to build the schema. If empty, a default template for the given definition type will be used.
-t, --type string Specify the type of the new definition. Valid types: workload, scope, workflow-step, component, trait, policy
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela def list
---
List definitions
### Synopsis
List definitions in kubernetes cluster
```
vela def list [flags]
```
### Examples
```
# Command below will list all definitions in all namespaces
> vela def list
# Command below will list all definitions in the vela-system namespace
> vela def list --namespace vela-system
```
### Options
```
-h, --help help for list
-n, --namespace string Specify which namespace to list. If empty, all namespaces will be searched.
-t, --type string Specify which definition type to list. If empty, all types will be searched. Valid types: policy, workload, scope, workflow-step, component, trait
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela def render
---
Render definition
### Synopsis
Render definition with cue format into kubernetes YAML format. Could be used to check whether the cue format definition is working as expected. If a directory is used as input, all cue definitions in the directory will be rendered.
```
vela def render DEFINITION.cue [flags]
```
### Examples
```
# Command below will render my-webservice.cue into YAML format and print it out.
> vela def render my-webservice.cue
# Command below will render my-webservice.cue and save it in my-webservice.yaml.
> vela def render my-webservice.cue -o my-webservice.yaml
# Command below will render all cue format definitions in the ./defs/cue/ directory and save the YAML objects in ./defs/yaml/.
> vela def render ./defs/cue/ -o ./defs/yaml/
```
### Options
```
-h, --help help for render
--message string Specify the header message of the generated YAML file. For example, declaring author information.
-o, --output string Specify the output path of the rendered definition YAML. If empty, the definition will be printed in the console. If input is a directory, the output path is expected to be a directory as well.
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela def vet
---
Validate definition
### Synopsis
Validate a definition file by checking whether it has valid cue format with fields set correctly
* Currently, this command only checks the cue format. It is still a work in progress and we will support more validation mechanisms in the future.
```
vela def vet DEFINITION.cue [flags]
```
### Examples
```
# Command below will validate the my-def.cue file.
> vela def vet my-def.cue
```
### Options
```
-h, --help help for vet
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela def](vela_def) - Manage Definitions
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela delete
---
Delete an application
### Synopsis
Delete an application
```
vela delete APP_NAME
```
### Examples
```
vela delete frontend
```
### Options
```
-h, --help help for delete
--svc string delete only the specified service in this app
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela env
---
Manage environments
### Synopsis
Manage environments
### Options
```
-h, --help help for env
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela env delete](vela_env_delete) - Delete environment
* [vela env init](vela_env_init) - Create environments
* [vela env ls](vela_env_ls) - List environments
* [vela env set](vela_env_set) - Set an environment
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela env delete
---
Delete environment
### Synopsis
Delete environment
```
vela env delete
```
### Examples
```
vela env delete test
```
### Options
```
-h, --help help for delete
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela env init
---
Create environments
### Synopsis
Create an environment and set it as the environment currently in use
```
vela env init <envName>
```
### Examples
```
vela env init test --namespace test --email my@email.com
```
### Options
```
--domain string specify domain for your applications
--email string specify email for production TLS Certificate notification
-h, --help help for init
--namespace string specify K8s namespace for env
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela env ls
---
List environments
### Synopsis
List all environments
```
vela env ls
```
### Examples
```
vela env ls [env-name]
```
### Options
```
-h, --help help for ls
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela env set
---
Set an environment
### Synopsis
Set an environment as the one currently in use
```
vela env set
```
### Examples
```
vela env set test
```
### Options
```
-h, --help help for set
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela env](vela_env) - Manage environments
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela exec
---
Execute command in a container
### Synopsis
Execute command in a container
```
vela exec [flags] APP_NAME -- COMMAND [args...]
```
### Options
```
-h, --help help for exec
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
-i, --stdin Pass stdin to the container (default true)
-s, --svc string service name
-t, --tty Stdin is a TTY (default true)
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela export
---
Export deploy manifests from appfile
### Synopsis
Export deploy manifests from appfile
```
vela export
```
### Options
```
-f, --file string specify file path for appfile
-h, --help help for export
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela help
---
Help about any command
```
vela help [command]
```
### Options
```
-h, --help help for help
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela init
---
Create scaffold for an application
### Synopsis
Create scaffold for an application
```
vela init
```
### Examples
```
vela init
```
### Options
```
-h, --help help for init
--render-only Rendering vela.yaml in current dir and do not deploy
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela logs
---
Tail logs for application
### Synopsis
Tail logs for application
```
vela logs [flags]
```
### Options
```
-h, --help help for logs
-o, --output string output format for logs, support: [default, raw, json] (default "default")
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela ls
---
List applications
### Synopsis
List all applications in cluster
```
vela ls
```
### Examples
```
vela ls
```
### Options
```
-h, --help help for ls
-n, --namespace string specify the namespace of the applications to list; defaults to the current env namespace
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela port-forward
---
Forward local ports to services in an application
### Synopsis
Forward local ports to services in an application
```
vela port-forward APP_NAME [flags]
```
### Examples
```
port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
```
### Options
```
--address strings Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, vela will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind. (default [localhost])
-h, --help help for port-forward
--pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
--route forward ports from route trait service
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela show
---
Show the reference doc for a workload type or trait
### Synopsis
Show the reference doc for a workload type or trait
```
vela show [flags]
```
### Examples
```
show webservice
```
### Options
```
-h, --help help for show
--web start web doc site
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela status
---
Show status of an application
### Synopsis
Show status of an application, including workloads and traits of each service.
```
vela status APP_NAME [flags]
```
### Examples
```
vela status APP_NAME
```
### Options
```
-h, --help help for status
-s, --svc string service name
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela system
---
System management utilities
### Synopsis
System management utilities
### Options
```
-h, --help help for system
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela system cue-packages](vela_system_cue-packages) - List cue package
* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the K8s resources as result to stdout
* [vela system info](vela_system_info) - Show vela client and cluster chartPath
* [vela system live-diff](vela_system_live-diff) - Dry-run an application, and do diff on a specific app revision
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela system cue-packages
---
List cue package
### Synopsis
List cue package
```
vela system cue-packages
```
### Examples
```
vela system cue-packages
```
### Options
```
-h, --help help for cue-packages
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela system dry-run
---
Dry Run an application, and output the K8s resources as result to stdout
### Synopsis
Dry Run an application, and output the K8s resources as result to stdout, only CUE template supported for now
```
vela system dry-run
```
### Examples
```
vela dry-run
```
### Options
```
-d, --definition string specify a definition file or directory, it will only be used in dry-run rather than applied to K8s cluster
-f, --file string application file name (default "./app.yaml")
-h, --help help for dry-run
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela system info
---
Show vela client and cluster chartPath
### Synopsis
Show vela client and cluster chartPath
```
vela system info [flags]
```
### Options
```
-h, --help help for info
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela system live-diff
---
Dry-run an application, and do diff on a specific app revision
### Synopsis
Dry-run an application, and do diff on a specific app revision. The provided capability definitions will be used during dry-run. If any capabilities used in the app are not found in the provided ones, it will try to find them in the cluster.
```
vela system live-diff
```
### Examples
```
vela live-diff -f app-v2.yaml -r app-v1 --context 10
```
### Options
```
-r, --Revision string specify an application Revision name, by default, it will compare with the latest Revision
-c, --context int output number lines of context around changes, by default show all unchanged lines (default -1)
-d, --definition string specify a file or directory containing capability definitions, they will only be used in dry-run rather than applied to K8s cluster
-f, --file string application file name (default "./app.yaml")
-h, --help help for live-diff
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system) - System management utilities
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela template
---
Manage templates
### Synopsis
Manage templates
### Options
```
-h, --help help for template
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela template context](vela_template_context) - Show context parameters
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela template context
---
Show context parameters
### Synopsis
Show context parameters
```
vela template context
```
### Examples
```
vela template context
```
### Options
```
-h, --help help for context
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela template](vela_template) - Manage templates
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela traits
---
List traits
### Synopsis
List traits
```
vela traits
```
### Examples
```
vela traits
```
### Options
```
--discover discover traits in capability centers
-h, --help help for traits
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela up
---
Apply an appfile
### Synopsis
Apply an appfile
```
vela up
```
### Options
```
-f, --file string specify file path for appfile
-h, --help help for up
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela version
---
Prints out build version information
### Synopsis
Prints out build version information
```
vela version [flags]
```
### Options
```
-h, --help help for version
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workflow
---
Operate application workflow in KubeVela
### Synopsis
Operate application workflow in KubeVela
### Options
```
-h, --help help for workflow
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
* [vela workflow restart](vela_workflow_restart) - Restart an application workflow
* [vela workflow resume](vela_workflow_resume) - Resume a suspend application workflow
* [vela workflow suspend](vela_workflow_suspend) - Suspend an application workflow
* [vela workflow terminate](vela_workflow_terminate) - Terminate an application workflow
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workflow restart
---
Restart an application workflow
### Synopsis
Restart an application workflow in cluster
```
vela workflow restart [flags]
```
### Examples
```
vela workflow restart <application-name>
```
### Options
```
-h, --help help for restart
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workflow resume
---
Resume a suspend application workflow
### Synopsis
Resume a suspend application workflow in cluster
```
vela workflow resume [flags]
```
### Examples
```
vela workflow resume <application-name>
```
### Options
```
-h, --help help for resume
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workflow suspend
---
Suspend an application workflow
### Synopsis
Suspend an application workflow in cluster
```
vela workflow suspend [flags]
```
### Examples
```
vela workflow suspend <application-name>
```
### Options
```
-h, --help help for suspend
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workflow terminate
---
Terminate an application workflow
### Synopsis
Terminate an application workflow in cluster
```
vela workflow terminate [flags]
```
### Examples
```
vela workflow terminate <application-name>
```
### Options
```
-h, --help help for terminate
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: vela workloads
---
List workloads
### Synopsis
List workloads
```
vela workloads
```
### Examples
```
vela workloads
```
### Options
```
-h, --help help for workloads
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela](vela) -
###### Auto generated by spf13/cobra on 19-Aug-2021

---
title: Core Concepts
---
*"KubeVela is a simple, easy-to-use, yet highly extensible application delivery and management engine for hybrid environments."*
In this section, we explain KubeVela's core ideas in detail and clarify some technical terms that are used throughout this project.
## Overview
First, KubeVela introduces a workflow with separation of concerns, as described below:
- **Platform team**
  - Write templates for deployment environments and reusable capability modules, and register them into the cluster.
- **End users**
  - Choose a deployment environment, model, and the available modules to assemble an application, and deploy the application to the target environment.
The workflow is illustrated below:
![alt](resources/how-it-works.png)
This template-based workflow lets the platform team enforce the best practices and deployment know-how they build on top of a set of Kubernetes CRDs, and naturally gives end users a PaaS-level experience (e.g. "application-centric", "higher-level abstractions", "self-service operations").
![alt](resources/what-is-kubevela.png)
The core concepts of KubeVela are introduced below.
## `Application`
The *Application* is KubeVela's core API. It lets business developers build a complete application from a single artifact and a few simple primitives.
In an application delivery platform, the concept of an *Application* is particularly important: it greatly simplifies operational tasks and serves as an anchor that avoids configuration drift during operation. It also offers a simpler way to introduce Kubernetes capabilities into the delivery process without depending on low-level details. For example, a developer can model a web service without defining a detailed Kubernetes Deployment + Service combination every time, and can express an autoscaling requirement without relying on the underlying KEDA ScaleObject.
### Example
A `website` application with two components (`frontend` and `backend`) can be modeled as follows:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website
spec:
  components:
    - name: backend
      type: worker
      properties:
        image: busybox
        cmd:
          - sleep
          - '1000'
    - name: frontend
      type: webservice
      properties:
        image: nginx
      traits:
        - type: autoscaler
          properties:
            min: 1
            max: 10
        - type: sidecar
          properties:
            name: "sidecar-test"
            image: "fluentd"
```
## Building Abstractions
Unlike most higher-level abstractions, the `Application` resource in KubeVela is a building-block-style object, and it does not even have a fixed schema. Instead, it is assembled from building blocks such as app components and traits, which let developers integrate platform capabilities into this application definition through abstractions they define themselves.
The building blocks for defining abstractions and modeling platform capabilities are `ComponentDefinition` and `TraitDefinition`.
### ComponentDefinition
A `ComponentDefinition` is a pre-defined *template* for a deployable workload. As a declarative API resource, it carries the parameters of the template and the characteristics of the workload.
Therefore, the `Application` abstraction essentially defines how the user wants to **instantiate** a given component definition in the target cluster. Specifically, the `.type` field references the name of an installed `ComponentDefinition`, and the `.properties` field holds the values the user sets to instantiate it.
Typical component definitions include a long-running web service, a one-off task, and a Redis database. All component definitions are expected to be pre-installed on the platform, or provided by component providers such as third-party software vendors.
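The `.type`/`.properties` instantiation described above can be sketched in a few lines of Python (illustrative stand-ins only, not KubeVela's actual controller logic):

```python
# Sketch: how an Application component references a ComponentDefinition.
# The registry below is an illustrative stand-in, not real KubeVela internals.

definitions = {
    # Each definition maps a component type to the workload it templates.
    "webservice": {"workload": "deployments.apps"},
    "worker": {"workload": "deployments.apps"},
    "task": {"workload": "jobs.batch"},
}

def instantiate(component: dict) -> dict:
    """Resolve `.type` against installed definitions and apply `.properties`."""
    definition = definitions[component["type"]]
    return {
        "workload": definition["workload"],
        "properties": component.get("properties", {}),
    }

frontend = {"name": "frontend", "type": "webservice",
            "properties": {"image": "nginx"}}
rendered = instantiate(frontend)
print(rendered["workload"])  # deployments.apps
```

The point of the sketch is the indirection: the Application never names a Deployment directly; the installed definition decides what workload backs each component type.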
### TraitDefinition
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policies, ingress routing, autoscaling policies, and upgrade strategies.
A *Trait* is an operational feature provided by the platform. To attach a trait to a component instance, the user declares `.type` to reference a specific `TraitDefinition`, and `.properties` to set the values of the given trait. Similarly, `TraitDefinition` also lets users define *templates* for these operational features.
In KubeVela, we also refer to component definitions and trait definitions together as *"capability definitions"*.
## Environment
Before releasing an application to production, it is important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "deployment environments", or "environments" for short. Each environment has its own configuration (e.g. domain, Kubernetes cluster and namespace, configuration data, access control policies) so that users can create different deployment environments such as "test" and "production".
Currently, a KubeVela `environment` maps to exactly one Kubernetes namespace. Cluster-level environments are under development.
### Summary
The main concepts of KubeVela are shown in the following diagram:
![alt](resources/concepts.png)
## Architecture
The overall architecture of KubeVela is shown in the following diagram:
![alt](resources/arch.png)
Specifically, the application controller is responsible for application abstraction and encapsulation (i.e. the controllers for `Application` and `Definition`). The rollout controller handles progressive rollout strategies at the scope of the whole application. The multi-cluster deployment engine, backed by traffic shifting and the rollout feature, deploys applications across multiple clusters and environments.

---
title: Application Deployment Plan
---
The application delivery model behind KubeVela is the [Open Application Model](../platform-engineers/oam/oam-model), OAM for short. Its core idea is to describe all the components and operational actions an application deployment needs as a single, infrastructure-independent "deployment plan", enabling standardized and efficient application delivery across hybrid environments. This deployment plan is the **Application** object introduced in this section, and it is the only API a user of the OAM model needs to understand.
## Application Deployment Plan
KubeVela describes an application deployment plan in a YAML file. A typical example looks like this:
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website
spec:
  components:
    - name: frontend # e.g. we want to deploy a frontend component of the Web Service type
      type: webservice
      properties:
        image: nginx
      traits:
        - type: cpuscaler # attach a cpuscaler trait that scales the component by CPU usage
          properties:
            min: 1
            max: 10
            cpuPercent: 60
        - type: sidecar # inject a helper sidecar before deploying to the runtime cluster
          properties:
            name: "sidecar-test"
            image: "fluentd"
    - name: backend
      type: worker
      properties:
        image: busybox
        cmd:
          - sleep
          - '1000'
  policies:
    - name: demo-policy
      type: env-binding
      properties:
        envs:
          - name: test
            placement:
              clusterSelector:
                name: cluster-test
          - name: prod
            placement:
              clusterSelector:
                name: cluster-prod
  workflow:
    steps:
      # step name
      - name: deploy-test-env
        # step type
        type: deploy2env
        properties:
          # name of the referenced policy
          policy: demo-policy
          # name of the target environment
          env: test
      - name: manual-approval
        # built-in suspend step that pauses the workflow
        type: suspend
      - name: deploy-prod-env
        type: deploy2env
        properties:
          policy: demo-policy
          env: prod
```
These fields correspond to:
- `apiVersion`: the OAM API version in use.
- `kind`: the resource kind. In plain Kubernetes the kind you meet most often is Pod; here it is `Application`.
- `metadata`: business metadata, for example the name of the website we are creating.
- `spec`: describes what the application should deliver and tells Kubernetes what to build. Here it holds KubeVela's `components`, `policies`, and `workflow`:
  - `components`: all the components covered by one application deployment plan.
    - `traits`: the operational traits attached to each individual component.
  - `policies`: deployment policies that apply to the application as a whole.
  - `workflow`: a workflow that customizes the delivery process of the application.

The diagram below illustrates the relationship between them:
![image.png](../resources/concepts.png)
We start with an overall deployment plan, the Application. On top of it, we declare the application body as configurable, deployable components, and declare the traits each component should carry; if needed, we can also declare a custom Workflow.
Using KubeVela is like playing with LEGO bricks: pick up a big "application" base plate, snap on one or more "components", and attach "traits" of any shape and size to them. As requirements change, you can reassemble the pieces at any time into a new deployment plan.
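To make the wiring concrete, here is a small Python sketch (an illustration only, not a real KubeVela validator) that checks the cross-references in a plan shaped like the sample above: every `deploy2env` step must point at a declared policy and at one of that policy's environments.

```python
# Illustrative consistency check over a deployment plan shaped like sample.yaml.
# For brevity, the workflow steps are stored as a flat list.
app = {
    "components": [{"name": "frontend"}, {"name": "backend"}],
    "policies": [
        {"name": "demo-policy", "type": "env-binding",
         "properties": {"envs": [{"name": "test"}, {"name": "prod"}]}},
    ],
    "workflow": [
        {"name": "deploy-test-env", "type": "deploy2env",
         "properties": {"policy": "demo-policy", "env": "test"}},
        {"name": "manual-approval", "type": "suspend"},
        {"name": "deploy-prod-env", "type": "deploy2env",
         "properties": {"policy": "demo-policy", "env": "prod"}},
    ],
}

policies = {p["name"]: p for p in app["policies"]}

def check(step: dict) -> bool:
    """A deploy2env step must reference a declared policy and environment."""
    if step["type"] != "deploy2env":
        return True  # e.g. a suspend step carries no policy reference
    policy = policies.get(step["properties"]["policy"])
    if policy is None:
        return False
    envs = {e["name"] for e in policy["properties"]["envs"]}
    return step["properties"]["env"] in envs

print(all(check(s) for s in app["workflow"]))  # True
```

Renaming `demo-policy` or the `test` environment in only one place would make the check fail, which is exactly the kind of drift the single-file deployment plan is meant to prevent.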
## Components
KubeVela ships with common component types. List them with the [KubeVela CLI](../install#3-安装-kubevela-cli):
```
vela components
```
返回结果:
```
NAME NAMESPACE WORKLOAD DESCRIPTION
alibaba-rds default configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud RDS object
task vela-system jobs.batch Describes jobs that run code or a script to completion.
webservice vela-system deployments.apps Describes long-running, scalable, containerized services
that have a stable network endpoint to receive external
network traffic from customers.
worker vela-system deployments.apps Describes long-running, scalable, containerized services
that running at backend. They do NOT have network endpoint
to receive external network traffic.
```
You can build your application deployment plan with out-of-the-box built-in components such as the [Helm component](../end-user/components/helm) and the [Kustomize component](../end-user/components/kustomize).
If you are a platform administrator familiar with Kubernetes, read [Custom Components](../platform-engineers/components/custom-component) to learn how KubeVela extends arbitrary custom component types. In particular, the [Terraform component](../platform-engineers/components/component-terraform) is a best practice of KubeVela's custom component capability: it can provision arbitrary cloud resources and, with only a little cloud-vendor-specific configuration (such as authentication and cloud resource modules), becomes an out-of-the-box cloud resource component.
## Traits
KubeVela also ships with common trait types. List them with the [KubeVela CLI](../install#3-安装-kubevela-cli):
```
vela traits
```
返回结果:
```
NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION
annotations vela-system deployments.apps true Add annotations for your Workload.
cpuscaler vela-system webservice,worker false Automatically scale the component based on CPU usage.
ingress vela-system webservice,worker false Enable public web traffic for the component.
labels vela-system deployments.apps true Add labels for your Workload.
scaler vela-system webservice,worker false Manually scale the component.
sidecar vela-system deployments.apps true Inject a sidecar container to the component.
```
Continue with [Attaching Traits](../end-user/traits/ingress) in the user manual to see how the various traits are used.
If you are a platform administrator familiar with Kubernetes, you can also learn how [custom traits](../platform-engineers/traits/customize-trait) in KubeVela extend arbitrary operational features for your users.
## Policies
A policy defines application-level deployment characteristics such as health check rules, security groups, firewalls, SLOs, and validation modules.
Policies are as extensible and capable as traits, and can flexibly integrate any capability for managing the cloud-native application lifecycle. The difference is scope: a policy applies to the application as a whole, while a trait applies to a single component within it.
In this example, we set a policy that deploys the application to different environments.
## Workflow
KubeVela's workflow mechanism lets users customize the steps of a deployment plan, glue in extra delivery processes, and target arbitrary delivery environments. In short, workflows provide custom control logic: on top of the default Kubernetes pattern of applying resources, they add process-oriented flexibility, e.g. to implement pauses, manual approval, waiting on status, data passing, multi-environment canary releases, A/B testing, and other complex operations.
Workflows are a further exploration of, and best practice for, the OAM model in KubeVela, fully following OAM's principles of modularity and reusability. Each workflow module is a piece of "super glue" that can combine arbitrary tools and processes, so that even in a complex modern cloud-native delivery environment, a single declarative configuration can describe the entire delivery process, guaranteeing the stability and convenience of delivery.
> Note that the workflow mechanism works at the granularity of "application and environment" and provides the powerful ability to customize the delivery process. Once a workflow is defined, the user takes over the execution of delivery, and the default component deployment process is replaced. Workflows are optional: without writing one, users can still have components and policies deployed automatically.
In the example above, we can already see some workflow steps:
- It uses workflow steps of the `deploy2env` and `suspend` types:
  - A `deploy2env` step deploys the application into the environment specified by a user-defined policy.
  - After the first step completes, the `suspend` step runs. It pauses the workflow; we can check the status of the first component in the cluster and, once it is running successfully, resume the workflow with `vela workflow resume website`.
  - When the workflow resumes, the third step deploys the remaining component and its traits. Checking the cluster at this point, we can see that all resources have been deployed successfully.
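The suspend/resume behaviour described above can be sketched as a tiny sequential engine (an illustration of the control flow only, not KubeVela's actual workflow engine):

```python
# Minimal sketch of sequential workflow execution with a suspend step.
class Workflow:
    def __init__(self, steps):
        self.steps = steps
        self.cursor = 0   # index of the next step to execute
        self.log = []

    def run(self):
        """Execute steps in order; stop when a suspend step is reached."""
        while self.cursor < len(self.steps):
            step = self.steps[self.cursor]
            self.cursor += 1
            if step["type"] == "suspend":
                self.log.append("suspended")
                return "suspended"   # wait for `vela workflow resume`
            self.log.append(f"deployed to {step['properties']['env']}")
        return "finished"

    def resume(self):
        # Resuming simply continues from where the workflow was paused.
        return self.run()

wf = Workflow([
    {"type": "deploy2env", "properties": {"env": "test"}},
    {"type": "suspend"},
    {"type": "deploy2env", "properties": {"env": "prod"}},
])
print(wf.run())     # suspended
print(wf.resume())  # finished
```

The first `run()` deploys to `test` and pauses; `resume()` picks up after the suspend step and deploys to `prod`, mirroring the manual-approval flow in the sample plan.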
To learn more about workflows, start with the [deploy-to-environment](../end-user/workflow/multi-env) step type and go on to KubeVela's other built-in workflow step types.
If you are a platform administrator familiar with Kubernetes, you can [learn to create custom workflow step types](../platform-engineers/workflow/workflow), or read the [design document](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/workflow_policy.md) to understand the design and architecture behind the workflow system.
## What's Next
Next steps:
- Join the KubeVela Chinese community DingTalk group: 23310022.
- Read the [**user manual**](../end-user/components/helm), starting from the Helm component, to learn how to build your application deployment plan.
- Read the [**admin manual**](../platform-engineers/oam/oam-model) to learn how KubeVela is extended and the OAM model behind it.

---
title: System Architecture
---
In its default installation, KubeVela is a "control plane only" architecture that works closely with various runtime systems through an addon mechanism. The KubeVela core controllers run in a dedicated control-plane Kubernetes cluster.
As shown below, from top to bottom, users interact only with the control-plane Kubernetes cluster where KubeVela runs.
![kubevela-arch](../resources/system-arch.png)
## API Layer
The KubeVela API is declarative and application-centric, designed for building application delivery platforms and solutions. Because it is built on native Kubernetes CRDs, it is also very convenient to use.
- For most users, who do not need to care about low-level details, you only need to:
  - Browse the out-of-the-box components, traits, policies, and workflows provided by KubeVela
  - Describe an application deployment plan in a YAML file
- For the few administrators:
  - Build in new components, traits, policies, and custom workflows for your users
  - This is usually done with YAML files and the CUE language
## Control Plane Layer
The control plane layer is the core of the KubeVela system. It assembles built-in capabilities on demand, or registers capability addons to meet application delivery needs, and after delivery it automatically handles API requests and manages global state.
It consists of three main parts:
- **Core controllers** provide the core control logic for the whole system, covering fundamentals such as orchestrating applications and workflows, revision snapshots, and garbage collection
- **Built-in capabilities** are created by X-Definitions, which register the built-in capabilities needed for application delivery. With this flexibility, we can freely integrate capabilities from the open-source ecosystem and customize them on demand
- **The addon capability hub** (Addon) lets you pull in common ecosystem capabilities directly, often saving the time and cost of developing them yourself
## Execution Layer
Finally, the execution layer is where applications actually run. KubeVela lets you deploy and manage applications on Kubernetes clusters (e.g. local clusters, managed cloud services, IoT/edge devices) within a unified workflow.
## What's Next
- To learn how to deliver applications with KubeVela, see [Application Deployment Plan](./application).
- Read the admin manual to learn about the [Open Application Model](../platform-engineers/oam/oam-model).

---
title: Managing Capabilities
---
In KubeVela, developers can install more capabilities (e.g. new component types or traits) from any GitHub repository that contains OAM abstraction files. We call these GitHub repositories _Capability Centers_.
KubeVela automatically discovers the OAM abstraction files in these repositories and syncs the capabilities into your KubeVela platform.
## Add a capability center
Add and sync a capability center in KubeVela:
```bash
$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
successfully sync 1/1 from my-center remote center
Successfully configured capability center my-center and sync from remote
$ vela cap center sync my-center
successfully sync 1/1 from my-center remote center
sync finished
```
现在,该能力中心 `my-center` 已经可以使用。
## 列出能力中心
你可以列出或者添加更多能力中心。
```bash
$ vela cap center ls
NAME ADDRESS
my-center https://github.com/oam-dev/catalog/tree/master/registry
```
## [可选] 删除能力中心
删除一个能力中心:
```bash
$ vela cap center remove my-center
```
## 列出能力中心中所有可用的能力
列出某个中心所有可用的能力。
```bash
$ vela cap ls my-center
NAME CENTER TYPE DEFINITION STATUS APPLIES-TO
clonesetservice my-center componentDefinition clonesets.apps.kruise.io uninstalled []
```
## 从能力中心安装能力
接下来,我们从 `my-center` 安装新的 component `clonesetservice` 到你的 KubeVela 平台。
由于 `clonesetservice` 基于 OpenKruise请先安装 OpenKruise
```shell
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
```
`my-center` 中安装 `clonesetservice` component 。
```bash
$ vela cap install my-center/clonesetservice
Installing component capability clonesetservice
Successfully installed capability clonesetservice from my-center
```
## 使用新安装的能力
我们先检查 `clonesetservice` component 是否已经被安装到平台:
```bash
$ vela components
NAME NAMESPACE WORKLOAD DESCRIPTION
clonesetservice vela-system clonesets.apps.kruise.io Describes long-running, scalable, containerized services
that have a stable network endpoint to receive external
network traffic from customers. If workload type is skipped
for any service defined in Appfile, it will be defaulted to
`webservice` type.
```
很棒!现在我们使用 Appfile 部署一个应用。
```bash
$ cat << EOF > vela.yaml
name: testapp
services:
testsvc:
type: clonesetservice
image: crccheck/hello-world
port: 8000
EOF
```
```bash
$ vela up
Parsing vela appfile ...
Load Template ...
Rendering configs for service (testsvc)...
Writing deploy config to (.vela/deploy.yaml)
Applying application ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
Updating: core.oam.dev/v1alpha2, Kind=HealthScope in default
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc testsvc
```
随后,可以看到该 cloneset 已经被部署到你的环境:
```shell
$ kubectl get clonesets.apps.kruise.io
NAME DESIRED UPDATED UPDATED_READY READY TOTAL AGE
testsvc 1 1 1 1 1 46s
```
## 删除能力
> 注意,删除能力前请先确认没有被应用引用。
```bash
$ vela cap uninstall my-center/clonesetservice
Successfully uninstalled capability clonesetservice
```
---
title: 查看应用的日志
---
```bash
$ vela logs testapp
```
执行如上命令,就能查看 testapp 应用容器的日志。如果应用只有一个容器,默认会直接查看该容器的日志。
---
title: The Reference Documentation Guide of Capabilities
---
In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones), and KubeVela by design allows platform engineers to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that documentation stays up to date?
## Using Browser
Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on its definition template. This feature works for any capability: either built-in ones or your own workload types/traits.
Thus, as an end user, the only thing you need to do is:
```console
$ vela show WORKLOAD_TYPE or TRAIT --web
```
This command will automatically open the reference documentation for given workload type or trait in your default browser.
### For Workload Types
Let's take `$ vela show webservice --web` as example. The detailed schema documentation for `Web Service` workload type will show up immediately as below:
![](../resources/vela_show_webservice.jpg)
Note that in the section named `Specification`, it even provides you with a full usage sample of this workload type under a fake name `my-service-name`.
### For Traits
Similarly, we can also do `$ vela show autoscale --web`:
![](../resources/vela_show_autoscale.jpg)
With these auto-generated reference docs, we could easily complete the application description by simply copy-pasting, for example:
```yaml
name: helloworld
services:
backend: # copy-paste from the webservice ref doc above
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
autoscale: # copy-paste and modify from autoscaler ref doc above
min: 1
max: 8
cron:
startAt: "19:00"
duration: "2h"
days: "Friday"
replicas: 4
timezone: "America/Los_Angeles"
```
## Using Terminal
This reference doc feature also works for terminal-only case. For example:
```shell
$ vela show webservice
# Properties
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
## env
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
### valueFrom
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
#### secretKeyRef
+------+------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------+------------------------------------------------------------------+--------+----------+---------+
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
+------+------------------------------------------------------------------+--------+----------+---------+
```
> Note that for all the built-in capabilities, we already published their reference docs [here](https://kubevela.io/#/en/developers/references/) based on the same doc generation mechanism.
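Based on the tables above, for example, a service that reads an environment variable from a secret could be described as follows (names below are illustrative):

```yaml
name: helloworld
services:
  backend:
    image: oamdev/testapp:v1
    env:
      - name: DB_PASSWORD          # environment variable name
        valueFrom:
          secretKeyRef:
            name: db-credentials   # secret in the pod's namespace (illustrative)
            key: password          # key of the secret to select from
```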
---
title: 在应用程序中配置数据或环境
---
`vela` 提供 `config` 命令用于管理配置数据。
## `vela config set`
```bash
$ vela config set test a=b c=d
reading existing config data and merging with user input
config data saved successfully ✅
```
## `vela config get`
```bash
$ vela config get test
Data:
a: b
c: d
```
## `vela config del`
```bash
$ vela config del test
config (test) deleted successfully
```
## `vela config ls`
```bash
$ vela config set test a=b
$ vela config set test2 c=d
$ vela config ls
NAME
test
test2
```
## 在应用程序中配置环境变量
可以在应用程序中将配置数据设置为环境变量。
```bash
$ vela config set demo DEMO_HELLO=helloworld
```
将以下内容保存为 `vela.yaml` 到当前目录中:
```yaml
name: testapp
services:
env-config-demo:
image: heroku/nodejs-hello-world
config: demo
```
然后运行:
```bash
$ vela up
Parsing vela.yaml ...
Loading templates ...
Rendering configs for service (env-config-demo)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc env-config-demo
```
检查环境变量:
```
$ vela exec testapp -- printenv | grep DEMO_HELLO
DEMO_HELLO=helloworld
```
---
title: 设置部署环境
---
通过部署环境可以为你的应用配置全局工作空间、email 以及域名。通常情况下,部署环境分为 `test`(测试环境)、`staging`(预发环境)、`prod`(生产环境)等。
## 创建环境
```bash
$ vela env init demo --email my@email.com
environment demo created, Namespace: default, Email: my@email.com
```
## 检查部署环境元数据
```bash
$ vela env ls
NAME CURRENT NAMESPACE EMAIL DOMAIN
default default
demo * default my@email.com
```
默认情况下,环境将会创建在 K8s 默认的命名空间 `default` 下。
## 配置变更
你可以通过再次执行如下命令变更环境配置。
```bash
$ vela env init demo --namespace demo
environment demo created, Namespace: demo, Email: my@email.com
```
```bash
$ vela env ls
NAME CURRENT NAMESPACE EMAIL DOMAIN
default default
demo * demo my@email.com
```
**注意:部署环境只针对新创建的应用生效,之前创建的应用不会受到任何影响。**
## [可选操作] 配置域名(前提:拥有 public IP
如果你使用的是云厂商提供的 K8s 服务,并已为 ingress 配置了公网 IP那么就可以在环境中配置域名来使用。之后你就可以通过该域名来访问应用并且自动支持 mTLS 双向认证。
例如, 你可以使用下面的命令方式获得 ingress service 的公网 IP
```bash
$ kubectl get svc -A | grep LoadBalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-lb LoadBalancer 172.21.2.174 123.57.10.233 80:32740/TCP,443:32086/TCP 41d
```
命令响应结果中 `EXTERNAL-IP` 列的值123.57.10.233就是公网 IP。接下来在你的 DNS 中添加一条 `A` 记录:
```
*.your.domain => 123.57.10.233
```
如果没有自定义域名,你可以使用形如 `123.57.10.233.xip.io` 的地址作为域名,其中 `xip.io` 会将该域名自动解析到前面的 IP `123.57.10.233`。
```bash
$ vela env init demo --domain 123.57.10.233.xip.io
environment demo updated, Namespace: demo, Email: my@email.com
```
### 在 Appfile 中使用域名
由于在部署环境中已经配置了全局域名,因此不需要在 route 配置中再特别指定域名。
```yaml
# in demo environment
services:
express-server:
...
route:
rules:
- path: /testapp
rewriteTarget: /
```
```
$ curl http://123.57.10.233.xip.io/testapp
Hello World
```
---
title: 在容器中运行命令
---
运行如下命令:
```
$ vela exec testapp -- /bin/sh
```
这将打开一个 shell 访问 testapp 容器。
---
title: Automatically scale workloads by resource utilization metrics and cron
---
## Prerequisite
Make sure the auto-scaler trait controller is installed in your cluster. Install it with Helm:
1. Add helm chart repo for autoscaler trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install autoscaler trait controller
```shell script
helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait
```
Autoscale depends on the metrics server, so please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) first.
> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting cron auto-scaling policy
Introduce how to automatically scale workloads by cron.
1. Prepare Appfile
```yaml
name: testapp
services:
express-server:
# this image will be used in both build and deploy steps
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
autoscale:
min: 1
max: 4
cron:
startAt: "14:00"
duration: "2h"
days: "Monday, Thursday"
replicas: 2
timezone: "America/Los_Angeles"
```
> The full specification of `autoscale` can be viewed via `$ vela show autoscale`.
2. Deploy an application
```
$ vela up
Parsing vela.yaml ...
Loading templates ...
Rendering configs for service (express-server)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc express-server
```
3. Check the replicas and wait for the scaling to take effect
Check the replicas of the application, there is one replica.
```
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-05 17:09:02.426632 +0800 CST
Updated at: 2020-11-05 17:09:02.426632 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/1
Last Deployment:
Created at: 2020-11-05 17:09:03 +0800 CST
Updated at: 2020-11-05T17:09:02+08:00
```
Wait until the clock reaches `startAt`, and check again. The replicas become two, as specified by
`replicas` in `vela.yaml`.
```
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-10 10:18:59.498079 +0800 CST
Updated at: 2020-11-10 10:18:59.49808 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 2/2
Traits:
- ✅ autoscale: type: cron replicas(min/max/current): 1/4/2
Last Deployment:
Created at: 2020-11-10 10:18:59 +0800 CST
Updated at: 2020-11-10T10:18:59+08:00
```
After the period ends, the replicas will eventually scale back to one.
## Setting auto-scaling policy of CPU resource utilization
Introduce how to automatically scale workloads by CPU resource utilization.
1. Prepare Appfile
Modify `vela.yaml` as below. We add the field `services.express-server.cpu` and change the auto-scaling policy
from cron to CPU utilization by updating the field `services.express-server.autoscale`.
```yaml
name: testapp
services:
express-server:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.01"
autoscale:
min: 1
max: 5
cpuPercent: 10
```
2. Deploy an application
```bash
$ vela up
```
3. Expose the service entrypoint of the application
```
$ vela port-forward helloworld 80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Forward successfully! Opening browser ...
Handling connection for 80
Handling connection for 80
Handling connection for 80
Handling connection for 80
```
On macOS, you might need to add `sudo` before the command.
4. Monitor the replicas changing
Continue to monitor how the replicas change as the application becomes overloaded. You can use the Apache HTTP server
benchmarking tool `ab` to send many requests to the application.
```
$ ab -n 10000 -c 200 http://127.0.0.1/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
```
The replicas gradually increase from one to four.
```
$ vela status helloworld --svc frontend
About:
Name: helloworld
Namespace: default
Created at: 2020-11-05 20:07:21.830118 +0800 CST
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
Services:
- Name: frontend
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/10% replicas(min/max/current): 1/5/2
Last Deployment:
Created at: 2020-11-05 20:07:23 +0800 CST
Updated at: 2020-11-05T20:50:42+08:00
```
```
$ vela status helloworld --svc frontend
About:
Name: helloworld
Namespace: default
Created at: 2020-11-05 20:07:21.830118 +0800 CST
Updated at: 2020-11-05 20:50:42.664725 +0800 CST
Services:
- Name: frontend
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ autoscale: type: cpu cpu-utilization(target/current): 5%/14% replicas(min/max/current): 1/5/4
Last Deployment:
Created at: 2020-11-05 20:07:23 +0800 CST
Updated at: 2020-11-05T20:50:42+08:00
```
Stop `ab` tool, and the replicas will decrease to one eventually.
---
title: Monitoring Application
---
If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with the `metrics` capability.
## Prerequisite
Make sure the metrics trait controller is installed in your cluster. Install it with Helm:
1. Add helm chart repo for metrics trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install metrics trait controller
```shell script
helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
```
> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting metrics policy
Let's run [`christianhxc/gorandom:1.0`](https://github.com/christianhxc/prometheus-tutorial) as an example app.
The app will emit random latencies as metrics.
1. Prepare Appfile:
```bash
$ cat <<EOF > vela.yaml
name: metricapp
services:
metricapp:
type: webservice
image: christianhxc/gorandom:1.0
port: 8080
metrics:
enabled: true
format: prometheus
path: /metrics
port: 0
scheme: http
EOF
```
> The full specification of `metrics` can be viewed via `$ vela show metrics`.
2. Deploy the application:
```bash
$ vela up
```
3. Check status:
```bash
$ vela status metricapp
About:
Name: metricapp
Namespace: default
Created at: 2020-11-11 17:00:59.436347573 -0800 PST
Updated at: 2020-11-11 17:01:06.511064661 -0800 PST
Services:
- Name: metricapp
Type: webservice
HEALTHY Ready: 1/1
Traits:
- ✅ metrics: Monitoring port: 8080, path: /metrics, format: prometheus, schema: http.
Last Deployment:
Created at: 2020-11-11 17:00:59 -0800 PST
Updated at: 2020-11-11T17:01:06-08:00
```
If no parameters are specified, the metrics trait will automatically discover the port and label to monitor.
If more than one port is found, it will choose the first one by default.
**(Optional) Verify that the metrics are collected on Prometheus**
<details>
Expose the port of Prometheus dashboard:
```bash
kubectl --namespace monitoring port-forward `kubectl -n monitoring get pods -l prometheus=oam -o name` 9090
```
Then access the Prometheus dashboard via http://localhost:9090/targets
![Prometheus Dashboard](../../resources/metrics.jpg)
</details>
---
title: Setting Rollout Strategy
---
> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
The `rollout` section is used to configure a Canary strategy to release your app.
Add rollout config under `express-server` along with a `route`.
```yaml
name: testapp
services:
express-server:
type: webservice
image: oamdev/testapp:rolling01
port: 80
rollout:
replicas: 5
stepWeight: 20
interval: "30s"
route:
domain: "example.com"
```
> The full specification of `rollout` can be viewed via `$ vela show rollout`.
Apply this `appfile.yaml`:
```bash
$ vela up
```
You could check the status by:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: myenv
Created at: 2020-11-09 17:34:38.064006 +0800 CST
Updated at: 2020-11-10 17:05:53.903168 +0800 CST
Services:
- Name: testapp
Type: webservice
HEALTHY Ready: 5/5
Traits:
- ✅ rollout: interval=5s
replicas=5
stepWeight=20
- ✅ route: Visiting URL: http://example.com IP: <your-ingress-IP-address>
Last Deployment:
Created at: 2020-11-09 17:34:38 +0800 CST
Updated at: 2020-11-10T17:05:53+08:00
```
Visiting this app by:
```bash
$ curl -H "Host:example.com" http://<your-ingress-IP-address>/
Hello World -- Rolling 01
```
On day 2, assume we have made some changes to our app, built a new image, and named it `oamdev/testapp:rolling02`.
Let's update the Appfile accordingly:
```yaml
name: testapp
services:
express-server:
type: webservice
- image: oamdev/testapp:rolling01
+ image: oamdev/testapp:rolling02
port: 80
rollout:
replicas: 5
stepWeight: 20
interval: "30s"
route:
domain: example.com
```
Apply this `appfile.yaml` again:
```bash
$ vela up
```
You could run `vela status` several times to see the instance rolling:
```shell script
$ vela status testapp
About:
Name: testapp
Namespace: myenv
Created at: 2020-11-12 19:02:40.353693 +0800 CST
Updated at: 2020-11-12 19:02:40.353693 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY express-server-v2:Ready: 1/1 express-server-v1:Ready: 4/4
Traits:
- ✅ rollout: interval=30s
replicas=5
stepWeight=20
- ✅ route: Visiting by using 'vela port-forward testapp --route'
Last Deployment:
Created at: 2020-11-12 17:20:46 +0800 CST
Updated at: 2020-11-12T19:02:40+08:00
```
You could then `curl` your app multiple times and see how it is being rolled out following the Canary strategy:
```bash
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- Rolling 01
$ curl -H "Host:example.com" http://<your-ingress-ip-address>/
Hello World -- This is rolling 02
```
**How does `Rollout` work?**
<details>
`Rollout` trait implements progressive release process to rollout your app following [Canary strategy](https://martinfowler.com/bliki/CanaryRelease.html).
In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators such as the HTTP request success rate.
![alt](../../resources/traffic-shifting-analysis.png)
In this sample, every `30s`, another `20%` of the traffic will be shifted from the primary to the canary, step by step. In the meantime, the number of canary instances will automatically scale to `replicas: 5` as configured in the Appfile.
Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. When promoting, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion is finished.
![alt](../../resources/promotion.png)
> Note: KubeVela's `Rollout` trait is implemented with [Weaveworks Flagger](https://flagger.app/) operator.
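For reference, the rollout configuration above roughly corresponds to a Flagger `Canary` resource like the following sketch (field names follow Flagger's CRD; the exact resource KubeVela generates may differ):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: express-server
spec:
  targetRef:                  # the workload being rolled out
    apiVersion: apps/v1
    kind: Deployment
    name: express-server
  service:
    port: 80
  analysis:
    interval: 30s             # maps to `interval` in the rollout section
    stepWeight: 20            # maps to `stepWeight`
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99             # abort the canary if success rate drops below 99%
        interval: 1m
```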
</details>
---
title: Setting Routes
---
The `route` section is used to configure the access to your app.
## Prerequisite
Make sure the route trait controller is installed in your cluster. Install it with Helm:
1. Add helm chart repo for route trait
```shell script
helm repo add oam.catalog http://oam.dev/catalog/
```
2. Update the chart repo
```shell script
helm repo update
```
3. Install route trait controller
```shell script
helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait
```
> Note: route is one of the extension capabilities [installed from cap center](../cap-center),
> please install it if you can't find it in `vela traits`.
## Setting route policy
Add routing config under `express-server`:
```yaml
services:
express-server:
...
route:
domain: example.com
rules:
- path: /testapp
rewriteTarget: /
```
> The full specification of `route` can be viewed via `$ vela show route`.
Apply again:
```bash
$ vela up
```
Check the status until we see route is ready:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-04 16:34:43.762730145 -0800 PST
Updated at: 2020-11-11 16:21:37.761158941 -0800 PST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Last Deployment:
Created at: 2020-11-11 16:21:37 -0800 PST
Updated at: 2020-11-11T16:21:37-08:00
Routes:
- route: Visiting URL: http://example.com IP: <ingress-IP-address>
```
**In [kind cluster setup](../../install#kind)**, you can visit the service via localhost:
> If not in kind cluster, replace 'localhost' with ingress address
```
$ curl -H "Host:example.com" http://localhost/testapp
Hello World
```
---
title: 学习使用 Appfile
---
`appfile` 的示例如下:
```yaml
name: testapp
services:
frontend: # 1st service
image: oamdev/testapp:v1
build:
docker:
file: Dockerfile
context: .
cmd: ["node", "server.js"]
port: 8080
route: # trait
domain: example.com
rules:
- path: /testapp
rewriteTarget: /
backend: # 2nd service
type: task # workload type
image: perl
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```
在底层,`Appfile` 会从源码构建镜像,然后用镜像名称创建 `Application` 资源。
## Schema
> 在深入学习 Appfile 的详细 schema 之前,我们建议你先熟悉 KubeVela 的[核心概念](../core-concepts/application)
```yaml
name: _app-name_
services:
_service-name_:
# If `build` section exists, this field will be used as the name to build image. Otherwise, KubeVela will try to pull the image with given name directly.
image: oamdev/testapp:v1
build:
docker:
file: _Dockerfile_path_ # relative path is supported, e.g. "./Dockerfile"
context: _build_context_path_ # relative path is supported, e.g. "."
push:
local: kind # optionally push to local KinD cluster instead of remote registry
type: webservice (default) | worker | task
# detailed configurations of workload
... properties of the specified workload ...
_trait_1_:
# properties of trait 1
_trait_2_:
# properties of trait 2
... more traits and their properties ...
_another_service_name_: # more services can be defined
...
```
> 想了解怎样设置特定类型的 workload 或者 trait请阅读[参考文档手册](./check-ref-doc)
## 示例流程
在以下的流程中,我们会构建并部署一个 NodeJs 的示例 app。该 app 的源文件在[这里](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp)。
### 环境要求
- [Docker](https://docs.docker.com/get-docker/):需要在主机上安装 Docker
- [KubeVela](../install):需要安装并配置好 KubeVela
### 1. 下载测试的 app 的源码
git clone 然后进入 testapp 目录:
```bash
$ git clone https://github.com/oam-dev/kubevela.git
$ cd kubevela/docs/examples/testapp
```
这个目录包含 NodeJS app 的源码和用于构建 app 镜像的 Dockerfile。
### 2. 使用命令部署 app
我们将会使用目录中的 [vela.yaml](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp/vela.yaml) 文件来构建和部署 app
> 注意:请将 `oamdev` 修改为你自己注册的镜像仓库账号,或者你也可以尝试下文的「本地测试方式」。
```yaml
image: oamdev/testapp:v1 # change this to your image
```
执行如下命令:
```bash
$ vela up
Parsing vela.yaml ...
Loading templates ...
Building service (express-server)...
Sending build context to Docker daemon 71.68kB
Step 1/10 : FROM mhart/alpine-node:12
---> 9d88359808c3
...
pushing image (oamdev/testapp:v1)...
...
Rendering configs for service (express-server)...
Writing deploy config to (.vela/deploy.yaml)
Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc express-server
```
检查服务状态:
```bash
$ vela status testapp
About:
Name: testapp
Namespace: default
Created at: 2020-11-02 11:08:32.138484 +0800 CST
Updated at: 2020-11-02 11:08:32.138485 +0800 CST
Services:
- Name: express-server
Type: webservice
HEALTHY Ready: 1/1
Last Deployment:
Created at: 2020-11-02 11:08:33 +0800 CST
Updated at: 2020-11-02T11:08:32+08:00
Routes:
```
#### 本地测试方式
如果你本地有运行的 [kind](../install) 集群,你可以尝试推送到本地。这种方法无需注册远程容器仓库。
`build` 中添加 local 的选项值:
```yaml
build:
# push image into local kind cluster without remote transfer
push:
local: kind
docker:
file: Dockerfile
context: .
```
然后部署到 kind
```bash
$ vela up
```
<details><summary>(进阶) 检查渲染后的 manifests 文件</summary>
默认情况下Vela 会把渲染后的最终 manifests 文件写入 `.vela/deploy.yaml`
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
name: testapp
namespace: default
spec:
components:
- componentName: express-server
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: express-server
namespace: default
spec:
workload:
apiVersion: apps/v1
kind: Deployment
metadata:
name: express-server
...
---
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
name: testapp-default-health
namespace: default
spec:
...
```
</details>
### [可选] 配置其他类型的 workload
至此,我们成功地部署一个默认类型的 workload 的 *[web 服务](../end-user/components/cue/webservice)*。我们也可以添加 *[Task](../end-user/components/cue/task)* 类型的服务到同一个 app 中。
```yaml
services:
pi:
type: task
image: perl
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
express-server:
...
```
然后再次部署 Appfile 来升级应用:
```bash
$ vela up
```
恭喜!你已经学会了使用 `Appfile` 来部署应用了。
## 下一步?
更多关于 app 的操作:
- [Check Application Logs](./check-logs)
- [Execute Commands in Application Container](./exec-cmd)
- [Access Application via Route](./port-forward)
---
title: 端口转发
---
当你的 web 服务 Application 已经被部署就可以通过 `port-forward` 来本地访问。
```bash
$ vela ls
NAME APP WORKLOAD TRAITS STATUS CREATED-TIME
express-server testapp webservice Deployed 2020-09-18 22:42:04 +0800 CST
```
执行如下命令,它将直接为你打开浏览器:
```bash
$ vela port-forward testapp
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Forward successfully! Opening browser ...
Handling connection for 8080
Handling connection for 8080
```