diff --git a/blog/2020-12-14-extend-kubevela-by-cuelang-in-20-mins.md b/blog/2020-12-14-extend-kubevela-by-cuelang-in-20-mins.md
deleted file mode 100644
index 713d8875..00000000
--- a/blog/2020-12-14-extend-kubevela-by-cuelang-in-20-mins.md
+++ /dev/null
@@ -1,734 +0,0 @@
-# 如何在 20 分钟内给你的 K8s PaaS 上线一个新功能
-
->2020年12月14日 19:33, by @wonderflow
-
-上个月,[KubeVela 正式发布](https://kubevela.io/#/blog/zh/kubevela-the-extensible-app-platform-based-on-open-application-model-and-kubernetes)了,
-作为一款简单易用且高度可扩展的应用管理平台与核心引擎,可以说是广大平台工程师用来构建自己的云原生 PaaS 的神兵利器。
-那么本文就以一个实际的例子,讲解一下如何在 20 分钟内,为你基于 KubeVela 的 PaaS “上线”一个新能力。
-
-在正式开始本文档的教程之前,请确保你本地已经正确[安装了 KubeVela](https://kubevela.io/#/en/install) 及其依赖的 K8s 环境。
-
-# KubeVela 扩展的基本结构
-
-KubeVela 的基本架构如图所示:
-
-![image](https://kubevela-docs.oss-cn-beijing.aliyuncs.com/kubevela-extend.jpg)
-
-简单来说,KubeVela 通过添加 **Workload Type** 和 **Trait** 来为用户扩展能力,平台的服务提供方通过 Definition 文件注册和扩展,向上通过 Appfile 透出扩展的功能。官方文档中也分别给出了基本的编写流程,其中 2 个是 Workload 的扩展例子,1 个是 Trait 的扩展例子:
-
-- [OpenFaaS 为例的 Workload Type 扩展](https://kubevela.io/#/en/platform-engineers/workload-type)
-- [云资源 RDS 为例的 Workload Type 扩展](https://kubevela.io/#/en/platform-engineers/cloud-services)
-- [KubeWatch 为例的 Trait 扩展](https://kubevela.io/#/en/platform-engineers/trait)
-
-我们以一个内置的 WorkloadDefinition 为例来介绍一下 Definition 文件的基本结构:
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: WorkloadDefinition
-metadata:
-  name: webservice
-  annotations:
-    definition.oam.dev/description: "`Webservice` is a workload type to describe long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
-      If workload type is skipped for any service defined in Appfile, it will be defaulted to `Web Service` type."
-spec:
-  definitionRef:
-    name: deployments.apps
-  extension:
-    template: |
-      output: {
-        apiVersion: "apps/v1"
-        kind:       "Deployment"
-        spec: {
-          selector: matchLabels: {
-            "app.oam.dev/component": context.name
-          }
-          template: {
-            metadata: labels: {
-              "app.oam.dev/component": context.name
-            }
-            spec: {
-              containers: [{
-                name:  context.name
-                image: parameter.image
-                if parameter["cmd"] != _|_ {
-                  command: parameter.cmd
-                }
-                if parameter["env"] != _|_ {
-                  env: parameter.env
-                }
-                if context["config"] != _|_ {
-                  env: context.config
-                }
-                ports: [{
-                  containerPort: parameter.port
-                }]
-                if parameter["cpu"] != _|_ {
-                  resources: {
-                    limits:
-                      cpu: parameter.cpu
-                    requests:
-                      cpu: parameter.cpu
-                  }}
-              }]
-          }}}
-      }
-      parameter: {
-        // +usage=Which image would you like to use for your service
-        // +short=i
-        image: string
-
-        // +usage=Commands to run in the container
-        cmd?: [...string]
-
-        // +usage=Which port do you want customer traffic sent to
-        // +short=p
-        port: *80 | int
-        // +usage=Define arguments by using environment variables
-        env?: [...{
-          // +usage=Environment variable name
-          name: string
-          // +usage=The value of the environment variable
-          value?: string
-          // +usage=Specifies a source the value of this var should come from
-          valueFrom?: {
-            // +usage=Selects a key of a secret in the pod's namespace
-            secretKeyRef: {
-              // +usage=The name of the secret in the pod's namespace to select from
-              name: string
-              // +usage=The key of the secret to select from. Must be a valid secret key
-              key: string
-            }
-          }
-        }]
-        // +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
-        cpu?: string
-      }
-```
-
-乍一看挺长的,好像很复杂,但是不要着急,其实细看之下它分为两部分:
-
-* 不含扩展字段的 Definition 注册部分。
-* 供 Appfile 使用的扩展模板(CUE Template)部分。
-
-我们拆开来慢慢介绍,其实学起来很简单。
-
-# 不含扩展字段的 Definition 注册部分
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: WorkloadDefinition
-metadata:
-  name: webservice
-  annotations:
-    definition.oam.dev/description: "`Webservice` is a workload type to describe long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
-      If workload type is skipped for any service defined in Appfile, it will be defaulted to `Web Service` type."
-spec:
-  definitionRef:
-    name: deployments.apps
-```
-
-这一部分满打满算 11 行,其中有 3 行是在介绍 `webservice` 的功能,5 行是固定的格式,只有 2 行含有特定信息:
-
-```yaml
-  definitionRef:
-    name: deployments.apps
-```
-
-这两行代表了这个 Definition 背后使用的 CRD 名称,其格式是 `<resources>.<api-group>`。了解 K8s 的同学应该知道 K8s 中比较常用的是通过 `api-group`、`version` 和 `kind` 定位资源,而 `kind` 在 K8s restful API 中对应的是 `resources`。以大家熟悉的 `Deployment` 和 `Ingress` 为例,它们的对应关系如下:
-
-
-| api-group | kind | version | resources |
-| -------- | -------- | -------- | -------- |
-| apps | Deployment | v1 | deployments |
-| networking.k8s.io | Ingress | v1 | ingresses |
-
-> 这里补充一个小知识,为什么有了 kind 还要加个 resources 的概念呢?
-> 因为一个 CRD 除了 kind 本身还有一些像 status、replica 这样的字段希望跟 spec 本身解耦开来在 restful API 中单独更新,
-> 所以 resources 除了 kind 对应的那一个,还会有一些额外的 resources,如 Deployment 的 status 表示为 `deployments/status`。
-
-所以相信聪明的你已经明白了不含 extension 的情况下,Definition 应该怎么写了,最简单的方式就是根据 K8s 的资源组合方式拼接一下,只要填下面三个尖括号处的内容就可以了。
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: WorkloadDefinition
-metadata:
-  name: <这里写名称>
-spec:
-  definitionRef:
-    name: <这里写resources>.<这里写api-group>
-```
-
-针对运维特征注册(TraitDefinition)也是这样。
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: TraitDefinition
-metadata:
-  name: <这里写名称>
-spec:
-  definitionRef:
-    name: <这里写resources>.<这里写api-group>
-```
-
-所以把 `Ingress` 作为 KubeVela 的扩展写进去就是:
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: TraitDefinition
-metadata:
-  name: ingress
-spec:
-  definitionRef:
-    name: ingresses.networking.k8s.io
-```
-
-除此之外,TraitDefinition 中还增加了一些其他模型层功能,如:
-
-* `appliesToWorkloads`: 表示这个 trait 可以作用于哪些 Workload 类型。
-* `conflictWith`: 表示这个 trait 和哪些其他类型的 trait 有冲突。
-* `workloadRefPath`: 表示这个 trait 包含的 workload 字段是哪个,KubeVela 在生成 trait 对象时会自动填充。
-...
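如果想同时用到这些模型层字段,可以参考下面的示意写法(假设性示例,仅用于说明字段用法;`manualscalertraits.core.oam.dev` 是 OAM 规范内置的手动扩缩容 CRD,其 spec 中含有 workloadRef 字段):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: scaler
spec:
  # 该 trait 只能作用于这两种 Workload 类型
  appliesToWorkloads:
    - webservice
    - worker
  # KubeVela 生成 trait 对象时,会把 workload 的引用自动填充到这个路径
  workloadRefPath: spec.workloadRef
  definitionRef:
    name: manualscalertraits.core.oam.dev
```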
-
-这些功能都是可选的,本文中不涉及使用,在后续的其他文章中我们再给大家详细介绍。
-
-所以到这里,相信你已经掌握了一个不含 extensions 的基本扩展模式,而剩下的部分就是围绕 [CUE](https://cuelang.org/) 的抽象模板。
-
-# 供 Appfile 使用的扩展模板(CUE Template)部分
-
-对 CUE 本身有兴趣的同学可以参考这篇 [CUE 基础入门](https://wonderflow.info/posts/2020-12-15-cuelang-template/) 多做一些了解,限于篇幅本文对 CUE 本身不详细展开。
-
-大家知道 KubeVela 的 Appfile 写起来很简洁,但是 K8s 的对象是一个相对比较复杂的 YAML,而为了保持简洁的同时又不失可扩展性,KubeVela 提供了一个从复杂到简洁的桥梁。
-这就是 Definition 中 CUE Template 的作用。
-
-## CUE 格式模板
-
-让我们先来看一个 Deployment 的 YAML 文件,如下所示,其中很多内容都是固定的框架(模板部分),真正需要用户填的内容其实只有少量的几个字段(参数部分)。
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: mytest
-spec:
-  template:
-    spec:
-      containers:
-      - name: mytest
-        env:
-        - name: a
-          value: b
-        image: nginx:v1
-    metadata:
-      labels:
-        app.oam.dev/component: mytest
-  selector:
-    matchLabels:
-      app.oam.dev/component: mytest
-```
-
-在 KubeVela 中,Definition 文件的固定格式就是分为 `output` 和 `parameter` 两部分。其中 `output` 中的内容就是“模板部分”,而 `parameter` 就是参数部分。
-
-那我们来把上面的 Deployment YAML 改写成 Definition 中模板的格式。
-
-```cue
-output: {
-    apiVersion: "apps/v1"
-    kind:       "Deployment"
-    metadata: name: "mytest"
-    spec: {
-        selector: matchLabels: {
-            "app.oam.dev/component": "mytest"
-        }
-        template: {
-            metadata: labels: {
-                "app.oam.dev/component": "mytest"
-            }
-            spec: {
-                containers: [{
-                    name:  "mytest"
-                    image: "nginx:v1"
-                    env: [{name: "a", value: "b"}]
-                }]
-            }}}
-}
-```
-
-这个格式跟 JSON 很像,事实上这就是 CUE 的格式,而 CUE 本身是一个 JSON 的超集。也就是说,CUE 的格式在满足 JSON 规则的基础上,增加了一些简便规则,
-使其更易读易用:
-
-* C 语言的注释风格。
-* 表示字段名称的双引号在没有特殊符号的情况下可以缺省。
-* 字段值结尾的逗号可以缺省,在字段最后写了逗号也不会出错。
-* 最外层的大括号可以省略。
-
-## CUE 格式的模板参数--变量引用
-
-编写好了模板部分,让我们来构建参数部分,而这个参数其实就是变量的引用。
-
-```
-parameter: {
-    name:  string
-    image: string
-}
-output: {
-    apiVersion: "apps/v1"
-    kind:       "Deployment"
-    spec: {
-        selector: matchLabels: {
-            "app.oam.dev/component": parameter.name
-        }
-        template: {
-            metadata: labels: {
-                "app.oam.dev/component": parameter.name
-            }
-            spec: {
-                containers: [{
-                    name:  parameter.name
-                    image: parameter.image
-                }]
-            }}}
-}
-```
-
-如上面的这个例子所示,KubeVela 中的模板参数就是通过 `parameter` 这个部分来完成的,而 `parameter` 本质上就是作为引用,替换掉了 `output` 中的某些字段。
-
-## 完整的 Definition 以及在 Appfile 使用
-
-事实上,经过上面两部分的组合,我们已经可以写出一个完整的 Definition 文件:
-
-```yaml
-apiVersion: core.oam.dev/v1alpha2
-kind: WorkloadDefinition
-metadata:
-  name: mydeploy
-spec:
-  definitionRef:
-    name: deployments.apps
-  extension:
-    template: |
-      parameter: {
-          name:  string
-          image: string
-      }
-      output: {
-          apiVersion: "apps/v1"
-          kind:       "Deployment"
-          spec: {
-              selector: matchLabels: {
-                  "app.oam.dev/component": parameter.name
-              }
-              template: {
-                  metadata: labels: {
-                      "app.oam.dev/component": parameter.name
-                  }
-                  spec: {
-                      containers: [{
-                          name:  parameter.name
-                          image: parameter.image
-                      }]
-                  }}}
-      }
-```
-
-为了方便调试,一般情况下可以预先分为两个文件,一部分放前面的 yaml 部分,假设命名为 `def.yaml`,如:
-
-```shell script
-apiVersion: core.oam.dev/v1alpha2
-kind: WorkloadDefinition
-metadata:
-  name: mydeploy
-spec:
-  definitionRef:
-    name: deployments.apps
-  extension:
-    template: |
-```
-
-另一个则放 cue 文件,假设命名为 `def.cue`:
-
-```shell script
-parameter: {
-    name:  string
-    image: string
-}
-output: {
-    apiVersion: "apps/v1"
-    kind:       "Deployment"
-    spec: {
-        selector: matchLabels: {
-            "app.oam.dev/component": parameter.name
-        }
-        template: {
-            metadata: labels: {
-                "app.oam.dev/component": parameter.name
-            }
-            spec: {
-                containers: [{
-                    name:  parameter.name
-                    image: parameter.image
-                }]
-            }}}
-}
-```
-
-先对 `def.cue` 做一个格式化,格式化的同时 cue 工具本身会做一些校验,也可以更深入地[通过 cue 命令做调试](https://wonderflow.info/posts/2020-12-15-cuelang-template/):
-
-```shell script
-cue fmt def.cue
-```
-
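除了 `cue fmt`,还可以用 `cue eval` 在本地把模板渲染出来,提前确认 `output` 的结构是否符合预期。下面是一个调试示意(追加的参数取值只是演示用的假设值):

```shell script
# 临时补充一组具体参数,CUE 会把它与 parameter 的类型定义做合一校验
echo 'parameter: {name: "mytest", image: "nginx:v1"}' >> def.cue
# 只渲染并输出 output 字段
cue eval -e output def.cue
```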
-调试完成后,可以通过脚本把这个 yaml 组装起来:
-
-```shell script
-./hack/vela-templates/mergedef.sh def.yaml def.cue > mydeploy.yaml
-```
-
-再把这个 yaml 文件 apply 到 K8s 集群中。
-
-```shell script
-$ kubectl apply -f mydeploy.yaml
-workloaddefinition.core.oam.dev/mydeploy created
-```
-
-一旦新能力 `kubectl apply` 到了 Kubernetes 中,不用重启,也不用更新,KubeVela 的用户可以立刻看到一个新的能力出现并且可以使用了:
-
-```shell script
-$ vela workloads
-Automatically discover capabilities successfully ✅ Add(1) Update(0) Delete(0)
-
-TYPE       CATEGORY   DESCRIPTION
-+mydeploy  workload   description not defined
-
-NAME       DESCRIPTION
-mydeploy   description not defined
-```
-
-在 Appfile 中使用方式如下:
-
-```yaml
-name: my-extend-app
-services:
-  mysvc:
-    type: mydeploy
-    image: crccheck/hello-world
-    name: mysvc
-```
-
-执行 `vela up` 就能把这个应用运行起来了:
-
-```shell script
-$ vela up -f docs/examples/blog-extension/my-extend-app.yaml
-Parsing vela appfile ...
-Loading templates ...
-
-Rendering configs for service (mysvc)...
-Writing deploy config to (.vela/deploy.yaml)
-
-Applying deploy configs ...
-Checking if app has been deployed...
-App has not been deployed, creating a new deployment...
-✅ App has been deployed 🚀🚀🚀
-    Port forward: vela port-forward my-extend-app
-             SSH: vela exec my-extend-app
-         Logging: vela logs my-extend-app
-      App status: vela status my-extend-app
-  Service status: vela status my-extend-app --svc mysvc
-```
-
-我们来查看一下应用的状态,已经正常运行起来了(`HEALTHY Ready: 1/1`):
-
-```shell script
-$ vela status my-extend-app
-About:
-
-  Name:       my-extend-app
-  Namespace:  env-application
-  Created at: 2020-12-15 16:32:25.08233 +0800 CST
-  Updated at: 2020-12-15 16:32:25.08233 +0800 CST
-
-Services:
-
-  - Name: mysvc
-    Type: mydeploy
-    HEALTHY Ready: 1/1
-```
-
-# Definition 模板中的高级用法
-
-上面我们已经通过模板替换这个最基本的功能体验了扩展 KubeVela 的全过程。除此之外,可能你还有一些比较复杂的需求,如条件判断、循环、复杂类型等,需要一些高级的用法。
-
-## 结构体参数
-
-如果模板中有一些参数类型比较复杂,包含结构体和嵌套的多个结构体,就可以使用结构体定义。
-
-1. 定义一个结构体类型,包含 1 个字符串成员、1 个整型成员和 1 个结构体成员。
-```
-#Config: {
-    name:  string
-    value: int
-    other: {
-        key:   string
-        value: string
-    }
-}
-```
-
-2. 在变量中使用这个结构体类型,并作为数组使用。
-```
-parameter: {
-    name:  string
-    image: string
-    config: [...#Config]
-}
-```
-
-3. 同样地,在 `output` 中也是以变量引用的方式使用。
-```shell script
-output: {
-    ...
-    spec: {
-        containers: [{
-            name:  parameter.name
-            image: parameter.image
-            env:   parameter.config
-        }]
-    }
-    ...
-}
-```
-
-4. Appfile 中的写法就是按照 parameter 定义的结构编写。
-```
-name: my-extend-app
-services:
-  mysvc:
-    type: mydeploy
-    image: crccheck/hello-world
-    name: mysvc
-    config:
-    - name: a
-      value: 1
-      other:
-        key: mykey
-        value: myvalue
-```
-
-## 条件判断
-
-有时候某些参数加还是不加取决于某个条件:
-
-```shell script
-parameter: {
-    name:   string
-    image:  string
-    useENV: bool
-}
-output: {
-    ...
-    spec: {
-        containers: [{
-            name:  parameter.name
-            image: parameter.image
-            if parameter.useENV == true {
-                env: [{name: "my-env", value: "my-value"}]
-            }
-        }]
-    }
-    ...
-}
-```
-
-在 Appfile 中就是直接填写对应的值。
-```
-name: my-extend-app
-services:
-  mysvc:
-    type: mydeploy
-    image: crccheck/hello-world
-    name: mysvc
-    useENV: true
-```
-
-## 可缺省参数
-
-有些情况下参数可能存在也可能不存在,即非必填,这个时候一般要配合条件判断使用。对于某个字段不存在的情况,判断条件是 `_variable != _|_`。
-
-```shell script
-parameter: {
-    name:  string
-    image: string
-    config?: [...#Config]
-}
-output: {
-    ...
-    spec: {
-        containers: [{
-            name:  parameter.name
-            image: parameter.image
-            if parameter.config != _|_ {
-                config: parameter.config
-            }
-        }]
-    }
-    ...
-}
-```
-
-这种情况下 Appfile 的 config 就非必填了,填了就渲染,没填就不渲染。
-
-## 默认值
-
-对于某些参数如果希望设置一个默认值,可以采用这个写法。
-
-```shell script
-parameter: {
-    name:  string
-    image: *"nginx:v1" | string
-}
-output: {
-    ...
- spec: { - containers: [{ - name: parameter.name - image: parameter.image - }] - } - ... -} -``` - -这个时候 Appfile 就可以不写 image 这个参数,默认使用 "nginx:v1": - -``` -name: my-extend-app -services: - mysvc: - type: mydeploy - name: mysvc -``` - - -## 循环 - -### Map 类型的循环 - -```shell script -parameter: { - name: string - image: string - env: [string]: string -} -output: { - spec: { - containers: [{ - name: parameter.name - image: parameter.image - env: [ - for k, v in parameter.env { - name: k - value: v - }, - ] - }] - } -} -``` - -Appfile 中的写法: -``` -name: my-extend-app -services: - mysvc: - type: mydeploy - name: "mysvc" - image: "nginx" - env: - env1: value1 - env2: value2 -``` - -### 数组类型的循环 - -```shell script -parameter: { - name: string - image: string - env: [...{name:string,value:string}] -} -output: { - ... - spec: { - containers: [{ - name: parameter.name - image: parameter.image - env: [ - for _, v in parameter.env { - name: v.name - value: v.value - }, - ] - }] - } -} -``` - -Appfile 中的写法: -``` -name: my-extend-app -services: - mysvc: - type: mydeploy - name: "mysvc" - image: "nginx" - env: - - name: env1 - value: value1 - - name: env2 - value: value2 -``` - -## KubeVela 内置的 `context` 变量 - -大家可能也注意到了,我们在 parameter 中定义的 name 每次在 Appfile中 实际上写了两次,一次是在 services 下面(每个service都以名称区分), -另一次则是在具体的`name`参数里面。事实上这里重复的不应该由用户再写一遍,所以 KubeVela 中还定义了一个内置的 `context`,里面存放了一些通用的环境上下文信息,如应用名称、秘钥等。 -直接在模板中使用 context 就不需要额外增加一个 `name` 参数了, KubeVela 在运行渲染模板的过程中会自动传入。 - -```shell script -parameter: { - image: string -} -output: { - ... - spec: { - containers: [{ - name: context.name - image: parameter.image - }] - } - ... -} -``` - -## KubeVela 中的注释增强 - -KubeVela 还对 cuelang 的注释做了一些扩展,方便自动生成文档以及被 CLI 使用。 - -``` - parameter: { - // +usage=Which image would you like to use for your service - // +short=i - image: string - - // +usage=Commands to run in the container - cmd?: [...string] - ... 
- } -``` - -其中,`+usgae` 开头的注释会变成参数的说明,`+short` 开头的注释后面则是在 CLI 中使用的缩写。 - -# 总结 -本文通过实际的案例和详细的讲述,为你介绍了在 KubeVela 中新增一个能力的详细过程与原理,以及能力模板的编写方法。 - -这里你可能还有个疑问,平台管理员这样添加了一个新能力后,平台的用户又该怎么能知道这个能力怎么使用呢?其实,在 KubeVela 中,它不仅能方便的添加新能力,**它还能自动为“能力”生成 Markdown 格式的使用文档!** 不信,你可以看下 KubeVela 本身的官方网站,所有在 `References/Capabilities`目录下能力使用说明文档(比如[这个](https://kubevela.io/#/en/developers/references/workload-types/webservice)),全都是根据每个能力的模板自动生成的哦。 -最后,欢迎大家写一些有趣的扩展功能,提交到 KubeVela 的[社区仓库](https://github.com/oam-dev/catalog/tree/master/registry)中来。 diff --git a/blog/2021-03-01-kubevela-official-documentation-translation-event.md b/blog/2021-03-01-kubevela-official-documentation-translation-event.md deleted file mode 100644 index 9f6f3fb2..00000000 --- a/blog/2021-03-01-kubevela-official-documentation-translation-event.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: KubeVela Official Documentation Translation Event -tags: [ documentation ] -description: KubeVela Official Documentation Translation Event ---- - -## 背景 - -KubeVela v1.0 启用了新的官网架构和文档维护方式,新增功能包括文档版本化控制、i18n 国际化以及自动化流程。但目前 KubeVela 官方文档只有英文版,这提高了学习和使用 KubeVela 的门槛,不利于项目的传播和发展,同时翻译工作也能显著提升语言能力,帮助我们拓宽阅读技术资料的广度,故组织本次活动。 - -## 活动流程 - -本次活动主要在 [kubevela.io](https://github.com/oam-dev/kubevela.io) repo 下进行,报名参与和认领任务都在 [KubeVela 官方文档翻译登记](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) 中(**请务必在表格中登记信息**)。 - -### 开始翻译 - -![翻译流程](https://tvax1.sinaimg.cn/large/ad5fbf65gy1gpdbriuraij20k20l6dhm.jpg) - -参与翻译活动的基本流程如下: -- 任务领取:在 [KubeVela 官方文档翻译登记](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) 登记并认领任务; -- 提交:参与人员提交 PR 等待 review; -- 审阅:maintainer 审阅 PR; -- 终审: 对 review 后的内容进行最后确认; -- 合并:merge 到 master 分支,任务结束。 - -### 参与指南 - -下面具体介绍参与翻译的具体工作。 - -#### 准备工作 - -- 账号:你需要先准备一个 GitHub 账号。使用 Github 进行翻译任务的认领和 PR 提交。 -- 仓库和分支管理 - - fork [kubevela.io](https://github.com/oam-dev/kubevela.io) 的仓库,并作为自己仓库的上游: `git remote add upstream https://github.com/oam-dev/kubevela.io.git` - - 在自己的仓库,也就是 origin 上进行翻译; - - 一个任务新建一个 branch -- Node.js 版本 >= 12.13.0 (可以使用 `node -v` 命令查看) -- Yarn 版本 >= 1.5(可以使用 `yarn --version` 命令查看) - -#### 参与步骤 - -**Step1:任务浏览** - -在 [KubeVela 官方文档翻译登记](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) 登记并浏览有哪些任务可以认领。 - -**Step2:任务领取** - -在 [KubeVela 官方文档翻译登记](https://shimo.im/sheets/QrCwcDqh8xkRWKPC/MODOC) 表格中编辑并认领任务。注意:为保证质量,同一译者只能同时认领三个任务,完成后才可继续认领。 - -**Step3:本地构建和预览** - -```shell -# 命令安装依赖 -$ yarn install -# 本地运行中文文档 -$ yarn run start -- --locale zh -yarn run v1.22.10 -warning From Yarn 1.0 onwards, scripts don't require "--" for options to be forwarded. In a future version, any explicit "--" will be forwarded as-is to the scripts. -$ docusaurus start --locale zh -Starting the development server... 
-Docusaurus website is running at: http://localhost:3000/zh/ -✔ Client - Compiled successfully in 7.54s -ℹ 「wds」: Project is running at http://localhost:3000/ -ℹ 「wds」: webpack output is served from /zh/ -ℹ 「wds」: Content not from webpack is served from /Users/saybot/own/kubevela.io -ℹ 「wds」: 404s will fallback to /index.html -✔ Client - Compiled successfully in 137.94ms -``` -请勿修改 `/docs` 目录下内容,中文文档在 `/i18n/zh/docusaurus-plugin-content-docs` 中,之后就可以在 http://localhost:3000/zh/ 中进行预览了。 - -**Step4:提交 PR** - -确认翻译完成就可以提交 PR 了,注意:为了方便 review 每篇翻译为**一个 PR**,如果翻译多篇请 checkout **多个分支**并提交多个 PR。 - -**Step5:审阅** - -由 maintainer 对 PR 进行 review。 - -**Step6:任务完成** - -翻译合格的文章将会 merge 到 [kubevela.io](https://github.com/oam-dev/kubevela.io) 的 master 分支进行发布。 - - -### 翻译要求 - -- 数字和英文两边是中文要加空格。 -- KubeVela 统一写法。K 和 V 大写。 -- 翻译完请先阅读一遍,不要出现遗漏段落,保证文章通顺、符合中文阅读习惯。不追求严格一致,可以意译。review 的时候也会检验。 -- 你和您不要混用,统一使用 **“你”**。 -- 不会翻译的词汇可以不翻译,可以在 PR 中说明,review 的时候会查看。 -- Component、Workload、Trait 这些 OAM/KubeVela 里面定义的专属概念不要翻译,我们也要加强这些词汇的认知。可以在一篇新文章最开始出现的时候用括号加上中文翻译。 -- 注意中英文标点符号。 -- `PR` 命名规范 `Translate <翻译文件相对路径>`,如 `Translate i18n/zh/docusaurus-plugin-content-docs/current/introduction.md`。 diff --git a/blog/2021-09-02-kubevela-jenkins-cicd.md b/blog/2021-09-02-kubevela-jenkins-cicd.md index f3192022..915b89a9 100644 --- a/blog/2021-09-02-kubevela-jenkins-cicd.md +++ b/blog/2021-09-02-kubevela-jenkins-cicd.md @@ -67,7 +67,7 @@ Finally, go to *Dashboard > Manage Jenkins > Configure System > GitHub* in Jenki ### KubeVela -You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to [official doc](/docs/advanced-install#install-kubevela-with-apiserver) for details. +You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to [official doc](/docs/platform-engineers/advanced-install#install-kubevela-with-apiserver) for details. ## Composing Applications diff --git a/docs/platform-engineers/advanced-install.mdx b/docs/platform-engineers/advanced-install.mdx index c2e0c675..49160834 100644 --- a/docs/platform-engineers/advanced-install.mdx +++ b/docs/platform-engineers/advanced-install.mdx @@ -103,28 +103,26 @@ helm search repo kubevela/vela-core -l ### Step 2. 
Upgrade KubeVela CRDs ```shell -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_appdeployments.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applications.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_approllouts.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_clusters.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_envbindings.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_healthscopes.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_initializers.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_podspecworkloads.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouts.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouttraits.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml +kubectl apply -f 
https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
```

> Tips: If you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable`, please delete the CRD which reports the error and re-apply the KubeVela CRDs.
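For example, a recovery for that error could look like the following (the `clusters` CRD here is only illustrative; delete whichever CRD the error message actually reports, then re-apply it):

```shell
kubectl delete crd clusters.core.oam.dev
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
```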
diff --git a/docusaurus.config.js b/docusaurus.config.js index bc881343..07a08cf5 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -76,11 +76,11 @@ module.exports = { }, { label: 'User Manuals', - to: '/docs/end-user/application', + to: '/docs/end-user/components/helm', }, { label: 'Administrator Manuals', - to: '/docs/platform-engineers/overview', + to: '/docs/platform-engineers/oam/oam-model', }, ], }, @@ -146,7 +146,7 @@ module.exports = { showLastUpdateAuthor: true, showLastUpdateTime: true, includeCurrentVersion: true, - lastVersion: 'v1.0', + lastVersion: 'v1.1', }, blog: { showReadingTime: true, diff --git a/i18n/zh/docusaurus-plugin-content-blog/2021-09-02-kubevela-jenkins-cicd.md b/i18n/zh/docusaurus-plugin-content-blog/2021-09-02-kubevela-jenkins-cicd.md index a7fd94a8..2bfbfb59 100644 --- a/i18n/zh/docusaurus-plugin-content-blog/2021-09-02-kubevela-jenkins-cicd.md +++ b/i18n/zh/docusaurus-plugin-content-blog/2021-09-02-kubevela-jenkins-cicd.md @@ -33,7 +33,7 @@ KubeVela 打通了应用与基础设施之间的交付管控的壁垒,相较 > 本文采用了 Jenkins 作为持续集成工具,开发者也可以使用其他 CI 工具,如 TravisCI 或者 GitHub Action。 -首先您需要准备一份 Jenkins 环境来部署 CI 流水线。安装与初始化 Jenkins 流程可以参见[官方文档](https://www.jenkins.io/doc/book/installing/linux/)。 +首先你需要准备一份 Jenkins 环境来部署 CI 流水线。安装与初始化 Jenkins 流程可以参见[官方文档](https://www.jenkins.io/doc/book/installing/linux/)。 需要注意的是,由于本文的 CI 流水线是基于 Docker 及 GitHub 的,因此在安装 Jenkins 之后还需要安装相关插件 (Dashboard > Manage Jenkins > Manage Plugins) ,包括 Pipeline、HTTP Request Plugin、Docker Pipeline、Docker Plugin。 @@ -67,7 +67,7 @@ KubeVela 打通了应用与基础设施之间的交付管控的壁垒,相较 ### KubeVela 环境 -您需要在 Kubernetes 集群中安装 KubeVela,并启用 apiserver 功能,可以参考[官方文档](/docs/advanced-install#install-kubevela-with-apiserver)。 +你需要在 Kubernetes 集群中安装 KubeVela,并启用 apiserver 功能,可以参考[官方文档](/docs/platform-engineers/advanced-install#install-kubevela-with-apiserver)。 ## 编写应用 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/case-studies/multi-cluster.md b/i18n/zh/docusaurus-plugin-content-docs/current/case-studies/multi-cluster.md index 19d28cef..c251347c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/case-studies/multi-cluster.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/case-studies/multi-cluster.md @@ -16,7 +16,7 @@ title: 多集群应用交付 ## 准备工作 -在使用多集群应用部署之前,你需要将子集群通过 KubeConfig 加入到 KubeVela 的管控中来。Vela CLI 可以帮您实现这一点。 +在使用多集群应用部署之前,你需要将子集群通过 KubeConfig 加入到 KubeVela 的管控中来。Vela CLI 可以帮你实现这一点。 ```shell script vela cluster join diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/end-user/components/more.md b/i18n/zh/docusaurus-plugin-content-docs/current/end-user/components/more.md index 87f56c05..707dc3da 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/end-user/components/more.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/end-user/components/more.md @@ -2,7 +2,7 @@ title: 获取更多 --- -KubeVela 中的模块完全都是可定制和可插拔的,所以除了内置的组件之外,你还可以通过如下的方式添加更多您自己的组件类型。 +KubeVela 中的模块完全都是可定制和可插拔的,所以除了内置的组件之外,你还可以通过如下的方式添加更多你自己的组件类型。 ## 1. 从官方或第三方能力中心获取模块化能力 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/end-user/traits/more.md b/i18n/zh/docusaurus-plugin-content-docs/current/end-user/traits/more.md index cd21a763..45e97a7d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/end-user/traits/more.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/end-user/traits/more.md @@ -2,7 +2,7 @@ title: 获取更多 --- -KubeVela 中的模块完全都是可定制和可插拔的,所以除了内置的运维能力之外,你还可以通过如下的方式添加更多您自己的运维能力。 +KubeVela 中的模块完全都是可定制和可插拔的,所以除了内置的运维能力之外,你还可以通过如下的方式添加更多你自己的运维能力。 ## 1. 
从官方或第三方能力中心获取模块化能力 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/install.mdx b/i18n/zh/docusaurus-plugin-content-docs/current/install.mdx index 0291ebce..384d9c6c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/install.mdx +++ b/i18n/zh/docusaurus-plugin-content-docs/current/install.mdx @@ -199,7 +199,7 @@ sudo mv ./vela /usr/local/bin/vela ## 4. 【可选】安装插件 -KubeVela 支持一系列[开箱即用的插件](./platform-engineers/advanced-install#插件列表),建议您至少开启以下插件: +KubeVela 支持一系列[开箱即用的插件](./platform-engineers/advanced-install#插件列表),建议你至少开启以下插件: * Helm 以及 Kustomize 组件功能插件 ```shell diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/advanced-install.mdx b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/advanced-install.mdx index cb22f27b..32a9316d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/advanced-install.mdx +++ b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/advanced-install.mdx @@ -105,28 +105,26 @@ helm search repo kubevela/vela-core -l ### 第二步 升级 KubeVela 的 CRDs ```shell -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_appdeployments.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_applications.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_approllouts.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_clusters.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_envbindings.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_healthscopes.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_initializers.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml -kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml -kubectl apply -f 
https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
-kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_podspecworkloads.yaml
-kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
-kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/v1.1.0/charts/vela-core/crds/standard.oam.dev_rollouttraits.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
```

> 提示:如果看到诸如 `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable` 之类的错误,请删除出错的 CRD 后再重新安装。

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/observability.md
b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/observability.md index 9e69defa..93d1709e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/observability.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/observability.md @@ -92,7 +92,7 @@ Prometheus server 需要的 pvc 的大小,同 `alertmanager-pvc-size`。 - grafana-domain -Grafana 的域名,可以使用您自定义的域名,也可以使用 ACK 提供的集群级别的泛域名,`*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`, +Grafana 的域名,可以使用你自定义的域名,也可以使用 ACK 提供的集群级别的泛域名,`*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`, 如本处取值 `grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`。 #### 其他云服务商提供的 Kubernetes 集群 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/performance-finetuning.md b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/performance-finetuning.md index a9681d16..76f0e959 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/performance-finetuning.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/platform-engineers/system-operation/performance-finetuning.md @@ -4,7 +4,7 @@ title: 性能调优 ### 推荐配置 -在集群规模变大,应用数量变多时,可能会因为 KubeVela 的控制器性能跟不上需求导致 KubeVela 系统内的应用运维出现问题,这可能是由于您的 KubeVela 控制器参数不当所致。 +在集群规模变大,应用数量变多时,可能会因为 KubeVela 的控制器性能跟不上需求导致 KubeVela 系统内的应用运维出现问题,这可能是由于你的 KubeVela 控制器参数不当所致。 在 KubeVela 的性能测试中,KubeVela 团队验证了在各种不同规模的场景下 KubeVela 控制器的运维能力。并给出了以下的推荐配置: @@ -14,7 +14,7 @@ title: 性能调优 | 中 | < 500 | < 5,000 | < 30,000 | 4 | 500 | 800 | 1 | 2Gi | | 大 | < 1,000 | < 12,000 | < 72,000 | 4 | 800 | 1,000 | 2 | 4Gi | -> 上述配置中,单一应用的规模应在 2~3 个组件,5~6 个资源左右。如果您的场景下,应用普遍较大,如单个应用需要对应 20 个资源,那么您可以按照比例相应提高各项配置。 +> 上述配置中,单一应用的规模应在 2~3 个组件,5~6 个资源左右。如果你的场景下,应用普遍较大,如单个应用需要对应 20 个资源,那么你可以按照比例相应提高各项配置。 ### 调优方法 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1.json b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1.json new file mode 100644 index 00000000..3236c9fc --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1.json @@ -0,0 +1,166 @@ +{ + "version.label": { + "message": "Next", + "description": "The label for version current" + }, + "sidebar.docs.category.Getting Started": { + "message": "快速开始", + "description": "The label for category Getting Started in sidebar docs" + }, + "sidebar.docs.category.Core Concepts": { + "message": "核心概念", + "description": "The label for Core Concepts in sidebar docs" + }, + "sidebar.docs.category.Learning CUE": { + "message": "CUE 语言", + "description": "The label for category Learning CUE in sidebar docs" + }, + "sidebar.docs.category.Helm": { + "message": "Helm", + "description": "The label for category Helm in sidebar docs" + }, + "sidebar.docs.category.Raw Template": { + "message": "Raw Template", + "description": "The label for category Raw Template in sidebar docs" + }, + "sidebar.docs.category.Traits System": { + "message": "运维特征系统", + "description": "The label for category Traits System in sidebar docs" + }, + "sidebar.docs.category.Defining Cloud Service": { + "message": "定义 Cloud Service", + "description": "The label for category Defining Cloud Service in sidebar docs" + }, + "sidebar.docs.category.Hands-on Lab": { + "message": "实践实验室", + "description": "The label for category Hands-on Lab in sidebar docs" + }, + "sidebar.docs.category.Appfile": { + "message": "Appfile", + "description": "The label for category Appfile in sidebar 
docs" + }, + "sidebar.docs.category.Roadmap": { + "message": "Roadmap", + "description": "The label for category Roadmap in sidebar docs" + }, + "sidebar.docs.category.Application Deployment": { + "message": "Application Deployment", + "description": "The label for category Application Deployment in sidebar docs" + }, + "sidebar.docs.category.More Operations": { + "message": "更多操作", + "description": "The label for category More Operations in sidebar docs" + }, + "sidebar.docs.category.Platform Operation Guide": { + "message": "Platform Operation Guide", + "description": "The label for category Platform Operation Guide in sidebar docs" + }, + "sidebar.docs.category.Using KubeVela CLI": { + "message": "使用命令行工具", + "description": "The label for category Using KubeVela CLI in sidebar docs" + }, + "sidebar.docs.category.Managing Applications": { + "message": "管理应用", + "description": "The label for category Managing Applications in sidebar docs" + }, + "sidebar.docs.category.References": { + "message": "参考", + "description": "The label for category References in sidebar docs" + }, + "sidebar.docs.category.Learning OAM": { + "message": "开放应用模型", + "description": "The label for category Learning OAM in sidebar docs" + }, + "sidebar.docs.category.Environment System": { + "message": "交付环境系统", + "description": "The label for category Environment System in sidebar docs" + }, + "sidebar.docs.category.Workflow": { + "message": "定义交付工作流", + "description": "The label for category Workflow End User in sidebar docs" + }, + "sidebar.docs.category.Workflow System": { + "message": "工作流系统", + "description": "The label for category Workflow System in sidebar docs" + }, + "sidebar.docs.category.System Operation": { + "message": "系统运维", + "description": "The label for category System Operation in sidebar docs" + }, + "sidebar.docs.category.Customize Traits": { + "message": "自定义运维特征", + "description": "The label for category Customize Traits in sidebar docs" + }, + "sidebar.docs.category.Customize Components": { + "message": "自定义组件", + "description": "The label for category Customize Traits in sidebar docs" + }, + "sidebar.docs.category.CLI": { + "message": "CLI 命令行工具", + "description": "The label for category CLI in sidebar docs" + }, + "sidebar.docs.category.Capabilities": { + "message": "Capabilities", + "description": "The label for category Capabilities in sidebar docs" + }, + "sidebar.docs.category.Appendix": { + "message": "附录", + "description": "The label for category Appendix in sidebar docs" + }, + "sidebar.docs.category.Component System": { + "message": "组件系统", + "description": "The label for category Component System in sidebar docs" + }, + "sidebar.docs.category.User Manuals": { + "message": "用户手册", + "description": "The label for category User Guide in sidebar docs" + }, + "sidebar.docs.category.Components": { + "message": "选择待交付组件", + "description": "The label for category Components in sidebar docs" + }, + "sidebar.docs.category.Traits": { + "message": "绑定运维特征", + "description": "The label for category Traits in sidebar docs" + }, + "sidebar.docs.category.Policies": { + "message": "设定应用策略", + "description": "The label for category Policies in sidebar docs" + }, + "sidebar.docs.category.Case Studies": { + "message": "实践案例", + "description": "The label for category case studies in sidebar docs" + }, + "sidebar.docs.category.Observability": { + "message": "新增可观测性", + "description": "The label for category Observability in sidebar docs" + }, + "sidebar.docs.category.Scaling": { + "message": "扩缩容", + 
"description": "The label for category Scaler in sidebar docs" + }, + "sidebar.docs.category.Debugging": { + "message": "调试指南", + "description": "The label for category Debugging in sidebar docs" + }, + "sidebar.docs.category.Administrator Manuals": { + "message": "管理员手册", + "description": "The label for category Administrator Manuals in sidebar docs" + }, + "sidebar.docs.category.Simple Template": { + "message": "Simple Template", + "description": "The label for category Simple Template in sidebar docs" + }, + "sidebar.docs.category.Cloud Services": { + "message": "云服务组件", + "description": "The label for category Cloud Services in sidebar docs" + }, + "sidebar.docs.category.CUE Component": { + "message": "CUE 组件", + "description": "The label for category CUE Components in sidebar docs" + }, + "sidebar.docs.category.Addons": { + "message": "插件系统", + "description": "The extended add-ons" + } +} diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/README.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/README.md new file mode 100644 index 00000000..d66baec3 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/README.md @@ -0,0 +1,27 @@ +![alt](resources/KubeVela-03.png) + +*Make shipping applications more enjoyable.* + +# KubeVela + +KubeVela is a modern application engine that adapts to your application's needs, not the other way around. + +## Community + +- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel +- Gitter: [Discussion](https://gitter.im/oam-dev/community) +- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs) + +## Installation + +Installation guide is available on [this section](install). + +## Quick Start + +Quick start is available on [this section](quick-start). + +## Contributing +Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela. + +## Code of Conduct +KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/application-crd.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/application-crd.md new file mode 100644 index 00000000..7114c5d8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/application-crd.md @@ -0,0 +1,235 @@ +--- +title: Application CRD +--- + +本部分将逐步介绍如何使用 `Application` 对象来定义你的应用,并以声明式的方式进行相应的操作。 + +## 示例 + +下面的示例应用声明了一个具有 *Worker* 工作负载类型的 `backend` 组件和具有 *Web Service* 工作负载类型的 `frontend` 组件。 + +此外,`frontend`组件声明了具有 `sidecar` 和 `autoscaler` 的 `trait` 运维能力,这意味着工作负载将自动注入 `fluentd` 的sidecar,并可以根据CPU使用情况触发1-10个副本进行扩展。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: website +spec: + components: + - name: backend + type: worker + properties: + image: busybox + cmd: + - sleep + - '1000' + - name: frontend + type: webservice + properties: + image: nginx + traits: + - type: autoscaler + properties: + min: 1 + max: 10 + cpuPercent: 60 + - type: sidecar + properties: + name: "sidecar-test" + image: "fluentd" +``` + +### 部署应用 + +部署上述的 application yaml文件, 然后应用启动 + +```shell +$ kubectl get application -o yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: website +.... +status: + components: + - apiVersion: core.oam.dev/v1alpha2 + kind: Component + name: backend + - apiVersion: core.oam.dev/v1alpha2 + kind: Component + name: frontend +.... 
+  status: running
+
+```
+
+你可以看到一个命名为 `frontend` 并带有被注入的容器 `fluentd` 的 Deployment 正在运行。
+
+```shell
+$ kubectl get deploy frontend
+NAME       READY   UP-TO-DATE   AVAILABLE   AGE
+frontend   1/1     1            1           100m
+```
+
+另一个命名为 `backend` 的 Deployment 也在运行。
+
+```shell
+$ kubectl get deploy backend
+NAME      READY   UP-TO-DATE   AVAILABLE   AGE
+backend   1/1     1            1           100m
+```
+
+同样被 `autoscaler` trait 创建出来的还有一个 HPA。
+
+```shell
+$ kubectl get HorizontalPodAutoscaler frontend
+NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
+frontend   Deployment/frontend   /50%      1         10        1          101m
+```
+
+
+## 背后的原理
+
+在上面的示例中,`type: worker` 指的是该组件的字段内容(即下面的 `properties` 字段中的内容)将遵从名为 `worker` 的 `ComponentDefinition` 对象中的规范定义,如下所示:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: worker
+  annotations:
+    definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  schematic:
+    cue:
+      template: |
+        output: {
+            apiVersion: "apps/v1"
+            kind:       "Deployment"
+            spec: {
+                selector: matchLabels: {
+                    "app.oam.dev/component": context.name
+                }
+                template: {
+                    metadata: labels: {
+                        "app.oam.dev/component": context.name
+                    }
+                    spec: {
+                        containers: [{
+                            name:  context.name
+                            image: parameter.image
+
+                            if parameter["cmd"] != _|_ {
+                                command: parameter.cmd
+                            }
+                        }]
+                    }
+                }
+            }
+        }
+        parameter: {
+            image: string
+            cmd?: [...string]
+        }
+```
+
+因此,`backend` 的 `properties` 部分仅支持两个参数:`image` 和 `cmd`。这是由定义的 `.spec.template` 字段中的 `parameter` 列表约束的。
+
+类似的可扩展抽象机制也同样适用于 traits(运维能力)。
+例如,`frontend` 中的 `type: autoscaler` 指的是组件对应的 trait 的字段规范(即 trait 的 `properties` 部分)
+将由名为 `autoscaler` 的 `TraitDefinition` 对象约束,如下所示:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "configure k8s HPA for Deployment"
+  name: autoscaler
+spec:
+  appliesToWorkloads:
+    - webservice
+    - worker
+  schematic:
+    cue:
+      template: |
+        outputs: hpa: {
+            apiVersion: "autoscaling/v2beta2"
+            kind:       "HorizontalPodAutoscaler"
+            metadata: name: context.name
+            spec: {
+                scaleTargetRef: {
+                    apiVersion: "apps/v1"
+                    kind:       "Deployment"
+                    name:       context.name
+                }
+                minReplicas: parameter.min
+                maxReplicas: parameter.max
+                metrics: [{
+                    type: "Resource"
+                    resource: {
+                        name: "cpu"
+                        target: {
+                            type:               "Utilization"
+                            averageUtilization: parameter.cpuPercent
+                        }
+                    }
+                }]
+            }
+        }
+        parameter: {
+            min:        *1 | int
+            max:        *10 | int
+            cpuPercent: *50 | int
+        }
+```
+
+应用同样有一个 `sidecar` 的运维能力:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "add sidecar to the app"
+  name: sidecar
+spec:
+  appliesToWorkloads:
+    - webservice
+    - worker
+  schematic:
+    cue:
+      template: |-
+        patch: {
+            // +patchKey=name
+            spec: template: spec: containers: [parameter]
+        }
+        parameter: {
+            name:  string
+            image: string
+            command?: [...string]
+        }
+```
+
+在业务用户使用之前,我们认为所有用于定义的对象(Definition Object)都已经由平台团队声明并安装完毕了。所以,业务用户只需要专注于应用(`Application`)本身。
+
+请注意,KubeVela 的终端用户(业务研发)不需要了解定义对象,他们只需要学习如何使用平台已经安装的能力,这些能力通常还可以被可视化的表单展示出来(或者通过 JSON schema 对接其他方式)。请从[由定义生成前端表单](/docs/platform-engineers/openapi-v3-json-schema)部分的文档了解如何实现。
+
+### 惯例和“标准协议”
+
+在应用(`Application` 资源)部署到 Kubernetes 集群后,KubeVela 运行时将遵循以下“标准协议”和惯例来生成和管理底层资源实例。
+
+
+| Label | 描述 |
+| :--: | :---------: |
+|`workload.oam.dev/type=<component definition name>` | 其对应 `ComponentDefinition` 的名称 |
+|`trait.oam.dev/type=<trait definition name>` | 其对应 `TraitDefinition` 的名称 |
+|`app.oam.dev/name=<application name>` | 它所属的应用的名称 |
+|`app.oam.dev/component=<component name>` | 它所属的组件的名称 |
+|`trait.oam.dev/resource=<name of trait resource instance>` | 运维能力资源实例的名称 |
+|`app.oam.dev/appRevision=<name of app revision>` | 它所属的应用 revision 的名称 |
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/canary-blue-green.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/canary-blue-green.md
new file mode 100644
index 00000000..361ac46d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/canary-blue-green.md
@@ -0,0 +1,229 @@
+---
+title: 基于 Istio 的渐进式发布
+---
+
+## 简介
+
+KubeVela 背后的应用交付模型(OAM)是一个从设计与实现上都高度可扩展的模型。因此,KubeVela 不需要任何“脏乱差”的胶水代码或者脚本就可以同任何云原生技术和工具(比如 Service Mesh)实现集成,让社区生态中各种先进技术立刻为你的应用交付助上“一臂之力”。
+
+
+本文将会介绍如何使用 KubeVela 结合 [Istio](https://istio.io/latest/) 进行复杂的金丝雀发布流程。在这个过程中,KubeVela 会帮助你:
+- 将 Istio 的能力进行封装和抽象后再交付给用户使用,使得用户无需成为 Istio 专家就可以直接使用这个金丝雀发布的场景(KubeVela 会为你提供一个封装好的 Rollout 运维特征);
+- 通过声明式工作流来设计金丝雀发布的步骤,以及执行发布/回滚,而无需通过“脏乱差”的脚本或者人工的方式管理这个过程。
+
+
+本案例中,我们将使用经典的微服务应用 [bookinfo](https://istio.io/latest/docs/examples/bookinfo/?ie=utf-8&hl=en&docs-search=Canary) 来展示上述金丝雀发布过程。
+
+## 准备工作
+
+开启 Istio 集群插件:
+```shell
+vela addon enable istio
+```
+
+因为后面的例子运行在 default namespace,需要为 default namespace 打上 Istio 自动注入 sidecar 的标签。
+
+```shell
+kubectl label namespace default istio-injection=enabled
+```
+
+## 初次部署
+
+执行下面的命令,部署 bookinfo 应用。
+
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/first-deploy.yaml
+```
+
+该应用的组件架构和访问关系如下所示:
+
+![book-info-struct](../resources/book-info-struct.jpg)
+
+该应用包含四个组件,其中组件 productpage、details、ratings 均配置了一个暴露端口(expose)运维特征,用来在集群内暴露服务。
+组件 reviews 配置了一个金丝雀流量发布(canary-traffic)的运维特征。
+
+productpage 组件还配置了一个网关入口(istio-gateway)的运维特征,从而让该组件接收进入集群的流量。这个运维特征通过设置 `gateway: ingressgateway` 来使用 Istio 的默认网关实现,设置 `hosts: "*"` 来指定携带任意 host 信息的请求均可进入网关。
+```shell
+...
+  - name: productpage
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
+      port: 9080
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: istio-gateway
+        properties:
+          hosts:
+            - "*"
+          gateway: ingressgateway
+          match:
+            - exact: /productpage
+            - prefix: /static
+            - exact: /login
+            - prefix: /api/v1/products
+          port: 9080
+...
+```
+
+你可以通过执行下面的命令将网关的端口映射到本地。
+```shell
+kubectl port-forward service/istio-ingressgateway -n istio-system 19082:80
+```
+通过浏览器访问 `127.0.0.1:19082` 将会看到下面的页面。
+
+![pic-v2](../resources/canary-pic-v2.jpg)
+
+## 金丝雀发布
+
+接下来我们以 `reviews` 组件为例,模拟一次金丝雀发布的完整过程,即先升级一部分组件实例,同时调整流量,以此达到渐进式灰度发布的目的。
+
+执行下面的命令,来更新应用。
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/rollout-v2.yaml
+```
+这次操作更新了 reviews 组件的镜像,从之前的 v2 升级到了 v3。同时 reviews 组件的灰度发布(Rollout)运维特征指定了:升级的目标实例个数为 2 个,分两个批次升级,每批升级 1 个实例。
+
+```shell
+...
+  - name: reviews
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
+      port: 9080
+      volumes:
+        - name: wlp-output
+          type: emptyDir
+          mountPath: /opt/ibm/wlp/output
+        - name: tmp
+          type: emptyDir
+          mountPath: /tmp
+
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: rollout
+        properties:
+          targetSize: 2
+          rolloutBatches:
+            - replicas: 1
+            - replicas: 1
+
+      - type: canary-traffic
+        properties:
+          port: 9080
+...
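+      # targetSize:升级后的目标实例总数;rolloutBatches:分两批、每批升级 1 个实例
+      # canary-traffic:为 9080 端口开启按权重的金丝雀流量切分,配合下方工作流中的 weightedTargets 使用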
+```
+
+这次更新还为应用新增了一个升级的执行工作流,该工作流包含三个步骤。
+
+第一步通过指定 `batchPartition` 等于 0 设置只升级第一批次的实例,并通过 `traffic.weightedTargets` 将 10% 的流量切换到新版本的实例上面。
+
+完成第一步之后,工作流执行到第二步会进入暂停状态,等待用户校验服务状态。
+
+工作流的第三步是完成剩下实例的升级,并将全部流量切换至新的组件版本上。
+
+```shell
+...
+  workflow:
+    steps:
+      - name: rollout-1st-batch
+        type: canary-rollout
+        properties:
+          # just upgrade first batch of component
+          batchPartition: 0
+          traffic:
+            weightedTargets:
+              - revision: reviews-v1
+                weight: 90 # 90% stays on the old revision
+              - revision: reviews-v2
+                weight: 10 # 10% shift to new version
+
+      # give user time to verify part of traffic shifting to newRevision
+      - name: manual-approval
+        type: suspend
+
+      - name: rollout-rest
+        type: canary-rollout
+        properties:
+          # upgrade all batches of component
+          batchPartition: 1
+          traffic:
+            weightedTargets:
+              - revision: reviews-v2
+                weight: 100 # 100% shift to new version
+...
+```
+
+更新完成之后,再在浏览器多次访问之前的网址,会发现有大约 10% 的概率看到下面这个新的页面:
+
+![pic-v3](../resources/canary-pic-v3.jpg)
+
+可见新版本的页面由之前的黑色五角星变成了红色五角星。
+
+### 继续完成全量发布
+
+如果在人工校验时,发现服务符合预期,需要继续执行工作流,完成全量发布。你可以通过执行下面的命令完成这一操作。
+
+```shell
+vela workflow resume book-info
+```
+
+在浏览器上继续多次访问网页,会发现五角星将一直是红色的。
+
+### 终止发布工作流并回滚
+
+如果在人工校验时,发现服务不符合预期,需要终止预先定义好的发布工作流,并将流量和实例切换回之前的版本。你可以通过执行下面的命令完成这一操作。
+
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/revert-in-middle.yaml
+```
+
+这次更新删除了之前定义好的工作流,从而终止发布流程,并通过修改灰度发布运维特征的 `targetRevision`,将其指向之前的组件版本 `reviews-v1`。此外,这次更新还删除了组件的金丝雀流量发布(canary-traffic)运维特征,将全部流量打到同一个组件版本 `reviews-v1` 上。
+
+```shell
+...
+  - name: reviews
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
+      port: 9080
+      volumes:
+        - name: wlp-output
+          type: emptyDir
+          mountPath: /opt/ibm/wlp/output
+        - name: tmp
+          type: emptyDir
+          mountPath: /tmp
+
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: rollout
+        properties:
+          targetRevision: reviews-v1
+          batchPartition: 1
+          targetSize: 2
+          # This means to rollout two replicas in one batch.
+          rolloutBatches:
+            - replicas: 2
+...
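+          # targetRevision 指回 reviews-v1:后续新建的实例将使用旧版本,实现回滚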
+```
+
+在浏览器上继续访问网址,会发现五角星又变回了黑色。
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/gitops.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/gitops.md
new file mode 100644
index 00000000..356761d1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/gitops.md
@@ -0,0 +1,177 @@
+---
+title: 基于工作流的 GitOps
+---
+
+本案例将介绍如何在 GitOps 场景下使用 KubeVela,并介绍这样做的好处是什么。
+
+## 简介
+
+GitOps 是一种现代化的持续交付手段,它允许开发人员通过直接更改 Git 仓库中的代码和配置来自动部署应用,在提高部署生产力的同时,也通过分支回滚等能力提高了可靠性。其具体的好处可以查看[这篇文章](https://www.weave.works/blog/what-is-gitops-really),本文将不再赘述。
+
+KubeVela 作为一个声明式的应用交付控制平面,天然就可以以 GitOps 的方式进行使用,并且这样做会在 GitOps 的基础上为用户提供更多的益处和端到端的体验,包括:
+- 应用交付工作流(CD 流水线)
+  - 即:KubeVela 支持在 GitOps 模式中描述过程式的应用交付,而不只是简单的声明终态;
+- 处理部署过程中的各种依赖关系和拓扑结构;
+- 在现有各种 GitOps 工具的语义之上提供统一的上层抽象,简化应用交付与管理过程;
+- 统一进行云服务的声明、部署和服务绑定;
+- 提供开箱即用的交付策略(金丝雀、蓝绿发布等);
+- 提供开箱即用的混合云/多云部署策略(放置规则、集群过滤规则等);
+- 在多环境交付中提供 Kustomize 风格的 Patch 来描述部署差异,而无需学习任何 Kustomize 本身的细节;
+- …… 以及更多。
+
+在本文中,我们主要讲解直接使用 KubeVela 在 GitOps 模式下进行交付的步骤。
+
+> 提示:你也可以通过类似的步骤使用 ArgoCD 等 GitOps 工具来间接使用 KubeVela,细节的操作文档我们会在后续发布中提供。
+
+## 准备代码仓库
+
+首先,准备一个 Git 仓库,里面含有一个 `Application` 配置文件,一些源代码以及对应的 Dockerfile。
+
+代码的实现逻辑非常简单,会启动一个服务,并显示对应的 Version 版本。而在 `Application` 当中,我们会通过一个 `webservice` 类型的组件启动该服务,并添加一个 `ingress` 的运维特征以方便访问:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+    - name: test-server
+      type: webservice
+      properties:
+        # 在创建完自动部署文件后,将 `default:gitops` 替换为其 namespace 和 name
+        image: # {"$imagepolicy": "default:gitops"}
+        port: 8088
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8088
+```
+
+我们希望用户改动代码进行提交后,自动构建出镜像并推送到镜像仓库。这一步 CI 可以通过集成 GitHub Actions、Jenkins 或者其他 CI 工具来实现。在本例中,我们借助 GitHub Actions 来完成持续集成。具体的代码文件及配置可参考[示例仓库](https://github.com/oam-dev/samples/tree/master/9.GitOps_Demo)。
+
+## 配置秘钥信息
+
+在新的镜像推送到镜像仓库后,KubeVela 会识别到新的镜像,并更新仓库及集群中的 `Application` 配置文件。因此,我们需要一个含有 Git 信息的 Secret,使 KubeVela 能够向 Git 仓库提交变更。
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-secret
+type: kubernetes.io/basic-auth
+stringData:
+  username: <your-username>
+  password: <your-password>
+```
+
+## 编写自动部署配置文件
+
+完成了上述基础配置后,我们可以在本地新建一个自动部署配置文件,关联对应的 Git 仓库以及镜像仓库信息:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: git-app
+spec:
+  components:
+    - name: gitops
+      type: kustomize
+      properties:
+        repoType: git
+        # 将此处替换成你的 git 仓库地址
+        url: <your-repo-url>
+        # 关联 git secret
+        secretRef: my-secret
+        # 自动拉取配置的时间间隔
+        pullInterval: 1m
+        git:
+          # 指定监听的 branch
+          branch: master
+          # 指定监听的路径
+          path: .
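+        # 下面的 imageRepository 部分用于监听镜像仓库的新 Tag,并自动回写 Git 仓库中的镜像字段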
+        imageRepository:
+          # 镜像地址
+          image:
+          # 如果这是一个私有的镜像仓库,可以通过 `kubectl create secret docker-registry` 创建对应的镜像密钥并与之关联
+          # secretRef: imagesecret
+          filterTags:
+            # 可对镜像 tag 进行过滤
+            pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
+            extract: '$ts'
+          # 通过 policy 筛选出最新的镜像 Tag 并用于更新
+          policy:
+            numerical:
+              order: asc
+          # 追加提交信息
+          commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"
+```
+
+将上述文件部署到集群中后,查看集群中的应用,可以看到,应用 `git-app` 自动拉取了 Git 仓库中的应用配置并部署到了集群中:
+
+```shell
+$ vela ls
+
+APP                   COMPONENT     TYPE         TRAITS    PHASE     HEALTHY   STATUS   CREATED-TIME
+first-vela-workflow   test-server   webservice   ingress   running   healthy            2021-09-10 11:23:34 +0800 CST
+git-app               gitops        kustomize              running   healthy            2021-09-10 11:23:32 +0800 CST
+```
+
+通过 `curl` 对应的 `Ingress`,可以看到目前的版本是 0.1.5:
+
+```shell
+$ curl -H "Host:testsvc.example.com" http://
+Version: 0.1.5
+```
+
+## 修改代码
+
+完成首次部署后,我们可以通过修改 Git 仓库中的代码,来完成自动部署。
+
+将代码文件中的 `Version` 改为 `0.1.6`:
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/http"
+)
+
+const VERSION = "0.1.6"
+
+func main() {
+	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+		_, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
+	})
+	if err := http.ListenAndServe(":8088", nil); err != nil {
+		println(err.Error())
+	}
+}
+```
+
+提交该改动至代码仓库,可以看到,我们配置的 CI 流水线开始构建镜像并推送至镜像仓库。
+
+而 KubeVela 会通过监听镜像仓库,根据最新的镜像 Tag 来更新代码仓库中的 `Application`。此时,可以看到代码仓库中有一条来自 `kubevelabot` 的提交,提交信息均带有 `Update image automatically.` 前缀。你也可以通过 `{{range .Updated.Images}}{{println .}}{{end}}` 在 `commitMessage` 字段中追加你所需要的信息。
+
+![alt](../resources/gitops-commit.png)
+
+> 值得注意的是,来自 `kubevelabot` 的提交不会再次触发流水线导致重复构建,因为我们在 CI 配置的时候,将来自 KubeVela 的提交过滤掉了。
+>
+> ```yaml
+> jobs:
+>  publish:
+>    if: "!contains(github.event.head_commit.message, 'Update image automatically')"
+> ```
+
+重新查看集群中的应用,可以看到经过一段时间后,`Application` 的镜像已经被更新。通过 `curl` 对应的 `Ingress` 查看当前版本:
+
+```shell
+$ curl -H "Host:testsvc.example.com" http://
+Version: 0.1.6
+```
+
+版本已被成功更新!至此,我们完成了从变更代码,到自动部署至集群的全部操作。
+
+KubeVela 会通过你配置的 `interval` 时间间隔,每隔一段时间分别从代码仓库及镜像仓库中获取最新信息:
+* 当 Git 仓库中的配置文件被更新时,KubeVela 将根据最新的配置更新集群中的应用。
+* 当镜像仓库中多了新的 Tag 时,KubeVela 将根据你配置的 policy 规则,筛选出最新的镜像 Tag,并更新到 Git 仓库中。而当代码仓库中的文件被更新后,KubeVela 将重复第一步,更新集群中的文件,从而达到自动部署的效果。
+
+通过与 GitOps 的集成,KubeVela 可以帮助用户加速部署应用,更为简洁地完成持续部署。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/jenkins-cicd.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/jenkins-cicd.md
new file mode 100644
index 00000000..02d84efd
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/jenkins-cicd.md
@@ -0,0 +1,149 @@
+---
+title: Jenkins CI/CD
+---
+
+本文将介绍如何使用 KubeVela 同已有的 CI/CD 工具(比如 Jenkins)共同协作来进行应用的持续交付,并解释这样集成的好处是什么。
+
+## 简介
+
+KubeVela 作为一个普适的应用交付控制平面,只需要一点点集成工作就可以同任何现有的 CI/CD 系统对接起来,并且为它们带来一系列现代云原生应用交付的能力,比如:
+- 混合云/多云应用交付;
+- 跨环境发布(Promotion);
+- 基于 Service Mesh 的发布与回滚;
+- 处理部署过程中的各种依赖关系和拓扑结构;
+- 统一进行云服务的声明、部署和服务绑定;
+- 无需强迫你的团队采纳完整的 GitOps 协作方式,即可享受 GitOps 技术本身的[一系列好处](https://www.weave.works/blog/what-is-gitops-really);
+- …… 以及更多。
+
+接下来,本文将会以一个 HTTP 服务的开发部署为例,介绍 KubeVela + Jenkins 方式下应用的持续集成与持续交付步骤。这个应用的具体代码在[这个 GitHub 库中](https://github.com/Somefive/KubeVela-demo-CICD-app)。
+
+## 准备工作
+
+在对接之前,用户首先需要确保以下环境。
+
+1. 已部署好 Jenkins 服务并配置了 Docker 在 Jenkins 中的环境,包括相关插件及镜像仓库的访问权限。
+2. 已配置好的 Git 仓库并开启 Webhook。确保 Git 仓库对应分支的变化能够通过 Webhook 触发 Jenkins 流水线的运行。
+3. 准备好需要部署的 Kubernetes 集群环境,并在环境中安装 KubeVela 基础组件及 apiserver,确保 KubeVela apiserver 能够从公网访问到(可以先用下面的示意命令做一个简单的连通性检查)。
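+
+如上面第 3 点所述,在正式对接之前,可以先粗略确认 apiserver 地址的公网连通性(以下命令仅为示意,IP 取自下文流水线配置中的 `APISERVER_URL`,请替换成你自己的地址):
+
+```shell
+curl -i http://47.88.24.19
+```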
+
+## 对接 Jenkins 与 KubeVela apiserver
+
+在 Jenkins 中以下面的 Groovy 脚本为例设置部署流水线。可以将流水线中的 Git 地址、镜像地址、apiserver 的地址、应用命名空间及应用名替换成自己的配置,同时在自己的代码仓库中存放 Dockerfile 及 app.yaml,用来构建及部署 KubeVela 应用。
+
+```groovy
+pipeline {
+    agent any
+    environment {
+        GIT_BRANCH = 'prod'
+        GIT_URL = 'https://github.com/Somefive/KubeVela-demo-CICD-app.git'
+        DOCKER_REGISTRY = 'https://registry.hub.docker.com'
+        DOCKER_CREDENTIAL = 'DockerHubCredential'
+        DOCKER_IMAGE = 'somefive/kubevela-demo-cicd-app'
+        APISERVER_URL = 'http://47.88.24.19'
+        APPLICATION_YAML = 'app.yaml'
+        APPLICATION_NAMESPACE = 'kubevela-demo-namespace'
+        APPLICATION_NAME = 'cicd-demo-app'
+    }
+    stages {
+        stage('Prepare') {
+            steps {
+                script {
+                    def checkout = git branch: env.GIT_BRANCH, url: env.GIT_URL
+                    env.GIT_COMMIT = checkout.GIT_COMMIT
+                    env.GIT_BRANCH = checkout.GIT_BRANCH
+                    echo "env.GIT_BRANCH=${env.GIT_BRANCH},env.GIT_COMMIT=${env.GIT_COMMIT}"
+                }
+            }
+        }
+        stage('Build') {
+            steps {
+                script {
+                    docker.withRegistry(env.DOCKER_REGISTRY, env.DOCKER_CREDENTIAL) {
+                        def customImage = docker.build(env.DOCKER_IMAGE)
+                        customImage.push()
+                    }
+                }
+            }
+        }
+        stage('Deploy') {
+            steps {
+                sh 'wget -q "https://github.com/mikefarah/yq/releases/download/v4.12.1/yq_linux_amd64"'
+                sh 'chmod +x yq_linux_amd64'
+                script {
+                    def app = sh (
+                        script: "./yq_linux_amd64 eval -o=json '.spec' ${env.APPLICATION_YAML} | sed -e 's/GIT_COMMIT/$GIT_COMMIT/g'",
+                        returnStdout: true
+                    )
+                    echo "app: ${app}"
+                    def response = httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', httpMode: 'POST', requestBody: app, url: "${env.APISERVER_URL}/v1/namespaces/${env.APPLICATION_NAMESPACE}/applications/${env.APPLICATION_NAME}"
+                    println('Status: '+response.status)
+                    println('Response: '+response.content)
+                }
+            }
+        }
+    }
+}
+```
+
+之后向流水线中使用的 Git 仓库的分支推送代码变更,Git 仓库的 Webhook 会触发 Jenkins 中新创建的流水线。该流水线会自动构建代码镜像并推送至镜像仓库,然后对 KubeVela apiserver 发送 POST 请求,将仓库中的应用配置文件部署到 Kubernetes 集群中。其中 app.yaml 可以参照以下样例。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: kubevela-demo-app
+spec:
+  components:
+    - name: kubevela-demo-app-web
+      type: webservice
+      properties:
+        image: somefive/kubevela-demo-cicd-app
+        imagePullPolicy: Always
+        port: 8080
+      traits:
+        - type: rollout
+          properties:
+            rolloutBatches:
+              - replicas: 2
+              - replicas: 3
+            batchPartition: 0
+            targetSize: 5
+        - type: labels
+          properties:
+            jenkins-build-commit: GIT_COMMIT
+        - type: ingress
+          properties:
+            domain:
+            http:
+              "/": 8088
+```
+
+其中 GIT_COMMIT 会在 Jenkins 流水线中被替换为当前的 git commit id。这时可以通过 kubectl 命令查看 Kubernetes 集群中应用的部署情况。
+
+```bash
+$ kubectl get app -n kubevela-demo-namespace
+NAME            COMPONENT               TYPE         PHASE     HEALTHY   STATUS   AGE
+cicd-demo-app   kubevela-demo-app-web   webservice   running   true               102s
+$ kubectl get deployment -n kubevela-demo-namespace
+NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
+kubevela-demo-app-web-v1   2/2     2            2           111s
+$ kubectl get ingress -n kubevela-demo-namespace
+NAME                    CLASS   HOSTS   ADDRESS          PORTS   AGE
+kubevela-demo-app-web                   198.11.175.125   80      117s
+```
+
+在部署的应用文件中,我们使用了灰度发布(Rollout)的特性,应用初始发布先创建 2 个 Pod,以便进行金丝雀验证。待验证完毕,你可以将应用配置中 Rollout 特性的 `batchPartition: 0` 删去,以便完成剩余实例的更新发布。这个机制大大提高了发布的安全性和稳定性,同时你也可以根据实际需要,调整 Rollout 发布策略。
+
+```bash
+$ kubectl edit app -n kubevela-demo-namespace
+application.core.oam.dev/cicd-demo-app edited
+$ kubectl get deployment -n kubevela-demo-namespace
+NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
+kubevela-demo-app-web-v1   5/5     5            5           4m16s
+$ curl http:///
+Version: 0.1.2 +``` + +## 更多 + +详细的环境部署流程以及更加完整的应用滚动更新可以参考[博客](/blog/2021/09/02/kubevela-jenkins-cicd)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/li-auto-inc.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/li-auto-inc.md new file mode 100644 index 00000000..b12b1d00 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/li-auto-inc.md @@ -0,0 +1,572 @@ +--- +title: 实践案例-理想汽车 +--- +## 背景 + +理想汽车后台服务采用的是微服务架构,虽然借助 Kubernetes 进行部署,但运维工作依然很复杂。并具有以下特点: + +- 一个应用能运行起来并对外提供服务,正常情况下都需要配套的db实例以及 redis 集群支撑。 +- 应用之间存在依赖关系,对于部署顺序有比较强的诉求。 +- 应用部署流程中需要和外围系统(如:配置中心)交互。 + +下面以一个理想汽车的经典场景为例,介绍如何借助 KubeVela 实现以上诉求。 + +## 典型场景介绍 + +![场景架构](../resources/li-auto-inc.jpg) + +这里面包含两个应用,分别是 `base-server` 和 `proxy-server`, 整体应用部署需要满足以下条件: + +- base-server 成功启动(状态 ready)后需要往配置中心(apollo)注册信息。 +- base-server 需要绑定到 service 和 ingress 进行负载均衡。 +- proxy-server 需要在 base-server 成功运行后启动,并需要获取到 base-server 对应的 service 的 clusterIP。 +- proxy-server 依赖 redis 中间件,需要在 redis 成功运行后启动。 +- proxy-server 需要从配置中心(apollo)读取 base-server 的相关注册信息。 + +可见整个部署过程,如果人为操作,会变得异常困难以及容易出错。在借助 KubeVela 后,可以轻松实现场景的自动化和一键式运维。 + +## 解决方案 + +在 KubeVela 上,以上诉求可以拆解为以下 KubeVela 的模型: + +- 组件部分: 包含三个分别是 base-server 、redis 、proxy-server。 +- 运维特征: ingress (包括 service) 作为一个通用的负载均衡运维特征。 +- 工作流: 实现组件按照依赖进行部署,并实现和配置中心的交互。 +- 应用部署计划: 理想汽车的开发者可以通过 KubeVela 的应用部署计划完成应用发布。 + +详细过程如下: + +## 平台的功能定制 + +理想汽车的平台工程师通过以下步骤完成方案中所涉及的能力,并向开发者用户透出(通过编写 definition 的方式实现)。 + +### 1.定义组件 + +- 编写 base-service 的组件定义,使用 `Deployment` 作为工作负载,并向终端用户透出参数 `image` 和 `cluster`。对于终端用户来说,之后在发布时只需要关注镜像以及部署的集群名称。 +- 编写 proxy-service 的组件定义,使用 `argo rollout` 作为工作负载,同样向终端用户透出参数 `image` 和 `cluster`。 + +如下所示: + +``` +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: base-service +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + kube: + template: + apiVersion: apps/v1 + kind: Deployment + metadata: + labels: + appId: BASE-SERVICE + appName: base-service + version: 0.0.1 + name: base-service + spec: + replicas: 2 + revisionHistoryLimit: 5 + selector: + matchLabels: + app: base-service + template: + metadata: + labels: + antiAffinity: none + app: base-service + appId: BASE-SERVICE + version: 0.0.1 + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - base-service + - key: antiAffinity + operator: In + values: + - none + topologyKey: kubernetes.io/hostname + weight: 100 + containers: + - env: + - name: NODE_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: APP_NAME + value: base-service + - name: LOG_BASE + value: /data/log + - name: RUNTIME_CLUSTER + value: default + image: base-service + imagePullPolicy: Always + name: base-service + ports: + - containerPort: 11223 + protocol: TCP + - containerPort: 11224 + protocol: TCP + volumeMounts: + - mountPath: /tmp/data/log/base-service + name: log-volume + - mountPath: /data + name: sidecar-sre + - mountPath: /app/skywalking + name: skywalking + initContainers: + - args: + - 'echo "do something" ' + command: + - /bin/sh + - -c + env: + - name: NODE_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: 
POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: APP_NAME + value: base-service + image: busybox + imagePullPolicy: Always + name: sidecar-sre + resources: + limits: + cpu: 100m + memory: 100Mi + volumeMounts: + - mountPath: /tmp/data/log/base-service + name: log-volume + - mountPath: /scratch + name: sidecar-sre + terminationGracePeriodSeconds: 120 + volumes: + - hostPath: + path: /logs/dev/base-service + type: DirectoryOrCreate + name: log-volume + - emptyDir: {} + name: sidecar-sre + - emptyDir: {} + name: skywalking + parameters: + - name: image + required: true + type: string + fieldPaths: + - "spec.template.spec.containers[0].image" + - name: cluster + required: true + type: string + fieldPaths: + - "spec.template.spec.containers[0].env[6].value" + - "spec.template.metadata.labels.cluster" +--- +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: proxy-service +spec: + workload: + definition: + apiVersion: argoproj.io/v1alpha1 + kind: Rollout + schematic: + kube: + template: + apiVersion: argoproj.io/v1alpha1 + kind: Rollout + metadata: + labels: + appId: PROXY-SERVICE + appName: proxy-service + version: 0.0.0 + name: proxy-service + spec: + replicas: 1 + revisionHistoryLimit: 1 + selector: + matchLabels: + app: proxy-service + strategy: + canary: + steps: + - setWeight: 50 + - pause: {} + template: + metadata: + labels: + app: proxy-service + appId: PROXY-SERVICE + cluster: default + version: 0.0.1 + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - proxy-service + topologyKey: kubernetes.io/hostname + weight: 100 + containers: + - env: + - name: NODE_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: APP_NAME + value: proxy-service + - name: LOG_BASE + value: /app/data/log + - name: RUNTIME_CLUSTER + value: default + image: proxy-service:0.1 + imagePullPolicy: Always + name: proxy-service + ports: + - containerPort: 11024 + protocol: TCP + - containerPort: 11025 + protocol: TCP + volumeMounts: + - mountPath: /tmp/data/log/proxy-service + name: log-volume + - mountPath: /app/data + name: sidecar-sre + - mountPath: /app/skywalking + name: skywalking + initContainers: + - args: + - 'echo "do something" ' + command: + - /bin/sh + - -c + env: + - name: NODE_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: APP_NAME + value: proxy-service + image: busybox + imagePullPolicy: Always + name: sidecar-sre + resources: + limits: + cpu: 100m + memory: 100Mi + volumeMounts: + - mountPath: /tmp/data/log/proxy-service + name: log-volume + - mountPath: /scratch + name: sidecar-sre + terminationGracePeriodSeconds: 120 + volumes: + - hostPath: + path: /app/logs/dev/proxy-service + type: DirectoryOrCreate + name: log-volume + - emptyDir: {} + name: sidecar-sre + - emptyDir: {} + name: skywalking + parameters: + - name: image + required: true + type: string + fieldPaths: + - "spec.template.spec.containers[0].image" + - name: cluster + required: true + type: string + fieldPaths: + - "spec.template.spec.containers[0].env[5].value" + - "spec.template.metadata.labels.cluster" +``` + +### 2.定义运维特征 + 
+编写用于负载均衡的运维特征的定义,其通过生成 Kubernetes 中的原生资源 `Service` 和 `Ingress` 实现负载均衡。 + +向终端用户透出的参数包括 domain 和 http ,其中 domain 可以指定域名,http 用来设定路由,具体将部署服务的端口映射为不同的 url path。 + +如下所示: + +``` +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: ingress +spec: + schematic: + cue: + template: | + parameter: { + domain: string + http: [string]: int + } + outputs: { + "service": { + apiVersion: "v1" + kind: "Service" + metadata: { + name: context.name + namespace: context.namespace + } + spec: { + selector: app: context.name + ports: [for ph, pt in parameter.http{ + protocol: "TCP" + port: pt + targetPort: pt + }] + } + } + "ingress": { + apiVersion: "networking.k8s.io/v1" + kind: "Ingress" + metadata: { + name: "\(context.name)-ingress" + namespace: context.namespace + } + spec: rules: [{ + host: parameter.domain + http: paths: [for ph, pt in parameter.http { + path: ph + pathType: "Prefix" + backend: service: { + name: context.name + port: number: pt + } + }] + }] + } + } +``` + +### 3.定义工作流的步骤 + +- 定义 apply-base 工作流步骤: 完成部署 base-server,等待组件成功启动后,往注册中心注册信息。透出参数为 component,终端用户在流水线中使用步骤 apply-base 时只需要指定组件名称。 +- 定义 apply-helm 工作流步骤: 完成部署 redis helm chart,并等待 redis 成功启动。透出参数为 component,终端用户在流水线中使用步骤 apply-helm 时只需要指定组件名称。 +- 定义 apply-proxy 工作流步骤: 完成部署 proxy-server,并等待组件成功启动。透出参数为 component 和 backendIP,其中 component 为组件名称,backendIP 为 proxy-server 服务依赖组件的 IP。 + +如下所示: + +``` +apiVersion: core.oam.dev/v1beta1 +kind: WorkflowStepDefinition +metadata: + name: apply-base + namespace: vela-system +spec: + schematic: + cue: + template: |- + import ("vela/op") + parameter: { + component: string + } + apply: op.#ApplyComponent & { + component: parameter.component + } + + // 等待 deployment 可用 + wait: op.#ConditionalWait & { + continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation + } + + message: {...} + // 往三方配置中心apollo写配置 + notify: op.#HTTPPost & { + url: "appolo-address" + request: body: json.Marshal(message) + } + + // 暴露 service 的 ClusterIP + clusterIP: apply.traits["service"].value.spec.clusterIP +--- +apiVersion: core.oam.dev/v1beta1 +kind: WorkflowStepDefinition +metadata: + name: apply-helm + namespace: vela-system +spec: + schematic: + cue: + template: |- + import ("vela/op") + parameter: { + component: string + } + apply: op.#ApplyComponent & { + component: parameter.component + } + + chart: op.#Read & { + value: { + // redis 的元数据 + ... + } + } + // 等待 redis 可用 + wait: op.#ConditionalWait & { + // todo + continue: chart.value.status.phase=="ready" + } +--- +apiVersion: core.oam.dev/v1beta1 +kind: WorkflowStepDefinition +metadata: + name: apply-proxy + namespace: vela-system +spec: + schematic: + cue: + template: |- + import ( + "vela/op" + "encoding/json" + ) + parameter: { + component: string + backendIP: string + } + + // 往三方配置中心apollo读取配置 + // config: op.#HTTPGet + + apply: op.#ApplyComponent & { + component: parameter.component + // 给环境变量中注入BackendIP + workload: patch: spec: template: spec: { + containers: [{ + // patchKey=name + env: [{name: "BackendIP",value: parameter.backendIP}] + },...] 
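+          // 补充注释:上面的 patch 会在部署时把依赖服务的 clusterIP 作为环境变量 BackendIP 注入容器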
+        }
+      }
+
+      // 等待 argo.rollout 可用
+      wait: op.#ConditionalWait & {
+        continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
+      }
+```
+
+### 用户使用
+
+理想汽车的开发工程师接下来就可以使用 Application 完成应用的发布。开发工程师可以直接使用如上平台工程师在 KubeVela 上定制的通用能力,轻松完成应用部署计划的编写。
+
+> 在下面的例子中,通过 workflow 的数据传递机制 input/output,将 base-server 的 clusterIP 传递给 proxy-server。
+
+```
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: lixiang-app
+spec:
+  components:
+    - name: base-service
+      type: base-service
+      properties:
+        image: nginx:1.14.2
+        # 用于区分 apollo 环境
+        cluster: default
+      traits:
+        - type: ingress
+          properties:
+            domain: base-service.dev.example.com
+            http:
+              "/": 11001
+    # redis 无依赖,启动后 service 的 endpoints 需要通过 http 接口写入到 apollo
+    - name: "redis"
+      type: helm
+      properties:
+        chart: "redis-cluster"
+        version: "6.2.7"
+        repoUrl: "https://charts.bitnami.com/bitnami"
+        repoType: helm
+    - name: proxy-service
+      type: proxy-service
+      properties:
+        image: nginx:1.14.2
+        # 用于区分 apollo 环境
+        cluster: default
+      traits:
+        - type: ingress
+          properties:
+            domain: proxy-service.dev.example.com
+            http:
+              "/": 11002
+  workflow:
+    steps:
+      - name: apply-base-service
+        type: apply-base
+        outputs:
+          - name: baseIP
+            exportKey: clusterIP
+        properties:
+          component: base-service
+      - name: apply-redis
+        type: apply-helm
+        properties:
+          component: redis
+      - name: apply-proxy-service
+        type: apply-proxy
+        inputs:
+          - from: baseIP
+            parameterKey: backendIP
+        properties:
+          component: proxy-service
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/multi-cluster.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/multi-cluster.md
new file mode 100644
index 00000000..c251347c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/multi-cluster.md
@@ -0,0 +1,302 @@
+---
+title: 多集群应用交付
+---
+
+本章节会介绍如何使用 KubeVela 完成应用的多集群交付。
+
+## 简介
+
+如今,越来越多的企业及开发者出于不同的原因,开始在多集群环境中进行应用交付:
+
+* 由于 Kubernetes 集群存在着部署规模的局限性(单一集群最多容纳 5k 节点),需要应用多集群技术来部署、管理海量的应用。
+* 考虑到稳定性及高可用性,同一个应用可以部署在多个集群中,以实现容灾、异地多活等需求。
+* 应用可能需要部署在不同的区域,来满足不同政府对于数据安全性的政策需求。
+
+下文将会介绍如何在 KubeVela 中使用多集群技术,帮助你快速将应用部署在多集群环境中。
+
+## 准备工作
+
+在使用多集群应用部署之前,你需要将子集群通过 KubeConfig 加入到 KubeVela 的管控中来。Vela CLI 可以帮你实现这一点。
+
+```shell script
+vela cluster join <your kubeconfig path>
+```
+
+该命令会自动使用 KubeConfig 中的 `context.cluster` 字段作为集群名称,你也可以使用 `--name` 参数来指定,如
+
+```shell
+vela cluster join stage-cluster.kubeconfig --name cluster-staging
+vela cluster join prod-cluster.kubeconfig --name cluster-prod
+```
+
+在子集群加入 KubeVela 中后,你同样可以使用 CLI 命令来查看当前正在被 KubeVela 管控的所有集群。
+
+```bash
+$ vela cluster list
+CLUSTER           TYPE   ENDPOINT
+cluster-prod      tls    https://47.88.4.97:6443
+cluster-staging   tls    https://47.88.7.230:6443
+```
+
+如果你不需要某个子集群了,还可以将子集群从 KubeVela 管控中移除。
+
+```shell script
+$ vela cluster detach cluster-prod
+```
+
+当然,如果现在有应用正跑在该集群中,这条命令会被 KubeVela 拒绝。
+
+## 部署多集群应用
+
+KubeVela 将一个 Kubernetes 集群看作是一个环境,对于一个应用,你可以将其部署在多个环境中。
+
+下面的这个例子将会把应用先部署在预发环境中,待确认应用正常运行后,再将其部署在生产环境中。
+
+对于不同的环境,KubeVela 支持进行差异化部署。比如在本文的例子中,预发环境只使用 webservice 组件而不使用 worker 组件,同时 webservice 也只部署了一份;而在生产环境中,两个组件都会使用,且 webservice 会部署三副本。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: example-app
+  namespace: default
+spec:
+  components:
+    - name: hello-world-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: scaler
+          properties:
+            replicas: 1
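+    # 说明:下面的 data-worker 组件只会部署到生产环境,
+    # 预发环境已通过后文 env-binding 策略中的 selector 将其过滤掉(补充注释)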
+    - name: data-worker
+      type: worker
+      properties:
+        image: busybox
+        cmd:
+          - sleep
+          - '1000000'
+  policies:
+    - name: example-multi-env-policy
+      type: env-binding
+      properties:
+        envs:
+          - name: staging
+            placement: # 选择要部署的集群
+              clusterSelector:
+                name: cluster-staging
+            selector: # 选择要使用的组件
+              components:
+                - hello-world-server
+
+          - name: prod
+            placement:
+              clusterSelector:
+                name: cluster-prod
+            patch: # 对组件进行差异化配置
+              components:
+                - name: hello-world-server
+                  type: webservice
+                  traits:
+                    - type: scaler
+                      properties:
+                        replicas: 3
+
+    - name: health-policy-demo
+      type: health
+      properties:
+        probeInterval: 5
+        probeTimeout: 10
+
+  workflow:
+    steps:
+      # 部署到预发环境中
+      - name: deploy-staging
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: staging
+
+      # 手动确认
+      - name: manual-approval
+        type: suspend
+
+      # 部署到生产环境中
+      - name: deploy-prod
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: prod
+```
+
+在应用创建后,它会通过 KubeVela 工作流完成部署。
+
+> 你可以参考[多环境部署](../end-user/policies/envbinding)和[健康检查](../end-user/policies/health)的用户手册来查看更多参数细节。
+
+首先,它会将应用部署到预发环境中,你可以运行下面的命令来查看应用的状态。
+
+```shell
+> kubectl get application example-app
+NAME          COMPONENT            TYPE         PHASE                HEALTHY   STATUS      AGE
+example-app   hello-world-server   webservice   workflowSuspending   true      Ready:1/1   10s
+```
+
+可以看到,当前的部署工作流在 `manual-approval` 步骤中暂停。
+
+```yaml
+...
+  status:
+    workflow:
+      appRevision: example-app-v1:44a6447e3653bcc2
+      contextBackend:
+        apiVersion: v1
+        kind: ConfigMap
+        name: workflow-example-app-context
+        uid: 56ddcde6-8a83-4ac3-bf94-d19f8f55eb3d
+      mode: StepByStep
+      steps:
+        - id: wek2b31nai
+          name: deploy-staging
+          phase: succeeded
+          type: deploy2env
+        - id: 7j5eb764mk
+          name: manual-approval
+          phase: succeeded
+          type: suspend
+      suspend: true
+      terminated: false
+      waitCount: 0
+```
+
+你也可以检查 `status.services` 字段来查看应用的健康状态。
+
+```yaml
+...
+  status:
+    services:
+      - env: staging
+        healthy: true
+        message: 'Ready:1/1 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: test
+            uid: 6e6230a3-93f3-4dba-ba09-dd863b6c4a88
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+```
+
+通过工作流的 resume 指令,你可以在确认当前部署正常后,继续将应用部署至生产环境中。
+
+```shell
+> vela workflow resume example-app
+Successfully resume workflow: example-app
+```
+
+再次确认应用的状态:
+
+```shell
+> kubectl get application example-app
+NAME          COMPONENT            TYPE         PHASE     HEALTHY   STATUS      AGE
+example-app   hello-world-server   webservice   running   true      Ready:1/1   62s
+```
+
+```yaml
+  status:
+    services:
+      - env: staging
+        healthy: true
+        message: 'Ready:1/1 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+      - env: prod
+        healthy: true
+        message: 'Ready:3/3 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+      - env: prod
+        healthy: true
+        message: 'Ready:1/1 '
+        name: data-worker
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+```
+
+现在,工作流中的所有步骤都已完成。
+
+```yaml
+...
+  status:
+    workflow:
+      appRevision: example-app-v1:44a6447e3653bcc2
+      contextBackend:
+        apiVersion: v1
+        kind: ConfigMap
+        name: workflow-example-app-context
+        uid: e1e7bd2d-8743-4239-9de7-55a0dd76e5d3
+      mode: StepByStep
+      steps:
+        - id: q8yx7pr8wb
+          name: deploy-staging
+          phase: succeeded
+          type: deploy2env
+        - id: 6oxrtvki9o
+          name: manual-approval
+          phase: succeeded
+          type: suspend
+        - id: uk287p8c31
+          name: deploy-prod
+          phase: succeeded
+          type: deploy2env
+      suspend: false
+      terminated: false
+      waitCount: 0
+```
+
+## 更多使用案例
+
+KubeVela 可以提供更多的应用多集群部署策略,如将单一应用的不同组件部署在不同环境中,或在管控集群及子集群中混合部署。
+
+对于工作流与多集群部署的使用,你可以通过下图简单了解其整体流程。
+
+![alt](../resources/workflow-multi-env.png)
+
+更多的多集群环境下应用部署的使用案例将在不久后加入文档中。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/workflow-with-ocm.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/workflow-with-ocm.md
new file mode 100644
index 00000000..0560bc0a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/case-studies/workflow-with-ocm.md
@@ -0,0 +1,174 @@
+---
+title: 使用工作流实现多集群部署
+---
+
+本案例将为你讲述如何使用 KubeVela 做多集群应用部署,内容涵盖从集群创建、集群注册、环境初始化、多集群调度,一直到应用多集群部署的完整流程。
+
+- 通过 KubeVela 中的环境初始化(Initializer)功能,我们可以创建一个 Kubernetes 集群并注册到中央管控集群;同样通过环境初始化功能,可以将应用管理所需的系统依赖一并安装。
+- 通过 KubeVela 的多集群多环境部署(EnvBinding)功能,可以对应用进行差异化配置,并选择资源下发到哪些集群。
+
+## 开始之前
+
+- 首先你需要有一个 Kubernetes 版本为 1.20+ 的集群作为管控集群,并且已经安装好 KubeVela,管控集群需要有一个可以通过公网访问的 APIServer 地址。如果不做特殊说明,实践案例上的所有步骤都在管控集群上操作。
+
+
+- 在这个场景中,KubeVela 背后采用 [OCM(open-cluster-management)](https://open-cluster-management.io/getting-started/quick-start/) 技术做实际的多集群资源分发。
+
+
+- 本实践案例相关的 YAML 描述和 Shell 脚本都在 KubeVela 项目的
+  [docs/examples/workflow-with-ocm](https://github.com/oam-dev/kubevela/tree/master/docs/examples/workflow-with-ocm) 下,请下载该案例,在该目录执行下面的终端命令。
+
+
+- 本实践案例将以阿里云的 ACK 集群作为例子,创建阿里云资源需要使用相应的鉴权,需要保存你阿里云账号的 AK/SK 到管控集群的 Secret 中。
+
+  ```shell
+  export ALICLOUD_ACCESS_KEY=xxx; export ALICLOUD_SECRET_KEY=yyy
+  ```
+
+  ```shell
+  # 如果你想使用阿里云安全令牌服务,还要导出环境变量 ALICLOUD_SECURITY_TOKEN。
+  export ALICLOUD_SECURITY_TOKEN=zzz
+  ```
+
+  ```shell
+  # prepare-alibaba-credentials.sh 脚本会读取环境变量并创建 secret 到当前集群。
+  sh hack/prepare-alibaba-credentials.sh
+  ```
+
+  ```shell
+  $ kubectl get secret -n vela-system
+  NAME                    TYPE     DATA   AGE
+  alibaba-account-creds   Opaque   1      11s
+  ```
+
+## 初始化阿里云资源创建功能
+
+我们可以使用 KubeVela 的环境初始化功能,开启阿里云资源创建的系统功能。这个初始化过程主要是将之前配置的鉴权信息提供出来,并初始化 Terraform 系统插件。我们将这个初始化对象命名为 `terraform-alibaba`,并部署:
+
+```shell
+kubectl apply -f initializers/init-terraform-alibaba.yaml
+```
+
+### 创建环境初始化 `terraform-alibaba`
+
+```shell
+kubectl apply -f initializers/init-terraform-alibaba.yaml
+```
+
+当环境初始化 `terraform-alibaba` 的 `PHASE` 字段为 `success` 时,表示环境初始化成功,这可能需要等待 1 分钟左右的时间。
+
+```shell
+$ kubectl get initializers.core.oam.dev -n vela-system
+NAMESPACE     NAME                PHASE     AGE
+vela-system   terraform-alibaba   success   94s
+```
+
+## 初始化多集群调度功能
+
+我们使用 KubeVela 的环境初始化功能,开启多集群调度的系统功能。这个初始化过程主要是创建一个新的 ACK 集群,并使用 OCM 多集群管理方案管理新创建的集群。我们将这个初始化对象命名为 `managed-cluster`,并部署:
+
+```shell
+kubectl apply -f initializers/init-managed-cluster.yaml
+```
+
+除此之外,为了让创建好的集群可以被管控集群所使用,我们还需要将创建的集群注册到管控集群。我们通过定义一个工作流节点来传递新创建集群的证书信息,再定义一个工作流节点来完成集群注册。
+
+**自定义执行集群创建的工作流节点,命名为 `create-ack`**,进行部署:
+
+```shell
+kubectl apply -f definitions/create-ack.yaml
+```
+
+**自定义集群注册的工作流节点,命名为 `register-cluster`**,进行部署:
+
+```shell
+kubectl apply -f definitions/register-cluster.yaml
+```
+
+### 创建环境初始化
+
+1. 安装工作流节点定义 `create-ack` 和 `register-cluster`:
+
+```shell
+kubectl apply -f definitions/create-ack.yaml
+kubectl apply -f definitions/register-cluster.yaml
+```
+
+2. 修改工作流节点 `register-ack` 的 hubAPIServer 的参数为管控集群的 APIServer 的公网地址。
+
+```yaml
+  - name: register-ack
+    type: register-cluster
+    inputs:
+      - from: connInfo
+        parameterKey: connInfo
+    properties:
+      # 用户需要填写管控集群的 APIServer 的公网地址
+      hubAPIServer: {{ public network address of APIServer }}
+      env: prod
+      initNameSpace: default
+      patchLabels:
+        purpose: test
+```
+
+3. 创建环境初始化 `managed-cluster`。
+
+```
+kubectl apply -f initializers/init-managed-cluster.yaml
+```
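+
+环境初始化需要等待较长时间,可以使用下面的命令持续观察其状态变化(`-w` 表示 watch,命令仅为示意):
+
+```shell
+kubectl get initializers.core.oam.dev -n vela-system -w
+```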
+
+当环境初始化 `managed-cluster` 的 `PHASE` 字段为 `success` 时,表示环境初始化成功。你可能需要等待 15-20 分钟左右的时间(阿里云创建一个 ACK 集群需要 15 分钟左右)。
+
+```shell
+$ kubectl get initializers.core.oam.dev -n vela-system
+NAMESPACE     NAME              PHASE     AGE
+vela-system   managed-cluster   success   20m
+```
+
+当环境初始化 `managed-cluster` 成功后,你可以看到新集群 `poc-01` 已经被注册到管控集群中。
+
+```shell
+$ kubectl get managedclusters.cluster.open-cluster-management.io
+NAME     HUB ACCEPTED   MANAGED CLUSTER URLS      JOINED   AVAILABLE   AGE
+poc-01   true           {{ APIServer address }}   True     True        30s
+```
+
+## 部署应用到指定集群
+
+管理员完成多集群的注册之后,用户可以在应用部署计划中指定将资源部署到哪个集群中。
+
+```shell
+kubectl apply -f app.yaml
+```
+
+检查应用部署计划 `workflow-demo` 是否成功创建。
+
+```shell
+$ kubectl get app workflow-demo
+NAME            COMPONENT        TYPE         PHASE     HEALTHY   STATUS   AGE
+workflow-demo   podinfo-server   webservice   running   true               7s
+```
+
+你可以切换到新创建的 ACK 集群上,查看资源是否被成功地部署。
+
+```shell
+$ kubectl get deployments
+NAME             READY   UP-TO-DATE   AVAILABLE   AGE
+podinfo-server   1/1     1            1           40s
+```
+
+```shell
+$ kubectl get service
+NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+podinfo-server-auxiliaryworkload-85d7b756f9   LoadBalancer   192.168.57.21   < EIP >       9898:31132/TCP   50s
+```
+
+Service `podinfo-server` 绑定了一个 EXTERNAL-IP,允许用户通过公网访问应用。用户可以在浏览器中输入 `http://<EIP>:9898` 来访问刚刚创建的应用。
+
+![workflow-with-ocm-demo](../resources/workflow-with-ocm-demo.png)
+
+上述应用部署计划 `workflow-demo` 中使用了内置的应用策略 `env-binding` 对应用部署计划进行差异化配置,修改了组件 `podinfo-server` 的镜像,以及运维特征 `expose` 的类型以允许集群外部的请求访问;同时,应用策略 `env-binding` 指定了资源调度策略,将资源部署到新注册的 ACK 集群内。
+
+应用部署计划的交付工作流也使用了内置的 [`multi-env`](../end-user/workflow/multi-env) 交付工作流定义,指定具体哪一个配置后的组件部署到集群中。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela.md
new file mode 100644
index 00000000..b8f8cdd8
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela.md
@@ -0,0 +1,45 @@
+---
+title: vela
+---
+
+
+
+```
+vela [flags]
+```
+
+### Options
+
+```
+  -e, --env string   specify environment name for application
+  -h, --help         help for vela
+```
+
+### SEE ALSO
+
+* [vela addon](vela_addon) - List and get addon in KubeVela
+* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities
+* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh)
+* [vela components](vela_components) - List components
+* [vela config](vela_config) - Manage configurations
+* [vela def](vela_def) - Manage Definitions
+* [vela delete](vela_delete) - Delete an application
+* [vela env](vela_env) - Manage environments
+* [vela exec](vela_exec) - Execute command in a container
+* [vela export](vela_export) - Export deploy manifests from appfile
+* [vela help](vela_help) - Help about any command
+* [vela init](vela_init) - Create scaffold for an application
+* [vela logs](vela_logs) - Tail logs for application
+* [vela ls](vela_ls) - List applications
+* [vela port-forward](vela_port-forward) - Forward local ports to services in an application
+* [vela show](vela_show) - Show the reference doc for a workload type or trait
+* [vela status](vela_status) - Show status of an application
+* [vela system](vela_system) - System management utilities
+* [vela template](vela_template) - Manage templates
+* [vela traits](vela_traits) - List traits
+* [vela up](vela_up) - Apply an appfile
+* [vela version](vela_version) - Prints out build version information
+* [vela
workflow](vela_workflow) - Operate application workflow in KubeVela +* [vela workloads](vela_workloads) - List workloads + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon.md new file mode 100644 index 00000000..e22f5db8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon.md @@ -0,0 +1,30 @@ +--- +title: vela addon +--- + +List and get addon in KubeVela + +### Synopsis + +List and get addon in KubeVela + +### Options + +``` + -h, --help help for addon +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela addon disable](vela_addon_disable) - disable an addon +* [vela addon enable](vela_addon_enable) - enable an addon +* [vela addon list](vela_addon_list) - List addons + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_disable.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_disable.md new file mode 100644 index 00000000..e5a0f9d6 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_disable.md @@ -0,0 +1,37 @@ +--- +title: vela addon disable +--- + +disable an addon + +### Synopsis + +disable an addon in cluster + +``` +vela addon disable [flags] +``` + +### Examples + +``` +vela addon disable +``` + +### Options + +``` + -h, --help help for disable +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_enable.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_enable.md new file mode 100644 index 00000000..5d9d7ad8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_enable.md @@ -0,0 +1,37 @@ +--- +title: vela addon enable +--- + +enable an addon + +### Synopsis + +enable an addon in cluster + +``` +vela addon enable [flags] +``` + +### Examples + +``` +vela addon enable +``` + +### Options + +``` + -h, --help help for enable +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_list.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_list.md new file mode 100644 index 00000000..18780f50 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_addon_list.md @@ -0,0 +1,31 @@ +--- +title: vela addon list +--- + +List addons + +### Synopsis + +List addons in KubeVela + +``` +vela addon list [flags] +``` + +### Options + +``` + -h, --help help for list +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap.md new file mode 100644 index 00000000..3e0a6101 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap.md @@ -0,0 +1,31 @@ +--- +title: vela cap +--- + +Manage capability centers and installing/uninstalling capabilities + +### Synopsis + +Manage capability centers and installing/uninstalling capabilities + +### Options + +``` + -h, --help help for cap +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela cap center](vela_cap_center) - Manage Capability Center +* [vela cap install](vela_cap_install) - Install capability into cluster +* [vela cap ls](vela_cap_ls) - List capabilities from cap-center +* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center.md new file mode 100644 index 00000000..4e45d2de --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center.md @@ -0,0 +1,31 @@ +--- +title: vela cap center +--- + +Manage Capability Center + +### Synopsis + +Manage Capability Center with config, sync, list + +### Options + +``` + -h, --help help for center +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities +* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities) +* [vela cap center ls](vela_cap_center_ls) - List all capability centers +* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center +* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_config.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_config.md new file mode 100644 index 00000000..fbd91ff4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_config.md @@ -0,0 +1,38 @@ +--- +title: vela cap center config +--- + +Configure (add if not exist) a capability center, default is local (built-in capabilities) + +### Synopsis + +Configure (add if not exist) a capability center, default is local (built-in capabilities) + +``` +vela cap center config [flags] +``` + +### Examples + +``` +vela cap center config mycenter https://github.com/oam-dev/catalog/tree/master/registry +``` + +### Options + +``` + -h, --help help for config + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_ls.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_ls.md new file mode 100644 index 00000000..0b2a52e3 --- /dev/null +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_ls.md @@ -0,0 +1,37 @@ +--- +title: vela cap center ls +--- + +List all capability centers + +### Synopsis + +List all configured capability centers + +``` +vela cap center ls [flags] +``` + +### Examples + +``` +vela cap center ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_remove.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_remove.md new file mode 100644 index 00000000..6f615454 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_remove.md @@ -0,0 +1,37 @@ +--- +title: vela cap center remove +--- + +Remove specified capability center + +### Synopsis + +Remove specified capability center + +``` +vela cap center remove [flags] +``` + +### Examples + +``` +vela cap center remove mycenter +``` + +### Options + +``` + -h, --help help for remove +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_sync.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_sync.md new file mode 100644 index 00000000..babc8bcc --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_center_sync.md @@ -0,0 +1,37 @@ +--- +title: vela cap center sync +--- + +Sync capabilities from remote center, default to sync all centers + +### Synopsis + +Sync capabilities from remote center, default to sync all centers + +``` +vela cap center sync [centerName] [flags] +``` + +### Examples + +``` +vela cap center sync mycenter +``` + +### Options + +``` + -h, --help help for sync +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_install.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_install.md new file mode 100644 index 00000000..ea7b3b9f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_install.md @@ -0,0 +1,38 @@ +--- +title: vela cap install +--- + +Install capability into cluster + +### Synopsis + +Install capability into cluster + +``` +vela cap install
/ [flags] +``` + +### Examples + +``` +vela cap install mycenter/route +``` + +### Options + +``` + -h, --help help for install + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_ls.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_ls.md new file mode 100644 index 00000000..107ea861 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_ls.md @@ -0,0 +1,37 @@ +--- +title: vela cap ls +--- + +List capabilities from cap-center + +### Synopsis + +List capabilities from cap-center + +``` +vela cap ls [cap-center] [flags] +``` + +### Examples + +``` +vela cap ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_uninstall.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_uninstall.md new file mode 100644 index 00000000..b556fa80 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_cap_uninstall.md @@ -0,0 +1,38 @@ +--- +title: vela cap uninstall +--- + +Uninstall capability from cluster + +### Synopsis + +Uninstall capability from cluster + +``` +vela cap uninstall [flags] +``` + +### Examples + +``` +vela cap uninstall route +``` + +### Options + +``` + -h, --help help for uninstall + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion.md new file mode 100644 index 00000000..77e2c0c4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion.md @@ -0,0 +1,32 @@ +--- +title: vela completion +--- + +Output shell completion code for the specified shell (bash or zsh) + +### Synopsis + +Output shell completion code for the specified shell (bash or zsh). +The shell code must be evaluated to provide interactive completion +of vela commands. 
+ + +### Options + +``` + -h, --help help for completion +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash +* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_bash.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_bash.md new file mode 100644 index 00000000..56415693 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_bash.md @@ -0,0 +1,41 @@ +--- +title: vela completion bash +--- + +generate autocompletions script for bash + +### Synopsis + +Generate the autocompletion script for Vela for the bash shell. + +To load completions in your current shell session: +$ source <(vela completion bash) + +To load completions for every new session, execute once: +Linux: + $ vela completion bash > /etc/bash_completion.d/vela +MacOS: + $ vela completion bash > /usr/local/etc/bash_completion.d/vela + + +``` +vela completion bash +``` + +### Options + +``` + -h, --help help for bash +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh) + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_zsh.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_zsh.md new file mode 100644 index 00000000..9b10a641 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_completion_zsh.md @@ -0,0 +1,38 @@ +--- +title: vela completion zsh +--- + +generate autocompletions script for zsh + +### Synopsis + +Generate the autocompletion script for Vela for the zsh shell. 
+ +To load completions in your current shell session: +$ source <(vela completion zsh) + +To load completions for every new session, execute once: +$ vela completion zsh > "${fpath[1]}/_vela" + + +``` +vela completion zsh +``` + +### Options + +``` + -h, --help help for zsh +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh) + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_components.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_components.md new file mode 100644 index 00000000..8b07ab8d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_components.md @@ -0,0 +1,38 @@ +--- +title: vela components +--- + +List components + +### Synopsis + +List components + +``` +vela components +``` + +### Examples + +``` +vela components +``` + +### Options + +``` + --discover discover traits in capability centers + -h, --help help for components +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config.md new file mode 100644 index 00000000..1adeca7e --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config.md @@ -0,0 +1,31 @@ +--- +title: vela config +--- + +Manage configurations + +### Synopsis + +Manage configurations + +### Options + +``` + -h, --help help for config +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela config del](vela_config_del) - Delete config +* [vela config get](vela_config_get) - Get data for a config +* [vela config ls](vela_config_ls) - List configs +* [vela config set](vela_config_set) - Set data for a config + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_del.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_del.md new file mode 100644 index 00000000..24c23cda --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_del.md @@ -0,0 +1,37 @@ +--- +title: vela config del +--- + +Delete config + +### Synopsis + +Delete config + +``` +vela config del +``` + +### Examples + +``` +vela config del +``` + +### Options + +``` + -h, --help help for del +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_get.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_get.md new file mode 100644 index 00000000..0badbba2 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_get.md @@ -0,0 +1,37 @@ +--- +title: vela config get +--- + +Get data for a config + +### Synopsis + +Get data for a config + +``` +vela config get +``` + +### Examples + +``` +vela config get 
+``` + +### Options + +``` + -h, --help help for get +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_ls.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_ls.md new file mode 100644 index 00000000..71071694 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_ls.md @@ -0,0 +1,37 @@ +--- +title: vela config ls +--- + +List configs + +### Synopsis + +List all configs + +``` +vela config ls +``` + +### Examples + +``` +vela config ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_set.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_set.md new file mode 100644 index 00000000..b8c466bb --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_config_set.md @@ -0,0 +1,37 @@ +--- +title: vela config set +--- + +Set data for a config + +### Synopsis + +Set data for a config + +``` +vela config set +``` + +### Examples + +``` +vela config set KEY=VALUE K2=V2 +``` + +### Options + +``` + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def.md new file mode 100644 index 00000000..972aef86 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def.md @@ -0,0 +1,35 @@ +--- +title: vela def +--- + +Manage Definitions + +### Synopsis + +Manage Definitions + +### Options + +``` + -h, --help help for def +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela def apply](vela_def_apply) - Apply definition +* [vela def del](vela_def_del) - Delete definition +* [vela def edit](vela_def_edit) - Edit definition +* [vela def get](vela_def_get) - Get definition +* [vela def init](vela_def_init) - Init a new definition +* [vela def list](vela_def_list) - List definitions +* [vela def render](vela_def_render) - Render definition +* [vela def vet](vela_def_vet) - Validate definition + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_apply.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_apply.md new file mode 100644 index 00000000..5457ceab --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_apply.md @@ -0,0 +1,43 @@ +--- +title: vela def apply +--- + +Apply definition + +### Synopsis + +Apply definition from local storage to kubernetes cluster. It will apply file to vela-system namespace by default. 
+
+```
+vela def apply DEFINITION.cue [flags]
+```
+
+### Examples
+
+```
+# Command below will apply the local my-webservice.cue file to kubernetes vela-system namespace
+> vela def apply my-webservice.cue
+# Command below will apply the ./defs/my-trait.cue file to kubernetes default namespace
+> vela def apply ./defs/my-trait.cue --namespace default
+# Command below will convert the ./defs/my-trait.cue file to kubernetes CRD object and print it without applying it to kubernetes
+> vela def apply ./defs/my-trait.cue --dry-run
+```
+
+### Options
+
+```
+      --dry-run            only build definition from CUE into CRD object without applying it to kubernetes clusters
+  -h, --help               help for apply
+  -n, --namespace string   Specify which namespace to apply. (default "vela-system")
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela def](vela_def) - Manage Definitions
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_del.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_del.md
new file mode 100644
index 00000000..10e80e74
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_del.md
@@ -0,0 +1,40 @@
+---
+title: vela def del
+---
+
+Delete definition
+
+### Synopsis
+
+Delete definition in kubernetes cluster.
+
+```
+vela def del DEFINITION_NAME [flags]
+```
+
+### Examples
+
+```
+# Command below will delete TraitDefinition of annotations in default namespace
+> vela def del annotations -t trait -n default
+```
+
+### Options
+
+```
+  -h, --help               help for del
+  -n, --namespace string   Specify which namespace the definition locates.
+  -t, --type string        Specify the definition type of target. Valid types: component, trait, policy, workload, scope, workflow-step
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela def](vela_def) - Manage Definitions
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_edit.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_edit.md
new file mode 100644
index 00000000..4c80862e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_edit.md
@@ -0,0 +1,43 @@
+---
+title: vela def edit
+---
+
+Edit definition
+
+### Synopsis
+
+Edit definition in kubernetes. If type and namespace are not specified, the command will automatically search all possible results.
+By default, this command will use the vi editor and can be altered by setting EDITOR environment variable.
+
+```
+vela def edit NAME [flags]
+```
+
+### Examples
+
+```
+# Command below will edit the ComponentDefinition (and other definitions if exists) of webservice in kubernetes
+> vela def edit webservice
+# Command below will edit the TraitDefinition of ingress in vela-system namespace
+> vela def edit ingress --type trait --namespace vela-system
+```
+
+### Options
+
+```
+  -h, --help               help for edit
+  -n, --namespace string   Specify which namespace to get. If empty, all namespaces will be searched.
+  -t, --type string        Specify which definition type to get. If empty, all types will be searched.
Valid types: scope, workflow-step, component, trait, policy, workload +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_get.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_get.md new file mode 100644 index 00000000..4a27727e --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_get.md @@ -0,0 +1,42 @@ +--- +title: vela def get +--- + +Get definition + +### Synopsis + +Get definition from kubernetes cluster + +``` +vela def get NAME [flags] +``` + +### Examples + +``` +# Command below will get the ComponentDefinition(or other definitions if exists) of webservice in all namespaces +> vela def get webservice +# Command below will get the TraitDefinition of annotations in namespace vela-system +> vela def get annotations --type trait --namespace vela-system +``` + +### Options + +``` + -h, --help help for get + -n, --namespace string Specify which namespace to get. If empty, all namespaces will be searched. + -t, --type string Specify which definition type to get. If empty, all types will be searched. Valid types: component, trait, policy, workload, scope, workflow-step +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_init.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_init.md new file mode 100644 index 00000000..0e0c52cf --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_init.md @@ -0,0 +1,48 @@ +--- +title: vela def init +--- + +Init a new definition + +### Synopsis + +Init a new definition with given arguments or interactively +* We support parsing a single YAML file (like kubernetes objects) into the cue-style template. However, we do not support variables in YAML file currently, which prevents users from directly feeding files like helm chart directly. We may introduce such features in the future. + +``` +vela def init DEF_NAME [flags] +``` + +### Examples + +``` +# Command below initiate an empty TraitDefinition named my-ingress +> vela def init my-ingress -t trait --desc "My ingress trait definition." > ./my-ingress.cue +# Command below initiate a definition named my-def interactively and save it to ./my-def.cue +> vela def init my-def -i --output ./my-def.cue +# Command below initiate a ComponentDefinition named my-webservice with the template parsed from ./template.yaml. +> vela def init my-webservice -i --template-yaml ./template.yaml +``` + +### Options + +``` + -d, --desc string Specify the description of the new definition. + -h, --help help for init + -i, --interactive Specify whether use interactive process to help generate definitions. + -o, --output string Specify the output path of the generated definition. If empty, the definition will be printed in the console. + -y, --template-yaml string Specify the template yaml file that definition will use to build the schema. If empty, a default template for the given definition type will be used. + -t, --type string Specify the type of the new definition. 
Valid types: workload, scope, workflow-step, component, trait, policy +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_list.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_list.md new file mode 100644 index 00000000..7ec60325 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_list.md @@ -0,0 +1,42 @@ +--- +title: vela def list +--- + +List definitions + +### Synopsis + +List definitions in kubernetes cluster + +``` +vela def list [flags] +``` + +### Examples + +``` +# Command below will list all definitions in all namespaces +> vela def list +# Command below will list all definitions in the vela-system namespace +> vela def get annotations --type trait --namespace vela-system +``` + +### Options + +``` + -h, --help help for list + -n, --namespace string Specify which namespace to list. If empty, all namespaces will be searched. + -t, --type string Specify which definition type to list. If empty, all types will be searched. Valid types: policy, workload, scope, workflow-step, component, trait +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_render.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_render.md new file mode 100644 index 00000000..1df1cae7 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_render.md @@ -0,0 +1,43 @@ +--- +title: vela def render +--- + +Render definition + +### Synopsis + +Render definition with cue format into kubernetes YAML format. Could be used to check whether the cue format definition is working as expected. If a directory is used as input, all cue definitions in the directory will be rendered. + +``` +vela def render DEFINITION.cue [flags] +``` + +### Examples + +``` +# Command below will render my-webservice.cue into YAML format and print it out. +> vela def render my-webservice.cue +# Command below will render my-webservice.cue and save it in my-webservice.yaml. +> vela def render my-webservice.cue -o my-webservice.yaml# Command below will render all cue format definitions in the ./defs/cue/ directory and save the YAML objects in ./defs/yaml/. +> vela def render ./defs/cue/ -o ./defs/yaml/ +``` + +### Options + +``` + -h, --help help for render + --message string Specify the header message of the generated YAML file. For example, declaring author information. + -o, --output string Specify the output path of the rendered definition YAML. If empty, the definition will be printed in the console. If input is a directory, the output path is expected to be a directory as well. 
+``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_vet.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_vet.md new file mode 100644 index 00000000..b9437ec7 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_def_vet.md @@ -0,0 +1,39 @@ +--- +title: vela def vet +--- + +Validate definition + +### Synopsis + +Validate definition file by checking whether it has the valid cue format with fields set correctly +* Currently, this command only checks the cue format. This function is still working in progress and we will support more functional validation mechanism in the future. + +``` +vela def vet DEFINITION.cue [flags] +``` + +### Examples + +``` +# Command below will validate the my-def.cue file. +> vela def vet my-def.cue +``` + +### Options + +``` + -h, --help help for vet +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_delete.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_delete.md new file mode 100644 index 00000000..ed46acb0 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_delete.md @@ -0,0 +1,38 @@ +--- +title: vela delete +--- + +Delete an application + +### Synopsis + +Delete an application + +``` +vela delete APP_NAME +``` + +### Examples + +``` +vela delete frontend +``` + +### Options + +``` + -h, --help help for delete + --svc string delete only the specified service in this app +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env.md new file mode 100644 index 00000000..15e5f732 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env.md @@ -0,0 +1,31 @@ +--- +title: vela env +--- + +Manage environments + +### Synopsis + +Manage environments + +### Options + +``` + -h, --help help for env +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela env delete](vela_env_delete) - Delete environment +* [vela env init](vela_env_init) - Create environments +* [vela env ls](vela_env_ls) - List environments +* [vela env set](vela_env_set) - Set an environment + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_delete.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_delete.md new file mode 100644 index 00000000..7425148b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_delete.md @@ -0,0 +1,37 @@ +--- +title: vela env delete +--- + +Delete environment + +### Synopsis + +Delete environment + +``` +vela env delete +``` + +### Examples + +``` +vela env delete test +``` + +### Options + +``` + -h, 
--help help for delete +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_init.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_init.md new file mode 100644 index 00000000..0a3cb37b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_init.md @@ -0,0 +1,40 @@ +--- +title: vela env init +--- + +Create environments + +### Synopsis + +Create environment and set the currently using environment + +``` +vela env init +``` + +### Examples + +``` +vela env init test --namespace test --email my@email.com +``` + +### Options + +``` + --domain string specify domain your applications + --email string specify email for production TLS Certificate notification + -h, --help help for init + --namespace string specify K8s namespace for env +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_ls.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_ls.md new file mode 100644 index 00000000..81808263 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_ls.md @@ -0,0 +1,37 @@ +--- +title: vela env ls +--- + +List environments + +### Synopsis + +List all environments + +``` +vela env ls +``` + +### Examples + +``` +vela env ls [env-name] +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_set.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_set.md new file mode 100644 index 00000000..f85a5169 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_env_set.md @@ -0,0 +1,37 @@ +--- +title: vela env set +--- + +Set an environment + +### Synopsis + +Set an environment as the current using one + +``` +vela env set +``` + +### Examples + +``` +vela env set test +``` + +### Options + +``` + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_exec.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_exec.md new file mode 100644 index 00000000..793f810d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_exec.md @@ -0,0 +1,35 @@ +--- +title: vela exec +--- + +Execute command in a container + +### Synopsis + +Execute command in a container + +``` +vela exec [flags] APP_NAME -- COMMAND [args...] 
+``` + +### Options + +``` + -h, --help help for exec + --pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s) + -i, --stdin Pass stdin to the container (default true) + -s, --svc string service name + -t, --tty Stdin is a TTY (default true) +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_export.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_export.md new file mode 100644 index 00000000..4e43cb36 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_export.md @@ -0,0 +1,32 @@ +--- +title: vela export +--- + +Export deploy manifests from appfile + +### Synopsis + +Export deploy manifests from appfile + +``` +vela export +``` + +### Options + +``` + -f, -- string specify file path for appfile + -h, --help help for export +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_help.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_help.md new file mode 100644 index 00000000..d108da7c --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_help.md @@ -0,0 +1,27 @@ +--- +title: vela help +--- + +Help about any command + +``` +vela help [command] +``` + +### Options + +``` + -h, --help help for help +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_init.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_init.md new file mode 100644 index 00000000..ff76e7c4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_init.md @@ -0,0 +1,38 @@ +--- +title: vela init +--- + +Create scaffold for an application + +### Synopsis + +Create scaffold for an application + +``` +vela init +``` + +### Examples + +``` +vela init +``` + +### Options + +``` + -h, --help help for init + --render-only Rendering vela.yaml in current dir and do not deploy +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_logs.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_logs.md new file mode 100644 index 00000000..07376d6b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_logs.md @@ -0,0 +1,32 @@ +--- +title: vela logs +--- + +Tail logs for application + +### Synopsis + +Tail logs for application + +``` +vela logs [flags] +``` + +### Options + +``` + -h, --help help for logs + -o, --output string output format for logs, support: [default, raw, json] (default "default") +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto 
generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_ls.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_ls.md new file mode 100644 index 00000000..55bb9d2c --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_ls.md @@ -0,0 +1,38 @@ +--- +title: vela ls +--- + +List applications + +### Synopsis + +List all applications in cluster + +``` +vela ls +``` + +### Examples + +``` +vela ls +``` + +### Options + +``` + -h, --help help for ls + -n, --namespace string specify the namespace the application want to list, default is the current env namespace +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_port-forward.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_port-forward.md new file mode 100644 index 00000000..b5b34bfa --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_port-forward.md @@ -0,0 +1,40 @@ +--- +title: vela port-forward +--- + +Forward local ports to services in an application + +### Synopsis + +Forward local ports to services in an application + +``` +vela port-forward APP_NAME [flags] +``` + +### Examples + +``` +port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] +``` + +### Options + +``` + --address strings Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, vela will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind. 
(default [localhost])
+  -h, --help                           help for port-forward
+      --pod-running-timeout duration   The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
+      --route                          forward ports from route trait service
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) -
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_show.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_show.md
new file mode 100644
index 00000000..70910e40
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_show.md
@@ -0,0 +1,38 @@
+---
+title: vela show
+---
+
+Show the reference doc for a workload type or trait
+
+### Synopsis
+
+Show the reference doc for a workload type or trait
+
+```
+vela show [flags]
+```
+
+### Examples
+
+```
+show webservice
+```
+
+### Options
+
+```
+  -h, --help   help for show
+      --web    start web doc site
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) -
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_status.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_status.md
new file mode 100644
index 00000000..c12e0a2d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_status.md
@@ -0,0 +1,38 @@
+---
+title: vela status
+---
+
+Show status of an application
+
+### Synopsis
+
+Show status of an application, including workloads and traits of each service.
+
+```
+vela status APP_NAME [flags]
+```
+
+### Examples
+
+```
+vela status APP_NAME
+```
+
+### Options
+
+```
+  -h, --help         help for status
+  -s, --svc string   service name
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) -
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system.md
new file mode 100644
index 00000000..70cea01b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system.md
@@ -0,0 +1,31 @@
+---
+title: vela system
+---
+
+System management utilities
+
+### Synopsis
+
+System management utilities
+
+### Options
+
+```
+  -h, --help   help for system
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) -
+* [vela system cue-packages](vela_system_cue-packages) - List cue package
+* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the K8s resources as result to stdout
+* [vela system info](vela_system_info) - Show vela client and cluster chartPath
+* [vela system live-diff](vela_system_live-diff) - Dry-run an application, and do diff on a specific app revision
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_cue-packages.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_cue-packages.md
new file mode 100644
index 00000000..6e8ec190
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_cue-packages.md
@@ -0,0 +1,37 @@
+---
+title: vela system cue-packages
+---
+
+List cue package
+
+### Synopsis
+
+List cue package
+
+```
+vela system cue-packages
+```
+
+### Examples
+
+```
+vela system cue-packages
+```
+
+### Options
+
+```
+  -h, --help   help for cue-packages
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_dry-run.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_dry-run.md
new file mode 100644
index 00000000..056feefd
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_dry-run.md
@@ -0,0 +1,39 @@
+---
+title: vela system dry-run
+---
+
+Dry Run an application, and output the K8s resources as result to stdout
+
+### Synopsis
+
+Dry Run an application, and output the K8s resources as result to stdout, only CUE template supported for now
+
+```
+vela system dry-run
+```
+
+### Examples
+
+```
+vela dry-run
+```
+
+### Options
+
+```
+  -d, --definition string   specify a definition file or directory, it will only be used in dry-run rather than applied to K8s cluster
+  -f, --file string         application file name (default "./app.yaml")
+  -h, --help                help for dry-run
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_info.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_info.md
new file mode 100644
index 00000000..8b068c74
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_info.md
@@ -0,0 +1,31 @@
+---
+title: vela system info
+---
+
+Show vela client and cluster chartPath
+
+### Synopsis
+
+Show vela client and cluster chartPath
+
+```
+vela system info [flags]
+```
+
+### Options
+
+```
+  -h, --help   help for info
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_live-diff.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_live-diff.md
new file mode 100644
index 00000000..f37aeaca
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_system_live-diff.md
@@ -0,0 +1,41 @@
+---
+title: vela system live-diff
+---
+
+Dry-run an application, and do diff on a specific app revision
+
+### Synopsis
+
+Dry-run an application, and do diff on a specific app revision. The provided capability definitions will be used during Dry-run. If any capabilities used in the app are not found in the provided ones, it will try to find them in the cluster.
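+
+As a reference, the `-f` flag takes a normal Application manifest. A minimal sketch (a hypothetical `app-v2.yaml`, following the standard Application schema used throughout these docs) might look like:
+
+```yaml
+# Hypothetical app-v2.yaml: a new revision of the app to diff
+# against a stored AppRevision such as app-v1.
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend
+      type: webservice
+      properties:
+        image: nginx:1.20   # e.g. a changed image that the diff would surface
+```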
+ +``` +vela system live-diff +``` + +### Examples + +``` +vela live-diff -f app-v2.yaml -r app-v1 --context 10 +``` + +### Options + +``` + -r, --Revision string specify an application Revision name, by default, it will compare with the latest Revision + -c, --context int output number lines of context around changes, by default show all unchanged lines (default -1) + -d, --definition string specify a file or directory containing capability definitions, they will only be used in dry-run rather than applied to K8s cluster + -f, --file string application file name (default "./app.yaml") + -h, --help help for live-diff +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela system](vela_system) - System management utilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template.md new file mode 100644 index 00000000..11841c78 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template.md @@ -0,0 +1,28 @@ +--- +title: vela template +--- + +Manage templates + +### Synopsis + +Manage templates + +### Options + +``` + -h, --help help for template +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela template context](vela_template_context) - Show context parameters + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template_context.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template_context.md new file mode 100644 index 00000000..6a4ccdfd --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_template_context.md @@ -0,0 +1,37 @@ +--- +title: vela template context +--- + +Show context parameters + +### Synopsis + +Show context parameter + +``` +vela template context +``` + +### Examples + +``` +vela template context +``` + +### Options + +``` + -h, --help help for context +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela template](vela_template) - Manage templates + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_traits.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_traits.md new file mode 100644 index 00000000..4f962aca --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_traits.md @@ -0,0 +1,38 @@ +--- +title: vela traits +--- + +List traits + +### Synopsis + +List traits + +``` +vela traits +``` + +### Examples + +``` +vela traits +``` + +### Options + +``` + --discover discover traits in capability centers + -h, --help help for traits +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_up.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_up.md new file mode 100644 index 00000000..01a92b48 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_up.md @@ -0,0 +1,32 @@ +--- +title: vela up 
+--- + +Apply an appfile + +### Synopsis + +Apply an appfile + +``` +vela up +``` + +### Options + +``` + -f, -- string specify file path for appfile + -h, --help help for up +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_version.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_version.md new file mode 100644 index 00000000..38877295 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_version.md @@ -0,0 +1,31 @@ +--- +title: vela version +--- + +Prints out build version information + +### Synopsis + +Prints out build version information + +``` +vela version [flags] +``` + +### Options + +``` + -h, --help help for version +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow.md new file mode 100644 index 00000000..bc96ae0f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow.md @@ -0,0 +1,31 @@ +--- +title: vela workflow +--- + +Operate application workflow in KubeVela + +### Synopsis + +Operate application workflow in KubeVela + +### Options + +``` + -h, --help help for workflow +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela workflow restart](vela_workflow_restart) - Restart an application workflow +* [vela workflow resume](vela_workflow_resume) - Resume a suspend application workflow +* [vela workflow suspend](vela_workflow_suspend) - Suspend an application workflow +* [vela workflow terminate](vela_workflow_terminate) - Terminate an application workflow + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_restart.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_restart.md new file mode 100644 index 00000000..9ddcf06f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_restart.md @@ -0,0 +1,37 @@ +--- +title: vela workflow restart +--- + +Restart an application workflow + +### Synopsis + +Restart an application workflow in cluster + +``` +vela workflow restart [flags] +``` + +### Examples + +``` +vela workflow restart +``` + +### Options + +``` + -h, --help help for restart +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela workflow](vela_workflow) - Operate application workflow in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_resume.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_resume.md new file mode 100644 index 00000000..a2150b5b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_resume.md @@ -0,0 +1,37 @@ +--- +title: vela workflow resume +--- + +Resume a suspend application workflow + +### Synopsis + +Resume a suspend 
application workflow in cluster + +``` +vela workflow resume [flags] +``` + +### Examples + +``` +vela workflow resume +``` + +### Options + +``` + -h, --help help for resume +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela workflow](vela_workflow) - Operate application workflow in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_suspend.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_suspend.md new file mode 100644 index 00000000..1c328275 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_suspend.md @@ -0,0 +1,37 @@ +--- +title: vela workflow suspend +--- + +Suspend an application workflow + +### Synopsis + +Suspend an application workflow in cluster + +``` +vela workflow suspend [flags] +``` + +### Examples + +``` +vela workflow suspend +``` + +### Options + +``` + -h, --help help for suspend +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela workflow](vela_workflow) - Operate application workflow in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_terminate.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_terminate.md new file mode 100644 index 00000000..3a865925 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workflow_terminate.md @@ -0,0 +1,37 @@ +--- +title: vela workflow terminate +--- + +Terminate an application workflow + +### Synopsis + +Terminate an application workflow in cluster + +``` +vela workflow terminate [flags] +``` + +### Examples + +``` +vela workflow terminate +``` + +### Options + +``` + -h, --help help for terminate +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela workflow](vela_workflow) - Operate application workflow in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workloads.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workloads.md new file mode 100644 index 00000000..7b460e36 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/cli/vela_workloads.md @@ -0,0 +1,37 @@ +--- +title: vela workloads +--- + +List workloads + +### Synopsis + +List workloads + +``` +vela workloads +``` + +### Examples + +``` +vela workloads +``` + +### Options + +``` + -h, --help help for workloads +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/concepts.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/concepts.md new file mode 100644 index 00000000..097c2ace --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/concepts.md @@ -0,0 +1,105 @@ +--- +title: 核心概念 +--- + +*"KubeVela 是一个面向混合环境的、简单易用、同时高可扩展的应用交付与管理引擎。"* + +在本部分中,我们会对 KubeVela 的核心思想进行详细解释,并进一步阐清一些在本项目中被广泛使用的技术术语。 + +## 综述 + +首先,KubeVela 引入了下面所述的带有关注点分离思想的工作流: +- **平台团队** + - 通过给部署环境和可重复使用的能力模块编写模板来构建应用,并将他们注册到集群中。 +- **业务用户** + - 
选择部署环境、模型和可用模块来组装应用,并把应用部署到目标环境中。
+
+工作流如下图所示:
+
+![alt](resources/how-it-works.png)
+
+这种基于模板的工作流使得平台团队能够在一系列的 Kubernetes CRD 之上,引导用户遵守他们构建的最佳实践和部署经验,并且可以很自然地为业务用户提供 PaaS 级别的体验(比如:“以应用为中心”,“高层次的抽象”,“自助式运维操作”等等)。
+
+![alt](resources/what-is-kubevela.png)
+
+下面开始介绍 KubeVela 的核心概念。
+
+## `Application`
+
+应用(*Application*),是 KubeVela 的核心 API。它使得业务开发者只需要基于一个单一的制品和一些简单的原语就可以构建完整的应用。
+
+在应用交付平台中,有一个 *Application* 的概念尤为重要,因为这可以很大程度上简化运维任务,并且作为一个锚点避免操作过程中产生配置漂移的问题。同时,它也为在应用交付过程中引入 Kubernetes 的能力提供了一个更简单的、且不用依赖底层细节的途径。举个例子,开发者不需要每次都定义一个详细的 Kubernetes Deployment + Service 的组合来建模一个 web service,也不用依靠底层的 KEDA ScaleObject 来实现自动扩容的需求。
+
+### 举例
+
+一个需要两个组件(比如 `frontend` 和 `backend`)的 `website` 应用可以如下建模:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: backend
+      type: worker
+      properties:
+        image: busybox
+        cmd:
+          - sleep
+          - '1000'
+    - name: frontend
+      type: webservice
+      properties:
+        image: nginx
+      traits:
+        - type: autoscaler
+          properties:
+            min: 1
+            max: 10
+        - type: sidecar
+          properties:
+            name: "sidecar-test"
+            image: "fluentd"
+```
+
+## 构建抽象
+
+不像大多数的高层次的抽象,KubeVela 中的 `Application` 资源是一种积木风格的对象,而且它甚至没有固定的 schema。相反,它由构建模块,比如 app components(应用组件)和 traits(运维能力)等,构成。这种构建模块允许开发者通过自己定义的抽象,将平台的能力集成到此应用定义中。
+
+定义抽象和建模平台能力的构建模块是 `ComponentDefinition` 和 `TraitDefinition`。
+
+### ComponentDefinition
+
+`ComponentDefinition`,组件定义,是一个预先定义好的,用于可部署的工作负载的*模板*。它以一种声明式 API 资源的形式,包含了模板、参数化配置和工作负载特性等信息。
+
+因此,`Application` 抽象本质上定义了在目标集群中,用户想要如何来**实例化**给定 component definition。特别地,`.type` 字段引用安装了的 `ComponentDefinition` 的名字;`.properties` 字段是用户设置的用来实例化它的值。
+
+一些主要的 component definition 有:长期运行的 web service、一次性的 task 和 Redis 数据库。所有的 component definition 均应在平台提前安装,或由组件提供商,比如第三方软件供应商,来提供。
+
+### TraitDefinition
+
+可选的,每一个组件都有一个 `.traits` 部分。这个部分通过使用负载均衡策略、网络入口路由、自动扩容策略和升级策略等运维类行为,来增强组件实例。
+
+*Trait*,运维能力,是由平台提供的操作性质的特性。为了给组件实例附加运维能力,用户需要声明 `.type` 字段来引用特定的 `TraitDefinition` 和 `.properties`,以此来设置给定运维能力的属性值。相似地,`TraitDefinition` 同样允许用户来给这些操作特性定义*模板*。
+
+在 KubeVela 中,我们还将 component definition 和 trait definition 统称为 *“capability definitions”*。
+
+## Environment
+
+在将应用发布到生产环境之前,在 testing/staging workspace 中测试代码很重要。在 KubeVela 中,我们将这些 workspace 描述为 “deployment environments”,部署环境,或者简称为 “environments”,环境。每一个环境都有属于自己的配置(比如 domain、Kubernetes 集群、命名空间、配置数据和访问控制策略等),来允许用户创建不同的部署环境,比如 “test” 和 “production”。
+
+到目前为止,一个 KubeVela 的 `environment` 只映射到一个 Kubernetes 的命名空间。集群级环境正在开发中。
+
+## 总结
+
+KubeVela 的主要概念如下图所示:
+
+![alt](resources/concepts.png)
+
+## 架构
+
+KubeVela 的整体架构如下图所示:
+
+![alt](resources/arch.png)
+
+特别地,application controller 负责应用的抽象和封装(比如负责 `Application` 和 `Definition` 的 controller)。Rollout controller 负责以整个应用为单位处理渐进式 rollout 策略。多集群部署引擎,在流量切分和 rollout 特性的支持下,负责跨多集群和环境部署应用。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/application.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/application.md
new file mode 100644
index 00000000..e8ba2e77
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/application.md
@@ -0,0 +1,169 @@
+---
+title: 应用部署计划
+---
+
+KubeVela 背后的应用交付模型是 [Open Application Model](../platform-engineers/oam/oam-model),简称 OAM,其核心是将应用部署所需的所有组件和各项运维动作,描述为一个统一的、与基础设施无关的“部署计划”,进而实现在混合环境中进行标准化和高效率的应用交付。这个应用部署计划就是这一节所要介绍的 **Application** 对象,也是 OAM 模型的使用者唯一需要了解的 API。
+
+## 应用程序部署计划(Application)
+
+KubeVela 通过 YAML 文件的方式描述应用部署计划。一个典型的 YAML 样例如下:
+
+```yaml
+# sample.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  
components: + - name: frontend # 比如我们希望部署一个实现前端业务的 Web Service 类型组件 + type: webservice + properties: + image: nginx + traits: + - type: cpuscaler # 给组件设置一个可以动态调节 CPU 使用率的 cpuscaler 类型运维特征 + properties: + min: 1 + max: 10 + cpuPercent: 60 + - type: sidecar # 往运行时集群部署之前,注入一个做辅助工作的 sidecar + properties: + name: "sidecar-test" + image: "fluentd" + - name: backend + type: worker + properties: + image: busybox + cmd: + - sleep + - '1000' + policies: + - name: demo-policy + type: env-binding + properties: + envs: + - name: test + placement: + clusterSelector: + name: cluster-test + - name: prod + placement: + clusterSelector: + name: cluster-prod + workflow: + steps: + # 步骤名称 + - name: deploy-test-env + # 指定步骤类型 + type: deploy2env + properties: + # 指定策略名称 + policy: demo-policy + # 指定部署的环境名称 + env: test + - name: manual-approval + # 工作流内置 suspend 类型的任务,用于暂停工作流 + type: suspend + - name: deploy-prod-env + type: deploy2env + properties: + policy: demo-policy + env: prod +``` + +这里的字段对应着: + +- `apiVersion`:所使用的 OAM API 版本。 +- `kind`:种类。我们最经常用到的就是 Pod 了。 +- `metadata`:业务相关信息。比如这次要创建的是一个网站。 +- `Spec`:描述我们需要应用去交付什么,告诉 Kubernetes 做成什么样。这里我们放入 KubeVela 的 `components`、`policies` 以及 `workflow`。 +- `components`:一次应用交付部署计划所涵盖的全部组件。 +- `traits`:应用交付部署计划中每个组件独立的运维特征。 +- `policies`:作用于整个应用全局的部署策略。 +- `workflow`:自定义应用交付“执行过程”的工作流。 + +下面这张示意图诠释了它们之间的关系: +![image.png](../resources/concepts.png) + +先有一个总体的应用部署计划 Application。在此基础之上我们申明应用主体为可配置、可部署的组件(Components),并同时对应地去申明,期望每个组件要拥有的相关运维特征 (Traits),如果有需要,还可以申明自定义的执行流程 (Workflow)。 + +你使用 KubeVela 的时候,就像在玩“乐高“积木:先拿起一块大的“应用程序”,然后往上固定一块或几块“组件”,组件上又可以贴上任何颜色大小的“运维特征”。同时根据需求的变化,你随时可以重新组装,形成新的应用部署计划。 + +## 组件(Components) + +KubeVela 内置了常用的组件类型,使用 [KubeVela CLI](../install#3-安装-kubevela-cli) 命令查看: +``` +vela components +``` +返回结果: +``` +NAME NAMESPACE WORKLOAD DESCRIPTION +alibaba-rds default configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud RDS object +task vela-system jobs.batch Describes jobs that run code or a script to completion. +webservice vela-system deployments.apps Describes long-running, scalable, containerized services + that have a stable network endpoint to receive external + network traffic from customers. +worker vela-system deployments.apps Describes long-running, scalable, containerized services + that running at backend. They do NOT have network endpoint + to receive external network traffic. + +``` + +你可以继续使用 [Helm 组件](../end-user/components/helm)和[Kustomize 组件](../end-user/components/kustomize)等开箱即用的 KubeVela 内置组件来构建你的应用部署计划。 + +如果你是熟悉 Kubernetes 的平台管理员,你可以通过[自定义组件入门](../platform-engineers/components/custom-component)文档了解 KubeVela 是如何扩展任意类型的自定义组件的。特别的,[Terraform 组件](../platform-engineers/components/component-terraform) 就是 KubeVela 自定义组件能力的一个最佳实践,可以满足任意云资源的供应,只需少量云厂商特定配置(如鉴权、云资源模块等),即可成为一个开箱即用的云资源组件。 + +## 运维特征(Traits) + +KubeVela 也内置了常用的运维特征类型,使用 [KubeVela CLI](../install#3-安装-kubevela-cli) 命令查看: +``` +vela traits +``` +返回结果: +``` +NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION +annotations vela-system deployments.apps true Add annotations for your Workload. +cpuscaler vela-system webservice,worker false Automatically scale the component based on CPU usage. +ingress vela-system webservice,worker false Enable public web traffic for the component. +labels vela-system deployments.apps true Add labels for your Workload. +scaler vela-system webservice,worker false Manually scale the component. +sidecar vela-system deployments.apps true Inject a sidecar container to the component. 
+``` + +你可以继续阅读用户手册里的 [绑定运维特征](../end-user/traits/ingress) ,具体查看如何完成各种运维特征的开发。 + +如果你是熟悉 Kubernetes 的平台管理员,也可以了解 KubeVela 中[自定义运维特征](../platform-engineers/traits/customize-trait) 的能力,为你的用户扩展任意运维功能。 + +## 应用策略(Policy) + +应用策略(Policy)负责定义应用级别的部署特征,比如健康检查规则、安全组、防火墙、SLO、检验等模块。 +应用策略的扩展性和功能与运维特征类似,可以灵活的扩展和对接所有云原生应用生命周期管理的能力。相对于运维特征而言,应用策略作用于一个应用的整体,而运维特征作用于应用中的某个组件。 + +在本例中,我们设置了一个将应用部署到不同环境的策略。 + +## 工作流(Workflow) + +KubeVela 的工作流机制允许用户自定义应用部署计划中的步骤,粘合额外的交付流程,指定任意的交付环境。简而言之,工作流提供了定制化的控制逻辑,在原有 Kubernetes 模式交付资源(Apply)的基础上,提供了面向过程的灵活性。比如说,使用工作流实现暂停、人工验证、状态等待、数据流传递、多环境灰度、A/B 测试等复杂操作。 + +工作流是 KubeVela 实践过程中基于 OAM 模型的进一步探索和最佳实践,充分遵守 OAM 的模块化理念和可复用特性。每一个工作流模块都是一个“超级粘合剂”,可以将你任意的工具和流程都组合起来。使得你在现代复杂云原生应用交付环境中,可以通过一份申明式的配置,完整的描述所有的交付流程,保证交付过程的稳定性和便利性。 + +> 需要说明的是,工作流机制是基于“应用和环境”粒度工作的,它提供了“自定义交付过程”的强大能力。一旦定义工作流,就代表用户自己指定交付的执行过程,原有的组件部署过程会被取代。工作流并非必填能力,用户在不编写 Workflow 过程的情况下,依旧可以完成组件和运维策略的自动化部署。 + +在上面的例子中,我们已经可以看到一些工作流的步骤: + +- 这里使用了 `deploy2env` 和 `suspend` 类型的工作流步骤: + - `deploy2env` 类型可以根据用户定义的策略将应用部署到指定的环境。 + - 在第一步完成后,开始执行 `suspend` 类型的工作流步骤。该步骤会暂停工作流,我们可以查看集群中第一个组件的状态,当其成功运行后,再使用 `vela workflow resume website` 命令来继续该工作流。 + - 当工作流继续运行后,第三个步骤开始部署组件及运维特征。此时我们查看集群,可以看到所以资源都已经被成功部署。 + +关于工作流,你可以从[指定环境部署](../end-user/workflow/multi-env)这个工作流节点类型开始逐次了解更多 KubeVela 当前的内置工作流节点类型。 + +如果你是熟悉 Kubernetes 的平台管理员,你可以[学习创建自定义工作流节点类型](../platform-engineers/workflow/workflow),或者通过[设计文档](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/workflow_policy.md)了解工作流系统背后的设计和架构. + +## 下一步 + +后续步骤: + +- 加入 KubeVela 中文社区钉钉群,群号:23310022。 +- 阅读[**用户手册**](../end-user/components/helm),从 Helm 组件开始了解如何构建你的应用部署计划。 +- 阅读[**管理员手册**](../platform-engineers/oam/oam-model)了解 KubeVela 的扩展方式和背后的 OAM 模型原理。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/architecture.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/architecture.md new file mode 100644 index 00000000..e5f97177 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/core-concepts/architecture.md @@ -0,0 +1,38 @@ +--- +title: 系统架构 +--- + +KubeVela 在默认安装模式下,是一个只包含“控制平面”的架构,通过插件机制与各种运行时系统进行紧密配合。其中 KubeVela 核心控制器工作在一个单独的控制面 Kubernetes 集群。 +如下图所示,自上而下看,用户只与 KubeVela 所在的控制面 Kubernetes 集群发生交互。 + +![kubevela-arch](../resources/system-arch.png) + +## API 层 + +KubeVela API 是声明式的,并以应用程序为中心,用于构建应用交付平台和解决方案。同时,由于它基于原生的 Kubernetes CRD 构建,所以使用起来非常方便。 + +- 对于大多数不用关心底层细节的用户来说,你只需要: + - 查看 KubeVela 所提供的开箱即用的组件、运维特征、应用策略和工作流 + - 通过 YAML 文件来描述一个应用部署计划 +- 对于少数管理员来说: + - 内置新的组件、运维特征、应用策略和自定义工作流,提供给你的用户 + - 通常需要使用 YAML 文件和 CUE 语言来完成上述操作 + +## 控制平面层 + +控制平面层是 KubeVela 的系统核心。它既能帮你按需组装内置能力,或者通过注册各种能力插件满足交付应用的需要,同时在交付后全自动处理 API 请求并管理全局状态。 + +主要包含如下三个部分: + +- **核心控制器** 为整个系统提供核心控制逻辑,完成诸如编排应用和工作流、修订版本快照、垃圾回收等等基础逻辑 +- **新增内置能力** 由 X-Definitions 创建,注册应用交付所需要的内置能力。基于这个灵活性,我们可以自由地去集成开源生态的能力,按需自定义 +- **插件能力中心** Addon 让你可以调用生态下常见的能力,甚至直接省去了开发的时间和成本 + +## 执行层 + +最后,执行层是应用程序实际会运行的地方。KubeVela 允许你在统一的工作流中,部署和管理应用程序到 Kubernetes 集群,例如本地、托管云服务、IoT/边缘设备端等等。 + +## 下一步 + +- 学习如何用 KubeVela 来进行应用交付, 请查看[应用部署计划](./application)。 +- 阅读管理员手册学习 [开放应用模型](../platform-engineers/oam/oam-model)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/cap-center.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/cap-center.md new file mode 100644 index 00000000..0d045bfe --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/cap-center.md @@ -0,0 +1,133 @@ +--- +title: 能力管理 +--- + +在 KubeVela 中,开发者可以从任何包含 OAM 抽象文件的 GitHub 
仓库中安装更多的能力(例如:新 component 类型或者 traits )。我们将这些 GitHub 仓库称为 _Capability Centers_ 。 + +KubeVela 可以从这些仓库中自动发现 OAM 抽象文件,并且同步这些能力到我们的 KubeVela 平台中。 + +## 添加能力中心 + +新增且同步能力中心到 KubeVela: + +```bash +$ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry +successfully sync 1/1 from my-center remote center +Successfully configured capability center my-center and sync from remote + +$ vela cap center sync my-center +successfully sync 1/1 from my-center remote center +sync finished +``` + +现在,该能力中心 `my-center` 已经可以使用。 + +## 列出能力中心 + +你可以列出或者添加更多能力中心。 + +```bash +$ vela cap center ls +NAME ADDRESS +my-center https://github.com/oam-dev/catalog/tree/master/registry +``` + +## [可选] 删除能力中心 + +删除一个 + +```bash +$ vela cap center remove my-center +``` + +## 列出所有可用的能力中心 + +列出某个中心所有可用的能力。 + +```bash +$ vela cap ls my-center +NAME CENTER TYPE DEFINITION STATUS APPLIES-TO +clonesetservice my-center componentDefinition clonesets.apps.kruise.io uninstalled [] +``` + +## 从能力中心安装能力 + +我们开始从 `my-center` 安装新 component `clonesetservice` 到你的 KubeVela 平台。 + +你可以先安装 OpenKruise 。 + +```shell +helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz +``` + +从 `my-center` 中安装 `clonesetservice` component 。 + +```bash +$ vela cap install my-center/clonesetservice +Installing component capability clonesetservice +Successfully installed capability clonesetservice from my-center +``` + +## 使用新安装的能力 + +我们先检查 `clonesetservice` component 是否已经被安装到平台: + +```bash +$ vela components +NAME NAMESPACE WORKLOAD DESCRIPTION +clonesetservice vela-system clonesets.apps.kruise.io Describes long-running, scalable, containerized services + that have a stable network endpoint to receive external + network traffic from customers. If workload type is skipped + for any service defined in Appfile, it will be defaulted to + `webservice` type. +``` + +很棒!现在我们部署使用 Appfile 部署一个应用。 + +```bash +$ cat << EOF > vela.yaml +name: testapp +services: + testsvc: + type: clonesetservice + image: crccheck/hello-world + port: 8000 +EOF +``` + +```bash +$ vela up +Parsing vela appfile ... +Load Template ... + +Rendering configs for service (testsvc)... +Writing deploy config to (.vela/deploy.yaml) + +Applying application ... +Checking if app has been deployed... +App has not been deployed, creating a new deployment... 
+
+Updating:  core.oam.dev/v1alpha2, Kind=HealthScope in default
+✅ App has been deployed 🚀🚀🚀
+    Port forward: vela port-forward testapp
+    SSH: vela exec testapp
+    Logging: vela logs testapp
+    App status: vela status testapp
+    Service status: vela status testapp --svc testsvc
+```
+
+随后,该 cloneset 已经被部署到你的环境。
+
+```shell
+$ kubectl get clonesets.apps.kruise.io
+NAME      DESIRED   UPDATED   UPDATED_READY   READY   TOTAL   AGE
+testsvc   1         1         1               1       1       46s
+```
+
+## 删除能力
+
+> 注意,删除能力前请先确认它没有被应用引用。
+
+```bash
+$ vela cap uninstall my-center/clonesetservice
+Successfully uninstalled capability clonesetservice
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-logs.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-logs.md
new file mode 100644
index 00000000..df2be913
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-logs.md
@@ -0,0 +1,9 @@
+---
+title: 查看应用的日志
+---
+
+```bash
+$ vela logs testapp
+```
+
+执行如上命令后就能查看指定的 testapp 容器的日志。如果只有一个容器,则默认会查看该容器的日志。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-ref-doc.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-ref-doc.md
new file mode 100644
index 00000000..704c2324
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/check-ref-doc.md
@@ -0,0 +1,103 @@
+---
+title: The Reference Documentation Guide of Capabilities
+---
+
+In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
+
+This may sound challenging because every capability is a "plug-in" in KubeVela (even for the built-in ones); also, it is by design that KubeVela allows platform engineers to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that the documentation for the system is up to date?
+
+## Using Browser
+
+Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on its template in the definition, of course. This feature works for any capability: either built-in ones or your own workload types/traits.
+
+Thus, as an end user, the only thing you need to do is:
+
+```console
+$ vela show WORKLOAD_TYPE or TRAIT --web
+```
+
+This command will automatically open the reference documentation for the given workload type or trait in your default browser.
+
+### For Workload Types
+
+Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` workload type will show up immediately as below:
+
+![](../resources/vela_show_webservice.jpg)
+
+Note that in the section named `Specification`, it even provides you with a full sample for the usage of this workload type, with the fake name `my-service-name`.
+ +### For Traits + +Similarly, we can also do `$ vela show autoscale --web`: + +![](../resources/vela_show_autoscale.jpg) + +With these auto-generated reference documentations, we could easily complete the application description by simple copy-paste, for example: + +```yaml +name: helloworld + +services: + backend: # copy-paste from the webservice ref doc above + image: oamdev/testapp:v1 + cmd: ["node", "server.js"] + port: 8080 + cpu: "0.1" + + autoscale: # copy-paste and modify from autoscaler ref doc above + min: 1 + max: 8 + cron: + startAt: "19:00" + duration: "2h" + days: "Friday" + replicas: 4 + timezone: "America/Los_Angeles" +``` + +## Using Terminal + +This reference doc feature also works for terminal-only case. For example: + +```shell +$ vela show webservice +# Properties ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| port | Which port do you want customer traffic sent to | int | true | 80 | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ + + +## env ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ + + +### valueFrom ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ + + +#### secretKeyRef ++------+------------------------------------------------------------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++------+------------------------------------------------------------------+--------+----------+---------+ +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the secret to select from. 
Must be a valid secret key | string | true | | ++------+------------------------------------------------------------------+--------+----------+---------+ +``` + +> Note that for all the built-in capabilities, we already published their reference docs [here](https://kubevela.io/#/en/developers/references/) based on the same doc generation mechanism. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-app.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-app.md new file mode 100644 index 00000000..615a2014 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-app.md @@ -0,0 +1,85 @@ +--- +title: 在应用程序中配置数据或环境 +--- + +`vela` 提供 `config` 命令用于管理配置数据。 + +## `vela config set` + +```bash +$ vela config set test a=b c=d +reading existing config data and merging with user input +config data saved successfully ✅ +``` + +## `vela config get` + +```bash +$ vela config get test +Data: + a: b + c: d +``` + +## `vela config del` + +```bash +$ vela config del test +config (test) deleted successfully +``` + +## `vela config ls` + +```bash +$ vela config set test a=b +$ vela config set test2 c=d +$ vela config ls +NAME +test +test2 +``` + +## 在应用程序中配置环境变量 + +可以在应用程序中将配置数据设置为环境变量。 + +```bash +$ vela config set demo DEMO_HELLO=helloworld +``` + +将以下内容保存为 `vela.yaml` 到当前目录中: + +```yaml +name: testapp +services: + env-config-demo: + image: heroku/nodejs-hello-world + config: demo +``` + +然后运行: +```bash +$ vela up +Parsing vela.yaml ... +Loading templates ... + +Rendering configs for service (env-config-demo)... +Writing deploy config to (.vela/deploy.yaml) + +Applying deploy configs ... +Checking if app has been deployed... +App has not been deployed, creating a new deployment... +✅ App has been deployed 🚀🚀🚀 + Port forward: vela port-forward testapp + SSH: vela exec testapp + Logging: vela logs testapp + App status: vela status testapp + Service status: vela status testapp --svc env-config-demo +``` + +检查环境变量: + +``` +$ vela exec testapp -- printenv | grep DEMO_HELLO +DEMO_HELLO=helloworld +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-enviroments.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-enviroments.md new file mode 100644 index 00000000..b9a5fab7 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/config-enviroments.md @@ -0,0 +1,89 @@ +--- +title: 设置部署环境 +--- + +通过部署环境,可以为你的应用配置全局工作空间、email 以及域名。通常情况下,部署环境分为 `test` (测试环境)、`staging` (生产镜像环境)、`prod`(生产环境)等。 + +## 创建环境 + +```bash +$ vela env init demo --email my@email.com +environment demo created, Namespace: default, Email: my@email.com +``` + +## 检查部署环境元数据 + +```bash +$ vela env ls +NAME CURRENT NAMESPACE EMAIL DOMAIN +default default +demo * default my@email.com +``` + +默认情况下, 将会在 K8s 默认的命名空间 `default` 下面创建环境。 + +## 配置变更 + +你可以通过再次执行如下命令变更环境配置。 + +```bash +$ vela env init demo --namespace demo +environment demo created, Namespace: demo, Email: my@email.com +``` + +```bash +$ vela env ls +NAME CURRENT NAMESPACE EMAIL DOMAIN +default default +demo * demo my@email.com +``` + +**注意:部署环境只针对新创建的应用生效,之前创建的应用不会受到任何影响。** + +## [可选操作] 配置域名(前提:拥有 public IP) + +如果你使用的是云厂商提供的 k8s 服务并已为 ingress 配置了公网 IP,那么就可以在环境中配置域名来使用,之后你就可以通过该域名来访问应用,并且自动支持 mTLS 双向认证。 + +例如, 你可以使用下面的命令方式获得 ingress service 的公网 IP: + + +```bash +$ kubectl get svc -A | grep LoadBalancer +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx-ingress-lb LoadBalancer 172.21.2.174 123.57.10.233 
80:32740/TCP,443:32086/TCP 41d +``` + +命令响应结果 `EXTERNAL-IP` 列的值:123.57.10.233 就是公网 IP。 在 DNS 中添加一条 `A` 记录吧: + +``` +*.your.domain => 123.57.10.233 +``` + +如果没有自定义域名,那么你可以使用如 `123.57.10.233.xip.io` 作为域名,其中 `xip.io` 将会自动路由到前面的 IP `123.57.10.233`。 + +```bash +$ vela env init demo --domain 123.57.10.233.xip.io +environment demo updated, Namespace: demo, Email: my@email.com +``` + +### 在 Appfile 中使用域名 + +由于在部署环境中已经配置了全局域名, 就不需要在 route 配置中特别指定域名了。 + +```yaml +# in demo environment +services: + express-server: + ... + + route: + rules: + - path: /testapp + rewriteTarget: / +``` + +``` +$ curl http://123.57.10.233.xip.io/testapp +Hello World +``` + diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/exec-cmd.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/exec-cmd.md new file mode 100644 index 00000000..bcb91c04 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/exec-cmd.md @@ -0,0 +1,10 @@ +--- +title: 在容器中运行命令 +--- + +运行如下命令: +``` +$ vela exec testapp -- /bin/sh +``` + +这将打开一个 shell 访问 testapp 容器。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-autoscale.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-autoscale.md new file mode 100644 index 00000000..b9aa9b61 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-autoscale.md @@ -0,0 +1,238 @@ +--- +title: Automatically scale workloads by resource utilization metrics and cron +--- + + + +## Prerequisite +Make sure auto-scaler trait controller is installed in your cluster + +Install auto-scaler trait controller with helm + +1. Add helm chart repo for autoscaler trait + ```shell script + helm repo add oam.catalog http://oam.dev/catalog/ + ``` + +2. Update the chart repo + ```shell script + helm repo update + ``` + +3. Install autoscaler trait controller + ```shell script + helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait + +Autoscale depends on metrics server, please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) at the beginning. + +> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center), +> please install it if you can't find it in `vela traits`. + +## Setting cron auto-scaling policy +Introduce how to automatically scale workloads by cron. + +1. Prepare Appfile + + ```yaml + name: testapp + + services: + express-server: + # this image will be used in both build and deploy steps + image: oamdev/testapp:v1 + + cmd: ["node", "server.js"] + port: 8080 + + autoscale: + min: 1 + max: 4 + cron: + startAt: "14:00" + duration: "2h" + days: "Monday, Thursday" + replicas: 2 + timezone: "America/Los_Angeles" + ``` + +> The full specification of `autoscale` could show up by `$ vela show autoscale`. + +2. Deploy an application + + ``` + $ vela up + Parsing vela.yaml ... + Loading templates ... + + Rendering configs for service (express-server)... + Writing deploy config to (.vela/deploy.yaml) + + Applying deploy configs ... + Checking if app has been deployed... + App has not been deployed, creating a new deployment... + ✅ App has been deployed 🚀🚀🚀 + Port forward: vela port-forward testapp + SSH: vela exec testapp + Logging: vela logs testapp + App status: vela status testapp + Service status: vela status testapp --svc express-server + ``` + +3. 
Check the replicas and wait for the scaling to take effect + + Check the replicas of the application, there is one replica. + + ``` + $ vela status testapp + About: + + Name: testapp + Namespace: default + Created at: 2020-11-05 17:09:02.426632 +0800 CST + Updated at: 2020-11-05 17:09:02.426632 +0800 CST + + Services: + + - Name: express-server + Type: webservice + HEALTHY Ready: 1/1 + Traits: + - ✅ autoscale: type: cron replicas(min/max/current): 1/4/1 + Last Deployment: + Created at: 2020-11-05 17:09:03 +0800 CST + Updated at: 2020-11-05T17:09:02+08:00 + ``` + + Wait till the time clocks `startAt`, and check again. The replicas become to two, which is specified as + `replicas` in `vela.yaml`. + + ``` + $ vela status testapp + About: + + Name: testapp + Namespace: default + Created at: 2020-11-10 10:18:59.498079 +0800 CST + Updated at: 2020-11-10 10:18:59.49808 +0800 CST + + Services: + + - Name: express-server + Type: webservice + HEALTHY Ready: 2/2 + Traits: + - ✅ autoscale: type: cron replicas(min/max/current): 1/4/2 + Last Deployment: + Created at: 2020-11-10 10:18:59 +0800 CST + Updated at: 2020-11-10T10:18:59+08:00 + ``` + + Wait after the period ends, the replicas will be one eventually. + +## Setting auto-scaling policy of CPU resource utilization +Introduce how to automatically scale workloads by CPU resource utilization. + +1. Prepare Appfile + + Modify `vela.yaml` as below. We add field `services.express-server.cpu` and change the auto-scaling policy + from cron to cpu utilization by updating filed `services.express-server.autoscale`. + + ```yaml + name: testapp + + services: + express-server: + image: oamdev/testapp:v1 + + cmd: ["node", "server.js"] + port: 8080 + cpu: "0.01" + + autoscale: + min: 1 + max: 5 + cpuPercent: 10 + ``` + +2. Deploy an application + + ```bash + $ vela up + ``` + +3. Expose the service entrypoint of the application + + ``` + $ vela port-forward helloworld 80 + Forwarding from 127.0.0.1:80 -> 80 + Forwarding from [::1]:80 -> 80 + + Forward successfully! Opening browser ... + Handling connection for 80 + Handling connection for 80 + Handling connection for 80 + Handling connection for 80 + ``` + + On your macOS, you might need to add `sudo` ahead of the command. + +4. Monitor the replicas changing + + Continue to monitor the replicas changing when the application becomes overloaded. You can use Apache HTTP server + benchmarking tool `ab` to mock many requests to the application. + + ``` + $ ab -n 10000 -c 200 http://127.0.0.1/ + This is ApacheBench, Version 2.3 <$Revision: 1843412 $> + Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ + Licensed to The Apache Software Foundation, http://www.apache.org/ + + Benchmarking 127.0.0.1 (be patient) + Completed 1000 requests + ``` + + The replicas gradually increase from one to four. 
+
+   ```
+   $ vela status helloworld --svc frontend
+   About:
+
+     Name:       helloworld
+     Namespace:  default
+     Created at: 2020-11-05 20:07:21.830118 +0800 CST
+     Updated at: 2020-11-05 20:50:42.664725 +0800 CST
+
+   Services:
+
+     - Name: frontend
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ autoscale: type: cpu  cpu-utilization(target/current): 5%/10%  replicas(min/max/current): 1/5/2
+       Last Deployment:
+         Created at: 2020-11-05 20:07:23 +0800 CST
+         Updated at: 2020-11-05T20:50:42+08:00
+   ```
+
+   ```
+   $ vela status helloworld --svc frontend
+   About:
+
+     Name:       helloworld
+     Namespace:  default
+     Created at: 2020-11-05 20:07:21.830118 +0800 CST
+     Updated at: 2020-11-05 20:50:42.664725 +0800 CST
+
+   Services:
+
+     - Name: frontend
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ autoscale: type: cpu  cpu-utilization(target/current): 5%/14%  replicas(min/max/current): 1/5/4
+       Last Deployment:
+         Created at: 2020-11-05 20:07:23 +0800 CST
+         Updated at: 2020-11-05T20:50:42+08:00
+   ```
+
+   Stop the `ab` tool, and the replicas will eventually decrease to one.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-metrics.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-metrics.md
new file mode 100644
index 00000000..7115e8fc
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-metrics.md
@@ -0,0 +1,107 @@
+---
+title: Monitoring Application
+---
+
+
+If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with the `metrics` capability.
+
+## Prerequisite
+Make sure the metrics trait controller is installed in your cluster.
+
+Install the metrics trait controller with helm:
+
+1. Add helm chart repo for metrics trait
+   ```shell script
+   helm repo add oam.catalog  http://oam.dev/catalog/
+   ```
+
+2. Update the chart repo
+   ```shell script
+   helm repo update
+   ```
+
+3. Install metrics trait controller
+   ```shell script
+   helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
+   ```
+
+> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
+> please install it if you can't find it in `vela traits`.
+
+## Setting metrics policy
+Let's run [`christianhxc/gorandom:1.0`](https://github.com/christianhxc/prometheus-tutorial) as an example app.
+The app will emit random latencies as metrics.
+
+1. Prepare Appfile:
+
+   ```bash
+   $ cat <<EOF > vela.yaml
+   name: metricapp
+   services:
+     metricapp:
+       type: webservice
+       image: christianhxc/gorandom:1.0
+       port: 8080
+
+       metrics:
+         enabled: true
+         format: prometheus
+         path: /metrics
+         port: 0
+         scheme: http
+   EOF
+   ```
+
+> The full specification of `metrics` can be shown by `$ vela show metrics`.
+
+2. Deploy the application:
+
+   ```bash
+   $ vela up
+   ```
+
+3. Check status:
+
+   ```bash
+   $ vela status metricapp
+   About:
+
+     Name:       metricapp
+     Namespace:  default
+     Created at: 2020-11-11 17:00:59.436347573 -0800 PST
+     Updated at: 2020-11-11 17:01:06.511064661 -0800 PST
+
+   Services:
+
+     - Name: metricapp
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ metrics: Monitoring port: 8080, path: /metrics, format: prometheus, schema: http.
+       Last Deployment:
+         Created at: 2020-11-11 17:00:59 -0800 PST
+         Updated at: 2020-11-11T17:01:06-08:00
+   ```
+
+The metrics trait will automatically discover the port and labels to monitor if no parameters are specified.
+If more than one port is found, it will choose the first one by default.
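+Thanks to this auto-discovery behavior, a much shorter config is often enough. A minimal sketch (this shorter form is an assumption based on the defaults shown in the full example above, where `port: 0` already means "auto-detect"):
+
+```yaml
+# Minimal metrics section relying on auto-discovery of port and labels;
+# format/path/scheme fall back to their defaults.
+metrics:
+  enabled: true
+```
+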
+
+
+**(Optional) Verify that the metrics are collected on Prometheus**
+
+
+Expose the port of Prometheus dashboard:
+
+```bash
+kubectl --namespace monitoring port-forward `kubectl -n monitoring get pods -l prometheus=oam -o name` 9090
+```
+
+Then access the Prometheus dashboard via http://localhost:9090/targets
+
+![Prometheus Dashboard](../../resources/metrics.jpg)
+
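+Once the port-forward above is running, you can also verify the scrape target from the command line instead of the browser. A minimal sketch using Prometheus' standard HTTP API (the `up` series is a built-in Prometheus metric, so this assumes nothing about the app itself):
+
+```bash
+# List scrape targets and their health via the Prometheus HTTP API.
+curl -s 'http://localhost:9090/api/v1/targets' | head
+
+# Query the built-in `up` series: a value of 1 means the target is scraped successfully.
+curl -s 'http://localhost:9090/api/v1/query?query=up'
+```
+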
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-rollout.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-rollout.md
new file mode 100644
index 00000000..3f0dee17
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-rollout.md
@@ -0,0 +1,163 @@
+---
+title: Setting Rollout Strategy
+---
+
+> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
+> please install it if you can't find it in `vela traits`.
+
+The `rollout` section is used to configure a Canary strategy to release your app.
+
+Add rollout config under `express-server` along with a `route`.
+
+```yaml
+name: testapp
+services:
+  express-server:
+    type: webservice
+    image: oamdev/testapp:rolling01
+    port: 80
+
+    rollout:
+      replicas: 5
+      stepWeight: 20
+      interval: "30s"
+
+    route:
+      domain: "example.com"
+```
+
+> The full specification of `rollout` can be shown by `$ vela show rollout`.
+
+Apply this `appfile.yaml`:
+
+```bash
+$ vela up
+```
+
+You could check the status by:
+
+```bash
+$ vela status testapp
+About:
+
+  Name:       testapp
+  Namespace:  myenv
+  Created at: 2020-11-09 17:34:38.064006 +0800 CST
+  Updated at: 2020-11-10 17:05:53.903168 +0800 CST
+
+Services:
+
+  - Name: testapp
+    Type: webservice
+    HEALTHY Ready: 5/5
+    Traits:
+      - ✅ rollout: interval=5s
+        replicas=5
+        stepWeight=20
+      - ✅ route: Visiting URL: http://example.com  IP:
+
+    Last Deployment:
+      Created at: 2020-11-09 17:34:38 +0800 CST
+      Updated at: 2020-11-10T17:05:53+08:00
+```
+
+Visit this app by:
+
+```bash
+$ curl -H "Host:example.com" http:///
+Hello World -- Rolling 01
+```
+
+On day 2, suppose we have made some changes to our app, built a new image, and tagged it `oamdev/testapp:rolling02`.
+
+Let's update the appfile by:
+
+```yaml
+name: testapp
+services:
+  express-server:
+    type: webservice
+-   image: oamdev/testapp:rolling01
++   image: oamdev/testapp:rolling02
+    port: 80
+    rollout:
+      replicas: 5
+      stepWeight: 20
+      interval: "30s"
+    route:
+      domain: example.com
+```
+
+Apply this `appfile.yaml` again:
+
+```bash
+$ vela up
+```
+
+You could run `vela status` several times to see the instance rolling:
+
+```shell script
+$ vela status testapp
+About:
+
+  Name:       testapp
+  Namespace:  myenv
+  Created at: 2020-11-12 19:02:40.353693 +0800 CST
+  Updated at: 2020-11-12 19:02:40.353693 +0800 CST
+
+Services:
+
+  - Name: express-server
+    Type: webservice
+    HEALTHY express-server-v2:Ready: 1/1 express-server-v1:Ready: 4/4
+    Traits:
+      - ✅ rollout: interval=30s
+        replicas=5
+        stepWeight=20
+      - ✅ route: Visiting by using 'vela port-forward testapp --route'
+
+    Last Deployment:
+      Created at: 2020-11-12 17:20:46 +0800 CST
+      Updated at: 2020-11-12T19:02:40+08:00
+```
+
+You could then `curl` your app multiple times and see how the app is being rolled out following the Canary strategy:
+
+
+```bash
+$ curl -H "Host:example.com" http:///
+Hello World -- This is rolling 02
+$ curl -H "Host:example.com" http:///
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http:///
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http:///
+Hello World -- This is rolling 02
+$ curl -H "Host:example.com" http:///
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http:///
+Hello World -- This is rolling 02
+```
+
+
+**How `Rollout` works?**
+
+
+The `Rollout` trait implements a progressive release process to roll out your app following the [Canary strategy](https://martinfowler.com/bliki/CanaryRelease.html).
+
+In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators (like HTTP request success rate) at the same time.
+
+
+![alt](../../resources/traffic-shifting-analysis.png)
+
+In this sample, for every `10s`, `5%` of traffic will be shifted to the canary from the primary, until the traffic on the canary reaches `50%`. In the meantime, the number of canary instances will automatically scale to `replicas: 2`, as configured in the Appfile.
+
+
+Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. If it is promoted, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion finishes.
+
+![alt](../../resources/promotion.png)
+
+> Note: KubeVela's `Rollout` trait is implemented with the [Weaveworks Flagger](https://flagger.app/) operator.
+
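+To watch the shifting from the client side during a release, you can sample the endpoint in a loop and count which version answers. A rough sketch (it assumes the `example.com` route from the walkthrough above, and that the ingress is reachable on `localhost` as in a kind cluster; otherwise replace `localhost` with your ingress address):
+
+```bash
+# Send 100 requests and count how many land on each version;
+# the ratio should roughly follow the current canary weight.
+for i in $(seq 1 100); do
+  curl -s -H "Host:example.com" http://localhost/
+done | sort | uniq -c
+```
+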
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-route.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-route.md new file mode 100644 index 00000000..e692ec22 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/extensions/set-route.md @@ -0,0 +1,82 @@ +--- +title: Setting Routes +--- + +The `route` section is used to configure the access to your app. + +## Prerequisite +Make sure route trait controller is installed in your cluster + +Install route trait controller with helm + +1. Add helm chart repo for route trait + ```shell script + helm repo add oam.catalog http://oam.dev/catalog/ + ``` + +2. Update the chart repo + ```shell script + helm repo update + ``` + +3. Install route trait controller + ```shell script + helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait + + +> Note: route is one of the extension capabilities [installed from cap center](../cap-center), +> please install it if you can't find it in `vela traits`. + +## Setting route policy +Add routing config under `express-server`: + +```yaml +services: + express-server: + ... + + route: + domain: example.com + rules: + - path: /testapp + rewriteTarget: / +``` + +> The full specification of `route` could show up by `$ vela show route`. + +Apply again: + +```bash +$ vela up +``` + +Check the status until we see route is ready: +```bash +$ vela status testapp +About: + + Name: testapp + Namespace: default + Created at: 2020-11-04 16:34:43.762730145 -0800 PST + Updated at: 2020-11-11 16:21:37.761158941 -0800 PST + +Services: + + - Name: express-server + Type: webservice + HEALTHY Ready: 1/1 + Last Deployment: + Created at: 2020-11-11 16:21:37 -0800 PST + Updated at: 2020-11-11T16:21:37-08:00 + Routes: + - route: Visiting URL: http://example.com IP: +``` + +**In [kind cluster setup](../../install#kind)**, you can visit the service via localhost: + +> If not in kind cluster, replace 'localhost' with ingress address + +``` +$ curl -H "Host:example.com" http://localhost/testapp +Hello World +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/learn-appfile.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/learn-appfile.md new file mode 100644 index 00000000..0cc674a1 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/learn-appfile.md @@ -0,0 +1,249 @@ +--- +title: 学习使用 Appfile +--- + +`appfile` 的示例如下: + +```yaml +name: testapp + +services: + frontend: # 1st service + + image: oamdev/testapp:v1 + build: + docker: + file: Dockerfile + context: . + + cmd: ["node", "server.js"] + port: 8080 + + route: # trait + domain: example.com + rules: + - path: /testapp + rewriteTarget: / + + backend: # 2nd service + type: task # workload type + image: perl + cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] +``` + +在底层,`Appfile` 会从源码构建镜像,然后用镜像名称创建 `application` 资源 + +## Schema + +> 在深入学习 Appfile 的详细 schema 之前,我们建议你先熟悉 KubeVela 的[核心概念](../core-concepts/application) + +```yaml +name: _app-name_ + +services: + _service-name_: + # If `build` section exists, this field will be used as the name to build image. Otherwise, KubeVela will try to pull the image with given name directly. + image: oamdev/testapp:v1 + + build: + docker: + file: _Dockerfile_path_ # relative path is supported, e.g. "./Dockerfile" + context: _build_context_path_ # relative path is supported, e.g. "." 
+ + push: + local: kind # optionally push to local KinD cluster instead of remote registry + + type: webservice (default) | worker | task + + # detailed configurations of workload + ... properties of the specified workload ... + + _trait_1_: + # properties of trait 1 + + _trait_2_: + # properties of trait 2 + + ... more traits and their properties ... + + _another_service_name_: # more services can be defined + ... + +``` + +> 想了解怎样设置特定类型的 workload 或者 trait,请阅读[参考文档手册](./check-ref-doc) + +## 示例流程 + +在以下的流程中,我们会构建并部署一个 NodeJs 的示例 app。该 app 的源文件在[这里](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp)。 + +### 环境要求 + +- [Docker](https://docs.docker.com/get-docker/) 需要在主机上安装 docker +- [KubeVela](../install) 需要安装 KubeVela 并配置 + +### 1. 下载测试的 app 的源码 + +git clone 然后进入 testapp 目录: + +```bash +$ git clone https://github.com/oam-dev/kubevela.git +$ cd kubevela/docs/examples/testapp +``` + +这个示例包含 NodeJs app 的源码和用于构建 app 镜像的Dockerfile + +### 2. 使用命令部署 app + +我们将会使用目录中的 [vela.yaml](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp/vela.yaml) 文件来构建和部署 app + +> 注意:请修改 `oamdev` 为你自己注册的账号。或者你可以尝试 `本地测试方式`。 + +```yaml + image: oamdev/testapp:v1 # change this to your image +``` + +执行如下命令: + +```bash +$ vela up +Parsing vela.yaml ... +Loading templates ... + +Building service (express-server)... +Sending build context to Docker daemon 71.68kB +Step 1/10 : FROM mhart/alpine-node:12 + ---> 9d88359808c3 +... + +pushing image (oamdev/testapp:v1)... +... + +Rendering configs for service (express-server)... +Writing deploy config to (.vela/deploy.yaml) + +Applying deploy configs ... +Checking if app has been deployed... +App has not been deployed, creating a new deployment... +✅ App has been deployed 🚀🚀🚀 + Port forward: vela port-forward testapp + SSH: vela exec testapp + Logging: vela logs testapp + App status: vela status testapp + Service status: vela status testapp --svc express-server +``` + + +检查服务状态: + +```bash +$ vela status testapp + About: + + Name: testapp + Namespace: default + Created at: 2020-11-02 11:08:32.138484 +0800 CST + Updated at: 2020-11-02 11:08:32.138485 +0800 CST + + Services: + + - Name: express-server + Type: webservice + HEALTHY Ready: 1/1 + Last Deployment: + Created at: 2020-11-02 11:08:33 +0800 CST + Updated at: 2020-11-02T11:08:32+08:00 + Routes: + +``` + +#### 本地测试方式 + +如果你本地有运行的 [kind](../install) 集群,你可以尝试推送到本地。这种方法无需注册远程容器仓库。 + +在 `build` 中添加 local 的选项值: + +```yaml + build: + # push image into local kind cluster without remote transfer + push: + local: kind + + docker: + file: Dockerfile + context: . +``` + +然后部署到 kind: + +```bash +$ vela up +``` + +
+**(进阶) 检查渲染后的 manifests 文件**
+
+默认情况下,Vela 会把渲染出的最终 manifests 文件写入 `.vela/deploy.yaml`:
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: ApplicationConfiguration
+metadata:
+  name: testapp
+  namespace: default
+spec:
+  components:
+    - componentName: express-server
+---
+apiVersion: core.oam.dev/v1alpha2
+kind: Component
+metadata:
+  name: express-server
+  namespace: default
+spec:
+  workload:
+    apiVersion: apps/v1
+    kind: Deployment
+    metadata:
+      name: express-server
+    ...
+---
+apiVersion: core.oam.dev/v1alpha2
+kind: HealthScope
+metadata:
+  name: testapp-default-health
+  namespace: default
+spec:
+  ...
+```
+
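+除了阅读 `.vela/deploy.yaml`,也可以直接用 kubectl 确认这些资源已经应用到集群。下面的命令只是一个示意,假设应用部署在 default 命名空间:
+
+```bash
+# 查看 vela 应用后集群中实际生成的 OAM 资源(假设在 default 命名空间)
+kubectl get applicationconfiguration testapp -o yaml
+kubectl get component express-server -o yaml
+```
+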
+
+### [可选] 配置其他类型的 workload
+
+至此,我们已经成功部署了一个默认 workload 类型的 *[Web 服务](../end-user/components/cue/webservice)*。我们也可以添加 *[Task](../end-user/components/cue/task)* 类型的服务到同一个 app 中:
+
+```yaml
+services:
+  pi:
+    type: task
+    image: perl
+    cmd: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+
+  express-server:
+    ...
+```
+
+然后再次部署 Appfile 来升级应用:
+
+```bash
+$ vela up
+```
+
+恭喜!你已经学会了使用 `Appfile` 来部署应用了。
+
+## 下一步?
+
+更多关于 app 的操作:
+- [Check Application Logs](./check-logs)
+- [Execute Commands in Application Container](./exec-cmd)
+- [Access Application via Route](./port-forward)
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/port-forward.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/port-forward.md
new file mode 100644
index 00000000..b082ea0b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/port-forward.md
@@ -0,0 +1,23 @@
+---
+title: 端口转发
+---
+
+当你的 web 服务 Application 已经部署完成,就可以通过 `port-forward` 在本地访问它。
+
+```bash
+$ vela ls
+NAME            APP     WORKLOAD    TRAITS  STATUS    CREATED-TIME
+express-server  testapp webservice          Deployed  2020-09-18 22:42:04 +0800 CST
+```
+
+执行 `vela port-forward`,它还会直接为你打开浏览器:
+
+```bash
+$ vela port-forward testapp
+Forwarding from 127.0.0.1:8080 -> 80
+Forwarding from [::1]:8080 -> 80
+
+Forward successfully! Opening browser ...
+Handling connection for 8080
+Handling connection for 8080
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/README.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/README.md
new file mode 100644
index 00000000..58743825
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/README.md
@@ -0,0 +1,112 @@
+---
+title: 功能参考文档
+---
+
+在这篇文档中,我们将展示如何查看某个给定能力(比如 component 或者 trait)的详细文档。
+
+这听起来很有挑战:每种能力都是 KubeVela 的一个插件(内置能力也是如此);同时根据设计,KubeVela 允许平台管理员随时修改能力模板。在这种情况下,我们是否需要为每个新安装的能力手动编写文档?又如何确保这些文档始终是最新的?
+ +## 使用浏览器 + +实际上,作为其可扩展设计的重要组成部分, KubeVela 总是会根据模板的定义对每种 workload 类型或者 Kubernetes 集群注册的 trait 自动生成参考文档。此功能适用于任何功能:内置功能或者你自己的 workload 类型/ traits 。 +因此,作为一个终端用户,你唯一需要做的事情是: + +```console +$ vela show COMPONENT_TYPE or TRAIT --web +``` + +这条命令会自动在你的默认浏览器中打开对应的 component 类型或者 traint 参考文档。 + +以 `$ vela show webservice --web` 为例。 `Web Service` component 类型的详细的文档将立即显示如下: + +![](../../resources/vela_show_webservice.jpg) + +注意, 在名为 `Specification` 的部分中,它甚至为你提供了一种使用假名称 `my-service-name` 的这种 workload 类型。 + +同样的, 我们可以执行 `$ vela show autoscale`: + +![](../../resources/vela_show_autoscale.jpg) + +使用这些自动生成的参考文档,我们可以通过简单的复制粘贴轻松地完成应用程序描述,例如: + +```yaml +name: helloworld + +services: + backend: # 复制粘贴上面的 webservice 参考文档 + image: oamdev/testapp:v1 + cmd: ["node", "server.js"] + port: 8080 + cpu: "0.1" + + autoscale: # 复制粘贴并修改上面的 autoscaler 参考文档 + min: 1 + max: 8 + cron: + startAt: "19:00" + duration: "2h" + days: "Friday" + replicas: 4 + timezone: "America/Los_Angeles" +``` + +## 使用命令行终端 + +此参考文档功能也适用于仅有命令行终端的情况,例如: + +```shell +$ vela show webservice +# Properties ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| port | Which port do you want customer traffic sent to | int | true | 80 | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ + + +## env ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ + + +### valueFrom ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ + + +#### secretKeyRef ++------+------------------------------------------------------------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++------+------------------------------------------------------------------+--------+----------+---------+ +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the 
secret to select from. Must be a valid secret key | string | true | | ++------+------------------------------------------------------------------+--------+----------+---------+ +``` + +## 内置功能 + +注意,对于所有的内置功能,我们已经将它们的参考文档发布在下面,这些文档遵循同样的文档生成机制。 + + +- Workload Types + - [webservice](component-types/webservice) + - [task](component-types/task) + - [worker](component-types/worker) +- Traits + - [route](traits/route) + - [autoscale](traits/autoscale) + - [rollout](traits/rollout) + - [metrics](traits/metrics) + - [scaler](traits/scaler) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/task.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/task.md new file mode 100644 index 00000000..9da748a2 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/task.md @@ -0,0 +1,31 @@ +--- +title: Task +--- + +## 描述 + +描述运行完成代码或脚本的作业。 + +## 规范 + +列出 `Task` 类型 workload 的所有配置项。 + +```yaml +name: my-app-name + +services: + my-service-name: + type: task + image: perl + count: 10 + cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] +``` + +## 属性 + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + cmd | 容器中运行的命令 | []string | false | + count | 指定并行运行的 task 数量 | int | true | 1 + restart | 定义作业重启策略,值只能为 Never 或 OnFailure。 | string | true | Never + image | 你的服务使用的镜像 | string | true | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/webservice.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/webservice.md new file mode 100644 index 00000000..91b72d0d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/webservice.md @@ -0,0 +1,66 @@ +--- +title: Webservice +--- + +## 描述 + +描述长期运行的,可伸缩的,容器化的服务,这些服务具有稳定的网络接口,可以接收来自客户的外部网络流量。 如果对于 Appfile 中定义的任何服务,workload type 都被跳过,则默认使用“ webservice”类型。 + +## 规范 + +列出 `Webservice` workload 类型的所有配置项。 + +```yaml +name: my-app-name + +services: + my-service-name: + type: webservice # could be skipped + image: oamdev/testapp:v1 + cmd: ["node", "server.js"] + port: 8080 + cpu: "0.1" + env: + - name: FOO + value: bar + - name: FOO + valueFrom: + secretKeyRef: + name: bar + key: bar +``` + +## 属性 + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + cmd | 容器中运行的命令 | []string | false | + env | 使用环境变量定义参数 | [[]env](#env) | false | + image | 你的服务所使用到的镜像 | string | true | + port | 你要将用户流浪发送到哪个端口 | int | true | 80 + cpu | 用于服务的CPU单元数,例如0.5(0.5 CPU内核),1(1 CPU内核) | string | false | + + +### env + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + name | 环境变量名 | string | true | + value | 环境变量值 | string | false | + valueFrom | 指定此变量值的源 | [valueFrom](#valueFrom) | false | + + +#### valueFrom + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + secretKeyRef | 选择一个 pod 命名空间中的 secret 键 | [secretKeyRef](#secretKeyRef) | true | + + +##### secretKeyRef + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + name | 要从 pod 的命名空间中选择的 secret 的名字 | string | true | + key | 选择的 secret 键。 必须是有效的 secret 键 | string | true | + diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/worker.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/worker.md new file mode 100644 index 00000000..40d475d2 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/component-types/worker.md @@ -0,0 +1,28 @@ +--- +title: Worker +--- + +## 描述 + +描述在后台长期运行,可拓展的容器化服务。它们不需要网络端点来接收外部流量。 + +## 规格 + +列出 `Worker` 类型 workload 的所有配置项。 + +```yaml +name: my-app-name + +services: + my-service-name: + type: worker + image: oamdev/testapp:v1 + cmd: ["node", "server.js"] +``` + +## 属性 + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + cmd | 容器中运行的命令 | []string | false | + image | 你的服务使用的镜像 | string | true | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/cli.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/cli.md new file mode 100644 index 00000000..77353dc6 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/cli.md @@ -0,0 +1,28 @@ +--- +title: KubeVela CLI +--- + +### Auto-completion + +#### bash + +```bash +To load completions in your current shell session: +$ source <(vela completion bash) + +To load completions for every new session, execute once: +Linux: + $ vela completion bash > /etc/bash_completion.d/vela +MacOS: + $ vela completion bash > /usr/local/etc/bash_completion.d/vela +``` + +#### zsh + +```bash +To load completions in your current shell session: +$ source <(vela completion zsh) + +To load completions for every new session, execute once: +$ vela completion zsh > "${fpath[1]}/_vela" +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/dashboard.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/dashboard.md new file mode 100644 index 00000000..6e1d949a --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/dashboard.md @@ -0,0 +1,10 @@ + +# KubeVela Dashboard (WIP) + +KubeVela has a simple client side dashboard for you to interact with. The functionality is equivalent to the vela cli. + +```bash +$ vela dashboard +``` + +> NOTE: this feature is still under development. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/faq.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/faq.md new file mode 100644 index 00000000..2ed15378 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/devex/faq.md @@ -0,0 +1,301 @@ +--- +title: FAQ +--- + +- [对比 X](#对比-x) + * [KubeVela 和 Helm 的区别?](#kubevela-和-helm-有什么区别) + +- [问题](#问题) + - [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated) + - [Error: ScopeDefinition exists](#error-scopedefinition-exists) + - [You have reached your pull rate limit](#you-have-reached-your-pull-rate-limit) + - [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists) + - [如何修复问题: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#如何修复问题:mutatingwebhookconfiguration-mutating-webhook-configuration-exists) + +- [运维](#运维) + * [Autoscale:如何在多个 Kkubernetes 集群上开启 metrics server?](#autoscale-如何在多个-kubernetes-集群上开启-metrics-server-?) + +## 对比 X + +### KubeVela 和 Helm 有什么区别? 
+KubeVela 是一个平台构建工具,用于创建基于 Kubernetes 的易用、可扩展的应用交付/管理系统。KubeVela 将 Helm 作为模板引擎和应用包的标准,但 Helm 并不是 KubeVela 唯一支持的模板模块,另一个同样被优先支持的是 CUE。
+
+同时,KubeVela 被设计为 Kubernetes 的一个控制器(即工作在服务端),因此即使是其 Helm 部分,也会安装一个 Helm Operator。
+
+## 问题
+
+### Error: unable to create new content in namespace cert-manager because it is being terminated
+
+你可能偶尔会碰到如下问题,它发生在上一个 KubeVela 版本没有删除干净时。
+
+```
+$ vela install
+- Installing Vela Core Chart:
+install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
+Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
+failed to create resource
+helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
+	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
+...
+Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
+```
+
+稍事休息,然后在几秒内重试。
+
+```
+$ vela install
+- Installing Vela Core Chart:
+Vela system along with OAM runtime already exist.
+Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8)
+
+TYPE        CATEGORY  DESCRIPTION
+-task       workload  One-off task to run a piece of code or script to completion
+-webservice workload  Long-running scalable service with stable endpoint to receive external traffic
+-worker     workload  Long-running scalable backend worker without network endpoint
+-autoscale  trait     Automatically scale the app following certain triggers or metrics
+-metrics    trait     Configure metrics targets to be monitored for the app
+-rollout    trait     Configure canary deployment strategy to release the app
+-route      trait     Configure route policy to the app
+-scaler     trait     Manually scale the app
+
+- Finished successfully.
+``` + +手动应用所有 WorkloadDefinition 和 TraitDefinition manifests 以恢复所有功能。 + +``` +$ kubectl apply -f charts/vela-core/templates/defwithtemplate +traitdefinition.core.oam.dev/autoscale created +traitdefinition.core.oam.dev/scaler created +traitdefinition.core.oam.dev/metrics created +traitdefinition.core.oam.dev/rollout created +traitdefinition.core.oam.dev/route created +workloaddefinition.core.oam.dev/task created +workloaddefinition.core.oam.dev/webservice created +workloaddefinition.core.oam.dev/worker created + +$ vela workloads +Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0) + +TYPE CATEGORY DESCRIPTION ++task workload One-off task to run a piece of code or script to completion ++webservice workload Long-running scalable service with stable endpoint to receive external traffic ++worker workload Long-running scalable backend worker without network endpoint ++autoscale trait Automatically scale the app following certain triggers or metrics ++metrics trait Configure metrics targets to be monitored for the app ++rollout trait Configure canary deployment strategy to release the app ++route trait Configure route policy to the app ++scaler trait Manually scale the app + +NAME DESCRIPTION +task One-off task to run a piece of code or script to completion +webservice Long-running scalable service with stable endpoint to receive external traffic +worker Long-running scalable backend worker without network endpoint +``` + +### Error: ScopeDefinition exists + +你可能偶尔会碰到如下问题。它发生在存在一个老的 OAM Kubernetes Runtime 发行版时,或者你之前已经部署过 `ScopeDefinition` 。 + +``` +$ vela install + - Installing Vela Core Chart: + install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file + Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system" + rendered manifests contain a resource that already exists. Unable to continue with install + helm.sh/helm/v3/pkg/action.(*Install).Run + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274 + ... + Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system" +``` + +删除 `ScopeDefinition` "healthscopes.core.oam.dev" 然后重试. 
+ +``` +$ kubectl delete ScopeDefinition "healthscopes.core.oam.dev" +scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted + +$ vela install +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452 +WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capabilityWARN: handle trait template `manualscalertraits.core.oam.dev` failed +: no template found, you will unable to use this trait capabilityAutomatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0) + +TYPE CATEGORY DESCRIPTION ++task workload One-off task to run a piece of code or script to completion ++webservice workload Long-running scalable service with stable endpoint to receive external traffic ++worker workload Long-running scalable backend worker without network endpoint ++autoscale trait Automatically scale the app following certain triggers or metrics ++metrics trait Configure metrics targets to be monitored for the app ++rollout trait Configure canary deployment strategy to release the app ++route trait Configure route policy to the app ++scaler trait Manually scale the app + +- Finished successfully. +``` + +### You have reached your pull rate limit + +当你查看 Pod kubevela-vela-core 的日志并发现如下问题时。 + +``` +$ kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core +NAME READY STATUS RESTARTS AGE +kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m +``` + +>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by +>authenticating and upgrading: https://www.docker.com/increase-rate-limit + +你可以换成 github 的镜像仓库。 + +``` +$ docker pull ghcr.io/oam-dev/kubevela/vela-core:latest +``` + +### Warning: Namespace cert-manager exists + +如果碰到以下问题,则可能存在一个 `cert-manager` 发行版,其 namespace 及 RBAC 相关资源与 KubeVela 存在冲突。 + +``` +$ vela install +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +rendered manifests contain a resource that already exists. Unable to continue with install +helm.sh/helm/v3/pkg/action.(*Install).Run + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274 +... + /opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373 +Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +``` + +尝试如下步骤修复这个问题。 + +- 删除 `cert-manager` 发行版 +- 删除 `cert-manager` namespace +- 重装 KubeVela + +``` +$ helm delete cert-manager -n cert-manager +release "cert-manager" uninstalled + +$ kubectl delete ns cert-manager +namespace "cert-manager" deleted + +$ vela install +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379 +Automatically discover capabilities successfully ✅ (no changes) + +TYPE CATEGORY DESCRIPTION +task workload One-off task to run a piece of code or script to completion +webservice workload Long-running scalable service with stable endpoint to receive external traffic +worker workload Long-running scalable backend worker without network endpoint +autoscale trait Automatically scale the app following certain triggers or metrics +metrics trait Configure metrics targets to be monitored for the app +rollout trait Configure canary deployment strategy to release the app +route trait Configure route policy to the app +scaler trait Manually scale the app +- Finished successfully. +``` + +### 如何修复问题:MutatingWebhookConfiguration mutating-webhook-configuration exists? + +如果你部署的其他服务会安装 MutatingWebhookConfiguration mutating-webhook-configuration,则安装 KubeVela 时会碰到如下问题。 + +```shell +- Installing Vela Core Chart: +install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file +Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +rendered manifests contain a resource that already exists. 
Unable to continue with install +helm.sh/helm/v3/pkg/action.(*Install).Run + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274 +github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime + /home/runner/work/kubevela/kubevela/pkg/commands/system.go:259 +github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run + /home/runner/work/kubevela/kubevela/pkg/commands/system.go:162 +github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2 + /home/runner/work/kubevela/kubevela/pkg/commands/system.go:119 +github.com/spf13/cobra.(*Command).execute + /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850 +github.com/spf13/cobra.(*Command).ExecuteC + /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958 +github.com/spf13/cobra.(*Command).Execute + /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895 +main.main + /home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16 +runtime.main + /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203 +runtime.goexit + /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373 +Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +``` + +要解决这个问题,请从 [KubeVela releases](https://github.com/oam-dev/kubevela/releases) 将 KubeVela Cli `vela` 版本升级到 `v0.2.2` 以上。 + +## 运维 + +### Autoscale: 如何在多个 Kubernetes 集群上开启 metrics server ? 
+ +运维 Autoscale 依赖 metrics server,所以它在许多集群中都是开启的。请通过命令 `kubectl top nodes` 或 `kubectl top pods` 检查 metrics server 是否开启。 + +如果输出如下相似内容,那么 metrics 已经开启。 + +```shell +$ kubectl top nodes +NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% +cn-hongkong.10.0.1.237 288m 7% 5378Mi 78% +cn-hongkong.10.0.1.238 351m 8% 5113Mi 74% + +$ kubectl top pods +NAME CPU(cores) MEMORY(bytes) +php-apache-65f444bf84-cjbs5 0m 1Mi +wordpress-55c59ccdd5-lf59d 1m 66Mi +``` + +或者需要在你的 kubernetes 集群中手动开启 metrics 。 + +- ACK (Alibaba Cloud Container Service for Kubernetes) + +Metrics server 已经开启。 + +- ASK (Alibaba Cloud Serverless Kubernetes) + +Metrics server 已经在如下 [Alibaba Cloud console](https://cs.console.aliyun.com/) `Operations/Add-ons` 部分开启。 + +![](../../../resources/install-metrics-server-in-ASK.jpg) + +如果你有更多问题,请访问 [metrics server 排错指导](https://help.aliyun.com/document_detail/176515.html) 。 + +- Kind + +使用如下命令安装 metrics server,或者可以安装 [最新版本](https://github.com/kubernetes-sigs/metrics-server#installation)。 + +```shell +$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml +``` + +并且在通过 `kubectl edit deploy -n kube-system metrics-server` 加载的 yaml 文件中 `.spec.template.spec.containers` 下增加如下部分。 + +注意:这里只是一个示例,而不是用于生产级别的使用。 + +``` +command: +- /metrics-server +- --kubelet-insecure-tls +``` + +- MiniKube + +使用如下命令开启。 + +```shell +$ minikube addons enable metrics-server +``` + + +享受在你的应用中 [设置 autoscale](../../extensions/set-autoscale)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/restful-api/rest.mdx b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/restful-api/rest.mdx new file mode 100644 index 00000000..fa230177 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/restful-api/rest.mdx @@ -0,0 +1,10 @@ +--- +title: Restful API +--- +import useBaseUrl from '@docusaurus/useBaseUrl'; + + + KubeVela Restful API + \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/ingress.md new file mode 100644 index 00000000..0f3e31a4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/ingress.md @@ -0,0 +1,20 @@ +--- +title: Ingress +--- + +## Description + +Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage. + +## Specification + +List of all configuration options for a `Ingress` trait. + +```yaml``` + +## Properties + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- + domain | | string | true | + http | | map[string]int | true | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/scaler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/scaler.md new file mode 100644 index 00000000..0231e7fb --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/developers/references/traits/scaler.md @@ -0,0 +1,27 @@ +--- +title: Scaler +--- + +## 描述 + +配置你服务的副本数。 + +## 规范 + +列出 `Scaler` trait 的所有配置项。 + +```yaml +name: my-app-name + +services: + my-service-name: + ... 
+ scaler: + replicas: 100 +``` + +## 属性 + +名称 | 描述 | 类型 | 是否必须 | 默认值 +------------ | ------------- | ------------- | ------------- | ------------- + replicas | Workload 的副本数 | int | true | 1 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/binding-traits.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/binding-traits.md new file mode 100644 index 00000000..50b281b8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/binding-traits.md @@ -0,0 +1,310 @@ +--- +title: 绑定运维特征 // Deprecated +--- + +运维特征(Traits)也是应用部署计划的核心组成之一,它作用于组件层面,可以让你自由地给组件绑定各式各样的运维动作和策略。比如业务层面的配置网关、标签管理和容器注入(Sidecar),又或者是管理员层面的弹性扩缩容、灰度发布等等。 + + +与组件定义类似,KubeVela 提供了一系列开箱即用的运维特征能力,同时也允许你自定义扩展其它的运维能力。 + + +## 查看 KubeVela 的运维特征类型 + + +``` +$ vela traits +NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION +annotations vela-system * true Add annotations on K8s pod for your workload which follows + the pod spec in path 'spec.template'. +configmap vela-system * true Create/Attach configmaps on K8s pod for your workload which + follows the pod spec in path 'spec.template'. +env vela-system * false add env on K8s pod for your workload which follows the pod + spec in path 'spec.template.' +ingress vela-system false Enable public web traffic for the component. +ingress-1-20 vela-system false Enable public web traffic for the component, the ingress API + matches K8s v1.20+. +labels vela-system * true Add labels on K8s pod for your workload which follows the + pod spec in path 'spec.template'. +lifecycle vela-system * true Add lifecycle hooks for the first container of K8s pod for + your workload which follows the pod spec in path + 'spec.template'. +rollout vela-system false rollout the component +sidecar vela-system * true Inject a sidecar container to K8s pod for your workload + which follows the pod spec in path 'spec.template'. +... +``` + + +以其中比较常用的运维能力包括: + + +- `annotations` :给工作负载添加注释。 +- `labels`:给工作负载添加标签。 +- `env`: 为工作负载添加环境变量。 +- `configmap` :添加键值对配置文件。 +- `ingress` :配置一个公共网关。 +- `ingress-1-20` :配置一个基于 Kubernetes v1.20+ 版本的公共网关。 +- `lifecycle` :给工作负载增加生命周期“钩子”。 +- `rollout` :组件的灰度发布策略。 +- `sidecar`:给组件注入一个容器。 + + + +下面,我们将以几个典型的运维特征为例,介绍 KubeVela 运维特征的用法。 + + +## 使用 Ingress 给组件配置网关 + + +我们以给一个 Web Service 组件配置网关,来进行示例讲解。这个组件从 `crccheck/hello-world` 镜像中拉取过来,设置网关后,对外通过 `testsvc.example.com` 加上端口 8000 提供访问。 + + +为了便于你快速学习,请直接复制下面的 Shell 执行,会部署到集群中: + + +```shell +cat < 8000 +Forwarding from [::1]:8000 -> 8000 + +Forward successfully! Opening browser ... +Handling connection for 8000 +``` +访问服务: +```shell +curl -H "Host:testsvc.example.com" http://127.0.0.1:8000/ +Hello World + + + ## . 
+ ## ## ## == + ## ## ## ## ## === + /""""""""""""""""\___/ === + ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~ + \______ o _,/ + \ \ _,' + `'--.._\..--'' +``` + + +## 给组件添加标签和注释 + + +labels 和 annotations 运维特征,允许你将标签和注释附加到组件上,让我们在实现业务逻辑时,按需触发被标记的组件和获取注释信息。 + + +首先,我们准备一个应用部署计划的示例,请直接复制执行: + + +```shell +cat < + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/date.log; + i=$((i+1)); + sleep 1; + done + volumes: + - name: varlog + mountPath: /var/log + type: emptyDir + traits: + - type: sidecar + properties: + name: count-log + image: busybox + cmd: [ /bin/sh, -c, 'tail -n+1 -f /var/log/date.log'] + volumes: + - name: varlog + path: /var/log +# YAML 文件结束 +EOF +``` + + +使用 `vela ls` 查看应用是否部署成功: + + +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +vela-app-with-sidecar log-gen-worker worker sidecar running healthy 2021-08-29 22:07:07 +0800 CST +``` + + +成功后,先检查应用生成的工作负载情况: + + +``` +$ kubectl get pods -l app.oam.dev/component=log-gen-worker +NAME READY STATUS RESTARTS AGE +log-gen-worker-7bb65dcdd6-tpbdh 2/2 Running 0 45s +``` + + + + +最后查看 Sidecar 所输出的日志,可以看到读取日志的 sidecar 已经生效。 + + +``` +kubectl logs -f log-gen-worker-7bb65dcdd6-tpbdh count-log +``` + + +以上,我们以几个常见的运维特征为例介绍了如何绑定运维特征,更多的运维特征功能和参数,请前往运维特征系统中的内置运维特征查看。 + +## 自定义运维特征 + +当已经内置的运维特征无法满足需求,你可以自由的自定义运维能力,请查看管理员手册里的[自定义运维特征](../platform-engineers/traits/customize-trait)进行实现。 + +## 下一步 + +- [集成云资源](./components/cloud-services/provider-and-consume-cloud-services.md),了解如何集成各类云厂商的云资源 +- [灰度发布和扩缩容](./rollout-scaler) \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/component-observability.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/component-observability.md new file mode 100644 index 00000000..ca322c44 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/component-observability.md @@ -0,0 +1,5 @@ +--- +title: 组件可观测性 +--- + +WIP \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md new file mode 100644 index 00000000..8d4b23c7 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md @@ -0,0 +1,163 @@ +--- +title: 集成云资源 +--- + +在面向云开发逐渐成为范式的这个时代,我们希望集成来源不同、类型不同云资源的需求非常迫切。不管是最基本的对象存储、云数据库,还是更多的负载均衡等等, +也面临着混合云、多云等复杂环境所带来的挑战,而 KubeVela 都可以很好满足你的需要。 + +KubeVela 通过云资源组件(Component)和运维特征(Trait)里的资源绑定功能,高效安全地完成不同类型云资源的集成工作。目前你可以直接调用阿里云容器 +服务 Kubernetes 版(ACK )、阿里云对象存储服务(OSS)和阿里云关系型数据库服务(RDS)这些默认组件。同时在未来,更多新的云资源也会在社区的支撑下 +逐渐成为默认选项,让你标准化统一地去使用各种厂商的云资源。 + +> ⚠️ 请确认管理员已经安装了 [Terraform 插件 'terraform/provider-alicloud'](../../../install#4-【可选】安装插件). 
+ + +## 支持的云资源列表 +编排类型 | 云服务商 | 云资源 | 描述 +------------ | ------------- | ------------- | ------------- +Terraform | Alibaba Cloud | [ACK](./terraform/alibaba-ack) | 用于部署阿里云 ACK 的 Terraform Configuration 的 ComponentDefinition +| | | [EIP](./terraform/alibaba-eip) | 用于部署阿里云 EIP 的 Terraform Configuration 的 ComponentDefinition +| | | [OSS](./terraform/alibaba-oss) | 用于部署阿里云 OSS 的 Terraform Configuration 的 ComponentDefinition +| | | [RDS](./terraform/alibaba-rds) | 用于部署阿里云 RDS 的 Terraform Configuration 的 ComponentDefinition + +## Terraform + +KubeVela 支持的所有由 Terraform 编排的云资源如上所示,你也可以通过命令 `vela components --label type=terraform` 查看。 + +### 部署云资源 + +我们以 OSS bucket 为例展示如何部署云资源。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: provision-cloud-resource-sample +spec: + components: + - name: sample-oss + type: alibaba-oss + properties: + bucket: vela-website-0911 + acl: private + writeConnectionSecretToRef: + name: oss-conn +``` + +`alibaba-oss` 类型的组件的 properties 在上面文档有清晰的描述,包括每一个 property 的名字、类型、描述、是否必填和默认值。 + +部署应用程序并检查应用程序的状态。 + +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +provision-cloud-resource-sample sample-oss alibaba-oss running healthy Cloud resources are deployed and ready to use 2021-09-11 12:55:57 +0800 CST +``` + +当应用程序处于 `running` 和 `healthy`状态。我们可以在阿里云控制台或通过 [ossutil](https://partners-intl.aliyun.com/help/doc-detail/50452.htm) +检查OSS bucket 是否被创建。 + +```shell +$ ossutil ls oss:// +CreationTime Region StorageClass BucketName +2021-09-11 12:56:17 +0800 CST oss-cn-beijing Standard oss://vela-website-0911 +``` + +### 消费云资源 + +下面我们以阿里云关系型数据库(RDS)的例子,作为示例进行讲解。 + +首先请直接复制一个编写好的应用部署计划,在命令行中执行: + +```shell +cat < -o yaml` 查看应用的部署状态: + +```shell +$ kubectl get application website -o yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: website + ... # 省略非关键信息 +spec: + components: + - name: frontend + properties: + ... # 省略非关键信息 + type: webservice +status: + conditions: + - lastTransitionTime: "2021-08-28T10:26:47Z" + reason: Available + status: "True" + ... # 省略非关键信息 + type: HealthCheck + observedGeneration: 1 + ... 
# 省略非关键信息 + services: + - healthy: true + name: frontend + workloadDefinition: + apiVersion: apps/v1 + kind: Deployment + status: running +``` + +当我们看到 status-services-healthy 的字段为 true,并且 status 为 running 时,即表示整个应用交付成功。 + +如果 status 显示为 rendering,或者 healthy 为 false,则表示应用要么部署失败,要么还在部署中。请根据 `kubectl get application <应用 name> -o yaml` 中返回的信息对应地进行处理。 + +你也可以通过 vela 的 CLI 查看,使用如下命令: + +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +website frontend webservice running healthy 2021-08-28 18:26:47 +0800 CST +``` + +我们也看到 website APP 的 PHASE 为 running,同时 STATUS 为 healthy。 + + +## 属性说明 + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- | +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| port | Which port do you want customer traffic sent to | int | true | 80 | +| imagePullPolicy | Specify image pull policy for your service | string | false | | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | +| memory | Specifies the attributes of the memory resource required for the container. | string | false | | +| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | | +| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessProbe) | false | | +| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessProbe) | false | | +| imagePullSecrets | Specify image pull secrets for your service | []string | false | | + + +### readinessProbe + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. 
| int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +#### tcpSocket + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +###### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +### livenessProbe +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. 
| int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +###### tcpSocket +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +##### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +###### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +##### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +### volumes +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- | +| name | | string | true | | +| mountPath | | string | true | | +| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | | + + +#### env +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- | +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | + + +### valueFrom +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- | +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | + + +#### secretKeyRef + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- | +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the secret to select from. 
Must be a valid secret key | string | true | | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cue/worker.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cue/worker.md new file mode 100644 index 00000000..36cea5c6 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/cue/worker.md @@ -0,0 +1,167 @@ +--- +title: 后端服务 +--- + +后端服务(Worker)描述在后端运行的长时间运行、可扩展、容器化的服务。它不对外暴露访问端口。 + +## 如何使用 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-worker +spec: + components: + - name: myworker + type: worker + properties: + image: "busybox" + cmd: + - sleep + - "1000" +``` + +## 属性说明 + + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- | +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| imagePullPolicy | Specify image pull policy for your service | string | false | | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | +| memory | Specifies the attributes of the memory resource required for the container. | string | false | | +| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | | +| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessProbe) | false | | +| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessProbe) | false | | +| imagePullSecrets | Specify image pull secrets for your service | []string | false | | + + +### readinessProbe +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. 
| int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +##### tcpSocket +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +##### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +### livenessProbe +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. 
| int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +#### tcpSocket +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +#### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +#### volumes +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- | +| name | | string | true | | +| mountPath | | string | true | | +| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | | + + +#### env +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- | +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | + + +#### valueFrom +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- | +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | + + +#### secretKeyRef + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- | +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the secret to select from. 
Must be a valid secret key | string | true | | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/helm.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/helm.md new file mode 100644 index 00000000..1e4c417e --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/helm.md @@ -0,0 +1,142 @@ +--- +title: Helm 组件 +--- + +KubeVela 的 `helm` 组件满足了用户对接 Helm Chart 的需求,你可以通过 `helm` 组件部署任意来自 Helm 仓库、Git 仓库或者 OSS bucket 的现成 Helm Chart 软件包,并对其进行参数覆盖。 + +## 部署来自 Helm 仓库的 Chart + +来自 Helm 仓库的 Chart 包部署方式,我们以一个 redis-comp 组件为例。它是来自 [bitnami](https://charts.bitnami.com/) Helm 仓库的 Chart。Chart 类型为 `redis-cluster`,版本 `6.2.7`。 + +```shell +cat < --from-literal=secretkey= +secret/bucket-secret created +``` + +2. 部署 chart +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: helm + properties: + repoType: oss + # required if bucket is private + secretRef: bucket-secret + chart: ./chart/podinfo-5.1.3.tgz + url: oss-cn-beijing.aliyuncs.com + oss: + bucketName: definition-registry +``` + +上面的示例中,Application 中名为 bucket-comp 的组件交付了一个来自 endpoint 为 oss-cn-beijing.aliyuncs.com 的 OSS bucket definition-registry 的 chart。Chart 路径为 ./chart/podinfo-5.1.3.tgz。 + +## 部署来自 Git 仓库的 Chart + +| 参数 | 是否可选 | 含义 | 例子 | +| --------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------- | +| repoType | 必填 | 值为 git 标志 chart 来自 Git 仓库 | git | +| pullInterval | 可选 | 与 Git 仓库进行同步,与调谐 Helm release 的时间间隔 默认值5m(5分钟) | 10m | +| url | 必填 | Git 仓库地址 | https://github.com/oam-dev/terraform-controller | +| secretRef | 可选 | 存有拉取 Git 仓库所需凭证的 Secret 对象名,对 HTTP/S 基本鉴权 Secret 中必须包含 username 和 password 字段。对 SSH 形式鉴权必须包含 identity, identity.pub 和 known_hosts 字段 | sec-name | +| timeout | 可选 | 下载操作的超时时间,默认 20s | 60s | +| chart | 必填 | chart 存放路径(key) | ./chart/podinfo-5.1.3.tgz | +| version | 可选 | 在 Git 来源中,该参数不起作用 | | +| targetNamespace | 可选 | 安装 chart 的名字空间,默认由 chart 本身决定 | your-ns | +| releaseName | 可选 | 安装得到的 release 名称 | your-rn | +| values | 可选 | 覆写 chart 的 Values.yaml ,用于 Helm 渲染。 | 见来自 Git 仓库的例子 | +| git.branch | 可选 | Git 分支,默认为 master | dev | + +**使用示例** + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-delivering-chart +spec: + components: + - name: terraform-controller + type: helm + properties: + repoType: git + url: https://github.com/oam-dev/terraform-controller + chart: ./chart + git: + branch: master +``` + +上面的示例中,Application 中名为 terraform-controller 的组件交付了一个来自 https://github.com/oam-dev/terraform-controller 的 Github 仓库的 chart。Chart 路径为 ./chart,仓库分支为 master。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/kustomize.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/kustomize.md new file mode 100644 index 00000000..f8421897 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/kustomize.md @@ -0,0 +1,104 @@ +--- +title: Kustomize 组件 +--- + +KubeVela 的 [`kustomize` 组件](https://github.com/kubernetes-sigs/kustomize)满足了用户直接对接 Yaml 文件、文件夹作为组件制品的需求。无论你的 Yaml 文件/文件夹是存放在 Git 仓库还是对象存储库(如 OSS bucket),KubeVela 均能读取并交付。 + + +## 来自 OSS bucket + + +来自 OSS bucket 仓库的 YAML 文件夹部署,我们以一个名为 bucket-comp 的组件为例。组件对应的部署文件存放在云存储 OSS 
bucket 中,对应的 bucket 名称是 definition-registry。`kustomize.yaml` 文件来自 `oss-cn-beijing.aliyuncs.com` 这个地址,所在路径为 `./app/prod/`。
+
+
+1. (可选)如果你的 OSS bucket 需要身份验证,创建 Secret 对象:
+
+```shell
+$ kubectl create secret generic bucket-secret --from-literal=accesskey=<your-ak> --from-literal=secretkey=<your-sk>
+secret/bucket-secret created
+```
+
+2. 部署该组件
+
+```shell
+cat <//
+      git:
+        branch: master
+        path: ./app/dev/
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/more.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/more.md
new file mode 100644
index 00000000..707dc3da
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/components/more.md
@@ -0,0 +1,37 @@
+---
+title: 获取更多
+---
+
+KubeVela 中的模块完全是可定制和可插拔的,所以除了内置的组件之外,你还可以通过如下的方式添加更多你自己的组件类型。
+
+## 1. 从官方或第三方能力中心获取模块化能力
+
+可以通过 KubeVela 的 [Kubectl 插件](../../developers/references/kubectl-plugin#install-kubectl-vela-plugin)获取官方能力中心中发布的能力。
+
+### 查看能力中心的模块列表
+
+默认情况下,命令会从 KubeVela 官方维护的[能力中心](https://registry.kubevela.net)中获取模块化功能。
+
+例如,让我们尝试列出注册表中所有可用的组件,使用 `--discover` 这个标志位:
+
+```shell
+$ kubectl vela comp --discover
+Showing components from registry: oss://registry.kubevela.net
+NAME     	REGISTRY	DEFINITION      
+webserver	default 	deployments.apps
+```
+
+### 从能力中心安装模块
+
+然后你可以安装一个组件,如:
+
+```shell
+$ kubectl vela comp get webserver
+Installing component capability webserver
+Successfully install component: webserver
+```
+
+## 2. 自定义模块化能力
+
+* 阅读[管理模块化功能](../../platform-engineers/cue/definition-edit),学习对已有的模块化能力进行修改和编辑。
+* 从头开始[自定义模块化能力](../../platform-engineers/cue/advanced),并了解自定义组件的[更多用法和功能](../../platform-engineers/components/custom-component)。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/health.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/health.md
new file mode 100644
index 00000000..7fb8d39d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/health.md
@@ -0,0 +1,89 @@
+---
+title: 部署状态查看
+---
+
+The `HealthScope` allows you to define an aggregated health probe for all components in the same application.
+
+1. Create a health scope instance.
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: HealthScope
+metadata:
+  name: health-check
+  namespace: default
+spec:
+  probe-interval: 60
+  workloadRefs:
+  - apiVersion: apps/v1
+    kind: Deployment
+    name: express-server
+```
+2. Create an application that joins this health scope.
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: vela-app
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8080 # change port
+        cpu: 0.5 # add requests cpu units
+      scopes:
+        healthscopes.core.oam.dev: health-check
+```
+3. Check the reference of the aggregated health probe (`status.services[].scopes`).
+```shell
+kubectl get app vela-app -o yaml
+```
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: vela-app
+...
+status:
+...
+  services:
+  - healthy: true
+    name: express-server
+    scopes:
+    - apiVersion: core.oam.dev/v1alpha2
+      kind: HealthScope
+      name: health-check
+```
+4. Check the health scope detail.
+```shell
+kubectl get healthscope health-check -o yaml
+```
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: HealthScope
+metadata:
+  name: health-check
+...
+spec: + probe-interval: 60 + workloadRefs: + - apiVersion: apps/v1 + kind: Deployment + name: express-server +status: + healthConditions: + - componentName: express-server + diagnosis: 'Ready:1/1 ' + healthStatus: HEALTHY + targetWorkload: + apiVersion: apps/v1 + kind: Deployment + name: express-server + scopeHealthCondition: + healthStatus: HEALTHY + healthyWorkloads: 1 + total: 1 +``` + +It shows the aggregated health status for all components in this application. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/monitoring.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/monitoring.md new file mode 100644 index 00000000..28d5e142 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/debug/monitoring.md @@ -0,0 +1,4 @@ +--- +title: 查看监控 +--- + diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/labels.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/labels.md new file mode 100644 index 00000000..a85cec49 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/labels.md @@ -0,0 +1,74 @@ +--- +title: Labels and Annotations +--- + +We will introduce how to add labels and annotations to your Application. + +## List Traits + +```bash +$ kubectl get trait -n vela-system +NAME APPLIES-TO DESCRIPTION +annotations ["webservice","worker"] Add annotations for your Workload. +cpuscaler ["webservice","worker"] configure k8s HPA with CPU metrics for Deployment +ingress ["webservice","worker"] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage. +labels ["webservice","worker"] Add labels for your Workload. +scaler ["webservice","worker"] Configures replicas for your service by patch replicas field. +sidecar ["webservice","worker"] inject a sidecar container into your app +``` + +You can use `label` and `annotations` traits to add labels and annotations for your workload. + +## Apply Application + +Let's use `label` and `annotations` traits in your Application. + +```shell +# myapp.yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: myapp +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: labels + properties: + "release": "stable" + - type: annotations + properties: + "description": "web application" +``` + +Apply this Application. + +```shell +kubectl apply -f myapp.yaml +``` + +Check the workload has been created successfully. + +```bash +$ kubectl get deployments +NAME READY UP-TO-DATE AVAILABLE AGE +express-server 1/1 1 1 15s +``` + +Check the `labels` trait. + +```bash +$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.labels}' +{"app.oam.dev/component":"express-server","release": "stable"} +``` + +Check the `annotations` trait. + +```bash +$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.annotations}' +{"description":"web application"} +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/monitoring.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/monitoring.md new file mode 100644 index 00000000..aae812cb --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/monitoring.md @@ -0,0 +1,9 @@ +--- +title: Monitoring +--- + +TBD, Content Overview: + +1. 
We will move all installation scripts to a separate doc may be named Install Capability Providers (e.g. https://knative.dev/docs/install/install-extensions/)Install monitoring trait(along with prometheus/grafana controller). +2. Add monitoring trait into Application. +3. View it with grafana. \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/overview-end-user.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/overview-end-user.md new file mode 100644 index 00000000..ccd07b3a --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/overview-end-user.md @@ -0,0 +1,48 @@ +--- +title: 概述 +--- + +Here are some workitems on the roadmap + +## Embed rollout in an application + +We will support embedded rollout settings in an application. In this way, any changes to the +application will naturally roll out in a controlled manner instead of a sudden replace. + +## Add support to trait upgrade + +There are three trait related workitems that complement each other + +- we need to make sure that traits that work on the previous application still work on the new + application +- traits themselves also need a controlled way to upgrade instead of replacing the old in one shot +- rollout controller should suppress conflicting traits (like HPA/Scalar) during the rollout process + +## Add metrics based rolling checking + +We will integrate with prometheus and use the metrics generated by the application to control the +flow of the rollout. This part will be very similar to flagger. + +## Add traffic shifting support + +We will add traffic shifting based upgrading strategy like canary, A/B testing. We plan to support +Istio in our first version. This part will be very similar to flagger. + +## Support upgrading more than one component + +Currently, we can only upgrade one component at a time. We will support upgrading more components in +one application at once. + +## Support Helm Rollout strategy + +Currently, we only support upgrading k8s resources. We will support helm based workload in the +future. + +## Add more restrictions on what part of the rollout plan can be changed during rolling + +Here are some examples + +- the BatchPartition field cannot decrease beyond the current batch +- the RolloutBatches field can only change the part after the current batch +- the ComponentList field cannot be changed after rolling starts +- the RolloutStrategy/TargetSize/NumBatches cannot be changed diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/pipeline.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/pipeline.md new file mode 100644 index 00000000..9e307904 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/pipeline.md @@ -0,0 +1,8 @@ +--- +title: Build CI/CD Pipeline +--- + +TBD, Content Overview: + +1. install argo/tekton. +2. 
run the pipeline example: https://github.com/oam-dev/kubevela/tree/master/docs/examples/argo \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/envbinding.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/envbinding.md new file mode 100644 index 00000000..67c9bf1f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/envbinding.md @@ -0,0 +1,139 @@ +--- +title: 多环境部署 +--- + +本章节会介绍,如何使用环境差异化配置(env-binding)为应用提供差异化配置和环境调度策略。 + +## 背景 + +在日常开发中会经常将应用部署计划(Application)部署到不同的环境。例如,在开发环境中对应用部署计划进行调试,在生产环境中部署应用部署计划对外提供服务。针对不同的环境,应用部署计划需要有差异化的配置。 + +## 如何使用 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: example-app + namespace: test +spec: + components: + - name: hello-world-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: scaler + properties: + replicas: 1 + - name: data-worker + type: worker + properties: + image: busybox + cmd: + - sleep + - '1000000' + policies: + - name: example-multi-env-policy + type: env-binding + properties: + envs: + - name: staging + placement: # 选择要部署的目标集群 + clusterSelector: + name: cluster-staging + selector: # 选择要使用的组件 + components: + - hello-world-server + + - name: prod + placement: + clusterSelector: + name: cluster-prod + patch: # 差异化配置该环境中的组件 + components: + - name: hello-world-server + type: webservice + traits: + - type: scaler + properties: + replicas: 3 + + workflow: + steps: + # 部署 预发 环境 + - name: deploy-staging + type: deploy2env + properties: + policy: example-multi-env-policy + env: staging + + # 人工确认 + - name: manual-approval + type: suspend + + # 部署 生产 环境 + - name: deploy-prod + type: deploy2env + properties: + policy: example-multi-env-policy + env: prod +``` + +> 创建应用部署计划之前需要当前集群、目标集群中均有名为 `test` 的命名空间,你可以通过执行 `kubectl create ns test` 来创建。 + +```shell +kubectl apply -f app.yaml +``` + +应用部署计划创建之后,在 `test` 命名空间下会创建一个配置化的应用部署计划。同时在子集群的 `test` 命名空间中会出现相应的资源。 + +```shell +$ kubectl get app -n test +NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE +example-app hello-world-server webservice running 25s +``` + +如果你想使用 `env-binding` 在多集群环境下创建应用部署计划,请参考 **[应用多集群部署](../../case-studies/multi-cluster)** 。 + +## 参数说明 + +环境差异化配置应用策略的所有配置项 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :---------------------- | :----------------------------------------------------- | :------- | :------- | :------------------------------------------ | +| envs | 环境配置 | env 数组 | 是 | 无 | + +env 的属性 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :-------- | :----------------------------------------------------------- | :--------------- | :------- | :----- | +| name | 环境名称 | string | 是 | 无 | +| patch | 对应用部署计划中的组件差异化配置 | patch 结构体 | 是 | 无 | +| placement | 资源调度策略,选择将配置化的资源部署到指定的集群或命名空间上 | placement 结构体 | 是 | 无 | +| selector | 为应用部署计划选择需要使用的组件,默认为空代表使用所有组件 | selector 结构体 | 否 | 无 | + +patch 的属性 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :--------- | :------------------- | :------------- | :------- | :----- | +| components | 需要差异化配置的组件 | component 数组 | 是 | 无 | + +placement 的属性 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :---------------- | :---------------------------------------------------------------------------------------------------------- | :----------------------- | :------- | :----- | +| clusterSelector | 集群选择器,通过名称筛选集群 | clusterSelector 结构体 | 是 | 无 | + +selector 的属性 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :--------- | :------------------- | :------------- | :------- | :----- | +| components | 需要使用的组件名称列表 | string 数组 | 
否 | 无 | + +clusterSelector 的属性 + +| 名称 | 描述 | 类型 | 是否必须 | 默认值 | +| :----- | :------- | :---------------- | :------- | :----- | +| name | 集群名称 | string | 否 | 无 | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/health.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/health.md new file mode 100644 index 00000000..fd01ea4e --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/policies/health.md @@ -0,0 +1,114 @@ +--- +title: 健康状态检查 +--- + +本章节会介绍如何使用健康策略(health policy)为应用添加定期健康检查策略。 + + +## 背景 + +当一个应用部署成功后,用户经常需要观测应用的健康状态,以及每一个组件的健康状态。 +对于不健康的组件,及时让用户发现问题,并提供诊断信息供用户排查。 +设定部署状态检查策略可以让健康检查的流程与应用执行流程解耦,设定独立的检查周期,如每30秒检查一次。 + +## 健康策略 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-healthscope-unhealthy +spec: + components: + - name: my-server + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:v1 + port: 8080 + traits: + - type: ingress + properties: + domain: test.my.domain + http: + "/": 8080 + - name: my-server-unhealthy + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:boom # make it unhealthy + port: 8080 + policies: + - name: health-policy-demo + type: health + properties: + probeInterval: 5 + probeTimeout: 10 +``` + +示例中提供了一个包含有两个组件的应用,其中一个组件是健康的,另一个由于镜像版本错误,将会无法启动,从而被认为是不健康的。 + +示例中的健康策略配置如下,它提供两个选填参数:`probeInterval` 表示健康检查间隔,默认是30秒;`probeTimeout` 表示健康检查超时时间,默认是10秒。 + +```yaml +... + policies: + - name: health-policy-demo + type: health + properties: + probeInterval: 5 + probeTimeout: 10 +... +``` + +关于如何定义组件的健康检查规则,请参考 **[Status Write Back](../../platform-engineers/traits/status)**. + +最后我们可以从应用状态中观测应用的健康状态。 + +```yaml +... + services: + - healthy: true + message: 'Ready:1/1 ' + name: my-server + scopes: + - apiVersion: core.oam.dev/v1alpha2 + kind: HealthScope + name: health-policy-demo + namespace: default + uid: 1d54b5a0-d951-4f20-9541-c2d76c412a94 + traits: + - healthy: true + message: | + No loadBalancer found, visiting by using 'vela port-forward app-healthscope-unhealthy' + type: ingress + workloadDefinition: + apiVersion: apps/v1 + kind: Deployment + - healthy: false + message: 'Ready:0/1 ' + name: my-server-unhealthy + scopes: + - apiVersion: core.oam.dev/v1alpha2 + kind: HealthScope + name: health-policy-demo + namespace: default + uid: 1d54b5a0-d951-4f20-9541-c2d76c412a94 + workloadDefinition: + apiVersion: apps/v1 + kind: Deployment + status: running +... 
+
+```
+
+## 参数说明
+
+名称 | 描述 | 类型 | 是否必须 | 默认值
+:---------- | :----------- | :----------- | :----------- | :-----------
+probeInterval | 健康检查间隔时间(单位/秒) | int | 否 | 30
+probeTimeout | 健康检查超时时间(单位/秒) | int | 否 | 10
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/annotations-and-labels.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/annotations-and-labels.md
new file mode 100644
index 00000000..e35fe0be
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/annotations-and-labels.md
@@ -0,0 +1,60 @@
+---
+title: 标签管理
+---
+`labels` 和 `annotations` 运维特征,允许你将标签和注释附加到组件。通过标签和注释,我们在实现业务逻辑时,能十分灵活地根据它们来对应地调用组件。
+
+## 字段说明
+
+给 Pod 打注解:
+
+```shell
+$ vela show annotations
+# Properties
++-----------+-------------+-------------------+----------+---------+
+| NAME      | DESCRIPTION | TYPE              | REQUIRED | DEFAULT |
++-----------+-------------+-------------------+----------+---------+
+| -         |             | map[string]string | true     |         |
++-----------+-------------+-------------------+----------+---------+
+```
+
+给 Pod 打标签:
+
+```shell
+$ vela show labels
+# Properties
++-----------+-------------+-------------------+----------+---------+
+| NAME      | DESCRIPTION | TYPE              | REQUIRED | DEFAULT |
++-----------+-------------+-------------------+----------+---------+
+| -         |             | map[string]string | true     |         |
++-----------+-------------+-------------------+----------+---------+
+```
+
+字段类型均是字符串键值对。
+
+## 如何使用
+
+首先,我们准备一个应用部署计划 YAML 如下:
+
+```yaml
+# myapp.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: labels
+          properties:
+            "release": "stable"
+        - type: annotations
+          properties:
+            "description": "web application"
+```
+
+最终,标签和注解会打到工作负载底层的 Pod 资源上。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/autoscaler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/autoscaler.md
new file mode 100644
index 00000000..ab2d7e23
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/autoscaler.md
@@ -0,0 +1,42 @@
+---
+title: 自动扩缩容
+---
+
+本小节会介绍,如何为应用部署计划的一个待交付组件配置自动扩缩容。我们使用运维特征里的 `cpuscaler` 来完成开发。
+
+## 字段说明
+
+
+```
+$ vela show cpuscaler
+# Properties
++---------+----------------------------------------------------------------------------------+------+----------+---------+
+| NAME    | DESCRIPTION                                                                      | TYPE | REQUIRED | DEFAULT |
++---------+----------------------------------------------------------------------------------+------+----------+---------+
+| min     | Specify the minimal number of replicas to which the autoscaler can scale down   | int  | true     | 1       |
+| max     | Specify the maximum number of replicas to which the autoscaler can scale up     | int  | true     | 10      |
+| cpuUtil | Specify the average cpu utilization, for example, 50 means the CPU usage is 50% | int  | true     | 50      |
++---------+----------------------------------------------------------------------------------+------+----------+---------+
+```
+
+## 如何使用
+
+```yaml
+# sample.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend              # This is the component I want to deploy
+      type: webservice
+      properties:
+        image: nginx
+      traits:
+        - type: cpuscaler         # Automatically scale the component by CPU usage after deployed
+          properties:
+            min: 1
+            max: 10
+            cpuUtil: 60
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/ingress.md
new file mode 100644
index 00000000..a85b0ef4
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/ingress.md
@@ -0,0 +1,103 @@
+---
+title: 配置网关
+---
+
+## 开始之前
+
+> ⚠️ 需要你的集群已安装 [Ingress 控制器](https://kubernetes.github.io/ingress-nginx/deploy/)。
+
+## 字段说明
+
+
+```shell
+vela show ingress
+```
+
+```console
+# Properties
++--------+------------------------------------------------------------------------------+----------------+----------+---------+
+| NAME   | DESCRIPTION                                                                  | TYPE           | REQUIRED | DEFAULT |
++--------+------------------------------------------------------------------------------+----------------+----------+---------+
+| http   | Specify the mapping relationship between the http path and the workload port | map[string]int | true     |         |
+| domain | Specify the domain you want to expose                                        | string         | true     |         |
++--------+------------------------------------------------------------------------------+----------------+----------+---------+
+```
+
+## 如何使用
+
+```yaml
+# vela-app.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-app
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              "/": 8000
+```
+
+部署到集群后,观察应用的状态,直到 PHASE 变为 running,并且 HEALTHY 为 true:
+
+```bash
+kubectl get application first-vela-app -w
+```
+```console
+NAME             COMPONENT        TYPE         PHASE            HEALTHY   STATUS   AGE
+first-vela-app   express-server   webservice   healthChecking             14s
+first-vela-app   express-server   webservice   running          true               42s
+```
+
+如果你的集群带有云厂商提供的负载均衡机制,就可以通过 Application 的状态查看到访问 IP:
+
+```shell
+kubectl get application first-vela-app -o yaml
+```
+```console
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-app
+  namespace: default
+spec:
+...
+  services:
+  - healthy: true
+    name: express-server
+    traits:
+    - healthy: true
+      message: 'Visiting URL: testsvc.example.com, IP: 47.111.233.220'
+      type: ingress
+  status: running
+...
+```
+
+然后就能够通过这个 IP 来访问该应用程序了。
+
+```
+curl -H "Host:testsvc.example.com" http://<你的 IP 地址>/
+```
+```console
+<xmp>
+Hello World
+
+
+                                       ##         .
+ ## ## ## == + ## ## ## ## ## === + /""""""""""""""""\___/ === + ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~ + \______ o _,/ + \ \ _,' + `'--.._\..--'' + +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/kustomize-patch.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/kustomize-patch.md new file mode 100644 index 00000000..bb7781f0 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/kustomize-patch.md @@ -0,0 +1,221 @@ +--- +title: Kustomize 补丁 +--- + +本小节将介绍如何使用 trait 对 Kustomize 组件做差异化配置。 + +## 功能说明 + +| Trait | 简介 | +| ------------------------ | --------------------------------------------------------------------------- | +| kustomize-patch | 支持以 inline YAML 字符串形式支持 strategy Merge 和 JSON6902 格式的 patch。 | +| kustomize-json-patch | 支持以 JSON6902 格式对 kustomize 进行 patch | +| kustomize-strategy-merge | 支持以 YAML 格式对 kustomize 进行 patch | + + +### kustomize-patch 字段说明 + +kustomize-patch 类型的 trait 以字符串形式描述 patch 内容。 + +```shell +vela show kustomize-patch +``` + +输出如下: + +```shell +# Properties ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ +| patches | a list of StrategicMerge or JSON6902 patch to selected target | [[]patches](#patches) | true | | ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ + + +## patches ++--------+---------------------------------------------------+-------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------+---------------------------------------------------+-------------------+----------+---------+ +| patch | Inline patch string, in yaml style | string | true | | +| target | Specify the target the patch should be applied to | [target](#target) | true | | ++--------+---------------------------------------------------+-------------------+----------+---------+ + + +### target ++--------------------+-------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------------+-------------+--------+----------+---------+ +| name | | string | false | | +| group | | string | false | | +| version | | string | false | | +| kind | | string | false | | +| namespace | | string | false | | +| annotationSelector | | string | false | | +| labelSelector | | string | false | | ++--------------------+-------------+--------+----------+---------+ +``` + +### 如何使用 + +使用示例如下 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: kustomize + # ... 
omitted for brevity + traits: + - type: kustomize-patch + properties: + patches: + - patch: |- + apiVersion: v1 + kind: Pod + metadata: + name: not-used + labels: + app.kubernetes.io/part-of: test-app + target: + labelSelector: "app=podinfo" +``` + +上面的例子给原本的 kustomize 添加了一个 patch : 筛选出带有 app=podinfo 标签的 Pod 打了 patch。 + +### kustomize-json-patch 字段说明 + +可以以 [JSON6902 格式](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesjson6902/)进行 patch。先来了解其信息: + +```shell +vela show kustomize-json-patch +``` + +```shell +# Properties ++-------------+---------------------------+-------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------------+---------------------------+-------------------------------+----------+---------+ +| patchesJson | A list of JSON6902 patch. | [[]patchesJson](#patchesJson) | true | | ++-------------+---------------------------+-------------------------------+----------+---------+ + + +## patchesJson ++--------+-------------+-------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------+-------------+-------------------+----------+---------+ +| patch | | [patch](#patch) | true | | +| target | | [target](#target) | true | | ++--------+-------------+-------------------+----------+---------+ + + +#### target ++--------------------+-------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------------+-------------+--------+----------+---------+ +| name | | string | false | | +| group | | string | false | | +| version | | string | false | | +| kind | | string | false | | +| namespace | | string | false | | +| annotationSelector | | string | false | | +| labelSelector | | string | false | | ++--------------------+-------------+--------+----------+---------+ + + +### patch ++-------+-------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------+-------------+--------+----------+---------+ +| path | | string | true | | +| op | | string | true | | +| value | | string | false | | ++-------+-------------+--------+----------+---------+ +``` + +### 如何使用 + +使用示例如下: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: kustomize + # ... omitted for brevity + traits: + - type: kustomize-json-patch + properties: + patchesJson: + - target: + version: v1 + kind: Deployment + name: podinfo + patch: + - op: add + path: /metadata/annotations/key + value: value +``` +上面这个例子中给所有 Deployment 对象的 annotations 添加了一条:`key: value` + +### kustomize-strategy-merge 字段说明 + +可以以 格式进行 patch。先来了解其信息: + +```shell +vela show kustomize-json-patch +``` + +```shell +# Properties ++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+ +| patchesStrategicMerge | a list of strategicmerge, defined as inline yaml objects. 
| [[]patchesStrategicMerge](#patchesStrategicMerge) | true | | ++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+ + + +## patchesStrategicMerge ++-----------+-------------+--------------------------------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------+-------------+--------------------------------------------------------+----------+---------+ +| undefined | | map[string](null|bool|string|bytes|{...}|[...]|number) | true | | +``` + +### 如何使用 + +使用示例如下: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: kustomize + # ... omitted for brevity + traits: + - type: kustomize-strategy-merge + properties: + patchesStrategicMerge: + - apiVersion: apps/v1 + kind: Deployment + metadata: + name: podinfo + spec: + template: + spec: + serviceAccount: custom-service-account +``` + +上面这个例子中用 YAML 原生格式(即非内嵌字符串格式)对原本 kustomize 进行了patch。 + diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/more.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/more.md new file mode 100644 index 00000000..45e97a7d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/more.md @@ -0,0 +1,46 @@ +--- +title: 获取更多 +--- + +KubeVela 中的模块完全都是可定制和可插拔的,所以除了内置的运维能力之外,你还可以通过如下的方式添加更多你自己的运维能力。 + +## 1. 从官方或第三方能力中心获取模块化能力 + +可以通过 KubeVela 的 [Kubectl 插件](../../developers/references/kubectl-plugin#install-kubectl-vela-plugin)获取官方能力中心中发布的能力。 + +### 查看能力中心的模块列表 + +默认情况下,命令会从 KubeVela 官方维护的[能力中心](https://registry.kubevela.net)中获取模块化功能。 + +例如,让我们尝试列出注册表中所有可用的 trait: + +```shell +$ kubectl vela trait --discover +Showing traits from registry: https://registry.kubevela.net +NAME REGISTRY DEFINITION APPLIES-TO +service-account default [webservice worker] +env default [webservice worker] +flagger-rollout default canaries.flagger.app [webservice] +init-container default [webservice worker] +keda-scaler default scaledobjects.keda.sh [deployments.apps] +metrics default metricstraits.standard.oam.dev [webservice backend task] +node-affinity default [webservice worker] +route default routes.standard.oam.dev [webservice] +virtualgroup default [webservice worker] +``` +请注意,`--discover` 标志表示显示不在集群中的所有特征。 + +### 从能力中心安装模块 + +然后你可以安装一个 trait,如: + +```shell +$ kubectl vela trait get init-container +Installing component capability init-container +Successfully install trait: init-container +``` + +## 2. 
自定义模块化能力 + +* 阅读[管理模块化功能](../../platform-engineers/cue/definition-edit),学习对已有的模块化能力进行修改和编辑。 +* 从头开始[自定义模块化能力](../../platform-engineers/cue/advanced),并了解自定义运维能力的[更多用法和功能](../../platform-engineers/traits/customize-trait)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/rollout.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/rollout.md new file mode 100644 index 00000000..adbbaf95 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/traits/rollout.md @@ -0,0 +1,438 @@ +--- +title: 灰度发布和扩缩容 +--- + +灰度发布( Rollout )运维特征可以用于对工作负载的滚动发布和扩缩容。 + +## 如何使用 + +### 首次发布 + +应用下面的 YAML 来创建一个应用部署计划,该应用包含了一个使用了灰度发布运维特征的 webservice 类型的组件,并指定[组件版本](../version-control)名称为 express-server-v1 。如果你不指定,每次对组件的修改都会自动产生一个组件版本(ControllerRevision),组件版本名称的默认产生规则是:`<组件名>-<版本序号>`。 + +```shell +cat < + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/date.log; + i=$((i+1)); + sleep 1; + done + volumes: + - name: varlog + mountPath: /var/log + type: emptyDir + traits: + - type: sidecar + properties: + name: count-log + image: busybox + cmd: [ /bin/sh, -c, 'tail -n+1 -f /var/log/date.log'] + volumes: + - name: varlog + path: /var/log +``` + +编写完毕,在 YAML 文件所在路径下,部署这个应用: + +```shell +kubectl apply -f app.yaml +``` + +成功后,先检查应用生成的工作负载情况: + +```shell +$ kubectl get pod +NAME READY STATUS RESTARTS AGE +log-gen-worker-76945f458b-k7n9k 2/2 Running 0 90s +``` + +然后,查看 `sidecar` 的输出,日志显示正常。 + +```shell +$ kubectl logs -f log-gen-worker-76945f458b-k7n9k count-log +0: Fri Apr 16 11:08:45 UTC 2021 +1: Fri Apr 16 11:08:46 UTC 2021 +2: Fri Apr 16 11:08:47 UTC 2021 +3: Fri Apr 16 11:08:48 UTC 2021 +4: Fri Apr 16 11:08:49 UTC 2021 +5: Fri Apr 16 11:08:50 UTC 2021 +6: Fri Apr 16 11:08:51 UTC 2021 +7: Fri Apr 16 11:08:52 UTC 2021 +8: Fri Apr 16 11:08:53 UTC 2021 +9: Fri Apr 16 11:08:54 UTC 2021 +``` \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/version-control.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/version-control.md new file mode 100644 index 00000000..cda48b34 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/version-control.md @@ -0,0 +1,241 @@ +--- +title: 版本管理 +--- + +## 组件版本 + +你可以通过字段 `spec.components[*].externalRevision` 在 Application 中指定即将生成的组件实例版本名称。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: myapp +spec: + components: + - name: express-server + type: webservice + externalRevision: express-server-v1 + properties: + image: stefanprodan/podinfo:4.0.3 +``` + +如果没有主动指定版本名称,会根据规则 `-` 自动生成。 + +应用创建以后,你就可以看到系统中生成了 ControllerRevision 对象来记录组件版本。 + +* 获取组件实例的版本记录 + +```shell +$ kubectl get controllerrevision -l controller.oam.dev/component=express-server +NAME CONTROLLER REVISION AGE +express-server-v1 application.core.oam.dev/myapp 1 2m40s +express-server-v2 application.core.oam.dev/myapp 2 2m12s +``` + +你可以在[灰度发布](./traits/rollout)功能中进一步利用组件实例版本化以后的功能。 + +## 在应用中指定组件类型和运维功能的版本 + +当系统中的组件类型和运维功能变化时,也会产生对应的版本号。 + +* 查看组件类型的版本变化 + +```shell +$ kubectl get definitionrevision -l="componentdefinition.oam.dev/name=webservice" -n vela-system +NAME REVISION HASH TYPE +webservice-v1 1 3f6886d9832021ba Component +webservice-v2 2 b3b9978e7164d973 Component +``` + +* 查看运维能力的版本变化 + +```shell +$ kubectl get definitionrevision -l="trait.oam.dev/name=rollout" -n vela-system +NAME REVISION HASH TYPE +rollout-v1 1 e441f026c1884b14 Trait +``` + +你可以在应用中指定使用的组件类型、运维能力的版本,加上后缀 `@version` 
即可。在下面的例子里,指定 `webservice@v1` 就表示固定使用 `webservice` 这个组件类型的 v1 版本。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: server
+      type: webservice@v1
+      properties:
+        image: stefanprodan/podinfo:4.0.3
+```
+
+通过这种方式,系统管理员对组件类型和运维功能的变更就不会影响到你的应用;否则,应用每次更新都会使用最新版本的组件类型和运维功能。
+
+## 应用版本
+
+除了工作流字段之外,应用中的每个字段更新都会生成一个对应的版本快照。
+
+* 查看版本快照
+
+```shell
+$ kubectl get apprev -l app.oam.dev/name=myapp
+NAME       AGE
+myapp-v1   54m
+myapp-v2   53m
+myapp-v3   18s
+```
+
+你可以在版本快照中获得应用所关联的所有信息,包括应用的字段以及对应的组件类型、运维能力等。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ApplicationRevision
+metadata:
+  labels:
+    app.oam.dev/app-revision-hash: a74b4a514ba2fc08
+    app.oam.dev/name: myapp
+  name: myapp-v3
+  namespace: default
+  ...
+spec:
+  application:
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: myapp
+      namespace: default
+      ...
+    spec:
+      components:
+      - name: express-server
+        properties:
+          image: stefanprodan/podinfo:5.0.3
+        type: webservice@v1
+    ...
+  componentDefinitions:
+    webservice:
+      apiVersion: core.oam.dev/v1beta1
+      kind: ComponentDefinition
+      metadata:
+        name: webservice
+        namespace: vela-system
+        ...
+      spec:
+        schematic:
+          cue:
+            ...
+  traitDefinitions:
+    ...
+```
+
+## 应用版本对比
+
+部署前版本对比(Live-diff)功能可以让你不必真的操作运行时集群,就能在本地预览即将部署的版本和线上版本之间的差异,并进行确认。
+
+预览所提供的信息,会包括应用部署计划的新增、修改和移除等信息,同时也包括其中的组件和运维特征的相关信息。
+
+假设你的新应用部署计划如下,包含镜像的变化:
+
+```yaml
+# new-app.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice@v1
+      properties:
+        image: crccheck/hello-world # 变更镜像
+```
+
+然后运行版本对比(Live-diff)功能,使用如下命令(这里与前文列出的版本快照 `myapp-v1` 对比):
+
+```shell
+vela live-diff -f new-app.yaml -r myapp-v1
+```
+
+* 通过 `-r` 或 `--revision` 参数,指定要比较的版本名称。
+* 通过 `-c` 或 `--context` 指定对比差异的上下文行数。
+
+通过 `vela live-diff -h` 查看更多参数用法。
+
点击查看对比结果 + +```bash +--- +# Application (myapp) has been modified(*) +--- + apiVersion: core.oam.dev/v1beta1 + kind: Application + metadata: +- annotations: +- kubectl.kubernetes.io/last-applied-configuration: | +- {"apiVersion":"core.oam.dev/v1beta1","kind":"Application","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"components":[{"externalRevision":"express-server-v1","name":"express-server","properties":{"image":"stefanprodan/podinfo:4.0.3"},"type":"webservice"}]}} + creationTimestamp: null +- finalizers: +- - app.oam.dev/resource-tracker-finalizer + name: myapp + namespace: default + spec: + components: +- - externalRevision: express-server-v1 +- name: express-server ++ - name: express-server + properties: +- image: stefanprodan/podinfo:4.0.3 +- type: webservice ++ image: crccheck/hello-world ++ type: webservice@v1 + status: + rollout: + batchRollingState: "" + currentBatch: 0 + lastTargetAppRevision: "" + rollingState: "" + upgradedReadyReplicas: 0 + upgradedReplicas: 0 + +--- +## Component (express-server) has been modified(*) +--- + apiVersion: apps/v1 + kind: Deployment + metadata: +- annotations: +- kubectl.kubernetes.io/last-applied-configuration: | +- {"apiVersion":"core.oam.dev/v1beta1","kind":"Application","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"components":[{"externalRevision":"express-server-v1","name":"express-server","properties":{"image":"stefanprodan/podinfo:4.0.3"},"type":"webservice"}]}} ++ annotations: {} + labels: + app.oam.dev/appRevision: "" + app.oam.dev/component: express-server + app.oam.dev/name: myapp + app.oam.dev/resourceType: WORKLOAD +- workload.oam.dev/type: webservice ++ workload.oam.dev/type: webservice-v1 + name: express-server + namespace: default + spec: + selector: + matchLabels: + app.oam.dev/component: express-server + template: + metadata: + labels: + app.oam.dev/component: express-server + app.oam.dev/revision: KUBEVELA_COMPONENT_REVISION_PLACEHOLDER + spec: + containers: +- - image: stefanprodan/podinfo:4.0.3 ++ - image: crccheck/hello-world + name: express-server + ports: + - containerPort: 80 +``` + +
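+
+作为补充示例,你也可以与其他版本快照对比,并控制输出的上下文行数(版本名称以 `kubectl get apprev` 的实际输出为准,这里假设存在 `myapp-v2`):
+
+```shell
+vela live-diff -f new-app.yaml -r myapp-v2 -c 5
+```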
+
+未来,我们也计划将应用版本快照集成到 CLI/Dashboard 等工具中,以此实现快照恢复等更多功能。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/volumes.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/volumes.md
new file mode 100644
index 00000000..9fb4d862
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/volumes.md
@@ -0,0 +1,97 @@
+---
+title: 使用 Volumes
+---
+
+我们将会介绍如何在应用中使用基本的和定制化的 volumes。
+
+
+## 使用基本的 Volume
+
+`worker` 和 `webservice` 都可以使用多种通用的 volumes,包括:`persistentVolumeClaim`、`configMap`、`secret` 和 `emptyDir`。你应该使用名称属性来区分不同类型的 volumes。(为了简洁,下面用 `pvc` 代替 `persistentVolumeClaim`)
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend
+      type: webservice
+      properties:
+        image: nginx
+        volumes:
+          - name: "my-pvc"
+            mountPath: "/var/www/html1"
+            type: "pvc" # persistentVolumeClaim type volume
+            claimName: "myclaim"
+          - name: "my-cm"
+            mountPath: "/var/www/html2"
+            type: "configMap" # configMap type volume (specifying items)
+            cmName: "myCmName"
+            items:
+              - key: "k1"
+                path: "./a1"
+              - key: "k2"
+                path: "./a2"
+          - name: "my-cm-noitems"
+            mountPath: "/var/www/html22"
+            type: "configMap" # configMap type volume (not specifying items)
+            cmName: "myCmName2"
+          - name: "mysecret"
+            type: "secret" # secret type volume
+            mountPath: "/var/www/html3"
+            secretName: "mysecret"
+          - name: "my-empty-dir"
+            type: "emptyDir" # emptyDir type volume
+            mountPath: "/var/www/html4"
+```
+你需要确保使用的 volume 资源在集群中是可用的。
+
+## 使用自定义类型的 volume
+
+使用者可以自行扩展定制化类型的 volume,例如 AWS ElasticBlockStore、
+Azure disk、Alibaba Cloud OSS。
+为了使用定制化类型的 volume,我们需要先安装对应的 Trait。
+
+```shell
+$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/app-with-volumes/td-awsEBS.yaml
+```
+
+```shell
+$ kubectl vela show aws-ebs-volume
++-----------+----------------------------------------------------------------+--------+----------+---------+
+| NAME      | DESCRIPTION                                                    | TYPE   | REQUIRED | DEFAULT |
++-----------+----------------------------------------------------------------+--------+----------+---------+
+| name      | The name of volume.                                            | string | true     |         |
+| mountPath |                                                                | string | true     |         |
+| volumeID  | Unique id of the persistent disk resource.                     | string | true     |         |
+| fsType    | Filesystem type to mount.                                      | string | true     | ext4    |
+| partition | Partition on the disk to mount.                                | int    | false    |         |
+| readOnly  | ReadOnly here will force the ReadOnly setting in VolumeMounts.
| bool | true | false | ++-----------+----------------------------------------------------------------+--------+----------+---------+ +``` + +然后我们可以在应用的定义中使用 aws-ebs volumes。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-worker +spec: + components: + - name: myworker + type: worker + properties: + image: "busybox" + cmd: + - sleep + - "1000" + traits: + - type: aws-ebs-volume + properties: + name: "my-ebs" + mountPath: "/myebs" + volumeID: "my-ebs-id" +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-component.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-component.md new file mode 100644 index 00000000..309bbe8b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-component.md @@ -0,0 +1,132 @@ +--- +title: 部署组件和运维特征 +--- + +本节将介绍如何在工作流中部署组件和运维特征。 + +## 如何使用 + +部署如下应用部署计划: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + - name: nginx-server + type: webservice + properties: + image: nginx:1.21 + port: 80 + workflow: + steps: + - name: express-server + # 指定步骤类型 + type: apply-component + properties: + # 指定组件名称 + component: express-server + - name: manual-approval + # 工作流内置 suspend 类型的任务,用于暂停工作流 + type: suspend + - name: nginx-server + type: apply-component + properties: + component: nginx-server +``` + +在一些情况下,我们在部署某些组件前,需要暂停整个工作流,以等待人工审批。 + +在本例中,部署完第一个组件后,工作流会暂停。直到继续的命令被发起后,才开始部署第二个组件。 + +部署应用特征计划后,查看工作流状态: + +```shell +$ kubectl get app first-vela-workflow + +NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE +first-vela-workflow express-server webservice workflowSuspending 2s +``` + +可以通过 `vela workflow resume` 命令来使工作流继续执行。 + +> 有关于 `vela workflow` 命令的介绍,可以详见 [vela cli](../../cli/vela_workflow)。 + +```shell +$ vela workflow resume first-vela-workflow + +Successfully resume workflow: first-vela-workflow +``` + +查看应用部署计划,可以看到状态已经变为执行中: + +```shell +$ kubectl get app first-vela-workflow + +NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE +first-vela-workflow express-server webservice running true 10s +``` + +## 期望结果 + +查看应用的状态: + +```shell +kubectl get application first-vela-workflow -o yaml +``` + +所有步骤的状态均已成功: + +```yaml +... + status: + workflow: + ... 
+ stepIndex: 3 + steps: + - name: express-server + phase: succeeded + resourceRef: {} + type: apply-component + - name: manual-approval + phase: succeeded + resourceRef: {} + type: suspend + - name: nginx-server + phase: succeeded + resourceRef: {} + type: apply-component + suspend: false + terminated: true +``` + +确认集群中组件的状态: + +```shell +$ kubectl get deployment + +NAME READY UP-TO-DATE AVAILABLE AGE +express-server 1/1 1 1 3m28s +nginx-server 1/1 1 1 3s + +$ kubectl get ingress + +NAME CLASS HOSTS ADDRESS PORTS AGE +express-server testsvc.example.com 80 4m7s +``` + +可以看到,所有的组件及运维特征都被成功地部署到了集群中。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-remaining.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-remaining.md new file mode 100644 index 00000000..5b60d3d8 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/apply-remaining.md @@ -0,0 +1,163 @@ +--- +title: 部署剩余资源 +--- + +在一些情况下,我们希望先部署一个组件,等待其成功运行后,再一键部署剩余组件。KubeVela 提供了一个 `apply-remaining` 类型的工作流步骤,可以使用户方便的一键过滤不想要的资源,并部署剩余组件。 +本节将介绍如何在工作流中通过 `apply-remaining` 部署剩余资源。 + +## 如何使用 + +部署如下应用部署计划,其工作流中的步骤类型为 `apply-remaining`: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + - name: express-server2 + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + - name: express-server3 + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + - name: express-server4 + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + workflow: + steps: + - name: first-server + type: apply-component + properties: + component: express-server + - name: manual-approval + # 工作流内置 suspend 类型的任务,用于暂停工作流 + type: suspend + - name: remaining-server + # 指定步骤类型 + type: apply-remaining + properties: + # 指定需要被跳过的组件 + exceptions: + # 配置组件参数 + express-server: + # skipApplyWorkload 表明是否需要跳过组件的部署 + skipApplyWorkload: true + # skipAllTraits 表明是否需要跳过所有运维特征的部署 + skipAllTraits: true +``` + +## 期望结果 + +查看此时应用的状态: + +```shell +kubectl get application first-vela-workflow -o yaml +``` + +可以看到执行到了 `manual-approval` 步骤时,工作流被暂停执行了: + +```yaml +... + status: + workflow: + ... + stepIndex: 2 + steps: + - name: first-server + phase: succeeded + resourceRef: {} + type: apply-component + - name: manual-approval + phase: succeeded + resourceRef: {} + type: suspend + suspend: true + terminated: false +``` + +查看集群中组件的状态,当组件运行成功后,再继续工作流: + +```shell +$ kubectl get deployment + +NAME READY UP-TO-DATE AVAILABLE AGE +express-server 1/1 1 1 5s + +$ kubectl get ingress + +NAME CLASS HOSTS ADDRESS PORTS AGE +express-server testsvc.example.com 80 47s +``` + +继续该工作流: + +``` +vela workflow resume first-vela-workflow +``` + +重新查看应用的状态: + +```shell +kubectl get application first-vela-workflow -o yaml +``` + +可以看到所有步骤的状态均已成功: + +```yaml +... + status: + workflow: + ... 
+ stepIndex: 3 + steps: + - name: first-server + phase: succeeded + resourceRef: {} + type: apply-component + - name: manual-approval + phase: succeeded + resourceRef: {} + type: suspend + - name: remaining-server + phase: succeeded + resourceRef: {} + type: apply-remaining + suspend: false + terminated: true +``` + +重新查看集群中组件的状态: + +```shell +$ kubectl get deployment + +NAME READY UP-TO-DATE AVAILABLE AGE +express-server 1/1 1 1 110s +express-server2 1/1 1 1 6s +express-server3 1/1 1 1 6s +express-server4 1/1 1 1 6s +``` + +可以看到,所有的组件都被部署到了集群中,且没有被重复部署。 + +通过填写 `apply-remaining` 中提供的参数,可以使用户方便的过滤部署资源。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/component-dependency-parameter.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/component-dependency-parameter.md new file mode 100644 index 00000000..3388a432 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/component-dependency-parameter.md @@ -0,0 +1,99 @@ +--- +title: 应用组件间的依赖和参数传递 +--- + +本节将介绍如何在 KubeVela 中进行组件间的参数传递。 + +## 参数传递 + +在 KubeVela 中,可以在组件中通过 outputs 和 inputs 来指定要传输的数据。 + +### Outputs + +outputs 由 `name` 和 `valueFrom` 组成。`name` 声明了这个 output 的名称,在 input 中将通过 `name` 引用 output。 + +`valueFrom` 有以下几种写法: +1. 直接通过字符串表示值,如:`valueFrom: testString`。 +2. 通过表达式来指定值,如:`valueFrom: output.metadata.name`。注意,`output` 为固定内置字段,指向组件中被部署在集群里的资源。 +3. 通过 `+` 来任意连接以上两种写法,最终值是计算后的字符串拼接结果,如:`valueFrom: output.metadata.name + "testString"`。 + +### Inputs + +inputs 由 `from` 和 `parameterKey` 组成。`from` 声明了这个 input 从哪个 output 中取值,`parameterKey` 为一个表达式,将会把 input 取得的值赋给对应的字段。 + +如: +1. 指定 inputs: + +```yaml +... +- name: wordpress + type: helm + inputs: + - from: mysql-svc + parameterKey: properties.values.externalDatabase.host +``` + +2. 经过渲染后,该组件的 `properties.values.externalDatabase.host` 字段中会被赋上值,效果如下所示: + +```yaml +... 
+- name: wordpress + type: helm + properties: + values: + externalDatabase: + host: +``` + +## 如何使用 + +假设我们希望在本地启动一个 WordPress,而这个 Wordpress 的数据存放在一个 MySQL 数据库中,我们需要将这个 MySQL 的地址传递给 WordPress。 + +部署如下应用部署计划: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: wordpress-with-mysql + namespace: default +spec: + components: + - name: mysql + type: helm + outputs: + # 将 service 地址作为 output + - name: mysql-svc + valueFrom: output.metadata.name + ".default.svc.cluster.local" + properties: + repoType: helm + url: https://charts.bitnami.com/bitnami + chart: mysql + version: "8.8.2" + values: + auth: + rootPassword: mypassword + - name: wordpress + type: helm + inputs: + # 将 mysql 的 service 地址赋值到 host 中 + - from: mysql-svc + parameterKey: properties.values.externalDatabase.host + properties: + repoType: helm + url: https://charts.bitnami.com/bitnami + chart: wordpress + version: "12.0.3" + values: + mariadb: + enabled: false + externalDatabase: + user: root + password: mypassword + database: mysql + port: 3306 +``` + +## 期望结果 + +WordPress 已被成功部署,且与 MySQL 正常连接。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/multi-env.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/multi-env.md new file mode 100644 index 00000000..9de9c7d5 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/multi-env.md @@ -0,0 +1,175 @@ +--- +title: 多环境交付 +--- + +本节将介绍如何在工作流中使用多环境功能。 + +在多集群的情况下,我们首先需要在测试集群部署应用,等到测试集群的应用一切正常后,再部署到生产集群。KubeVela 提供了一个 `multi-env` 类型的工作流步骤,可以帮助用户方便的管理多环境配置。你可以大致了解它的工作原理,如下所示: + +![alt](../../resources/workflow-multi-env.png) + +本节将介绍如何在工作流使用 `multi-env` 来管理多环境。 + +> 在阅读本部分之前,请确保你已经学习了 KubeVela 中的 [Env Binding](../policies/envbinding)。 + +## 如何使用 + +部署如下应用部署计划,其工作流中的步骤类型为 `multi-env`: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: multi-env-demo + namespace: default +spec: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.21 + port: 80 + + policies: + - name: env + type: env-binding + properties: + created: false + envs: + - name: test + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + clusterSelector: + labels: + purpose: test + - name: prod + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + clusterSelector: + labels: + purpose: prod + + workflow: + steps: + - name: deploy-test-server + # 指定步骤类型 + type: deploy2env + properties: + # 指定组件名称 + component: nginx-server + # 指定 policy 名称 + policy: env + # 指定 policy 中的 env 名称 + env: test + - name: manual-approval + # 工作流内置 suspend 类型的任务,用于暂停工作流 + type: suspend + - name: deploy-prod-server + type: deploy2env + properties: + component: nginx-server + policy: env + env: prod +``` + +## 期望结果 + +查看此时应用的状态: + +```shell +kubectl get application multi-env-demo -o yaml +``` + +可以看到执行到了 `manual-approval` 步骤时,工作流被暂停执行了: + +```yaml +... + status: + workflow: + ... 
+ stepIndex: 2 + steps: + - name: deploy-test-server + phase: succeeded + resourceRef: {} + type: deploy2env + - name: manual-approval + phase: succeeded + resourceRef: {} + type: suspend + suspend: true + terminated: false +``` + +切换到 `test` 集群,并查看应用的状态: + +```shell +$ kubectl get deployment + +NAME READY UP-TO-DATE AVAILABLE AGE +nginx-server 1/1 1 1 1m10s +``` + +测试集群的应用一切正常后,使用命令继续工作流: + +```shell +$ vela workflow resume multi-env-demo + +Successfully resume workflow: multi-env-demo +``` + +重新查看应用的状态: + +```shell +kubectl get application multi-env-demo -o yaml +``` + +可以看到所有步骤的状态均已成功: + +```yaml +... + status: + workflow: + ... + stepIndex: 3 + steps: + - name: deploy-test-server + phase: succeeded + resourceRef: {} + type: deploy2env + - name: manual-approval + phase: succeeded + resourceRef: {} + type: suspend + - name: deploy-prod-server + phase: succeeded + resourceRef: {} + type: deploy2env + suspend: false + terminated: true +``` + +在 `prod` 集群中,查看应用的状态: + +```shell +$ kubectl get deployment + +NAME READY UP-TO-DATE AVAILABLE AGE +nginx-server 1/1 1 1 1m10s +``` + +可以看到,使用最新配置的组件已经被成功地部署到了两个集群中。 + +通过 `deploy2env`,我们可以轻松地在多个环境中管理应用。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/webhook-notification.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/webhook-notification.md new file mode 100644 index 00000000..f0a89dd3 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/end-user/workflow/webhook-notification.md @@ -0,0 +1,82 @@ +--- +title: 使用 Webhook 发送通知 +--- + +在一些情况下,当我们使用工作流部署应用前后,希望能够得到部署的通知。KubeVela 提供了与 Webhook 集成的能力,支持用户在工作流中向钉钉或者 Slack 发送通知。 + +本节将介绍如何在工作流中通过 `webhook-notification` 发送 Webhook 通知。 + +## 参数说明 + +| 参数 | 类型 | 说明 | +| :---: | :--: | :-- | +| slack | Object | 可选值,如果需要发送 Slack 信息,则需填写其 url 及 message | +| slack.url | String | 必填值,Slack 的 Webhook 地址 | +| slack.message | Object | 必填值,需要发送的 Slack 信息,请符合 [Slack 信息规范](https://api.slack.com/reference/messaging/payload) | +| dingding | Object | 可选值,如果需要发送钉钉信息,则需填写其 url 及 message | +| dingding.url | String | 必填值,钉钉的 Webhook 地址 | +| dingding.message | Object | 必填值,需要发送的钉钉信息,请符合 [钉钉信息规范](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw) | + +## 如何使用 + +部署如下应用部署计划,在部署组件前后,都有一个 `webhook-notification` 步骤发送通知: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: dingtalk-message + # 指定步骤类型 + type: webhook-notification + properties: + dingding: + # 钉钉 Webhook 地址,请查看:https://developers.dingtalk.com/document/robots/custom-robot-access + url: + # 具体要发送的信息详情 + message: + msgtype: text + text: + content: 开始运行工作流 + - name: application + type: apply-component + properties: + component: express-server + outputs: + - from: app-status + exportKey: output.status.conditions[0].message + "工作流运行完成" + - name: slack-message + type: webhook-notification + inputs: + - name: app-status + parameterKey: properties.slack.message.text + properties: + slack: + # Slack Webhook 地址,请查看:https://api.slack.com/messaging/webhooks + url: + # 具体要发送的信息详情,会被 input 中的值覆盖 + # message: + # text: condition message + 工作流运行完成 +``` + +## 期望结果 + +通过与 Webhook 的对接,可以看到,在工作流中的组件部署前后,都能在对应的群聊中看到相应的信息。 + +通过 `webhook-notification` ,可以使用户方便的与 Webhook 
对接消息通知。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/getting-started/introduction.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/getting-started/introduction.md
new file mode 100644
index 00000000..0e2b5bf7
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/getting-started/introduction.md
@@ -0,0 +1,72 @@
+---
+title: KubeVela 简介
+slug: /
+
+---
+
+
+## 背景
+
+云原生技术的发展趋势正在朝着利用 Kubernetes 作为公共抽象层来实现高度一致的、跨云、跨环境的应用交付而不断迈进。然而,尽管 Kubernetes 在屏蔽底层基础架构细节方面表现出色,它并没有在混合与分布式的部署环境之上引入上层抽象来为软件交付进行建模。我们已经看到,这种缺乏统一上层抽象的软件交付过程,不仅降低了生产力、影响了用户体验,甚至还会导致生产中出现错误和故障。
+
+然而,为现代微服务应用的交付过程进行建模是一件高度碎片化且充满挑战的事情。到目前为止,绝大多数试图解决上述问题的技术方案,要么过于简化而不足以解决实际问题,要么过于复杂以至于几乎无法落地。另一方面,今天的很多应用管理平台主要专注在 UI 和集成上,平台本身的能力其实很有限且难以扩展,这意味着随着应用交付需求的增长,用户诉求超出此类系统的能力边界只是一个时间问题。
+
+今天,越来越多的应用研发团队期盼着这样一个平台:它既能够简化面向混合环境(多集群/多云/混合云/分布式云)的应用交付过程,同时又足够灵活,可以随时满足业务不断高速变化所带来的迭代压力。然而,尽管平台团队很希望能帮上忙,但构建这样一个面向混合交付环境、同时又高度可扩展的应用交付系统,着实是一项令人生畏的任务。
+
+## 什么是 KubeVela?
+
+KubeVela 是一个开箱即用的、现代化的应用交付与管理平台。KubeVela 通过以下设计,使得面向混合/多云环境的应用交付变得非常简单高效:
+
+- **完全以应用为中心** - KubeVela 创新性地提出了[开放应用模型(OAM)](https://oam.dev/)来作为应用交付的顶层抽象,并通过声明式的交付工作流来捕获面向混合环境的微服务应用交付的整个过程,甚至连多集群分发策略、流量调配和滚动更新等运维特征,也都声明在应用级别。用户无需关心任何基础设施细节,只需要专注于定义和部署应用即可。
+- **可编程式交付工作流** - KubeVela 的交付模型是利用 [CUE](https://cuelang.org/) 来实现的。CUE 是一种诞生自 Google Borg 系统的数据配置语言,它可以将应用交付的所有步骤、所需资源、关联的运维动作以可编程的方式粘合成一个 DAG(有向无环图),来作为最终的声明式交付计划。相比于其他系统的复杂性和不可扩展性,KubeVela 基于 CUE 的实现不仅使用简单、扩展性极强,也更符合现代 GitOps 应用交付的趋势与要求。
+- **运行时无关** - KubeVela 是一个完全与运行时基础设施无关的应用交付与管理控制平面。所以它可以按照你定义的工作流与策略,面向任何环境交付和管理任何应用组件,比如:容器、云函数、数据库,甚至 AWS EC2 实例等等。
+
+## 谁应该使用 KubeVela?
+
+- 云原生时代的应用研发、运维人员、DevOps 工程师:
+  - KubeVela 是一个现代化的持续交付(CD)平台。
+- 云原生应用平台的构建者、PaaS、Serverless 平台工程师、基础设施平台管理员:
+  - KubeVela 是一个普适的、高可扩展的应用交付引擎与内核。
+- 第三方软件供应商(ISV)、垂直领域软件开发者、架构师:
+  - KubeVela 是一个 Kubernetes 和云平台之上的应用商店(App Store)。
+
+
+## 产品形态对比
+
+### KubeVela vs. 平台即服务 (PaaS)
+
+典型的例子是 Heroku 和 Cloud Foundry。它们提供完整的应用程序部署和管理功能,旨在提高开发人员的体验和效率。在这方面,KubeVela 有着相同的目标。
+
+不过,KubeVela 和它们最大的区别在于其**可扩展性**。
+
+KubeVela 对所交付的应用没有任何限制,它的整个应用交付与管理能力集也是由独立的可插拔模块(CUE 模块)构成的,这些模块可以随时通过编写 CUE 模板的方式进行增加或者定制。与这种机制相比,传统的 PaaS 系统的限制非常多:它们需要对应用类型和提供的能力进行各种约束来实现更好的用户体验,但随着应用交付需求的增长,用户的诉求就一定会超出 PaaS 系统的能力边界。这种情况在 KubeVela 平台中则永远不会发生。
+
+### KubeVela vs. Serverless
+
+AWS Lambda 等 Serverless 平台可以为 Serverless 应用管理提供极佳的用户体验和敏捷性。然而,这些平台本质上也带来了更多限制,它们可以说是 PaaS 的一类极端情况,同 KubeVela 的区别也与 PaaS 类似。
+
+另一方面,KubeVela 可以轻松部署任何函数类型的组件,包括基于 Kubernetes 的 Serverless 工作负载(例如 KNative/OpenFaaS 函数)或云函数(例如 AWS Lambda)。
+
+### KubeVela vs. 跨平台开发者工具
+
+典型的例子是 Hashicorp 的 Waypoint。Waypoint 是一个面向开发者的应用部署工具,它引入了一个一致的工作流(即构建、部署、发布),来对在不同平台上交付应用程序的过程进行建模。
+
+KubeVela 可以与此类工具无缝集成。在这种情况下,开发人员将使用 Waypoint 工作流作为平台的 UI,然后通过 KubeVela 来完成跨混合环境的应用部署、管理和发布。
+
+### KubeVela vs. Helm
+
+Helm 是 Kubernetes 的包管理器,它能够以 Chart 为单元,对一组 YAML 文件进行打包、安装和升级。
+
+KubeVela 作为一个应用交付系统,天然可以部署各种制品类型,当然也包括 Helm Chart。例如,你可以使用 KubeVela 定义一个由 WordPress Chart 和 AWS RDS 实例组成的应用,编排这两个组件之间的顺序关系,然后将它们按照一定的策略分发到多个 Kubernetes 集群当中。
+
+当然,KubeVela 还支持其他制品格式,比如 Kustomize。
+
+### KubeVela vs. 
Kubernetes + +KubeVela 是一个基于云原生技术栈构建的现代应用交付系统。它利用了开放应用程序模型(Open Application Model)和 Kubernetes 来解决一个旷日已久的难题——如何让应用交付变得更加轻松愉快。 + +## 下一步 + +接下来,我们推荐你: +- 开始[安装使用 KubeVela](./install) +- 了解[系统架构](core-concepts/architecture)和[核心概念](core-concepts/application) \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/install.mdx b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/install.mdx new file mode 100644 index 00000000..384d9c6c --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/install.mdx @@ -0,0 +1,304 @@ +--- +title: 快速安装 +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +> 如果是要升级现有的 KubeVela,请直接阅读[升级指南](./platform-engineers/advanced-install#升级). + +## 1. 选择放置控制平面的集群 + +确保: +- Kubernetes 集群版本 >= v1.18.0 +- 安装并配置 kubectl 命令行工具 + +KubeVela 得以成为控制平面,主要是依赖 Kubernetes 。它可以放置在任何托管 Kubernetes 作为底座的产品或你自己的集群中。 + +你可以使用 kind 或 minikube 在本地部署、测试 KubeVela,或者使用云厂商提供的 Kubernetes 服务做生产部署。 + + + + + +[安装 minikube](https://minikube.sigs.k8s.io/docs/start/) 后,创建一个集群: + +```shell script +minikube start +``` + +
安装 ingress 插件以启用路由访问功能:
+
+```shell script
+minikube addons enable ingress
+```
+
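+可以顺手确认 ingress 控制器已经就绪(这里假设你的 minikube 版本把它装在 ingress-nginx 命名空间,旧版本可能在 kube-system):
+
+```shell script
+kubectl get pods -n ingress-nginx
+```
+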
+
+ + + +安装 [Kind 命令行工具](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)后,创建集群: + +```shell script +cat < 安装 ingress 启用路由访问功能 + +```shell script +kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml +``` + + + + + + + +* 阿里云 [ACK 服务](https://www.aliyun.com/product/kubernetes) +* AWS [EKS 服务](https://aws.amazon.com/cn/eks) +* Azure [AKS 服务](https://azure.microsoft.com/en-us/services/kubernetes-service) +* Google [GKE 服务](https://cloud.google.com/kubernetes-engine) + +> 注意: 请确保云厂商的集群[已安装或启用 ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) 以保证路由访问功能可正常使用。 + + +
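+
+作为示意,部署完 ingress 之后,可以用下面的命令等待 ingress-nginx 控制器就绪再继续(选择器取自上述官方部署清单):
+
+```shell script
+kubectl wait --namespace ingress-nginx \
+  --for=condition=ready pod \
+  --selector=app.kubernetes.io/component=controller \
+  --timeout=90s
+```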
+ + +## 2. 安装 KubeVela + +1. 添加并更新 KubeVela helm chart 仓库 + ```shell script + helm repo add kubevela https://charts.kubevela.net/core + helm repo update + ``` + +2. 安装 KubeVela + ```shell script + helm install --create-namespace -n vela-system kubevela kubevela/vela-core --set multicluster.enabled=true --wait + ``` + 你可以参考 [`自定义安装`](./platform-engineers/advanced-install) 获取更多安装模式和功能。 + +3. 验证 chart 安装是否成功 + ```shell script + helm test kubevela -n vela-system + ``` + +
点击查看期望输出 + + ```shell + Pod kubevela-application-test pending + Pod kubevela-application-test pending + Pod kubevela-application-test running + Pod kubevela-application-test succeeded + NAME: kubevela + LAST DEPLOYED: Tue Apr 13 18:42:20 2021 + NAMESPACE: vela-system + STATUS: deployed + REVISION: 1 + TEST SUITE: kubevela-application-test + Last Started: Fri Apr 16 20:49:10 2021 + Last Completed: Fri Apr 16 20:50:04 2021 + Phase: Succeeded + TEST SUITE: first-vela-app + Last Started: Fri Apr 16 20:49:10 2021 + Last Completed: Fri Apr 16 20:49:10 2021 + Phase: Succeeded + NOTES: + Welcome to use the KubeVela! Enjoy your shipping application journey! + ``` + +
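+
+除了 `helm test`,也可以直接确认 KubeVela 控制器的 Pod 处于 Running 状态(chart 默认安装在 vela-system 命名空间):
+
+```shell script
+kubectl get pods -n vela-system
+```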
+ +## 3. 【可选】安装 KubeVela CLI + +KubeVela CLI 可以让你更便捷地来管理应用交付。不过,它不是必须使用的。 + +KubeVela CLI 也可以通过 [kubectl plugin](./platform-engineers/advanced-install#安装-kubectl-kubevela-cli-插件) 的方式来安装,或者通过二进制文件. + + + + +** macOS/Linux ** + +```shell script +curl -fsSl https://kubevela.io/script/install.sh | bash +``` + +**Windows** + +```shell script +powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex" +``` + + + +**macOS/Linux** + +先更新下你的 brew + +```shell script +brew update +``` +紧接着安装 KubeVela + +```shell script +brew install kubevela +``` + + + +- 通过[发布日志](https://github.com/oam-dev/kubevela/releases)下载最新的 `vela` 二进制文件。 +- 解压二进制文件,并且在 `$PATH` 中配好环境变量,就搞定啦。 + +```shell script +sudo mv ./vela /usr/local/bin/vela +``` + +> [安装提示](https://github.com/oam-dev/kubevela/issues/625): +> 如果你使用的是 Mac 系统,它会弹出 “vela” 无法打开的警告,因为来自开发者的包无法验证。 +> +> MacOS 对能够在系统中运行的软件,采取了更加严格的限制。你暂时可以通过打开 'System Preference' -> 'Security & Privacy' -> General 并点击 'Allow Anyway' 来解决这个问题。 + + + + +## 4. 【可选】安装插件 + +KubeVela 支持一系列[开箱即用的插件](./platform-engineers/advanced-install#插件列表),建议你至少开启以下插件: + +* Helm 以及 Kustomize 组件功能插件 + ```shell + vela addon enable fluxcd + ``` + +* Terraform Provider 插件 + + 开启 Terraform 对阿里云的支持,请执行: + + ```shell + vela addon enable terraform/provider-alibaba ALICLOUD_ACCESS_KEY=xxx ALICLOUD_SECRET_KEY=yyy ALICLOUD_SECURITY_TOKEN=zzz + ``` + +## 5. 查看已安装能力 + +> 如果没安装 vela 命令行工具,你也可以通过 `kubectl get comp -A` 和 `kubectl get trait -A` 代替. + +* 通过 `vela` CLI 来看看有哪些组件类型: + ```shell script + vela components + ``` +
查看输出 + + ```console + NAME NAMESPACE WORKLOAD DESCRIPTION + alibaba-ack vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud ACK cluster + alibaba-oss vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud OSS object + alibaba-rds vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud RDS object + helm vela-system autodetects.core.oam.dev helm release is a group of K8s resources from either git + repository or helm repo + kustomize vela-system autodetects.core.oam.dev kustomize can fetching, building, updating and applying + Kustomize manifests from git repo. + raw vela-system autodetects.core.oam.dev raw allow users to specify raw K8s object in properties + task vela-system jobs.batch Describes jobs that run code or a script to completion. + webservice vela-system deployments.apps Describes long-running, scalable, containerized services + that have a stable network endpoint to receive external + network traffic from customers. + worker vela-system deployments.apps Describes long-running, scalable, containerized services + that running at backend. They do NOT have network endpoint + to receive external network traffic. + ``` + +
+ +* 通过 `vela` CLI 来看看有哪些运维功能: + ```shell script + vela traits + ``` +
查看输出 + + ```console + NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION + annotations vela-system * true Add annotations on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + configmap vela-system * true Create/Attach configmaps on K8s pod for your workload which + follows the pod spec in path 'spec.template'. + cpuscaler vela-system deployments.apps false Automatically scale the component based on CPU usage. + env vela-system * false add env on K8s pod for your workload which follows the pod + spec in path 'spec.template.' + expose vela-system false Expose port to enable web traffic for your component. + hostalias vela-system * false Add host aliases on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + ingress vela-system false Enable public web traffic for the component. + ingress-1-20 vela-system false Enable public web traffic for the component, the ingress API + matches K8s v1.20+. + init-container vela-system deployments.apps true add an init container and use shared volume with pod + kustomize-json-patch vela-system false A list of JSON6902 patch to selected target + kustomize-patch vela-system false A list of StrategicMerge or JSON6902 patch to selected + target + kustomize-strategy-merge vela-system false A list of strategic merge to kustomize config + labels vela-system * true Add labels on K8s pod for your workload which follows the + pod spec in path 'spec.template'. + lifecycle vela-system * true Add lifecycle hooks for the first container of K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + node-affinity vela-system * true affinity specify node affinity and toleration on K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + pvc vela-system deployments.apps true Create a Persistent Volume Claim and mount the PVC as volume + to the first container in the pod + resource vela-system * true Add resource requests and limits on K8s pod for your + workload which follows the pod spec in path 'spec.template.' + rollout vela-system false rollout the component + scaler vela-system * false Manually scale K8s pod for your workload which follows the + pod spec in path 'spec.template'. + service-binding vela-system webservice,worker false Binding secrets of cloud resources to component env + sidecar vela-system * true Inject a sidecar container to K8s pod for your workload + which follows the pod spec in path 'spec.template'. + volumes vela-system deployments.apps true Add volumes on K8s pod for your workload which follows the + pod spec in path 'spec.template'. + ``` + +
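+
+想进一步查看某个能力的详细参数说明,可以用 `vela show`(这里以内置的 `sidecar` 运维能力为例,其他组件和运维能力同理):
+
+```shell script
+vela show sidecar
+```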
+ + +以上的这些能力都是已经内置的,随取随用。而由于 KubeVela 从一开始就被设计成可编程的,你可以按玩乐高积木一样,添加任何你需要的功能。 + +## 下一步 + +* 安装完毕 KubeVela,开始动手编写[第一个应用部署计划](./quick-start)。 +* 更多插件功能的安装,了解[自定义安装方式](./platform-engineers/advanced-install)安装。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/kubectlplugin.mdx b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/kubectlplugin.mdx new file mode 100644 index 00000000..2fcbf38f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/kubectlplugin.mdx @@ -0,0 +1,73 @@ +--- +title: 安装 kubectl 插件 +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +安装 vela kubectl 插件可以帮助你更简单的交付云原生应用! + +## 安装 + +你可以通过以下方式安装 `kubectl vela`: + + + + +1. 安装并且设置 [krew](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) +2. 更新 kubectl 插件列表: + ```shell + kubectl krew update + ``` +3. 安装 kubectl vela: + ```shell script + kubectl krew install vela + ``` + + + + +**macOS/Linux** +```shell script +curl -fsSl https://kubevela.io/script/install-kubectl-vela.sh | bash +``` + +你也可以在 [release 页面(>= v1.0.3)](https://github.com/oam-dev/kubevela/releases)手动下载二进制可执行文件,Kubectl 会自动从你的系统路径中找到它。 + + + + +## 使用 + +```shell +$ kubectl vela -h +A Highly Extensible Platform Engine based on Kubernetes and Open Application Model. + +Usage: + kubectl vela [flags] + kubectl vela [command] + +Available Commands: + +Flags: + -h, --help help for vela + + dry-run Dry Run an application, and output the K8s resources as + result to stdout, only CUE template supported for now + live-diff Dry-run an application, and do diff on a specific app + revison. The provided capability definitions will be used + during Dry-run. If any capabilities used in the app are not + found in the provided ones, it will try to find from + cluster. + show Show the reference doc for a workload type or trait + version Prints out build version information + + +Use "kubectl vela [command] --help" for more information about a command. +``` \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/advanced-install.mdx b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/advanced-install.mdx new file mode 100644 index 00000000..32a9316d --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/advanced-install.mdx @@ -0,0 +1,217 @@ +--- +title: 自定义安装 +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +## 带着证书管理器安装 KubeVela + +默认情况下,KubeVela 使用 [kube-webhook-certgen](https://github.com/jet/kube-webhook-certgen) 提供的自签名证书以便使用参数校验等 Webhook 功能。 +你可以对接证书管理软件(Cert Manager),但是你需要提前安装好。 + +1. 安装 Cert Manager (如果已经安装,可省略) + +```shell script +helm repo add jetstack https://charts.jetstack.io +helm repo update +helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --create-namespace --set installCRDs=true +``` + +2. 
安装 KubeVela 同时启用证书管理器: + +```shell script +helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core +``` + +## 安装预发布版 + +在使用 `helm search` 命令时,添加标记参数 `--devel` 即可搜索出预发布版。预发布版的版本号格式为 `-rc-master`,例如 `0.4.0-rc-master`,代表的是一个基于 `master` 分支构建的发布候选版。 + +```shell script +helm search repo kubevela/vela-core -l --devel +``` +```console + NAME CHART VERSION APP VERSION DESCRIPTION + kubevela/vela-core 0.4.0-rc-master 0.4.0-rc-master A Helm chart for KubeVela core + kubevela/vela-core 0.3.2 0.3.2 A Helm chart for KubeVela core + kubevela/vela-core 0.3.1 0.3.1 A Helm chart for KubeVela core +``` + +然后尝试跟着以下的命令安装一个预发布版。 + +```shell script +helm install --create-namespace -n vela-system kubevela kubevela/vela-core --version -rc-master +``` +```console +NAME: kubevela +LAST DEPLOYED: Thu Apr 1 19:41:30 2021 +NAMESPACE: vela-system +STATUS: deployed +REVISION: 1 +NOTES: +Welcome to use the KubeVela! Enjoy your shipping application journey! +``` +## 安装 Kubectl KubeVela CLI 插件 + +安装 Kubectl KubeVela CLI 插件,可以更好的进行应用交付操作。 + + + + + +1. [先安装](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew。 +2. 查看 Krew 上可用的插件: +```shell +kubectl krew update +``` +3. 安装 Kubectl KubeVela CLI 插件: +```shell script +kubectl krew install vela +``` + + + + + +**macOS/Linux** +```shell script +curl -fsSl https://kubevela.io/script/install-kubectl-vela.sh | bash +``` + +你也可以直接从[发布页面](https://github.com/oam-dev/kubevela/releases)手动下载来使用。 + + + + + +## 升级 + +### 第一步 更新 Helm 仓库 + +通过以下命令获取 KubeVela 最新发布的 chart: + +```shell +helm repo update +helm search repo kubevela/vela-core -l +``` + +### 第二步 升级 KubeVela 的 CRDs + +```shell +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml +kubectl apply -f 
https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml +kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml +``` + +> 提示:如果看到诸如 `* is invalid: spec.scope: Invalid value: "Namespaced": filed is immutable` 之类的错误,请删除出错的 CRD 后再重新安装。 + +```shell + kubectl delete crd \ + scopedefinitions.core.oam.dev \ + traitdefinitions.core.oam.dev \ + workloaddefinitions.core.oam.dev +``` + +### 第三步 升级 KubeVela Helm chart + +```shell +helm upgrade --install --create-namespace --namespace vela-system kubevela kubevela/vela-core --version --wait +``` + +## 插件列表 + +| 插件 | 简介 | 对应的内置功能 | 插件对应开源项目 | +|---------------------|-------------------------------------------------|----------------|-------------------------------------------------| +| terraform | 提供云资源(默认已安装) | - | https://github.com/oam-dev/terraform-controller | +| fluxcd | 提供 Helm、Kustomize 组件的部署功能 | kustomize、helm | https://fluxcd.io/ | +| kruise | 提供比 Kubernetes 原生更强大的工作负载套件 | cloneset | https://openkruise.io/ | +| prometheus | 提供基于 Promethus 的基础监控功能 | - | https://prometheus.io/ | +| keda | 提供基于事件驱动的工作负载自动扩缩容功能 | - | https://keda.sh/ | +| ocm | 提供多集群功能的系统插件 | - | http://open-cluster-management.io/ | +| observability | 为 KubeVela core 提供系统级别的监控,也可以为应用提供业务级别的监控。 | - | - | + +1. 查看可用的插件 + +```shell +vela addon list +``` + +2. 安装插件,以 fluxcd 插件为例 + +```shell +vela addon enable fluxcd +``` + +3. 
禁用插件
+
+```shell
+vela addon disable fluxcd
+```
+
+禁用插件前,请先清理掉使用该插件能力的应用,否则会禁用失败。
+
+
+## 卸载
+
+运行命令:
+
+```shell script
+helm uninstall -n vela-system kubevela
+rm -r ~/.vela
+```
+
+该命令会卸载 KubeVela 服务和相关的依赖组件,同时清理本地 CLI 的缓存。
+然后清理 CRDs(默认情况下,helm 不会移除 CRDs):
+
+```shell script
+ kubectl delete crd \
+  appdeployments.core.oam.dev \
+  applicationconfigurations.core.oam.dev \
+  applicationcontexts.core.oam.dev \
+  applicationrevisions.core.oam.dev \
+  applications.core.oam.dev \
+  approllouts.core.oam.dev \
+  clusters.core.oam.dev \
+  componentdefinitions.core.oam.dev \
+  components.core.oam.dev \
+  containerizedworkloads.core.oam.dev \
+  definitionrevisions.core.oam.dev \
+  envbindings.core.oam.dev \
+  healthscopes.core.oam.dev \
+  initializers.core.oam.dev \
+  manualscalertraits.core.oam.dev \
+  podspecworkloads.standard.oam.dev \
+  policydefinitions.core.oam.dev \
+  resourcetrackers.core.oam.dev \
+  rollouts.standard.oam.dev \
+  rollouttraits.standard.oam.dev \
+  scopedefinitions.core.oam.dev \
+  traitdefinitions.core.oam.dev \
+  workflows.core.oam.dev \
+  workflowstepdefinitions.core.oam.dev \
+  workloaddefinitions.core.oam.dev
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloneset.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloneset.md
new file mode 100644
index 00000000..051d14cc
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloneset.md
@@ -0,0 +1,92 @@
+---
+title: Extend CRD Operator as Component Type
+---
+
+Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD operator as a KubeVela component.
+**The mechanism works for all CRD Operators**.
+
+### Step 1: Install the CRD controller
+
+You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
+
+### Step 2: Create Component Definition
+
+To register Cloneset (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
+A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
+Several highlights are listed below.
+
+#### 1. Describe The Workload Type
+
+```yaml
+...
+  annotations:
+    definition.oam.dev/description: "OpenKruise cloneset"
+...
+```
+
+A one-line description of this component type. It will be shown in helper commands such as `$ vela components`.
+
+#### 2. Register its underlying CRD
+
+```yaml
+...
+workload:
+  definition:
+    apiVersion: apps.kruise.io/v1alpha1
+    kind: CloneSet
+...
+```
+
+This is how you register OpenKruise Cloneset's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
+KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
+
+#### 3. Define Template
+
+```yaml
+...
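+# The CUE template below (excerpt) renders the user-facing `image` and
+# `replicas` parameters into a CloneSet resource.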
+schematic: + cue: + template: | + output: { + apiVersion: "apps.kruise.io/v1alpha1" + kind: "CloneSet" + metadata: labels: { + "app.oam.dev/component": context.name + } + spec: { + replicas: parameter.replicas + selector: matchLabels: { + "app.oam.dev/component": context.name + } + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + spec: { + containers: [{ + name: context.name + image: parameter.image + }] + } + } + } + } + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + + // +usage=Number of pods in the cloneset + replicas: *5 | int + } + ``` + +### Step 3: Register New Component Type to KubeVela + +As long as the definition file is ready, you just need to apply it to Kubernetes. + +```bash +$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml +``` + +And the new component type will immediately become available for developers to use in KubeVela. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloud-services.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloud-services.md new file mode 100644 index 00000000..c93b66d1 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cloud-services.md @@ -0,0 +1,22 @@ +--- +title: 概述 +--- + +云服务是应用程序的重要组件,KubeVela 允许你以一致的体验配置和使用它们。 + +## KubeVela 如何管理云服务? + +在 KubeVela 中,所需的云服务在应用程序中被声明为*components*,并通过*Service Binding Trait*被其他组件使用。 + +## KubeVela 是否与云对话? + +KubeVela 依靠 [Terraform Controller](https://github.com/oam-dev/terraform-controller) 或 [Crossplane](http://crossplane.io/) 作为提供者与云对话。请查看以下文档以了解详细步骤。 + +- [Terraform](../platform-engineers/components/component-terraform) +- [Crossplane](./crossplane) + +## 一个云服务实例可以被多个应用程序共享吗? 
+
+是的。不过我们目前把这部分能力交由各个提供者(provider)实现,因此默认情况下,云服务实例不会被共享,而是每个“应用程序”独享一个实例。当前的变通做法是:用一个单独的“应用程序”只声明云服务,其他“应用程序”再通过 service binding trait 以共享的方式消费它。
+
+将来,我们考虑把这部分能力作为 KubeVela 的标准功能,让你可以声明某个云服务组件是否允许被共享。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/build-in-component.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/build-in-component.md
new file mode 100644
index 00000000..3fadf34e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/build-in-component.md
@@ -0,0 +1,5 @@
+---
+title: 内置组件
+---
+
+WIP
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/component-terraform.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/component-terraform.md
new file mode 100644
index 00000000..e2b11ead
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/component-terraform.md
@@ -0,0 +1,116 @@
+---
+title: Terraform 组件
+---
+
+对云资源的集成需求往往出现得最为频繁,比如你可能希望数据库、中间件等服务直接使用阿里云、AWS 等云厂商提供的,以获得生产级别的可用性并免去运维的麻烦。
+Terraform 是目前业内支持云资源最广泛、也最受欢迎的工具,KubeVela 对 Terraform 进行了额外的支持,使得用户可以通过 Kubernetes CRD 的方式配合
+Terraform 使用任意的云资源。
+
+为了使最终用户能够[部署和消费云资源](../../end-user/components/cloud-services/provider-and-consume-cloud-services),当用户的需求超出了[内置云资源的能力](../../end-user/components/cloud-services/provider-and-consume-cloud-services)时,
+管理员需要为云资源准备 ComponentDefinition。
+
+## 为云资源开发 ComponentDefinition
+
+### 阿里云
+
+以[弹性 IP](https://help.aliyun.com/document_detail/120192.html)为例。
+
+#### 为云资源开发一个 ComponentDefinition
+
+下面是 Terraform ComponentDefinition 的脚手架。你只需要修改三个字段:`metadata.name`、`metadata.annotations.definition.oam.dev/description`
+和 `spec.schematic.terraform.configuration`。
+
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: ComponentDefinition
+metadata:
+  name: # 1. ComponentDefinition name, like `alibaba-oss`
+  namespace: {{.Values.systemDefinitionNamespace}}
+  annotations:
+    definition.oam.dev/description: # 2. description, like `Terraform configuration for Alibaba Cloud OSS object`
+  labels:
+    type: terraform
+spec:
+  workload:
+    definition:
+      apiVersion: terraform.core.oam.dev/v1beta1
+      kind: Configuration
+  schematic:
+    terraform:
+      configuration: |
+        # 3. The developed Terraform HCL
+```
+
+下面是阿里云 EIP 完整的 ComponentDefinition,我们热烈欢迎你把自己扩展的云资源 ComponentDefinition 贡献到 [oam-dev/kubevela](https://github.com/oam-dev/kubevela/tree/master/charts/vela-core/templates/definitions)。
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: ComponentDefinition
+metadata:
+  name: alibaba-eip
+  namespace: {{.Values.systemDefinitionNamespace}}
+  annotations:
+    definition.oam.dev/description: Terraform configuration for Alibaba Cloud Elastic IP
+  labels:
+    type: terraform
+spec:
+  workload:
+    definition:
+      apiVersion: terraform.core.oam.dev/v1beta1
+      kind: Configuration
+  schematic:
+    terraform:
+      configuration: |
+        module "eip" {
+          source = "github.com/zzxwill/terraform-alicloud-eip"
+          name = var.name
+          bandwidth = var.bandwidth
+        }
+
+        variable "name" {
+          description = "Name to be used on all resources as prefix. Default to 'TF-Module-EIP'."
+          default = "TF-Module-EIP"
+          type = string
+        }
+
+        variable "bandwidth" {
+          description = "Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second)."
+          type = number
+          default = 5
+        }
+
+        output "EIP_ADDRESS" {
+          description = "The elastic ip address."
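+          # 取模块输出列表中的第一个(下标 0)EIP 地址作为该输出的值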
+ value = module.eip.this_eip_address.0 + } + +``` + +#### 验证 + +你可以通过 `vela show` 命令快速验证 ComponentDefinition。 + +```shell +$ vela show alibaba-eip +# Properties ++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+ +| name | Name to be used on all resources as prefix. Default to 'TF-Module-EIP'. | string | true | | +| bandwidth | Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second). | number | true | | +| writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | | ++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+ + + +## writeConnectionSecretToRef ++-----------+-----------------------------------------------------------------------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------+-----------------------------------------------------------------------------+--------+----------+---------+ +| name | The secret name which the cloud resource connection will be written to | string | true | | +| namespace | The secret namespace which the cloud resource connection will be written to | string | false | | ++-----------+-----------------------------------------------------------------------------+--------+----------+---------+ +``` + +如果表格能正常出来,ComponentDefinition 应该就可以工作了。更进一步,你可以通过文档[部署云资源](../../end-user/components/cloud-services/provider-and-consume-cloud-services)创建一个实际的 EIP 来验证。 + diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/custom-component.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/custom-component.md new file mode 100644 index 00000000..28c7786e --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/components/custom-component.md @@ -0,0 +1,484 @@ +--- +title: 自定义组件入门 +--- + +> 在阅读本部分之前,请确保你已经了解 KubeVela 中 [组件定义(ComponentDefinition](../oam/x-definition.md##组件定义(ComponentDefinition)) 的概念且学习掌握了 [CUE 的基本知识](../cue/basic) + +本节将以组件定义的例子展开说明,介绍如何使用 [CUE](https://cuelang.org/) 通过组件定义 `ComponentDefinition` 来自定义应用部署计划的组件。 + +### 交付一个简单的自定义组件 + +我们可以通过 `vela def init` 来根据已有的 YAML 文件来生成一个 `ComponentDefinition` 模板。 + +YAML 文件: + +```yaml +apiVersion: "apps/v1" +kind: "Deployment" +spec: + selector: + matchLabels: + "app.oam.dev/component": "name" + template: + metadata: + labels: + "app.oam.dev/component": "name" + spec: + containers: + - name: "name" + image: "image" +``` + +根据以上的 YAML 来生成 `ComponentDefinition`: + +```shell +vela def init stateless -t component --template-yaml ./stateless.yaml -o stateless.cue +``` + +得到如下结果: + +```shell +$ cat stateless.cue +stateless: { + annotations: {} + attributes: workload: definition: { + apiVersion: " apps/v1" + kind: " Deployment" + } + description: "" + labels: {} + type: "component" +} + +template: { + output: { + spec: { + selector: matchLabels: "app.oam.dev/component": "name" + 
template: { + metadata: labels: "app.oam.dev/component": "name" + spec: containers: [{ + name: "name" + image: "image" + }] + } + } + apiVersion: "apps/v1" + kind: "Deployment" + } + outputs: {} + parameters: {} +} +``` + +在这个自动生成的模板中: +- 需要 `.spec.workload` 来指示该组件的工作负载类型。 +- `.spec.schematic.cue.template` 是一个 CUE 模板: + * `output` 字段定义了 CUE 要输出的抽象模板。 + * `parameter` 字段定义了模板参数,即在应用部署计划(Application)中公开的可配置属性(KubeVela 将基于 `parameter` 字段自动生成 Json schema)。 + +下面我们来给这个自动生成的自定义组件添加参数并进行赋值: + +``` +stateless: { + annotations: {} + attributes: workload: definition: { + apiVersion: " apps/v1" + kind: " Deployment" + } + description: "" + labels: {} + type: "component" +} + +template: { + output: { + spec: { + selector: matchLabels: "app.oam.dev/component": parameter.name + template: { + metadata: labels: "app.oam.dev/component": parameter.name + spec: containers: [{ + name: parameter.name + image: parameter.image + }] + } + } + apiVersion: "apps/v1" + kind: "Deployment" + } + outputs: {} + parameters: { + name: string + image: string + } +} +``` + +修改后可以用 `vela def vet` 做一下格式检查和校验。 + +```shell +$ vela def vet stateless.cue +Validation succeed. +``` + +接着,让我们声明另一个名为 `task` 的组件。 + +```shell +vela def init task -t component -o task.cue +``` + +得到如下结果: + +```shell +$ cat task.cue +task: { + annotations: {} + attributes: workload: definition: { + apiVersion: " apps/v1" + kind: " Deployment" + } + description: "" + labels: {} + type: "component" +} + +template: { + output: {} + parameter: {} +} +``` + +修改该组件定义: + +``` +task: { + annotations: {} + attributes: workload: definition: { + apiVersion: "batch/v1" + kind: "Job" + } + description: "" + labels: {} + type: "component" +} + +template: { + output: { + apiVersion: "batch/v1" + kind: "Job" + spec: { + parallelism: parameter.count + completions: parameter.count + template: spec: { + restartPolicy: parameter.restart + containers: [{ + image: parameter.image + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + }] + } + } + } + parameter: { + count: *1 | int + image: string + restart: *"Never" | string + cmd?: [...string] + } +} +``` + +将以上两个组件定义部署到集群中: + +```shell +$ vela def apply stateless.cue +ComponentDefinition stateless created in namespace vela-system. +$ vela def apply task.cue +ComponentDefinition task created in namespace vela-system. +``` + +这两个已经定义好的组件,最终会在应用部署计划中实例化,我们引用自定义的组件类型 `stateless`,命名为 `hello`。同样,我们也引用了自定义的第二个组件类型 `task`,并命令为 `countdown`。 + +然后把它们编写到应用部署计划中,如下所示: + + ```yaml + apiVersion: core.oam.dev/v1alpha2 + kind: Application + metadata: + name: website + spec: + components: + - name: hello + type: stateless + properties: + image: crccheck/hello-world + name: mysvc + - name: countdown + type: task + properties: + image: centos:7 + cmd: + - "bin/bash" + - "-c" + - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done" + ``` + +以上,我们就完成了一个自定义应用组件的应用交付全过程。值得注意的是,作为管理员的我们,可以通过 CUE 提供用户所需要的任何自定义组件类型,同时也为用户提供了模板参数 `parameter` 来灵活地指定对 Kubernetes 相关资源的要求。 + +#### 查看 Kubernetes 最终资源信息 +
+ +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: backend + ... # 隐藏一些与本小节讲解无关的信息 +spec: + template: + spec: + containers: + - name: mysvc + image: crccheck/hello-world + metadata: + labels: + app.oam.dev/component: mysvc + selector: + matchLabels: + app.oam.dev/component: mysvc +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: countdown + ... # 隐藏一些与本小节讲解无关的信息 +spec: + parallelism: 1 + completions: 1 + template: + metadata: + name: countdown + spec: + containers: + - name: countdown + image: 'centos:7' + command: + - bin/bash + - '-c' + - for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done + restartPolicy: Never +``` +
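+
+把上面的应用部署计划提交到集群后,就能看到这两个资源被创建出来(示意,假设该 YAML 保存为 app.yaml):
+
+```shell
+kubectl apply -f app.yaml
+vela status website
+```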
+ + +### 交付一个复合的自定义组件 + +除了上面这个例子外,一个组件的定义通常也会由多个 Kubernetes API 资源组成。例如,一个由 `Deployment` 和 `Service` 组成的 `webserver` 组件。CUE 同样能很好的满足这种自定义复合组件的需求。 + +我们会使用 `output` 这个字段来定义工作负载类型的模板,而其他剩下的资源模板,都在 `outputs` 这个字段里进行声明,格式如下: + +```cue +outputs: : + +``` + +回到 `webserver` 这个复合自定义组件上,它的 CUE 文件编写如下: + +``` +webserver: { + annotations: {} + attributes: workload: definition: { + apiVersion: "apps/v1" + kind: "Deployment" + } + description: "" + labels: {} + type: "component" +} + +template: { + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + spec: { + containers: [{ + name: context.name + image: parameter.image + + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + + if parameter["env"] != _|_ { + env: parameter.env + } + + if context["config"] != _|_ { + env: context.config + } + + ports: [{ + containerPort: parameter.port + }] + + if parameter["cpu"] != _|_ { + resources: { + limits: + cpu: parameter.cpu + requests: + cpu: parameter.cpu + } + } + }] + } + } + } + } + // an extra template + outputs: service: { + apiVersion: "v1" + kind: "Service" + spec: { + selector: { + "app.oam.dev/component": context.name + } + ports: [ + { + port: parameter.port + targetPort: parameter.port + }, + ] + } + } + parameter: { + image: string + cmd?: [...string] + port: *80 | int + env?: [...{ + name: string + value?: string + valueFrom?: { + secretKeyRef: { + name: string + key: string + } + } + }] + cpu?: string + } +} +``` + +可以看到: +1. 最核心的工作负载,我们按需要在 `output` 字段里,定义了一个要交付的 `Deployment` 类型的 Kubernetes 资源。 +2. `Service` 类型的资源,则放到 `outputs` 里定义。以此类推,如果你要复合第三个资源,只需要继续在后面以键值对的方式添加: + +``` +outputs: service: { + apiVersion: "v1" + kind: "Service" + spec: { +... +outputs: third-resource: { + apiVersion: "v1" + kind: "Service" + spec: { +... +``` + +在理解这些之后,将上面的组件定义对象保存到 CUE 文件中,并部署到你的 Kubernetes 集群。 + +```shell +$ vela def apply webserver.cue +ComponentDefinition webserver created in namespace vela-system. +``` + +然后,我们使用它们,来编写一个应用部署计划: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: webserver-demo + namespace: default +spec: + components: + - name: hello-world + type: webserver + properties: + image: crccheck/hello-world + port: 8000 + env: + - name: "foo" + value: "bar" + cpu: "100m" +``` + +进行部署: +``` +$ kubectl apply -f webserver.yaml +``` +最后,它将在运行时集群生成相关 Kubernetes 资源如下: + +```shell +$ kubectl get deployment +NAME READY UP-TO-DATE AVAILABLE AGE +hello-world-v1 1/1 1 1 15s + +$ kubectl get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello-world-trait-7bdcff98f7 ClusterIP 8000/TCP 32s +``` + +## 使用 CUE `Context` + +KubeVela 让你可以在运行时,通过 `context` 关键字来引用一些信息。 + +最常用的就是应用部署计划的名称 `context.appName` 和组件的名称 `context.name`。 + +```cue +context: { + appName: string + name: string +} +``` + +举例来说,假设你在实现一个组件定义,希望将容器的名称填充为组件的名称。那么这样做: + +```cue +parameter: { + image: string +} +output: { + ... + spec: { + containers: [{ + name: context.name + image: parameter.image + }] + } + ... 
+} +``` + +> 注意,`context` 的信息会在资源部署到目标集群之前就自动注入了 + +### CUE `context` 的配置项 + +| Context 变量名 | 说明 | +| :------------------------------: | :--------------------------------------------------------------------------------------------------------------------: | +| `context.appRevision` | 应用部署计划的版本 | +| `context.appRevisionNum` | 应用部署计划的版本号(`int` 类型), 比如说如果 `context.appRevision` 是 `app-v1` 的话,`context.appRevisionNum` 会是 `1` | +| `context.appName` | 应用部署计划的名称 | +| `context.name` | 组件的名称 | +| `context.namespace` | 应用部署计划的命名空间 | +| `context.output` | 组件中渲染的工作负载 API 资源,这通常用在运维特征里 | +| `context.outputs.` | 组件中渲染的运维特征 API 资源,这通常用在运维特征里 | \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/crossplane.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/crossplane.md new file mode 100644 index 00000000..5cd87f7b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/crossplane.md @@ -0,0 +1,347 @@ +--- +title: Crossplane +--- + +云服务是应用程序的一部分。 + +## 云服务是 Component 还是 Trait? + +可以考虑以下做法: +- 使用 `ComponentDefinition` 的场景: + - 你想要允许最终用户明确声明云服务的实例并使用它,并在删除应用程序时释放该实例。 +- 使用 `TraitDefinition` 的场景: + - 你不想让最终用户拥有声明或发布云服务的任何控制权,而只想给他们消费云服务,甚至可以由其他系统管理的云服务的方式。在这种情况下,会广泛使用 `Service Binding` 特性。 + +在本文档中,我们将以阿里云的 RDS(关系数据库服务)和阿里云的 OSS(对象存储服务)为例。在单个应用程序中,它们是 Traits,在多个应用程序中,它们是 Components。此机制与其他云提供商相同。 + +## 安装和配置 Crossplane + +KubeVela 使用 [Crossplane](https://crossplane.io/) 作为云服务提供商。请参阅 [Installation](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0) 安装 Crossplane Alibaba provider v0.5.0。 + +如果你想配置任何其他 Crossplane providers,请参阅 [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration)。 + +``` +$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0 + +# 注意这里的 xxx 和 yyy 是你自己云资源的 AccessKey 和 SecretKey。 +$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy + +$ kubectl apply -f provider.yaml +``` + +`provider.yaml` 如下。 + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: crossplane-system + +--- +apiVersion: alibaba.crossplane.io/v1alpha1 +kind: ProviderConfig +metadata: + name: default +spec: + credentials: + source: Secret + secretRef: + namespace: crossplane-system + name: alibaba-account-creds + key: credentials + region: cn-beijing +``` + +注意:我们目前仅使用阿里提供的 Crossplane。但是在不久的将来,我们将使用 [Crossplane](https://crossplane.io/) 作为 Kubernetes 的云资源供应商。 + +## 注册 ComponentDefinition 和 TraitDefinition + +### 注册 ComponentDefinition `alibaba-rds` 为 RDS 云资源生产者 + +将工作负载类型 `alibaba-rds` 注册到 KubeVela。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: alibaba-rds + namespace: vela-system + annotations: + definition.oam.dev/description: "Alibaba Cloud RDS Resource" +spec: + workload: + definition: + apiVersion: database.alibaba.crossplane.io/v1alpha1 + kind: RDSInstance + schematic: + cue: + template: | + output: { + apiVersion: "database.alibaba.crossplane.io/v1alpha1" + kind: "RDSInstance" + spec: { + forProvider: { + engine: parameter.engine + engineVersion: parameter.engineVersion + dbInstanceClass: parameter.instanceClass + dbInstanceStorageInGB: 20 + securityIPList: "0.0.0.0/0" + masterUsername: parameter.username + } + writeConnectionSecretToRef: { + namespace: context.namespace + name: parameter.secretName + } + providerConfigRef: { + 
name: "default" + } + deletionPolicy: "Delete" + } + } + parameter: { + // +usage=RDS engine + engine: *"mysql" | string + // +usage=The version of RDS engine + engineVersion: *"8.0" | string + // +usage=The instance class for the RDS + instanceClass: *"rds.mysql.c1.large" | string + // +usage=RDS username + username: string + // +usage=Secret name which RDS connection will write to + secretName: string + } + + +``` + +### 注册 ComponentDefinition `alibaba-oss` 为 OSS 云资源生产者 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: alibaba-oss + namespace: vela-system + annotations: + definition.oam.dev/description: "Alibaba Cloud RDS Resource" +spec: + workload: + definition: + apiVersion: oss.alibaba.crossplane.io/v1alpha1 + kind: Bucket + schematic: + cue: + template: | + output: { + apiVersion: "oss.alibaba.crossplane.io/v1alpha1" + kind: "Bucket" + spec: { + name: parameter.name + acl: parameter.acl + storageClass: parameter.storageClass + dataRedundancyType: parameter.dataRedundancyType + writeConnectionSecretToRef: { + namespace: context.namespace + name: parameter.secretName + } + providerConfigRef: { + name: "default" + } + deletionPolicy: "Delete" + } + } + parameter: { + // +usage=OSS bucket name + name: string + // +usage=The access control list of the OSS bucket + acl: *"private" | string + // +usage=The storage type of OSS bucket + storageClass: *"Standard" | string + // +usage=The data Redundancy type of OSS bucket + dataRedundancyType: *"LRS" | string + // +usage=Secret name which RDS connection will write to + secretName: string + } + +``` + +### 引用 Secret 注册 ComponentDefinition `webconsumer` + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: webconsumer + annotations: + definition.oam.dev/description: A Deployment provides declarative updates for Pods and ReplicaSets +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + + spec: { + containers: [{ + name: context.name + image: parameter.image + + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + + if parameter["dbSecret"] != _|_ { + env: [ + { + name: "username" + value: dbConn.username + }, + { + name: "endpoint" + value: dbConn.endpoint + }, + { + name: "DB_PASSWORD" + value: dbConn.password + }, + ] + } + + ports: [{ + containerPort: parameter.port + }] + + if parameter["cpu"] != _|_ { + resources: { + limits: + cpu: parameter.cpu + requests: + cpu: parameter.cpu + } + } + }] + } + } + } + } + + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + + // +usage=Commands to run in the container + cmd?: [...string] + + // +usage=Which port do you want customer traffic sent to + // +short=p + port: *80 | int + + // +usage=Referred db secret + // +insertSecretTo=dbConn + dbSecret?: string + + // +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) + cpu?: string + } + + dbConn: { + username: string + endpoint: string + password: string + } + +``` + +关键词是 annotation `// + insertSecretTo = dbConn`,KubeVela 将知道该参数是 K8s 的 secret,它将解析该 secret 并将数据绑定到 CUE 接口 `dbConn` 中。 + +`output` 可以引用 `dbConn` 获取数据。`dbConn` 的名称没有限制。 + 关键词是 `+insertSecretTo`,它定义了数据绑定机制。以上只是一个例子。 + +### 准备 TraitDefinition 
`service-binding` 进行 env-secret 映射
+
+对于应用程序中的数据绑定,KubeVela 建议定义一个 trait 来完成这项工作。我们已经准备好了一个便捷的 trait,非常适合将资源的信息绑定到 pod spec 的环境变量中。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "binding cloud resource secrets to pod env"
+  name: service-binding
+spec:
+  appliesToWorkloads:
+    - webservice
+    - worker
+  schematic:
+    cue:
+      template: |
+        patch: {
+          spec: template: spec: {
+            // +patchKey=name
+            containers: [{
+              name: context.name
+              // +patchKey=name
+              env: [
+                for envName, v in parameter.envMappings {
+                  name: envName
+                  valueFrom: {
+                    secretKeyRef: {
+                      name: v.secret
+                      if v["key"] != _|_ {
+                        key: v.key
+                      }
+                      if v["key"] == _|_ {
+                        key: envName
+                      }
+                    }
+                  }
+                },
+              ]
+            }]
+          }
+        }
+
+        parameter: {
+          // +usage=The mapping of environment variables to secret
+          envMappings: [string]: [string]: string
+        }
+
+```
+
+借助这个 `service-binding` trait,开发人员可以显式设置参数 `envMappings`,以映射所有环境变量。例子如下:
+
+```yaml
+...
+      traits:
+        - type: service-binding
+          properties:
+            envMappings:
+              # environments refer to db-conn secret
+              DB_PASSWORD:
+                secret: db-conn
+                key: password # 1) If the env name is different from secret key, secret key has to be set.
+              endpoint:
+                secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
+              username:
+                secret: db-conn
+              # environments refer to oss-conn secret
+              BUCKET_NAME:
+                secret: oss-conn
+                key: Bucket
+...
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/advanced.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/advanced.md
new file mode 100644
index 00000000..d4c59ab9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/advanced.md
@@ -0,0 +1,453 @@
+---
+title: 交付完整模块
+---
+
+现在你已经了解了 [OAM 模型][1]和[模块定义(X-Definition)][2]的概念,本节将介绍如何使用 CUE 交付完整的模块化功能,使得你的平台可以随着用户需求变化动态扩展功能,适应各类用户和场景,满足公司业务长期发展的迭代诉求。
+
+## 将 Kubernetes API 对象转化为自定义组件
+
+KubeVela 使用 [CUE][3] 配置语言作为管理用户模块化交付的核心,同时也围绕 CUE 提供了管理工具来[编辑和生成 KubeVela 的模块定义对象][4]。
+
+下面我们以 Kubernetes 官方的 [StatefulSet 对象][5]为例,来具体看如何使用 KubeVela 构建自定义的模块化功能并提供能力。
+我们将官方文档中 StatefulSet 的 YAML 例子保存在本地,并命名为 `my-stateful.yaml`,
+然后执行如下命令,生成一个名为 "my-stateful" 的 Component 模块定义,并输出到 "my-stateful.cue" 文件中:
+
+    vela def init my-stateful -t component --desc "My StatefulSet component." --template-yaml ./my-stateful.yaml -o my-stateful.cue
+
+查看生成的 "my-stateful.cue" 文件:
+
+    $ cat my-stateful.cue
+    "my-stateful": {
+    	annotations: {}
+    	attributes: workload: definition: {
+    		apiVersion: "<change me> apps/v1"
+    		kind:       "<change me> Deployment"
+    	}
+    	description: "My StatefulSet component."
+    	labels: {}
+    	type: "component"
+    }
+
+    template: {
+    	output: {
+    		apiVersion: "v1"
+    		kind:       "Service"
+    		... // 省略一些非重要信息
+    	}
+    	outputs: web: {
+    		apiVersion: "apps/v1"
+    		kind:       "StatefulSet"
+    		... // 省略一些非重要信息
+    	}
+    	parameter: {}
+    }
+
+下面我们来对这个自动生成的自定义组件做一些微调:
+
+1. StatefulSet 官网的例子是由 `StatefulSet` 和 `Service` 两个对象构成的一个复合组件。而根据 KubeVela [自定义组件的规则][6],在复合组件中,像 StatefulSet 这样的核心工作负载需要由 `template.output` 字段表示,其他辅助对象用 `template.outputs` 表示,所以我们将自动生成的 output 和 outputs 中的内容全部调换。
+2. 然后我们将核心工作负载的 apiVersion 和 kind 数据填写到标注为 `<change me>` 的部分。
+
+修改后可以用 `vela def vet` 做一下格式检查和校验。
+
+    $ vela def vet my-stateful.cue
+    Validation succeed.
+
+经过两步改动后的文件如下:
+
+    $ cat my-stateful.cue
+    "my-stateful": {
+    	annotations: {}
+    	attributes: workload: definition: {
+    		apiVersion: "apps/v1"
+    		kind:       "StatefulSet"
+    	}
+    	description: "My StatefulSet component."
+    	labels: {}
+    	type: "component"
+    }
+
+    template: {
+    	output: {
+    		apiVersion: "apps/v1"
+    		kind:       "StatefulSet"
+    		metadata: name: "web"
+    		spec: {
+    			selector: matchLabels: app: "nginx"
+    			replicas:    3
+    			serviceName: "nginx"
+    			template: {
+    				metadata: labels: app: "nginx"
+    				spec: {
+    					containers: [{
+    						name: "nginx"
+    						ports: [{
+    							name:          "web"
+    							containerPort: 80
+    						}]
+    						image: "k8s.gcr.io/nginx-slim:0.8"
+    						volumeMounts: [{
+    							name:      "www"
+    							mountPath: "/usr/share/nginx/html"
+    						}]
+    					}]
+    					terminationGracePeriodSeconds: 10
+    				}
+    			}
+    			volumeClaimTemplates: [{
+    				metadata: name: "www"
+    				spec: {
+    					accessModes: ["ReadWriteOnce"]
+    					resources: requests: storage: "1Gi"
+    					storageClassName: "my-storage-class"
+    				}
+    			}]
+    		}
+    	}
+    	outputs: web: {
+    		apiVersion: "v1"
+    		kind:       "Service"
+    		metadata: {
+    			name: "nginx"
+    			labels: app: "nginx"
+    		}
+    		spec: {
+    			clusterIP: "None"
+    			ports: [{
+    				name: "web"
+    				port: 80
+    			}]
+    			selector: app: "nginx"
+    		}
+    	}
+    	parameter: {}
+    }
+
+将该组件定义安装到 Kubernetes 集群中:
+
+    $ vela def apply my-stateful.cue
+    ComponentDefinition my-stateful created in namespace vela-system.
+
+此时平台的最终用户已经可以通过 `vela components` 命令看到有一个 `my-stateful` 组件可以使用了。
+
+    $ vela components
+    NAME          NAMESPACE    WORKLOAD           DESCRIPTION
+    ...
+    my-stateful   vela-system  statefulsets.apps  My StatefulSet component.
+    ...
+
+通过 KubeVela 的应用部署计划发布到集群中,就可以拉起我们刚刚定义的 StatefulSet 和 Service 对象。
+
+    cat <<EOF | vela up -f -
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: website
+    spec:
+      components:
+        - name: my-component
+          type: my-stateful
+    EOF
+
+在正式部署之前,你也可以先把上面的应用保存为 `app.yaml`,用 `vela dry-run` 确认它将渲染出哪些 Kubernetes 资源:
+
+    vela dry-run -f app.yaml
+
+```
+# Application(website) -- Component(my-component)
+---
+
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: nginx
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: my-component
+    app.oam.dev/name: website
+    trait.oam.dev/resource: web
+    trait.oam.dev/type: AuxiliaryWorkload
+  name: nginx
+  namespace: default
+spec:
+  clusterIP: None
+  ports:
+  - name: web
+    port: 80
+  selector:
+    app: nginx
+
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  labels:
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: my-component
+    app.oam.dev/name: website
+    workload.oam.dev/type: my-stateful
+  name: web
+  namespace: default
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  serviceName: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: k8s.gcr.io/nginx-slim:0.8
+        name: nginx
+        ports:
+        - containerPort: 80
+          name: web
+        volumeMounts:
+        - mountPath: /usr/share/nginx/html
+          name: www
+      terminationGracePeriodSeconds: 10
+  volumeClaimTemplates:
+  - metadata:
+      name: www
+    spec:
+      accessModes:
+      - ReadWriteOnce
+      resources:
+        requests:
+          storage: 1Gi
+      storageClassName: my-storage-class
+```
+
+
+
+你还可以通过 `vela dry-run -h` 来查看更多可用的功能参数。
+
+在实际使用中,我们通常还会修改上面的组件定义,把名称、镜像和实例数等字段通过 `parameter` 暴露为参数(例如 `name`、`image` 和 `replicas`),供最终用户在应用部署计划的 properties 中填写,具体做法可以参考[自定义组件][6]文档。
+
+## 使用上下文信息减少参数
+
+假设我们已经按上面的方式把 `name` 暴露为了参数,那么 properties 中的 name 和 Component 的 name 字段往往是相同的。此时我们可以在模板中使用携带了上下文信息的 `context` 关键字,其中 `context.name` 就是运行时组件名称,这样 `parameter` 中的 name 参数就不再需要了。
+
+    ... # 省略其他没有修改的字段
+    template: {
+    	output: {
+    		apiVersion: "apps/v1"
+    		kind:       "StatefulSet"
+    		metadata: name: context.name
+    		... // 省略其他没有修改的字段
+    	}
+    	parameter: {
+    		image:    string
+    		replicas: int
+    	}
+    }
+
+KubeVela 内置了应用[所需的上下文信息][9],你可以根据需要配置。
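+
+按照这个思路修改组件定义后,一个可能的应用部署计划写法如下(示意:去掉了 name 参数,组件名会通过 `context.name` 自动填充):
+
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: website
+    spec:
+      components:
+        - name: my-component
+          type: my-stateful
+          properties:
+            image: nginx:latest
+            replicas: 1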
+ +## 使用运维能力按需添加配置 + +对于用户的新需求,除了修改组件定义增加参数以外,你还可以使用运维能力,按需添加配置。一方面,KubeVela 已经内置了大量的通用运维能力,可以满足诸如:添加 label、annotation,注入环境变量、sidecar,添加 volume 等等的需求。另一方面,你可以像自定义组件一样,自定义[补丁型运维特征][10],来满足更多的配置灵活组装的需求。 + +你可以使用 `vela traits` 查看,带 `*` 标记的 trait 均为通用 trait,能够对常见的 Kubernetes 资源对象做操作。 + + $ vela traits + NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION + annotations vela-system * true Add annotations on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + configmap vela-system * true Create/Attach configmaps on K8s pod for your workload which + follows the pod spec in path 'spec.template'. + env vela-system * false add env on K8s pod for your workload which follows the pod + spec in path 'spec.template.' + hostalias vela-system * false Add host aliases on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + labels vela-system * true Add labels on K8s pod for your workload which follows the + pod spec in path 'spec.template'. + lifecycle vela-system * true Add lifecycle hooks for the first container of K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + node-affinity vela-system * true affinity specify node affinity and toleration on K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + scaler vela-system * false Manually scale K8s pod for your workload which follows the + pod spec in path 'spec.template'. + sidecar vela-system * true Inject a sidecar container to K8s pod for your workload + which follows the pod spec in path 'spec.template'. + + +以 sidecar 为例,你可以查看 sidecar 的用法: + + $ vela show sidecar + # Properties + +---------+-----------------------------------------+-----------------------+----------+---------+ + | NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | + +---------+-----------------------------------------+-----------------------+----------+---------+ + | name | Specify the name of sidecar container | string | true | | + | cmd | Specify the commands run in the sidecar | []string | false | | + | image | Specify the image of sidecar container | string | true | | + | volumes | Specify the shared volume path | [[]volumes](#volumes) | false | | + +---------+-----------------------------------------+-----------------------+----------+---------+ + + + ## volumes + +------+-------------+--------+----------+---------+ + | NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | + +------+-------------+--------+----------+---------+ + | path | | string | true | | + | name | | string | true | | + +------+-------------+--------+----------+---------+ + +直接使用 sidecar 注入一个容器,应用的描述如下: + + apiVersion: core.oam.dev/v1beta1 + kind: Application + metadata: + name: website + spec: + components: + - name: my-component + type: my-stateful + properties: + image: nginx:latest + replicas: 1 + name: my-component + traits: + - type: sidecar + properties: + name: my-sidecar + image: saravak/fluentd:elastic + +部署运行该应用,就可以看到 StatefulSet 中已经部署运行了一个 fluentd 的 sidecar。 + +你也可以使用 `vela def` 获取 sidecar 的 CUE 源文件进行修改,增加参数等。 + + vela def get sidecar + +运维能力的自定义与组件自定义类似,不再赘述,你可以阅读[运维能力自定义文档][11]了解更详细的功能。 + +## 总结 + +本节介绍了如何通过 CUE 交付完整的模块化能力,其核心是可以随着用户的需求,不断动态增加配置能力,逐步暴露更多的功能和用法,以便降低用户整体的学习门槛,最终提升研发效率。 +KubeVela 背后提供的开箱即用的能力,包括组件、运维功能、策略以及工作流,均是通过同样的方式提供了可插拔、可修改的能力。 + +## 下一步 + +* 了解更多[自定义组件](../components/custom-component)的功能。 +* 了解更多[自定义运维能力](../traits/customize-trait)的功能。 + +* 了解[自定义工作流](../workflow/workflow)背后的功能。 + +[1]: ../oam/oam-model +[2]: ../oam/x-definition +[3]: ../cue/basic 
+[4]: ../cue/definition-edit +[5]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ +[6]: ../components/custom-component#%E4%BA%A4%E4%BB%98%E4%B8%80%E4%B8%AA%E5%A4%8D%E5%90%88%E7%9A%84%E8%87%AA%E5%AE%9A%E4%B9%89%E7%BB%84%E4%BB%B6 +[7]: ../cue/basic#%E5%AE%9A%E4%B9%89%E4%B8%80%E4%B8%AA-cue-%E6%A8%A1%E6%9D%BF +[9]: ../oam/x-definition#%E6%A8%A1%E5%9D%97%E5%AE%9A%E4%B9%89%E8%BF%90%E8%A1%8C%E6%97%B6%E4%B8%8A%E4%B8%8B%E6%96%87 +[10]: ../traits/patch-trait +[11]: ../traits/customize-trait diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/basic.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/basic.md new file mode 100644 index 00000000..d5dbe843 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/basic.md @@ -0,0 +1,564 @@ +--- +title: 基础入门 +--- + +CUE 是 KubeVela 的核心依赖,也是用户实现自定义扩展的主要方式。本章节将详细介绍 CUE 的基础知识,帮助你更好地使用 KubeVela。 + +## 概述 + +KubeVela 将 CUE 作为应用交付核心依赖和扩展方式的原因如下:: + +- **CUE 本身就是为大规模配置而设计。** CUE 能够感知非常复杂的配置文件,并且能够安全地更改可修改配置中成千上万个对象的值。这非常符合 KubeVela 的目标,即以可编程的方式,去定义和交付生产级别的应用程序。 +- **CUE 支持一流的代码生成和自动化。** CUE 原生支持与现有工具以及工作流进行集成,反观其他工具则需要自定义复杂的方案才能实现。例如,需要手动使用 Go 代码生成 OpenAPI 模式。KubeVela 也是依赖 CUE 该特性进行构建开发工具和 GUI 界面的。 +- **CUE 与 Go 完美集成。** KubeVela 像 Kubernetes 系统中的大多数项目一样使用 GO 进行开发。CUE 已经在 Go 中实现并提供了丰富的 API。KubeVela 以 CUE 为核心实现 Kubernetes 控制器。借助 CUE,KubeVela 可以轻松处理数据约束问题。 + +> 更多细节请查看 [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) 以及 [The Logic of CUE](https://cuelang.org/docs/concepts/logic/)。 + +## 前提 + +请确保你的环境中已经安装如下命令行: +* [`cue` v0.2.2](https://cuelang.org/docs/install/) 目前 KubeVela 暂时只支持 CUE v0.2.2 版本,将在后续迭代中升级支持新的 CUE 版本。 +* [`vela` >= v1.1.0](../../install#3-get-kubevela-cli) + +## 学习 CUE 命令行 + +CUE 是 JSON 的超集, 我们可以像使用 JSON 一样使用 CUE,并具备以下特性: + +* C 语言风格的注释 +* 字段名称可以用双引号括起来,注意字段名称中不可以带特殊字符 +* 可选字段末尾是否有逗号 +* 允许数组中最后一个元素末尾带逗号 +* 外大括号可选 + +请先复制以下信息,保存为一个 `first.cue` 文件: + +``` +a: 1.5 +a: float +b: 1 +b: int +d: [1, 2, 3] +g: { + h: "abc" +} +e: string +``` + +接下来,我们以上面这个文件为例子,来学习 CUE 命令行的相关指令: + +* 如何格式化 CUE 文件。(如果你使用 Goland 或者类似 JetBrains IDE, + 可以参考该文章配置自动格式化插件 [使用 Goland 设置 cuelang 的自动格式化](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/)。) + + 该命令不仅可以格式化 CUE 文件,还能提示错误的模型,相当好用的命令。 + ```shell + cue fmt first.cue + ``` + +* 如何校验模型。除了 `cue fmt`,你还可以使用 `cue vet` 来校验模型。 + ```shell + $ cue vet first.cue + some instances are incomplete; use the -c flag to show errors or suppress this message + + $ cue vet first.cue -c + e: incomplete value string + + ``` + 提示我们:这个文件里的 e 这个变量,有数据类型 `string` 但并没有赋值。 + +* 如何计算/渲染结果。 `cue eval` 可以计算 CUE 文件并且渲染出最终结果。 + 我们看到最终结果中并不包含 `a: float` 和 `b: int`,这是因为这两个变量已经被计算填充。 + 其中 `e: string` 没有被明确的赋值, 故保持不变. 
+ ```shell + $ cue eval first.cue + a: 1.5 + b: 1 + d: [1, 2, 3] + g: { + h: "abc" + } + e: string + ``` + +* 如何指定渲染的结果。例如,我们仅想知道文件中 `b` 的渲染结果,则可以使用该参数 `-e`。 + ```shell + $ cue eval -e b first.cue + 1 + ``` + +* 如何导出渲染结果。 `cue export` 可以导出最终渲染结果。如果一些变量没有被定义执行该命令将会报错。 + ```shell + $ cue export first.cue + e: incomplete value string + ``` + 我们更新一下 `first.cue` 文件,给 `e` 赋值: + ```shell + a: 1.5 + a: float + b: 1 + b: int + d: [1, 2, 3] + g: { + h: "abc" + } + e: string + e: "abc" + ``` + 然后,该命令就可以正常工作。默认情况下, 渲染结果会被格式化为 JSON 格式。 + ```shell + $ cue export first.cue + { + "a": 1.5, + "b": 1, + "d": [ + 1, + 2, + 3 + ], + "g": { + "h": "abc" + }, + "e": "abc" + } + ``` + +* 如何导出 YAML 格式的渲染结果。 + ```shell + $ cue export first.cue --out yaml + a: 1.5 + b: 1 + d: + - 1 + - 2 + - 3 + g: + h: abc + e: abc + ``` + +* 如何导出指定变量的结果。 + ```shell + $ cue export -e g first.cue + { + "h": "abc" + } + ``` + +以上, 你已经学习完所有常用的 CUE 命令行指令。 + +## 学习 CUE 语言 + +在熟悉完常用 CUE 命令行指令后,我们来进一步学习 CUE 语言。 + +先了解 CUE 的数据类型。以下是它的基础数据类型: + +```shell +// float +a: 1.5 + +// int +b: 1 + +// string +c: "blahblahblah" + +// array +d: [1, 2, 3, 1, 2, 3, 1, 2, 3] + +// bool +e: true + +// struct +f: { + a: 1.5 + b: 1 + d: [1, 2, 3, 1, 2, 3, 1, 2, 3] + g: { + h: "abc" + } +} + +// null +j: null +``` + +如何自定义 CUE 类型?使用 `#` 符号来指定一些表示 CUE 类型的变量。 + +``` +#abc: string +``` + +我们将上述内容保存到 `second.cue` 文件。 执行 `cue export` 不会报 `#abc` 是一个类型不完整的值。 + +```shell +$ cue export second.cue +{} +``` + +你还可以定义更复杂的自定义结构,比如: + +``` +#abc: { + x: int + y: string + z: { + a: float + b: bool + } +} +``` + +自定义结构在 KubeVela 中被广泛用于模块定义(X-Definitions)和进行验证。 + +## 定义一个 CUE 模板 + +下面,我们开始尝试利用刚刚学习到的知识,来定义 CUE 模版。 + +1. 定义结构体变量 `parameter`. + +```shell +parameter: { + name: string + image: string +} +``` + +保存上述变量到文件 `deployment.cue`. + +2. 定义更复杂的结构变量 `template` 同时引用变量 `parameter`. + +``` +template: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": parameter.name + } + template: { + metadata: labels: { + "app.oam.dev/component": parameter.name + } + spec: { + containers: [{ + name: parameter.name + image: parameter.image + }] + }}} +} +``` + +熟悉 Kubernetes 的你可能已经知道,这是 Kubernetes Deployment 的模板。 `parameter` 为模版的参数部分。 + +添加上述内容到文件 `deployment.cue`. + +4. 随后, 我们通过更新以下内容来完成变量赋值: + +``` +parameter:{ + name: "mytest" + image: "nginx:v1" +} +``` + +5. 最后, 导出渲染结果为 YAML 格式: + +```shell +$ cue export deployment.cue -e template --out yaml + +apiVersion: apps/v1 +kind: Deployment +spec: + selector: + matchLabels: + app.oam.dev/component: mytest + template: + metadata: + labels: + app.oam.dev/component: mytest + spec: + containers: + - name: mytest + image: nginx:v1 +``` + +以上,你就得到了一个 Kubernetes Deployment 类型的模板。 + +## CUE 的更多用法 + +* 设计开放的结构体和数组。如果在数组或者结构体中使用 `...`,则说明该对象为开放的。 + - 数组对象 `[...string]` ,说明该对象可以容纳多个字符串元素。 + 如果不添加 `...`, 该对象 `[string]` 说明数组只能容纳一个类型为 `string` 的元素。 + - 如下所示的结构体说明可以包含未知字段。 + ``` + { + abc: string + ... + } + ``` + +* 使用运算符 `|` 来表示两种类型的值。如下所示,变量 `a` 表示类型可以是字符串或者整数类型。 + +```shell +a: string | int +``` + +* 使用符号 `*` 定义变量的默认值。通常它与符号 `|` 配合使用, + 代表某种类型的默认值。如下所示,变量 `a` 类型为 `int`,默认值为 `1`。 + +```shell +a: *1 | int +``` + +* 让一些变量可被选填。 某些情况下,一些变量不一定被使用,这些变量就是可选变量,我们可以使用 `?:` 定义此类变量。 + 如下所示, `a` 是可选变量, 自定义 `#my` 对象中 `x` 和 `z` 为可选变量, 而 `y` 为必填字段。 + +``` +a ?: int + +#my: { +x ?: string +y : int +z ?:float +} +``` + +选填变量可以被跳过,这经常和条件判断逻辑一起使用。 +具体来说,如果某些字段不存在,则 CUE 语法为 `if _variable_!= _ | _` ,如下所示: + +``` +parameter: { + name: string + image: string + config?: [...#Config] +} +output: { + ... 
+ spec: { + containers: [{ + name: parameter.name + image: parameter.image + if parameter.config != _|_ { + config: parameter.config + } + }] + } + ... +} +``` + +* 使用运算符 `&` 来运算两个变量。 + +```shell +a: *1 | int +b: 3 +c: a & b +``` + +保存上述内容到 `third.cue` 文件。 + +你可以使用 `cue eval` 来验证结果: + +```shell +$ cue eval third.cue +a: 1 +b: 3 +c: 3 +``` + +* 需要执行条件判断。当你执行一些级联操作时,不同的值会影响不同的结果,条件判断就非常有用。 + 因此,你可以在模版中执行 `if..else` 的逻辑。 + +```shell +price: number +feel: *"good" | string +// Feel bad if price is too high +if price > 100 { + feel: "bad" +} +price: 200 +``` + +保存上述内容到 `fourth.cue` 文件。 + +你可以使用 `cue eval` 来验证结果: + +```shell +$ cue eval fourth.cue +price: 200 +feel: "bad" +``` + +另一个示例是将布尔类型作为参数。 + +``` +parameter: { + name: string + image: string + useENV: bool +} +output: { + ... + spec: { + containers: [{ + name: parameter.name + image: parameter.image + if parameter.useENV == true { + env: [{name: "my-env", value: "my-value"}] + } + }] + } + ... +} +``` + + +* 使用 For 循环。 我们为了避免减少重复代码,常常使用 For 循环。 + - 映射遍历。 + ```cue + parameter: { + name: string + image: string + env: [string]: string + } + output: { + spec: { + containers: [{ + name: parameter.name + image: parameter.image + env: [ + for k, v in parameter.env { + name: k + value: v + }, + ] + }] + } + } + ``` + - 类型遍历。 + ``` + #a: { + "hello": "Barcelona" + "nihao": "Shanghai" + } + + for k, v in #a { + "\(k)": { + nameLen: len(v) + value: v + } + } + ``` + - 切片遍历。 + ```cue + parameter: { + name: string + image: string + env: [...{name:string,value:string}] + } + output: { + ... + spec: { + containers: [{ + name: parameter.name + image: parameter.image + env: [ + for _, v in parameter.env { + name: v.name + value: v.value + }, + ] + }] + } + } + ``` + +另外,可以使用 `"\( _my-statement_ )"` 进行字符串内部计算,比如上面类型循环示例中,获取值的长度等等操作。 + +## 导入 CUE 内部包 + +CUE 有很多 [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) 可以被 KubeVela 使用,这样可以满足更多的开发需求。 + +比如,使用 `strings.Join` 方法将字符串数组拼接成字符串。 + +```cue +import ("strings") + +parameter: { + outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}] +} +output: { + spec: { + if len(parameter.outputs) > 0 { + _x: [ for _, v in parameter.outputs { + "\(v.ip) \(v.hostname)" + }] + message: "Visiting URL: " + strings.Join(_x, "") + } + } +} +``` + +## 导入 Kubernetes 包 + +KubeVela 会从 Kubernetes 集群中读取 OpenAPI,并将 Kubernetes 所有资源自动构建为内部包。 + +你可以在 KubeVela 的 CUE 模版中通过 `kube/` 导入这些包,就像使用 CUE 内部包一样。 + +比如,`Deployment` 可以这样使用: + +```cue +import ( + apps "kube/apps/v1" +) + +parameter: { + name: string +} + +output: apps.#Deployment +output: { + metadata: name: parameter.name +} +``` + +`Service` 可以这样使用(无需使用别名导入软件包): + +```cue +import ("kube/v1") + +output: v1.#Service +output: { + metadata: { + "name": parameter.name + } + spec: type: "ClusterIP", +} + +parameter: { + name: "myapp" +} +``` + +甚至已经安装的 CRD 也可以导入使用: + +``` +import ( + oam "kube/core.oam.dev/v1alpha2" +) + +output: oam.#Application +output: { + metadata: { + "name": parameter.name + } +} + +parameter: { + name: "myapp" +} +``` + +## 下一步 + +* 了解如何统一使用 CUE 来[管理自定义 OAM 模块](./definition-edit)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/definition-edit.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/definition-edit.md new file mode 100644 index 00000000..d816de3f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/cue/definition-edit.md @@ -0,0 +1,246 @@ +--- +title: 编辑管理模块定义 +--- + +在 KubeVela 
CLI (>= v1.1.0) 工具中,`vela def` 命令组为开发者提供了一系列便捷的 X-Definition 编写工具,使得 Definition 的编写可以全部在 CUE 文件中进行,避免将 Template CUE 与 Kubernetes 的 YAML 格式混合在一起,方便进行格式化与校验。
+
+## init
+
+`vela def init` 是一个用来帮助用户初始化新的 Definition 的脚手架命令。用户可以通过 `vela def init my-trait -t trait --desc "My trait description."` 来创建一个新的空白 TraitDefinition,如下:
+
+```json
+"my-trait": {
+	annotations: {}
+	attributes: {
+		appliesToWorkloads: []
+		conflictsWith: []
+		definitionRef:   ""
+		podDisruptive:   false
+		workloadRefPath: ""
+	}
+	description: "My trait description."
+	labels: {}
+	type: "trait"
+}
+template: patch: {}
+```
+
+或者是采用 `vela def init my-comp --interactive` 来交互式地创建新的 Definition。
+
+```bash
+$ vela def init my-comp --interactive
+Please choose one definition type from the following values: component, trait, policy, workload, scope, workflow-step
+> Definition type: component
+> Definition description: My component definition.
+Please enter the location the template YAML file to build definition. Leave it empty to generate default template.
+> Definition template filename: 
+Please enter the output location of the generated definition. Leave it empty to print definition to stdout.
+> Definition output filename: my-component.cue
+Definition written to my-component.cue
+```
+
+除此之外,如果用户创建 ComponentDefinition 的目的是生成一个 Deployment(或者是其他的 Kubernetes Object),而这个 Deployment 已经有了 YAML 格式的模版,用户还可以通过 `--template-yaml` 参数来完成从 YAML 到 CUE 的自动转换。例如如下的 `my-deployment.yaml`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hello-world
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: hello-world
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: hello-world
+    spec:
+      containers:
+        - name: hello-world
+          image: somefive/hello-world
+          ports:
+            - name: http
+              containerPort: 80
+              protocol: TCP
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: hello-world-service
+spec:
+  selector:
+    app: hello-world
+  ports:
+    - name: http
+      protocol: TCP
+      port: 80
+      targetPort: 8080
+  type: LoadBalancer
+```
+
+运行 `vela def init my-comp -t component --desc "My component." --template-yaml ./my-deployment.yaml` 可以得到 CUE 格式的 ComponentDefinition:
+
+```json
+"my-comp": {
+	annotations: {}
+	attributes: workload: definition: {
+		apiVersion: "<change me> apps/v1"
+		kind:       "<change me> Deployment"
+	}
+	description: "My component."
+	labels: {}
+	type: "component"
+}
+template: {
+	output: {
+		metadata: name: "hello-world"
+		spec: {
+			replicas: 1
+			selector: matchLabels: "app.kubernetes.io/name": "hello-world"
+			template: {
+				metadata: labels: "app.kubernetes.io/name": "hello-world"
+				spec: containers: [{
+					name:  "hello-world"
+					image: "somefive/hello-world"
+					ports: [{
+						name:          "http"
+						containerPort: 80
+						protocol:      "TCP"
+					}]
+				}]
+			}
+		}
+		apiVersion: "apps/v1"
+		kind:       "Deployment"
+	}
+	outputs: "hello-world-service": {
+		metadata: name: "hello-world-service"
+		spec: {
+			ports: [{
+				name:       "http"
+				protocol:   "TCP"
+				port:       80
+				targetPort: 8080
+			}]
+			selector: app: "hello-world"
+			type: "LoadBalancer"
+		}
+		apiVersion: "v1"
+		kind:       "Service"
+	}
+	parameter: {}
+}
+```
+
+接下来,用户就可以在该文件的基础上做进一步的修改了,比如将属性 **workload.definition** 中的 *\<change me\>* 占位符去掉,填入正确的 apiVersion 和 kind。
+
+## vet
+
+在初始化 Definition 文件之后,可以运行 `vela def vet my-comp.cue` 来校验 Definition 是否在语法上有错误。比如如果少写了一个括号,该命令能够帮助用户识别出来。
+
+```bash
+$ vela def vet my-comp.cue
+Validation succeed.
+```
+
+## render / apply
+
+确认 Definition 撰写无误后,开发者可以运行 `vela def apply my-comp.cue --namespace my-namespace` 将该 Definition 应用到 Kubernetes 的 my-namespace 命名空间中。如果想了解 CUE 格式的 Definition 文件会被渲染成什么样的 Kubernetes YAML 文件,可以使用 `vela def apply my-comp.cue --dry-run` 或者 `vela def render my-comp.cue -o my-comp.yaml` 预先渲染出 YAML 文件进行确认。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: My component.
+  labels: {}
+  name: my-comp
+  namespace: vela-system
+spec:
+  schematic:
+    cue:
+      template: |
+        output: {
+        	metadata: name: "hello-world"
+        	spec: {
+        		replicas: 1
+        		selector: matchLabels: "app.kubernetes.io/name": "hello-world"
+        		template: {
+        			metadata: labels: "app.kubernetes.io/name": "hello-world"
+        			spec: containers: [{
+        				name:  "hello-world"
+        				image: "somefive/hello-world"
+        				ports: [{
+        					name:          "http"
+        					containerPort: 80
+        					protocol:      "TCP"
+        				}]
+        			}]
+        		}
+        	}
+        	apiVersion: "apps/v1"
+        	kind:       "Deployment"
+        }
+        outputs: "hello-world-service": {
+        	metadata: name: "hello-world-service"
+        	spec: {
+        		ports: [{
+        			name:       "http"
+        			protocol:   "TCP"
+        			port:       80
+        			targetPort: 8080
+        		}]
+        		selector: app: "hello-world"
+        		type: "LoadBalancer"
+        	}
+        	apiVersion: "v1"
+        	kind:       "Service"
+        }
+        parameter: {}
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+```
+
+```bash
+$ vela def apply my-comp.cue -n my-namespace
+ComponentDefinition my-comp created in namespace my-namespace.
+```
+
+## get / list / edit / del
+
+在 apply 命令后,开发者可以采用原生的 kubectl 来对结果进行确认,但是正如我们上文提到的,YAML 格式的结果会相对复杂。使用 `vela def get` 命令可以自动将其转换成 CUE 格式,方便用户查看。
+
+```bash
+$ vela def get my-comp -t component
+```
+
+或者用户可以通过 `vela def list` 命令来查看当前系统中安装的所有 Definition(可以指定命名空间及类型)。
+
+```bash
+$ vela def list -n my-namespace -t component
+NAME    TYPE                NAMESPACE       DESCRIPTION
+my-comp ComponentDefinition my-namespace    My component.
+```
+
+同样的,在使用 `vela def edit` 命令来编辑 Definition 时,用户也只需要对转换过的 CUE 格式 Definition 进行修改,该命令会自动完成格式转换。用户也可以通过设定环境变量 `EDITOR` 来使用自己想要使用的编辑器。
+
+```bash
+$ EDITOR=vim vela def edit my-comp
+```
+
+类似的,用户可以运行 `vela def del` 来删除相应的 Definition。
+
+```bash
+$ vela def del my-comp -n my-namespace
+Are you sure to delete the following definition in namespace my-namespace?
+ComponentDefinition my-comp: My component.
+[yes|no] > yes
+ComponentDefinition my-comp in namespace my-namespace deleted.
+```
+
+## 下一步
+
+* 了解如何使用 CUE 和 KubeVela 工具[交付完整的模块能力](./advanced)。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug-test-cue.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug-test-cue.md
new file mode 100644
index 00000000..a89ce3bb
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug-test-cue.md
@@ -0,0 +1,665 @@
+---
+title: 调试、测试以及 Dry-run
+---
+
+对于基于 CUE 这种具有强大灵活抽象能力的语言所定义的模版来说,调试、测试以及 dry-run 非常重要。本教程将逐步介绍如何进行调试。
+
+## 前提
+
+请确保你的环境已经安装以下 CLI :
+* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
+
+## 定义 Definition 和 Template
+
+我们建议将 `Definition Object` 拆分为两个部分:CRD 部分和 CUE 模版部分。这样的拆分能帮助我们对 CUE 模版进行调试、测试以及 dry-run 操作。
+
+我们将 CRD 部分保存到 `def.yaml` 文件。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: microservice
+  annotations:
+    definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
+spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | +``` + +同时将 CUE 模版部分保存到 `def.cue` 文件,随后我们可以使用 CUE 命令行(`cue fmt` / `cue vet`)格式化和校验 CUE 文件。 + +``` +output: { + // Deployment + apiVersion: "apps/v1" + kind: "Deployment" + metadata: { + name: context.name + namespace: "default" + } + spec: { + selector: matchLabels: { + "app": context.name + } + template: { + metadata: { + labels: { + "app": context.name + "version": parameter.version + } + } + spec: { + serviceAccountName: "default" + terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds + containers: [{ + name: context.name + image: parameter.image + ports: [{ + if parameter.containerPort != _|_ { + containerPort: parameter.containerPort + } + if parameter.containerPort == _|_ { + containerPort: parameter.servicePort + } + }] + if parameter.env != _|_ { + env: [ + for k, v in parameter.env { + name: k + value: v + }, + ] + } + resources: { + requests: { + if parameter.cpu != _|_ { + cpu: parameter.cpu + } + if parameter.memory != _|_ { + memory: parameter.memory + } + } + } + }] + } + } + } +} +// Service +outputs: service: { + apiVersion: "v1" + kind: "Service" + metadata: { + name: context.name + labels: { + "app": context.name + } + } + spec: { + type: "ClusterIP" + selector: { + "app": context.name + } + ports: [{ + port: parameter.servicePort + if parameter.containerPort != _|_ { + targetPort: parameter.containerPort + } + if parameter.containerPort == _|_ { + targetPort: parameter.servicePort + } + }] + } +} +parameter: { + version: *"v1" | string + image: string + servicePort: int + containerPort?: int + // +usage=Optional duration in seconds the pod needs to terminate gracefully + podShutdownGraceSeconds: *30 | int + env: [string]: string + cpu?: string + memory?: string +} +``` + +以上操作完成之后,使用该脚本 [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh) 将 `def.yaml` 和 `def.cue` 合并到完整的 Definition 对象中。 + +```shell +$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > microservice-def.yaml +``` + +## 调试 CUE 模版 + +### 使用 `cue vet` 进行校验 + +```shell +$ cue vet def.cue +output.metadata.name: reference "context" not found: + ./def.cue:6:14 +output.spec.selector.matchLabels.app: reference "context" not found: + ./def.cue:11:11 +output.spec.template.metadata.labels.app: reference "context" not found: + ./def.cue:16:17 +output.spec.template.spec.containers.name: reference "context" not found: + ./def.cue:24:13 +outputs.service.metadata.name: reference "context" not found: + ./def.cue:62:9 +outputs.service.metadata.labels.app: reference "context" not found: + ./def.cue:64:11 +outputs.service.spec.selector.app: reference "context" not found: + ./def.cue:70:11 +``` + +常见错误 `reference "context" not found` 主要发生在 [`context`](./cue/component#cue-context),该部分是仅在 KubeVela 控制器中存在的运行时信息。我们可以在 `def.cue` 中模拟 `context` ,从而对 CUE 模版进行 end-to-end 的校验操作。 + +> 注意,完成校验测试之后需要清除所有模拟数据。 + +```CUE +... 
// existing template data
+context: {
+	name: string
+}
+```
+
+随后执行命令:
+
+```shell
+$ cue vet def.cue
+some instances are incomplete; use the -c flag to show errors or suppress this message
+```
+
+错误 `reference "context" not found` 已经解决,但是 `cue vet` 仅对数据类型进行校验,这还不能证明模版逻辑是正确的。因此,我们需要使用 `cue vet -c` 完成最终校验:
+
+```shell
+$ cue vet def.cue -c
+context.name: incomplete value string
+output.metadata.name: incomplete value string
+output.spec.selector.matchLabels.app: incomplete value string
+output.spec.template.metadata.labels.app: incomplete value string
+output.spec.template.spec.containers.0.image: incomplete value string
+output.spec.template.spec.containers.0.name: incomplete value string
+output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
+outputs.service.metadata.labels.app: incomplete value string
+outputs.service.metadata.name: incomplete value string
+outputs.service.spec.ports.0.port: incomplete value int
+outputs.service.spec.ports.0.targetPort: incomplete value int
+outputs.service.spec.selector.app: incomplete value string
+parameter.image: incomplete value string
+parameter.servicePort: incomplete value int
+```
+
+此时,命令行抛出了运行时数据不完整的异常(主要是因为 `context` 和 `parameter` 字段中还没有设置值),现在我们填充更多的模拟数据到 `def.cue` 文件:
+
+```CUE
+context: {
+	name: "test-app"
+}
+parameter: {
+	version:       "v2"
+	image:         "image-address"
+	servicePort:   80
+	containerPort: 8000
+	env: {"PORT": "8000"}
+	cpu:           "500m"
+	memory:        "128Mi"
+}
+```
+
+此时,执行以下命令没有抛出异常,说明逻辑校验通过:
+
+```shell
+cue vet def.cue -c
+```
+
+#### 使用 `cue export` 校验已渲染的资源
+
+`cue export` 命令会将渲染结果以 YAML 格式导出:
+
+```shell
+$ cue export -e output def.cue --out yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: test-app
+  namespace: default
+spec:
+  selector:
+    matchLabels:
+      app: test-app
+  template:
+    metadata:
+      labels:
+        app: test-app
+        version: v2
+    spec:
+      serviceAccountName: default
+      terminationGracePeriodSeconds: 30
+      containers:
+      - name: test-app
+        image: image-address
+```
+
+```shell
+$ cue export -e outputs.service def.cue --out yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: test-app
+  labels:
+    app: test-app
+spec:
+  selector:
+    app: test-app
+  type: ClusterIP
+```
+
+### 测试使用 `Kube` 包的 CUE 模版
+
+KubeVela 将所有内置 Kubernetes API 资源以及 CRD 自动生成为内部 CUE 包。
+你可以将它们导入 CUE 模板中,以简化模板并帮助你进行验证。
+
+目前有两种方式来导入内部 `kube` 包。
+
+1. 以固定路径 `kube/<apiVersion>` 导入,这样我们就可以直接引用 `Kind` 对应的结构体。
+   ```cue
+   import (
+    apps "kube/apps/v1"
+    corev1 "kube/v1"
+   )
+   // output is validated by Deployment.
+   output: apps.#Deployment
+   outputs: service: corev1.#Service
+   ```
+   这是比较好记易用的方式,主要因为它与 Kubernetes Object 的用法一致,只需要在 `apiVersion` 之前添加前缀 `kube/`。
+   当然,这种方式仅在 KubeVela 中被支持,所以你只能通过 [`vela system dry-run`](#dry-run-the-application) 的方式进行调试和测试。
+
+2. 以第三方包的方式导入。
+   你可以运行 `vela system cue-packages` 获取所有内置 `kube` 包,通过这个方式可以了解当前支持的 `third-party packages`。
+
+   ```shell
+   $ vela system cue-packages
+   DEFINITION-NAME      IMPORT-PATH         USAGE
+   #Deployment          k8s.io/apps/v1      Kube Object for apps/v1.Deployment
+   #Service             k8s.io/core/v1      Kube Object for v1.Service
+   #Secret              k8s.io/core/v1      Kube Object for v1.Secret
+   #Node                k8s.io/core/v1      Kube Object for v1.Node
+   #PersistentVolume    k8s.io/core/v1      Kube Object for v1.PersistentVolume
+   #Endpoints           k8s.io/core/v1      Kube Object for v1.Endpoints
+   #Pod                 k8s.io/core/v1      Kube Object for v1.Pod
+   ```
+   其实,这些都是内置包,只是你可以像 `third-party packages` 一样使用 `import-path` 导入这些包。
+   采用这种方式时,你可以使用 `cue` 命令行进行调试。
+
+
+#### 使用 `Kube` 包的 CUE 模版调试流程
+
+此部分主要介绍使用 `cue` 命令行对 CUE 模版进行调试和测试的流程,调试通过后可以在 KubeVela 中使用**完全相同的 CUE 模版**。
+
+1. 
创建目录,初始化 CUE 模块 + +```shell +mkdir cue-debug && cd cue-debug/ +cue mod init oam.dev +go mod init oam.dev +touch def.cue +``` + +2. 使用 `cue` 命令行下载 `third-party packages` + +其实在 KubeVela 中并不需要下载这些包,因为它们已经被从 Kubernetes API 自动生成。 +但是在本地测试环境,我们需要使用 `cue get go` 来获取 Go 包并将其转换为 CUE 格式的文件。 + +所以,为了能够使用 Kubernetes 中 `Deployment` 和 `Serivice` 资源,我们需要下载并转换为 `core` 和 `apps` Kubernetes 模块的 CUE 定义,如下所示: + +```shell +cue get go k8s.io/api/core/v1 +cue get go k8s.io/api/apps/v1 +``` + +随后,该模块目录下可以看到如下结构: + +```shell +├── cue.mod +│ ├── gen +│ │ └── k8s.io +│ │ ├── api +│ │ │ ├── apps +│ │ │ └── core +│ │ └── apimachinery +│ │ └── pkg +│ ├── module.cue +│ ├── pkg +│ └── usr +├── def.cue +├── go.mod +└── go.sum +``` + +该包在 CUE 模版中被导入的路径应该是: + +```cue +import ( + apps "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" +) +``` + +3. 重构目录结构 + +我们的目标是本地测试模版并在 KubeVela 中使用相同模版。 +所以我们需要对我们本地 CUE 模块目录进行一些重构,并将目录与 KubeVela 提供的导入路径保持一致。 + +我们将 `apps` 和 `core` 目录从 `cue.mod/gen/k8s.io/api` 复制到 `cue.mod/gen/k8s.io`。 +请注意,我们应将源目录 `apps` 和 `core` 保留在 `gen/k8s.io/api` 中,以避免出现包依赖性问题。 + +```bash +cp -r cue.mod/gen/k8s.io/api/apps cue.mod/gen/k8s.io +cp -r cue.mod/gen/k8s.io/api/core cue.mod/gen/k8s.io +``` + +合并过之后到目录结构如下: + +```shell +├── cue.mod +│ ├── gen +│ │ └── k8s.io +│ │ ├── api +│ │ │ ├── apps +│ │ │ └── core +│ │ ├── apimachinery +│ │ │ └── pkg +│ │ ├── apps +│ │ └── core +│ ├── module.cue +│ ├── pkg +│ └── usr +├── def.cue +├── go.mod +└── go.sum +``` + +因此,你可以使用与 KubeVela 对齐的路径导入包: + +```cue +import ( + apps "k8s.io/apps/v1" + corev1 "k8s.io/core/v1" +) +``` + +4. 运行测试 + +最终,我们可以使用 `Kube` 包测试 CUE 模版。 + +```cue +import ( + apps "k8s.io/apps/v1" + corev1 "k8s.io/core/v1" +) + +// output is validated by Deployment. +output: apps.#Deployment +output: { + metadata: { + name: context.name + namespace: "default" + } + spec: { + selector: matchLabels: { + "app": context.name + } + template: { + metadata: { + labels: { + "app": context.name + "version": parameter.version + } + } + spec: { + terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds + containers: [{ + name: context.name + image: parameter.image + ports: [{ + if parameter.containerPort != _|_ { + containerPort: parameter.containerPort + } + if parameter.containerPort == _|_ { + containerPort: parameter.servicePort + } + }] + if parameter.env != _|_ { + env: [ + for k, v in parameter.env { + name: k + value: v + }, + ] + } + resources: { + requests: { + if parameter.cpu != _|_ { + cpu: parameter.cpu + } + if parameter.memory != _|_ { + memory: parameter.memory + } + } + } + }] + } + } + } +} + +outputs:{ + service: corev1.#Service +} + + +// Service +outputs: service: { + metadata: { + name: context.name + labels: { + "app": context.name + } + } + spec: { + //type: "ClusterIP" + selector: { + "app": context.name + } + ports: [{ + port: parameter.servicePort + if parameter.containerPort != _|_ { + targetPort: parameter.containerPort + } + if parameter.containerPort == _|_ { + targetPort: parameter.servicePort + } + }] + } +} +parameter: { + version: *"v1" | string + image: string + servicePort: int + containerPort?: int + // +usage=Optional duration in seconds the pod needs to terminate gracefully + podShutdownGraceSeconds: *30 | int + env: [string]: string + cpu?: string + memory?: string +} + +// mock context data +context: { + name: "test" +} + +// mock parameter data +parameter: { + image: "test-image" + servicePort: 8000 + env: { + "HELLO": "WORLD" + } +} +``` + +使用 `cue export` 导出渲染结果。 + +```shell +$ cue export def.cue --out yaml +output: + 
metadata: + name: test + namespace: default + spec: + selector: + matchLabels: + app: test + template: + metadata: + labels: + app: test + version: v1 + spec: + terminationGracePeriodSeconds: 30 + containers: + - name: test + image: test-image + ports: + - containerPort: 8000 + env: + - name: HELLO + value: WORLD + resources: + requests: {} +outputs: + service: + metadata: + name: test + labels: + app: test + spec: + selector: + app: test + ports: + - port: 8000 + targetPort: 8000 +parameter: + version: v1 + image: test-image + servicePort: 8000 + podShutdownGraceSeconds: 30 + env: + HELLO: WORLD +context: + name: test +``` + +## Dry-Run `Application` + +当 CUE 模版就绪,我们就可以使用 `vela system dry-run` 执行 dry-run 并检查在真实 Kubernetes 集群中被渲染的资源。该命令行背后的执行逻辑与 KubeVela 中 `Application` 控制器的逻辑是一致的。 + +首先,我们需要使用 `mergedef.sh` 合并 Definition 和 CUE 文件。 + +```shell +$ mergedef.sh def.yaml def.cue > componentdef.yaml +``` + +随后,我们创建 `test-app.yaml` Application。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: boutique + namespace: default +spec: + components: + - name: frontend + type: microservice + properties: + image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2 + servicePort: 80 + containerPort: 8080 + env: + PORT: "8080" + cpu: "100m" + memory: "64Mi" +``` + +针对上面 Application 使用 `vela system dry-run` 命令执行 dry-run 操作。 + +```shell +$ vela system dry-run -f test-app.yaml -d componentdef.yaml +--- +# Application(boutique) -- Comopnent(frontend) +--- + +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.oam.dev/component: frontend + app.oam.dev/name: boutique + workload.oam.dev/type: microservice + name: frontend + namespace: default +spec: + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + version: v1 + spec: + containers: + - env: + - name: PORT + value: "8080" + image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2 + name: frontend + ports: + - containerPort: 8080 + resources: + requests: + cpu: 100m + memory: 64Mi + serviceAccountName: default + terminationGracePeriodSeconds: 30 + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: frontend + app.oam.dev/component: frontend + app.oam.dev/name: boutique + trait.oam.dev/resource: service + trait.oam.dev/type: AuxiliaryWorkload + name: frontend +spec: + ports: + - port: 80 + targetPort: 8080 + selector: + app: frontend + type: ClusterIP + +--- +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug/dry-run.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug/dry-run.md new file mode 100644 index 00000000..c4c4dd05 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/debug/dry-run.md @@ -0,0 +1,138 @@ +--- +title: 本地试运行 +--- + +如果你是一个 DevOps 用户或者运维管理员,并且对 Kubernetes 有所了解,为了保证一个应用部署计划在 Kubernetes 运行时集群的表现符合期望,在开发调试阶段,你也可以通过下面的试运行功能提前确认这个部署计划背后的逻辑是否正确。 + +KubeVela 提供了本地试运行(Dry-run)的功能,来满足你的这个需求。 + +### 如何使用 + +我们将以一个应用部署计划的示例,来进行讲解。 + +首先编写如下的 YAML 文件: + +```yaml +# app.yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: vela-app +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + "/": 8000 +``` + +可以看到,我们的期望是交付一个 Web Service 的组件,使用来自 `crccheck/hello-world` 的镜像,并最终提供一个可供对外访问的网关,地址域名为 `testsvc.example.com`,端口号 8000。 + +然后打开本地试运行模式,使用如下命令: + +```shell 
+vela dry-run -f app.yaml +``` +```console +--- +# Application(vela-app) -- Comopnent(express-server) +--- + +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.oam.dev/appRevision: "" + app.oam.dev/component: express-server + app.oam.dev/name: vela-app + workload.oam.dev/type: webservice +spec: + selector: + matchLabels: + app.oam.dev/component: express-server + template: + metadata: + labels: + app.oam.dev/component: express-server + spec: + containers: + - image: crccheck/hello-world + name: express-server + ports: + - containerPort: 8000 + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app.oam.dev/appRevision: "" + app.oam.dev/component: express-server + app.oam.dev/name: vela-app + trait.oam.dev/resource: service + trait.oam.dev/type: ingress + name: express-server +spec: + ports: + - port: 8000 + targetPort: 8000 + selector: + app.oam.dev/component: express-server + +--- +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + labels: + app.oam.dev/appRevision: "" + app.oam.dev/component: express-server + app.oam.dev/name: vela-app + trait.oam.dev/resource: ingress + trait.oam.dev/type: ingress + name: express-server +spec: + rules: + - host: testsvc.example.com + http: + paths: + - backend: + serviceName: express-server + servicePort: 8000 + path: / + +--- +``` + +查看本地试运行模式给出的信息,我们可以进行确认: + +1. Kubernetes 集群内部 Service 和我们期望的 `kind: Deployment` 部署,在相关的镜像地址、域名端口上,是否能匹配。 +2. 最终对外的 Ingress 网关,与 Kubernetes 集群内部的 `Service`,在相关的镜像地址、域名端口上,是否能匹配。 + +在完成上述信息确认之后,我们就能进行后续的开发调试步骤了。 + +最后,你还可以通过 `vela dry-run -h` 来查看更多可用的本地试运行模式: + +``` +Dry Run an application, and output the K8s resources as result to stdout, only CUE template supported for now + +Usage: + vela dry-run + +Examples: +vela dry-run + +Flags: + -d, --definition string specify a definition file or directory, it will only be used in dry-run rather than applied to K8s cluster + -f, --file string application file name (default "./app.yaml") + -h, --help help for dry-run + +Global Flags: + -e, --env string specify environment name for application +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/definition-and-templates.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/definition-and-templates.md new file mode 100644 index 00000000..9bf9936b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/definition-and-templates.md @@ -0,0 +1,238 @@ +--- +title: 定义CRD +--- + +在本部分中,我们会对 `ComponentDefinition` 和 `TraitDefinition` 进行详细介绍。 + +> 所有的定义对象都应由平台团队来进行维护和安装。在此背景下,可以把平台团队理解为平台中的*能力提供者*。 + +## 概述 + +本质上,KubeVela 中的定义对象由三个部分组成: + +- **能力指示器 (Capability Indicator)** + - `ComponentDefinition` 使用 `spec.workload` 指出此组件的 workload 类型. + - `TraitDefinition` 使用 `spec.definitionRef` 指出此 trait 的提供者。 +- **互操作字段 (Interoperability Fields)** + - 他们是为平台所设计的,用来确保给定的 workload 类型可以和某个 trait 一起工作。因此只有 `TraitDefinition` 有这些字段。 +- **能力封装和抽象 (Capability Encapsulation and Abstraction)** (由 `spec.schematic` 定义) + - 它定义了此 capability 的**模板和参数** ,比如封装。 + +因此,定义对象的基本结构如下所示: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: XxxDefinition +metadata: + name: +spec: + ... + schematic: + cue: + # cue template ... + helm: + # Helm chart ... + # ... 
interoperability fields
+```
+
+我们接下来详细解释每个字段。
+
+### 能力指示器 (Capability Indicator)
+
+在 `ComponentDefinition` 中,workload 类型的指示器被声明为 `spec.workload`。
+
+下面的示例是 KubeVela 中 *Web Service* 类型的组件定义:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: webservice
+  namespace: default
+  annotations:
+    definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  ...
+```
+
+在上面的示例中,该定义声明使用 Kubernetes 的 Deployment(`apiVersion: apps/v1`,`kind: Deployment`)作为组件的 workload 类型。
+
+### 互操作字段 (Interoperability Fields)
+
+**只有 trait** 有互操作字段。在一个 `TraitDefinition` 中,互操作字段的大体示例如下所示:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  name: ingress
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+    - webservice
+  conflictsWith:
+    - service
+  workloadRefPath: spec.workloadRef
+  podDisruptive: false
+```
+
+我们来详细解释一下。
+
+#### `.spec.appliesToWorkloads`
+
+该字段定义了此 trait 允许应用于哪些类型的 workload 的约束。
+
+- 它使用一个字符串的数组作为其值。
+- 数组中的每一个元素指向允许应用此 trait 的一个或一组 workload 类型。
+
+有四种方式来表示一个或者一组 workload 类型:
+
+- `ComponentDefinition` 名称,例如 `webservice`、`worker`
+- `ComponentDefinition` 定义引用(CRD 名称),例如 `deployments.apps`
+- 以 `*.` 为前缀的 `ComponentDefinition` 定义引用的资源组,例如 `*.apps` 和 `*.oam.dev`。这表示 trait 被允许应用于该组中的任意 workload。
+- `*` 表示 trait 被允许应用于任意 workload。
+
+如果省略此字段,则表示该 trait 允许应用于任意 workload 类型。
+
+如果将一个 trait 应用于未包含在 `appliesToWorkloads` 中的 workload,KubeVela 将会报错。
+
+#### `.spec.conflictsWith`
+
+该字段定义了哪些种类的 trait 与此 trait 相冲突:如果它们被应用于同一个 workload,就会发生冲突。
+
+- 它使用一个字符串的数组作为其值。
+- 数组中的每一个元素指向一个或一组 trait。
+
+有三种方式来表示一个或者一组 trait:
+
+- `TraitDefinition` 名称,比如 `ingress`
+- 以 `*.` 为前缀的 `TraitDefinition` 定义引用的资源组,例如 `*.networking.k8s.io`。这表示当前 trait 与该组中的任意 trait 相冲突。
+- `*` 表示当前 trait 与任意 trait 相冲突。
+
+如果省略此字段,则表示该 trait 没有和其他任何 trait 相冲突。
+
+#### `.spec.workloadRefPath`
+
+该字段定义了 trait 中的一个字段路径,用于存放该 trait 所作用的 workload 的引用。
+
+- 它使用一个字符串作为其值,比如 `spec.workloadRef`。
+
+如果设置了此字段,KubeVela core 会自动把 workload 引用填充到 trait 的这个字段中,trait controller 随后就可以从 trait 中获取 workload 引用。因此,此字段通常出现在那些 controller 在运行时依赖 workload 引用的 trait 中。
+
+具体如何设置此字段,可以参考 [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) trait 的示例。
+
+#### `.spec.podDisruptive`
+
+此字段定义了添加或更新该 trait 是否会导致 pod 重启。在上面的示例中答案是不会,所以该字段为 `false`,添加或更新该 trait 不会影响 pod;如果此字段是 `true`,则添加或更新该 trait 会导致 pod 被重建并重启。默认情况下,该值为 `false`,即该 trait 不会影响 pod。请谨慎设置此字段,对于大规模生产使用场景而言,它非常重要。
+
+### 能力封装和抽象 (Capability Encapsulation and Abstraction)
+
+给定能力的模板和封装定义在 `spec.schematic` 字段中。下面是 KubeVela 中 *Web Service* 类型的完整定义:
+ +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: webservice + namespace: default + annotations: + definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers." +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + + spec: { + containers: [{ + name: context.name + image: parameter.image + + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + + if parameter["env"] != _|_ { + env: parameter.env + } + + if context["config"] != _|_ { + env: context.config + } + + ports: [{ + containerPort: parameter.port + }] + + if parameter["cpu"] != _|_ { + resources: { + limits: + cpu: parameter.cpu + requests: + cpu: parameter.cpu + } + } + }] + } + } + } + } + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + + // +usage=Commands to run in the container + cmd?: [...string] + + // +usage=Which port do you want customer traffic sent to + // +short=p + port: *80 | int + // +usage=Define arguments by using environment variables + env?: [...{ + // +usage=Environment variable name + name: string + // +usage=The value of the environment variable + value?: string + // +usage=Specifies a source the value of this var should come from + valueFrom?: { + // +usage=Selects a key of a secret in the pod's namespace + secretKeyRef: { + // +usage=The name of the secret in the pod's namespace to select from + name: string + // +usage=The key of the secret to select from. Must be a valid secret key + key: string + } + } + }] + // +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) + cpu?: string + } +``` + +
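+
+作为参考,下面给出一个使用上述 `webservice` 组件定义的应用部署计划示意(其中镜像、端口等取值仅为示例):
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: webservice-demo
+spec:
+  components:
+    - name: my-svc
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+        cpu: "0.5"
+```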
+ +`schematic` 的技术规范在接下来的 CUE 和 Helm 相关的文档中有详细解释。 + +同时,`schematic` 字段使你可以直接根据他们来渲染UI表单。详细操作请见[从定义中生成表单](/docs/platform-engineers/openapi-v3-json-schema)部分。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/component.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/component.md new file mode 100644 index 00000000..04002f8c --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/component.md @@ -0,0 +1,89 @@ +--- +title: 怎么用 helm +--- + +在本节中,将介绍如何通过 `ComponentDefinition` 将 Helm charts 声明为应用程序组件。 + +> 在阅读本部分之前,请确保你已经了解了[定义和模板概念](../definition-and-templates)。 + +## 先决条件 + +* [fluxcd/flux2](../../install#3-optional-install-flux2),请确保你已经在[安装指南](../../install)中安装了 flux2。 + +## 声明 `ComponentDefinition` + +这是一个关于如何使用 Helm 作为 schematic 模块的示例 `ComponentDefinition`。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: webapp-chart + annotations: + definition.oam.dev/description: helm chart for webapp +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + helm: + release: + chart: + spec: + chart: "podinfo" + version: "5.1.4" + repository: + url: "http://oam.dev/catalog/" +``` + +详细: +- 需要`.spec.workload` 来指示这个基于 Helm 的组件的工作负载类型。 如果你将多个工作负载打包在一个 chart 中,请同时检查 [已知限制](./known-issues#=workload-type-indicator)。 +- `.spec.schematic.helm` 包含 Helm `release` 和利用 `fluxcd/flux2` 的 `repository` 的信息。 + - 即`release`的pec与[`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) 对齐,`repository`的 spec 和[`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository)对齐。 + +## 声明一个`Application` + +这是一个示例 `Application`。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: myapp + namespace: default +spec: + components: + - name: demo-podinfo + type: webapp-chart + properties: + image: + tag: "5.1.2" +``` + +组件 `properties` 正是 Helm Chart 的 [overlay values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml)。 + +部署应用程序,几分钟后(获取 Helm Chart 可能需要一些时间),你可以检查 Helm 版本是否已安装。 +```shell +$ helm ls -A +myapp-demo-podinfo default 1 2021-03-05 02:02:18.692317102 +0000 UTC deployed podinfo-5.1.4 5.1.4 +``` +检查 Chart 中定义的工作负载是否已成功创建。 +```shell +$ kubectl get deploy +NAME READY UP-TO-DATE AVAILABLE AGE +myapp-demo-podinfo 1/1 1 1 66m +``` + +检查应用程序的 `properties` 中的值(`image.tag = 5.1.2`)是否已分配给 Chart 。 +```shell +$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image' +"ghcr.io/stefanprodan/podinfo:5.1.2" +``` + + +### 从基于 Helm 的组件生成表单 + +KubeVela 会根据 Helm Chart 中的 [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) 自动生成 OpenAPI v3 JSON schema,并将其存储在一个 ` ConfigMap` 在与定义对象相同的 `namespace` 中。 此外,如果 Chart 作者未提供 `values.schema.json`,KubeVela 将根据其 `values.yaml` 文件自动生成 OpenAPI v3 JSON 模式。 + +请查看 [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) 指南,了解有关使用此架构呈现 GUI 表单的更多详细信息。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/known-issues.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/known-issues.md new file mode 100644 index 00000000..4655d7e2 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/known-issues.md @@ -0,0 +1,78 @@ +--- +title: 已知限制 +--- + +## 
+### 控制应用程序升级
+
+对组件 `properties` 所做的更改将触发 Helm release 升级。此过程由 Flux v2 Helm 控制器处理，因此你可以根据 [Helm Release 文档](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)和[规范](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)定义补救（remediation）策略，以防升级过程中发生故障。
+
+例如：
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: webapp-chart
+spec:
+...
+  schematic:
+    helm:
+      release:
+        chart:
+          spec:
+            chart: "podinfo"
+            version: "5.1.4"
+        upgrade:
+          remediation:
+            retries: 3
+            remediationStrategy: rollback
+      repository:
+        url: "http://oam.dev/catalog/"
+```
+
+不过目前存在一个问题：很难从 Helm release 的实时状态中获得有用信息，以了解升级失败时发生了什么。我们将增强可观测性，帮助用户在应用层面跟踪 Helm release 的情况。
+
+## 问题
+
+以下已知问题将在后续版本中修复。
+
+### 滚动发布策略
+
+目前，基于 Helm 的组件无法受益于[应用级别的滚动发布策略](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow)。如[本示例](./trait#update-an-applicatiion)所示，如果应用更新了，只能直接整体发布，没有金丝雀（canary）或蓝绿（blue-green）发布方式。
+
+### 更新 Trait 属性也可能导致 Pod 重启
+
+Trait 属性的更改可能会影响组件实例，属于该工作负载实例的 Pod 将会重启。在基于 CUE 的组件中这是可以避免的，因为 KubeVela 可以完全控制资源的渲染过程；而在基于 Helm 的组件中，渲染过程目前交由 Flux v2 控制器处理。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/trait.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/trait.md
new file mode 100644
index 00000000..03fbd61f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/helm/trait.md
@@ -0,0 +1,153 @@
+---
+title: 添加 Trait 特性
+---
+
+KubeVela 中的 Trait 特性可以无缝地添加到基于 Helm 的组件上。
+
+在以下应用实例中，我们将为基于 Helm 的组件添加两个 Trait 特性：[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) 和 [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml)。
+
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.2"
+      traits:
+        - type: scaler
+          properties:
+            replicas: 4
+        - type: virtualgroup
+          properties:
+            group: "my-group1"
+            type: "cluster"
+```
+
+> 注意：当我们为基于 Helm 的组件添加 Trait 特性时，*请确保你 Helm Chart 中的目标工作负载严格遵循 Helm 的 qualified-full-name 命名约定*。[以此 Chart 为例](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4)，
+> 工作负载名由 [release 名和 Chart 名](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13)组合而成。
+
+> 这是因为 KubeVela 依赖命名来发现工作负载，否则无法把 Trait 特性应用到工作负载上。KubeVela 会基于应用名和组件名自动生成 release 名，所以你需要确保 Helm Chart 中的命名模板不要破坏这一约定。
+
+## 验证特性工作正确
+
+> 应用内部的调整生效需要几秒钟时间，请耐心等待。
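+
+首先部署上面的应用（这里假设清单保存为 `myapp.yaml`）：
+
+```shell
+$ kubectl apply -f myapp.yaml
+```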
+
+检查 `scaler` 特性已生效。
+```shell
+$ kubectl get manualscalertrait
+NAME                            AGE
+demo-podinfo-scaler-d8f78c6fc   13m
+```
+```shell
+$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
+4
+```
+
+检查 `virtualgroup` 特性已生效。
+```shell
+$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
+{
+  "app.cluster.virtual.group": "my-group1",
+  "app.kubernetes.io/name": "myapp-demo-podinfo"
+}
+```
+
+## 更新应用
+
+当应用已被部署且工作负载和 Trait 特性都顺利创建后，你可以更新应用，变化会同步到工作负载实例上。
+
+让我们对示例应用的配置做几处改动。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.3" # 5.1.2 => 5.1.3
+      traits:
+        - type: scaler
+          properties:
+            replicas: 2 # 4 => 2
+        - type: virtualgroup
+          properties:
+            group: "my-group2" # my-group1 => my-group2
+            type: "cluster"
+```
+
+应用新配置，几分钟后检查效果。
+
+检查应用 `properties` 中的新值（`image.tag = 5.1.3`）已被应用到 Chart。
+```shell
+$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
+"ghcr.io/stefanprodan/podinfo:5.1.3"
+```
+实际上，Helm 对 release 做了一次升级（revision 1 => 2）。
+```shell
+$ helm ls -A
+NAME              	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART        	APP VERSION
+myapp-demo-podinfo	default  	2       	2021-03-15 08:52:00.037690148 +0000 UTC	deployed	podinfo-5.1.4	5.1.4
+```
+
+检查 `scaler` 特性。
+```shell
+$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
+2
+```
+
+检查 `virtualgroup` 特性。
+```shell
+$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
+{
+  "app.cluster.virtual.group": "my-group2",
+  "app.kubernetes.io/name": "myapp-demo-podinfo"
+}
+```
+
+## 去除 Trait 特性
+
+让我们试试从应用中去除一个特性。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.3"
+      traits:
+        # - type: scaler
+        #   properties:
+        #     replicas: 2
+        - type: virtualgroup
+          properties:
+            group: "my-group2"
+            type: "cluster"
+```
+
+更新应用实例，并检查 `manualscalertrait` 已被删除。
+```shell
+$ kubectl get manualscalertrait
+No resources found
+```
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer-platform-eng.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer-platform-eng.md
new file mode 100644
index 00000000..0596b911
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer-platform-eng.md
@@ -0,0 +1,3 @@
+---
+title: 平台环境初始化
+---
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer/basic-initializer.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer/basic-initializer.md
new file mode 100644
index 00000000..3f0f1c35
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/initializer/basic-initializer.md
@@ -0,0 +1,197 @@
+---
+title: 自定义环境初始化
+---
+
+本章节介绍环境的概念，以及如何使用环境初始化（Initializer）初始化一个环境。
+
+## 什么是环境？
+
+一个应用开发团队通常需要初始化一些共享环境供用户部署他们的应用部署计划（Application）。环境是一个逻辑概念，它表示一组应用部署计划依赖的公共资源。
+例如，一个团队想要初始化 2 个环境：一个开发环境用于开发和测试，一个生产环境用于实际应用部署并提供对外服务。
+管理员可以针对环境所代表的实际含义配置相关的初始化方式，创建不同的资源。
+
+环境初始化背后也是用 OAM 模型的方式来执行的，所以环境初始化控制器的功能非常灵活，几乎可以满足任何初始化的需求，同时也是可插拔的。通常而言，可以初始化的资源类型包括但不限于下列类型：
+
+1. 
一个或多个 Kubernetes 集群,不同的环境可能需要不同规模和不同版本的 Kubernetes 集群。同时环境初始化还可以将多个 Kubernetes 集群注册到一个中央集群进行统一的多集群管控。 + +2. 任意类型的 Kubernetes 自定义资源(CRD)和系统插件,一个环境会拥有很多种不同的自定义资源来提供系统能力,比如不同的工作负载、不同的运维管理功能等等。初始化环境可以包含环境所依赖的一系列功能的初始化安装,比如各类中间件工作负载、流量管理、日志监控等各类运维系统。 + +3. 各类共享资源和服务,一个微服务在不同环境中测试验证时,除了自身所开发的组件以外,往往还会包含一系列其他团队维护的组件和一些公共资源。环境初始化功能可以将其他组件和公共资源统一部署,在无需使用时销毁。这些共享资源可以是一个微服务组件、云数据库、缓存、负载均衡、API网关等等。 + +4. 各类管理策略和流程,一个环境可能会配备不同的全局策略和流程,举例来说,环境策略可能会包括混沌测试、安全扫描、错误配置检测、SLO指标等等;而流程则可以是初始化一个数据库表、注册一个自动发现配置等。 + +## 环境初始化(Initializer) + +KubeVela 提供了环境初始化(Initializer)允许你自定义组合不同的资源来初始化环境。环境初始化利用了应用部署计划的能力来创建一个环境所需的资源, +你甚至可以利用应用部署计划中的 “应用的执行策略(Policy)" 和 “部署工作流(Workflow)” 来流程化、配置化地创建环境。需要注意的是,多个环境初始化 +之间可能会存在依赖关系,一个环境初始化会依赖其他环境初始化提供的能力。 + +一个环境初始化的整体结构如下: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Initializer +metadata: + name: +spec: + # 我们利用 Application 来部署一个环境需要的资源 + appTemplate: + spec: + components: + - name: <环境组件名称> + type: <环境组件类型> + properties: + <环境组件参数> + policies: + - name: <应用策略名称> + type: <应用策略类型> + properties: + <应用策略参数> + workflow: + - name: <工作流节点名称> + type: <工作流节点类型> + # dependsOn 表示依赖的 Initializer + dependsOn: + - ref: + apiVersion: core.oam.dev/v1beta1 + kind: Initializer + name: <依赖的 Initializer 的名称> + namespace: <依赖的 Initializer 所在的命名空间> +``` + +环境初始化定义的核心在 `.spec` 下面的两部分,一部分是应用部署计划的模板,另一部分是环境初始化的依赖。 + +- 应用部署计划模板(对应`.spec.appTemplate`字段),环境初始化利用应用部署计划来创建环境需要的资源, 你可以按照编写一个应用部署计划的模式填写该字段。 + +- 环境初始化依赖(对应`.spec.dependsOn`字段),一个环境初始化 A 可能会依赖其他环境初始化的能力,只有当依赖的环境初始化正常运行在环境中,才会创建环境初始化 A 包含的资源。 + +### 环境初始化依赖 + +不同环境初始化存在依赖关系,可以将不同环境初始化的公共资源分离出一个单独的环境初始化作为依赖,这样可以形成可以被复用的初始化模块。 +例如,测试环境和开发环境都依赖了一些相同的控制器,可以将这些控制器提取出来作为单独的环境初始化,在开发环境和测试环境中都指定依赖该环境初始化。 + +## 如何使用 + +### 利用 Helm 组件初始化环境 + +```shell +vela addon enable fluxcd +``` + +我们以环境初始化 kruise 为例: + +```shell +cat < 在继续之前,请确保你已了解 [Definition Objects](definition-and-templates) 和 [Defining Traits with CUE](./traits/customize-trait) 的概念。 + + +在下面的教程中,你将学习将 [KEDA](https://keda.sh/) 作为新的自动伸缩 trait 添加到基于 KubeVela 的平台中。 + +> KEDA 是基于 Kubernetes 事件驱动的自动伸缩工具。使用 KEDA,你可以根据资源指标或需要处理的事件数来驱动任何容器的伸缩。 + +## 步骤 1: 安装 KEDA controller + +[安装 KEDA controller](https://keda.sh/docs/2.2/deploy/) 到 K8s 中。 + +## 步骤 2: 创建 Trait Definition + +要在 KubeVela 中将 KEDA 注册为一项新功能(即 trait),唯一需要做的就是为其创建一个 `TraitDefinition` 对象。 + +完整的示例可以在 [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda-scaler.yaml) 中找到。 +下面列出了几个要点。 + +### 1. 描述 Trait + +```yaml +... +name: keda-scaler +annotations: + definition.oam.dev/description: "keda supports multiple event to elastically scale applications, this scaler only applies to deployment as example" +... +``` + +我们使用标签 `definition.oam.dev/description` 为该 trait 添加一行描述。它将显示在帮助命令中,比如 `$ vela traits`。 + +### 2. 注册 API 资源 + +```yaml +... +spec: + definitionRef: + name: scaledobjects.keda.sh +... +``` + +这就是将 KEDA `ScaledObject` 的 API 资源声明和注册为 trait 的方式。 + +### 3. 定义 `appliesToWorkloads` + +trait 可以附加到指定或全部的工作负载类型(`"*"` 表示你的 trait 可以与任何工作负载类型一起使用)。 + +对于 KEAD,我们仅允许用户将其附加到 Kubernetes 工作负载类型。 因此,我们声明如下: + +```yaml +... +spec: + ... + appliesToWorkloads: + - "deployments.apps" # claim KEDA based autoscaling trait can only attach to Kubernetes Deployment workload type. +... +``` + +### 4. 定义 Schematic + +在这一步中,我们将定义基于 KEDA 自动伸缩 trait 的 schematic,也就是说,我们将使用简化的原语为 KEDA `ScaledObject` 创建抽象,因此平台的最终用户根本不需要知道什么是 KEDA 。 + + +```yaml +... 
+schematic:
+  cue:
+    template: |-
+      outputs: kedaScaler: {
+        apiVersion: "keda.sh/v1alpha1"
+        kind:       "ScaledObject"
+        metadata: {
+          name: context.name
+        }
+        spec: {
+          scaleTargetRef: {
+            name: context.name
+          }
+          triggers: [{
+            type: parameter.triggerType
+            metadata: {
+              type:  "Utilization"
+              value: parameter.value
+            }
+          }]
+        }
+      }
+      parameter: {
+        // +usage=Types of triggering application elastic scaling, Optional: cpu, memory
+        triggerType: string
+        // +usage=Value to trigger scaling actions, represented as a percentage of the requested value of the resource for the pods. like: "60"(60%)
+        value: string
+      }
+```
+
+这是一个基于 CUE 的模板，仅开放 `triggerType` 和 `value` 作为该 trait 的属性供用户设置。
+
+> 请查看 [Defining Trait with CUE](./traits/customize-trait) 部分，以获取有关 CUE 模板的更多详细信息。
+
+## 步骤 3: 向 KubeVela 注册新的 Trait
+
+定义文件准备就绪后，你只需将其部署到 Kubernetes 中。
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/keda-scaler.yaml
+```
+
+用户就可以立即在 `Application` 中使用这个新 trait 了。
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/component.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/component.md
new file mode 100644
index 00000000..874b6025
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/component.md
@@ -0,0 +1,91 @@
+---
+title: 怎么用
+---
+
+在本节中，将介绍如何通过 `ComponentDefinition` 使用原生 K8s 对象（raw K8s Object）来声明应用程序组件。
+
+> 在阅读本部分之前，请确保你已经了解了[定义和模板概念](../definition-and-templates)。
+
+## 声明 `ComponentDefinition`
+
+这是一个使用原生 K8s 对象模板的示例 `ComponentDefinition`，它提供了对工作负载类型的抽象：
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: kube-worker
+  namespace: default
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  schematic:
+    kube:
+      template:
+        apiVersion: apps/v1
+        kind: Deployment
+        spec:
+          selector:
+            matchLabels:
+              app: nginx
+          template:
+            metadata:
+              labels:
+                app: nginx
+            spec:
+              containers:
+                - name: nginx
+                  ports:
+                    - containerPort: 80
+      parameters:
+        - name: image
+          required: true
+          type: string
+          fieldPaths:
+            - "spec.template.spec.containers[0].image"
+```
+
+详细地说，`.spec.schematic.kube` 包含工作负载资源的模板和可配置的参数。
+- `.spec.schematic.kube.template` 是 YAML 格式的原生模板。
+- `.spec.schematic.kube.parameters` 包含一组可配置的参数。`name`、`type` 和 `fieldPaths` 是必填字段，`description` 和 `required` 是可选字段。
+  - 参数 `name` 在 `ComponentDefinition` 中必须是唯一的。
+  - `type` 表示设置到字段的值的数据类型。这是一个必填字段，它将帮助 KubeVela 自动为参数生成 OpenAPI JSON schema。在原生模板中，只允许使用基本数据类型，包括 `string`、`number` 和 `boolean`，而不允许使用 `array` 和 `object`。
+  - 参数中的 `fieldPaths` 指定模板中的一组字段，这些字段将被该参数的值覆盖。字段被指定为没有前导点的 JSON 字段路径，例如 `spec.replicas`、`spec.containers[0].image`。
+
+## 声明一个 `Application`
+
+这是一个示例 `Application`。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: mycomp
+      type: kube-worker
+      properties:
+        image: nginx:1.14.0
+```
+
+由于参数只支持基本数据类型，`properties` 中的值应该是简单的键值对，形如 `<参数名>: <值>`。
+
+部署该 `Application` 并验证正在运行的工作负载实例。
+
+```shell
+$ kubectl get deploy
+NAME     READY   UP-TO-DATE   AVAILABLE   AGE
+mycomp   1/1     1            1           66m
+```
+并检查参数是否生效。
+```shell
+$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
+"nginx:1.14.0"
+```
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/trait.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/trait.md
new file mode 100644
index 00000000..7f69a385
--- /dev/null
+++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/kube/trait.md @@ -0,0 +1,111 @@ +--- +title: 添加 Traits +--- + +通过 Component,KubeVela 中的所有 traits 都可以兼容原生的 K8s 对象模板。 + +在这个例子中,我们会添加两个 traits 到 component 中。分别是:[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) 和 [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml) + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: myapp + namespace: default +spec: + components: + - name: mycomp + type: kube-worker + properties: + image: nginx:1.14.0 + traits: + - type: scaler + properties: + replicas: 2 + - type: virtualgroup + properties: + group: "my-group1" + type: "cluster" +``` + +## 验证 + +部署应用,验证 traits 正常运行 + +检查 `scaler` trait。 + +```shell +$ kubectl get manualscalertrait +NAME AGE +demo-podinfo-scaler-3x1sfcd34 2m +``` +```shell +$ kubectl get deployment mycomp -o json | jq .spec.replicas +2 +``` + +检查 `virtualgroup` trait。 + +```shell +$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels +{ + "app.cluster.virtual.group": "my-group1", + "app.kubernetes.io/name": "myapp" +} +``` + +## 更新应用 + +在应用部署完后(同时 workloads/trait 成功地创建),你可以执行更新应用的操作,并且更新的内容会被应用到 workload 上。 + +下面来演示修改上面部署的应用的几个配置 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: myapp + namespace: default +spec: + components: + - name: mycomp + type: kube-worker + properties: + image: nginx:1.14.1 # 1.14.0 => 1.14.1 + traits: + - type: scaler + properties: + replicas: 4 # 2 => 4 + - type: virtualgroup + properties: + group: "my-group2" # my-group1 => my-group2 + type: "cluster" +``` + +应用上面的配置,几秒后检查配置。 + +> 更新配置后,workload 实例的名称会被修改成 `mycomp-v2` + +检查新的属性值 + +```shell +$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image' +"nginx:1.14.1" +``` + +检查 `scaler` trait。 + +```shell +$ kubectl get deployment mycomp -o json | jq .spec.replicas +4 +``` + +检查 `virtualgroup` trait + +```shell +$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels +{ + "app.cluster.virtual.group": "my-group2", + "app.kubernetes.io/name": "myapp" +} +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/oam-model.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/oam-model.md new file mode 100644 index 00000000..86a2360f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/oam-model.md @@ -0,0 +1,95 @@ +--- +title: 模型简介 +--- + +KubeVela 背后的应用交付模型是 OAM(Open Application Model),即:开放应用模型。开放应用模型允许用户把一个现代微服务应用部署所需的所有组件和各项运维动作,描述为一个统一的、与基础设施无关的“部署计划”(这个过程称作“建模”),进而实现在混合环境中进行标准化和高效率的应用交付。具体来说: + +* 用户通过一个叫做应用部署计划(Application)的对象来声明一个微服务应用的完整交付流程,这其中包含了待交付组件、关联的运维动作、交付流水线等内容。 +* 所有的待交付组件、运维动作和流水线中的每一个步骤,都遵循 OAM 规范设计为独立的可插拔模块,允许用户按照自己的需求进行组合或者定制。 +* OAM 模型也会负责规范各个模块之间的协作接口。 + +![oam-model](../../resources/oam-model.jpg) + +## 应用部署计划(Application) + +**Application** 对象是用户唯一需要了解的 API,它表达了一个微服务应用的部署计划。 + +遵循 OAM 规范,一个应用部署计划(Application)由“待部署组件(Component)”、“运维动作(Trait)”、“应用的执行策略(Policy)”,以及“部署工作流(Workflow)”这四部分概念组成。 + +而无论待交付的组件是 Helm chart 还是云数据库、目标基础设施是 Kubernetes 集群还是云平台,KubeVela 都通过 Application 这样一个统一的、上层的交付描述文件来同用户交互,不会泄露任何复杂的底层基础设施细节,真正做到让用户完全专注于应用研发和交付本身。 + +具体而言,一个应用部署计划的整体结构如下所示: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: <应用名称> +spec: + components: + - name: <组件名称1> + type: <组件类型1> + properties: 
+ <组件参数> + traits: + - type: <运维特征类型1> + properties: + <运维特征类型> + - type: <运维特征类型2> + properties: + <运维特征类型> + - name: <组件名称2> + type: <组件类型2> + properties: + <组件参数> + policies: + - name: <应用策略名称> + type: <应用策略类型> + properties: + <应用策略参数> + workflow: + - name: <工作流节点名称> + type: <工作流节点类型> + properties: + <工作流节点参数> +``` + +在实际使用时,用户通过上述 Application 对象来引用预置的组件、运维特征、应用策略、以及工作流节点模块,填写这些模块暴露的用户参数即可完成一次对应用交付的建模。 + +> 注意:上诉可插拔模块在 OAM 中称为 X-Definitions,Application 对象负责引用 X-Definitions 并对用户输入进行校验,而各模块具体的可填写参数则是约束在相应的 X-Definition 文件当中的。具体请参考: [模块定义(X-Definition)](./x-definition) 章节。 + +## 组件(Component) + +组件(Component)是构成微服务应用的基本单元,比如一个 [Bookinfo](https://istio.io/latest/docs/examples/bookinfo/) 应用可以包含 Ratings、Reviews、Details 等多个组件。 + +KubeVela 内置即支持多种类型的组件交付,包括 Helm Chart、容器镜像、CUE 模块、Terraform 模块等等。同时,KubeVela 也允许平台管理员以 CUE 语言的形式增加其它任意类型的组件。 + +## 运维特征(Trait) + +运维特征(Trait)负责定义组件可以关联的通用运维行为,比如服务发布、访问、治理、弹性、可观测性、灰度发布等。在 OAM 规范中,一个组件可以绑定任意个运维特征。 + +与组件系统的可扩展性类似,KubeVela 允许平台管理员在系统中随时增加新的运维特征。 + +## 应用的执行策略(Policy) + +应用的执行策略(Policy)负责定义应用级别的部署特征,比如健康检查规则、安全组、防火墙、SLO、检验等模块。 + +应用策略的扩展性和功能与运维特征类似,可以灵活的扩展和对接所有云原生应用生命周期管理的能力。相对于运维特征而言,应用策略作用于一个应用的整体,而运维特征作用于应用中的某个组件。 + +## 部署执行工作流(Workflow) + +部署执行工作流(Workflow)定义了从部署开始到达到部署终态的一条完整路径,KubeVela 会按这个流水线执行工作流中定义的各个步骤来完成整个应用交付。除了常规的组件依赖编排、数据流传递等功能,KubeVela 的工作流还支持面向多环境/多集群的部署过程与策略描述。 + +> 注意:如果用户没有定义工作流,KubeVela 默认会自动按照组件和运维特征数组的顺序进行部署,并把 KubeVela 所在的当前集群作为目标集群。 + +KubeVela 当前内置的工作流步骤节点包括了创建资源、条件判断、数据输入输出等。同组件等系统类似,KubeVela 所支持的工作流节点也允许平台管理员自行定义和扩展。 + +## 避免配置漂移 + +在具体实现上,KubeVela 采用了脱胎于 Google Borg 系统的 [CUE 配置语言](https://cuelang.org/) 作为模型层实现,从而以 IaC (Infrastructure-as-Code) 的方式实现了 OAM 各个模块之间的高效协作和灵活扩展。但另一方面,常规的 IaC 技术往往会引入“配置漂移”(Infrastructure/Configuration Drift)的问题,即:当用户声明的应用部署配置和生产环境实际运行的实例状态发生不一致时,IaC 就无能为力了。 + +所以 KubeVela 在采用 IaC 技术实现的同时,还同时通过 [Kubernetes 控制循环](https://kubernetes.io/docs/concepts/architecture/controller/) 来管控整个 IaC 模型的渲染和执行,从而以完全自动化的方式保证了应用配置和终态的一致性,同时在模块定义、封装、扩展过程中保留了完整的 IaC 的使用体验。简介明了的 CUE 模板语法是平台管理员自行扩展 KubeVela 时唯一需要学习的一项技术。 + +## 下一步 + +- [X-Definition](x-definition) 对接标准化模型:在这些概念的背后,平台管理员可以利用 OAM 的扩展功能,通过自定义的方式对接自己的基础设施能力到开发模型中,以统一的方式暴露用户功能。而这些扩展的对接方式,就是[X-Definition](x-definition)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/x-definition.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/x-definition.md new file mode 100644 index 00000000..aa02c4f0 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/oam/x-definition.md @@ -0,0 +1,465 @@ +--- +title: 模块定义(X-Definition) +--- + +最终用户使用的 OAM 模型 [应用部署计划 Application][1] 中,有很多声明“类型的字段”,如组件类型、运维特征类型、应用策略类型、工作流节点类型等,这些类型实际上就是 OAM 模型的模块定义(X-Definition)。 + +当前 OAM 模型支持的模块定义(X-Definition)包括组件定义(ComponentDefinition),运维特征定义(TraitDefinition)、应用策略定义(PolicyDefinition),工作流节点定义(WorkflowStepDefinition)等,随着系统演进,OAM 模型未来可能会根据场景需要进一步增加新的模块定义。 + + +## 组件定义(ComponentDefinition) + +组件定义(ComponentDefinition)的设计目标是允许平台管理员将任何类型的可部署制品封装成待交付的“组件”。只要定义好之后,这种类型的组件就可以被用户在部署计划(Application)中引用、实例化并交付出去。 + +> 所以你可以很容易的联想到,我们在 Application 的 `components[*].type` 中指定的类型名称,就是就是平台中的组件定义。无论是 KubeVela 内置的还是平台管理员扩展的,都是平等的对象。 + +常见的组件类型包括 Helm Chart 、Kustomize 目录、一组 Kubernetes YAML文件、容器镜像、云资源 IaC 文件、或者 CUE 配置文件模块等等。组件供应方对应真实世界中的角色,一般就是第三方软件的分发者(ISV)、DevOps 团队的工程师、或者你自己建设的 CI 体系生成的代码包和镜像。 + +组件定义是可以被共享和复用的。比如一个`阿里云 RDS`组件类型,最终用户可以在不同的应用中选择同样的 `阿里云 RDS` 组件类型,实例化成不同规格、不同参数配置的云数据库实例。 + +### 组件定义是如何运作的? 
+ +让我们来看一下组件定义的框架格式: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: <组件定义名称> + annotations: + definition.oam.dev/description: <功能描述说明> +spec: + workload: # 工作负载描述 + definition: + apiVersion: + kind: + schematic: # 组件描述 + cue: # 通过 CUE 语言定义的组件详情 + template: +``` + +除了基本的“组件定义名称”和“功能描述说明”以外,组件定义的核心是 `.spec` 下面的两部分,一部分是工作负载类型;另一部分是组件描述。 + +* 工作负载类型(对应`.spec.workload`)字段为系统指明了一个组件背后对应的工作负载类型。它有两种定义方式,一种如例子中显示的,填写 `.spec.workload.definition` 的具体资源组和资源类型名称。另一种方法则是填写一个工作负载类型的名称。对于背后的工作负载类型不明确的组件定义,可以填写一个特殊的工作负载类型 `autodetects.core.oam.dev`,表示让 KubeVela 自动发现背后的工作负载。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +... +spec: + workload: # 工作负载类型 + type: <工作负载类型名称> + ... +``` + +工作负载类型名称对应了一个“工作负载定义”的引用,“工作负载定义”的原理会在下一个小节介绍。两种写法的区别在于第一种直接写工作负载的资源组和类型,如果背后没有工作负载定义,会自动生成。而指定“工作负载类型名称”则可以做校验,限制组件定义只能针对系统中已经存在的工作负载类型做抽象。 + +* 组件描述(对应`.spec.schematic`)定义了组件的详细信息。该部分目前支持 [`cue`][2] 和 [`kube`][3] 两种描述方式。 + +具体抽象方式和交付方式的编写可以查阅对应的文档,这里以一个完整的例子介绍组件定义的工作流程。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: helm + namespace: vela-system + annotations: + definition.oam.dev/description: "helm release is a group of K8s resources from either git repository or helm repo" +spec: + workload: + type: autodetects.core.oam.dev + schematic: + cue: + template: | + output: { + apiVersion: "source.toolkit.fluxcd.io/v1beta1" + metadata: { + name: context.name + } + if parameter.repoType == "git" { + kind: "GitRepository" + spec: { + url: parameter.repoUrl + ref: + branch: parameter.branch + interval: parameter.pullInterval + } + } + if parameter.repoType == "helm" { + kind: "HelmRepository" + spec: { + interval: parameter.pullInterval + url: parameter.repoUrl + if parameter.secretRef != _|_ { + secretRef: { + name: parameter.secretRef + } + } + } + } + } + + outputs: release: { + apiVersion: "helm.toolkit.fluxcd.io/v2beta1" + kind: "HelmRelease" + metadata: { + name: context.name + } + spec: { + interval: parameter.pullInterval + chart: { + spec: { + chart: parameter.chart + version: parameter.version + sourceRef: { + if parameter.repoType == "git" { + kind: "GitRepository" + } + if parameter.repoType == "helm" { + kind: "HelmRepository" + } + name: context.name + namespace: context.namespace + } + interval: parameter.pullInterval + } + } + if parameter.targetNamespace != _|_ { + targetNamespace: parameter.targetNamespace + } + if parameter.values != _|_ { + values: parameter.values + } + } + } + + parameter: { + repoType: "git" | "helm" + // +usage=The Git or Helm repository URL, accept HTTP/S or SSH address as git url. + repoUrl: string + // +usage=The interval at which to check for repository and relese updates. + pullInterval: *"5m" | string + // +usage=1.The relative path to helm chart for git source. 2. chart name for helm resource + chart: string + // +usage=Chart version + version?: string + // +usage=The Git reference to checkout and monitor for changes, defaults to master branch. + branch: *"master" | string + // +usage=The name of the secret containing authentication credentials for the Helm repository. + secretRef?: string + // +usage=The namespace for helm chart + targetNamespace?: string + // +usage=Chart version + value?: #nestedmap + } + + #nestedmap: { + ... 
+ } +``` + +如上所示,这个组件定义的名字叫 `helm`,一经注册,最终用户在 Application 的组件类型(`components[*].type`)字段就可以填写这个类型。 + +* 其中 `definition.oam.dev/description` 对应的字段就描述了这个组件类型的功能是启动一个 helm chart。 +* `.spec.workload` 字段,填写的是`autodetects.core.oam.dev`表示让用户自动发现这个 helm chart 组件背后的工作负载。 +* `.spec.schematic.cue.template`字段描述了基于 CUE 的抽象模板,输出包含2个对象,其中一个输出是根据 helm repo 托管的制品形态决定的,如果是用的helm官方的模式托管的则是生成 `HelmRepository` 对象,git模式推广的就是生成`GitRepository` 对象,另一个输出的对象是 `HelmRelease` 包含了这个 helm 的具体参数。 其中 `parameter` 列表则是暴露给用户填写的全部参数。 + + +## 运维特征定义(TraitDefinition) + +运维特征定义(TraitDefinition)为组件提供了一系列可被按需绑定的运维动作,这些运维动作通常都是由平台管理员提供的运维能力,它们为这个组件提供一系列的运维操作和策略,比如添加一个负载均衡策略、路由策略、或者执行弹性扩缩容、灰度发布策略等等。 + +绑定一个运维特征实际上就是在 Application 的 `components[*].traits` 数组中添加一个元素,其中 `.triats[*].type` 指定的类型名称,就是平台中的运维特征定义,而 `.triats[*].properties` 就是一系列运维特征的参数字段。与组件定义类似,这里的运维特征可以是 KubeVela 内置的,也可以是平台管理员后续扩展的,都是平等的对象。同样的,运维特征定义中也允许通过不同的抽象方式定义模板指定运维功能。 + +运维特征定义的格式和字段作用如下: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: <运维特征定义名称> + annotations: + definition.oam.dev/description: <功能描述说明> +spec: + definition: + apiVersion: <运维能力对应的 Kubernetes 资源组> + kind: <运维能力对应的 Kubernetes 资源类型> + workloadRefPath: <运维能力中对于工作负载对象的引用字段路径> + podDisruptive: <运维能力的参数更新会不会引起底层资源(pod)重启> + manageWorkload: <是否由该运维特征管理工作负载> + skipRevisionAffect: <该运维特征是否不计入版本变化的计算> + appliesToWorkloads: + - <运维特征能够适配的工作负载名称> + conflictsWith: + - <与该运维特征冲突的其他运维特征名称> + revisionEnabled: <运维能力是否感知组件版本的变化> + schematic: # 抽象方式 + cue: # 存在多种抽象方式 + template: +``` + +从上述运维特征的格式和功能中我们可以看到,运维特征定义提供了一系列运维能力和组件之间衔接的方式,使得相同功能的运维功能可以适配到不同的组件中。具体的字段功能如下所示: + +* 运维特征能够适配的工作负载名称列表(`.spec.appliesToWorkloads`),可缺省字段,字符串数组类型,申明这个运维特征可以用于的工作负载类型,填写的是工作负载的 CRD 名称,格式为 `.` +* 与该运维特征冲突的其他运维特征名称列表(`.spec.conflictsWith`),可缺省字段,字符串数组类型,申明这个运维特征与哪些运维特征冲突,填写的是运维特征名称的列表。 +* 特征描述(对应`.spec.schematic`)字段定义了具体的运维动作。目前主要通过 [`CUE`][4] 来实现,同时也包含一系列诸如[`patch-trait`][5]这样的高级用法。 +* 运维特征对应的 Kubernetes 资源定义(`.spec.definition`字段),可缺省字段,如果运维能力通过 Kubernetes 的 CRD 方式提供可以填写,其中 `apiVersion` 和 `kind` 分别描述了背后对应的 Kubernetes 资源组和资源类型。 +* 运维能力中对于工作负载对象的引用字段路径(`.spec.workloadRefPath`字段),可缺省字段,运维能力中如果涉及到工作负载的引用,可以填写这样一个路径地址(如操作弹性扩缩容的 [HPA][6]对象,就可以填写`spec.scaleTargetRef`),然后 KubeVela 会自动将工作负载的实例化引用注入到运维能力的实例对象中。 +* 运维能力的参数更新会不会引起底层资源(pod)重启(`.spec.podDisruptive`字段),可缺省字段,bool 类型,主要用于向用户标识 trait 的更新会不会导致底层资源(pod)的重启。这个标识通常可以提醒用户,改动这样一个trait可能应该再结合一个灰度发布,以免大量资源重启引入服务不可用和其他风险。 +* 是否由该运维特征管理工作负载(`.spec.manageWorkload`),可缺省字段,bool 类型,设置为 true 则标识这个运维特征会负责工作负载的创建、更新、以及资源回收,通常是灰度发布的运维特征会具备这个能力。 +* 该运维特征是否不计入版本变化的计算(`.spec.skipRevisionAffect`),可缺省字段,bool 类型,设置为 true 则标识这个运维特征的修改不计入版本的变化,即用户在应用中纯粹修改这个运维特征的字段不会触发应用本身的版本变化。 +* 运维能力是否感知组件版本的变化(`.spec.revisionEnabled`)字段,可缺省字段,bool 类型,设置为 true 表示组件会生成的资源后缀会带版本后缀。 + + +让我们来看一个实际的例子: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "configure k8s Horizontal Pod Autoscaler for Component which using Deployment as worklaod" + name: hpa +spec: + appliesToWorkloads: + - deployments.apps + workloadRefPath: spec.scaleTargetRef + schematic: + cue: + template: | + outputs: hpa: { + apiVersion: "autoscaling/v2beta2" + kind: "HorizontalPodAutoscaler" + spec: { + minReplicas: parameter.min + maxReplicas: parameter.max + metrics: [{ + type: "Resource" + resource: { + name: "cpu" + target: { + type: "Utilization" + averageUtilization: parameter.cpuUtil + } + } + }] + } + } + parameter: { + min: *1 | int + max: *10 | int + cpuUtil: *50 | int + } +``` + +如上所示,这个运维特征的名字叫 `hpa`,一经注册,最终用户在 Application 
的运维特征类型（`components[*].traits[*].type`）字段就可以填写这个类型，将这个运维特征作用在对应的组件上。
+
+从字段和参数中我们可以看到，这个运维特征的功能是为使用 Kubernetes Deployment 这类工作负载的组件提供弹性扩缩容的能力。这个运维特征只能用于 Kubernetes Deployment，同时 KubeVela 在实例化运维能力（即这个 HPA 对象）时会为 `spec.scaleTargetRef` 字段自动注入工作负载（Deployment 对象）的引用。最后 `.spec.schematic.cue.template` 字段描述了基于 CUE 的抽象模板，定义了输出是一个 Kubernetes 的 HorizontalPodAutoscaler 结构，模板中的 parameter 定义了用户可以使用的参数，包括最大最小副本数和 CPU 利用率。
+
+
+## 应用策略定义(PolicyDefinition)
+
+应用策略定义与运维特征定义类似，区别在于运维特征作用于单个组件，而应用策略作用于整个应用（多个组件）。它可以为应用提供全局的策略定义，常见的包括全局安全策略（如 RBAC 权限、审计、密钥管理）、应用洞察（如应用的 SLO 管理）等。
+
+其格式如下所示：
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: PolicyDefinition
+metadata:
+  name: <应用策略定义名称>
+  annotations:
+    definition.oam.dev/description: <功能描述说明>
+spec:
+  schematic:  # 策略描述
+    cue:
+      template: <CUE 格式模板>
+```
+
+目前应用策略定义仅包含 CUE 格式模板一个字段，包含了应用策略输出的对象以及对应的参数，其 CUE 编写的格式与组件定义的 CUE 模板格式一致。一个具体的例子如下所示：
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: PolicyDefinition
+metadata:
+  name: env-binding
+  annotations:
+    definition.oam.dev/description: "为应用提供差异化配置和环境调度策略"
+spec:
+  schematic:
+    cue:
+      template: |
+        output: {
+          apiVersion: "core.oam.dev/v1alpha1"
+          kind:       "EnvBinding"
+          spec: {
+            engine: parameter.engine
+            appTemplate: {
+              apiVersion: "core.oam.dev/v1beta1"
+              kind:       "Application"
+              metadata: {
+                name:      context.appName
+                namespace: context.namespace
+              }
+              spec: {
+                components: context.components
+              }
+            }
+            envs: parameter.envs
+          }
+        }
+
+        #Env: {
+          name: string
+          patch: components: [...{
+            name: string
+            type: string
+            properties: {...}
+          }]
+          placement: clusterSelector: {
+            labels?: [string]: string
+            name?: string
+          }
+        }
+
+        parameter: {
+          engine: *"ocm" | string
+          envs: [...#Env]
+        }
+```
+
+主要介绍其中的策略描述部分：基于 CUE 格式，输出一个 `EnvBinding` 对象，其参数是 engine 和 envs 两个，其中 envs 是一个结构体数组，具体的结构体类型和其中的参数由 `#Env` 指定，这里面的 CUE 语法与[组件定义的 CUE 语法][7]一致。
+
+
+## 工作流节点定义(WorkflowStepDefinition)
+
+工作流节点定义用于描述一系列在工作流中可以声明的步骤节点，如执行资源的部署、状态检查、数据输出、依赖输入、外部脚本调用等一系列能力均可以通过工作流节点定义来描述。
+
+工作流节点的字段相对简单，除了基本的名称和功能描述以外，其主要的功能都在 CUE 的模板中配置。KubeVela 在工作流模板中内置了大量的操作，具体可以通过工作流文档学习如何使用和编写。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkflowStepDefinition
+metadata:
+  name: <工作流节点定义名称>
+  annotations:
+    definition.oam.dev/description: <功能描述说明>
+spec:
+  schematic:  # 节点描述
+    cue:
+      template: <CUE 格式模板>
+```
+
+一个实际的工作流节点定义如下所示：
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkflowStepDefinition
+metadata:
+  name: apply-component
+spec:
+  schematic:
+    cue:
+      template: |
+        import ("vela/op")
+        parameter: {
+          component: string
+        }
+
+        // load component from application
+        component: op.#Load & {
+          component: parameter.component
+        }
+
+        // apply workload to kubernetes cluster
+        apply: op.#ApplyComponent & {
+          component: parameter.component
+        }
+
+        // wait until workload.status equal "Running"
+        wait: op.#ConditionalWait & {
+          continue: apply.status.phase == "Running"
+        }
+```
+
+例子中的工作流节点主要通过引入 `vela/op` 这个 KubeVela 内置包来完成一系列动作，包括数据载入、资源创建以及状态检查。整体上，这个节点完成了 KubeVela 组件的创建，并检查组件是否正常运行。
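+
+在应用的部署工作流中引用这个节点时，大致写法如下（示意，其中组件名 `express-server` 为假设值）：
+
+```yaml
+# Application 中的 workflow 片段（示意）
+workflow:
+  - name: deploy-server
+    type: apply-component
+    properties:
+      component: express-server
+```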
+
+
+## 工作负载定义(WorkloadDefinition)
+
+工作负载定义（WorkloadDefinition）是组件的一种系统级特征，它不是用户所关心的字段，而是作为元数据被 OAM 系统本身进行检查、验证和使用。
+
+其格式如下所示：
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkloadDefinition
+metadata:
+  name: <工作负载定义名称>
+spec:
+  definitionRef:
+    name: <工作负载定义对应的 Kubernetes 资源>
+    version: <工作负载定义对应的 Kubernetes 资源版本>
+  podSpecPath: <工作负载中 Pod 字段的路径>
+  revisionLabel: <工作负载的版本化标签>
+  childResourceKinds:
+    - apiVersion: <资源组>
+      kind: <资源类型>
+```
+
+* 其中 `.spec.definitionRef.name` 描述了 Kubernetes 资源的名称，其格式与 CRD（Custom Resource Definition）名称一致，是 `<resources>.<api-group>`。
+* `.spec.podSpecPath` 定义了包含 Kubernetes Pod 模板的工作负载中对应的 Pod 字段路径。
+* `.spec.revisionLabel` 定义了工作负载的版本化标签。
+* `.spec.childResourceKinds` 定义了这个资源会生成的子资源。
+
+除此之外，未来 OAM 模型中需要引入的其他针对 Kubernetes 资源类型特征的约定，也会作为字段加入到工作负载定义中。
+
+## 抽象背后的标准协议
+
+应用一经创建，KubeVela 就会为创建的资源打上一系列的标签，这些标签包含了应用的版本、名称、类型等。通过这些标准协议，应用的组件和运维能力之间就可以进行协作。具体的元数据列表如下所示：
+
+| 标签 | 描述 |
+| :-------------------------------------------------: | :--------------------------------------------: |
+| `workload.oam.dev/type=<组件定义名称>` | 对应了组件定义（`ComponentDefinition`）的名字 |
+| `trait.oam.dev/type=<运维特征定义名称>` | 对应了运维特征定义（`TraitDefinition`）的名字 |
+| `app.oam.dev/name=<应用实例名称>` | 应用实例化（Application）的名称 |
+| `app.oam.dev/component=<组件实例名称>` | 组件实例化的名称 |
+| `trait.oam.dev/resource=<运维特征中输出的资源名称>` | 运维特征中输出（outputs.\<资源名称\>）的资源名称 |
+| `app.oam.dev/appRevision=<应用实例的版本名称>` | 应用实例的版本名称 |
+
+这些元数据可以帮助应用部署以后的运维能力正常运作，比如灰度发布组件在应用更新时根据标签进行灰度发布等，同时这些标签也保证了 KubeVela 被其他系统集成时的正确性。
+
+## 模块定义运行时上下文
+
+模块定义中可以通过 `context` 变量获得一些运行时的上下文信息，具体的列表如下，其中作用范围表示该 Context 变量能够在哪些模块定义中使用：
+
+| Context 变量 | 功能描述 | 作用范围 |
+| :------------------------------: | :------------------------------------------------------------------------------: | :----------------------------------: |
+| `context.appRevision` | 应用当前的实例对应的版本名称 | 组件定义、运维特征定义 |
+| `context.appRevisionNum` | 应用当前的实例对应的版本数字 | 组件定义、运维特征定义 |
+| `context.appName` | 应用当前的实例对应的名称 | 组件定义、运维特征定义 |
+| `context.name` | 在组件定义和运维特征定义中表示的是组件名称，在应用策略定义中表示的是应用策略名称 | 组件定义、运维特征定义、应用策略定义 |
+| `context.namespace` | 应用当前实例所在的命名空间 | 组件定义、运维特征定义 |
+| `context.revision` | 当前组件实例的版本名称 | 组件定义、运维特征定义 |
+| `context.parameter` | 当前组件实例的参数，可以在运维特征中获得组件的参数 | 运维特征定义 |
+| `context.output` | 当前组件实例化后的对象结构体 | 组件定义、运维特征定义 |
+| `context.outputs.<资源名称>` | 当前组件和运维特征实例化以后的结构体 | 组件定义、运维特征定义 |
+
+同时，在工作流系统当中，由于 `context` 要作用于应用级，与上述用法有很大不同。我们单独对其进行介绍：
+
+| Context 变量 | 功能描述 | 作用范围 |
+| :------------------------------: | :------------------------------------------------------------------------------: | :----------------------------------: |
+| `context.name` | 应用当前实例对应的名称 | 工作流节点定义 |
+| `context.namespace` | 应用当前实例所在的命名空间 | 工作流节点定义 |
+| `context.labels` | 应用当前实例的标签 | 工作流节点定义 |
+| `context.annotations` | 应用当前实例的注释 | 工作流节点定义 |
+
+最后请注意，本节介绍的所有模块化定义概念，都只需要平台管理员在希望对 KubeVela 进行功能扩展时了解，最终用户对这些概念不需要有任何感知。
+
+[1]: ./oam-model
+[2]: ../cue/basic
+[3]: ../kube/component
+[4]: ../traits/customize-trait
+[5]: ../traits/advanced
+[6]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
+[7]: ../cue/basic
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md
new file mode 100644
index 00000000..cdccfa18
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md
@@ -0,0 +1,64 @@
+---
+title: 从定义中生成表单
+---
+
+对于任何通过[定义对象](./definition-and-templates)安装的 capability，KubeVela 会自动根据其参数列表生成 OpenAPI v3 JSON schema，并把它储存到一个和定义对象处于同一个 `namespace` 的 `ConfigMap` 中。
+
+> 默认的 KubeVela 系统 `namespace` 是 `vela-system`，内置的 capability 和 schema 位于此处。
+
+## 列出 Schema
+
+这些 `ConfigMap` 都带有一个通用的标签 `definition.oam.dev=schema`，所以你可以轻松地通过下述方法找到它们：
+
+```shell
+$ kubectl get configmap -n vela-system -l definition.oam.dev=schema
+NAME                DATA   AGE
+schema-ingress      1      19s
+schema-scaler       1      19s
+schema-task         1      19s
+schema-webservice   1      19s
+schema-worker       1      20s
+```
+
+`ConfigMap` 的命名格式为 `schema-<定义名称>`，数据键为 `openapi-v3-json-schema`。
+ +举个例子,我们可以使用以下命令来获取 `webservice` 的JSON schema。 + +```shell +$ kubectl get configmap schema-webservice -n vela-system -o yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: schema-webservice + namespace: vela-system +data: + openapi-v3-json-schema: '{"properties":{"cmd":{"description":"Commands to run in + the container","items":{"type":"string"},"title":"cmd","type":"array"},"cpu":{"description":"Number + of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)","title":"cpu","type":"string"},"env":{"description":"Define + arguments by using environment variables","items":{"properties":{"name":{"description":"Environment + variable name","title":"name","type":"string"},"value":{"description":"The value + of the environment variable","title":"value","type":"string"},"valueFrom":{"description":"Specifies + a source the value of this var should come from","properties":{"secretKeyRef":{"description":"Selects + a key of a secret in the pod''s namespace","properties":{"key":{"description":"The + key of the secret to select from. Must be a valid secret key","title":"key","type":"string"},"name":{"description":"The + name of the secret in the pod''s namespace to select from","title":"name","type":"string"}},"required":["name","key"],"title":"secretKeyRef","type":"object"}},"required":["secretKeyRef"],"title":"valueFrom","type":"object"}},"required":["name"],"type":"object"},"title":"env","type":"array"},"image":{"description":"Which + image would you like to use for your service","title":"image","type":"string"},"port":{"default":80,"description":"Which + port do you want customer traffic sent to","title":"port","type":"integer"}},"required":["image","port"],"type":"object"}' +``` + +具体来说,该 schema 是根据capability 定义中的 `parameter` 部分生成的: + +* 对于基于 CUE 的定义:`parameter` CUE 模板中的关键词。 +* 对于基于 Helm 的定义:`parameter` 是从在 Helm Chart 中的 `values.yaml` 生成的。 + +## 渲染表单 + +你可以通过 [form-render](https://github.com/alibaba/form-render) 或者 [React JSON Schema form](https://github.com/rjsf-team/react-jsonschema-form) 渲染上述 schema 到表单中并轻松地集成到你的仪表盘中。 + +以下是使用 `form-render` 渲染的表单: + +![](../resources/json-schema-render-example.jpg) + +# 下一步 + +根据设计,KubeVela 支持多种方法来定义 schematic。因此,我们将在接下来的文档来详细解释 `.schematic` 字段。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/overview.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/overview.md new file mode 100644 index 00000000..535b4c39 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/overview.md @@ -0,0 +1,70 @@ +--- +title: 概述 +--- + +本文档将解释什么是“Application”对象以及为什么需要它。 + +## 初衷 + +基于封装的抽象可能是最广泛使用的方法,可以使开发人员体验更轻松,并允许用户将整个应用程序资源作为一个单元交付。例如,今天许多工具将Kubernetes *Deployment* 和 *Service* 封装到一个 *Web Service* 模块中,然后通过简单地提供 *image=foo* 和 *ports=80* 等参数来实例化这个模块。这种模式可以在 cdk8s (例如 [`web-service.ts` ](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (例如 [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)),以及许多广泛使用的 Helm charts 中找到(例如 [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/))。 + +尽管在定义抽象方面具有效率和可扩展性,但这两种 DSL 工具(例如 cdk8s 、CUE 和 Helm 模板)主要用作客户端工具,几乎不能用作平台级构建块。这使得平台构建者要么不得不创建受限/不可扩展的抽象,要么重新发明 DSL/templating 已经做得很好的轮子。 + +KubeVela 允许平台团队使用 DSL/templating 创建以开发人员为中心的抽象,但使用经过实战测试的 [Kubernetes 控制循环](https://kubernetes.io/docs/concepts/architecture/controller/) 来维护它们。 + +## Application + +首先,KubeVela 引入了一个 
`Application` CRD 作为其主要抽象,可以捕获完整的应用程序部署。 为了对最新的微服务进行建模,每个 Application 都由具有附加 trait(操作行为)的多个 components 组成。 例如: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: application-sample +spec: + components: + - name: foo + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + "/": 8000 + - type: sidecar + properties: + name: "logging" + image: "fluentd" + - name: bar + type: aliyun-oss # cloud service + bucket: "my-bucket" +``` + +这个应用程序中 *component* 和 *trait* 规范的模式实际上是由另一组名为 *"definitions"* 的构建模块强制执行的,例如,`ComponentDefinition` 和 `TraitDefinition`。 + +`XxxDefinition` 资源旨在利用诸如 `CUE`、`Helm` 和 `Terraform modules` 等封装解决方案来模板化和参数化 Kubernetes 资源以及云服务。 这使用户能够通过简单地设置参数将模板化功能组装到 `Application` 中。 在上面的 `application-sample` 中,它模拟了一个 Kubernetes Deployment(component `foo`)来运行容器和一个阿里云 OSS 存储桶(component `bar`)。 + +这种抽象机制是 KubeVela 向最终用户提供 *PaaS-like* 体验(*即以应用程序为中心、更高级别的抽象、自助操作等*)的关键,其优势如下所示。 + +### 不再“杂耍”地管理 Kubernetes 对象 + +例如,作为平台团队,我们希望利用 Istio 作为服务网格层来控制某些 `Deployment` 实例的流量。但这在今天可能真的很难受,因为我们必须强制最终用户以有点“杂耍”的方式定义和管理一组 Kubernetes 资源。例如,在一个简单的金丝雀部署案例中,最终用户必须仔细管理一个主要的 *Deployment*、一个主要的 *Service*、一个 *root Service*、一个金丝雀 *Deployment*、一个金丝雀 *Service*,并且必须可能在金丝雀升级后重命名 *Deployment* 实例(这在生产中实际上是不可接受的,因为重命名会导致应用程序重新启动)。更糟糕的是,我们必须期望用户在这些对象上正确设置标签和选择器,因为它们是确保每个应用程序实例正确可访问的关键,也是我们 Istio 控制器可以依赖的唯一修订机制。 + +如果组件实例不是 *Deployment*,而是 *StatefulSet* 或自定义工作负载类型,则上述问题甚至可能会很痛苦。例如,通常在部署期间复制 *StatefulSet* 实例是没有意义的,这意味着用户必须以与 *Deployment* 完全不同的方法维护名称、修订、标签、选择器、应用程序实例。 + +#### 抽象背后的标准契约 + +KubeVela 旨在减轻手动管理版本化 Kubernetes 资源的负担。 简而言之,应用程序所需的所有 Kubernetes 资源现在都封装在一个抽象中,KubeVela 将通过经过实战测试的协调循环自动化而不是人工来维护实例名称、修订、标签和选择器。 同时,定义对象的存在让平台团队可以自定义抽象背后所有上述元数据的细节,甚至可以控制如何进行修订的行为。 + +因此,所有这些元数据现在都成为任何“day 2”操作控制器(例如 Istio 或 rollout)都可以依赖的标准合约。 这是确保我们的平台能够提供用户友好体验但对操作行为保持“透明”的关键。 + +### 无配置漂移 + +定义抽象的轻量级和灵活,当今任何现有的封装解决方案都可以在客户端工作,例如 DSL/IaC(基础设施即代码)工具和 Helm。 这种方式易于采用,对用户集群的入侵较少。 + +但是客户端抽象总是会导致一个称为*基础设施/配置漂移*的问题,即生成的组件实例与预期的配置不一致。 这可能是由不完整的覆盖范围、不完美的流程或紧急更改引起的。 + +因此,KubeVela 中的所有抽象都被设计为使用 [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) 进行维护,并利用 Kubernetes 控制平面来消除配置漂移的问题,并且仍然保持现有封装解决方案(例如 DSL/IaC 和 templating)的灵活性和速度。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/policy/custom-policy.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/policy/custom-policy.md new file mode 100644 index 00000000..827d01b4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/policy/custom-policy.md @@ -0,0 +1,5 @@ +--- +title: 自定义策略 +--- + +TBD \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md new file mode 100644 index 00000000..4921a743 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md @@ -0,0 +1,45 @@ +--- +title: 启动参数说明 +--- + +KubeVela 控制器的各项启动参数及其说明如下。 + +| 参数名 | 类型 | 默认值 | 描述 | +|:---------------------------:|:------:|:---------------------------------:|------------------------------------------------------------------------------| +| use-webhook | bool | false | 使用 Admission Webhook | +| webhook-cert-dir | string | /k8s-webhook-server/serving-certs | Admission Webhook 的密钥文件夹 | +| webhook-port | int 
| 9443 | Admission Webhook 的监听地址 | +| metrics-addr | string | :8080 | Prometheus 指标的监听地址 | +| enable-leader-election | bool | false | 为 Controller Manager 启用 Leader Election,确保至多有一个控制器处于工作状态 | +| leader-election-namespace | string | "" | Leader Election 的 ConfigMap 所在的命名空间 | +| log-file-path | string | "" | 日志文件路径 | +| log-file-max-size | int | 1024 | 日志文件的最大量,单位为 MB | +| log-debug | bool | false | 将日志级别设为调试,开发环境下使用 | +| application-revision-limit | int | 10 | 最大维护的应用历史版本数量,当应用版本数超过此数值时,较早的版本会被丢弃 | +| definition-revision-limit | int | 20 | 最大维护的模块定义历史版本数量 | +| autogen-workload-definition | bool | true | 自动为 组件定义 生成 工作负载定义 | +| health-addr | string | :9440 | 健康检查监听地址 | +| apply-once-only | string | false | 工作负载及特征在生成后不再变更,在特定需求环境下使用 | +| disable-caps | string | "" | 禁用内置的能力 | +| storage-driver | string | Local | 应用文件的存储驱动 | +| informer-re-sync-interval | time | 1h | 无变更情况下,控制器轮询维护资源的周期 | +| system-definition-namespace | string | vela-system | 系统级特征定义的命名空间 | +| concurrent-reconciles | int | 4 | 控制器处理请求的并发线程数 | +| kube-api-qps | int | 50 | 控制器访问 apiserver 的速率 | +| kube-api-burst | int | 100 | 控制器访问 apiserver 的短时最大速率 | +| oam-spec-var | string | v0.3 | OAM 标准的使用版本 | +| pprof-addr | string | "" | 使用 pprof 监测性能的监听地址,默认为空时不监测 | +| perf-enabled | bool | false | 启用控制器性能记录,可以配合监控组件监测当前控制器的性能瓶颈 | +| enable-cluster-gateway | bool | false | 启动多集群功能 | + +> 未列在表中的参数为旧版本参数,当前版本 v1.1 无需关心 + +### 重点参数介绍 + +- **informer-resync-interval**: 在应用配置未发生变化时,KubeVela 控制器主动维护应用的间隔时间。过短的时间会导致控制器频繁调谐不需要同步的应用。间隔性地维护确保应用及其组件的状态保持同步,不会因特殊情况造成的状态不一致持续过长时间 +- **concurrent-reconciles**: 用来控制并发处理请求的线程数,当控制器能够获得较多的 CPU 资源时,如果不相应的提高线程数会导致无法充分利用多核的性能 +- **kube-api-qps / kube-api-burst**: 用来控制 KubeVela 控制器访问 apiserver 的频率。当 KubeVela 控制器管理的应用较为复杂时 ( 包含较多的组件及资源 ),如果 KubeVela 控制器对 apiserver 的访问速率受限,则较难提高 KubeVela 控制器的并发量。然而过高的请求速率也有可能对 apiserver 造成较大的负担 +- **pprof-addr**: 开启该地址可以启用 pprof 进行控制器性能调试 +- **perf-enabled**: 启用时可以在日志中看到 KubeVela 控制器管理应用时各个阶段的时间开销,关闭可以简化日志记录 + +> [性能调优](./performance-finetuning)章节中包含了若干组不同场景下的推荐参数配置。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md new file mode 100644 index 00000000..24715753 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md @@ -0,0 +1,39 @@ +--- +title: 集群管理 +--- + +KubeVela 多集群功能中的集群管理是通过 Vela CLI 的一系列相关命令完成的。 + +### vela cluster list + +该命令可列出当前 KubeVela 正在管理的所有子集群。 +```bash +$ vela cluster list +CLUSTER TYPE ENDPOINT +cluster-prod tls https://47.88.4.97:6443 +cluster-staging tls https://47.88.7.230:6443 +``` + +### vela cluster join + +该命令可将已有的子集群通过 kubeconfig 文件加入到 KubeVela 中,并将其命名为 cluster-prod,供[多环境部署](../../end-user/policies/envbinding)使用。 + +```shell script +$ vela cluster join example-cluster.kubeconfig --name cluster-prod +``` + +### vela cluster detach + +该命令可用来将 KubeVela 正在管理的子集群移除。 + +```shell script +$ vela cluster detach cluster-prod +``` + +### vela cluster rename + +该命令可用来重命名 KubeVela 正在管理的子集群。 + +```shell script +$ vela cluster rename cluster-prod cluster-production +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/observability.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/observability.md new file mode 100644 index 00000000..93d1709e --- /dev/null +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/observability.md @@ -0,0 +1,149 @@ +--- +title: 可观测性 +--- + +可观测性插件(Observability addon)基于 metrics、logging、tracing 数据,可以为 KubeVela core 提供系统级别的监控,也可以为应用提供业务级别的监控。 + +下面详细介绍可观测能力,以及如何启用可观测性插件,并查看各种监控数据。 + +## 可观测能力介绍 + +KubeVela 可观测能力是通过 [Grafana](https://grafana.com/) 展示的,提供系统级别和应用级别的数据监控。 + +### 内置的指标类别一:KubeVela Core 系统级别可观测性 + +- KubeVela Core 资源使用情况监控 + +1)CPU、内存等使用量和使用率数据 + +![](../../resources/observability-system-level-summary-of-source-usages.png) + +2)CPU、内存随着时间变化(如过去三小时)的使用量和使用率、以及每秒网络带宽的图形化展示 + +![](../../resources/observability-system-level-summary-of-source-usages-chart.png) + +### 内置的指标类别二:KubeVela Core 日志监控 + +1)日志统计 + +可观测页面会显示KubeVela Core 日志总量,以及默认情况下,`error` 出现的数量、频率、出现的所有日志概览和详情。 + +![](../../resources/observability-system-level-logging-statistics.png) + +还会展示随着时间变化,`error` 日志出现的总量、频率等。 + +![](../../resources/observability-system-level-logging-statistics2.png) + +2)日志过滤 + +在最上方填写关键词,还可以过滤日志。 + +![](../../resources/observability-system-level-logging-search.png) + +## 安装插件 + +可观测性插件是通过 `vela addon` 命令安装的。因为本插件依赖了 Prometheus,Prometheus 依赖 StorageClass, +不同 Kubernetes 发行版,StorageClass 会有一定的差异,所以,在不同的 Kubernetes 发行版, 安装命令也有一些差异。 + +### 自建/常规集群 + +执行如下命令安装可观测性插件,KinD 等测试集群的安装步骤同理。 + +```shell +$ vela addon enable observability alertmanager-pvc-enabled=false server-pvc-enabled=false grafana-domain=example.com +``` + +### 云服务商提供的 Kubernetes 集群 + +#### 阿里云 ACK + +```shell +$ vela addon enable observability alertmanager-pvc-class=alicloud-disk-available alertmanager-pvc-size=20Gi server-pvc-class=alicloud-disk-available server-pvc-size=20Gi grafana-domain=grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com +``` + +其中,各个参数含义如下: + +- alertmanager-pvc-class + +Prometheus alert manager 需要的 pvc 的类型,也就是 StorageClass,在阿里云上,可选的 StorageClass 有: + +```shell +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +alicloud-disk-available alicloud/disk Delete Immediate true 6d +alicloud-disk-efficiency alicloud/disk Delete Immediate true 6d +alicloud-disk-essd alicloud/disk Delete Immediate true 6d +alicloud-disk-ssd alicloud/disk Delete Immediate true 6d +``` + +此处取值 `alicloud-disk-available`。 + +- alertmanager-pvc-size + +Prometheus alert manager 需要的 pvc 的大小,在阿里云上,最小的 PV 是 20GB,此处取值 20Gi。 + +- server-pvc-class + +Prometheus server 需要的 pvc 的类型,同 `alertmanager-pvc-class`。 + +- server-pvc-size + +Prometheus server 需要的 pvc 的大小,同 `alertmanager-pvc-size`。 + +- grafana-domain + +Grafana 的域名,可以使用你自定义的域名,也可以使用 ACK 提供的集群级别的泛域名,`*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`, +如本处取值 `grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`。 + +#### 其他云服务商提供的 Kubernetes 集群 + +请根据不同云服务商 Kubernetes 集群提供的 PVC 的名字和大小规格,以及域名规则,对应更改以下参数: + +- alertmanager-pvc-class +- alertmanager-pvc-size +- server-pvc-class +- server-pvc-size +- grafana-domain + +## 查看监控数据 + +### 获取访问监控控制台的账号 + +```shell +$ kubectl get secret grafana -o jsonpath="{.data.admin-password}" -n observability | base64 --decode ; echo +<密码显示在这里> +``` + +使用 `admin` 和上面的密码登陆下面的监控控制台。 + +### 获取监控控制台访问路径 + +- 自建/常规集群 + +```shell +$ kubectl get svc grafana -n vela-system +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +grafana ClusterIP 192.168.42.243 80/TCP 177m + +$ sudo k port-forward service/grafana -n vela-system 80:80 +Password: +Forwarding from 127.0.0.1:80 -> 3000 +Forwarding from [::1]:80 -> 3000 +``` + +通过浏览器访问 
[http://127.0.0.1/dashboards](http://127.0.0.1/dashboards),点击相应的 Dashboard ,查看前面介绍的各种监控数据。 + +![](../../resources/observability-system-level-dashboards.png) + +- 云服务商提供的 Kubernetes 集群 + +直接访问上面设置的 Grafana 域名,查看前面介绍的各种监控数据。 + +### 查看各种类别的监控数据 + +在 Grafana 主页上,点击如图所示的控制台,可以访问相应类别的监控数据。 + +KubeVela Core System Monitoring Dashboard 是 KubeVela Core 系统级别监控控制台。 +KubeVela Core Logging Dashboard 是 KubeVela Core 日志监控控制台。 + +![](../../resources/observability-dashboards.png) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md new file mode 100644 index 00000000..76f0e959 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md @@ -0,0 +1,28 @@ +--- +title: 性能调优 +--- + +### 推荐配置 + +在集群规模变大,应用数量变多时,可能会因为 KubeVela 的控制器性能跟不上需求导致 KubeVela 系统内的应用运维出现问题,这可能是由于你的 KubeVela 控制器参数不当所致。 + +在 KubeVela 的性能测试中,KubeVela 团队验证了在各种不同规模的场景下 KubeVela 控制器的运维能力。并给出了以下的推荐配置: + +| 规模 | 集群节点数 | 应用数量 | Pod 数量 | concurrent-reconciles | kube-api-qps | kube-api-burst | CPU | Memory | +| :---: | ---------: | -------: | -------: | --------------------: | :----------: | -------------: | ---: | -----: | +| 小 | < 200 | < 3,000 | < 18,000 | 2 | 300 | 500 | 0.5 | 1Gi | +| 中 | < 500 | < 5,000 | < 30,000 | 4 | 500 | 800 | 1 | 2Gi | +| 大 | < 1,000 | < 12,000 | < 72,000 | 4 | 800 | 1,000 | 2 | 4Gi | + +> 上述配置中,单一应用的规模应在 2~3 个组件,5~6 个资源左右。如果你的场景下,应用普遍较大,如单个应用需要对应 20 个资源,那么你可以按照比例相应提高各项配置。 + +### 调优方法 + +性能瓶颈出现时一般可能会有以下一些不同的表现: + +1. 新创建的应用能够获取到,其直接关联资源获取得到,但间接关联资源获取不到。如应用内包含的 webservice 对应的 Deployment 成功创建,但 Pod 迟迟无法创建。这种情况一般和相应资源的控制器有关,比如 kube-controller-manager。可以排查相应控制器是否存在性能瓶颈或问题。 +2. 新创建的应用能够获取到,关联资源无法获取,且应用渲染本身没有问题 ( 在应用的信息内没有出现渲染错误 )。检查 apiserver 内是否存在大量排队请求,这种场景有可能是由于分发的下属资源,如 Deployment 请求到了 apiserver,但由于先前的资源在 apiserver 处排队导致新请求无法及时处理。 +3. 新创建的应用能够获取到,但是没有状态信息。这种情况如果应用本身的内容格式没有问题,有可能是由于 KubeVela 控制器出现瓶颈,如访问 apiserver 被限流,导致吞吐量跟不上请求的速率。可以通过提高 **kube-api-qps / kube-api-burst** 来解决。如果限流不存在问题,可以检查控制器所用的 CPU 资源是否已经用满;如果 CPU 过载。则可以通过提高控制器的 CPU 资源来解决。如果 CPU 资源未使用满,但始终保持在同一负载上,有可能是线程数小于所给 CPU 数量。 +4. KubeVela 控制器本身由于内存不足频繁崩溃,可以通过给控制器提高内存量解决。 + +> 更多细节可以参考 [KubeVela 性能测试报告](/blog/2021/08/30/kubevela-performance-test) \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/advanced.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/advanced.md new file mode 100644 index 00000000..e719d122 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/advanced.md @@ -0,0 +1,245 @@ +--- +title: 更多用法 +--- + +CUE 作为一种配置语言,可以让你在定义对象的时候使用更多进阶用法。 + +## 一次渲染多个资源 + +你可以在 `outputs` 里定义 For 循环。 + +> 注意在 For 循环里的 `parameter` 字段必须是 map 类型。 + +看看如下这个例子,在一个 `TraitDefinition` 对象里渲染多个 `Service`: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: expose +spec: + schematic: + cue: + template: | + parameter: { + http: [string]: int + } + + outputs: { + for k, v in parameter.http { + "\(k)": { + apiVersion: "v1" + kind: "Service" + spec: { + selector: + app: context.name + ports: [{ + port: v + targetPort: v + }] + } + } + } + } +``` + +这个运维特征可以这样使用: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + ... 
+ traits: + - type: expose + properties: + http: + myservice1: 8080 + myservice2: 8081 +``` + +## 自定义运维特征里执行 HTTP Request + +`TraitDefinition` 对象可以发送 HTTP 请求并获取应答,让你可以通过关键字 `processing` 来渲染资源。 + +你可以在 `processing.http` 里定义 HTTP 请求的 `method`, `url`, `body`, `header` 和 `trailer`,然后返回的数据将被存储在 `processing.output` 中。 + +> 请确保目标 HTTP 服务器返回的数据是 JSON 格式 + +接着,你就可以通过 `patch` 或者 `output/outputs` 里的 `processing.output` 来引用返回数据了。 + +下面是一个示例: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: auth-service +spec: + schematic: + cue: + template: | + parameter: { + serviceURL: string + } + + processing: { + output: { + token?: string + } + // The target server will return a JSON data with `token` as key. + http: { + method: *"GET" | string + url: parameter.serviceURL + request: { + body?: bytes + header: {} + trailer: {} + } + } + } + + patch: { + data: token: processing.output.token + } +``` + +在上面这个例子中,`TraitDefinition` 对象发送请求来获取 `token` 的数据,然后将这些数据补丁给组件实例。 + +## 数据传递 + +`TraitDefinition` 对象可以读取特定 `ComponentDefinition` 对象生成的 API 资源(渲染自 `output` 和 `outputs`)。 + +> KubeVela 保证了 `ComponentDefinition` 一定会在 `TraitDefinition` 之前渲染 + +具体来说,`context.output` 字段包含了所有渲染后的工作负载 API 资源,然后 `context.outputs.` 则包含渲染后的其它类型 API 资源。 + +下面是一个数据传递的例子: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: worker +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + spec: { + containers: [{ + name: context.name + image: parameter.image + ports: [{containerPort: parameter.port}] + envFrom: [{ + configMapRef: name: context.name + "game-config" + }] + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + }] + } + } + } + } + + outputs: gameconfig: { + apiVersion: "v1" + kind: "ConfigMap" + metadata: { + name: context.name + "game-config" + } + data: { + enemies: parameter.enemies + lives: parameter.lives + } + } + + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + // +usage=Commands to run in the container + cmd?: [...string] + lives: string + enemies: string + port: int + } + + +--- +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: ingress +spec: + schematic: + cue: + template: | + parameter: { + domain: string + path: string + exposePort: int + } + // trait template can have multiple outputs in one trait + outputs: service: { + apiVersion: "v1" + kind: "Service" + spec: { + selector: + app: context.name + ports: [{ + port: parameter.exposePort + targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort + }] + } + } + outputs: ingress: { + apiVersion: "networking.k8s.io/v1beta1" + kind: "Ingress" + metadata: + name: context.name + labels: config: context.outputs.gameconfig.data.enemies + spec: { + rules: [{ + host: parameter.domain + http: { + paths: [{ + path: parameter.path + backend: { + serviceName: context.name + servicePort: parameter.exposePort + } + }] + } + }] + } + } +``` + +在渲染 `worker` `ComponentDefinition` 时,具体发生了: +1. 渲染的 `Deployment` 资源放在 `context.output` 中。 +2. 
其它类型资源则放进 `context.outputs.<资源名称>` 中，其中 `<资源名称>` 特指该资源在模板 `outputs` 中的唯一名字。
+
+因而，`TraitDefinition` 对象可以从 `context` 里读取渲染后的 API 资源（比如 `context.outputs.gameconfig.data.enemies` 这个字段）。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/customize-trait.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/customize-trait.md
new file mode 100644
index 00000000..882f9583
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/customize-trait.md
@@ -0,0 +1,222 @@
+---
+title: 自定义运维特征
+---
+
+
+本节介绍如何自定义运维特征，为用户的组件增添任何需要的运维能力。
+
+### 开始之前
+
+请先阅读和理解[运维特征定义](../oam/x-definition.md#运维特征定义（traitdefinition）)。
+
+### 如何使用
+
+我们首先展示一个简单的示例：直接引用已有的 Kubernetes API 资源 Ingress。编写一个如下的 YAML 文件：
+
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  name: customize-ingress
+spec:
+  definitionRef:
+    name: ingresses.networking.k8s.io
+```
+
+为了和我们已经内置的 ingress 运维特征有所区分，我们将其命名为 `customize-ingress`。然后我们部署到运行时集群：
+
+```
+$ kubectl apply -f customize-ingress.yaml
+traitdefinition.core.oam.dev/customize-ingress created
+```
+
+创建成功。这时候，你的用户可以通过 `vela traits` 查看到这个能力：
+
+```
+$ vela traits
+NAME               NAMESPACE    APPLIES-TO         CONFLICTS-WITH  POD-DISRUPTIVE  DESCRIPTION
+customize-ingress  default                                         false           description not defined
+ingress            default                                         false           description not defined
+annotations        vela-system  deployments.apps                   true            Add annotations for your Workload.
+configmap          vela-system  deployments.apps                   true            Create/Attach configmaps to workloads.
+cpuscaler          vela-system  deployments.apps                   false           Automatically scale the component based on CPU usage.
+expose             vela-system  deployments.apps                   false           Expose port to enable web traffic for your component.
+hostalias          vela-system  deployment.apps                    false           Add host aliases to workloads.
+labels             vela-system  deployments.apps                   true            Add labels for your Workload.
+lifecycle          vela-system  deployments.apps                   true            Add lifecycle hooks to workloads.
+resource           vela-system  deployments.apps                   true            Add resource requests and limits to workloads.
+rollout            vela-system                                     false           rollout the component
+scaler             vela-system  deployments.apps                   false           Manually scale the component.
+service-binding    vela-system  webservice,worker                  false           Binding secrets of cloud resources to component env
+sidecar            vela-system  deployments.apps                   true            Inject a sidecar container to the component.
+volumes            vela-system  deployments.apps                   true            Add volumes for your Workload.
+```
+
+最后用户只需要把这个自定义的运维特征，放入一个与之匹配的组件中使用即可：
+
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: testapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        cmd:
+          - node
+          - server.js
+        image: oamdev/testapp:v1
+        port: 8080
+      traits:
+        - type: customize-ingress
+          properties:
+            rules:
+              - http:
+                  paths:
+                    - path: /testpath
+                      pathType: Prefix
+                      backend:
+                        service:
+                          name: test
+                          port:
+                            number: 80
+```
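+
+部署之后，也可以先确认这个运维特征渲染出的 Ingress 资源已经创建（示意，资源的具体名称以实际渲染结果为准）：
+
+```shell
+$ kubectl get ingress
+```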
+```
+
+最后用户只需要把这个自定义的运维特征,放入一个与之匹配的组件中进行使用即可:
+
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: testapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        cmd:
+          - node
+          - server.js
+        image: oamdev/testapp:v1
+        port: 8080
+      traits:
+        - type: customize-ingress
+          properties:
+            rules:
+            - http:
+                paths:
+                - path: /testpath
+                  pathType: Prefix
+                  backend:
+                    service:
+                      name: test
+                      port:
+                        number: 80
+```
+
+参照上面的开发过程,你可以继续自定义其它需要的 Kubernetes 资源,提供给你的用户。
+
+请注意:这种自定义运维特征的方式,无法设置诸如 `annotations` 这样的元信息(metadata)作为运维特征属性。也就是说,当你只想简单引入自己的 CRD 资源或者控制器作为运维特征时,可以遵循这种做法。
+
+#### 使用 CUE 来自定义运维特征
+
+我们更推荐你使用 CUE 模版来自定义运维特征。这种方法让你可以灵活地模版化任何资源及其组合。比如,我们可以把自定义的 `Service` 和 `Ingress` 组合成一个运维特征来使用。
+
+在用法上,你需要把所有的运维特征定义在 `outputs` 里(注意,不是 `output`),格式如下:
+
+
+```cue
+outputs: <unique-name>:
+  <full template data>
+```
+
+我们下面同样使用一个 `Ingress` 和 `Service` 的示例进行讲解:
+
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  name: cue-ingress
+spec:
+  podDisruptive: false
+  schematic:
+    cue:
+      template: |
+        parameter: {
+          domain: string
+          http: [string]: int
+        }
+
+        // 我们可以在一个运维特征的 CUE 模版里定义多个 outputs
+        outputs: service: {
+          apiVersion: "v1"
+          kind: "Service"
+          metadata: {
+            annotations: {
+              "address-type": "intranet"
+            }
+          }
+          spec: {
+            selector:
+              app: context.name
+            ports: [
+              for k, v in parameter.http {
+                port:       v
+                targetPort: v
+              },
+            ]
+
+            type: "LoadBalancer"
+          }
+        }
+
+        outputs: ingress: {
+          apiVersion: "networking.k8s.io/v1beta1"
+          kind: "Ingress"
+          metadata:
+            name: context.name
+          spec: {
+            rules: [{
+              host: parameter.domain
+              http: {
+                paths: [
+                  for k, v in parameter.http {
+                    path: k
+                    backend: {
+                      serviceName: context.name
+                      servicePort: v
+                    }
+                  },
+                ]
+              }
+            }]
+          }
+        }
+```
+
+可以看到,`parameter` 字段让我们可以自由地自定义和传递业务参数;同时,在 `metadata` 的 `annotations` 里可以标记任何你需要的信息。上文的例子里,我们就标记了这个 `Service` 是一个供内网使用的负载均衡。
+
+接下来我们把这个 cue-ingress YAML 部署到运行时集群,成功之后用户同样可以通过 `vela traits` 命令,查看到这个新生成的运维特征:
+
+```shell
+$ kubectl apply -f cue-ingress.yaml
+traitdefinition.core.oam.dev/cue-ingress created
+$ vela traits
+NAME             NAMESPACE    APPLIES-TO         CONFLICTS-WITH  POD-DISRUPTIVE  DESCRIPTION
+cue-ingress      default                                         false           description not defined
+ingress          default                                         false           description not defined
+annotations      vela-system  deployments.apps                   true            Add annotations for your Workload.
+configmap        vela-system  deployments.apps                   true            Create/Attach configmaps to workloads.
+cpuscaler        vela-system  deployments.apps                   false           Automatically scale the component based on CPU usage.
+expose           vela-system  deployments.apps                   false           Expose port to enable web traffic for your component.
+hostalias        vela-system  deployment.apps                    false           Add host aliases to workloads.
+labels           vela-system  deployments.apps                   true            Add labels for your Workload.
+lifecycle        vela-system  deployments.apps                   true            Add lifecycle hooks to workloads.
+resource         vela-system  deployments.apps                   true            Add resource requests and limits to workloads.
+rollout          vela-system                                     false           rollout the component
+scaler           vela-system  deployments.apps                   false           Manually scale the component.
+service-binding  vela-system  webservice,worker                  false           Binding secrets of cloud resources to component env
+sidecar          vela-system  deployments.apps                   true            Inject a sidecar container to the component.
+volumes          vela-system  deployments.apps                   true            Add volumes for your Workload.
+``` + +最后用户将这个运维特征放入对应组件,通过应用部署计划完成交付: + + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:v1 + port: 8080 + traits: + - type: cue-ingress + properties: + domain: test.my.domain + http: + "/api": 8080 +``` + +基于 CUE 的运维特征定义方式,也提供了满足于更多业务场景的用法,比如给运维特征打补丁、传递数据等等。后面的文档将进一步介绍相关内容。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/patch-trait.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/patch-trait.md new file mode 100644 index 00000000..542fa111 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/patch-trait.md @@ -0,0 +1,505 @@ +--- +title: 补丁型特征 +--- + +在自定义运维特征中,使用补丁型特征是一种比较常用的形式。 + +它让我们可以修改、补丁某些属性给组件对象(一般是工作负载)来完成特定操作,比如更新 `sidecar` 和节点亲和性(node affinity)的规则(并且,这个操作一定是在资源往集群部署前就已经生效)。 + +当我们的组件是从第三方提供并自定义而来的时候,由于它们的模版往往是固定不可变的,所以能使用补丁型特征就显得尤为有用了。 + +> 尽管运维特征是由 CUE 来定义,它能打补丁的组件类型并不限,不管是来自 CUE、Helm 还是其余支持的模版格式 + +下面,我们通过一个节点亲和性(node affinity)的例子,讲解如何使用补丁型特征: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "affinity specify node affinity and toleration" + name: node-affinity +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + if parameter.affinity != _|_ { + affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{ + matchExpressions: [ + for k, v in parameter.affinity { + key: k + operator: "In" + values: v + }, + ]}] + } + if parameter.tolerations != _|_ { + tolerations: [ + for k, v in parameter.tolerations { + effect: "NoSchedule" + key: k + operator: "Equal" + value: v + }] + } + } + } + + parameter: { + affinity?: [string]: [...string] + tolerations?: [string]: string + } +``` + +具体来说,我们上面的这个补丁型特征,假定了使用它的组件对象将会使用 `spec.template.spec.affinity` 这个字段。因此,我们需要用 `appliesToWorkloads` 来指明,让当前运维特征被应用到拥有这个字段的对应工作负载实例上。 + +另一个重要的字段是 `podDisruptive`,这个补丁型特征将修改 Pod 模板字段,因此对该运维特征的任何字段进行更改,都会导致 Pod 重启。我们应该增加 `podDisruptive` 并且设置它的值为 true,以此告诉用户这个运维特征生效后将导致 Pod 重新启动。 + +现在用户只需要,声明他们希望增加一个节点亲和性的规则到组件实例当中: + +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + image: oamdev/testapp:v1 + traits: + - type: "node-affinity" + properties: + affinity: + server-owner: ["owner1","owner2"] + resource-pool: ["pool1","pool2","pool3"] + tolerations: + resource-pool: "broken-pool1" + server-owner: "old-owner" +``` + +### 待解决的短板 + +默认来说,补丁型特征是通过 CUE 的 `merge` 操作来实现的。它有以下限制: + +- 不能处理有冲突的字段名 + - 比方说,在一个组件实例中已经设置过这样的值 `replicas=5`,那一旦有运维特征实例,尝试给 `replicas` 字段的值打补丁就会失败。所以我们建议你提前规划好,不要在组件和运维特征之间使用重复的字段名。 +- 数组列表被补丁时,会按索引顺序进行合并。如果数组里出现了重复的值,将导致问题。为了规避这个风险,请查询后面的解决方案。 + +### 策略补丁 + +策略补丁,通过增加注解(annotation)而生效,并支持如下两种模式。 + +> 请注意,这里开始并不是 CUE 官方提供的功能, 而是 KubeVela 扩展开发而来 + +#### 1. 
使用 `+patchKey=<key_name>` 注解
+
+这个注解是给数组列表打补丁用的。它的执行方式不遵循 CUE 官方的方式,而是将每一个数组列表视作对象,并执行如下的策略:
+  - 如果发现重复的键名,补丁数据会直接替换掉它的值
+  - 如果没有重复键名,补丁则会自动附加这些数据
+
+下面来看一个使用 `patchKey` 的策略补丁:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "add sidecar to the app"
+  name: sidecar
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  podDisruptive: true
+  schematic:
+    cue:
+      template: |
+        patch: {
+          // +patchKey=name
+          spec: template: spec: containers: [parameter]
+        }
+        parameter: {
+          name: string
+          image: string
+          command?: [...string]
+        }
+```
+在上面的例子中,我们将 `patchKey` 声明为容器的键名 `name`。如果工作负载中并没有同名的容器,那么这个 sidecar 容器就会被附加到 `spec.template.spec.containers` 数组列表中;如果工作负载中已经存在同名的容器,则会执行合并(merge)操作而不是附加。
+
+如果 `patch` 和 `outputs` 同时存在于一个运维特征定义中,`patch` 会率先被执行,然后再渲染 `outputs`。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "expose the app"
+  name: expose
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  podDisruptive: true
+  schematic:
+    cue:
+      template: |
+        patch: {spec: template: metadata: labels: app: context.name}
+        outputs: service: {
+          apiVersion: "v1"
+          kind: "Service"
+          metadata: name: context.name
+          spec: {
+            selector: app: context.name
+            ports: [
+              for k, v in parameter.http {
+                port:       v
+                targetPort: v
+              },
+            ]
+          }
+        }
+        parameter: {
+          http: [string]: int
+        }
+```
+在上面这个运维特征定义中,我们将会把一个 `Service` 添加到给定的组件实例上:先给工作负载打上补丁数据,然后再基于模版里的 `outputs` 渲染余下的资源。
+
+#### 2. 使用 `+patchStrategy=retainkeys` 注解
+
+这个注解的策略,与 Kubernetes 官方的 [retainkeys](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy) 策略类似。
+
+在一些场景下,整个对象需要被一起替换掉,使用 `retainkeys` 就是最适合的办法。
+
+假定一个 `Deployment` 对象是这样编写的:
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: retainkeys-demo
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 30%
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: retainkeys-demo-ctr
+        image: nginx
+```
+
+现在如果我们想替换掉 `rollingUpdate` 策略,你可以这样写:
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: TraitDefinition
+metadata:
+  name: recreate
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  extension:
+    template: |-
+      patch: {
+        spec: {
+          // +patchStrategy=retainKeys
+          strategy: type: "Recreate"
+        }
+      }
+```
+
+这个 YAML 资源将变更为:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: retainkeys-demo
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: retainkeys-demo-ctr
+        image: nginx
+```
+## 更多补丁型特征的使用场景
+
+补丁型特征在需要对组件做一些整体操作时非常有用。我们再来看看它还可以满足哪些需求:
+
+### 增加标签
+
+比如说,我们要给组件实例打上 `virtualgroup` 的通用标签。
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "Add virtual group labels"
+  name: virtualgroup
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  podDisruptive: true
+  schematic:
+    cue:
+      template: |
+        patch: {
+          spec: template: {
+            metadata: labels: {
+              if parameter.scope == "namespace" {
+                "app.namespace.virtual.group": parameter.group
+              }
+              if parameter.scope == "cluster" {
+                "app.cluster.virtual.group": parameter.group
+              }
+            }
+          }
+        }
+        parameter: {
+          group: *"default" | string
+          scope: *"namespace" | string
+        }
+```
+
+然后这样用就可以了(见下面的示意片段):
+
+```yaml
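+# 示意:以下用法片段省略了 Application 的组件等其余字段(以 ... 表示),
+# 仅展示如何在某个组件的 traits 中使用 virtualgroup 运维特征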
+apiVersion: core.oam.dev/v1beta1 +kind: Application +spec: + ... + traits: + - type: virtualgroup + properties: + group: "my-group1" + scope: "cluster" +``` + +### 增加注解 + +与通用标签类似,你也可以给组件实例打补丁,增加一些注解。注解的格式,必须是 JSON。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "Specify auto scale by annotation" + name: kautoscale +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: false + schematic: + cue: + template: | + import "encoding/json" + + patch: { + metadata: annotations: { + "my.custom.autoscale.annotation": json.Marshal({ + "minReplicas": parameter.min + "maxReplicas": parameter.max + }) + } + } + parameter: { + min: *1 | int + max: *3 | int + } +``` + +### 增加 Pod 环境变量 + +给 Pod 去注入环境变量也是非常常见的操作。 + +> 这种使用方式依赖策略补丁而生效, 所以记得加上 `+patchKey=name` + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "add env into your pods" + name: env +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + // +patchKey=name + containers: [{ + name: context.name + // +patchKey=name + env: [ + for k, v in parameter.env { + name: k + value: v + }, + ] + }] + } + } + + parameter: { + env: [string]: string + } +``` + +### 基于外部鉴权服务注入 `ServiceAccount` + +在这个场景下,service-account 是从一个鉴权服务中动态获取、再通过打补丁给到应用的。 + +我们这里展示的是,将 UID token 放进 `HTTP header` 的例子。你也可以用 `HTTP body` 来完成需求。 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "dynamically specify service account" + name: service-account +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + processing: { + output: { + credentials?: string + } + http: { + method: *"GET" | string + url: parameter.serviceURL + request: { + header: { + "authorization.token": parameter.uidtoken + } + } + } + } + patch: { + spec: template: spec: serviceAccountName: processing.output.credentials + } + + parameter: { + uidtoken: string + serviceURL: string + } +``` + +### 增加 `InitContainer` + +[`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) 常用于预定义镜像内的操作,并且在承载应用的容器运行前就跑起来。 + +看看示例: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "add an init container and use shared volume with pod" + name: init-container +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + // +patchKey=name + containers: [{ + name: context.name + // +patchKey=name + volumeMounts: [{ + name: parameter.mountName + mountPath: parameter.appMountPath + }] + }] + initContainers: [{ + name: parameter.name + image: parameter.image + if parameter.command != _|_ { + command: parameter.command + } + + // +patchKey=name + volumeMounts: [{ + name: parameter.mountName + mountPath: parameter.initMountPath + }] + }] + // +patchKey=name + volumes: [{ + name: parameter.mountName + emptyDir: {} + }] + } + } + + parameter: { + name: string + image: string + command?: [...string] + mountName: *"workdir" | string + appMountPath: string + initMountPath: string + } +``` + +用法像这样: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: 
express-server
+      type: webservice
+      properties:
+        image: oamdev/testapp:v1
+      traits:
+        - type: "init-container"
+          properties:
+            name: "install-container"
+            image: "busybox"
+            command:
+              - wget
+              - "-O"
+              - "/work-dir/index.html"
+              - http://info.cern.ch
+            mountName: "workdir"
+            appMountPath: "/usr/share/nginx/html"
+            initMountPath: "/work-dir"
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/status.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/status.md
new file mode 100644
index 00000000..7233a35d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/traits/status.md
@@ -0,0 +1,130 @@
+---
+title: 状态回写
+---
+
+本文档将为你讲解,如何通过 CUE 模版在定义对象时实现状态回写。
+
+## 健康检查
+
+不管是组件定义还是运维特征定义,健康检查对应的配置项都是 `spec.status.healthPolicy`。如果没有定义,它的值默认是 `true`。
+
+在 CUE 里的关键词是 `isHealth`,CUE 表达式的结果必须是 `bool` 类型。
+KubeVela 运行时会一直检查 CUE 表达式,直至其状态显示为健康。每次检查时,控制器都会获取所有相关的 Kubernetes 资源,并将它们填充到 context 字段中。
+
+所以 context 字段会包含如下信息:
+
+```cue
+context: {
+  name: <组件名>
+  appName: <应用名>
+  output: <工作负载资源>
+  outputs: {
+    <资源名1>: <资源1>
+    <资源名2>: <资源2>
+  }
+}
+```
+`Trait` 对象没有 `context.output` 这个字段,其它字段相同。
+
+我们看看健康检查的例子:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+spec:
+  status:
+    healthPolicy: |
+      isHealth: (context.output.status.readyReplicas > 0) && (context.output.status.readyReplicas == context.output.status.replicas)
+  ...
+```
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+spec:
+  status:
+    healthPolicy: |
+      isHealth: len(context.outputs.service.spec.clusterIP) > 0
+  ...
+```
+
+健康检查的结果将会记录到 `Application` 对象中。
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+spec:
+  components:
+    - name: myweb
+      type: worker
+      properties:
+        cmd:
+          - sleep
+          - "1000"
+        enemies: alien
+        image: busybox
+        lives: "3"
+      traits:
+        - type: ingress
+          properties:
+            domain: www.example.com
+            http:
+              /: 80
+status:
+  ...
+  services:
+    - healthy: true
+      message: "type: busybox,\t enemies:alien"
+      name: myweb
+      traits:
+        - healthy: true
+          message: 'Visiting URL: www.example.com, IP: 47.111.233.220'
+          type: ingress
+  status: running
+```
+
+## 自定义状态
+
+不管是组件定义还是运维特征定义,自定义状态对应的配置项都是 `spec.status.customStatus`。
+
+在 CUE 中的关键词是 `message`。同时,CUE 表达式的结果必须是 `string` 类型。
+
+自定义状态和健康检查的原理一致:`Application` 对象的控制器会持续评估 CUE 表达式,并将结果回写到状态中。
+
+context 字段包含如下信息:
+
+```cue
+context: {
+  name: <组件名>
+  appName: <应用名>
+  output: <工作负载资源>
+  outputs: {
+    <资源名1>: <资源1>
+    <资源名2>: <资源2>
+  }
+}
+```
+
+`Trait` 对象不会有 `context.output` 这个字段,其它字段一致。
+
+查看示例:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+spec:
+  status:
+    customStatus: |-
+      message: "type: " + context.output.spec.template.spec.containers[0].image + ",\t enemies:" + context.outputs.gameconfig.data.enemies
+  ...
+```
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+spec:
+  status:
+    customStatus: |-
+      message: "type: "+ context.outputs.service.spec.type +",\t clusterIP:"+ context.outputs.service.spec.clusterIP+",\t ports:"+ "\(context.outputs.service.spec.ports[0].port)"+",\t domain"+context.outputs.ingress.spec.rules[0].host
+  ...
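+# 提示:上面 message 表达式中引用的 service 与 ingress,
+# 对应的是该运维特征自身 outputs 中渲染出的同名资源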
+``` \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow-platform-eng.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow-platform-eng.md new file mode 100644 index 00000000..24951c7b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow-platform-eng.md @@ -0,0 +1,3 @@ +--- +title: 构建 Workflow 工作流 +--- \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md new file mode 100644 index 00000000..4635a889 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md @@ -0,0 +1,270 @@ +--- +title: 附录:内置工作流步骤 +--- + +为了便于用户使用,KubeVela 提供了一些内置的工作流步骤。 + +## apply-application + +### 简介 + +部署当前 Application 中的所有组件和运维特征。 + +### 参数 + +无需指定参数,主要用于应用部署前后增加自定义步骤。 + +### 示例 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: express-server + type: apply-application +``` + +## depends-on-app + +### 简介 + +等待指定的 Application 完成。 + +### 参数 + +| 参数名 | 类型 | 说明 | +| :-------: | :----: | :-----------------------------------: | +| name | string | 需要等待的 Application 名称 | +| namespace | string | 需要等待的 Application 所在的命名空间 | + +### 示例 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: express-server + type: depends-on-app + properties: + name: another-app + namespace: default +``` + +## multi-env + +### 简介 + +将 Application 在不同的环境和策略中部署。 + +### 参数 + +| 参数名 | 类型 | 说明 | +| :----: | :----: | :--------------: | +| policy | string | 需要关联的策略名 | +| env | string | 需要关联的环境名 | + +### 示例 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: multi-env-demo + namespace: default +spec: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.21 + port: 80 + + policies: + - name: env + type: env-binding + properties: + created: false + envs: + - name: test + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + namespaceSelector: + name: test + - name: prod + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + namespaceSelector: + name: prod + + workflow: + steps: + - name: deploy-test-server + type: deploy2env + properties: + policy: env + env: test + - name: deploy-prod-server + type: deploy2env + properties: + policy: env + env: prod +``` + +## webhook-notification + +### 简介 + +向指定的 Webhook 发送信息。 + +### 参数 + +| 参数名 | 类型 | 说明 | +| :--------------: | :----: | :--------------------------------------------------------------------------------------------------------------------------------------- | +| slack | Object | 
可选值,如果需要发送 Slack 信息,则需填写其 url 及 message | +| slack.url | String | 必填值,Slack 的 Webhook 地址 | +| slack.message | Object | 必填值,需要发送的 Slack 信息,请符合 [Slack 信息规范](https://api.slack.com/reference/messaging/payload) | +| dingding | Object | 可选值,如果需要发送钉钉信息,则需填写其 url 及 message | +| dingding.url | String | 必填值,钉钉的 Webhook 地址 | +| dingding.message | Object | 必填值,需要发送的钉钉信息,请符合 [钉钉信息规范](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw) | + +### 示例 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: dingtalk-message + type: webhook-notification + properties: + dingding: + # 钉钉 Webhook 地址,请查看:https://developers.dingtalk.com/document/robots/custom-robot-access + url: xxx + message: + msgtype: text + text: + context: 开始运行工作流 + - name: application + type: apply-application + - name: slack-message + type: webhook-notification + properties: + slack: + # Slack Webhook 地址,请查看:https://api.slack.com/messaging/webhooks + url: xxx + message: + text: 工作流运行完成 +``` + +## suspend + +### 简介 + +暂停当前工作流,可以通过 `vela workflow resume appname` 继续已暂停的工作流。 + +> 有关于 `vela workflow` 命令的介绍,可以详见 [vela cli](../../cli/vela_workflow)。 + +### 参数 + +| 参数名 | 类型 | 说明 | +| :----: | :---: | :---: | +| - | - | - | + +### 示例 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: slack-message + type: webhook-notification + properties: + slack: + # Slack Webhook 地址,请查看:https://api.slack.com/messaging/webhooks + url: xxx + message: + text: 准备开始部署应用,请管理员审批并继续工作流 + - name: manual-approval + type: suspend + - name: express-server + type: apply-application +``` \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/cue-actions.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/cue-actions.md new file mode 100644 index 00000000..3b7205a1 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/cue-actions.md @@ -0,0 +1,301 @@ +--- +title: 附录:CUE 操作符 +--- + +这个文档介绍 step 定义过程中,可以使用的 CUE 操作类型。这些操作均由 `vela/op` 包提供。 + +> 可以阅读 [CUE 基础文档](../cue/basic) 来学习 CUE 基础语法。 + +## Apply + +-------- + +在 Kubernetes 集群中创建或者更新资源。 + +### 操作参数 + +- value: 将被 apply 的资源的定义。操作成功执行后,会用集群中资源的状态重新渲染 `value`。 +- patch: 对 `value` 的内容打补丁,支持策略性合并,比如可以通过注释 `// +patchKey` 实现数组的按主键合并。 + + +``` +#Apply: { + value: {...} + patch: { + // patchKey=$key + ... + } +} +``` +### 用法示例 + +``` +import "vela/op" +stepName: op.#Apply & { + value: { + kind: "Deployment" + apiVersion: "apps/v1" + metadata: name: "test-app" + spec: { + replicas: 2 + ... 
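+      // 提示(示意):op.#Apply 成功执行后,集群中该资源的实际状态会回填到 value 上,
+      // 因此后续步骤可以读取诸如 value.status.readyReplicas 这样的状态字段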
+ } + } + patch: { + spec: template: spec: { + // patchKey=name + containers: [{name: "sidecar"}] + } + } +} +``` + +## ConditionalWait + +--- + +会让 workflow step 处于等待状态,直到条件被满足。 + +### 操作参数 + +- continue: 当该字段为 true 时,workflow step 才会恢复继续执行。 + +``` +#ConditionalWait: { + continue: bool +} +``` + +### 用法示例 + +``` +import "vela/op" + +apply: op.#Apply + +wait: op.#ConditionalWait & { + continue: apply.value.status.phase=="running" +} +``` + +## Load + +--- + +获取 Application 中所有组件对应的资源数据。 + +### 操作参数 + +无需指定参数。 + + +``` +#Load: {} +``` + +### 用法示例 + +``` +import "vela/op" + +// 该操作完成后,你可以使用 `load.value.[componentName]` 来获取到对应组件的资源数据 +load: op.#Load & {} +``` + +## Read + +--- + +读取 Kubernetes 集群中的资源。 + +### 操作参数 + +- value: 需要用户描述读取资源的元数据,比如 kind、name 等,操作完成后,集群中资源的数据会被填充到 `value` 上。 +- err: 如果读取操作发生错误,这里会以字符串的方式指示错误信息。 + + +``` +#Read: { + value: {} + err?: string +} +``` + +### 用法示例 + +``` +// 操作完成后,你可以通过 configmap.value.data 使用 configmap 里面的数据 +configmap: op.#Read & { + value: { + kind: "ConfigMap" + apiVersion: "v1" + metadata: { + name: "configmap-name" + namespace: "configmap-ns" + } + } +} +``` + +## ApplyApplication + +--- + +在 Kubernetes 集群中创建或者更新应用对应的所有资源。 + +### 操作参数 + +无需指定参数。 + +``` +#ApplyApplication: {} +``` + +### 用法示例 + +``` +apply: op.#ApplyApplication & {} +``` + +## ApplyComponent + +--- + +在 Kubernetes 集群中创建或者更新组件对应的所有资源。 + +### 操作参数 + +- component: 指定需要 apply 的组件名称。 +- workload: 操作完成后,从 Kubernetes 集群中获取到的组件对应的 workload 资源的状态数据。 +- traits: 操作完成后,从 Kubernetes 集群中获取到的组件对应的辅助资源的状态数据。数据结构为 map 类型, 索引为定义中 outputs 涉及到的名称。 + + +``` +#ApplyComponent: { + component: string + workload: {...} + traits: [string]: {...} +} +``` + +### 用法示例 + +``` +apply: op.#ApplyComponent & { + component: "component-name" +} +``` + +## ApplyRemaining + +--- +在 Kubernetes 集群中创建或者更新 Application 中所有组件对应的资源,并可以通过 `exceptions` 指明哪些组件或者组件中的某些资源跳过创建和更新。 + +### 操作参数 + +- exceptions: 指明该操作需要排除掉的组件。 +- skipApplyWorkload: 是否跳过该组件 workload 资源的同步。 +- skipAllTraits: 是否跳过该组件所有辅助资源的同步。 + + +``` +#ApplyRemaining: { + exceptions?: [componentName=string]: { + // skipApplyWorkload 表明是否需要跳过组件的部署 + skipApplyWorkload: *true | bool + + // skipAllTraits 表明是否需要跳过所有运维特征的部署 + skipAllTraits: *true| bool + } +} +``` + +### 用法示例 + +``` +apply: op.#ApplyRemaining & { + exceptions: {"applied-component-name": {}} +} +``` + +## Slack + +--- + +向 Slack 发送消息通知。 + +### 操作参数 + +- url: Slack 的 Webhook 地址。 +- message: 需要发送的 Slack 消息,需要符合 [Slack 信息规范](https://api.slack.com/reference/messaging/payload) 。 + +``` +#Slack: { + url: string + message: {...} +} +``` + +### 用法示例 + +``` +apply: op.#Slack & { + url: webhook url + message: + text: Hello KubeVela +} +``` + +## DingTalk + +--- + +向钉钉发送消息通知。 + +### 操作参数 + +- url: 钉钉的 Webhook 地址。 +- message: 需要发送的钉钉消息,需要符合 [钉钉信息规范](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw) 。 + +``` +#DingTalk: { + url: string + message: {...} +} +``` + +### 用法示例 + +``` +apply: op.#DingTalk & { + url: webhook url + message: + msgtype: text + text: + context: Hello KubeVela +} +``` + +## Steps + +--- + +用来封装一组操作。 + +### 操作参数 + +- steps 里面需要通过 tag 的方式指定执行顺序,数字越小执行越靠前。 + + +### 用法示例 + +``` +app: op.#Steps & { + load: op.#Load & { + component: "component-name" + } @step(1) + apply: op.#Apply & { + value: load.value.workload + } @step(2) +} +``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/workflow.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/workflow.md new file mode 100644 index 
00000000..91b511bb --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/platform-engineers/workflow/workflow.md @@ -0,0 +1,147 @@ +--- +title: 自定义工作流 +--- + +## 总览 + +KubeVela 的工作流机制允许你自定义应用部署计划中的步骤,粘合额外的交付流程,指定任意的交付环境。简而言之,工作流提供了定制化的控制逻辑,在原有 Kubernetes 模式交付资源(Apply)的基础上,提供了面向过程的灵活性。比如说,使用工作流实现暂停、人工验证、状态等待、数据流传递、多环境灰度、A/B 测试等复杂操作。 + +工作流是 KubeVela 实践过程中基于 OAM 模型的进一步探索和最佳实践,充分遵守 OAM 的模块化理念和可复用特性。每一个工作流模块都是一个“超级粘合剂”,可以将你任意的工具和流程都组合起来。使得你在现代复杂云原生应用交付环境中,可以通过一份申明式的配置,完整的描述所有的交付流程,保证交付过程的稳定性和便利性。 + +## 使用工作流 + +工作流由步骤组成,你既可以使用 KubeVela 提供的 [内置工作流步骤] 来便利地完成操作,也可以自己来编写 `WorkflowStepDefinition` 来达到想要的效果。 + +我们可以使用 `vela def` 通过编写 `Cue template` 来定义工作流步骤。下面我们来完成这个场景:使用 Helm 部署一个 Tomcat,并在部署完成后自动向 Slack 发送消息通知。 + +### 编写工作流步骤 + +KubeVela 提供了一些 CUE 操作类型用于编写工作流步骤。这些操作均由 `vela/op` 包提供。为了实现上述场景,我们需要使用以下 3 个 CUE 操作: + +| 操作名 | 说明 | 参数 | +| :---: | :--: | :-- | +| [ApplyApplication](./cue-actions#apply) | 部署应用中的所有资源 | - | +| [Read](./cue-actions#read) | 读取 Kubernetes 集群中的资源。 | value: 描述需要被读取资源的元数据,比如 kind、name 等,操作完成后,集群中资源的数据会被填充到 `value` 上。
err: 如果读取操作发生错误,这里会以字符串的方式指示错误信息。 | +| [ConditionalWait](./cue-actions#conditionalwait) | 会让 Workflow Step 处于等待状态,直到条件被满足。 | continue: 当该字段为 true 时,Workflow Step 才会恢复继续执行。 | + +> 所有的操作类型可参考 [Cue Actions](./cue-actions) + +在此基础上,我们需要两个 `WorkflowStepDefinition`: + +1. 部署 Tomcat,并且等待 Deployment 的状态变为 running,这一步需要自定义工作流步骤来实现。 +2. 发送 Slack 通知,这一步可以使用 KubeVela 内置的 [webhook-notification] 步骤来实现。 + +#### 部署 Tomcat 步骤 + +首先,通过 `vela def init` 来生成一个 `WorkflowStepDefinition` 模板: + +```shell +vela def init my-helm -t workflow-step --desc "Apply helm charts and wait till it's running." -o my-helm.cue +``` + +得到如下结果: +```shell +$ cat my-helm.cue + +"my-helm": { + annotations: {} + attributes: {} + description: "Apply helm charts and wait till it's running." + labels: {} + type: "workflow-step" +} + +template: { +} +``` + +引用 `vela/op` 包,并将 Cue 代码补充到 `template` 中: + +``` +import ( + "vela/op" +) + +"my-helm": { + annotations: {} + attributes: {} + description: "Apply helm charts and wait till it's running." + labels: {} + type: "workflow-step" +} + +template: { + // 部署应用中的所有资源 + apply: op.#ApplyApplication & {} + + resource: op.#Read & { + value: { + kind: "Deployment" + apiVersion: "apps/v1" + metadata: { + name: "tomcat" + // 可以使用 context 来获取该 Application 的任意元信息 + namespace: context.namespace + } + } + } + + workload: resource.value + // 等待 helm 的 deployment 可用 + wait: op.#ConditionalWait & { + continue: workload.status.readyReplicas == workload.status.replicas && workload.status.observedGeneration == workload.metadata.generation + } +} +``` + +部署到集群中: + +```shell +$ vela def apply my-helm.cue + +WorkflowStepDefinition my-helm in namespace vela-system updated. +``` + +#### 发送 Slack 通知步骤 + +直接使用内置的 [webhook-notification] 步骤。 + +### 编写应用 + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: tomcat + type: helm + properties: + repoType: helm + url: https://charts.bitnami.com/bitnami + chart: tomcat + version: "9.2.20" + workflow: + steps: + - name: tomcat + # 指定步骤类型 + type: my-helm + outputs: + - name: msg + # 将 my-helm 中读取到的 deployment status 作为信息导出 + valueFrom: resource.value.status.conditions[0].message + - name: send-message + type: webhook-notification + inputs: + - from: msg + # 引用上一步中 outputs 中的值,并传入到 properties 的 slack.message.text 中作为输入 + parameterKey: slack.message.text + properties: + slack: + # 你的 slack webhook 地址,请参考:https://api.slack.com/messaging/webhooks + url: +``` + +将该应用部署到集群中,可以看到所有的资源都已被成功部署,且 Slack 中收到了对应的通知,通知内容为该 Deployment 的状态信息。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start-appfile.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start-appfile.md new file mode 100644 index 00000000..efe4fada --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start-appfile.md @@ -0,0 +1,80 @@ +--- +title: 概述 +--- + +为了你的平台获得最佳用户体验,我们建议各位平台构建者们为最终用户提供简单并且友好的 UI,而不是仅仅简单展示全部平台层面的信息。一些常用的做法包括构建 GUI 控制台,使用 DSL,或者创建用户友好的命令行工具。 + +为了证明在 KubeVela 中提供了良好的构建开发体验,我们开发了一个叫 `Appfile` 的客户端工具。这个工具使得开发者通过一个文件和一个简单的命令:`vela up` 就可以部署任何应用。 + +现在,让我们来体验一下它是如何使用的。 + +## Step 1: 安装 + +确保你已经参照 [安装指南](install) 完成了所有的安装验证工作。 + +## Step 2: 部署你的第一个应用 + +```bash +$ vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela.yaml +Parsing vela.yaml ... +Loading templates ... + +Rendering configs for service (testsvc)... +Writing deploy config to (.vela/deploy.yaml) + +Applying deploy configs ... 
+Checking if app has been deployed...
+App has not been deployed, creating a new deployment...
+✅ App has been deployed 🚀🚀🚀
+    Port forward: vela port-forward first-vela-app
+             SSH: vela exec first-vela-app
+         Logging: vela logs first-vela-app
+      App status: vela status first-vela-app
+  Service status: vela status first-vela-app --svc testsvc
+```
+
+检查状态,直到看到 `Routes` 为就绪状态:
+```bash
+$ vela status first-vela-app
+About:
+
+  Name:       first-vela-app
+  Namespace:  default
+  Created at: ...
+  Updated at: ...
+
+Services:
+
+  - Name: testsvc
+    Type: webservice
+    HEALTHY Ready: 1/1
+    Last Deployment:
+      Created at: ...
+      Updated at: ...
+    Traits:
+      - ✅ ingress: Visiting URL: testsvc.example.com, IP: <ingress IP>
+```
+
+**在 [kind cluster 配置章节](install#kind)**,你可以通过 localhost 访问 service。在其他配置中,使用相应的 ingress 地址来替换 localhost。
+
+```
+$ curl -H "Host:testsvc.example.com" http://localhost/
+
+Hello World
+
+
+                                       ##         .
+                                 ## ## ##        ==
+                              ## ## ## ## ##    ===
+                           /""""""""""""""""\___/ ===
+                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
+                           \______ o          _,/
+                            \      \       _,'
+                             `'--.._\..--''
+
+```
+**瞧!** 你已经基本掌握了它。
+
+## 下一步
+
+- 详细学习 [`Appfile`](./developers/learn-appfile),并且了解它是如何工作的。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start.md
new file mode 100644
index 00000000..67a369b0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/quick-start.md
@@ -0,0 +1,102 @@
+---
+title: 交付第一个应用
+---
+
+欢迎来到 KubeVela!在本小节中,我们会向你介绍如何完成第一个 Demo。
+
+先来实际感受 KubeVela 到底是如何工作的。
+
+## 部署你的第一个应用
+
+首先,在你的集群上,我们使用一个提前准备好的 YAML 文件。
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
+application.core.oam.dev/first-vela-app created
+```
+
+检查状态:直到看到 `status` 是 `running`,并且 `services` 是 `healthy`。
+
+```bash
+$ kubectl get application first-vela-app -o yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  generation: 1
+  name: first-vela-app
+  ...
+  namespace: default
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8000
+status:
+  ...
+  services:
+    - healthy: true
+      name: express-server
+      traits:
+        - healthy: true
+          message: 'Visiting URL: testsvc.example.com, IP: <your ip address>'
+          type: ingress
+  status: running
+```
+可以看到,这个 YAML 的类型是一个 `Application`,由 `core.oam.dev/v1beta1` 来定义,即 KubeVela 的 API Specification。
+
+在 `spec` 字段里,我们也看到比如 `components` 和 `traits` 这样陌生的字段。
+
+在下一章节中,我们将带你进一步深入它们背后 KubeVela 的核心概念:应用程序、组件系统和运维特征系统。
+
+同时,底层的 K8s 资源也被创建了出来:
+
+```bash
+$ kubectl get deployment
+NAME                READY   UP-TO-DATE   AVAILABLE   AGE
+express-server-v1   1/1     1            1           8m
+$ kubectl get svc
+NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+express-server   ClusterIP   172.21.11.152   <none>        8000/TCP   7m43s
+kubernetes       ClusterIP   172.21.0.1      <none>        443/TCP    116d
+$ kubectl get ingress
+NAME             CLASS    HOSTS                 ADDRESS   PORTS   AGE
+express-server   <none>   testsvc.example.com             80      7m47s
+```
+
+如果你的集群有一个工作中的 ingress,你可以查看这个 service。
+
+```
+$ curl -H "Host:testsvc.example.com" http://<你的 ingress 地址>/
+
+Hello World
+
+
+                                       ##         .
+                                 ## ## ##        ==
+                              ## ## ## ## ##    ===
+                           /""""""""""""""""\___/ ===
+                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
+                           \______ o          _,/
+                            \      \       _,'
+                             `'--.._\..--''
+
+```
+**太棒了!** 你已经全部部署成功了。
+
+也就是说,KubeVela 的应用交付,围绕应用程序、组件系统和运维特征系统这一整套应用部署计划的核心概念展开,同时通过 Workflow 工作流、CUE 粘合开源生态等方式,对场景和能力进行按需扩展,达成跨云、标准和统一的交付目标。
+
+## 下一步
+
+后续步骤:
+
+- 查看 KubeVela 的[`应用部署计划及其概念`](./core-concepts/application),进一步理解其是如何工作的。
+- 查看 KubeVela 的[`系统架构`](./core-concepts/architecture),了解 KubeVela 的系统构成和运转模式。
+- 加入 KubeVela 中文社区钉钉群,群号:23310022。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-01.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-01.png
new file mode 100644
index 00000000..f48cd487
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-01.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-02.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-02.png
new file mode 100644
index 00000000..1c297828
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-02.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-03.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-03.png
new file mode 100644
index 00000000..1e4f1171
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-03.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-04.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-04.png
new file mode 100644
index 00000000..bc50f5a9
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-04.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-05.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-05.png
new file mode 100644
index 00000000..cf9d3a33
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-05.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-06.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-06.png
new file mode 100644
index 00000000..8e77d6b5
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/KubeVela-06.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-arch.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-arch.jpg
new file mode 100644
index 00000000..e7007c13
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-arch.jpg differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-workflow.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-workflow.png
new file mode 100644
index 00000000..bc43dc11
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/api-workflow.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/apiserver-arch.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/apiserver-arch.jpg
new file mode 100644
index 00000000..de8974f2
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/apiserver-arch.jpg differ
diff --git
a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/app-centric.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/app-centric.png new file mode 100644 index 00000000..bd3e6d4b Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/app-centric.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/appfile.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/appfile.png new file mode 100644 index 00000000..c21ad845 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/appfile.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/approllout-status-transition.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/approllout-status-transition.jpg new file mode 100644 index 00000000..ba275688 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/approllout-status-transition.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/arch.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/arch.png new file mode 100644 index 00000000..eb24a1ac Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/arch.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/book-info-struct.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/book-info-struct.jpg new file mode 100644 index 00000000..ca5fbb6f Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/book-info-struct.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v2.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v2.jpg new file mode 100644 index 00000000..11b83d4b Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v2.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v3.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v3.jpg new file mode 100644 index 00000000..83a20380 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/canary-pic-v3.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/catalog-workflow.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/catalog-workflow.jpg new file mode 100644 index 00000000..33957a42 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/catalog-workflow.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/coa.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/coa.png new file mode 100644 index 00000000..e1062354 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/coa.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/concepts.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/concepts.png new file mode 100644 index 00000000..d9635546 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/concepts.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v2.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v2.jpg new file mode 
100644 index 00000000..77ac4f1f Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v2.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v3.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v3.jpg new file mode 100644 index 00000000..68ad952e Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application-v3.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application.jpg new file mode 100644 index 00000000..ac3802d5 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/crossplane-visit-application.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/data-flow.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/data-flow.png new file mode 100644 index 00000000..e396b26e Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/data-flow.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/gitops-commit.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/gitops-commit.png new file mode 100644 index 00000000..2ce9c8b3 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/gitops-commit.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/how-it-works.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/how-it-works.png new file mode 100644 index 00000000..beb85424 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/how-it-works.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg new file mode 100644 index 00000000..56140887 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/json-schema-render-example.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/json-schema-render-example.jpg new file mode 100644 index 00000000..245563cc Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/json-schema-render-example.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubevela-runtime.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubevela-runtime.png new file mode 100644 index 00000000..a91b94ed Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubevela-runtime.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubewatch-notif.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubewatch-notif.jpg new file mode 100644 index 00000000..54eacf4d Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/kubewatch-notif.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/li-auto-inc.jpg 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/li-auto-inc.jpg new file mode 100644 index 00000000..a6008344 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/li-auto-inc.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/metrics.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/metrics.jpg new file mode 100644 index 00000000..af51b9ea Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/metrics.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/oam-model.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/oam-model.jpg new file mode 100644 index 00000000..2eb9bb37 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/oam-model.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-dashboards.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-dashboards.png new file mode 100644 index 00000000..52595944 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-dashboards.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-dashboards.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-dashboards.png new file mode 100644 index 00000000..a451efee Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-dashboards.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-search.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-search.png new file mode 100644 index 00000000..4dabf42c Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-search.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics.png new file mode 100644 index 00000000..daf751ef Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics2.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics2.png new file mode 100644 index 00000000..58169e43 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-logging-statistics2.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png new file mode 100644 index 00000000..b1a0e897 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png new file mode 100644 index 00000000..251439bc Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/openfaas.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/openfaas.jpg new file mode 100644 index 00000000..b97e81f1 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/openfaas.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/promotion.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/promotion.png new file mode 100644 index 00000000..a66f87d8 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/promotion.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/system-arch.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/system-arch.png new file mode 100644 index 00000000..7606f8a2 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/system-arch.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/traffic-shifting-analysis.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/traffic-shifting-analysis.png new file mode 100644 index 00000000..1001d851 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/traffic-shifting-analysis.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_autoscale.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_autoscale.jpg new file mode 100644 index 00000000..e14896e1 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_autoscale.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_webservice.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_webservice.jpg new file mode 100644 index 00000000..ee913061 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/vela_show_webservice.jpg differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/what-is-kubevela.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/what-is-kubevela.png new file mode 100644 index 00000000..28b391ca Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/what-is-kubevela.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-multi-env.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-multi-env.png new file mode 100644 index 00000000..8474ee92 Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-multi-env.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-with-ocm-demo.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-with-ocm-demo.png new file mode 100644 index 00000000..fcbe317f Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/resources/workflow-with-ocm-demo.png differ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap.md new file mode 100644 index 00000000..6e45079f --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap.md @@ -0,0 +1,5 @@ +--- +title: KubeVela Roadmap +--- + +Please visit [roadmap docs page](https://github.com/oam-dev/kubevela/tree/master/docs/en/roadmap/). diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2020-12-roadmap.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2020-12-roadmap.md new file mode 100644 index 00000000..6690d99a --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2020-12-roadmap.md @@ -0,0 +1,29 @@ +--- +title: Roadmap +--- + +Date: 2020-10-01 to 2020-12-31 + +## Core Platform + +- [Merge CUE based abstraction into OAM user facing objects](https://github.com/oam-dev/kubevela/projects/1#card-48198530). +- [Compatibility checking between workload types and traits](https://github.com/oam-dev/kubevela/projects/1#card-48199349) and [`conflictsWith` feature](https://github.com/oam-dev/kubevela/projects/1#card-48199465) +- [Simplify revision mechanism in kubevela core](https://github.com/oam-dev/kubevela/projects/1#card-48199829) +- [Capability Center (i.e. addon registry)](https://github.com/oam-dev/kubevela/projects/1#card-48203470) +- [CRD registry to manage the third-party dependencies easier](https://github.com/oam-dev/kubevela/projects/1#card-48200758) +- [Dapr trait as built-in capability](https://github.com/oam-dev/kubevela/projects/1#card-49368484) + +## User Experience + +- [Smart Dashboard based on CUE schema](https://github.com/oam-dev/kubevela/projects/1#card-48200031) +- [Make defining CUE templates easier](https://github.com/oam-dev/kubevela/projects/1#card-48200509) +- [Generate reference doc automatically for capability based on CUE schema](https://github.com/oam-dev/kubevela/projects/1#card-48200195) +- [Better application observability](https://github.com/oam-dev/kubevela/projects/1#card-47134946) + +## Integration with other projects + +- Integrate with ArgoCD to do GitOps style application deployment + +## Project improvement + +- [Contributing the modularizing Flagger changes to upstream](https://github.com/oam-dev/kubevela/projects/1#card-48198830) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2021-03-roadmap.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2021-03-roadmap.md new file mode 100644 index 00000000..bb53eccd --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/2021-03-roadmap.md @@ -0,0 +1,28 @@ +--- +title: Roadmap +--- + +Date: 2021-01-01 to 2021-03-30 + +## Core Platform + +- Add Application object as the deployment unit applied to k8s control plane. + - The new Application object will handle CUE template rendering on the server side. So the appfile would be translated to Application object directly without doing client side rendering. + - CLI/UI will be updated to replace ApplicationConfiguration and Component objects with Application object. +- Integrate Terraform as one of the core templating engines so that platform builders can add Terraform modules as Workloads/Traits into KubeVela. +- Re-architect API Server to have clean API and storage layer as [designed](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/APIServer-Catalog.md#2-api-design). 
+- Automatically sync Catalog server and display packages information as [designed](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/APIServer-Catalog.md#3-catalog-design). +- Add Rollout CRD to do native Workload and Application level application rollout management. +- Support intermediate store (e.g. ConfigMap) and JSON patch operations in data input/output. + +## User Experience + +- Rewrite dashboard to support up-to-date Vela object model. + - Support dynamic form rendering based on OpenAPI schema generated from Definition objects. + - Support displaying pages of applications, capabilities, catalogs. +- Automatically generate reference docs for capabilities and support displaying them in CLI/UI devtools. + +## Third-party integrations + +- Integrate with S2I (Source2Image) tooling like [Derrick](https://github.com/alibaba/derrick) to enable more developer-friendly workflow in appfile. +- Integrate with Dapr to enable end-to-end microservice application development and deployment workflow. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/README.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/README.md new file mode 100644 index 00000000..c97968d4 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/README.md @@ -0,0 +1,6 @@ +--- +title: KubeVela Roadmap +--- + +- [2021 Spring Roadmap](./2021-03-roadmap) +- [2020 Winter Roadmap](./2020-12-roadmap) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/template.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/template.md new file mode 100644 index 00000000..b8a21a53 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.1/roadmap/template.md @@ -0,0 +1,41 @@ +--- +title: Roadmap +--- + +Date: 2021-01-01 to 2021-03-30 + +> Note: add roadmap entry to to `roadmap/README.md` + +## Core Platform + +- K8s controllers +- Core workloads/traits +- Server components +- Architecture and model + +## User Experience + +- Devtools: CLI/UI/Appfile +- SDK/Framework + +## Deployment and Operations + +- Diagnostics and debugging + +## Third-party integrations + +- CI/CD, GitOps +- Application development framework +- Third party workloads/traits + +## Testing + +- Test infrastructure + - CI (Github Actions) and hosts + - codecov +- Unit/e2e test cases + +## Project elevation + +- Website/documentation improvement +- Contribution/development workflow improvement diff --git a/versioned_docs/version-v1.1/README.md b/versioned_docs/version-v1.1/README.md new file mode 100644 index 00000000..d9d1879c --- /dev/null +++ b/versioned_docs/version-v1.1/README.md @@ -0,0 +1,27 @@ +![alt](resources/KubeVela-03.png) + +*Make shipping applications more enjoyable.* + +# KubeVela + +KubeVela is a modern application platform that makes deploying and managing applications across today's hybrid, multi-cloud environments easier and faster. + +## Community + +- Slack: [CNCF Slack](https://slack.cncf.io/) #kubevela channel +- Gitter: [Discussion](https://gitter.im/oam-dev/community) +- Bi-weekly Community Call: [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs) + +## Installation + +Installation guide is available on [this section](install). + +## Quick Start + +Quick start is available on [this section](quick-start). + +## Contributing +Check out [CONTRIBUTING](https://github.com/oam-dev/kubevela/blob/master/CONTRIBUTING.md) to see how to develop with KubeVela. 
+
+## Code of Conduct
+KubeVela adopts the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/case-studies/canary-blue-green.md b/versioned_docs/version-v1.1/case-studies/canary-blue-green.md
new file mode 100644
index 00000000..3a915b22
--- /dev/null
+++ b/versioned_docs/version-v1.1/case-studies/canary-blue-green.md
@@ -0,0 +1,229 @@
+---
+title: Progressive Rollout with Istio
+---
+
+## Introduction
+
+The application deployment model in KubeVela is designed and implemented with extensibility at its heart. Hence, KubeVela can be easily integrated with any existing tool to superpower your application delivery with modern technologies such as Service Mesh immediately, without writing glue code or scripts.
+
+This guide will introduce how to use KubeVela and [Istio](https://istio.io/latest/) to do an advanced canary release process. In this process, KubeVela will help you to:
+- ship Istio capabilities to end users without asking them to become Istio experts (i.e. KubeVela will provide you a rollout trait as the abstraction);
+- design canary release steps and do rollout/rollback in a declarative workflow, instead of managing the whole process manually or with ugly scripts.
+
+We will use the well-known [bookinfo](https://istio.io/latest/docs/examples/bookinfo/?ie=utf-8&hl=en&docs-search=Canary) application as the sample.
+
+## Preparation
+
+Install the Istio addon.
+```shell
+vela addon enable istio
+```
+
+The default namespace needs to be labeled so that Istio will auto-inject sidecars.
+
+```shell
+kubectl label namespace default istio-injection=enabled
+```
+
+## Initial deployment
+
+Deploy the `bookinfo` Application:
+
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/first-deploy.yaml
+```
+
+The component architecture and relationships of the application are as follows:
+
+![book-info-struct](../resources/book-info-struct.jpg)
+
+This Application has four Components: the `productpage`, `ratings`, and `details` Components are configured with an `expose` Trait to expose cluster-level services, and the `reviews` Component has a canary-traffic Trait.
+
+The `productpage` Component is also configured with an `istio-gateway` Trait, allowing the Component to receive traffic coming from outside the cluster. The example below shows that it sets `gateway: ingressgateway` to use Istio's default gateway, and `hosts: "*"` to specify that any request can enter the gateway.
+```shell
+...
+  - name: productpage
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
+      port: 9080
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: istio-gateway
+        properties:
+          hosts:
+            - "*"
+          gateway: ingressgateway
+          match:
+            - exact: /productpage
+            - prefix: /static
+            - exact: /login
+            - prefix: /api/v1/products
+          port: 9080
+...
+```
+
+You can port-forward to the gateway as follows:
+```shell
+kubectl port-forward service/istio-ingressgateway -n istio-system 19082:80
+```
+Visit `127.0.0.1:19082` through the browser and you will see the following page.
+
+![pic-v2](../resources/canary-pic-v2.jpg)
+
+## Canary Release
+
+Next, we take the `reviews` Component as an example to simulate the complete process of a canary release: first upgrade part of the component instances while adjusting the traffic between the old and new versions at the same time, so as to achieve a progressive canary release.
+
+Execute the following command to update the application.
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/rollout-v2.yaml
+```
+This operation updates the image of the `reviews` Component from the previous v2 to v3. At the same time, the Rollout Trait of the `reviews` Component specifies that the number of target instances to be upgraded is two, and that they are upgraded in two batches, with one instance in each batch.
+
+In addition, a canary-traffic Trait has been added to the Component.
+```shell
+...
+  - name: reviews
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
+      port: 9080
+      volumes:
+        - name: wlp-output
+          type: emptyDir
+          mountPath: /opt/ibm/wlp/output
+        - name: tmp
+          type: emptyDir
+          mountPath: /tmp
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: rollout
+        properties:
+          targetSize: 2
+          rolloutBatches:
+            - replicas: 1
+            - replicas: 1
+
+      - type: canary-traffic
+        properties:
+          port: 9080
+...
+```
+
+This update also adds a Workflow to the Application to execute the upgrade, which contains three steps.
+
+The first step upgrades only the first batch of instances by specifying `batchPartition` equal to 0, and uses `traffic.weightedTargets` to switch 10% of the traffic to the new version of the instances.
+
+After the first step completes, the Workflow enters a paused state at the second step, waiting for the user to verify the service status.
+
+The third step of the Workflow completes the upgrade of the remaining instances and switches all traffic to the new component version.
+
+```shell
+...
+  workflow:
+    steps:
+      - name: rollout-1st-batch
+        type: canary-rollout
+        properties:
+          # just upgrade first batch of component
+          batchPartition: 0
+          traffic:
+            weightedTargets:
+              - revision: reviews-v1
+                weight: 90 # 90% of the traffic stays on the old version
+              - revision: reviews-v2
+                weight: 10 # 10% of the traffic shifts to the new version
+
+      # give user time to verify part of traffic shifting to newRevision
+      - name: manual-approval
+        type: suspend
+
+      - name: rollout-rest
+        type: canary-rollout
+        properties:
+          # upgrade all batches of component
+          batchPartition: 1
+          traffic:
+            weightedTargets:
+              - revision: reviews-v2
+                weight: 100 # 100% of the traffic shifts to the new version
+...
+```
+
+After the update is complete, visit the previous URL multiple times in the browser. There is about a 10% probability that you will see the new page below.
+
+![pic-v3](../resources/canary-pic-v3.jpg)
+
+You can see that the new version of the page has changed the previous black five-pointed stars to red ones.
+
+### Continue with Full Release
+
+If the service is found to meet expectations during manual verification, the Workflow needs to be resumed to complete the full release. You can do that by executing the following command.
+
+```shell
+vela workflow resume book-info
+```
+
+If you visit the webpage several more times in the browser, you will find that the five-pointed stars are now always red.
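+
+To double-check that the full release finished before moving on, you can inspect the application with `vela status` (a minimal sketch, assuming the Application is named `book-info` as in the resume command above):
+
+```shell
+# Shows the application's components, traits, and workflow step phases;
+# after the full release, every workflow step should report "succeeded".
+vela status book-info
+```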
+
+### Terminate the publishing Workflow and Roll Back
+
+If, during manual verification, the service is found not to meet expectations, you need to terminate the pre-defined release workflow and switch the traffic and instances back to the previous version.
+
+```shell
+kubectl apply -f https://github.com/oam-dev/kubevela/blob/master/docs/examples/canary-rollout-use-case/revert-in-middle.yaml
+```
+
+This update deletes the previously defined workflow, which terminates its execution.
+
+It also modifies the `targetRevision` of the Rollout Trait to point to the previous component version, `reviews-v1`. In addition, this update removes the canary-traffic Trait of the Component and puts all traffic on the single component version `reviews-v1`.
+
+```shell
+...
+  - name: reviews
+    type: webservice
+    properties:
+      image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
+      port: 9080
+      volumes:
+        - name: wlp-output
+          type: emptyDir
+          mountPath: /opt/ibm/wlp/output
+        - name: tmp
+          type: emptyDir
+          mountPath: /tmp
+
+    traits:
+      - type: expose
+        properties:
+          port:
+            - 9080
+
+      - type: rollout
+        properties:
+          targetRevision: reviews-v1
+          batchPartition: 1
+          targetSize: 2
+          # This means to roll out two more replicas in one batch.
+          rolloutBatches:
+            - replicas: 2
+...
+```
+
+Continue to visit the website in the browser, and you will find that the five-pointed stars have changed back to black.
+
diff --git a/versioned_docs/version-v1.1/case-studies/gitops.md b/versioned_docs/version-v1.1/case-studies/gitops.md
new file mode 100644
index 00000000..be872101
--- /dev/null
+++ b/versioned_docs/version-v1.1/case-studies/gitops.md
@@ -0,0 +1,176 @@
+---
+title: GitOps with Workflow
+---
+
+This section introduces how to use KubeVela in a GitOps environment, and why.
+
+## Introduction
+
+GitOps is a continuous delivery method that allows developers to automatically deploy applications by changing code and declarative configurations in a Git repository, with Git-centric operations such as PR and commit. For detailed benefits of GitOps, please check [this article](https://www.weave.works/blog/what-is-gitops-really).
+
+KubeVela, as a declarative application delivery control plane, can be naturally used in a GitOps approach, and this provides the following extra benefits to end users on top of the general GitOps benefits:
+- application delivery workflow (CD pipeline)
+  - i.e. KubeVela supports a pipeline-style application delivery process in GitOps, instead of simply declaring the final status;
+- handling deployment dependencies and designing topologies (DAG);
+- unified higher-level abstraction atop various GitOps tools' primitives;
+- declare, provision and consume cloud resources in a unified application definition;
+- various out-of-the-box deployment strategies (Canary, Blue-Green ...);
+- various out-of-the-box hybrid/multi-cloud deployment policies (placement rule, cluster selectors etc.);
+- Kustomize-style patch for multi-env deployment without the need to learn Kustomize at all;
+- ... and much more.
+
+In this section, we will introduce the steps of using KubeVela directly in a GitOps approach.
+
+> Note: you can also use it with existing tools such as ArgoCD with similar steps; detailed guides will be added in the following releases.
+
+## Setup
+
+First, set up a Git repository with `Application` files, some source code, and a Dockerfile.
+
+The code is very simple: it starts a service and displays the version defined in the code, as the sketch below shows.
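+
+A minimal sketch of what such a service could look like (this mirrors the snippet shown later in this guide; the initial version string `0.1.5` is an assumption based on the `curl` output below):
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/http"
+)
+
+// VERSION is what the GitOps flow bumps on each release.
+// 0.1.5 is an assumed initial value (see the curl output later in this guide).
+const VERSION = "0.1.5"
+
+func main() {
+	// Answer every request with the current version string.
+	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+		_, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
+	})
+	if err := http.ListenAndServe(":8088", nil); err != nil {
+		println(err.Error())
+	}
+}
+```
+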
+In the `Application`, we'll start a `webservice` for the code and add an `Ingress` trait for access.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+    - name: test-server
+      type: webservice
+      properties:
+        # replace the imagepolicy `default:gitops` with your policy later
+        image: # {"$imagepolicy": "default:gitops"}
+        port: 8088
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8088
+```
+
+We want users to build the image and push it to the image registry after changing the code, so we need to integrate with a CI tool like GitHub Actions or Jenkins to do it. In this example, we use GitHub Actions to build the image. For the code and configuration file, please refer to [Example Repo](https://github.com/oam-dev/samples/tree/master/9.GitOps_Demo).
+
+## Create the Git secret
+
+After the new image is pushed to the image registry, KubeVela will recognize the new image and update the `Application` file in the Git repository and cluster. Therefore, we need a secret with Git information for KubeVela to commit to the Git repository.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-secret
+type: kubernetes.io/basic-auth
+stringData:
+  username: <your username>
+  password: <your password>
+```
+
+## Create the Application that syncs with Git
+
+After completing the basic configuration above, we can create an Application file that syncs with the corresponding Git repository and image registry information:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: git-app
+spec:
+  components:
+    - name: gitops
+      type: kustomize
+      properties:
+        repoType: git
+        url: <your git repo address>
+        # your git secret
+        secretRef: my-secret
+        # the interval time to pull from git repo and image registry
+        pullInterval: 1m
+        git:
+          # the specific branch
+          branch: master
+          # the path that you want to listen
+          path: .
+        imageRepository:
+          image: <your image>
+          # if it's a private image registry, use `kubectl create secret docker-registry` to create the secret
+          # secretRef: imagesecret
+          filterTags:
+            # filter the image tag
+            pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
+            extract: '$ts'
+          # use the policy to sort the latest image tag and update
+          policy:
+            numerical:
+              order: asc
+          # add more commit message
+          commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"
+```
+
+Apply the file to the cluster and check the `Application` in the cluster. We can see that `git-app` automatically pulls the config from the Git repository and applies the application to the cluster:
+
+```shell
+$ vela ls
+
+APP                   COMPONENT     TYPE         TRAITS    PHASE     HEALTHY   STATUS   CREATED-TIME
+first-vela-workflow   test-server   webservice   ingress   running   healthy            2021-09-10 11:23:34 +0800 CST
+git-app               gitops        kustomize              running   healthy            2021-09-10 11:23:32 +0800 CST
+```
+
+We can `curl` the `Ingress` to see the current version of the code:
+
+```shell
+$ curl -H "Host:testsvc.example.com" http://<ingress-ip>
+Version: 0.1.5
+```
+
+## Modify the code to trigger automatic deployment
+
+After the first deployment, we can modify the code in the Git repository to have it applied automatically.
+
+Change the `Version` to `0.1.6` in the code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/http"
+)
+
+const VERSION = "0.1.6"
+
+func main() {
+	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+		_, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
+	})
+	if err := http.ListenAndServe(":8088", nil); err != nil {
+		println(err.Error())
+	}
+}
+```
+
+Commit the change to the Git repository, and we can see that our CI pipeline has built the image and pushed it to the image registry.
+
+KubeVela then listens to the image registry and updates the `Application` in the Git repository with the latest image tag. We can see that there is a commit from `kubevelabot`; the commit message always carries the prefix `Update image automatically.` You can use a format like `{{range .Updated.Images}}{{println .}}{{end}}` to include the image name in the `commitMessage` field.
+
+![alt](../resources/gitops-commit.png)
+
+> Note that the commit from `kubevelabot` will not trigger the pipeline again, since we filter out commits from KubeVela in the CI configuration.
+>
+> ```shell
+> jobs:
+>  publish:
+>    if: "!contains(github.event.head_commit.message, 'Update image automatically')"
+> ```
+
+Re-check the `Application` in the cluster, and we can see that the image of the `Application` has been updated after a while. We can `curl` the `Ingress` to see the current version:
+
+```shell
+$ curl -H "Host:testsvc.example.com" http://<ingress-ip>
+Version: 0.1.6
+```
+
+The `Version` has been updated successfully! Now we're done with everything from changing the code to automatically applying it to the cluster.
+
+KubeVela polls the latest information from the code and image repo periodically (at an interval that can be customized):
+* When the `Application` file in the Git repository is updated, KubeVela will update the `Application` in the cluster based on the latest configuration.
+* When a new tag is added to the image registry, KubeVela will pick the latest tag based on your policy and update it in the Git repository. When the files in the repository are updated, KubeVela repeats the first step and updates the files in the cluster, thus achieving automatic deployment.
+
+By integrating with GitOps, KubeVela helps users speed up deployment and simplify continuous deployment.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/case-studies/jenkins-cicd.md b/versioned_docs/version-v1.1/case-studies/jenkins-cicd.md
new file mode 100644
index 00000000..906f5190
--- /dev/null
+++ b/versioned_docs/version-v1.1/case-studies/jenkins-cicd.md
@@ -0,0 +1,148 @@
+---
+title: Jenkins CI/CD
+---
+
+This section introduces how to use KubeVela with existing CI/CD tools such as Jenkins, and why.
+
+## Introduction
+
+With a simple integration effort, KubeVela as a universal application delivery control plane can supercharge existing CI/CD tools with modern application deployment capabilities such as:
+- hybrid/multi-cloud delivery;
+- cross-environment promotion;
+- service mesh based application rollout/rollback;
+- handling deployment dependencies and topology (DAG);
+- declare, provision and consume cloud resources alongside your application;
+- enjoy the benefits of [GitOps](https://www.weave.works/blog/what-is-gitops-really) delivery without the need for a full GitOps transformation of your team;
+- ... and much more.
+
+The following guide will use Jenkins as an example to release a sample HTTP server application step by step. The application code can be found in [this GitHub repo](https://github.com/Somefive/KubeVela-demo-CICD-app).
+
+## Preparation
+
+Before combining KubeVela and Jenkins, ensure the following environments have already been set up.
+
+1. Deploy a Jenkins service with Docker support, including the related plugins and the credentials that will be used to access the image repository.
+2. Prepare a git repository with webhooks enabled, so that commits to the target branch trigger the Jenkins pipeline.
+3. Prepare a Kubernetes cluster for deployment. Install KubeVela and enable its apiserver. Ensure the KubeVela apiserver can be accessed from an external environment.
+
+## Combining Jenkins with KubeVela apiserver
+
+Deploy a Jenkins pipeline with the following Groovy script. You can change the git repository address, image address, apiserver address, and other environment configurations on demand. Your git repository should contain the `Dockerfile` and the `app.yaml` configuration file, which describe how to build the target image and which components the application contains.
+
+```groovy
+pipeline {
+    agent any
+    environment {
+        GIT_BRANCH = 'prod'
+        GIT_URL = 'https://github.com/Somefive/KubeVela-demo-CICD-app.git'
+        DOCKER_REGISTRY = 'https://registry.hub.docker.com'
+        DOCKER_CREDENTIAL = 'DockerHubCredential'
+        DOCKER_IMAGE = 'somefive/kubevela-demo-cicd-app'
+        APISERVER_URL = 'http://47.88.24.19'
+        APPLICATION_YAML = 'app.yaml'
+        APPLICATION_NAMESPACE = 'kubevela-demo-namespace'
+        APPLICATION_NAME = 'cicd-demo-app'
+    }
+    stages {
+        stage('Prepare') {
+            steps {
+                script {
+                    def checkout = git branch: env.GIT_BRANCH, url: env.GIT_URL
+                    env.GIT_COMMIT = checkout.GIT_COMMIT
+                    env.GIT_BRANCH = checkout.GIT_BRANCH
+                    echo "env.GIT_BRANCH=${env.GIT_BRANCH},env.GIT_COMMIT=${env.GIT_COMMIT}"
+                }
+            }
+        }
+        stage('Build') {
+            steps {
+                script {
+                    docker.withRegistry(env.DOCKER_REGISTRY, env.DOCKER_CREDENTIAL) {
+                        def customImage = docker.build(env.DOCKER_IMAGE)
+                        customImage.push()
+                    }
+                }
+            }
+        }
+        stage('Deploy') {
+            steps {
+                sh 'wget -q "https://github.com/mikefarah/yq/releases/download/v4.12.1/yq_linux_amd64"'
+                sh 'chmod +x yq_linux_amd64'
+                script {
+                    def app = sh (
+                        script: "./yq_linux_amd64 eval -o=json '.spec' ${env.APPLICATION_YAML} | sed -e 's/GIT_COMMIT/$GIT_COMMIT/g'",
+                        returnStdout: true
+                    )
+                    echo "app: ${app}"
+                    def response = httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', httpMode: 'POST', requestBody: app, url: "${env.APISERVER_URL}/v1/namespaces/${env.APPLICATION_NAMESPACE}/applications/${env.APPLICATION_NAME}"
+                    println('Status: '+response.status)
+                    println('Response: '+response.content)
+                }
+            }
+        }
+    }
+}
+```
+
+Push a commit to the target branch that the Jenkins pipeline watches. The webhook of the git repo will trigger the newly created Jenkins pipeline. This pipeline will automatically build the container image and push it to the image repository. Then it will send a POST request to the KubeVela apiserver, which will deploy `app.yaml` to the Kubernetes cluster. An example of `app.yaml` is shown below.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: kubevela-demo-app
+spec:
+  components:
+    - name: kubevela-demo-app-web
+      type: webservice
+      properties:
+        image: somefive/kubevela-demo-cicd-app
+        imagePullPolicy: Always
+        port: 8080
+      traits:
+        - type: rollout
+          properties:
+            rolloutBatches:
+              - replicas: 2
+              - replicas: 3
+            batchPartition: 0
+            targetSize: 5
+        - type: labels
+          properties:
+            jenkins-build-commit: GIT_COMMIT
+        - type: ingress
+          properties:
+            domain: <your domain>
+            http:
+              "/": 8088
+```
+
+The *GIT_COMMIT* identifier will be replaced by the current git commit id in the Jenkins pipeline. You can check the deployment status in Kubernetes through `kubectl`.
+
+```bash
+$ kubectl get app -n kubevela-demo-namespace
+NAME            COMPONENT               TYPE         PHASE     HEALTHY   STATUS   AGE
+cicd-demo-app   kubevela-demo-app-web   webservice   running   true               102s
+$ kubectl get deployment -n kubevela-demo-namespace
+NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
+kubevela-demo-app-web-v1   2/2     2            2           111s
+$ kubectl get ingress -n kubevela-demo-namespace
+NAME                    CLASS    HOSTS   ADDRESS          PORTS   AGE
+kubevela-demo-app-web   <none>   *       198.11.175.125   80      117s
+```
+
+In the deployed application, we use the `rollout` trait, which enables us to release 2 pods first for canary validation. After validation succeeds, remove `batchPartition: 0` from the `rollout` trait in the application configuration. After that, a complete release will be fired. This mechanism greatly improves the security and stability of the release process. You can adjust the rollout strategy depending on your scenario.
+
+```bash
+$ kubectl edit app -n kubevela-demo-namespace
+application.core.oam.dev/cicd-demo-app edited
+$ kubectl get deployment -n kubevela-demo-namespace
+NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
+kubevela-demo-app-web-v1   5/5     5            5           4m16s
+$ curl http://<your domain>/
+Version: 0.1.2
+```
+
+## More
+
+Refer to the [blog post](/blog/2021/09/02/kubevela-jenkins-cicd) for more details about deploying Jenkins + KubeVela and a more comprehensive demo of application rolling updates.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/case-studies/li-auto-inc.md b/versioned_docs/version-v1.1/case-studies/li-auto-inc.md
new file mode 100644
index 00000000..c68af37e
--- /dev/null
+++ b/versioned_docs/version-v1.1/case-studies/li-auto-inc.md
@@ -0,0 +1,3 @@
+---
+title: Practical Case
+---
diff --git a/versioned_docs/version-v1.1/case-studies/multi-cluster.md b/versioned_docs/version-v1.1/case-studies/multi-cluster.md
new file mode 100644
index 00000000..dd71a24c
--- /dev/null
+++ b/versioned_docs/version-v1.1/case-studies/multi-cluster.md
@@ -0,0 +1,305 @@
+---
+title: Multi-Cluster Application Deploy
+---
+
+This section introduces how to use KubeVela for multi-cluster application delivery, and why.
+
+## Introduction
+
+There are more and more situations in which organizations need multi-cluster technology for application delivery:
+
+* For scalability, a single Kubernetes cluster has its limit at around 5K nodes or fewer and is unable to handle large-scale application loads.
+* For stability/availability, applications can be deployed across multiple clusters for backup, which provides more stability and availability.
+* For security, you may need to deploy in different zones/areas as government policy requires.
+
+The following guide will introduce the multi-cluster capabilities that help you easily deploy an application to different environments.
+
+## Preparation
+
+You can simply join an existing cluster into KubeVela by specifying its KubeConfig as below.
+
+```shell script
+vela cluster join <your kubeconfig path>
+```
+
+It will use the field `context.cluster` in the KubeConfig as the cluster name automatically;
+you can also specify the name with the `--name` parameter. For example:
+
+```shell
+vela cluster join stage-cluster.kubeconfig --name cluster-staging
+vela cluster join prod-cluster.kubeconfig --name cluster-prod
+```
+
+After the clusters are joined, you can list all clusters currently managed by KubeVela.
+
+```bash
+$ vela cluster list
+CLUSTER           TYPE   ENDPOINT
+cluster-prod      tls    https://47.88.4.97:6443
+cluster-staging   tls    https://47.88.7.230:6443
+```
+
+You can also detach a cluster if you're not using it any more.
+
+```shell script
+$ vela cluster detach cluster-prod
+```
+
+If there's still any application running in the cluster, the command will be rejected.
+
+## Deploy Application to multiple clusters
+
+KubeVela regards a Kubernetes cluster as an environment, so you can deploy an application into
+one or more environments.
+
+Below is an example: deploy to a staging environment first, check that the application runs well,
+and finally promote it to the production environment.
+
+For different environments, the deployment configuration can also differ slightly. In the staging environment, we only need one replica of the webservice and do not need the worker. In the production environment, we set up 3 replicas of the webservice and enable the worker.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: example-app
+  namespace: default
+spec:
+  components:
+    - name: hello-world-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: scaler
+          properties:
+            replicas: 1
+    - name: data-worker
+      type: worker
+      properties:
+        image: busybox
+        cmd:
+          - sleep
+          - '1000000'
+  policies:
+    - name: example-multi-env-policy
+      type: env-binding
+      properties:
+        envs:
+          - name: staging
+            placement: # selecting the cluster to deploy to
+              clusterSelector:
+                name: cluster-staging
+            selector: # selecting which component to use
+              components:
+                - hello-world-server
+
+          - name: prod
+            placement:
+              clusterSelector:
+                name: cluster-prod
+            patch: # overlay patch on above components
+              components:
+                - name: hello-world-server
+                  type: webservice
+                  traits:
+                    - type: scaler
+                      properties:
+                        replicas: 3
+
+    - name: health-policy-demo
+      type: health
+      properties:
+        probeInterval: 5
+        probeTimeout: 10
+
+  workflow:
+    steps:
+      # deploy to staging env
+      - name: deploy-staging
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: staging
+
+      # manual check
+      - name: manual-approval
+        type: suspend
+
+      # deploy to prod env
+      - name: deploy-prod
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: prod
+```
+
+After the application is deployed, it will run through the workflow steps.
+
+> You can refer to the [Env Binding](../end-user/policies/envbinding) and [Health Check](../end-user/policies/health) policy user guides for parameter details.
+
+It will deploy the application to the staging environment first; you can check the `Application` status by:
+
+```shell
+> kubectl get application example-app
+NAME          COMPONENT            TYPE         PHASE                HEALTHY   STATUS      AGE
+example-app   hello-world-server   webservice   workflowSuspending   true      Ready:1/1   10s
+```
+
+We can see that the workflow is suspended at `manual-approval`:
+
+```yaml
+...
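+  # NOTE (added annotation): the fields below are a trimmed excerpt of the
+  # Application status. The `suspend`-type step named manual-approval together
+  # with `suspend: true` shows that the workflow is paused, waiting for
+  # `vela workflow resume`.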
+  status:
+    workflow:
+      appRevision: example-app-v1:44a6447e3653bcc2
+      contextBackend:
+        apiVersion: v1
+        kind: ConfigMap
+        name: workflow-example-app-context
+        uid: 56ddcde6-8a83-4ac3-bf94-d19f8f55eb3d
+      mode: StepByStep
+      steps:
+        - id: wek2b31nai
+          name: deploy-staging
+          phase: succeeded
+          type: deploy2env
+        - id: 7j5eb764mk
+          name: manual-approval
+          phase: succeeded
+          type: suspend
+      suspend: true
+      terminated: false
+      waitCount: 0
+```
+
+You can also check the health status in the `status.services` field below.
+
+```yaml
+...
+  status:
+    services:
+      - env: staging
+        healthy: true
+        message: 'Ready:1/1 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: test
+            uid: 6e6230a3-93f3-4dba-ba09-dd863b6c4a88
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+```
+
+You can use the `resume` command after everything is verified in the staging cluster:
+
+```shell
+> vela workflow resume example-app
+Successfully resume workflow: example-app
+```
+
+Recheck the `Application` status:
+
+```shell
+> kubectl get application example-app
+NAME          COMPONENT            TYPE         PHASE     HEALTHY   STATUS      AGE
+example-app   hello-world-server   webservice   running   true      Ready:1/1   62s
+```
+
+```yaml
+  status:
+    services:
+      - env: staging
+        healthy: true
+        message: 'Ready:1/1 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+      - env: prod
+        healthy: true
+        message: 'Ready:3/3 '
+        name: hello-world-server
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        traits:
+          - healthy: true
+            type: scaler
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+      - env: prod
+        healthy: true
+        message: 'Ready:1/1 '
+        name: data-worker
+        scopes:
+          - apiVersion: core.oam.dev/v1alpha2
+            kind: HealthScope
+            name: health-policy-demo
+            namespace: default
+            uid: 9174ac61-d262-444b-bb6c-e5f0caee706a
+        workloadDefinition:
+          apiVersion: apps/v1
+          kind: Deployment
+```
+
+All the step statuses in the workflow are `succeeded`:
+
+```yaml
+...
+  status:
+    workflow:
+      appRevision: example-app-v1:44a6447e3653bcc2
+      contextBackend:
+        apiVersion: v1
+        kind: ConfigMap
+        name: workflow-example-app-context
+        uid: e1e7bd2d-8743-4239-9de7-55a0dd76e5d3
+      mode: StepByStep
+      steps:
+        - id: q8yx7pr8wb
+          name: deploy-staging
+          phase: succeeded
+          type: deploy2env
+        - id: 6oxrtvki9o
+          name: manual-approval
+          phase: succeeded
+          type: suspend
+        - id: uk287p8c31
+          name: deploy-prod
+          phase: succeeded
+          type: deploy2env
+      suspend: false
+      terminated: false
+      waitCount: 0
+```
+
+## More use cases
+
+KubeVela can provide many strategies for deploying an application to multiple clusters by composing the env-binding policy and workflow steps.
+
+You can get a glimpse of how it works below:
+
+![alt](../resources/workflow-multi-env.png)
+
+More use cases of multi-cluster application deployment are coming soon.
\ No newline at end of file diff --git a/versioned_docs/version-v1.1/case-studies/workflow-with-ocm.md b/versioned_docs/version-v1.1/case-studies/workflow-with-ocm.md new file mode 100644 index 00000000..4098ea04 --- /dev/null +++ b/versioned_docs/version-v1.1/case-studies/workflow-with-ocm.md @@ -0,0 +1,5 @@ +--- +title: Practical Case +--- + +TBD \ No newline at end of file diff --git a/versioned_docs/version-v1.1/cli/vela.md b/versioned_docs/version-v1.1/cli/vela.md new file mode 100644 index 00000000..b8f8cdd8 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela.md @@ -0,0 +1,45 @@ +--- +title: vela +--- + + + +``` +vela [flags] +``` + +### Options + +``` + -e, --env string specify environment name for application + -h, --help help for vela +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities +* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh) +* [vela components](vela_components) - List components +* [vela config](vela_config) - Manage configurations +* [vela def](vela_def) - Manage Definitions +* [vela delete](vela_delete) - Delete an application +* [vela env](vela_env) - Manage environments +* [vela exec](vela_exec) - Execute command in a container +* [vela export](vela_export) - Export deploy manifests from appfile +* [vela help](vela_help) - Help about any command +* [vela init](vela_init) - Create scaffold for an application +* [vela logs](vela_logs) - Tail logs for application +* [vela ls](vela_ls) - List applications +* [vela port-forward](vela_port-forward) - Forward local ports to services in an application +* [vela show](vela_show) - Show the reference doc for a workload type or trait +* [vela status](vela_status) - Show status of an application +* [vela system](vela_system) - System management utilities +* [vela template](vela_template) - Manage templates +* [vela traits](vela_traits) - List traits +* [vela up](vela_up) - Apply an appfile +* [vela version](vela_version) - Prints out build version information +* [vela workflow](vela_workflow) - Operate application workflow in KubeVela +* [vela workloads](vela_workloads) - List workloads + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_addon.md b/versioned_docs/version-v1.1/cli/vela_addon.md new file mode 100644 index 00000000..e22f5db8 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_addon.md @@ -0,0 +1,30 @@ +--- +title: vela addon +--- + +List and get addon in KubeVela + +### Synopsis + +List and get addon in KubeVela + +### Options + +``` + -h, --help help for addon +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela addon disable](vela_addon_disable) - disable an addon +* [vela addon enable](vela_addon_enable) - enable an addon +* [vela addon list](vela_addon_list) - List addons + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_addon_disable.md b/versioned_docs/version-v1.1/cli/vela_addon_disable.md new file mode 100644 index 00000000..e5a0f9d6 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_addon_disable.md @@ -0,0 +1,37 @@ +--- +title: vela addon disable +--- + +disable an addon + +### Synopsis + +disable an addon in cluster + +``` +vela addon disable [flags] +``` + +### Examples + +``` +vela addon 
disable +``` + +### Options + +``` + -h, --help help for disable +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_addon_enable.md b/versioned_docs/version-v1.1/cli/vela_addon_enable.md new file mode 100644 index 00000000..5d9d7ad8 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_addon_enable.md @@ -0,0 +1,37 @@ +--- +title: vela addon enable +--- + +enable an addon + +### Synopsis + +enable an addon in cluster + +``` +vela addon enable [flags] +``` + +### Examples + +``` +vela addon enable +``` + +### Options + +``` + -h, --help help for enable +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_addon_list.md b/versioned_docs/version-v1.1/cli/vela_addon_list.md new file mode 100644 index 00000000..18780f50 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_addon_list.md @@ -0,0 +1,31 @@ +--- +title: vela addon list +--- + +List addons + +### Synopsis + +List addons in KubeVela + +``` +vela addon list [flags] +``` + +### Options + +``` + -h, --help help for list +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela addon](vela_addon) - List and get addon in KubeVela + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap.md b/versioned_docs/version-v1.1/cli/vela_cap.md new file mode 100644 index 00000000..3e0a6101 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap.md @@ -0,0 +1,31 @@ +--- +title: vela cap +--- + +Manage capability centers and installing/uninstalling capabilities + +### Synopsis + +Manage capability centers and installing/uninstalling capabilities + +### Options + +``` + -h, --help help for cap +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela cap center](vela_cap_center) - Manage Capability Center +* [vela cap install](vela_cap_install) - Install capability into cluster +* [vela cap ls](vela_cap_ls) - List capabilities from cap-center +* [vela cap uninstall](vela_cap_uninstall) - Uninstall capability from cluster + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_center.md b/versioned_docs/version-v1.1/cli/vela_cap_center.md new file mode 100644 index 00000000..4e45d2de --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_center.md @@ -0,0 +1,31 @@ +--- +title: vela cap center +--- + +Manage Capability Center + +### Synopsis + +Manage Capability Center with config, sync, list + +### Options + +``` + -h, --help help for center +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities +* [vela cap center config](vela_cap_center_config) - Configure (add if not exist) a capability center, default is local (built-in capabilities) +* [vela cap center 
ls](vela_cap_center_ls) - List all capability centers +* [vela cap center remove](vela_cap_center_remove) - Remove specified capability center +* [vela cap center sync](vela_cap_center_sync) - Sync capabilities from remote center, default to sync all centers + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_center_config.md b/versioned_docs/version-v1.1/cli/vela_cap_center_config.md new file mode 100644 index 00000000..fbd91ff4 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_center_config.md @@ -0,0 +1,38 @@ +--- +title: vela cap center config +--- + +Configure (add if not exist) a capability center, default is local (built-in capabilities) + +### Synopsis + +Configure (add if not exist) a capability center, default is local (built-in capabilities) + +``` +vela cap center config [flags] +``` + +### Examples + +``` +vela cap center config mycenter https://github.com/oam-dev/catalog/tree/master/registry +``` + +### Options + +``` + -h, --help help for config + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_center_ls.md b/versioned_docs/version-v1.1/cli/vela_cap_center_ls.md new file mode 100644 index 00000000..0b2a52e3 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_center_ls.md @@ -0,0 +1,37 @@ +--- +title: vela cap center ls +--- + +List all capability centers + +### Synopsis + +List all configured capability centers + +``` +vela cap center ls [flags] +``` + +### Examples + +``` +vela cap center ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_center_remove.md b/versioned_docs/version-v1.1/cli/vela_cap_center_remove.md new file mode 100644 index 00000000..6f615454 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_center_remove.md @@ -0,0 +1,37 @@ +--- +title: vela cap center remove +--- + +Remove specified capability center + +### Synopsis + +Remove specified capability center + +``` +vela cap center remove [flags] +``` + +### Examples + +``` +vela cap center remove mycenter +``` + +### Options + +``` + -h, --help help for remove +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_center_sync.md b/versioned_docs/version-v1.1/cli/vela_cap_center_sync.md new file mode 100644 index 00000000..babc8bcc --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_center_sync.md @@ -0,0 +1,37 @@ +--- +title: vela cap center sync +--- + +Sync capabilities from remote center, default to sync all centers + +### Synopsis + +Sync capabilities from remote center, default to sync all centers + +``` +vela cap center sync [centerName] [flags] +``` + +### Examples + +``` +vela cap center sync mycenter +``` + +### Options + +``` + -h, --help 
help for sync +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap center](vela_cap_center) - Manage Capability Center + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_install.md b/versioned_docs/version-v1.1/cli/vela_cap_install.md new file mode 100644 index 00000000..ea7b3b9f --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_install.md @@ -0,0 +1,38 @@ +--- +title: vela cap install +--- + +Install capability into cluster + +### Synopsis + +Install capability into cluster + +``` +vela cap install
/ [flags] +``` + +### Examples + +``` +vela cap install mycenter/route +``` + +### Options + +``` + -h, --help help for install + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_ls.md b/versioned_docs/version-v1.1/cli/vela_cap_ls.md new file mode 100644 index 00000000..107ea861 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_ls.md @@ -0,0 +1,37 @@ +--- +title: vela cap ls +--- + +List capabilities from cap-center + +### Synopsis + +List capabilities from cap-center + +``` +vela cap ls [cap-center] [flags] +``` + +### Examples + +``` +vela cap ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_cap_uninstall.md b/versioned_docs/version-v1.1/cli/vela_cap_uninstall.md new file mode 100644 index 00000000..b556fa80 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_cap_uninstall.md @@ -0,0 +1,38 @@ +--- +title: vela cap uninstall +--- + +Uninstall capability from cluster + +### Synopsis + +Uninstall capability from cluster + +``` +vela cap uninstall [flags] +``` + +### Examples + +``` +vela cap uninstall route +``` + +### Options + +``` + -h, --help help for uninstall + -t, --token string Github Repo token +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela cap](vela_cap) - Manage capability centers and installing/uninstalling capabilities + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_completion.md b/versioned_docs/version-v1.1/cli/vela_completion.md new file mode 100644 index 00000000..77e2c0c4 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_completion.md @@ -0,0 +1,32 @@ +--- +title: vela completion +--- + +Output shell completion code for the specified shell (bash or zsh) + +### Synopsis + +Output shell completion code for the specified shell (bash or zsh). +The shell code must be evaluated to provide interactive completion +of vela commands. + + +### Options + +``` + -h, --help help for completion +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela completion bash](vela_completion_bash) - generate autocompletions script for bash +* [vela completion zsh](vela_completion_zsh) - generate autocompletions script for zsh + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_completion_bash.md b/versioned_docs/version-v1.1/cli/vela_completion_bash.md new file mode 100644 index 00000000..56415693 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_completion_bash.md @@ -0,0 +1,41 @@ +--- +title: vela completion bash +--- + +generate autocompletions script for bash + +### Synopsis + +Generate the autocompletion script for Vela for the bash shell. 
+ +To load completions in your current shell session: +$ source <(vela completion bash) + +To load completions for every new session, execute once: +Linux: + $ vela completion bash > /etc/bash_completion.d/vela +MacOS: + $ vela completion bash > /usr/local/etc/bash_completion.d/vela + + +``` +vela completion bash +``` + +### Options + +``` + -h, --help help for bash +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh) + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_completion_zsh.md b/versioned_docs/version-v1.1/cli/vela_completion_zsh.md new file mode 100644 index 00000000..9b10a641 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_completion_zsh.md @@ -0,0 +1,38 @@ +--- +title: vela completion zsh +--- + +generate autocompletions script for zsh + +### Synopsis + +Generate the autocompletion script for Vela for the zsh shell. + +To load completions in your current shell session: +$ source <(vela completion zsh) + +To load completions for every new session, execute once: +$ vela completion zsh > "${fpath[1]}/_vela" + + +``` +vela completion zsh +``` + +### Options + +``` + -h, --help help for zsh +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela completion](vela_completion) - Output shell completion code for the specified shell (bash or zsh) + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_components.md b/versioned_docs/version-v1.1/cli/vela_components.md new file mode 100644 index 00000000..8b07ab8d --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_components.md @@ -0,0 +1,38 @@ +--- +title: vela components +--- + +List components + +### Synopsis + +List components + +``` +vela components +``` + +### Examples + +``` +vela components +``` + +### Options + +``` + --discover discover traits in capability centers + -h, --help help for components +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_config.md b/versioned_docs/version-v1.1/cli/vela_config.md new file mode 100644 index 00000000..1adeca7e --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_config.md @@ -0,0 +1,31 @@ +--- +title: vela config +--- + +Manage configurations + +### Synopsis + +Manage configurations + +### Options + +``` + -h, --help help for config +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela config del](vela_config_del) - Delete config +* [vela config get](vela_config_get) - Get data for a config +* [vela config ls](vela_config_ls) - List configs +* [vela config set](vela_config_set) - Set data for a config + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_config_del.md b/versioned_docs/version-v1.1/cli/vela_config_del.md new file mode 100644 index 00000000..24c23cda --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_config_del.md @@ -0,0 +1,37 @@ +--- +title: vela config del +--- + +Delete config + +### 
Synopsis + +Delete config + +``` +vela config del +``` + +### Examples + +``` +vela config del +``` + +### Options + +``` + -h, --help help for del +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_config_get.md b/versioned_docs/version-v1.1/cli/vela_config_get.md new file mode 100644 index 00000000..0badbba2 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_config_get.md @@ -0,0 +1,37 @@ +--- +title: vela config get +--- + +Get data for a config + +### Synopsis + +Get data for a config + +``` +vela config get +``` + +### Examples + +``` +vela config get +``` + +### Options + +``` + -h, --help help for get +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_config_ls.md b/versioned_docs/version-v1.1/cli/vela_config_ls.md new file mode 100644 index 00000000..71071694 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_config_ls.md @@ -0,0 +1,37 @@ +--- +title: vela config ls +--- + +List configs + +### Synopsis + +List all configs + +``` +vela config ls +``` + +### Examples + +``` +vela config ls +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_config_set.md b/versioned_docs/version-v1.1/cli/vela_config_set.md new file mode 100644 index 00000000..b8c466bb --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_config_set.md @@ -0,0 +1,37 @@ +--- +title: vela config set +--- + +Set data for a config + +### Synopsis + +Set data for a config + +``` +vela config set +``` + +### Examples + +``` +vela config set KEY=VALUE K2=V2 +``` + +### Options + +``` + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela config](vela_config) - Manage configurations + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def.md b/versioned_docs/version-v1.1/cli/vela_def.md new file mode 100644 index 00000000..972aef86 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def.md @@ -0,0 +1,35 @@ +--- +title: vela def +--- + +Manage Definitions + +### Synopsis + +Manage Definitions + +### Options + +``` + -h, --help help for def +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela def apply](vela_def_apply) - Apply definition +* [vela def del](vela_def_del) - Delete definition +* [vela def edit](vela_def_edit) - Edit definition +* [vela def get](vela_def_get) - Get definition +* [vela def init](vela_def_init) - Init a new definition +* [vela def list](vela_def_list) - List definitions +* [vela def render](vela_def_render) - Render definition +* [vela def vet](vela_def_vet) - Validate definition + +###### Auto generated by 
spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_def_apply.md b/versioned_docs/version-v1.1/cli/vela_def_apply.md
new file mode 100644
index 00000000..5457ceab
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_def_apply.md
@@ -0,0 +1,43 @@
+---
+title: vela def apply
+---
+
+Apply definition
+
+### Synopsis
+
+Apply definition from local storage to kubernetes cluster. It will apply file to vela-system namespace by default.
+
+```
+vela def apply DEFINITION.cue [flags]
+```
+
+### Examples
+
+```
+# Command below will apply the local my-webservice.cue file to kubernetes vela-system namespace
+> vela def apply my-webservice.cue
+# Command below will apply the ./defs/my-trait.cue file to kubernetes default namespace
+> vela def apply ./defs/my-trait.cue --namespace default
+# Command below will convert the ./defs/my-trait.cue file to kubernetes CRD object and print it without applying it to kubernetes
+> vela def apply ./defs/my-trait.cue --dry-run
+```
+
+### Options
+
+```
+      --dry-run            only build definition from CUE into CRD object without applying it to kubernetes clusters
+  -h, --help               help for apply
+  -n, --namespace string   Specify which namespace to apply. (default "vela-system")
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO

+* [vela def](vela_def) - Manage Definitions
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_def_del.md b/versioned_docs/version-v1.1/cli/vela_def_del.md
new file mode 100644
index 00000000..10e80e74
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_def_del.md
@@ -0,0 +1,40 @@
+---
+title: vela def del
+---
+
+Delete definition
+
+### Synopsis
+
+Delete definition in kubernetes cluster.
+
+```
+vela def del DEFINITION_NAME [flags]
+```
+
+### Examples
+
+```
+# Command below will delete TraitDefinition of annotations in default namespace
+> vela def del annotations -t trait -n default
+```
+
+### Options
+
+```
+  -h, --help               help for del
+  -n, --namespace string   Specify which namespace the definition locates.
+  -t, --type string        Specify the definition type of target. Valid types: component, trait, policy, workload, scope, workflow-step
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela def](vela_def) - Manage Definitions
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_def_edit.md b/versioned_docs/version-v1.1/cli/vela_def_edit.md
new file mode 100644
index 00000000..4c80862e
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_def_edit.md
@@ -0,0 +1,43 @@
+---
+title: vela def edit
+---
+
+Edit definition
+
+### Synopsis
+
+Edit definition in kubernetes. If type and namespace are not specified, the command will automatically search all possible results.
+By default, this command will use the vi editor and can be altered by setting EDITOR environment variable.
+
+```
+vela def edit NAME [flags]
+```
+
+### Examples
+
+```
+# Command below will edit the ComponentDefinition (and other definitions if exists) of webservice in kubernetes
+> vela def edit webservice
+# Command below will edit the TraitDefinition of ingress in vela-system namespace
+> vela def edit ingress --type trait --namespace vela-system
+```
+
+### Options
+
+```
+  -h, --help               help for edit
+  -n, --namespace string   Specify which namespace to get.
If empty, all namespaces will be searched. + -t, --type string Specify which definition type to get. If empty, all types will be searched. Valid types: scope, workflow-step, component, trait, policy, workload +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def_get.md b/versioned_docs/version-v1.1/cli/vela_def_get.md new file mode 100644 index 00000000..4a27727e --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def_get.md @@ -0,0 +1,42 @@ +--- +title: vela def get +--- + +Get definition + +### Synopsis + +Get definition from kubernetes cluster + +``` +vela def get NAME [flags] +``` + +### Examples + +``` +# Command below will get the ComponentDefinition(or other definitions if exists) of webservice in all namespaces +> vela def get webservice +# Command below will get the TraitDefinition of annotations in namespace vela-system +> vela def get annotations --type trait --namespace vela-system +``` + +### Options + +``` + -h, --help help for get + -n, --namespace string Specify which namespace to get. If empty, all namespaces will be searched. + -t, --type string Specify which definition type to get. If empty, all types will be searched. Valid types: component, trait, policy, workload, scope, workflow-step +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def_init.md b/versioned_docs/version-v1.1/cli/vela_def_init.md new file mode 100644 index 00000000..0e0c52cf --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def_init.md @@ -0,0 +1,48 @@ +--- +title: vela def init +--- + +Init a new definition + +### Synopsis + +Init a new definition with given arguments or interactively +* We support parsing a single YAML file (like kubernetes objects) into the cue-style template. However, we do not support variables in YAML file currently, which prevents users from directly feeding files like helm chart directly. We may introduce such features in the future. + +``` +vela def init DEF_NAME [flags] +``` + +### Examples + +``` +# Command below initiate an empty TraitDefinition named my-ingress +> vela def init my-ingress -t trait --desc "My ingress trait definition." > ./my-ingress.cue +# Command below initiate a definition named my-def interactively and save it to ./my-def.cue +> vela def init my-def -i --output ./my-def.cue +# Command below initiate a ComponentDefinition named my-webservice with the template parsed from ./template.yaml. +> vela def init my-webservice -i --template-yaml ./template.yaml +``` + +### Options + +``` + -d, --desc string Specify the description of the new definition. + -h, --help help for init + -i, --interactive Specify whether use interactive process to help generate definitions. + -o, --output string Specify the output path of the generated definition. If empty, the definition will be printed in the console. + -y, --template-yaml string Specify the template yaml file that definition will use to build the schema. If empty, a default template for the given definition type will be used. + -t, --type string Specify the type of the new definition. 
Valid types: workload, scope, workflow-step, component, trait, policy +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def_list.md b/versioned_docs/version-v1.1/cli/vela_def_list.md new file mode 100644 index 00000000..7ec60325 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def_list.md @@ -0,0 +1,42 @@ +--- +title: vela def list +--- + +List definitions + +### Synopsis + +List definitions in kubernetes cluster + +``` +vela def list [flags] +``` + +### Examples + +``` +# Command below will list all definitions in all namespaces +> vela def list +# Command below will list all definitions in the vela-system namespace +> vela def get annotations --type trait --namespace vela-system +``` + +### Options + +``` + -h, --help help for list + -n, --namespace string Specify which namespace to list. If empty, all namespaces will be searched. + -t, --type string Specify which definition type to list. If empty, all types will be searched. Valid types: policy, workload, scope, workflow-step, component, trait +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def_render.md b/versioned_docs/version-v1.1/cli/vela_def_render.md new file mode 100644 index 00000000..1df1cae7 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def_render.md @@ -0,0 +1,43 @@ +--- +title: vela def render +--- + +Render definition + +### Synopsis + +Render definition with cue format into kubernetes YAML format. Could be used to check whether the cue format definition is working as expected. If a directory is used as input, all cue definitions in the directory will be rendered. + +``` +vela def render DEFINITION.cue [flags] +``` + +### Examples + +``` +# Command below will render my-webservice.cue into YAML format and print it out. +> vela def render my-webservice.cue +# Command below will render my-webservice.cue and save it in my-webservice.yaml. +> vela def render my-webservice.cue -o my-webservice.yaml# Command below will render all cue format definitions in the ./defs/cue/ directory and save the YAML objects in ./defs/yaml/. +> vela def render ./defs/cue/ -o ./defs/yaml/ +``` + +### Options + +``` + -h, --help help for render + --message string Specify the header message of the generated YAML file. For example, declaring author information. + -o, --output string Specify the output path of the rendered definition YAML. If empty, the definition will be printed in the console. If input is a directory, the output path is expected to be a directory as well. 
+``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_def_vet.md b/versioned_docs/version-v1.1/cli/vela_def_vet.md new file mode 100644 index 00000000..b9437ec7 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_def_vet.md @@ -0,0 +1,39 @@ +--- +title: vela def vet +--- + +Validate definition + +### Synopsis + +Validate definition file by checking whether it has the valid cue format with fields set correctly +* Currently, this command only checks the cue format. This function is still working in progress and we will support more functional validation mechanism in the future. + +``` +vela def vet DEFINITION.cue [flags] +``` + +### Examples + +``` +# Command below will validate the my-def.cue file. +> vela def vet my-def.cue +``` + +### Options + +``` + -h, --help help for vet +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela def](vela_def) - Manage Definitions + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_delete.md b/versioned_docs/version-v1.1/cli/vela_delete.md new file mode 100644 index 00000000..ed46acb0 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_delete.md @@ -0,0 +1,38 @@ +--- +title: vela delete +--- + +Delete an application + +### Synopsis + +Delete an application + +``` +vela delete APP_NAME +``` + +### Examples + +``` +vela delete frontend +``` + +### Options + +``` + -h, --help help for delete + --svc string delete only the specified service in this app +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_env.md b/versioned_docs/version-v1.1/cli/vela_env.md new file mode 100644 index 00000000..15e5f732 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_env.md @@ -0,0 +1,31 @@ +--- +title: vela env +--- + +Manage environments + +### Synopsis + +Manage environments + +### Options + +``` + -h, --help help for env +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela](vela) - +* [vela env delete](vela_env_delete) - Delete environment +* [vela env init](vela_env_init) - Create environments +* [vela env ls](vela_env_ls) - List environments +* [vela env set](vela_env_set) - Set an environment + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_env_delete.md b/versioned_docs/version-v1.1/cli/vela_env_delete.md new file mode 100644 index 00000000..7425148b --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_env_delete.md @@ -0,0 +1,37 @@ +--- +title: vela env delete +--- + +Delete environment + +### Synopsis + +Delete environment + +``` +vela env delete +``` + +### Examples + +``` +vela env delete test +``` + +### Options + +``` + -h, --help help for delete +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git 
a/versioned_docs/version-v1.1/cli/vela_env_init.md b/versioned_docs/version-v1.1/cli/vela_env_init.md new file mode 100644 index 00000000..0a3cb37b --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_env_init.md @@ -0,0 +1,40 @@ +--- +title: vela env init +--- + +Create environments + +### Synopsis + +Create environment and set the currently using environment + +``` +vela env init +``` + +### Examples + +``` +vela env init test --namespace test --email my@email.com +``` + +### Options + +``` + --domain string specify domain your applications + --email string specify email for production TLS Certificate notification + -h, --help help for init + --namespace string specify K8s namespace for env +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_env_ls.md b/versioned_docs/version-v1.1/cli/vela_env_ls.md new file mode 100644 index 00000000..81808263 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_env_ls.md @@ -0,0 +1,37 @@ +--- +title: vela env ls +--- + +List environments + +### Synopsis + +List all environments + +``` +vela env ls +``` + +### Examples + +``` +vela env ls [env-name] +``` + +### Options + +``` + -h, --help help for ls +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_env_set.md b/versioned_docs/version-v1.1/cli/vela_env_set.md new file mode 100644 index 00000000..f85a5169 --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_env_set.md @@ -0,0 +1,37 @@ +--- +title: vela env set +--- + +Set an environment + +### Synopsis + +Set an environment as the current using one + +``` +vela env set +``` + +### Examples + +``` +vela env set test +``` + +### Options + +``` + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -e, --env string specify environment name for application +``` + +### SEE ALSO + +* [vela env](vela_env) - Manage environments + +###### Auto generated by spf13/cobra on 19-Aug-2021 diff --git a/versioned_docs/version-v1.1/cli/vela_exec.md b/versioned_docs/version-v1.1/cli/vela_exec.md new file mode 100644 index 00000000..793f810d --- /dev/null +++ b/versioned_docs/version-v1.1/cli/vela_exec.md @@ -0,0 +1,35 @@ +--- +title: vela exec +--- + +Execute command in a container + +### Synopsis + +Execute command in a container + +``` +vela exec [flags] APP_NAME -- COMMAND [args...] 
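+# A sketch of a typical invocation (the app name is illustrative; the same
+# invocation appears later in this guide's "Execute Commands in Container" page):
+#   vela exec testapp -- /bin/sh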
+```
+
+### Options
+
+```
+  -h, --help                           help for exec
+      --pod-running-timeout duration   The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
+  -i, --stdin                          Pass stdin to the container (default true)
+  -s, --svc string                     service name
+  -t, --tty                            Stdin is a TTY (default true)
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_export.md b/versioned_docs/version-v1.1/cli/vela_export.md
new file mode 100644
index 00000000..4e43cb36
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_export.md
@@ -0,0 +1,32 @@
+---
+title: vela export
+---
+
+Export deploy manifests from appfile
+
+### Synopsis
+
+Export deploy manifests from appfile
+
+```
+vela export
+```
+
+### Options
+
+```
+  -f, -- string   specify file path for appfile
+  -h, --help      help for export
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_help.md b/versioned_docs/version-v1.1/cli/vela_help.md
new file mode 100644
index 00000000..d108da7c
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_help.md
@@ -0,0 +1,27 @@
+---
+title: vela help
+---
+
+Help about any command
+
+```
+vela help [command]
+```
+
+### Options
+
+```
+  -h, --help   help for help
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_init.md b/versioned_docs/version-v1.1/cli/vela_init.md
new file mode 100644
index 00000000..ff76e7c4
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_init.md
@@ -0,0 +1,38 @@
+---
+title: vela init
+---
+
+Create scaffold for an application
+
+### Synopsis
+
+Create scaffold for an application
+
+```
+vela init
+```
+
+### Examples
+
+```
+vela init
+```
+
+### Options
+
+```
+  -h, --help          help for init
+      --render-only   Render vela.yaml in the current dir without deploying
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_logs.md b/versioned_docs/version-v1.1/cli/vela_logs.md
new file mode 100644
index 00000000..07376d6b
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_logs.md
@@ -0,0 +1,32 @@
+---
+title: vela logs
+---
+
+Tail logs for application
+
+### Synopsis
+
+Tail logs for application
+
+```
+vela logs [flags]
+```
+
+### Options
+
+```
+  -h, --help            help for logs
+  -o, --output string   output format for logs, support: [default, raw, json] (default "default")
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_ls.md b/versioned_docs/version-v1.1/cli/vela_ls.md
new file mode 100644
index 00000000..55bb9d2c
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_ls.md
@@ -0,0 +1,38 @@
+---
+title: vela ls
+---
+
+List applications
+
+### Synopsis
+
+List all applications in cluster
+
+```
+vela ls
+```
+
+### Examples
+
+```
+vela ls
+```
+
+### Options
+
+```
+  -h, --help               help for ls
+  -n, --namespace string   specify the namespace of the applications to list; default is the current env namespace
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_port-forward.md b/versioned_docs/version-v1.1/cli/vela_port-forward.md
new file mode 100644
index 00000000..b5b34bfa
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_port-forward.md
@@ -0,0 +1,40 @@
+---
+title: vela port-forward
+---
+
+Forward local ports to services in an application
+
+### Synopsis
+
+Forward local ports to services in an application
+
+```
+vela port-forward APP_NAME [flags]
+```
+
+### Examples
+
+```
+port-forward APP_NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
+```
+
+### Options
+
+```
+      --address strings                Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. When localhost is supplied, vela will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind. (default [localhost])
+  -h, --help                           help for port-forward
+      --pod-running-timeout duration   The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
+      --route                          forward ports from route trait service
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_show.md b/versioned_docs/version-v1.1/cli/vela_show.md
new file mode 100644
index 00000000..70910e40
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_show.md
@@ -0,0 +1,38 @@
+---
+title: vela show
+---
+
+Show the reference doc for a workload type or trait
+
+### Synopsis
+
+Show the reference doc for a workload type or trait
+
+```
+vela show [flags]
+```
+
+### Examples
+
+```
+show webservice
+```
+
+### Options
+
+```
+  -h, --help   help for show
+      --web    start web doc site
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_status.md b/versioned_docs/version-v1.1/cli/vela_status.md
new file mode 100644
index 00000000..c12e0a2d
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_status.md
@@ -0,0 +1,38 @@
+---
+title: vela status
+---
+
+Show status of an application
+
+### Synopsis
+
+Show status of an application, including workloads and traits of each service.
+
+```
+vela status APP_NAME [flags]
+```
+
+### Examples
+
+```
+vela status APP_NAME
+```
+
+### Options
+
+```
+  -h, --help         help for status
+  -s, --svc string   service name
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_system.md b/versioned_docs/version-v1.1/cli/vela_system.md
new file mode 100644
index 00000000..70cea01b
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_system.md
@@ -0,0 +1,31 @@
+---
+title: vela system
+---
+
+System management utilities
+
+### Synopsis
+
+System management utilities
+
+### Options
+
+```
+  -h, --help   help for system
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+* [vela system cue-packages](vela_system_cue-packages) - List cue package
+* [vela system dry-run](vela_system_dry-run) - Dry Run an application, and output the K8s resources as the result to stdout
+* [vela system info](vela_system_info) - Show vela client and cluster chartPath
+* [vela system live-diff](vela_system_live-diff) - Dry-run an application, and do diff on a specific app revision
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_system_cue-packages.md b/versioned_docs/version-v1.1/cli/vela_system_cue-packages.md
new file mode 100644
index 00000000..6e8ec190
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_system_cue-packages.md
@@ -0,0 +1,37 @@
+---
+title: vela system cue-packages
+---
+
+List cue package
+
+### Synopsis
+
+List cue package
+
+```
+vela system cue-packages
+```
+
+### Examples
+
+```
+vela system cue-packages
+```
+
+### Options
+
+```
+  -h, --help   help for cue-packages
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_system_dry-run.md b/versioned_docs/version-v1.1/cli/vela_system_dry-run.md
new file mode 100644
index 00000000..056feefd
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_system_dry-run.md
@@ -0,0 +1,39 @@
+---
+title: vela system dry-run
+---
+
+Dry Run an application, and output the K8s resources as the result to stdout
+
+### Synopsis
+
+Dry Run an application, and output the K8s resources as the result to stdout; only CUE templates are supported for now
+
+```
+vela system dry-run
+```
+
+### Examples
+
+```
+vela dry-run
+```
+
+### Options
+
+```
+  -d, --definition string   specify a definition file or directory; it will only be used in dry-run rather than applied to the K8s cluster
+  -f, --file string         application file name (default "./app.yaml")
+  -h, --help                help for dry-run
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_system_info.md b/versioned_docs/version-v1.1/cli/vela_system_info.md
new file mode 100644
index 00000000..8b068c74
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_system_info.md
@@ -0,0 +1,31 @@
+---
+title: vela system info
+---
+
+Show vela client and cluster chartPath
+
+### Synopsis
+
+Show vela client and cluster chartPath
+
+```
+vela system info [flags]
+```
+
+### Options
+
+```
+  -h, --help   help for info
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_system_live-diff.md b/versioned_docs/version-v1.1/cli/vela_system_live-diff.md
new file mode 100644
index 00000000..f37aeaca
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_system_live-diff.md
@@ -0,0 +1,41 @@
+---
+title: vela system live-diff
+---
+
+Dry-run an application, and do diff on a specific app revision
+
+### Synopsis
+
+Dry-run an application, and do diff on a specific app revision. The provided capability definitions will be used during dry-run. If any capabilities used in the app are not found in the provided ones, it will try to find them in the cluster.
+
+```
+vela system live-diff
+```
+
+### Examples
+
+```
+vela live-diff -f app-v2.yaml -r app-v1 --context 10
+```
+
+### Options
+
+```
+  -r, --Revision string     specify an application Revision name; by default, it will compare with the latest Revision
+  -c, --context int         output the number of lines of context around changes; by default, show all unchanged lines (default -1)
+  -d, --definition string   specify a file or directory containing capability definitions; they will only be used in dry-run rather than applied to the K8s cluster
+  -f, --file string         application file name (default "./app.yaml")
+  -h, --help                help for live-diff
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela system](vela_system) - System management utilities
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_template.md b/versioned_docs/version-v1.1/cli/vela_template.md
new file mode 100644
index 00000000..11841c78
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_template.md
@@ -0,0 +1,28 @@
+---
+title: vela template
+---
+
+Manage templates
+
+### Synopsis
+
+Manage templates
+
+### Options
+
+```
+  -h, --help   help for template
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+* [vela template context](vela_template_context) - Show context parameters
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_template_context.md b/versioned_docs/version-v1.1/cli/vela_template_context.md
new file mode 100644
index 00000000..6a4ccdfd
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_template_context.md
@@ -0,0 +1,37 @@
+---
+title: vela template context
+---
+
+Show context parameters
+
+### Synopsis
+
+Show context parameters
+
+```
+vela template context
+```
+
+### Examples
+
+```
+vela template context
+```
+
+### Options
+
+```
+  -h, --help   help for context
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela template](vela_template) - Manage templates
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_traits.md b/versioned_docs/version-v1.1/cli/vela_traits.md
new file mode 100644
index 00000000..4f962aca
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_traits.md
@@ -0,0 +1,38 @@
+---
+title: vela traits
+---
+
+List traits
+
+### Synopsis
+
+List traits
+
+```
+vela traits
+```
+
+### Examples
+
+```
+vela traits
+```
+
+### Options
+
+```
+      --discover   discover traits in capability centers
+  -h, --help       help for traits
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_up.md b/versioned_docs/version-v1.1/cli/vela_up.md
new file mode 100644
index 00000000..01a92b48
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_up.md
@@ -0,0 +1,32 @@
+---
+title: vela up
+---
+
+Apply an appfile
+
+### Synopsis
+
+Apply an appfile
+
+```
+vela up
+```
+
+### Options
+
+```
+  -f, -- string   specify file path for appfile
+  -h, --help      help for up
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_version.md b/versioned_docs/version-v1.1/cli/vela_version.md
new file mode 100644
index 00000000..38877295
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_version.md
@@ -0,0 +1,31 @@
+---
+title: vela version
+---
+
+Prints out build version information
+
+### Synopsis
+
+Prints out build version information
+
+```
+vela version [flags]
+```
+
+### Options
+
+```
+  -h, --help   help for version
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workflow.md b/versioned_docs/version-v1.1/cli/vela_workflow.md
new file mode 100644
index 00000000..bc96ae0f
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workflow.md
@@ -0,0 +1,31 @@
+---
+title: vela workflow
+---
+
+Operate application workflow in KubeVela
+
+### Synopsis
+
+Operate application workflow in KubeVela
+
+### Options
+
+```
+  -h, --help   help for workflow
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+* [vela workflow restart](vela_workflow_restart) - Restart an application workflow
+* [vela workflow resume](vela_workflow_resume) - Resume a suspended application workflow
+* [vela workflow suspend](vela_workflow_suspend) - Suspend an application workflow
+* [vela workflow terminate](vela_workflow_terminate) - Terminate an application workflow
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workflow_restart.md b/versioned_docs/version-v1.1/cli/vela_workflow_restart.md
new file mode 100644
index 00000000..9ddcf06f
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workflow_restart.md
@@ -0,0 +1,37 @@
+---
+title: vela workflow restart
+---
+
+Restart an application workflow
+
+### Synopsis
+
+Restart an application workflow in cluster
+
+```
+vela workflow restart [flags]
+```
+
+### Examples
+
+```
+vela workflow restart
+```
+
+### Options
+
+```
+  -h, --help   help for restart
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workflow_resume.md b/versioned_docs/version-v1.1/cli/vela_workflow_resume.md
new file mode 100644
index 00000000..a2150b5b
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workflow_resume.md
@@ -0,0 +1,37 @@
+---
+title: vela workflow resume
+---
+
+Resume a suspended application workflow
+
+### Synopsis
+
+Resume a suspended application workflow in cluster
+
+```
+vela workflow resume [flags]
+```
+
+### Examples
+
+```
+vela workflow resume
+```
+
+### Options
+
+```
+  -h, --help   help for resume
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workflow_suspend.md b/versioned_docs/version-v1.1/cli/vela_workflow_suspend.md
new file mode 100644
index 00000000..1c328275
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workflow_suspend.md
@@ -0,0 +1,37 @@
+---
+title: vela workflow suspend
+---
+
+Suspend an application workflow
+
+### Synopsis
+
+Suspend an application workflow in cluster
+
+```
+vela workflow suspend [flags]
+```
+
+### Examples
+
+```
+vela workflow suspend
+```
+
+### Options
+
+```
+  -h, --help   help for suspend
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workflow_terminate.md b/versioned_docs/version-v1.1/cli/vela_workflow_terminate.md
new file mode 100644
index 00000000..3a865925
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workflow_terminate.md
@@ -0,0 +1,37 @@
+---
+title: vela workflow terminate
+---
+
+Terminate an application workflow
+
+### Synopsis
+
+Terminate an application workflow in cluster
+
+```
+vela workflow terminate [flags]
+```
+
+### Examples
+
+```
+vela workflow terminate
+```
+
+### Options
+
+```
+  -h, --help   help for terminate
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela workflow](vela_workflow) - Operate application workflow in KubeVela
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/cli/vela_workloads.md b/versioned_docs/version-v1.1/cli/vela_workloads.md
new file mode 100644
index 00000000..7b460e36
--- /dev/null
+++ b/versioned_docs/version-v1.1/cli/vela_workloads.md
@@ -0,0 +1,37 @@
+---
+title: vela workloads
+---
+
+List workloads
+
+### Synopsis
+
+List workloads
+
+```
+vela workloads
+```
+
+### Examples
+
+```
+vela workloads
+```
+
+### Options
+
+```
+  -h, --help   help for workloads
+```
+
+### Options inherited from parent commands
+
+```
+  -e, --env string   specify environment name for application
+```
+
+### SEE ALSO
+
+* [vela](vela) - 
+
+###### Auto generated by spf13/cobra on 19-Aug-2021
diff --git a/versioned_docs/version-v1.1/core-concepts/application.md b/versioned_docs/version-v1.1/core-concepts/application.md
new file mode 100644
index 00000000..e7979c18
--- /dev/null
+++ b/versioned_docs/version-v1.1/core-concepts/application.md
@@ -0,0 +1,165 @@
+---
+title: Application
+---
+
+KubeVela takes Application as the basis of modeling, and uses Components and Traits to complete an application deployment plan. After you are familiar with these core concepts, you can proceed with the user manual and administrator manual according to your needs.
+
+## Application
+
+In this model, the YAML file is the bearer of the application deployment plan. A typical YAML example is as follows:
+
+```yaml
+# sample.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend              # e.g. we want to deploy a frontend component serving as a web service
+      type: webservice
+      properties:
+        image: nginx
+      traits:
+        - type: cpuscaler         # e.g. we add a CPU based auto scaler to this component
+          properties:
+            min: 1
+            max: 10
+            cpuPercent: 60
+        - type: sidecar           # add a sidecar container into this component
+          properties:
+            name: "sidecar-test"
+            image: "fluentd"
+    - name: backend
+      type: worker
+      properties:
+        image: busybox
+        cmd:
+          - sleep
+          - '1000'
+  policies:
+    - name: demo-policy
+      type: env-binding
+      properties:
+        envs:
+          - name: test
+            placement:
+              clusterSelector:
+                name: cluster-test
+          - name: prod
+            placement:
+              clusterSelector:
+                name: cluster-prod
+  workflow:
+    steps:
+      # workflow step name
+      - name: deploy-test-env
+        type: deploy2env
+        properties:
+          # Specify the policy name
+          policy: demo-policy
+          # Specify the env name in the policy
+          env: test
+      - name: manual-approval
+        # suspend stops the workflow and waits here until the condition changes
+        type: suspend
+      - name: deploy-prod-env
+        type: deploy2env
+        properties:
+          # Specify the policy name
+          policy: demo-policy
+          # Specify the env name in the policy
+          env: prod
+```
+
+
+The fields here correspond to:
+
+- apiVersion: The OAM API version used.
+- kind: The CRD resource type; here it is Application.
+- metadata: Business-related information, such as the name. Here, we want to create a website.
+- spec: Describes what we need to deliver and tells Kubernetes what to create. Here we put the `components`, `policies` and `workflow` of KubeVela.
+- components: KubeVela's component system.
+- traits: KubeVela's operation and maintenance feature system; traits work at the component level.
+- policies: KubeVela's application-level policies.
+- workflow: KubeVela's application-level deployment workflow; you can customize every deployment step with it.
+
+## Components
+
+KubeVela has some built-in component types; you can find them by using the [KubeVela CLI](../install#3-get-kubevela-cli):
+
+```
+vela components
+```
+
+The output shows:
+
+```
+NAME         NAMESPACE    WORKLOAD                                 DESCRIPTION
+helm         vela-system  autodetects.core.oam.dev                 helm release is a group of K8s resources from either git repository or helm repo
+kustomize    vela-system  autodetects.core.oam.dev                 kustomize can fetching, building, updating and applying Kustomize manifests from git repo.
+task         vela-system  jobs.batch                               Describes jobs that run code or a script to completion.
+webservice   vela-system  deployments.apps                         Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
+worker       vela-system  deployments.apps                         Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic.
+alibaba-ack  vela-system  configurations.terraform.core.oam.dev   Terraform configuration for Alibaba Cloud ACK cluster
+alibaba-oss  vela-system  configurations.terraform.core.oam.dev   Terraform configuration for Alibaba Cloud OSS object
+alibaba-rds  vela-system  configurations.terraform.core.oam.dev   Terraform configuration for Alibaba Cloud RDS object
+```
+
+You can continue to use the [Helm](../end-user/components/helm) and [Kustomize](../end-user/components/kustomize) components to deploy your application; an application is a deployment plan.
+
+If you're a platform builder who's familiar with Kubernetes, you can learn to [define your custom component](../platform-engineers/components/custom-component) to extend every kind of component in KubeVela. In particular, the [Terraform Component](../platform-engineers/components/component-terraform) is one of the best practices.
+
+
+## Traits
+
+KubeVela also has many built-in traits; search for them by using the [KubeVela CLI](../install#3-get-kubevela-cli):
+
+```
+vela traits
+```
+
+The result can be:
+
+```
+NAME         NAMESPACE    APPLIES-TO          CONFLICTS-WITH  POD-DISRUPTIVE  DESCRIPTION
+annotations  vela-system  deployments.apps                    true            Add annotations for your Workload.
+cpuscaler    vela-system  webservice,worker                   false           Automatically scale the component based on CPU usage.
+ingress      vela-system  webservice,worker                   false           Enable public web traffic for the component.
+labels       vela-system  deployments.apps                    true            Add labels for your Workload.
+scaler       vela-system  webservice,worker                   false           Manually scale the component.
+sidecar      vela-system  deployments.apps                    true            Inject a sidecar container to the component.
+```
+
+You can learn how to bind traits from these detailed docs, such as the [ingress trait](../end-user/traits/ingress).
+
+If you're a platform builder who's familiar with Kubernetes, you can learn to [define your custom trait](../platform-engineers/traits/customize-trait) to extend any operational capability for your users.
+
+## Policy
+
+Policy allows you to define application-level capabilities, such as health checks, security groups, firewalls, SLOs and so on.
+
+Policy is similar to trait, but a trait works on a component while a policy works on the whole application.
+
+## Workflow
+
+In KubeVela, Workflow allows users to glue various operation and maintenance tasks into one process, and achieve automated and rapid delivery of cloud-native applications to any hybrid environment. From a design point of view, the Workflow customizes the control logic: it does not simply apply all resources, but also provides process-oriented flexibility. For example, Workflow can help us implement complex operations such as pause, manual verification, waiting states, data flow transmission, multi-environment grayscale release, and A/B testing.
+
+The Workflow is based on a modular design. Each Workflow module is defined by a Definition CRD and provided to users for operation through the K8s API. As a "super glue", a Workflow module can combine any of your tools and processes through the CUE language. This allows you to create your own modules through a powerful declarative language and cloud-native APIs.
+
+> Note that workflow works at the application level: if you specify a workflow, resources won't be deployed unless some workflow step deploys them.
+
+If you're a platform builder who's familiar with Kubernetes, you can learn to [define your own workflow step by using CUE](../platform-engineers/workflow/workflow).
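+
+As a minimal sketch, the workflow fragment below (a re-cap of the sample at the top of this page; the step names and the `demo-policy` reference are illustrative) deploys a test env and then pauses until someone resumes the workflow:
+
+```yaml
+workflow:
+  steps:
+    - name: deploy-test-env     # deploy using the env declared in the policy
+      type: deploy2env
+      properties:
+        policy: demo-policy
+        env: test
+    - name: manual-approval     # suspend: wait here until the workflow is resumed
+      type: suspend
+```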
+
+## What's Next
+
+Here are some recommended next steps:
+
+- Learn KubeVela's user guide to know how to deploy components, starting from the [helm component](../end-user/components/helm).
+- Learn KubeVela's admin guide to learn more about [the OAM model](../platform-engineers/oam/oam-model).
diff --git a/versioned_docs/version-v1.1/core-concepts/architecture.md b/versioned_docs/version-v1.1/core-concepts/architecture.md
new file mode 100644
index 00000000..e50253b5
--- /dev/null
+++ b/versioned_docs/version-v1.1/core-concepts/architecture.md
@@ -0,0 +1,64 @@
+---
+title: Architecture
+---
+
+The overall architecture of KubeVela is shown below:
+
+![alt](../resources/system-arch.png)
+
+
+## API
+
+The API layer provides KubeVela APIs exposed to users for building application delivery platforms and solutions.
+KubeVela APIs are declarative and application centric.
+They are based on Kubernetes CRDs to natively fit into the Kubernetes ecosystem.
+
+The APIs can be categorized for two purposes:
+
+- For **end users** to compose the final application manifest to deploy.
+  - Usually this contains only user-concerned config and hides infrastructure details.
+  - Users will normally write the manifest in yaml format.
+  - This currently includes Application only. But we may add more user-facing APIs, e.g. ApplicationSet to define multiple Applications.
+- For **platform admins** to define capability definitions to handle actual operations.
+  - Each definition glues operational tasks using CUE and exposes user-concerned config only.
+  - Admins will normally write the manifest in yaml + CUE format.
+  - This currently includes Component, Trait, Policy, and Workflow definition types.
+
+The APIs are served by the control plane.
+Because it is so important, we devote a separate section to it.
+
+## Control Plane
+
+The control plane layer is where KubeVela puts the components central to the entire system.
+It is the first entry to handle user API requests, the central place to register plugins,
+and the central processor that manages global states and dispatches tasks/resources.
+
+The control plane contains three major parts:
+
+- **Plugin registry** stores and manages X-Definitions.
+  X-Definitions are CRDs that users can apply and get via kubectl.
+  There are additional backend functions to store and manage multiple versions of X-Definitions.
+- **Core Control** provides the core control logic to the entire system.
+  It consists of the core components that handle Application and X-Definition API requests,
+  orchestrate Workflows, store revisions of Applications and Components,
+  parse and execute CUE fields, and garbage collect unused resources.
+- **Builtin Controllers** register builtin plugins and provide the backing controllers for the resources
+  created by X-Definitions. These are core to the KubeVela ecosystem, and we believe everyone will use them.
+
+The control plane (including the API layer) is KubeVela per se.
+Technically speaking, KubeVela is a control plane to manage applications across multiple clusters and hybrid environments.
+
+## Execution
+
+The execution layer is where the applications actually run.
+KubeVela allows you to deploy and manage application resources in a consistent workflow onto both
+Kubernetes clusters (e.g. local, managed offerings, IoT/edge, on-prem)
+and non-Kubernetes environments on clouds.
+KubeVela itself does not run on the execution infrastructures, but manages them instead.
+
+## What's Next
+
+Here are some recommended next steps:
+
+- Learn KubeVela's Application to know the basics of building app delivery, starting from [Application](./application).
+- Learn KubeVela's admin guide to learn more about [the OAM model](../platform-engineers/oam/oam-model).
diff --git a/versioned_docs/version-v1.1/developers/cap-center.md b/versioned_docs/version-v1.1/developers/cap-center.md
new file mode 100644
index 00000000..9dcbea8a
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/cap-center.md
@@ -0,0 +1,152 @@
+---
+title: Managing Capabilities
+---
+
+In KubeVela, developers can install more capabilities (i.e. new component types and traits) from any GitHub repo that contains OAM definition files. We call these GitHub repos _Capability Centers_.
+
+KubeVela is able to discover OAM definition files in such a repo automatically and sync them to your own KubeVela platform.
+
+## Add a capability center
+
+Add and sync a capability center in KubeVela:
+
+```bash
+vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
+```
+```console
+successfully sync 1/1 from my-center remote center
+Successfully configured capability center my-center and sync from remote
+```
+```bash
+vela cap center sync my-center
+```
+```console
+successfully sync 1/1 from my-center remote center
+sync finished
+```
+
+Now, this capability center `my-center` is ready to use.
+
+## List capability centers
+
+You are allowed to add more capability centers and list them.
+
+```bash
+vela cap center ls
+```
+```console
+NAME       ADDRESS
+my-center  https://github.com/oam-dev/catalog/tree/master/registry
+```
+
+## [Optional] Remove a capability center
+
+Or, remove one.
+
+```bash
+vela cap center remove my-center
+```
+
+## List all available capabilities in capability center
+
+Or, list all available capabilities in a certain center.
+
+```bash
+vela cap ls my-center
+```
+```console
+NAME             CENTER     TYPE                 DEFINITION                STATUS       APPLIES-TO
+clonesetservice  my-center  componentDefinition  clonesets.apps.kruise.io  uninstalled  []
+```
+
+## Install a capability from capability center
+
+Now let's try to install the new component named `clonesetservice` from `my-center` to your own KubeVela platform.
+
+You need to install OpenKruise first.
+
+```shell
+helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
+```
+
+Install the `clonesetservice` component from `my-center`.
+
+```bash
+vela cap install my-center/clonesetservice
+```
+```console
+Installing component capability clonesetservice
+Successfully installed capability clonesetservice from my-center
+```
+
+## Use the newly installed capability
+
+Let's first check that `clonesetservice` appears in your platform:
+
+```bash
+vela components
+```
+```console
+NAME             NAMESPACE    WORKLOAD                  DESCRIPTION
+clonesetservice  vela-system  clonesets.apps.kruise.io  Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers. If workload type is skipped for any service defined in Appfile, it will be defaulted to `webservice` type.
+```
+
+Great! Now let's deploy an app via Appfile.
+
+```bash
+cat << EOF > vela.yaml
+name: testapp
+services:
+  testsvc:
+    type: clonesetservice
+    image: crccheck/hello-world
+    port: 8000
+EOF
+```
+
+```bash
+vela up
+```
+```console
+Parsing vela appfile ...
+Load Template ...
+
+Rendering configs for service (testsvc)...
+Writing deploy config to (.vela/deploy.yaml)
+
+Applying application ...
+Checking if app has been deployed...
+App has not been deployed, creating a new deployment...
+Updating: core.oam.dev/v1alpha2, Kind=HealthScope in default
+✅ App has been deployed 🚀🚀🚀
+    Port forward: vela port-forward testapp
+             SSH: vela exec testapp
+         Logging: vela logs testapp
+      App status: vela status testapp
+  Service status: vela status testapp --svc testsvc
+```
+
+Then you can get a CloneSet in your environment.
+
+```shell
+kubectl get clonesets.apps.kruise.io
+```
+```console
+NAME      DESIRED   UPDATED   UPDATED_READY   READY   TOTAL   AGE
+testsvc   1         1         1               1       1       46s
+```
+
+## Uninstall a capability
+
+> NOTE: make sure no apps are using the capability before uninstalling.
+
+```bash
+vela cap uninstall my-center/clonesetservice
+```
```console
+Successfully uninstalled capability clonesetservice
+```
diff --git a/versioned_docs/version-v1.1/developers/check-logs.md b/versioned_docs/version-v1.1/developers/check-logs.md
new file mode 100644
index 00000000..d9b51609
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/check-logs.md
@@ -0,0 +1,9 @@
+---
+title: Check Application Logs
+---
+
+```bash
+vela logs testapp
+```
+
+It will let you select the container to get logs from. If there is only one container, it will be selected automatically.
diff --git a/versioned_docs/version-v1.1/developers/check-ref-doc.md b/versioned_docs/version-v1.1/developers/check-ref-doc.md
new file mode 100644
index 00000000..704c2324
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/check-ref-doc.md
@@ -0,0 +1,103 @@
+---
+title: The Reference Documentation Guide of Capabilities
+---
+
+In this documentation, we will show how to check the detailed schema of a given capability (i.e. workload type or trait).
+
+This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones); also, it's by design that KubeVela allows platform engineers to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure that documentation stays up to date?
+
+## Using Browser
+
+Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on its template in the definition of course. This feature works for any capability: either built-in ones or your own workload types/traits.
+
+Thus, as an end user, the only thing you need to do is:
+
+```console
+$ vela show WORKLOAD_TYPE or TRAIT --web
+```
+
+This command will automatically open the reference documentation for the given workload type or trait in your default browser.
+
+### For Workload Types
+
+Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` workload type will show up immediately as below:
+
+![](../resources/vela_show_webservice.jpg)
+
+Note that in the section named `Specification`, it even provides you with a full sample of the usage of this workload type, using a fake name `my-service-name`.
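+
+For reference, the generated sample in the `Specification` section is an Appfile snippet roughly of the following shape (the field values here are illustrative placeholders, not output copied from the generated page):
+
+```yaml
+name: my-app-name
+services:
+  my-service-name:
+    type: webservice
+    image: oamdev/testapp:v1
+    cmd: ["node", "server.js"]
+    port: 8080
+    cpu: "0.1"
+```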
+ +### For Traits + +Similarly, we can also do `$ vela show autoscale --web`: + +![](../resources/vela_show_autoscale.jpg) + +With these auto-generated reference documentations, we could easily complete the application description by simple copy-paste, for example: + +```yaml +name: helloworld + +services: + backend: # copy-paste from the webservice ref doc above + image: oamdev/testapp:v1 + cmd: ["node", "server.js"] + port: 8080 + cpu: "0.1" + + autoscale: # copy-paste and modify from autoscaler ref doc above + min: 1 + max: 8 + cron: + startAt: "19:00" + duration: "2h" + days: "Friday" + replicas: 4 + timezone: "America/Los_Angeles" +``` + +## Using Terminal + +This reference doc feature also works for terminal-only case. For example: + +```shell +$ vela show webservice +# Properties ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| port | Which port do you want customer traffic sent to | int | true | 80 | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | ++-------+----------------------------------------------------------------------------------+---------------+----------+---------+ + + +## env ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | ++-----------+-----------------------------------------------------------+-------------------------+----------+---------+ + + +### valueFrom ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | ++--------------+--------------------------------------------------+-------------------------------+----------+---------+ + + +#### secretKeyRef ++------+------------------------------------------------------------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++------+------------------------------------------------------------------+--------+----------+---------+ +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the secret to select from. 
Must be a valid secret key | string | true | | ++------+------------------------------------------------------------------+--------+----------+---------+ +``` + +> Note that for all the built-in capabilities, we already published their reference docs [here](https://kubevela.io/#/en/developers/references/) based on the same doc generation mechanism. diff --git a/versioned_docs/version-v1.1/developers/config-app.md b/versioned_docs/version-v1.1/developers/config-app.md new file mode 100644 index 00000000..8f2e586c --- /dev/null +++ b/versioned_docs/version-v1.1/developers/config-app.md @@ -0,0 +1,97 @@ +--- +title: Configuring data/env in Application +--- + +`vela` provides a `config` command to manage config data. + +## `vela config set` + +```bash +vela config set test a=b c=d +``` +```console +reading existing config data and merging with user input +config data saved successfully ✅ +``` + +## `vela config get` + +```bash +vela config get test +``` +```console +Data: + a: b + c: d +``` + +## `vela config del` + +```bash +vela config del test +``` +```console +config (test) deleted successfully +``` + +## `vela config ls` + +```bash +vela config set test a=b +vela config set test2 c=d +vela config ls +``` +```console +NAME +test +test2 +``` + +## Configure env in application + +The config data can be set as the env in applications. + +```bash +vela config set demo DEMO_HELLO=helloworld +``` + +Save the following to `vela.yaml` in current directory: + +```yaml +name: testapp +services: + env-config-demo: + image: heroku/nodejs-hello-world + config: demo +``` + +Then run: +```bash +vela up +``` +```console +Parsing vela.yaml ... +Loading templates ... + +Rendering configs for service (env-config-demo)... +Writing deploy config to (.vela/deploy.yaml) + +Applying deploy configs ... +Checking if app has been deployed... +App has not been deployed, creating a new deployment... +✅ App has been deployed 🚀🚀🚀 + Port forward: vela port-forward testapp + SSH: vela exec testapp + Logging: vela logs testapp + App status: vela status testapp + Service status: vela status testapp --svc env-config-demo +``` + +Check env var: + +``` +vela exec testapp -- printenv | grep DEMO_HELLO +``` +```console +DEMO_HELLO=helloworld +``` diff --git a/versioned_docs/version-v1.1/developers/config-enviroments.md b/versioned_docs/version-v1.1/developers/config-enviroments.md new file mode 100644 index 00000000..7c658853 --- /dev/null +++ b/versioned_docs/version-v1.1/developers/config-enviroments.md @@ -0,0 +1,107 @@ +--- +title: Setting Up Deployment Environment +--- + +A deployment environment is where you could configure the workspace, email for contact and domain for your applications globally. +A typical set of deployment environment is `test`, `staging`, `prod`, etc. + +## Create environment + +```bash +vela env init demo --email my@email.com +``` +```console +environment demo created, Namespace: default, Email: my@email.com +``` + +## Check the deployment environment metadata + +```bash +vela env ls +``` +```console +NAME CURRENT NAMESPACE EMAIL DOMAIN +default default +demo * default my@email.com +``` + +By default, the environment will use `default` namespace in K8s. + +## Configure changes + +You could change the config by executing the environment again. 
+ +```bash +vela env init demo --namespace demo +``` +```console +environment demo created, Namespace: demo, Email: my@email.com +``` + +```bash +vela env ls +``` +```console +NAME CURRENT NAMESPACE EMAIL DOMAIN +default default +demo * demo my@email.com +``` + +**Note that the created apps won't be affected, only newly created apps will use the updated info.** + +## [Optional] Configure Domain if you have public IP + +If your K8s cluster is provisioned by cloud provider and has public IP for ingress. +You could configure your domain in the environment, then you'll be able to visit +your app by this domain with an mTLS supported automatically. + +For example, you could get the public IP from ingress service. + +```bash +kubectl get svc -A | grep LoadBalancer +``` +```console +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx-ingress-lb LoadBalancer 172.21.2.174 123.57.10.233 80:32740/TCP,443:32086/TCP 41d +``` + +The fourth column is public IP. Configure 'A' record for your custom domain. + +``` +*.your.domain => 123.57.10.233 +``` + +You could also use `123.57.10.233.xip.io` as your domain, if you don't have a custom one. +`xip.io` will automatically route to the prefix IP `123.57.10.233`. + + +```bash +vela env init demo --domain 123.57.10.233.xip.io +``` +```console +environment demo updated, Namespace: demo, Email: my@email.com +``` + +### Using domain in Appfile + +Since you now have domain configured globally in deployment environment, you don't need to specify the domain in route configuration anymore. + +```yaml +# in demo environment +services: + express-server: + ... + + route: + rules: + - path: /testapp + rewriteTarget: / +``` + +``` +curl http://123.57.10.233.xip.io/testapp +``` +```console +Hello World +``` + diff --git a/versioned_docs/version-v1.1/developers/exec-cmd.md b/versioned_docs/version-v1.1/developers/exec-cmd.md new file mode 100644 index 00000000..8776125d --- /dev/null +++ b/versioned_docs/version-v1.1/developers/exec-cmd.md @@ -0,0 +1,10 @@ +--- +title: Execute Commands in Container +--- + +Run: +``` +vela exec testapp -- /bin/sh +``` + +This open a shell within the container of testapp. diff --git a/versioned_docs/version-v1.1/developers/extensions/set-autoscale.md b/versioned_docs/version-v1.1/developers/extensions/set-autoscale.md new file mode 100644 index 00000000..b9aa9b61 --- /dev/null +++ b/versioned_docs/version-v1.1/developers/extensions/set-autoscale.md @@ -0,0 +1,238 @@ +--- +title: Automatically scale workloads by resource utilization metrics and cron +--- + + + +## Prerequisite +Make sure auto-scaler trait controller is installed in your cluster + +Install auto-scaler trait controller with helm + +1. Add helm chart repo for autoscaler trait + ```shell script + helm repo add oam.catalog http://oam.dev/catalog/ + ``` + +2. Update the chart repo + ```shell script + helm repo update + ``` + +3. Install autoscaler trait controller + ```shell script + helm install --create-namespace -n vela-system autoscalertrait oam.catalog/autoscalertrait + +Autoscale depends on metrics server, please [enable it in your Kubernetes cluster](../references/devex/faq#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) at the beginning. + +> Note: autoscale is one of the extension capabilities [installed from cap center](../cap-center), +> please install it if you can't find it in `vela traits`. + +## Setting cron auto-scaling policy +Introduce how to automatically scale workloads by cron. + +1. 
Prepare Appfile
+
+   ```yaml
+   name: testapp
+
+   services:
+     express-server:
+       # this image will be used in both build and deploy steps
+       image: oamdev/testapp:v1
+
+       cmd: ["node", "server.js"]
+       port: 8080
+
+       autoscale:
+         min: 1
+         max: 4
+         cron:
+           startAt: "14:00"
+           duration: "2h"
+           days: "Monday, Thursday"
+           replicas: 2
+           timezone: "America/Los_Angeles"
+   ```
+
+> The full specification of `autoscale` can be viewed with `$ vela show autoscale`.
+
+2. Deploy an application
+
+   ```
+   $ vela up
+   Parsing vela.yaml ...
+   Loading templates ...
+
+   Rendering configs for service (express-server)...
+   Writing deploy config to (.vela/deploy.yaml)
+
+   Applying deploy configs ...
+   Checking if app has been deployed...
+   App has not been deployed, creating a new deployment...
+   ✅ App has been deployed 🚀🚀🚀
+       Port forward: vela port-forward testapp
+                SSH: vela exec testapp
+            Logging: vela logs testapp
+         App status: vela status testapp
+     Service status: vela status testapp --svc express-server
+   ```
+
+3. Check the replicas and wait for the scaling to take effect
+
+   Check the replicas of the application; there is one replica.
+
+   ```
+   $ vela status testapp
+   About:
+
+     Name:       testapp
+     Namespace:  default
+     Created at: 2020-11-05 17:09:02.426632 +0800 CST
+     Updated at: 2020-11-05 17:09:02.426632 +0800 CST
+
+   Services:
+
+     - Name: express-server
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ autoscale: type: cron   replicas(min/max/current): 1/4/1
+       Last Deployment:
+         Created at: 2020-11-05 17:09:03 +0800 CST
+         Updated at: 2020-11-05T17:09:02+08:00
+   ```
+
+   Wait until the time reaches `startAt`, and check again. The replicas become two, as specified by
+   `replicas` in `vela.yaml`.
+
+   ```
+   $ vela status testapp
+   About:
+
+     Name:       testapp
+     Namespace:  default
+     Created at: 2020-11-10 10:18:59.498079 +0800 CST
+     Updated at: 2020-11-10 10:18:59.49808 +0800 CST
+
+   Services:
+
+     - Name: express-server
+       Type: webservice
+       HEALTHY Ready: 2/2
+       Traits:
+         - ✅ autoscale: type: cron   replicas(min/max/current): 1/4/2
+       Last Deployment:
+         Created at: 2020-11-10 10:18:59 +0800 CST
+         Updated at: 2020-11-10T10:18:59+08:00
+   ```
+
+   After the period ends, the replicas will eventually go back to one.
+
+## Setting auto-scaling policy of CPU resource utilization
+This section introduces how to automatically scale workloads by CPU resource utilization.
+
+1. Prepare Appfile
+
+   Modify `vela.yaml` as below. We add the field `services.express-server.cpu` and change the auto-scaling policy
+   from cron to CPU utilization by updating the field `services.express-server.autoscale`.
+
+   ```yaml
+   name: testapp
+
+   services:
+     express-server:
+       image: oamdev/testapp:v1
+
+       cmd: ["node", "server.js"]
+       port: 8080
+       cpu: "0.01"
+
+       autoscale:
+         min: 1
+         max: 5
+         cpuPercent: 10
+   ```
+
+2. Deploy an application
+
+   ```bash
+   $ vela up
+   ```
+
+3. Expose the service entrypoint of the application
+
+   ```
+   $ vela port-forward helloworld 80
+   Forwarding from 127.0.0.1:80 -> 80
+   Forwarding from [::1]:80 -> 80
+
+   Forward successfully! Opening browser ...
+   Handling connection for 80
+   Handling connection for 80
+   Handling connection for 80
+   Handling connection for 80
+   ```
+
+   On macOS, you might need to add `sudo` ahead of the command.
+
+4. Monitor the replicas changing
+
+   Continue to monitor how the replicas change when the application becomes overloaded. You can use the Apache HTTP server
+   benchmarking tool `ab` to mock many requests to the application.
+
+   ```
+   $ ab -n 10000 -c 200 http://127.0.0.1/
+   This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
+   Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
+   Licensed to The Apache Software Foundation, http://www.apache.org/
+
+   Benchmarking 127.0.0.1 (be patient)
+   Completed 1000 requests
+   ```
+
+   The replicas gradually increase from one to four.
+
+   ```
+   $ vela status helloworld --svc frontend
+   About:
+
+     Name:       helloworld
+     Namespace:  default
+     Created at: 2020-11-05 20:07:21.830118 +0800 CST
+     Updated at: 2020-11-05 20:50:42.664725 +0800 CST
+
+   Services:
+
+     - Name: frontend
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ autoscale: type: cpu   cpu-utilization(target/current): 5%/10%   replicas(min/max/current): 1/5/2
+       Last Deployment:
+         Created at: 2020-11-05 20:07:23 +0800 CST
+         Updated at: 2020-11-05T20:50:42+08:00
+   ```
+
+   ```
+   $ vela status helloworld --svc frontend
+   About:
+
+     Name:       helloworld
+     Namespace:  default
+     Created at: 2020-11-05 20:07:21.830118 +0800 CST
+     Updated at: 2020-11-05 20:50:42.664725 +0800 CST
+
+   Services:
+
+     - Name: frontend
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ autoscale: type: cpu   cpu-utilization(target/current): 5%/14%   replicas(min/max/current): 1/5/4
+       Last Deployment:
+         Created at: 2020-11-05 20:07:23 +0800 CST
+         Updated at: 2020-11-05T20:50:42+08:00
+   ```
+
+   Stop the `ab` tool, and the replicas will decrease to one eventually.
diff --git a/versioned_docs/version-v1.1/developers/extensions/set-metrics.md b/versioned_docs/version-v1.1/developers/extensions/set-metrics.md
new file mode 100644
index 00000000..7115e8fc
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/extensions/set-metrics.md
@@ -0,0 +1,107 @@
+---
+title: Monitoring Application
+---
+
+
+If your application has exposed metrics, you can easily tell the platform how to collect the metrics data from your app with the `metrics` capability.
+
+## Prerequisite
+Make sure the metrics trait controller is installed in your cluster.
+
+Install the metrics trait controller with Helm:
+
+1. Add the helm chart repo for the metrics trait
+   ```shell script
+   helm repo add oam.catalog http://oam.dev/catalog/
+   ```
+
+2. Update the chart repo
+   ```shell script
+   helm repo update
+   ```
+
+3. Install the metrics trait controller
+   ```shell script
+   helm install --create-namespace -n vela-system metricstrait oam.catalog/metricstrait
+   ```
+
+> Note: metrics is one of the extension capabilities [installed from cap center](../cap-center),
+> please install it if you can't find it in `vela traits`.
+
+## Setting metrics policy
+Let's run [`christianhxc/gorandom:1.0`](https://github.com/christianhxc/prometheus-tutorial) as an example app.
+The app will emit random latencies as metrics.
+
+
+
+
+1. Prepare Appfile:
+
+   ```bash
+   $ cat <<EOF > vela.yaml
+   name: metricapp
+   services:
+     metricapp:
+       type: webservice
+       image: christianhxc/gorandom:1.0
+       port: 8080
+
+       metrics:
+         enabled: true
+         format: prometheus
+         path: /metrics
+         port: 0
+         scheme: http
+   EOF
+   ```
+
+> The full specification of `metrics` can be viewed with `$ vela show metrics`.
+
+2. Deploy the application:
+
+   ```bash
+   $ vela up
+   ```
+
+3. Check status:
+
+   ```bash
+   $ vela status metricapp
+   About:
+
+     Name:       metricapp
+     Namespace:  default
+     Created at: 2020-11-11 17:00:59.436347573 -0800 PST
+     Updated at: 2020-11-11 17:01:06.511064661 -0800 PST
+
+   Services:
+
+     - Name: metricapp
+       Type: webservice
+       HEALTHY Ready: 1/1
+       Traits:
+         - ✅ metrics: Monitoring port: 8080, path: /metrics, format: prometheus, schema: http.
+       Last Deployment:
+         Created at: 2020-11-11 17:00:59 -0800 PST
+         Updated at: 2020-11-11T17:01:06-08:00
+   ```
+
+The metrics trait will automatically discover the port and label to monitor if no parameters are specified.
+If more than one port is found, it will choose the first one by default.
+
+
+**(Optional) Verify that the metrics are collected on Prometheus**
+
+<details>
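+
+Assuming the Prometheus instance installed by the metrics capability carries the label `prometheus=oam` in the `monitoring` namespace (the same selector the port-forward command below relies on), you can first confirm it is running:
+
+```bash
+# List the Prometheus pods that the port-forward command below will target
+kubectl -n monitoring get pods -l prometheus=oam
+```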
+
+Expose the port of Prometheus dashboard:
+
+```bash
+kubectl --namespace monitoring port-forward `kubectl -n monitoring get pods -l prometheus=oam -o name` 9090
+```
+
+Then access the Prometheus dashboard via http://localhost:9090/targets
+
+![Prometheus Dashboard](../../resources/metrics.jpg)
+
+</details>
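+
+You can also fetch the metrics endpoint directly, without going through Prometheus. A minimal sketch, assuming the service name `metricapp` and the port/path from the Appfile above:
+
+```bash
+# Forward the app port locally, then scrape the endpoint the metrics trait monitors
+vela port-forward metricapp 8080 &
+curl http://127.0.0.1:8080/metrics
+```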
diff --git a/versioned_docs/version-v1.1/developers/extensions/set-rollout.md b/versioned_docs/version-v1.1/developers/extensions/set-rollout.md
new file mode 100644
index 00000000..3f0dee17
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/extensions/set-rollout.md
@@ -0,0 +1,163 @@
+---
+title: Setting Rollout Strategy
+---
+
+> Note: rollout is one of the extension capabilities [installed from cap center](../cap-center),
+> please install it if you can't find it in `vela traits`.
+
+The `rollout` section is used to configure a Canary strategy to release your app.
+
+Add the rollout config under `express-server` along with a `route`.
+
+```yaml
+name: testapp
+services:
+  express-server:
+    type: webservice
+    image: oamdev/testapp:rolling01
+    port: 80
+
+    rollout:
+      replicas: 5
+      stepWeight: 20
+      interval: "30s"
+
+    route:
+      domain: "example.com"
+```
+
+> The full specification of `rollout` can be viewed with `$ vela show rollout`.
+
+Apply this `appfile.yaml`:
+
+```bash
+$ vela up
+```
+
+You could check the status by:
+
+```bash
+$ vela status testapp
+About:
+
+  Name:       testapp
+  Namespace:  myenv
+  Created at: 2020-11-09 17:34:38.064006 +0800 CST
+  Updated at: 2020-11-10 17:05:53.903168 +0800 CST
+
+Services:
+
+  - Name: testapp
+    Type: webservice
+    HEALTHY Ready: 5/5
+    Traits:
+      - ✅ rollout: interval=5s
+        replicas=5
+        stepWeight=20
+      - ✅ route: Visiting URL: http://example.com  IP: <ingress-IP>
+
+    Last Deployment:
+      Created at: 2020-11-09 17:34:38 +0800 CST
+      Updated at: 2020-11-10T17:05:53+08:00
+```
+
+Visit this app by:
+
+```bash
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- Rolling 01
+```
+
+On day 2, assume we have made some changes to our app, built a new image, and named it `oamdev/testapp:rolling02`.
+
+Let's update the appfile by:
+
+```yaml
+name: testapp
+services:
+  express-server:
+    type: webservice
+-   image: oamdev/testapp:rolling01
++   image: oamdev/testapp:rolling02
+    port: 80
+    rollout:
+      replicas: 5
+      stepWeight: 20
+      interval: "30s"
+    route:
+      domain: example.com
+```
+
+Apply this `appfile.yaml` again:
+
+```bash
+$ vela up
+```
+
+You could run `vela status` several times to see the instances rolling:
+
+```shell script
+$ vela status testapp
+About:
+
+  Name:       testapp
+  Namespace:  myenv
+  Created at: 2020-11-12 19:02:40.353693 +0800 CST
+  Updated at: 2020-11-12 19:02:40.353693 +0800 CST
+
+Services:
+
+  - Name: express-server
+    Type: webservice
+    HEALTHY express-server-v2:Ready: 1/1 express-server-v1:Ready: 4/4
+    Traits:
+      - ✅ rollout: interval=30s
+        replicas=5
+        stepWeight=20
+      - ✅ route: Visiting by using 'vela port-forward testapp --route'
+
+    Last Deployment:
+      Created at: 2020-11-12 17:20:46 +0800 CST
+      Updated at: 2020-11-12T19:02:40+08:00
+```
+
+You could then `curl` your app multiple times and see how the app is being rolled out following the Canary strategy:
+
+
+```bash
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- This is rolling 02
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- This is rolling 02
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- Rolling 01
+$ curl -H "Host:example.com" http://<ingress-IP>/
+Hello World -- This is rolling 02
+```
+
+
+**How does `Rollout` work?**
+
+<details>
+
+The `Rollout` trait implements a progressive release process to roll out your app following the [Canary strategy](https://martinfowler.com/bliki/CanaryRelease.html).
+
+In detail, the `Rollout` controller will create a canary of your app, and then gradually shift traffic to the canary while measuring key performance indicators like the HTTP request success rate at the same time.
+
+
+![alt](../../resources/traffic-shifting-analysis.png)
+
+In this sample, every `10s`, `5%` of the traffic will be shifted to the canary from the primary, until the traffic on the canary reaches `50%`. In the meantime, the number of canary instances will automatically scale to `replicas: 2`, as configured in the Appfile.
+
+
+Based on the analysis of the KPIs during this traffic shifting, the canary will be promoted, or aborted if the analysis fails. When promoting, the primary will be upgraded from v1 to v2, and traffic will be fully shifted back to the primary instances. As a result, the canary instances will be deleted after the promotion finishes.
+
+![alt](../../resources/promotion.png)
+
+> Note: KubeVela's `Rollout` trait is implemented with the [Weaveworks Flagger](https://flagger.app/) operator.
+
+</details>
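+
+To follow the traffic shifting while it happens, you can poll the app status in a loop (a simple sketch; `watch vela status testapp` works just as well):
+
+```bash
+# Re-run vela status every 10 seconds to watch replicas move between v1 and v2
+while true; do
+  vela status testapp --svc express-server
+  sleep 10
+done
+```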
diff --git a/versioned_docs/version-v1.1/developers/extensions/set-route.md b/versioned_docs/version-v1.1/developers/extensions/set-route.md
new file mode 100644
index 00000000..e692ec22
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/extensions/set-route.md
@@ -0,0 +1,82 @@
+---
+title: Setting Routes
+---
+
+The `route` section is used to configure the access to your app.
+
+## Prerequisite
+Make sure the route trait controller is installed in your cluster.
+
+Install the route trait controller with Helm:
+
+1. Add the helm chart repo for the route trait
+   ```shell script
+   helm repo add oam.catalog http://oam.dev/catalog/
+   ```
+
+2. Update the chart repo
+   ```shell script
+   helm repo update
+   ```
+
+3. Install the route trait controller
+   ```shell script
+   helm install --create-namespace -n vela-system routetrait oam.catalog/routetrait
+   ```
+
+> Note: route is one of the extension capabilities [installed from cap center](../cap-center),
+> please install it if you can't find it in `vela traits`.
+
+## Setting route policy
+Add the routing config under `express-server`:
+
+```yaml
+services:
+  express-server:
+    ...
+
+    route:
+      domain: example.com
+      rules:
+        - path: /testapp
+          rewriteTarget: /
+```
+
+> The full specification of `route` can be viewed with `$ vela show route`.
+
+Apply again:
+
+```bash
+$ vela up
+```
+
+Check the status until we see the route is ready:
+```bash
+$ vela status testapp
+About:
+
+  Name:       testapp
+  Namespace:  default
+  Created at: 2020-11-04 16:34:43.762730145 -0800 PST
+  Updated at: 2020-11-11 16:21:37.761158941 -0800 PST
+
+Services:
+
+  - Name: express-server
+    Type: webservice
+    HEALTHY Ready: 1/1
+    Last Deployment:
+      Created at: 2020-11-11 16:21:37 -0800 PST
+      Updated at: 2020-11-11T16:21:37-08:00
+    Routes:
+      - route: Visiting URL: http://example.com  IP: <ingress-IP>
+```
+
+**In [kind cluster setup](../../install#kind)**, you can visit the service via localhost:
+
+> If not in a kind cluster, replace 'localhost' with the ingress address
+
+```
+$ curl -H "Host:example.com" http://localhost/testapp
+Hello World
+```
diff --git a/versioned_docs/version-v1.1/developers/learn-appfile.md b/versioned_docs/version-v1.1/developers/learn-appfile.md
new file mode 100644
index 00000000..78e2039f
--- /dev/null
+++ b/versioned_docs/version-v1.1/developers/learn-appfile.md
@@ -0,0 +1,255 @@
+---
+title: Learning Appfile
+---
+
+A sample `Appfile` is as below:
+
+```yaml
+name: testapp
+
+services:
+  frontend: # 1st service
+
+    image: oamdev/testapp:v1
+    build:
+      docker:
+        file: Dockerfile
+        context: .
+
+    cmd: ["node", "server.js"]
+    port: 8080
+
+    route: # trait
+      domain: example.com
+      rules:
+        - path: /testapp
+          rewriteTarget: /
+
+  backend: # 2nd service
+    type: task # workload type
+    image: perl
+    cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+```
+
+Under the hood, `Appfile` will build the image from source code, and then generate the `Application` resource with the image name.
+
+## Schema
+
+> Before learning about Appfile's detailed schema, we recommend you get familiar with the core concepts in KubeVela.
+
+
+```yaml
+name: _app-name_
+
+services:
+  _service-name_:
+    # If the `build` section exists, this field will be used as the name to build the image. Otherwise, KubeVela will try to pull the image with the given name directly.
+    image: oamdev/testapp:v1
+
+    build:
+      docker:
+        file: _Dockerfile_path_ # relative path is supported, e.g. "./Dockerfile"
+        context: _build_context_path_ # relative path is supported, e.g. "."
+ + push: + local: kind # optionally push to local KinD cluster instead of remote registry + + type: webservice (default) | worker | task + + # detailed configurations of workload + ... properties of the specified workload ... + + _trait_1_: + # properties of trait 1 + + _trait_2_: + # properties of trait 2 + + ... more traits and their properties ... + + _another_service_name_: # more services can be defined + ... + +``` + +> To learn about how to set the properties of specific workload type or trait, please use `vela show `. + +## Example Workflow + +In the following workflow, we will build and deploy an example NodeJS app under [examples/testapp/](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp). + +### Prerequisites + +- [Docker](https://docs.docker.com/get-docker/) installed on the host +- KubeVela] installed and configured + +### 1. Download test app code + +git clone and go to the testapp directory: + +```bash +git clone https://github.com/oam-dev/kubevela.git +cd kubevela/docs/examples/testapp +``` + +The example contains NodeJS app code, Dockerfile to build the app. + +### 2. Deploy app in one command + +In the directory there is a [vela.yaml](https://github.com/oam-dev/kubevela/tree/master/docs/examples/testapp/vela.yaml) which follows Appfile format supported by Vela. +We are going to use it to build and deploy the app. + +> NOTE: please change `oamdev` to your own registry account so you can push. Or, you could try the alternative approach in `Local testing without pushing image remotely` section. + +```yaml + image: oamdev/testapp:v1 # change this to your image +``` + +Run the following command: + +```bash +vela up +``` +```console +Parsing vela.yaml ... +Loading templates ... + +Building service (express-server)... +Sending build context to Docker daemon 71.68kB +Step 1/10 : FROM mhart/alpine-node:12 + ---> 9d88359808c3 +... + +pushing image (oamdev/testapp:v1)... +... + +Rendering configs for service (express-server)... +Writing deploy config to (.vela/deploy.yaml) + +Applying deploy configs ... +Checking if app has been deployed... +App has not been deployed, creating a new deployment... +✅ App has been deployed 🚀🚀🚀 + Port forward: vela port-forward testapp + SSH: vela exec testapp + Logging: vela logs testapp + App status: vela status testapp + Service status: vela status testapp --svc express-server +``` + + +Check the status of the service: + +```bash +vela status testapp +``` +```console + About: + + Name: testapp + Namespace: default + Created at: 2020-11-02 11:08:32.138484 +0800 CST + Updated at: 2020-11-02 11:08:32.138485 +0800 CST + + Services: + + - Name: express-server + Type: webservice + HEALTHY Ready: 1/1 + Last Deployment: + Created at: 2020-11-02 11:08:33 +0800 CST + Updated at: 2020-11-02T11:08:32+08:00 + Routes: + +``` + +#### Alternative: Local testing without pushing image remotely + +If you have local kind cluster running, you may try the local push option. No remote container registry is needed in this case. + +Add local option to `build`: + +```yaml + build: + # push image into local kind cluster without remote transfer + push: + local: kind + + docker: + file: Dockerfile + context: . +``` + +Then deploy the app to kind: + +```bash +vela up +``` + +
(Advanced) Check rendered manifests + +By default, Vela renders the final manifests in `.vela/deploy.yaml`: + +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: ApplicationConfiguration +metadata: + name: testapp + namespace: default +spec: + components: + - componentName: express-server +--- +apiVersion: core.oam.dev/v1alpha2 +kind: Component +metadata: + name: express-server + namespace: default +spec: + workload: + apiVersion: apps/v1 + kind: Deployment + metadata: + name: express-server + ... +--- +apiVersion: core.oam.dev/v1alpha2 +kind: HealthScope +metadata: + name: testapp-default-health + namespace: default +spec: + ... +``` +
+ +### [Optional] Configure another workload type + +By now we have deployed a *[Web Service](../end-user/components/cue/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](../end-user/components/cue/task)* type in the same app: + +```yaml +services: + pi: + type: task + image: perl + cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + + express-server: + ... +``` + +Then deploy Appfile again to update the application: + +```bash +vela up +``` + +Congratulations! You have just deployed an app using `Appfile`. + +## What's Next? + +Play more with your app: +- [Check Application Logs](./check-logs) +- [Execute Commands in Application Container](./exec-cmd) +- [Access Application via Route](./port-forward) + diff --git a/versioned_docs/version-v1.1/developers/port-forward.md b/versioned_docs/version-v1.1/developers/port-forward.md new file mode 100644 index 00000000..28413529 --- /dev/null +++ b/versioned_docs/version-v1.1/developers/port-forward.md @@ -0,0 +1,27 @@ +--- +title: Port Forwarding +--- + +Once your web services of the application deployed, you can access it locally via `port-forward`. + +```bash +vela ls +``` +```console +NAME APP WORKLOAD TRAITS STATUS CREATED-TIME +express-server testapp webservice Deployed 2020-09-18 22:42:04 +0800 CST +``` + +It will directly open browser for you. + +```bash +vela port-forward testapp +``` +```console +Forwarding from 127.0.0.1:8080 -> 80 +Forwarding from [::1]:8080 -> 80 + +Forward successfully! Opening browser ... +Handling connection for 8080 +Handling connection for 8080 +``` diff --git a/versioned_docs/version-v1.1/developers/references/devex/cli.md b/versioned_docs/version-v1.1/developers/references/devex/cli.md new file mode 100644 index 00000000..77353dc6 --- /dev/null +++ b/versioned_docs/version-v1.1/developers/references/devex/cli.md @@ -0,0 +1,28 @@ +--- +title: KubeVela CLI +--- + +### Auto-completion + +#### bash + +```bash +To load completions in your current shell session: +$ source <(vela completion bash) + +To load completions for every new session, execute once: +Linux: + $ vela completion bash > /etc/bash_completion.d/vela +MacOS: + $ vela completion bash > /usr/local/etc/bash_completion.d/vela +``` + +#### zsh + +```bash +To load completions in your current shell session: +$ source <(vela completion zsh) + +To load completions for every new session, execute once: +$ vela completion zsh > "${fpath[1]}/_vela" +``` diff --git a/versioned_docs/version-v1.1/developers/references/devex/dashboard.md b/versioned_docs/version-v1.1/developers/references/devex/dashboard.md new file mode 100644 index 00000000..6e1d949a --- /dev/null +++ b/versioned_docs/version-v1.1/developers/references/devex/dashboard.md @@ -0,0 +1,10 @@ + +# KubeVela Dashboard (WIP) + +KubeVela has a simple client side dashboard for you to interact with. The functionality is equivalent to the vela cli. + +```bash +$ vela dashboard +``` + +> NOTE: this feature is still under development. diff --git a/versioned_docs/version-v1.1/developers/references/devex/faq.md b/versioned_docs/version-v1.1/developers/references/devex/faq.md new file mode 100644 index 00000000..8a0075ac --- /dev/null +++ b/versioned_docs/version-v1.1/developers/references/devex/faq.md @@ -0,0 +1,337 @@ +--- +title: FAQ +--- + +- [Compare to X](#Compare-to-X) + * [What is the difference between KubeVela and Helm?](#What-is-the-difference-between-KubeVela-and-Helm?) 
+ +- [Issues](#issues) + * [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated) + * [Error: ScopeDefinition exists](#error-scopedefinition-exists) + * [You have reached your pull rate limit](#You-have-reached-your-pull-rate-limit) + * [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists) + * [How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#how-to-fix-issue-mutatingwebhookconfiguration-mutating-webhook-configuration-exists) + +- [Operating](#operating) + * [Autoscale: how to enable metrics server in various Kubernetes clusters?](#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters) + +## Compare to X + +### What is the difference between KubeVela and Helm? + +KubeVela is a platform builder tool to create easy-to-use yet extensible app delivery/management systems with Kubernetes. KubeVela relies on Helm as templating engine and package format for apps. But Helm is not the only templating module that KubeVela supports. Another first-class supported approach is CUE. + +Also, KubeVela is by design a Kubernetes controller (i.e. works on server side), even for its Helm part, a Helm operator will be installed. + +## Issues + +### Error: unable to create new content in namespace cert-manager because it is being terminated + +Occasionally you might hit the issue as below. It happens when the last KubeVela release deletion hasn't completed. + +``` +vela install +``` +```console +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated +failed to create resource +helm.sh/helm/v3/pkg/kube.(*Client).Update.func1 + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190 +... +Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated +``` + +Take a break and try again in a few seconds. + +``` +vela install +``` +```console +- Installing Vela Core Chart: +Vela system along with OAM runtime already exist. +Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8) + +TYPE CATEGORY DESCRIPTION +-task workload One-off task to run a piece of code or script to completion +-webservice workload Long-running scalable service with stable endpoint to receive external traffic +-worker workload Long-running scalable backend worker without network endpoint +-autoscale trait Automatically scale the app following certain triggers or metrics +-metrics trait Configure metrics targets to be monitored for the app +-rollout trait Configure canary deployment strategy to release the app +-route trait Configure route policy to the app +-scaler trait Manually scale the app + +- Finished successfully. +``` + +And manually apply all WorkloadDefinition and TraitDefinition manifests to have all capabilities back. 
+ +``` +kubectl apply -f charts/vela-core/templates/defwithtemplate +``` +```console +traitdefinition.core.oam.dev/autoscale created +traitdefinition.core.oam.dev/scaler created +traitdefinition.core.oam.dev/metrics created +traitdefinition.core.oam.dev/rollout created +traitdefinition.core.oam.dev/route created +workloaddefinition.core.oam.dev/task created +workloaddefinition.core.oam.dev/webservice created +workloaddefinition.core.oam.dev/worker created +``` +``` +vela workloads +``` +```console +Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0) + +TYPE CATEGORY DESCRIPTION ++task workload One-off task to run a piece of code or script to completion ++webservice workload Long-running scalable service with stable endpoint to receive external traffic ++worker workload Long-running scalable backend worker without network endpoint ++autoscale trait Automatically scale the app following certain triggers or metrics ++metrics trait Configure metrics targets to be monitored for the app ++rollout trait Configure canary deployment strategy to release the app ++route trait Configure route policy to the app ++scaler trait Manually scale the app + +NAME DESCRIPTION +task One-off task to run a piece of code or script to completion +webservice Long-running scalable service with stable endpoint to receive external traffic +worker Long-running scalable backend worker without network endpoint +``` + +### Error: ScopeDefinition exists + +Occasionally you might hit the issue as below. It happens when there is an old OAM Kubernetes Runtime release, or you applied `ScopeDefinition` before. + +``` +vela install +``` +```console + - Installing Vela Core Chart: + install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file + Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system" + rendered manifests contain a resource that already exists. Unable to continue with install + helm.sh/helm/v3/pkg/action.(*Install).Run + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274 + ... + Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system" +``` + +Delete `ScopeDefinition` "healthscopes.core.oam.dev" and try again. 
+ +``` +kubectl delete ScopeDefinition "healthscopes.core.oam.dev" +``` +```console +scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted +``` +``` +vela install +``` +```console +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452 +WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capabilityWARN: handle trait template `manualscalertraits.core.oam.dev` failed +: no template found, you will unable to use this trait capabilityAutomatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0) + +TYPE CATEGORY DESCRIPTION ++task workload One-off task to run a piece of code or script to completion ++webservice workload Long-running scalable service with stable endpoint to receive external traffic ++worker workload Long-running scalable backend worker without network endpoint ++autoscale trait Automatically scale the app following certain triggers or metrics ++metrics trait Configure metrics targets to be monitored for the app ++rollout trait Configure canary deployment strategy to release the app ++route trait Configure route policy to the app ++scaler trait Manually scale the app + +- Finished successfully. +``` + +### You have reached your pull rate limit + +When you look into the logs of Pod kubevela-vela-core and found the issue as below. + +``` +kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core +``` +```console +NAME READY STATUS RESTARTS AGE +kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m +``` + +>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by +>authenticating and upgrading: https://www.docker.com/increase-rate-limit + +You can use github container registry instead. + +``` +docker pull ghcr.io/oam-dev/kubevela/vela-core:latest +``` + +### Warning: Namespace cert-manager exists + +If you hit the issue as below, an `cert-manager` release might exist whose namespace and RBAC related resource conflict +with KubeVela. + +``` +vela install +``` +```console +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +rendered manifests contain a resource that already exists. Unable to continue with install +helm.sh/helm/v3/pkg/action.(*Install).Run + /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274 +... + /opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373 +Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +``` + +Try these steps to fix the problem. + +- Delete release `cert-manager` +- Delete namespace `cert-manager` +- Install KubeVela again + +``` +helm delete cert-manager -n cert-manager +``` +```console +release "cert-manager" uninstalled +``` +``` +kubectl delete ns cert-manager +``` +```console +namespace "cert-manager" deleted +``` +``` +vela install +``` +```console +- Installing Vela Core Chart: +install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file +Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379 +Automatically discover capabilities successfully ✅ (no changes) + +TYPE CATEGORY DESCRIPTION +task workload One-off task to run a piece of code or script to completion +webservice workload Long-running scalable service with stable endpoint to receive external traffic +worker workload Long-running scalable backend worker without network endpoint +autoscale trait Automatically scale the app following certain triggers or metrics +metrics trait Configure metrics targets to be monitored for the app +rollout trait Configure canary deployment strategy to release the app +route trait Configure route policy to the app +scaler trait Manually scale the app +- Finished successfully. +``` + +### How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists? + +If you deploy some other services which will apply MutatingWebhookConfiguration mutating-webhook-configuration, installing +KubeVela will hit the issue as below. + +```shell +- Installing Vela Core Chart: +install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file +Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system" +rendered manifests contain a resource that already exists. 
Unable to continue with install
+helm.sh/helm/v3/pkg/action.(*Install).Run
+	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
+github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime
+	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
+github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run
+	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
+github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2
+	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
+github.com/spf13/cobra.(*Command).execute
+	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
+github.com/spf13/cobra.(*Command).ExecuteC
+	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
+github.com/spf13/cobra.(*Command).Execute
+	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
+main.main
+	/home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
+runtime.main
+	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
+runtime.goexit
+	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
+```
+
+To fix this issue, please upgrade the KubeVela CLI `vela` to a version higher than `v0.2.2` from [KubeVela releases](https://github.com/oam-dev/kubevela/releases).
+
+## Operating
+
+### Autoscale: how to enable metrics server in various Kubernetes clusters?
+
+Autoscale depends on the metrics server, so it has to be enabled in your cluster. Please check whether the metrics server
+is enabled with the command `kubectl top nodes` or `kubectl top pods`.
+
+If the output is similar to the one below, the metrics server is enabled.
+
+```shell
+kubectl top nodes
+```
+```console
+NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
+cn-hongkong.10.0.1.237   288m         7%     5378Mi          78%
+cn-hongkong.10.0.1.238   351m         8%     5113Mi          74%
+```
+```
+kubectl top pods
+```
+```console
+NAME                          CPU(cores)   MEMORY(bytes)
+php-apache-65f444bf84-cjbs5   0m           1Mi
+wordpress-55c59ccdd5-lf59d    1m           66Mi
+```
+
+Otherwise, you have to manually enable the metrics server in your Kubernetes cluster.
+
+- ACK (Alibaba Cloud Container Service for Kubernetes)
+
+Metrics server is already enabled.
+
+- ASK (Alibaba Cloud Serverless Kubernetes)
+
+Metrics server has to be enabled in the `Operations/Add-ons` section of the [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.
+
+![](../../../resources/install-metrics-server-in-ASK.jpg)
+
+Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.
+
+- Kind
+
+Install the metrics server as below, or you can install the [latest version](https://github.com/kubernetes-sigs/metrics-server#installation).
+
+```shell
+kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
+```
+
+Also add the following part under `.spec.template.spec.containers` in the yaml file loaded by `kubectl edit deploy -n kube-system metrics-server`.
+ +Noted: This is just a walk-around, not for production-level use. + +``` +command: +- /metrics-server +- --kubelet-insecure-tls +``` + +- MiniKube + +Enable it with following command. + +```shell +minikube addons enable metrics-server +``` + + +Have fun to [set autoscale](../../extensions/set-autoscale) on your application. diff --git a/versioned_docs/version-v1.1/developers/references/kubectl-plugin.mdx b/versioned_docs/version-v1.1/developers/references/kubectl-plugin.mdx new file mode 100644 index 00000000..ab9d2e9b --- /dev/null +++ b/versioned_docs/version-v1.1/developers/references/kubectl-plugin.mdx @@ -0,0 +1,41 @@ +--- +title: Kubectl plugin +--- +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +Install vela kubectl plugin can help you to ship applications more easily! + +## Installation + +See [advanced-install](../../platform-engineers/advanced-install) + +## Usage + +```shell +$ kubectl vela -h +A Highly Extensible Platform Engine based on Kubernetes and Open Application Model. + +Usage: + kubectl vela [flags] + kubectl vela [command] + +Available Commands: + + comp Show components in capability registry + dry-run Dry Run an application, and output the K8s resources as + result to stdout, only CUE template supported for now + live-diff Dry-run an application, and do diff on a specific app + revison. The provided capability definitions will be used + during Dry-run. If any capabilities used in the app are not + found in the provided ones, it will try to find from + cluster. + show Show the reference doc for a workload type or trait + trait Show traits in capability registry + version Prints out build version information + +Flags: + -h, --help help for vela + +Use "kubectl vela [command] --help" for more information about a command. +``` \ No newline at end of file diff --git a/versioned_docs/version-v1.1/developers/references/restful-api/rest.mdx b/versioned_docs/version-v1.1/developers/references/restful-api/rest.mdx new file mode 100644 index 00000000..fa230177 --- /dev/null +++ b/versioned_docs/version-v1.1/developers/references/restful-api/rest.mdx @@ -0,0 +1,10 @@ +--- +title: Restful API +--- +import useBaseUrl from '@docusaurus/useBaseUrl'; + + + KubeVela Restful API + \ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/binding-traits.md b/versioned_docs/version-v1.1/end-user/binding-traits.md new file mode 100644 index 00000000..cee641ad --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/binding-traits.md @@ -0,0 +1,292 @@ +--- +title: Binding Traits +--- + +Traits is also one of the core concepts of the application. It acts on the component level and allows you to freely bind various operation and maintenance actions and strategies to the component. For example, configuration gateway, label management and container injection (Sidecar) at the business level, or flexible scaler at the admin level, gray release, etc. + +Similar to Component, KubeVela provides a series of out-of-the-box traits, and also allows you to customize and extend other operation and maintenance capabilities with traits. + + +## KubeVela's Trait + + +``` +$ vela traits +NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION +annotations vela-system * true Add annotations on K8s pod for your workload which follows + the pod spec in path 'spec.template'. +configmap vela-system * true Create/Attach configmaps on K8s pod for your workload which + follows the pod spec in path 'spec.template'. 
+env vela-system * false add env on K8s pod for your workload which follows the pod + spec in path 'spec.template.' +ingress vela-system false Enable public web traffic for the component. +ingress-1-20 vela-system false Enable public web traffic for the component, the ingress API + matches K8s v1.20+. +labels vela-system * true Add labels on K8s pod for your workload which follows the + pod spec in path 'spec.template'. +lifecycle vela-system * true Add lifecycle hooks for the first container of K8s pod for + your workload which follows the pod spec in path + 'spec.template'. +rollout vela-system false rollout the component +sidecar vela-system * true Inject a sidecar container to K8s pod for your workload + which follows the pod spec in path 'spec.template'. +... +``` + + +Below, we will take a few typical traits as examples to introduce the usage of KubeVela Trait. + + +## Use Ingress to Configure the Gateway + + +We will configure a gateway for a Web Service component as an example. This component is pulled from the `crccheck/hello-world` image. After setting the gateway, it provides external access through `testsvc.example.com` plus port 8000. + + +Please directly copy the following Shell code, which will be applied to the cluster: + + +```shell +cat < 8000 +Forwarding from [::1]:8000 -> 8000 + +Forward successfully! Opening browser ... +Handling connection for 8000 +``` +Access service: +```shell +curl -H "Host:testsvc.example.com" http://127.0.0.1:8000/ +Hello World + + + ## . + ## ## ## == + ## ## ## ## ## === + /""""""""""""""""\___/ === + ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~ + \______ o _,/ + \ \ _,' + `'--.._\..--'' +``` + + +## Attach Labels and Annotations to Component + + +Labels and Annotations Trait allow you to attach labels and annotations to components, allowing us to trigger the marked components and obtain annotation information on demand when implementing business logic. + +First, we prepare an example of an application, please copy and execute it directly: + + +```shell +cat < + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/date.log; + i=$((i+1)); + sleep 1; + done + volumes: + - name: varlog + mountPath: /var/log + type: emptyDir + traits: + - type: sidecar + properties: + name: count-log + image: busybox + cmd: [ /bin/sh, -c, 'tail -n+1 -f /var/log/date.log'] + volumes: + - name: varlog + path: /var/log +# YAML ends +EOF +``` + + +Use `vela ls` to check whether the application is successfully deployed: + + +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +vela-app-with-sidecar log-gen-worker worker sidecar running healthy 2021-08-29 22:07:07 +0800 CST +``` + + +After success, first to look out the workload generated by the application: + + +``` +$ kubectl get pods -l app.oam.dev/component=log-gen-worker +NAME READY STATUS RESTARTS AGE +log-gen-worker-7bb65dcdd6-tpbdh 2/2 Running 0 45s +``` + + + + +Finally, check the log output by the sidecar, you can see that the sidecar that reads the log has taken effect. + + +``` +kubectl logs -f log-gen-worker-7bb65dcdd6-tpbdh count-log +``` + + +Above, we took several common traits as examples to introduce how to bind traits. For more trait's functions and parameters, please go to built-in Trait view. + +## Custom Trait + +When the built-in Trait cannot meet your needs, you can freely customize the maintenance capabilities. Please refer to [Custom Trait](../platform-engineers/traits/customize-trait) in the Admin Guide for implementation. 
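+
+Before binding any trait, you can check its full schema with `vela show`. A quick sketch (the trait names below come from the `vela traits` output above):
+
+```shell
+# Print the configurable properties of a trait before using it in an application
+vela show sidecar
+vela show labels
+```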
+
+## Next
+
+- [Integrated Cloud Services](./components/cloud-services/provider-and-consume-cloud-services), learn how to integrate cloud services from various cloud vendors
+- [Rollout & Scaler](./rollout-scaler)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/component-observability.md b/versioned_docs/version-v1.1/end-user/component-observability.md
new file mode 100644
index 00000000..4c066c01
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/component-observability.md
@@ -0,0 +1,5 @@
+---
+title: Component Observability
+---
+
+WIP
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md b/versioned_docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md
new file mode 100644
index 00000000..413dfa16
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services.md
@@ -0,0 +1,154 @@
+---
+title: Provision and Consume Cloud Services
+---
+
+Cloud-oriented development is now the norm, and there is an urgent need to integrate cloud resources of different
+sources and types. Whether it is basic object storage, a cloud database, or load balancing, they all face the
+challenges of hybrid-cloud, multi-cloud and other complex environments. KubeVela is well suited to satisfy these needs.
+
+KubeVela efficiently and securely integrates different types of cloud resources through the resource binding capabilities of
+cloud resource Components and Traits. At present, you can directly use the default components for AliCloud Kubernetes (ACK),
+AliCloud Object Storage Service (OSS) and AliCloud Relational Database Service (RDS); with the support of the community,
+more cloud resources will gradually become available by default. You can use cloud resources of various vendors in a
+standardized and unified way.
+
+This tutorial will talk about how to provision and consume cloud resources with Terraform.
+
+> ⚠️ This section requires your platform engineers to have already enabled [add-on 'terraform/provider-alicloud'](../../../install#4-optional-enable-addons).
+
+## Supported Cloud Resource list
+
+Orchestration Type | Cloud Provider | Cloud Resource | Description
+------------ | ------------- | ------------- | -------------
+Terraform | Alibaba Cloud | [ACK](./terraform/alibaba-ack) | Terraform configuration for Alibaba Cloud ACK cluster
+| | | [EIP](./terraform/alibaba-eip) | Terraform configuration for Alibaba Cloud EIP object
+| | | [OSS](./terraform/alibaba-oss) | Terraform configuration for Alibaba Cloud OSS object
+| | | [RDS](./terraform/alibaba-rds) | Terraform configuration for Alibaba Cloud RDS object
+
+## Terraform
+
+All supported Terraform cloud resources can be seen in the list above. You can also filter them with the command `vela components --label type=terraform`.
+
+### Provision cloud resources
+
+Use the following Application to provision an OSS bucket:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: provision-cloud-resource-sample
+spec:
+  components:
+    - name: sample-oss
+      type: alibaba-oss
+      properties:
+        bucket: vela-website-0911
+        acl: private
+        writeConnectionSecretToRef:
+          name: oss-conn
+```
+
+The above `alibaba-oss` component will create an OSS bucket named `vela-website-0911` with private ACL, and store the connection information in a secret named `oss-conn`.
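+
+Once deployed, the generated secret can be inspected directly. A quick check (assuming the application runs in the `default` namespace, since no namespace is set above):
+
+```shell
+# The connection info (e.g. BUCKET_NAME, see the component's outputs) is written into this secret
+kubectl get secret oss-conn -o yaml
+```
+
+Every property supported by a cloud-resource component is listed in its reference doc (for example, [Alibaba Cloud OSS](./terraform/alibaba-oss)), including each property's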
+description, whether it's compulsory, and default value. + +Apply the above application, then check the status: + +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +provision-cloud-resource-sample sample-oss alibaba-oss running healthy Cloud resources are deployed and ready to use 2021-09-11 12:55:57 +0800 CST +``` + +After the phase becomes `running` and `healthy`, you can then check the OSS bucket in Alibaba Cloud console or by [ossutil](https://partners-intl.aliyun.com/help/doc-detail/50452.htm) +command. + +```shell +$ ossutil ls oss:// +CreationTime Region StorageClass BucketName +2021-09-11 12:56:17 +0800 CST oss-cn-beijing Standard oss://vela-website-0911 +``` + +### Consume cloud resources + +Let's deploy +the [application](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml) +below to provision Alibaba Cloud OSS and RDS cloud resources, and consume them by the web component. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: webapp +spec: + components: + - name: express-server + type: webservice + properties: + image: zzxwill/flask-web-application:v0.3.1-crossplane + ports: 80 + traits: + - type: service-binding + properties: + envMappings: + # environments refer to db-conn secret + DB_PASSWORD: + secret: db-conn # 1) If the env name is the same as the secret key, secret key can be omitted. + endpoint: + secret: db-conn + key: DB_HOST # 2) If the env name is different from secret key, secret key has to be set. + username: + secret: db-conn + key: DB_USER + # environments refer to oss-conn secret + BUCKET_NAME: + secret: oss-conn + + - name: sample-db + type: alibaba-rds + properties: + instance_name: sample-db + account_name: oamtest + password: U34rfwefwefffaked + writeConnectionSecretToRef: + name: db-conn + + - name: sample-oss + type: alibaba-oss + properties: + bucket: vela-website-0911 + acl: private + writeConnectionSecretToRef: + name: oss-conn +``` + +The component `sample-db` will generate secret `db-conn` with [these keys](./terraform/alibaba-rds#outputs), and the component +`sample-oss` will generate secret `oss-conn`. These secrets are binded to the Envs of component `express-server` by trait +[Service Binding](../../traits/service-binding). Then the component can consume instances of OSS and RDS. + +Deploy and verify the application. 
+ +```shell +$ vela ls +APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME +webapp express-server webservice service-binding running healthy 2021-09-08 16:50:41 +0800 CST +├─ sample-db alibaba-rds running healthy 2021-09-08 16:50:41 +0800 CST +└─ sample-oss alibaba-oss running healthy 2021-09-08 16:50:41 +0800 CST +``` + +```shell +$ sudo kubectl port-forward deployment/express-server 80:80 + +Forwarding from 127.0.0.1:80 -> 80 +Forwarding from [::1]:80 -> 80 +Handling connection for 80 +Handling connection for 80 +``` + +![](../../../resources/crossplane-visit-application-v3.jpg) + +## Next + +- [Component Observability](../../component-observability) +- [Data Pass Between Components ](../../workflow/component-dependency-parameter) +- [Multi-Cluster and Environment](../../../case-studies/multi-cluster) diff --git a/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-ack.md b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-ack.md new file mode 100644 index 00000000..cc04454b --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-ack.md @@ -0,0 +1,76 @@ +--- +title: Alibaba Cloud ACK +--- + + + +## Description + +Terraform configuration for Alibaba Cloud ACK cluster + +## Sample + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: ack-cloud-source +spec: + components: + - name: ack-cluster + type: alibaba-ack + properties: + writeConnectionSecretToRef: + name: ack-conn + namespace: vela-system + +``` + +## Specification + + +### Properties + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +k8s_worker_number | The number of worker nodes in kubernetes cluster. | number | false | +zone_id | Availability Zone ID | string | false | +node_cidr_mask | The node cidr block to specific how many pods can run on single node. Valid values: [24-28]. | number | false | +proxy_mode | Proxy mode is option of kube-proxy. Valid values: 'ipvs','iptables'. Default to 'iptables'. | string | false | +password | The password of ECS instance. | string | false | +k8s_version | The version of the kubernetes version. Valid values: '1.16.6-aliyun.1','1.14.8-aliyun.1'. Default to '1.16.6-aliyun.1'. | string | false | +memory_size | Memory size used to fetch instance types. | number | false | +vpc_cidr | The cidr block used to launch a new vpc when 'vpc_id' is not specified. | string | false | +vswitch_cidrs | List of cidr blocks used to create several new vswitches when 'vswitch_ids' is not specified. | list | false | +master_instance_types | The ecs instance types used to launch master nodes. | list | false | +worker_instance_types | The ecs instance types used to launch worker nodes. | list | false | +install_cloud_monitor | Install cloud monitor agent on ECS. | bool | false | +k8s_service_cidr | The kubernetes service cidr block. It cannot be equals to vpc's or vswitch's or pod's and cannot be in them. | string | false | +cpu_core_count | CPU core count is used to fetch instance types. | number | false | +vpc_name | The vpc name used to create a new vpc when 'vpc_id' is not specified. Default to variable `example_name` | string | false | +vswitch_name_prefix | The vswitch name prefix used to create several new vswitches. Default to variable 'example_name'. | string | false | +number_format | The number format used to output. | string | false | +vswitch_ids | List of existing vswitch id. 
| list | false | +k8s_name_prefix | The name prefix used to create several kubernetes clusters. Default to variable `example_name` | string | false | +new_nat_gateway | Whether to create a new nat gateway. In this template, a new nat gateway will create a nat gateway, eip and server snat entries. | bool | false | +enable_ssh | Enable login to the node through SSH. | bool | false | +cpu_policy | kubelet cpu policy. Valid values: 'none','static'. Default to 'none'. | string | false | +k8s_pod_cidr | The kubernetes pod cidr block. It cannot be equals to vpc's or vswitch's and cannot be in them. | string | false | +writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | + + +#### writeConnectionSecretToRef + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +name | The secret name which the cloud resource connection will be written to | string | false | +namespace | The secret namespace which the cloud resource connection will be written to | string | false | + +## Outputs + +If `writeConnectionSecretToRef` is set, a secret will be generated with these keys as below: + +Name | Description +------------ | ------------- +name | ACK Kubernetes cluster name | +kubeconfig | The KubeConfig string for the ACK Kubernetes cluster | diff --git a/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-eip.md b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-eip.md new file mode 100644 index 00000000..34547902 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-eip.md @@ -0,0 +1,51 @@ +--- +title: Alibaba Cloud EIP +--- + +## Description + +Terraform configuration for Alibaba Cloud Elastic IP + +## Samples + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: provision-cloud-resource-eip +spec: + components: + - name: sample-eip + type: alibaba-eip + properties: + writeConnectionSecretToRef: + name: eip-conn +``` + +## Specification + + +### Properties + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +name | Name to be used on all resources as prefix. Default to 'TF-Module-EIP'. | string | true | +bandwidth | Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second). 
| number | true | +writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | + + +#### writeConnectionSecretToRef + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +name | The secret name which the cloud resource connection will be written to | string | true | +namespace | The secret namespace which the cloud resource connection will be written to | string | false | + + +## Outputs + +If `writeConnectionSecretToRef` is set, a secret will be generated with these keys as below: + +Name | Description +------------ | ------------- +EIP_ADDRESS | EIP address | diff --git a/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-oss.md b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-oss.md new file mode 100644 index 00000000..82f083c1 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-oss.md @@ -0,0 +1,53 @@ +--- +title: Alibaba Cloud OSS +--- + +## Description + +Terraform configuration for Alibaba Cloud OSS object + +## Samples + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: oss-cloud-source +spec: + components: + - name: sample-oss + type: alibaba-oss + properties: + bucket: vela-website + acl: private + writeConnectionSecretToRef: + name: oss-conn +``` + +## Specification + + +### Properties + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +bucket | OSS bucket name | string | true | +acl | OSS bucket ACL, supported 'private', 'public-read', 'public-read-write' | string | true | +writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | + + +#### writeConnectionSecretToRef + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +name | The secret name which the cloud resource connection will be written to | string | true | +namespace | The secret namespace which the cloud resource connection will be written to | string | false | + + +## Outputs + +If `writeConnectionSecretToRef` is set, a secret will be generated with these keys as below: + +Name | Description +------------ | ------------- +BUCKET_NAME | OSS bucket name | diff --git a/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-rds.md b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-rds.md new file mode 100644 index 00000000..cf2048a6 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cloud-services/terraform/alibaba-rds.md @@ -0,0 +1,58 @@ +--- +title: Alibaba Cloud RDS +--- + +## Description + +Terraform configuration for Alibaba Cloud RDS object + +## Sample + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: rds-cloud-source +spec: + components: + - name: sample-db + type: alibaba-rds + properties: + instance_name: sample-db + account_name: oamtest + password: U34rfwefwefffaked + writeConnectionSecretToRef: + name: db-conn +``` + +## Specification + + +### Properties + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +password | RDS instance account password | string | 
true | +instance_name | RDS instance name | string | true | +account_name | RDS instance user account name | string | true | +writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false | + + +#### writeConnectionSecretToRef + +Name | Description | Type | Required | Default +------------ | ------------- | ------------- | ------------- | ------------- +name | The secret name which the cloud resource connection will be written to | string | true | +namespace | The secret namespace which the cloud resource connection will be written to | string | false | + +## Outputs + +If `writeConnectionSecretToRef` is set, a secret will be generated with these keys as below: + +Name | Description +------------ | ------------- +DB_NAME | RDS instance name | +DB_USER | RDS instance username | +DB_PORT | RDS instance port | +DB_HOST | RDS instance host | +DB_PASSWORD | RDS instance password | diff --git a/versioned_docs/version-v1.1/end-user/components/cue/raw.md b/versioned_docs/version-v1.1/end-user/components/cue/raw.md new file mode 100644 index 00000000..9300b369 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cue/raw.md @@ -0,0 +1,36 @@ +--- +title: Raw +--- + +Use raw Kubernetes resources directly. For example, a Job. + +## How to use + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-raw +spec: + components: + - name: myjob + type: raw + properties: + apiVersion: batch/v1 + kind: Job + metadata: + name: pi + spec: + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never + backoffLimit: 4 +``` + +## Attributes + +Just write the whole Kubernetes Resource in properties. \ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/components/cue/task.md b/versioned_docs/version-v1.1/end-user/components/cue/task.md new file mode 100644 index 00000000..5f7333de --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/cue/task.md @@ -0,0 +1,177 @@ +--- +title: Task +--- + +Describes jobs that run code or a script to completion. + +## How-to + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-worker +spec: + components: + - name: mytask + type: task + properties: + image: perl + count: 10 + cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] +``` + +## Attributes + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| -------------- | ------------------------------------------------------------------------------------------------ | --------------------------------- | -------- | ------- | +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| count | Specify number of tasks to run in parallel | int | true | 1 | +| restart | Define the job restart policy, the value can only be Never or OnFailure. By default, it's Never. | string | true | Never | +| image | Which image would you like to use for your service | string | true | | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | +| memory | Specifies the attributes of the memory resource required for the container. | string | false | | +| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | | +| livenessProbe | Instructions for assessing whether the container is alive. 
| [livenessProbe](#livenessProbe) | false | | +| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessProbe) | false | | + + +### readinessProbe + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +#### tcpSocket + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +#### exec + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. 
Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +### livenessProbe + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +#### tcpSocket + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +#### exec + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. 
| | | |
+
+
+### volumes
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- |
+| name | | string | true | |
+| mountPath | | string | true | |
+| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |
+
+
+### env
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- |
+| name | Environment variable name | string | true | |
+| value | The value of the environment variable | string | false | |
+| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+
+
+#### valueFrom
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- |
+| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+
+
+#### secretKeyRef
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- |
+| name | The name of the secret in the pod's namespace to select from | string | true | |
+| key | The key of the secret to select from. Must be a valid secret key | string | true | |
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/components/cue/webservice.md b/versioned_docs/version-v1.1/end-user/components/cue/webservice.md
new file mode 100644
index 00000000..8bcacc1f
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/components/cue/webservice.md
@@ -0,0 +1,220 @@
+---
+title: Web Service
+---
+
+Service-oriented components support external access to services, with the container as the core; their functions cover the needs of most microservice scenarios.
+
+Please copy the shell script below and apply it to the cluster:
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+EOF
+```
+
+You can check the application status by `kubectl get application website -o yaml`:
+```shell
+$ kubectl get application website -o yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+  ... # Omit non-critical information
+spec:
+  components:
+    - name: frontend
+      properties:
+        ... # Omit non-critical information
+      type: webservice
+status:
+  conditions:
+    - lastTransitionTime: "2021-08-28T10:26:47Z"
+      reason: Available
+      status: "True"
+      ... # Omit non-critical information
+      type: HealthCheck
+  observedGeneration: 1
+  ... # Omit non-critical information
+  services:
+    - healthy: true
+      name: frontend
+      workloadDefinition:
+        apiVersion: apps/v1
+        kind: Deployment
+  status: running
+```
+
+When we see that the `status.services.healthy` field is true and the status is running, it means that the entire application is delivered successfully.
+
+If the status shows rendering, or healthy is false, the application has either failed to deploy or is still being deployed. Please proceed according to the information returned by `kubectl get application website -o yaml`.
+
+You can also check it via the vela CLI, using the following command:
+```shell
+$ vela ls
+APP      COMPONENT  TYPE        TRAITS  PHASE    HEALTHY  STATUS  CREATED-TIME
+website  frontend   webservice          running  healthy          2021-08-28 18:26:47 +0800 CST
+```
+We also see that the PHASE of the app is running and the STATUS is healthy.
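+
+Beyond `image` and `port`, the optional attributes listed in the next section are set in the same `properties` block. Here is a minimal sketch (the resource values, variable names, and the `db-conn` secret are illustrative, not from the original example):
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+        cpu: "0.5"        # half a CPU core
+        memory: 512Mi     # memory required for the container
+        env:
+          - name: GREETING        # plain key/value pair
+            value: hello
+          - name: DB_PASSWORD     # value read from a Secret key
+            valueFrom:
+              secretKeyRef:
+                name: db-conn
+                key: DB_PASSWORD
+```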
+ +## Attributes + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- | +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| port | Which port do you want customer traffic sent to | int | true | 80 | +| imagePullPolicy | Specify image pull policy for your service | string | false | | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | +| memory | Specifies the attributes of the memory resource required for the container. | string | false | | +| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | | +| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessProbe) | false | | +| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessProbe) | false | | +| imagePullSecrets | Specify image pull secrets for your service | []string | false | | + + +### readinessProbe + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). | | | | + + +#### tcpSocket + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. 
| int | true | |
+
+
+#### httpGet
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ----------- | -------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
+| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
+| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
+| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | |
+
+
+##### httpHeaders
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ----- | ----------- | ------ | -------- | ------- |
+| name | | string | true | |
+| value | | string | true | |
+
+
+#### exec
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ------- | ---------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
+| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | |
+| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | |
+| | whilst all other exit codes are considered failures. | | | |
+
+
+### livenessProbe
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ------------------- | ------------------------------------------------------------------------------------------------------ | ----------------------- | -------- | ------- |
+| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | |
+| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | |
+| | with both the httpGet attribute and the tcpSocket attribute. | | | |
+| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | |
+| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | |
+| | exclusive with both the exec attribute and the tcpSocket attribute. | | | |
+| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | |
+| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | |
+| | both the exec attribute and the httpGet attribute. | | | |
+| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
+| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
+| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
+| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
+| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 |
+| | not ready (readiness probe). | | | |
+
+
+#### tcpSocket
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ---- | --------------------------------------------------------------------------------------- | ---- | -------- | ------- |
+| port | The TCP socket within the container that should be probed to assess container health. | int | true | |
+
+
+#### httpGet
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ----------- | -------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
+| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
+| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
+| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | |
+
+
+##### httpHeaders
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ----- | ----------- | ------ | -------- | ------- |
+| name | | string | true | |
+| value | | string | true | |
+
+
+#### exec
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ------- | ---------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
+| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | |
+| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | |
+| | whilst all other exit codes are considered failures. | | | |
+
+
+### volumes
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- |
+| name | | string | true | |
+| mountPath | | string | true | |
+| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |
+
+
+### env
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- |
+| name | Environment variable name | string | true | |
+| value | The value of the environment variable | string | false | |
+| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+
+
+#### valueFrom
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- |
+| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+
+
+##### secretKeyRef
+
+| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- |
+| name | The name of the secret in the pod's namespace to select from | string | true | |
+| key | The key of the secret to select from. Must be a valid secret key | string | true | |
diff --git a/versioned_docs/version-v1.1/end-user/components/cue/worker.md b/versioned_docs/version-v1.1/end-user/components/cue/worker.md
new file mode 100644
index 00000000..f320a631
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/components/cue/worker.md
@@ -0,0 +1,166 @@
+---
+title: Worker
+---
+
+Describes long-running, scalable, containerized services that run in the backend. They do NOT have network endpoints to receive external network traffic.
+ +## How-to + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-worker +spec: + components: + - name: myworker + type: worker + properties: + image: "busybox" + cmd: + - sleep + - "1000" +``` + +## Attributes + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- | +| cmd | Commands to run in the container | []string | false | | +| env | Define arguments by using environment variables | [[]env](#env) | false | | +| image | Which image would you like to use for your service | string | true | | +| imagePullPolicy | Specify image pull policy for your service | string | false | | +| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | | +| memory | Specifies the attributes of the memory resource required for the container. | string | false | | +| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | | +| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessProbe) | false | | +| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessProbe) | false | | +| imagePullSecrets | Specify image pull secrets for your service | []string | false | | + + +### readinessProbe +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). 
| | | | + + +##### tcpSocket +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +##### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +### livenessProbe +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- | +| exec | Instructions for assessing container health by executing a command. Either this attribute or the | [exec](#exec) | false | | +| | httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive | | | | +| | with both the httpGet attribute and the tcpSocket attribute. | | | | +| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute | [httpGet](#httpGet) | false | | +| | or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually | | | | +| | exclusive with both the exec attribute and the tcpSocket attribute. | | | | +| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the | [tcpSocket](#tcpSocket) | false | | +| | exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with | | | | +| | both the exec attribute and the httpGet attribute. | | | | +| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 | +| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 | +| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 | +| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 | +| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or | int | true | 3 | +| | not ready (readiness probe). 
| | | | + + +#### tcpSocket +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- | +| port | The TCP socket within the container that should be probed to assess container health. | int | true | | + + +#### httpGet +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- | +| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | | +| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | | +| httpHeaders | | [[]httpHeaders](#httpHeaders) | false | | + + +##### httpHeaders +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ----- | ----------- | ------ | -------- | ------- | +| name | | string | true | | +| value | | string | true | | + + +#### exec +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- | +| command | A command to be executed inside the container to assess its health. Each space delimited token of | []string | true | | +| | the command is a separate array element. Commands exiting 0 are considered to be successful probes, | | | | +| | whilst all other exit codes are considered failures. | | | | + + +#### volumes +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- | +| name | | string | true | | +| mountPath | | string | true | | +| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | | + + +#### env +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- | +| name | Environment variable name | string | true | | +| value | The value of the environment variable | string | false | | +| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | | + + +#### valueFrom +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- | +| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | | + + +#### secretKeyRef + +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | +| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- | +| name | The name of the secret in the pod's namespace to select from | string | true | | +| key | The key of the secret to select from. Must be a valid secret key | string | true | | diff --git a/versioned_docs/version-v1.1/end-user/components/helm.md b/versioned_docs/version-v1.1/end-user/components/helm.md new file mode 100644 index 00000000..d1c888b5 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/helm.md @@ -0,0 +1,138 @@ +--- +title: Helm +--- + +KubeVela's Helm component meets the needs of users to connect to Helm Chart. You can deploy any ready-made Helm chart software package from Helm Repo, Git Repo or OSS bucket through the Helm component, and overwrite its parameters. 
+
+## Deploy From Helm Repo
+
+In this `Application`, we hope to deliver a component called redis-comp. It is a chart from the [bitnami](https://charts.bitnami.com/bitnami) repository.
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: app-delivering-chart
+spec:
+  components:
+    - name: redis-comp
+      type: helm
+      properties:
+        repoType: helm
+        url: https://charts.bitnami.com/bitnami
+        chart: redis
+        version: 12.6.2
+EOF
+```
+
+## Deploy From OSS bucket
+
+If your OSS bucket needs identity verification, create a Secret first:
+
+```shell
+$ kubectl create secret generic bucket-secret --from-literal=accesskey= --from-literal=secretkey=
+secret/bucket-secret created
+```
+
+1. Example
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: bucket-app
+spec:
+  components:
+    - name: bucket-comp
+      type: helm
+      properties:
+        repoType: oss
+        # required if bucket is private
+        secretRef: bucket-secret
+        chart: ./chart/podinfo-5.1.3.tgz
+        url: oss-cn-beijing.aliyuncs.com
+        oss:
+          bucketName: definition-registry
+```
+
+## Deploy From Git Repo
+
+| Parameters | Description | Example |
+| ---------- | ----------- | ------- |
+| repoType | required, indicates where it's from | git |
+| pullInterval | optional, interval at which to synchronize with the Git repo, 5 minutes by default | 10m |
+| url | required, Git Repo address | https://github.com/oam-dev/terraform-controller |
+| secretRef | optional, the name of the Secret object that holds the credentials required to pull the Git repository. For HTTP/S basic authentication, the Secret must contain the username and password fields. For SSH authentication, the identity, identity.pub and known_hosts fields must be included | sec-name |
+| timeout | optional, the timeout period of the download operation, 20s by default | 60s |
+| chart | required, chart storage path (key) | ./chart/podinfo-5.1.3.tgz |
+| version | optional, in a Git source, this parameter has no effect | |
+| targetNamespace | optional, the namespace to install the chart into, decided by the chart itself | your-ns |
+| releaseName | optional, installed release name | your-rn |
+| values | optional, overwrite the Values.yaml of the chart for Helm rendering | |
+| git.branch | optional, Git branch, master by default | dev |
+
+**How-to**
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: app-delivering-chart
+spec:
+  components:
+    - name: terraform-controller
+      type: helm
+      properties:
+        repoType: git
+        url: https://github.com/oam-dev/terraform-controller
+        chart: ./chart
+        git:
+          branch: master
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/components/kustomize.md b/versioned_docs/version-v1.1/end-user/components/kustomize.md
new file mode 100644
index 00000000..5a62bd8f
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/components/kustomize.md
@@ -0,0 +1,120 @@
+---
+title: Kustomize
+---
+
+Create a Kustomize component; it can come from a Git repo or an OSS bucket.
+
+## Deploy From OSS bucket
+
+KubeVela's `kustomize` component meets the needs of users to deliver YAML files and folders directly as component artifacts. Whether your YAML file/folder is stored in a Git repo or an OSS bucket, KubeVela can read and deliver it.
+
+Let's take the YAML folder component from the OSS bucket registry as an example to explain the usage. In this `Application`, we hope to deliver a component named bucket-comp. The deployment file corresponding to the component is stored in the cloud storage OSS bucket named definition-registry. `kustomize.yaml` comes from the address oss-cn-beijing.aliyuncs.com, and the path is `./app/prod/`.
+
+
+1. (Optional) If your OSS bucket needs identity verification, create a Secret:
+
+```shell
+$ kubectl create secret generic bucket-secret --from-literal=accesskey= --from-literal=secretkey=
+secret/bucket-secret created
+```
+
+2. 
Deploy it: + +```shell +cat <// + git: + branch: master + path: ./app/dev/ +``` + +**Override Kustomize** + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: kustomize + properties: + # ...omitted for brevity + path: ./app/ + +``` + diff --git a/versioned_docs/version-v1.1/end-user/components/more.md b/versioned_docs/version-v1.1/end-user/components/more.md new file mode 100644 index 00000000..378e5ce6 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/components/more.md @@ -0,0 +1,34 @@ +--- +title: Want More? +--- + +## 1. Get from capability registry + +You can get more from official capability registry by using KubeVela [plugin](../../developers/references/kubectl-plugin#install-kubectl-vela-plugin)。 + +### List + +By default, the commands will list capabilities from [repo](https://registry.kubevela.net) maintained by KubeVela. + +```shell +$ kubectl vela comp --discover +Showing components from registry: oss://registry.kubevela.net +NAME REGISTRY DEFINITION +webserver default deployments.apps +``` + +### Install + +Then you can install a component like: + +```shell +$ kubectl vela comp get webserver +Installing component capability webserver +Successfully install component: webserver +``` + +## 2. Designed by yourself + +* Read [how to edit definitions](../../platform-engineers/cue/definition-edit) to build your own capability from existing ones. +* [Build your own capability from scratch](../../platform-engineers/cue/advanced) + and learn more features about how to [define custom components](../../platform-engineers/components/custom-component). \ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/debug/health.md b/versioned_docs/version-v1.1/end-user/debug/health.md new file mode 100644 index 00000000..5f976649 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/debug/health.md @@ -0,0 +1,89 @@ +--- +title: Aggregated Health Probe +--- + +The `HealthyScope` allows you to define an aggregated health probe for all components in same application. + +1.Create health scope instance. +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: HealthScope +metadata: + name: health-check + namespace: default +spec: + probe-interval: 60 + workloadRefs: + - apiVersion: apps/v1 + kind: Deployment + name: express-server +``` +2. Create an application that drops in this health scope. +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: vela-app +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8080 # change port + cpu: 0.5 # add requests cpu units + scopes: + healthscopes.core.oam.dev: health-check +``` +3. Check the reference of the aggregated health probe (`status.service.scopes`). +```shell +kubectl get app vela-app -o yaml +``` +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: vela-app +... +status: +... + services: + - healthy: true + name: express-server + scopes: + - apiVersion: core.oam.dev/v1alpha2 + kind: HealthScope + name: health-check +``` +4.Check health scope detail. +```shell +kubectl get healthscope health-check -o yaml +``` +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: HealthScope +metadata: + name: health-check +... 
+spec:
+  probe-interval: 60
+  workloadRefs:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: express-server
+status:
+  healthConditions:
+    - componentName: express-server
+      diagnosis: 'Ready:1/1 '
+      healthStatus: HEALTHY
+      targetWorkload:
+        apiVersion: apps/v1
+        kind: Deployment
+        name: express-server
+  scopeHealthCondition:
+    healthStatus: HEALTHY
+    healthyWorkloads: 1
+    total: 1
+```
+
+It shows the aggregated health status for all components in this application.
diff --git a/versioned_docs/version-v1.1/end-user/debug/monitoring.md b/versioned_docs/version-v1.1/end-user/debug/monitoring.md
new file mode 100644
index 00000000..aae812cb
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/debug/monitoring.md
@@ -0,0 +1,9 @@
+---
+title: Monitoring
+---
+
+TBD, Content Overview:
+
+1. We will move all installation scripts to a separate doc, possibly named Install Capability Providers (e.g. https://knative.dev/docs/install/install-extensions/), and install the monitoring trait (along with the prometheus/grafana controller).
+2. Add the monitoring trait into an Application.
+3. View it with Grafana.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/initializer-end-user.md b/versioned_docs/version-v1.1/end-user/initializer-end-user.md
new file mode 100644
index 00000000..20d6058d
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/initializer-end-user.md
@@ -0,0 +1,3 @@
+---
+title: Environments
+---
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/overview-end-user.md b/versioned_docs/version-v1.1/end-user/overview-end-user.md
new file mode 100644
index 00000000..ccd07b3a
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/overview-end-user.md
@@ -0,0 +1,48 @@
+---
+title: Overview
+---
+
+Here are some work items on the roadmap:
+
+## Embed rollout in an application
+
+We will support embedded rollout settings in an application. In this way, any change to the
+application will naturally roll out in a controlled manner instead of replacing everything at once.
+
+## Add support for trait upgrades
+
+There are three trait-related work items that complement each other:
+
+- we need to make sure that traits that work on the previous application still work on the new
+  application
+- traits themselves also need a controlled way to upgrade instead of replacing the old in one shot
+- the rollout controller should suppress conflicting traits (like HPA/Scaler) during the rollout process
+
+## Add metrics-based rollout checking
+
+We will integrate with Prometheus and use the metrics generated by the application to control the
+flow of the rollout. This part will be very similar to Flagger.
+
+## Add traffic-shifting support
+
+We will add traffic-shifting-based upgrade strategies such as canary and A/B testing. We plan to support
+Istio in our first version. This part will be very similar to Flagger.
+
+## Support upgrading more than one component
+
+Currently, we can only upgrade one component at a time. We will support upgrading multiple components in
+one application at once.
+
+## Support Helm Rollout strategy
+
+Currently, we only support upgrading Kubernetes resources. We will support Helm-based workloads in the
+future. 
+
+## Add more restrictions on what part of the rollout plan can be changed during rolling
+
+Here are some examples:
+
+- the BatchPartition field cannot decrease beyond the current batch
+- the RolloutBatches field can only change the part after the current batch
+- the ComponentList field cannot be changed after rolling starts
+- the RolloutStrategy/TargetSize/NumBatches cannot be changed
diff --git a/versioned_docs/version-v1.1/end-user/pipeline.md b/versioned_docs/version-v1.1/end-user/pipeline.md
new file mode 100644
index 00000000..9e307904
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/pipeline.md
@@ -0,0 +1,8 @@
+---
+title: Build CI/CD Pipeline
+---
+
+TBD, Content Overview:
+
+1. Install argo/tekton.
+2. Run the pipeline example: https://github.com/oam-dev/kubevela/tree/master/docs/examples/argo
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/policies/envbinding.md b/versioned_docs/version-v1.1/end-user/policies/envbinding.md
new file mode 100644
index 00000000..c04664cd
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/policies/envbinding.md
@@ -0,0 +1,139 @@
+---
+title: Multi-Environment Deployment
+---
+
+This documentation will introduce how to use env-binding to automate multi-stage application rollout across multiple environments.
+
+## Background
+
+Users usually have two or more environments to deploy applications to. For example, a dev environment to test the application code, and a production environment to deploy applications to serve live traffic. The deployment configuration also differs slightly between environments.
+
+## Multi-env Application Deployment
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: example-app
+  namespace: test
+spec:
+  components:
+    - name: hello-world-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: scaler
+          properties:
+            replicas: 1
+    - name: data-worker
+      type: worker
+      properties:
+        image: busybox
+        cmd:
+          - sleep
+          - '1000000'
+  policies:
+    - name: example-multi-env-policy
+      type: env-binding
+      properties:
+        envs:
+          - name: staging
+            placement: # selecting the cluster to deploy to
+              clusterSelector:
+                name: cluster-staging
+            selector: # selecting which component to use
+              components:
+                - hello-world-server
+
+          - name: prod
+            placement:
+              clusterSelector:
+                name: cluster-prod
+            patch: # overlay patch on above components
+              components:
+                - name: hello-world-server
+                  type: webservice
+                  traits:
+                    - type: scaler
+                      properties:
+                        replicas: 3
+
+  workflow:
+    steps:
+      # deploy to staging env
+      - name: deploy-staging
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: staging
+
+      # manual check
+      - name: manual-approval
+        type: suspend
+
+      # deploy to prod env
+      - name: deploy-prod
+        type: deploy2env
+        properties:
+          policy: example-multi-env-policy
+          env: prod
+```
+
+We apply the Application `example-app` in the example.
+
+> Before applying this example application, you need a namespace named `test` in the current cluster and two sub-clusters. You can create it by executing `kubectl create ns test`.
+
+```shell
+kubectl apply -f app.yaml
+```
+
+After the Application is created, a configured Application will be created under the `test` namespace.
+
+```shell
+$ kubectl get app -n test
+NAME          COMPONENT            TYPE         PHASE     HEALTHY   STATUS   AGE
+example-app   hello-world-server   webservice   running                      25s
+```
+
+If you want to learn more about `env-binding`, please refer to **[Multi Cluster Deployment](../../case-studies/multi-cluster)**.
+
+## Appendix: Parameter List
+
+Name | Desc | Type | Required | Default Value
+:---------- | :----------- | :----------- | :----------- | :-----------
+envs|environment configuration| `env` array|true|null
+
+env
+
+Name | Desc | Type | Required | Default Value
+:----------- | :------------ | :------------ | :------------ | :------------
+name|environment name|string|true|null
+patch|configure the components of the Application|`patch`|true|null
+placement|resource scheduling strategy, choose to deploy the configured resources to the specified cluster or namespace| `placement`|true|null
+| selector | identify which components to be deployed for this environment, default to be empty which means deploying all components | `selector` | false | null |
+
+patch
+
+Name | Desc | Type | Required | Default Value
+:----------- | :------------ | :------------ | :------------ | :------------
+components|components that need to be configured| component array|true|null
+
+placement
+
+Name | Desc | Type | Required | Default Value
+:----------- | :------------ | :------------ | :------------ | :------------
+clusterSelector| select deploy cluster by cluster name | `clusterSelector` |true|null
+
+selector
+
+| Name | Desc | Type | Required | Default Value |
+| :--------- | :------------------- | :------------- | :------- | :----- |
+| components | component names to be used | string array | false | null |
+
+clusterSelector
+
+Name | Desc | Type | Required | Default Value
+:----------- | :------------ | :------------ | :------------ | :------------
+name |cluster name| string |false|null
diff --git a/versioned_docs/version-v1.1/end-user/policies/health.md b/versioned_docs/version-v1.1/end-user/policies/health.md
new file mode 100644
index 00000000..9eee1fb2
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/policies/health.md
@@ -0,0 +1,123 @@
+---
+title: Health Status Check
+---
+
+This documentation will introduce how to use the `health` policy to apply periodical
+health checking to an application.
+
+## Background
+
+After an application is deployed, users usually want to monitor or observe the
+health condition of the running application as well as each of its components.
+The health policy decouples the health checking procedure from application workflow
+execution.
+It allows setting an independent health inspection cycle, such as checking every 30s.
+That helps users notice as soon as an application turns unhealthy and
+follow the diagnosis message to troubleshoot.
+ +## Health Policy + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: app-healthscope-unhealthy +spec: + components: + - name: my-server + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:v1 + port: 8080 + traits: + - type: ingress + properties: + domain: test.my.domain + http: + "/": 8080 + - name: my-server-unhealthy + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:boom # make it unhealthy + port: 8080 + policies: + - name: health-policy-demo + type: health + properties: + probeInterval: 5 + probeTimeout: 10 +``` + +We apply the sample application including two components, `my-server` is +supposed to be healthy while `my-server-unhealthy` is supposed to be unhealthy +(because of invalid image). + +As shown in the sample, a `Health` policy is specified. +`Health` policy accepts two optional properties, `probeInterval` indicating time +duration between checking (default is 30s) and `probeTimeout` indicating time +duration before checking timeout (default is 10s). + +```yaml +... + policies: + - name: health-policy-demo + type: health + properties: + probeInterval: 5 + probeTimeout: 10 +... +``` + +To learn about defining health checking rules, please refer to **[Status Write Back](../../platform-engineers/traits/status)**. + +Finally we can observe application health status from its `status.services`. +Here is a snippet of health status. + +```yaml +... + services: + - healthy: true + message: 'Ready:1/1 ' + name: my-server + scopes: + - apiVersion: core.oam.dev/v1alpha2 + kind: HealthScope + name: health-policy-demo + namespace: default + uid: 1d54b5a0-d951-4f20-9541-c2d76c412a94 + traits: + - healthy: true + message: | + No loadBalancer found, visiting by using 'vela port-forward app-healthscope-unhealthy' + type: ingress + workloadDefinition: + apiVersion: apps/v1 + kind: Deployment + - healthy: false + message: 'Ready:0/1 ' + name: my-server-unhealthy + scopes: + - apiVersion: core.oam.dev/v1alpha2 + kind: HealthScope + name: health-policy-demo + namespace: default + uid: 1d54b5a0-d951-4f20-9541-c2d76c412a94 + workloadDefinition: + apiVersion: apps/v1 + kind: Deployment + status: running +... +``` + +## Appendix: Parameter List + +Name | Desc | Type | Required | Default Value +:---------- | :----------- | :----------- | :----------- | :----------- +probeInterval|time duration between checking (in units of seconds) | int |false| 30 +probeTimeout|time duration before checking timeout (in units of seconds) | int |false| 10 \ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/rollout-scaler.md b/versioned_docs/version-v1.1/end-user/rollout-scaler.md new file mode 100644 index 00000000..537beacc --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/rollout-scaler.md @@ -0,0 +1,5 @@ +--- +title: Rollout & Scaler +--- + +WIP \ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/traits/annotations-and-labels.md b/versioned_docs/version-v1.1/end-user/traits/annotations-and-labels.md new file mode 100644 index 00000000..f34570e4 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/traits/annotations-and-labels.md @@ -0,0 +1,55 @@ +--- +title: Labels and Annotations +--- + +`labels` and `annotations` traits allow us to mark annotations and labels on Pod for workload. 
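+
+Conceptually, both traits patch the metadata of the workload's Pod template. With the example in the "How to use" section below, the resulting Pod metadata would roughly look like this sketch (an excerpt, not literal controller output):
+
+```yaml
+# Excerpt of the patched Pod template metadata
+metadata:
+  labels:
+    release: stable                 # added by the labels trait
+  annotations:
+    description: web application    # added by the annotations trait
+```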
+
+## Specification
+
+```shell
+$ vela show annotations
+# Properties
++-----------+-------------+-------------------+----------+---------+
+|   NAME    | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
++-----------+-------------+-------------------+----------+---------+
+| -         |             | map[string]string | true     |         |
++-----------+-------------+-------------------+----------+---------+
+```
+
+```shell
+$ vela show labels
+# Properties
++-----------+-------------+-------------------+----------+---------+
+|   NAME    | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
++-----------+-------------+-------------------+----------+---------+
+| -         |             | map[string]string | true     |         |
++-----------+-------------+-------------------+----------+---------+
+```
+
+Both take plain string key-value pairs.
+
+## How to use
+
+```yaml
+# myapp.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: labels
+          properties:
+            "release": "stable"
+        - type: annotations
+          properties:
+            "description": "web application"
+```
+
+Then the labels and annotations will be attached to the pods.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/traits/autoscaler.md b/versioned_docs/version-v1.1/end-user/traits/autoscaler.md
new file mode 100644
index 00000000..56899577
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/traits/autoscaler.md
@@ -0,0 +1,34 @@
+---
+title: AutoScaler
+---
+
+## Specification
+
+
+| NAME    | DESCRIPTION                                                                      | TYPE | REQUIRED | DEFAULT |
+| ------- | -------------------------------------------------------------------------------- | ---- | -------- | ------- |
+| min     | Specify the minimal number of replicas to which the autoscaler can scale down    | int  | true     | 1       |
+| max     | Specify the maximum number of replicas to which the autoscaler can scale up      | int  | true     | 10      |
+| cpuUtil | Specify the average cpu utilization, for example, 50 means the CPU usage is 50%  | int  | true     | 50      |
+
+## How to use
+
+```yaml
+# sample.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend      # This is the component I want to deploy
+      type: webservice
+      properties:
+        image: nginx
+      traits:
+        - type: cpuscaler # Automatically scale the component by CPU usage after deployed
+          properties:
+            min: 1
+            max: 10
+            cpuUtil: 60
+```
diff --git a/versioned_docs/version-v1.1/end-user/traits/ingress.md b/versioned_docs/version-v1.1/end-user/traits/ingress.md
new file mode 100644
index 00000000..c2d726bf
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/traits/ingress.md
@@ -0,0 +1,110 @@
+---
+title: Ingress
+---
+
+The `ingress` trait exposes a component to the public Internet via a valid domain.
+
+## Specification
+
+```shell
+kubectl vela show ingress
+```
+```console
+# Properties
++--------+-------------------------------------------------------------------------------+----------------+----------+---------+
+|  NAME  |                                  DESCRIPTION                                  |      TYPE      | REQUIRED | DEFAULT |
++--------+-------------------------------------------------------------------------------+----------------+----------+---------+
+| http   | Specify the mapping relationship between the http path and the workload port | map[string]int | true     |         |
+| domain | Specify the domain you want to expose                                        | string         | true     |         |
++--------+-------------------------------------------------------------------------------+----------------+----------+---------+
+```
+
+## How to use
+
+Attach an `ingress` trait to the component you want to expose and deploy.
+
+```yaml
+# vela-app.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-app
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              "/": 8000
+```
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
+```
+```console
+application.core.oam.dev/first-vela-app created
+```
+
+Check the status until we see that `status` is `running` and the services are `healthy`:
+
+```bash
+kubectl get application first-vela-app -w
+```
+```console
+NAME             COMPONENT        TYPE         PHASE            HEALTHY   STATUS   AGE
+first-vela-app   express-server   webservice   healthChecking                      14s
+first-vela-app   express-server   webservice   running          true               42s
+```
+
+Check the trait detail for its visiting URL:
+
+```shell
+kubectl get application first-vela-app -o yaml
+```
+```console
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-app
+  namespace: default
+spec:
+...
+  services:
+  - healthy: true
+    name: express-server
+    traits:
+    - healthy: true
+      message: 'Visiting URL: testsvc.example.com, IP: '
+      type: ingress
+  status: running
+...
+```
+
+Then you will be able to visit this application via its domain.
+
+```
+curl -H "Host:testsvc.example.com" http:///
+```
+```console
+
+Hello World
+
+
+                                       ##         .
+                                 ## ## ##        ==
+                              ## ## ## ## ##    ===
+                          /""""""""""""""""\___/ ===
+                     ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
+                          \______ o          _,/
+                           \      \       _,'
+                            `'--.._\..--''
+
+```
+
+> ⚠️ This section requires that your runtime cluster have a working ingress controller.
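+
+Because `http` is a map from URL paths to container ports, a single `ingress`
+trait can route several paths on the same domain. Below is a hedged sketch of
+just the trait block; the `/api` path and port `8080` are made-up values for
+illustration:
+
+```yaml
+# Sketch only: this block nests under a component of an Application.
+traits:
+  - type: ingress
+    properties:
+      domain: testsvc.example.com
+      http:
+        "/": 8000    # root path routed to the main service port
+        "/api": 8080 # hypothetical second path/port pair
+```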
\ No newline at end of file diff --git a/versioned_docs/version-v1.1/end-user/traits/kustomize-patch.md b/versioned_docs/version-v1.1/end-user/traits/kustomize-patch.md new file mode 100644 index 00000000..38844298 --- /dev/null +++ b/versioned_docs/version-v1.1/end-user/traits/kustomize-patch.md @@ -0,0 +1,194 @@ +--- +title: Kustomize Patch +--- + +## kustomize-patch Specification + +```shell +vela show kustomize-patch +``` + +```shell +# Properties ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ +| patches | a list of StrategicMerge or JSON6902 patch to selected target | [[]patches](#patches) | true | | ++---------+---------------------------------------------------------------+-----------------------+----------+---------+ + + +## patches ++--------+---------------------------------------------------+-------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------+---------------------------------------------------+-------------------+----------+---------+ +| patch | Inline patch string, in yaml style | string | true | | +| target | Specify the target the patch should be applied to | [target](#target) | true | | ++--------+---------------------------------------------------+-------------------+----------+---------+ + + +### target ++--------------------+-------------+--------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++--------------------+-------------+--------+----------+---------+ +| name | | string | false | | +| group | | string | false | | +| version | | string | false | | +| kind | | string | false | | +| namespace | | string | false | | +| annotationSelector | | string | false | | +| labelSelector | | string | false | | ++--------------------+-------------+--------+----------+---------+ +``` + +### How to use + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: bucket-app +spec: + components: + - name: bucket-comp + type: kustomize + # ... omitted for brevity + traits: + - type: kustomize-patch + properties: + patches: + - patch: |- + apiVersion: v1 + kind: Pod + metadata: + name: not-used + labels: + app.kubernetes.io/part-of: test-app + target: + labelSelector: "app=podinfo" +``` + +In this example, the `kustomize-patch` will patch the content for all Pods with label `app=podinfo`. + +## kustomize-json-patch Specification + +You could use [JSON6902 format](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesjson6902/) to patch the component. Get to know it first: + +```shell +vela show kustomize-json-patch +``` + +```shell +# Properties ++-------------+---------------------------+-------------------------------+----------+---------+ +| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT | ++-------------+---------------------------+-------------------------------+----------+---------+ +| patchesJson | A list of JSON6902 patch. 
| [[]patchesJson](#patchesJson) | true | |
++-------------+---------------------------+-------------------------------+----------+---------+
+
+
+## patchesJson
++--------+-------------+-------------------+----------+---------+
+|  NAME  | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
++--------+-------------+-------------------+----------+---------+
+| patch  |             | [patch](#patch)   | true     |         |
+| target |             | [target](#target) | true     |         |
++--------+-------------+-------------------+----------+---------+
+
+
+#### target
++--------------------+-------------+--------+----------+---------+
+|        NAME        | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
++--------------------+-------------+--------+----------+---------+
+| name               |             | string | false    |         |
+| group              |             | string | false    |         |
+| version            |             | string | false    |         |
+| kind               |             | string | false    |         |
+| namespace          |             | string | false    |         |
+| annotationSelector |             | string | false    |         |
+| labelSelector      |             | string | false    |         |
++--------------------+-------------+--------+----------+---------+
+
+
+### patch
++-------+-------------+--------+----------+---------+
+| NAME  | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
++-------+-------------+--------+----------+---------+
+| path  |             | string | true     |         |
+| op    |             | string | true     |         |
+| value |             | string | false    |         |
++-------+-------------+--------+----------+---------+
+```
+
+### How to use
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: bucket-app
+spec:
+  components:
+    - name: bucket-comp
+      type: kustomize
+      # ... omitted for brevity
+      traits:
+        - type: kustomize-json-patch
+          properties:
+            patchesJson:
+              - target:
+                  version: v1
+                  kind: Deployment
+                  name: podinfo
+                patch:
+                - op: add
+                  path: /metadata/annotations/key
+                  value: value
+```
+
+## kustomize-strategy-merge Specification
+
+```shell
+vela show kustomize-strategy-merge
+```
+
+```shell
+# Properties
++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
+|         NAME          |                        DESCRIPTION                        |                       TYPE                        | REQUIRED | DEFAULT |
++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
+| patchesStrategicMerge | a list of strategicmerge, defined as inline yaml objects. | [[]patchesStrategicMerge](#patchesStrategicMerge) | true     |         |
++-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
+
+
+## patchesStrategicMerge
++-----------+-------------+--------------------------------------------------------+----------+---------+
+|   NAME    | DESCRIPTION |                          TYPE                          | REQUIRED | DEFAULT |
++-----------+-------------+--------------------------------------------------------+----------+---------+
+| undefined |             | map[string](null|bool|string|bytes|{...}|[...]|number) | true     |         |
++-----------+-------------+--------------------------------------------------------+----------+---------+
+```
+
+### How to use
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: bucket-app
+spec:
+  components:
+    - name: bucket-comp
+      type: kustomize
+      # ... omitted for brevity
+      traits:
+        - type: kustomize-strategy-merge
+          properties:
+            patchesStrategicMerge:
+              - apiVersion: apps/v1
+                kind: Deployment
+                metadata:
+                  name: podinfo
+                spec:
+                  template:
+                    spec:
+                      serviceAccount: custom-service-account
+```
diff --git a/versioned_docs/version-v1.1/end-user/traits/more.md b/versioned_docs/version-v1.1/end-user/traits/more.md
new file mode 100644
index 00000000..e4859425
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/traits/more.md
@@ -0,0 +1,44 @@
+---
+title: Want More?
+---
+
+## 1. Get from capability registry
+
+You can get more from the official capability registry by using the KubeVela [plugin](../../developers/references/kubectl-plugin#install-kubectl-vela-plugin).
+
+### List
+
+By default, the commands will list capabilities from the [repo](https://registry.kubevela.net) maintained by KubeVela.
+
+```shell
+$ kubectl vela trait --discover
+Showing traits from registry: https://registry.kubevela.net
+NAME            	REGISTRY	DEFINITION                     	APPLIES-TO
+service-account 	default 	                               	[webservice worker]
+env             	default 	                               	[webservice worker]
+flagger-rollout 	default 	canaries.flagger.app           	[webservice]
+init-container  	default 	                               	[webservice worker]
+keda-scaler     	default 	scaledobjects.keda.sh          	[deployments.apps]
+metrics         	default 	metricstraits.standard.oam.dev 	[webservice backend task]
+node-affinity   	default 	                               	[webservice worker]
+route           	default 	routes.standard.oam.dev        	[webservice]
+virtualgroup    	default 	                               	[webservice worker]
+```
+
+Note that the `--discover` flag shows all traits in the registry, including those not yet installed in your cluster.
+
+### Install
+
+Then you can install a trait like:
+
+```shell
+$ kubectl vela trait get init-container
+Installing component capability init-container
+Successfully install trait: init-container
+```
+
+## 2. Design by yourself
+
+* Read [how to edit definitions](../../platform-engineers/cue/definition-edit) to build your own capability from existing ones.
+* [Build your own capability from scratch](../../platform-engineers/cue/advanced)
+  and learn more features about how to [define custom traits](../../platform-engineers/traits/customize-trait).
diff --git a/versioned_docs/version-v1.1/end-user/traits/rollout.md b/versioned_docs/version-v1.1/end-user/traits/rollout.md
new file mode 100644
index 00000000..3ccca2af
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/traits/rollout.md
@@ -0,0 +1,416 @@
+---
+title: Rollout
+---
+
+This chapter introduces how to use the Rollout trait to perform a rolling update on a workload.
+
+## How to
+
+### First Deployment
+
+Apply the Application YAML below, which includes a webservice-type workload with a Rollout trait, and [controls the version](../version-control)
+of the component name as express-server-v1.
+
+```shell
+cat <
+          i=0;
+          while true;
+          do
+            echo "$i: $(date)" >> /var/log/date.log;
+            i=$((i+1));
+            sleep 1;
+          done
+      volumes:
+        - name: varlog
+          mountPath: /var/log
+          type: emptyDir
+      traits:
+        - type: sidecar
+          properties:
+            name: count-log
+            image: busybox
+            cmd: [ /bin/sh, -c, 'tail -n+1 -f /var/log/date.log']
+            volumes:
+              - name: varlog
+                path: /var/log
+```
+
+Deploy this Application.
+
+```shell
+kubectl apply -f app.yaml
+```
+
+On the runtime cluster, check the name of the running pod.
+
+```shell
+kubectl get pod
+```
+```console
+NAME                              READY   STATUS    RESTARTS   AGE
+log-gen-worker-76945f458b-k7n9k   2/2     Running   0          90s
+```
+
+And check the logging output of the sidecar.
+
+```shell
+kubectl logs -f log-gen-worker-76945f458b-k7n9k count-log
+```
+```console
+0: Fri Apr 16 11:08:45 UTC 2021
+1: Fri Apr 16 11:08:46 UTC 2021
+2: Fri Apr 16 11:08:47 UTC 2021
+3: Fri Apr 16 11:08:48 UTC 2021
+4: Fri Apr 16 11:08:49 UTC 2021
+5: Fri Apr 16 11:08:50 UTC 2021
+6: Fri Apr 16 11:08:51 UTC 2021
+7: Fri Apr 16 11:08:52 UTC 2021
+8: Fri Apr 16 11:08:53 UTC 2021
+9: Fri Apr 16 11:08:54 UTC 2021
+```
diff --git a/versioned_docs/version-v1.1/end-user/version-control.md b/versioned_docs/version-v1.1/end-user/version-control.md
new file mode 100644
index 00000000..0967b6fd
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/version-control.md
@@ -0,0 +1,247 @@
+---
+title: Version Control
+---
+
+## Component Revision
+
+You can specify a generated component instance revision with the field `spec.components[*].externalRevision` in the Application, like below:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      externalRevision: express-server-v1
+      properties:
+        image: stefanprodan/podinfo:4.0.3
+```
+
+If the field is not specified, it will be generated by the name rule `-`.
+
+After the Application is created, a ControllerRevision object will be generated for each component.
+
+* Get the revisions for the component instance
+
+```shell
+$ kubectl get controllerrevision -l controller.oam.dev/component=express-server
+NAME                CONTROLLER                       REVISION   AGE
+express-server-v1   application.core.oam.dev/myapp   1          2m40s
+express-server-v2   application.core.oam.dev/myapp   2          2m12s
+```
+
+You can specify the component revision for [component rolling update](./traits/rollout).
+
+## Specify Component/Trait Capability Revision in Application
+
+When a capability (Component or Trait) changes, KubeVela will generate a definition revision automatically.
+
+* Check ComponentDefinition Revision
+
+```shell
+$ kubectl get definitionrevision -l="componentdefinition.oam.dev/name=webservice" -n vela-system
+NAME            REVISION   HASH               TYPE
+webservice-v1   1          3f6886d9832021ba   Component
+webservice-v2   2          b3b9978e7164d973   Component
+```
+
+* Check TraitDefinition Revision
+
+```shell
+$ kubectl get definitionrevision -l="trait.oam.dev/name=rollout" -n vela-system
+NAME         REVISION   HASH               TYPE
+rollout-v1   1          e441f026c1884b14   Trait
+```
+
+You can specify the revision with the `@version` approach. For example, if a user wants to stick to the `v1` revision of the `webservice` component:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice@v1
+      properties:
+        image: stefanprodan/podinfo:4.0.3
+```
+
+In this way, if the system admin changes the ComponentDefinition, it won't affect your application.
+If no revision is specified, KubeVela will use the latest revision when you upgrade your application.
+
+## Application Revision
+
+When updating any part of an application entity except the workflow, KubeVela will create a new revision as a snapshot of this change.
+
+```shell
+$ kubectl get apprev -l app.oam.dev/name=myapp
+NAME       AGE
+myapp-v1   54m
+myapp-v2   53m
+myapp-v3   18s
+```
+
+You can get all the information related to the application in the application revision, including the application spec
+and all related definitions.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ApplicationRevision
+metadata:
+  labels:
+    app.oam.dev/app-revision-hash: a74b4a514ba2fc08
+    app.oam.dev/name: myapp
+  name: myapp-v3
+  namespace: default
+  ...
+spec:
+  application:
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: myapp
+      namespace: default
+      ...
+    spec:
+      components:
+      - name: express-server
+        properties:
+          image: stefanprodan/podinfo:5.0.3
+        type: webservice@v1
+      ...
+  componentDefinitions:
+    webservice:
+      apiVersion: core.oam.dev/v1beta1
+      kind: ComponentDefinition
+      metadata:
+        name: webservice
+        namespace: vela-system
+        ...
+      spec:
+        schematic:
+          cue:
+            ...
+  traitDefinitions:
+    ...
+```
+
+## Live-Diff the `Application`
+
+Live-diff helps you preview what would change if you were to upgrade an application, without making any changes
+to the living cluster.
+This feature is extremely useful for serious production deployments and keeps the upgrade under control.
+
+It generates a diff between a specific revision of the running instance and the local candidate application.
+The result shows the changes (added/modified/removed/no_change) of the application as well as its sub-resources,
+such as components and traits.
+
+Assume we're going to upgrade the application like below.
+
+```yaml
+# new-app.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+spec:
+  components:
+    - name: express-server
+      type: webservice@v1
+      properties:
+        image: crccheck/hello-world # change the image
+```
+
+Run live-diff like this:
+
+```shell
+vela live-diff -f new-app.yaml -r myapp-v1
+```
+
+`-r` or `--revision` is a flag that specifies the name of a living ApplicationRevision with which you want to compare the updated application.
+
+`-c` or `--context` is a flag that specifies the number of lines shown around a change. The unchanged lines
+which are out of the context of a change will be omitted. This is useful when the diff result contains a lot of unchanged content
+and you just want to focus on the changes.
+
+
Click to view the details of diff result + +```bash +--- +# Application (myapp) has been modified(*) +--- + apiVersion: core.oam.dev/v1beta1 + kind: Application + metadata: +- annotations: +- kubectl.kubernetes.io/last-applied-configuration: | +- {"apiVersion":"core.oam.dev/v1beta1","kind":"Application","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"components":[{"externalRevision":"express-server-v1","name":"express-server","properties":{"image":"stefanprodan/podinfo:4.0.3"},"type":"webservice"}]}} + creationTimestamp: null +- finalizers: +- - app.oam.dev/resource-tracker-finalizer + name: myapp + namespace: default + spec: + components: +- - externalRevision: express-server-v1 +- name: express-server ++ - name: express-server + properties: +- image: stefanprodan/podinfo:4.0.3 +- type: webservice ++ image: crccheck/hello-world ++ type: webservice@v1 + status: + rollout: + batchRollingState: "" + currentBatch: 0 + lastTargetAppRevision: "" + rollingState: "" + upgradedReadyReplicas: 0 + upgradedReplicas: 0 + +--- +## Component (express-server) has been modified(*) +--- + apiVersion: apps/v1 + kind: Deployment + metadata: +- annotations: +- kubectl.kubernetes.io/last-applied-configuration: | +- {"apiVersion":"core.oam.dev/v1beta1","kind":"Application","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"components":[{"externalRevision":"express-server-v1","name":"express-server","properties":{"image":"stefanprodan/podinfo:4.0.3"},"type":"webservice"}]}} ++ annotations: {} + labels: + app.oam.dev/appRevision: "" + app.oam.dev/component: express-server + app.oam.dev/name: myapp + app.oam.dev/resourceType: WORKLOAD +- workload.oam.dev/type: webservice ++ workload.oam.dev/type: webservice-v1 + name: express-server + namespace: default + spec: + selector: + matchLabels: + app.oam.dev/component: express-server + template: + metadata: + labels: + app.oam.dev/component: express-server + app.oam.dev/revision: KUBEVELA_COMPONENT_REVISION_PLACEHOLDER + spec: + containers: +- - image: stefanprodan/podinfo:4.0.3 ++ - image: crccheck/hello-world + name: express-server + ports: + - containerPort: 80 +``` + +
+
+
+Furthermore, you can integrate the revisions with your own system for application snapshot and recovery.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/workflow/apply-component.md b/versioned_docs/version-v1.1/end-user/workflow/apply-component.md
new file mode 100644
index 00000000..bee2a86f
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/workflow/apply-component.md
@@ -0,0 +1,131 @@
+---
+title: Apply Components and Traits
+---
+
+In this guide, you will learn how to apply components and traits in `Workflow`.
+
+## How to use
+
+Apply the following `Application`:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8000
+    - name: nginx-server
+      type: webservice
+      properties:
+        image: nginx:1.21
+        port: 80
+  workflow:
+    steps:
+      - name: express-server
+        # specify the workflow step type
+        type: apply-component
+        properties:
+          # specify the component name
+          component: express-server
+      - name: manual-approval
+        # suspend is a built-in task of workflow used to suspend the workflow
+        type: suspend
+      - name: nginx-server
+        type: apply-component
+        properties:
+          component: nginx-server
+```
+
+If we want to suspend the workflow for manual approval before applying certain components, we can use the `suspend` step to pause the workflow.
+
+In this case, the workflow will be suspended after applying the first component. The second component will wait to be applied until the `resume` command is called.
+
+Check the status after applying the `Application`:
+
+```shell
+$ kubectl get app first-vela-workflow
+
+NAME                  COMPONENT        TYPE         PHASE                HEALTHY   STATUS   AGE
+first-vela-workflow   express-server   webservice   workflowSuspending                      2s
+```
+
+We can use `vela workflow resume` to resume the workflow.
+
+> For more information about `vela workflow`, please refer to [vela cli](../../cli/vela_workflow).
+
+```shell
+$ vela workflow resume first-vela-workflow
+
+Successfully resume workflow: first-vela-workflow
+```
+
+Check the status again; the `Application` is now running:
+
+```shell
+$ kubectl get app first-vela-workflow
+
+NAME                  COMPONENT        TYPE         PHASE     HEALTHY   STATUS   AGE
+first-vela-workflow   express-server   webservice   running   true               10s
+```
+## Expected outcome
+
+Check the `Application` status:
+
+```shell
+kubectl get application first-vela-workflow -o yaml
+```
+
+All the step statuses in the workflow are `succeeded`:
+
+```yaml
+...
+  status:
+    workflow:
+      ...
+      stepIndex: 3
+      steps:
+      - name: express-server
+        phase: succeeded
+        resourceRef: {}
+        type: apply-component
+      - name: manual-approval
+        phase: succeeded
+        resourceRef: {}
+        type: suspend
+      - name: nginx-server
+        phase: succeeded
+        resourceRef: {}
+        type: apply-component
+    suspend: false
+    terminated: true
+```
+
+Check the component status in the cluster:
+
+```shell
+$ kubectl get deployment
+
+NAME             READY   UP-TO-DATE   AVAILABLE   AGE
+express-server   1/1     1            1           3m28s
+nginx-server     1/1     1            1           3s
+
+$ kubectl get ingress
+
+NAME             CLASS   HOSTS                 ADDRESS   PORTS   AGE
+express-server           testsvc.example.com             80      4m7s
+```
+
+We can see that all the components and traits have been applied to the cluster.
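+
+If no manual approval is needed, the `suspend` step can simply be omitted and
+the components will still be applied strictly in step order. Below is a minimal
+hedged sketch reusing the component names above:
+
+```yaml
+# Sketch only: this block nests under `spec`; two apply-component steps run in order.
+workflow:
+  steps:
+    - name: express-server
+      type: apply-component
+      properties:
+        component: express-server # applied first
+    - name: nginx-server
+      type: apply-component
+      properties:
+        component: nginx-server   # applied once the previous step succeeds
+```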
diff --git a/versioned_docs/version-v1.1/end-user/workflow/apply-remaining.md b/versioned_docs/version-v1.1/end-user/workflow/apply-remaining.md
new file mode 100644
index 00000000..2474985f
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/workflow/apply-remaining.md
@@ -0,0 +1,164 @@
+---
+title: Apply Remaining
+---
+
+If we want to apply one component first and then apply the rest of the components after the first one is running, KubeVela provides the `apply-remaining` workflow step to skip selected resources and apply the remaining ones.
+
+In this guide, you will learn how to apply the remaining resources via `apply-remaining` in `Workflow`.
+
+## How to use
+
+Apply the following `Application` with the workflow step type `apply-remaining`:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8000
+    - name: express-server2
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+    - name: express-server3
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+    - name: express-server4
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+  workflow:
+    steps:
+      - name: first-server
+        type: apply-component
+        properties:
+          component: express-server
+      - name: manual-approval
+        # suspend is a built-in task of workflow used to suspend the workflow
+        type: suspend
+      - name: remaining-server
+        # specify the workflow step type
+        type: apply-remaining
+        properties:
+          # specify the components that need to be skipped
+          exceptions:
+            # specify the configuration of the component
+            express-server:
+              # skipApplyWorkload indicates whether to skip applying the workload resource
+              skipApplyWorkload: true
+              # skipAllTraits indicates to skip applying all resources of the traits
+              skipAllTraits: true
+```
+
+## Expected outcome
+
+Check the `Application` status:
+
+```shell
+kubectl get application first-vela-workflow -o yaml
+```
+
+We can see that the workflow is suspended at `manual-approval`:
+
+```yaml
+...
+  status:
+    workflow:
+      ...
+      stepIndex: 2
+      steps:
+      - name: first-server
+        phase: succeeded
+        resourceRef: {}
+        type: apply-component
+      - name: manual-approval
+        phase: succeeded
+        resourceRef: {}
+        type: suspend
+    suspend: true
+    terminated: false
+```
+
+Check the component status in the cluster, and resume the workflow after the component is running:
+
+```shell
+$ kubectl get deployment
+
+NAME             READY   UP-TO-DATE   AVAILABLE   AGE
+express-server   1/1     1            1           5s
+
+$ kubectl get ingress
+
+NAME             CLASS   HOSTS                 ADDRESS   PORTS   AGE
+express-server           testsvc.example.com             80      47s
+```
+
+Resume the workflow:
+
+```
+vela workflow resume first-vela-workflow
+```
+
+Recheck the `Application` status:
+
+```shell
+kubectl get application first-vela-workflow -o yaml
+```
+
+All the step statuses in the workflow are `succeeded`:
+
+```yaml
+...
+  status:
+    workflow:
+      ...
+      stepIndex: 3
+      steps:
+      - name: first-server
+        phase: succeeded
+        resourceRef: {}
+        type: apply-component
+      - name: manual-approval
+        phase: succeeded
+        resourceRef: {}
+        type: suspend
+      - name: remaining-server
+        phase: succeeded
+        resourceRef: {}
+        type: apply-remaining
+    suspend: false
+    terminated: true
+```
+
+Recheck the component status:
+
+```shell
+$ kubectl get deployment
+
+NAME              READY   UP-TO-DATE   AVAILABLE   AGE
+express-server    1/1     1            1           110s
+express-server2   1/1     1            1           6s
+express-server3   1/1     1            1           6s
+express-server4   1/1     1            1           6s
+```
+
+We can see that all of the components have been applied to the cluster successfully. Besides, the first component `express-server` was not applied repeatedly.
+
+With `apply-remaining`, we can easily filter and apply resources by filling in the built-in parameters.
diff --git a/versioned_docs/version-v1.1/end-user/workflow/component-dependency-parameter.md b/versioned_docs/version-v1.1/end-user/workflow/component-dependency-parameter.md
new file mode 100644
index 00000000..459f5227
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/workflow/component-dependency-parameter.md
@@ -0,0 +1,100 @@
+---
+title: Data Pass Between Components
+---
+
+This section introduces how to pass data between components.
+
+## Inputs and Outputs
+
+In KubeVela, we can use inputs and outputs in Components to pass data.
+
+### Outputs
+
+An output is made of `name` and `valueFrom`. An input will use the `name` to reference an output.
+
+We can write `valueFrom` in the following ways:
+1. Fill a string value into the field, e.g. `valueFrom: testString`.
+2. Use an expression, e.g. `valueFrom: output.metadata.name`. Note that `output` is a built-in field referring to the resource in the component that is rendered and deployed to the cluster.
+3. Use `+` to combine the two ways above; the computed value will be the result, e.g. `valueFrom: output.metadata.name + "testString"`.
+
+### Inputs
+
+An input is made of `from` and `parameterKey`. An input uses `from` to reference an output, and `parameterKey` is an expression that assigns the value of the input to the corresponding field.
+
+For example:
+
+1. Specify inputs:
+
+```yaml
+...
+- name: wordpress
+  type: helm
+  inputs:
+    - from: mysql-svc
+      parameterKey: properties.values.externalDatabase.host
+```
+
+2. The field `parameterKey` specifies the field path in the component to be assigned after rendering:
+
+This means the input value will be passed into the properties below:
+
+```yaml
+...
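+# (illustrative annotation) after rendering, the empty `host` field below
+# receives the value of the output referenced by `from: mysql-svc`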
+- name: wordpress
+  type: helm
+  properties:
+    values:
+      externalDatabase:
+        host:
+```
+
+## How to use
+
+In the following example, we will apply a WordPress server with the MySQL address passed in from a MySQL component:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: wordpress-with-mysql
+  namespace: default
+spec:
+  components:
+    - name: mysql
+      type: helm
+      outputs:
+        # the output is the mysql service address
+        - name: mysql-svc
+          valueFrom: output.metadata.name + ".default.svc.cluster.local"
+      properties:
+        repoType: helm
+        url: https://charts.bitnami.com/bitnami
+        chart: mysql
+        version: "8.8.2"
+        values:
+          auth:
+            rootPassword: mypassword
+    - name: wordpress
+      type: helm
+      inputs:
+        # set the host to the mysql service address
+        - from: mysql-svc
+          parameterKey: properties.values.externalDatabase.host
+      properties:
+        repoType: helm
+        url: https://charts.bitnami.com/bitnami
+        chart: wordpress
+        version: "12.0.3"
+        values:
+          mariadb:
+            enabled: false
+          externalDatabase:
+            user: root
+            password: mypassword
+            database: mysql
+            port: 3306
+```
+
+## Expected Outcome
+
+The WordPress server with MySQL has been successfully applied.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/end-user/workflow/multi-env.md b/versioned_docs/version-v1.1/end-user/workflow/multi-env.md
new file mode 100644
index 00000000..2b51794b
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/workflow/multi-env.md
@@ -0,0 +1,173 @@
+---
+title: Multi Environments
+---
+
+If we have multiple clusters, we may want to apply our application in the test cluster first, and then apply it to the production cluster after the application in the test cluster is running. KubeVela provides the `deploy2env` workflow step to manage multiple environments. You can get a glimpse of how it works below:
+
+![alt](../../resources/workflow-multi-env.png)
+
+In this guide, you will learn how to manage multiple environments via `deploy2env` in `Workflow`.
+
+> Before reading this section, please make sure you have learned about the [Env Binding](../policies/envbinding) in KubeVela.
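+
+A `deploy2env` step references one component, one `env-binding` policy, and one
+env declared in that policy. Below is a minimal hedged sketch of just the step
+shape; the names here match the full example that follows:
+
+```yaml
+# Sketch only: this block nests under `spec` of an Application.
+workflow:
+  steps:
+    - name: deploy-test-server
+      type: deploy2env
+      properties:
+        component: nginx-server # component to deploy
+        policy: env             # env-binding policy name
+        env: test               # env declared in that policy
+```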
+
+## How to use
+
+Apply the following `Application` with the workflow step type `deploy2env`:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: multi-env-demo
+  namespace: default
+spec:
+  components:
+    - name: nginx-server
+      type: webservice
+      properties:
+        image: nginx:1.21
+        port: 80
+
+  policies:
+    - name: env
+      type: env-binding
+      properties:
+        created: false
+        envs:
+          - name: test
+            patch:
+              components:
+                - name: nginx-server
+                  type: webservice
+                  properties:
+                    image: nginx:1.20
+                    port: 80
+            placement:
+              clusterSelector:
+                labels:
+                  purpose: test
+          - name: prod
+            patch:
+              components:
+                - name: nginx-server
+                  type: webservice
+                  properties:
+                    image: nginx:1.20
+                    port: 80
+            placement:
+              clusterSelector:
+                labels:
+                  purpose: prod
+
+  workflow:
+    steps:
+      - name: deploy-test-server
+        # specify the workflow step type
+        type: deploy2env
+        properties:
+          # specify the component name
+          component: nginx-server
+          # specify the policy name
+          policy: env
+          # specify the env name in the policy
+          env: test
+      - name: manual-approval
+        # suspend is a built-in task of workflow used to suspend the workflow
+        type: suspend
+      - name: deploy-prod-server
+        type: deploy2env
+        properties:
+          component: nginx-server
+          policy: env
+          env: prod
+```
+
+## Expected outcome
+
+Check the `Application` status:
+
+```shell
+kubectl get application multi-env-demo -o yaml
+```
+
+We can see that the workflow is suspended at `manual-approval`:
+
+```yaml
+...
+  status:
+    workflow:
+      ...
+      stepIndex: 2
+      steps:
+      - name: deploy-test-server
+        phase: succeeded
+        resourceRef: {}
+        type: deploy2env
+      - name: manual-approval
+        phase: succeeded
+        resourceRef: {}
+        type: suspend
+    suspend: true
+    terminated: false
+```
+
+Switch to the `test` cluster and check the component status:
+
+```shell
+$ kubectl get deployment
+
+NAME           READY   UP-TO-DATE   AVAILABLE   AGE
+nginx-server   1/1     1            1           1m10s
+```
+
+Use the `resume` command after everything is OK in the test cluster:
+
+```shell
+$ vela workflow resume multi-env-demo
+
+Successfully resume workflow: multi-env-demo
+```
+
+Recheck the `Application` status:
+
+```shell
+kubectl get application multi-env-demo -o yaml
+```
+
+All the step statuses in the workflow are `succeeded`:
+
+```yaml
+...
+  status:
+    workflow:
+      ...
+      stepIndex: 3
+      steps:
+      - name: deploy-test-server
+        phase: succeeded
+        resourceRef: {}
+        type: deploy2env
+      - name: manual-approval
+        phase: succeeded
+        resourceRef: {}
+        type: suspend
+      - name: deploy-prod-server
+        phase: succeeded
+        resourceRef: {}
+        type: deploy2env
+    suspend: false
+    terminated: true
+```
+
+Then, check the component status in the `prod` cluster:
+
+```shell
+$ kubectl get deployment
+
+NAME           READY   UP-TO-DATE   AVAILABLE   AGE
+nginx-server   1/1     1            1           1m10s
+```
+
+We can see that the component has been applied to both clusters.
+
+With `deploy2env`, we can easily manage applications in multiple environments.
diff --git a/versioned_docs/version-v1.1/end-user/workflow/webhook-notification.md b/versioned_docs/version-v1.1/end-user/workflow/webhook-notification.md
new file mode 100644
index 00000000..c68b67d4
--- /dev/null
+++ b/versioned_docs/version-v1.1/end-user/workflow/webhook-notification.md
@@ -0,0 +1,82 @@
+---
+title: Webhook Notification
+---
+
+If we want to be notified before or after deploying an application, KubeVela provides integration with notification webhooks, allowing users to send notifications to DingTalk or Slack.
+
+In this guide, you will learn how to send notifications via `webhook-notification` in a workflow.
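+
+A `webhook-notification` step simply posts a message payload to the webhook URL
+you configure, before or after any other step. Below is a minimal hedged sketch
+of just the step shape; the URL is a placeholder, and all parameters are
+detailed in the table that follows:
+
+```yaml
+# Sketch only: this step nests under `spec.workflow.steps` and sends a
+# plain-text message to a Slack webhook.
+- name: notify-slack
+  type: webhook-notification
+  properties:
+    slack:
+      url: https://hooks.slack.com/services/xxx # placeholder webhook address
+      message:
+        text: Deployment started...
+```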
+
+## Parameters
+
+| Parameter | Type | Description |
+| :---: | :--: | :-- |
+| slack | Object | Optional; fill in its url and message if you want to send Slack messages |
+| slack.url | String | Required, the webhook address of Slack |
+| slack.message | Object | Required, the Slack messages you want to send, please follow [Slack messaging](https://api.slack.com/reference/messaging/payload) |
+| dingding | Object | Optional; fill in its url and message if you want to send DingTalk messages |
+| dingding.url | String | Required, the webhook address of DingTalk |
+| dingding.message | Object | Required, the DingTalk messages you want to send, please follow [DingTalk messaging](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw) |
+
+## How to use
+
+Apply the following `Application` with the workflow step type `webhook-notification`:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              /: 8000
+  workflow:
+    steps:
+      - name: dingtalk-message
+        # specify the workflow step type
+        type: webhook-notification
+        properties:
+          dingding:
+            # the DingTalk webhook address, please refer to: https://developers.dingtalk.com/document/robots/custom-robot-access
+            url:
+            # specify the message details
+            message:
+              msgtype: text
+              text:
+                content: Workflow starting...
+      - name: application
+        type: apply-component
+        properties:
+          component: express-server
+        outputs:
+          - name: app-status
+            exportKey: output.status.conditions[0].message + "Workflow finished."
+      - name: slack-message
+        type: webhook-notification
+        inputs:
+          - from: app-status
+            parameterKey: properties.slack.message.text
+        properties:
+          slack:
+            # the Slack webhook address, please refer to: https://api.slack.com/messaging/webhooks
+            url:
+            # specify the message details, will be filled by the input value
+            # message:
+            #   text: condition message + "Workflow finished."
+```
+
+## Expected outcome
+
+We can see that before and after the deployment of the application, the messages can be seen in the corresponding group chats.
+
+With `webhook-notification`, we can easily integrate with webhook notifiers.
diff --git a/versioned_docs/version-v1.1/getting-started/introduction.md b/versioned_docs/version-v1.1/getting-started/introduction.md
new file mode 100644
index 00000000..6315e9b9
--- /dev/null
+++ b/versioned_docs/version-v1.1/getting-started/introduction.md
@@ -0,0 +1,79 @@
+---
+title: Introduction
+slug: /
+
+---
+
+## Motivation
+
+The trend of cloud-native technology is moving towards pursuing a consistent application delivery experience across clouds and on-prem clusters. Kubernetes is becoming the standard layer, which is excellent at abstracting away low-level infrastructure details. But it does not provide abstractions to model application deployment on top of hybrid and distributed environments. The lack of application-level context has impacted user experience, slowed down productivity, and led to unexpected errors due to misconfigurations in production.
+
+Meanwhile, modeling the deployment of a microservice application is a highly fragmented and challenging process. Thus, many solutions that have tried to solve the problem so far are either oversimplified and could not fix the real issue, or too complicated to use at all.
On the other hand, though many solutions provide a friendly UI layer, the platforms themselves are not customizable. This means as the needs of your platform grow, it is inevitable for the feature requirements to outgrow the capabilities of such systems.
+
+Today, application teams are eager to find a platform that can simplify the application delivery experience across hybrid environments (e.g. multi-cluster/multi-cloud/hybrid-cloud/distributed-cloud), while also being flexible enough to satisfy the fast growth of business requirements. Platform engineers share this empathy, but the effort of building such a system is beyond their scope.
+
+
+## What is KubeVela?
+
+KubeVela is a modern application platform that makes it easier and faster to deliver and manage applications across hybrid, multi-cloud environments. At the same time, it is highly extensible and programmable, so it can adapt to your needs as they grow. This is achieved by doing the following:
+
+**Application Centric** - KubeVela introduces the [Open Application Model (OAM)](https://oam.dev/) as the consistent and application-focused API to capture a full deployment of microservices on top of hybrid environments. Placement strategy, traffic shifting, and rolling updates are declared from the perspective of application developers. No infrastructure-level concerns, only application-level concepts.
+
+**Programmable Workflow** - KubeVela leverages [CUE](https://cuelang.org/) as the implementation engine behind the model layer. This allows you to compose a deployment workflow with a modular and declarative API, and automate any operational task in a programmable manner. No restrictions, natively extensible.
+
+**Runtime Agnostic** - KubeVela works as an application delivery control plane that is fully runtime agnostic. It can deploy and manage any application components including containers, cloud functions, databases, or even EC2 instances across hybrid environments, following the workflow you defined.
+
+## Who should use KubeVela?
+
+- Application developers, operators, DevOps engineers
+  - think about a modern Continuous Delivery (CD) platform.
+- Platform builders for PaaS, Serverless, application management/delivery systems
+  - think about an application delivery engine that you could build your advanced platform with.
+- ISVs, SaaS owners, and application architects who need to distribute software anywhere
+  - think about an App Store, but on Kubernetes and clouds.
+
+## Comparisons
+
+### KubeVela vs. Platform-as-a-Service (PaaS)
+
+The typical examples are Heroku and Cloud Foundry. They provide full application deployment and management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
+
+The biggest difference, though, lies in **flexibility**.
+
+KubeVela does not introduce any restrictions. As a plus, even its deployment workflow and full feature set are implemented as LEGO-style CUE modules and can be extended at any time as your needs grow. Compared to this mechanism, traditional PaaS systems are highly restricted: they have to enforce constraints on the types of supported applications and capabilities, and as application needs grow, you always outgrow the capabilities of the PaaS system - this will never happen with the KubeVela platform.
+
+### KubeVela vs. Serverless
+
+Serverless platforms such as AWS Lambda provide an extraordinary user experience and agility for deploying serverless applications.
However, those platforms impose even more constraints on extensibility. They are arguably "hard-coded" PaaS systems, so KubeVela differs from them in a similar way.
+
+On the other hand, KubeVela can easily deploy both Kubernetes-based serverless workloads such as Knative/OpenFaaS and cloud-based functions such as AWS Lambda.
+
+### KubeVela vs. Platform-agnostic developer tools
+
+The typical example is Hashicorp's Waypoint. Waypoint is a developer-facing tool which introduces a consistent workflow (i.e., build, deploy, release) to ship applications on top of different platforms.
+
+KubeVela can be integrated with such tools seamlessly. In this case, developers would use the Waypoint workflow as the UI to deploy and release applications, with KubeVela as the underlying deployment platform.
+
+### KubeVela vs. Helm
+
+Helm is a package manager for Kubernetes that packages, installs, and upgrades a set of YAML files for Kubernetes as a unit.
+
+KubeVela, as a modern deployment system, can naturally deploy Helm charts. For example, you could use KubeVela to define an application composed of a WordPress chart and an AWS RDS Terraform module, orchestrate the components' topology, and then deploy them to multiple environments following a certain strategy.
+
+Of course, KubeVela also supports other encapsulation formats, including Kustomize etc.
+
+### KubeVela vs. Kubernetes
+
+KubeVela is a modern application deployment system built with the cloud-native stack. It leverages [Open Application Model](https://github.com/oam-dev/spec) and Kubernetes as its control plane to resolve a hard problem - making shipping applications enjoyable.
+
+Welcome onboard and sail Vela!
+
+
+## What's Next
+
+Here are some recommended next steps:
+
+- Start to [install KubeVela](./install).
+- Learn KubeVela's [Core Concepts](core-concepts/application).
+- Learn KubeVela's [Architecture](core-concepts/architecture).
+
diff --git a/versioned_docs/version-v1.1/install.mdx b/versioned_docs/version-v1.1/install.mdx
new file mode 100644
index 00000000..f2feea87
--- /dev/null
+++ b/versioned_docs/version-v1.1/install.mdx
@@ -0,0 +1,307 @@
+---
+title: Installation
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+> For upgrading an existing KubeVela, please read the [upgrade guide](./platform-engineers/advanced-install/#upgrade).
+
+## 1. Choose Control Plane Cluster
+
+Requirements:
+- Kubernetes cluster >= v1.18.0
+- `kubectl` installed and configured
+
+KubeVela relies on Kubernetes as its control plane. The control plane could be any managed Kubernetes offering or your own cluster.
+
+For local deployment and test, you could use `kind` or `minikube`. For production usage, you could use Kubernetes services provided by cloud providers.
+
+
+
+
+Follow the minikube [installation guide](https://minikube.sigs.k8s.io/docs/start/).
+
+Then spin up a minikube cluster:
+
+```shell script
+minikube start
+```
+
Install ingress to enable service route + +```shell script +minikube addons enable ingress +``` + +
+
+
+
+
+
+Follow [this guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) to install kind.
+
+Then spin up a kind cluster:
+
+```shell script
+cat <
+```
+
+Install ingress to enable service route
+
+```shell script
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
+```
+
+
+
+
+
+
+* Alibaba Cloud [ACK Service](https://www.aliyun.com/product/kubernetes)
+* AWS [EKS Service](https://aws.amazon.com/cn/eks)
+* Azure [AKS Service](https://azure.microsoft.com/en-us/services/kubernetes-service)
+* Google [GKE Service](https://cloud.google.com/kubernetes-engine)
+
+> Please ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
+
+
+
+## 2. Install KubeVela
+
+1. Add and update the helm chart repo for KubeVela
+   ```shell script
+   helm repo add kubevela https://charts.kubevela.net/core
+   helm repo update
+   ```
+
+2. Install KubeVela
+   ```shell script
+   helm install --create-namespace -n vela-system kubevela kubevela/vela-core --set multicluster.enabled=true --wait
+   ```
+   You can refer to the [advanced installation guide](./platform-engineers/advanced-install) for more customized installation options.
+
+3. Verify the chart installed successfully
+   ```shell script
+   helm test kubevela -n vela-system
+   ```
+
Click to see the expected output of helm test + + ```shell + Pod kubevela-application-test pending + Pod kubevela-application-test pending + Pod kubevela-application-test running + Pod kubevela-application-test succeeded + NAME: kubevela + LAST DEPLOYED: Tue Apr 13 18:42:20 2021 + NAMESPACE: vela-system + STATUS: deployed + REVISION: 1 + TEST SUITE: kubevela-application-test + Last Started: Fri Apr 16 20:49:10 2021 + Last Completed: Fri Apr 16 20:50:04 2021 + Phase: Succeeded + TEST SUITE: first-vela-app + Last Started: Fri Apr 16 20:49:10 2021 + Last Completed: Fri Apr 16 20:49:10 2021 + Phase: Succeeded + NOTES: + Welcome to use the KubeVela! Enjoy your shipping application journey! + ``` + +
+
+## 3. [Optional] Get KubeVela CLI
+
+KubeVela CLI gives you a simplified workflow to manage applications with optimized output. It is not mandatory though.
+
+KubeVela CLI can be [installed as a kubectl plugin](./platform-engineers/advanced-install#install-kubectl-vela-plugin), or as a standalone binary.
+
+
+
+
+** macOS/Linux **
+
+```shell script
+curl -fsSl https://kubevela.io/script/install.sh | bash
+```
+
+**Windows**
+
+```shell script
+powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
+```
+
+
+
+**macOS/Linux**
+
+First, update your brew.
+```shell script
+brew update
+```
+Then install the KubeVela client.
+
+```shell script
+brew install kubevela
+```
+
+
+
+- Download the latest `vela` binary from the [releases page](https://github.com/oam-dev/kubevela/releases).
+- Unpack the `vela` binary and add it to `$PATH` to get started.
+
+```shell script
+sudo mv ./vela /usr/local/bin/vela
+```
+
+> Known Issue(https://github.com/oam-dev/kubevela/issues/625):
+> If you're using macOS, it will report that “vela” cannot be opened because the developer cannot be verified.
+>
+> Newer versions of macOS are stricter about running software you've downloaded that isn't signed with an Apple developer key, and we haven't supported that for KubeVela yet.
+> You can open 'System Preferences' -> 'Security & Privacy' -> 'General' and click 'Allow Anyway' to fix it temporarily.
+
+
+
+
+## 4. [Optional] Enable Addons
+
+KubeVela supports a dozen [out-of-box addons](./platform-engineers/advanced-install#Addons);
+please enable at least the following addons to make sure KubeVela functions well:
+
+* Helm and Kustomize Components addons
+  ```shell
+  vela addon enable fluxcd
+  ```
+
+* Terraform Provider addon
+
+  Enable the Terraform Alibaba Cloud Provider as below.
+
+  ```shell
+  vela addon enable terraform/provider-alibaba ALICLOUD_ACCESS_KEY=xxx ALICLOUD_SECRET_KEY=yyy ALICLOUD_SECURITY_TOKEN=zzz
+  ```
+
+## 5. Verify
+
+> You can also use `kubectl get comp -A` and `kubectl get trait -A` instead if you haven't installed the CLI.
+
+* Get built-in component types by `vela` CLI:
+  ```shell script
+  vela components
+  ```
Outputs + + ```console + NAME NAMESPACE WORKLOAD DESCRIPTION + alibaba-ack vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud ACK cluster + alibaba-oss vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud OSS object + alibaba-rds vela-system configurations.terraform.core.oam.dev Terraform configuration for Alibaba Cloud RDS object + helm vela-system autodetects.core.oam.dev helm release is a group of K8s resources from either git + repository or helm repo + kustomize vela-system autodetects.core.oam.dev kustomize can fetching, building, updating and applying + Kustomize manifests from git repo. + raw vela-system autodetects.core.oam.dev raw allow users to specify raw K8s object in properties + task vela-system jobs.batch Describes jobs that run code or a script to completion. + webservice vela-system deployments.apps Describes long-running, scalable, containerized services + that have a stable network endpoint to receive external + network traffic from customers. + worker vela-system deployments.apps Describes long-running, scalable, containerized services + that running at backend. They do NOT have network endpoint + to receive external network traffic. + ``` + +
+ +* Get built-in traits by `vela` CLI: + ```shell script + vela traits + ``` +
Outputs + + ```console + NAME NAMESPACE APPLIES-TO CONFLICTS-WITH POD-DISRUPTIVE DESCRIPTION + annotations vela-system * true Add annotations on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + configmap vela-system * true Create/Attach configmaps on K8s pod for your workload which + follows the pod spec in path 'spec.template'. + cpuscaler vela-system deployments.apps false Automatically scale the component based on CPU usage. + env vela-system * false add env on K8s pod for your workload which follows the pod + spec in path 'spec.template.' + expose vela-system false Expose port to enable web traffic for your component. + hostalias vela-system * false Add host aliases on K8s pod for your workload which follows + the pod spec in path 'spec.template'. + ingress vela-system false Enable public web traffic for the component. + ingress-1-20 vela-system false Enable public web traffic for the component, the ingress API + matches K8s v1.20+. + init-container vela-system deployments.apps true add an init container and use shared volume with pod + kustomize-json-patch vela-system false A list of JSON6902 patch to selected target + kustomize-patch vela-system false A list of StrategicMerge or JSON6902 patch to selected + target + kustomize-strategy-merge vela-system false A list of strategic merge to kustomize config + labels vela-system * true Add labels on K8s pod for your workload which follows the + pod spec in path 'spec.template'. + lifecycle vela-system * true Add lifecycle hooks for the first container of K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + node-affinity vela-system * true affinity specify node affinity and toleration on K8s pod for + your workload which follows the pod spec in path + 'spec.template'. + pvc vela-system deployments.apps true Create a Persistent Volume Claim and mount the PVC as volume + to the first container in the pod + resource vela-system * true Add resource requests and limits on K8s pod for your + workload which follows the pod spec in path 'spec.template.' + rollout vela-system false rollout the component + scaler vela-system * false Manually scale K8s pod for your workload which follows the + pod spec in path 'spec.template'. + service-binding vela-system webservice,worker false Binding secrets of cloud resources to component env + sidecar vela-system * true Inject a sidecar container to K8s pod for your workload + which follows the pod spec in path 'spec.template'. + volumes vela-system deployments.apps true Add volumes on K8s pod for your workload which follows the + pod spec in path 'spec.template'. + ``` + +
+
+These capabilities are built-in, so they are ready to use if they show up. KubeVela is designed to be programmable and fully self-service, so the assumption is that more capabilities will be added later per your own needs.
+
+## What's Next
+
+* Start to [deploy our first application](./quick-start).
+* See the [advanced installation guide](./platform-engineers/advanced-install) to learn more about installation details.
+
diff --git a/versioned_docs/version-v1.1/kubectlplugin.mdx b/versioned_docs/version-v1.1/kubectlplugin.mdx
new file mode 100644
index 00000000..dded38ab
--- /dev/null
+++ b/versioned_docs/version-v1.1/kubectlplugin.mdx
@@ -0,0 +1,75 @@
+---
+title: Install kubectl plugin
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Installing the vela kubectl plugin can help you ship applications more easily!
+
+## Installation
+
+You can install the kubectl plugin `kubectl vela` by:
+
+
+
+
+1. [Install and set up](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew on your machine.
+2. Discover plugins available on Krew:
+```shell
+kubectl krew update
+```
+3. Install kubectl vela:
+```shell script
+kubectl krew install vela
+```
+
+
+
+
+**macOS/Linux**
+```shell script
+curl -fsSl https://kubevela.io/script/install-kubectl-vela.sh | bash
+```
+
+You can also download the binary from the [release pages ( >= v1.0.3)](https://github.com/oam-dev/kubevela/releases) manually.
+Kubectl will discover it from your system path automatically.
+
+
+
+
+
+
+## Usage
+
+```shell
+$ kubectl vela -h
+A Highly Extensible Platform Engine based on Kubernetes and Open Application Model.
+
+Usage:
+  kubectl vela [flags]
+  kubectl vela [command]
+
+Available Commands:
+
+Flags:
+  -h, --help   help for vela
+
+  dry-run     Dry Run an application, and output the K8s resources as
+              result to stdout, only CUE template supported for now
+  live-diff   Dry-run an application, and do diff on a specific app
+              revision. The provided capability definitions will be used
+              during Dry-run. If any capabilities used in the app are not
+              found in the provided ones, it will try to find from
+              cluster.
+  show        Show the reference doc for a workload type or trait
+  version     Prints out build version information
+
+
+Use "kubectl vela [command] --help" for more information about a command.
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/advanced-install.mdx b/versioned_docs/version-v1.1/platform-engineers/advanced-install.mdx
new file mode 100644
index 00000000..49160834
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/advanced-install.mdx
@@ -0,0 +1,218 @@
+---
+title: Custom Installation
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Install KubeVela with cert-manager
+
+By default, KubeVela will use a self-signed certificate provided by [kube-webhook-certgen](https://github.com/jet/kube-webhook-certgen) for admissionWebhooks.
+You can also use cert-manager if it's available. Note that you need to install cert-manager **before** the KubeVela chart.
+
+```shell script
+helm repo add jetstack https://charts.jetstack.io
+helm repo update
+helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --create-namespace --set installCRDs=true
+```
+
+Install KubeVela with cert-manager enabled:
+```shell script
+helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core --wait
+```
+
+## Install Pre-release
+
+Add the flag `--devel` to the `helm search` command to choose a pre-release
+version in the format `<next_version>-rc-master`. It means a release candidate version built on the `master` branch,
+such as `0.4.0-rc-master`.
+
+```shell script
+helm search repo kubevela/vela-core -l --devel
+```
+```console
+  NAME                	CHART VERSION  	APP VERSION    	DESCRIPTION
+  kubevela/vela-core  	0.4.0-rc-master	0.4.0-rc-master	A Helm chart for KubeVela core
+  kubevela/vela-core  	0.3.2          	0.3.2          	A Helm chart for KubeVela core
+  kubevela/vela-core  	0.3.1          	0.3.1          	A Helm chart for KubeVela core
+```
+
+Then try the following command to install it.
+
+```shell script
+helm install --create-namespace -n vela-system kubevela kubevela/vela-core --version <next_version>-rc-master --wait
+```
+```console
+NAME: kubevela
+LAST DEPLOYED: Thu Apr  1 19:41:30 2021
+NAMESPACE: vela-system
+STATUS: deployed
+REVISION: 1
+NOTES:
+Welcome to use the KubeVela! Enjoy your shipping application journey!
+```
+
+## Install Kubectl Vela Plugin
+
+Installing the vela kubectl plugin can help you ship applications more easily!
+
+1. [Install and set up](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew on your machine.
+2. Discover plugins available on Krew:
+```shell
+kubectl krew update
+```
+3. Install kubectl vela:
+```shell script
+kubectl krew install vela
+```
+
+Or install via the install script:
+
+**macOS/Linux**
+```shell script
+curl -fsSl https://kubevela.io/script/install-kubectl-vela.sh | bash
+```
+
+You can also download the binary from the [release pages ( >= v1.0.3)](https://github.com/oam-dev/kubevela/releases) manually.
+Kubectl will discover it from your system path automatically.
+
+For more usage please refer to [kubectl plugin](../developers/references/kubectl-plugin).
+
+## Upgrade
+
+### Step 1. Update Helm repo
+
+You can explore the newly released chart versions of KubeVela by running:
+
+```shell
+helm repo update
+helm search repo kubevela/vela-core -l
+```
+
+### Step 2. Upgrade KubeVela CRDs
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_appdeployments.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationcontexts.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_applications.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_approllouts.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_clusters.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_containerizedworkloads.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_envbindings.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_initializers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.1/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
+```
+
+> Tips: If you see errors like `* is invalid: spec.scope: Invalid value: "Namespaced": field is immutable`, please delete the CRDs that report the error and re-apply the KubeVela CRDs.
+
+```shell
+ kubectl delete crd \
+  scopedefinitions.core.oam.dev \
+  traitdefinitions.core.oam.dev \
+  workloaddefinitions.core.oam.dev
+```
+
+### Step 3. Upgrade KubeVela Helm chart
+
+```shell
+helm upgrade --install --create-namespace --namespace vela-system kubevela kubevela/vela-core --version <the_new_version> --wait
+```
+
+## Addons
+
+| Name          | Description                                                    | Capability      | Open Source Project Reference                   |
+|---------------|----------------------------------------------------------------|-----------------|-------------------------------------------------|
+| terraform     | Basic addon to provide cloud resources (installed by default)  | -               | https://github.com/oam-dev/terraform-controller |
+| fluxcd        | Support deployment of Helm and Kustomize components            | kustomize, helm | https://fluxcd.io/                              |
+| kruise        | Support more powerful workload features                        | cloneset        | https://openkruise.io/                          |
+| prometheus    | Support basic observability from Prometheus                    | -               | https://prometheus.io/                          |
+| keda          | Support event-driven auto scaling                              | -               | https://keda.sh/                                |
+| ocm           | Support multi-cluster application deployment                   | -               | http://open-cluster-management.io/              |
+| observability | Support KubeVela core observability                            | -               | -                                               |
+
+1. Search all addons
+
+```shell
+vela addon list
+```
+
+2. Install addons (use fluxcd as an example)
+
+```shell
+vela addon enable fluxcd
+```
+
+3. Disable addons
+
+```
+vela addon disable fluxcd
+```
+
+Please remove all applications using an addon before disabling it.
+
+
+## Clean Up
+
+Run:
+
+```shell script
+helm uninstall -n vela-system kubevela
+rm -r ~/.vela
+```
+
+This will uninstall the KubeVela server component and its dependency components.
+It also cleans up the local CLI cache.
+
+Then clean up CRDs (CRDs are not removed via helm by default):
+
+```shell script
+ kubectl delete crd \
+  appdeployments.core.oam.dev \
+  applicationconfigurations.core.oam.dev \
+  applicationcontexts.core.oam.dev \
+  applicationrevisions.core.oam.dev \
+  applications.core.oam.dev \
+  approllouts.core.oam.dev \
+  clusters.core.oam.dev \
+  componentdefinitions.core.oam.dev \
+  components.core.oam.dev \
+  containerizedworkloads.core.oam.dev \
+  definitionrevisions.core.oam.dev \
+  envbindings.core.oam.dev \
+  healthscopes.core.oam.dev \
+  initializers.core.oam.dev \
+  manualscalertraits.core.oam.dev \
+  podspecworkloads.standard.oam.dev \
+  policydefinitions.core.oam.dev \
+  resourcetrackers.core.oam.dev \
+  rollouts.standard.oam.dev \
+  rollouttraits.standard.oam.dev \
+  scopedefinitions.core.oam.dev \
+  traitdefinitions.core.oam.dev \
+  workflows.core.oam.dev \
+  workflowstepdefinitions.core.oam.dev \
+  workloaddefinitions.core.oam.dev
+```
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/cloneset.md b/versioned_docs/version-v1.1/platform-engineers/cloneset.md
new file mode 100644
index 00000000..051d14cc
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cloneset.md
@@ -0,0 +1,92 @@
+---
+title: Extend CRD Operator as Component Type
+---
+
+Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD operator as a KubeVela component.
+**The mechanism works for all CRD Operators**.
+
+### Step 1: Install the CRD controller
+
+You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
+
+### Step 2: Create Component Definition
+
+To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
+A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
+Several highlights are listed below.
+
+#### 1. Describe The Workload Type
+
+```yaml
+...
+  annotations:
+    definition.oam.dev/description: "OpenKruise cloneset"
+...
+```
+
+A one-line description of this component type. It will be shown in helper commands such as `$ vela components`.
+
+#### 2. Register its underlying CRD
+
+```yaml
+...
+workload:
+  definition:
+    apiVersion: apps.kruise.io/v1alpha1
+    kind: CloneSet
+...
+```
+
+This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
+KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
+
+#### 3. Define Template
+
+```yaml
+...
+schematic:
+  cue:
+    template: |
+      output: {
+          apiVersion: "apps.kruise.io/v1alpha1"
+          kind:       "CloneSet"
+          metadata: labels: {
+            "app.oam.dev/component": context.name
+          }
+          spec: {
+              replicas: parameter.replicas
+              selector: matchLabels: {
+                  "app.oam.dev/component": context.name
+              }
+              template: {
+                  metadata: labels: {
+                    "app.oam.dev/component": context.name
+                  }
+                  spec: {
+                      containers: [{
+                          name:  context.name
+                          image: parameter.image
+                      }]
+                  }
+              }
+          }
+      }
+      parameter: {
+          // +usage=Which image would you like to use for your service
+          // +short=i
+          image: string
+
+          // +usage=Number of pods in the cloneset
+          replicas: *5 | int
+      }
+```
+
+### Step 3: Register New Component Type to KubeVela
+
+As long as the definition file is ready, you just need to apply it to Kubernetes.
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
+```
+
+And the new component type will immediately become available for developers to use in KubeVela.
diff --git a/versioned_docs/version-v1.1/platform-engineers/cloud-services.md b/versioned_docs/version-v1.1/platform-engineers/cloud-services.md
new file mode 100644
index 00000000..d3461032
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cloud-services.md
@@ -0,0 +1,22 @@
+---
+title: Overview
+---
+
+Cloud services are important components of your application, and KubeVela allows you to provision and consume them in a consistent experience.
+
+## How Does KubeVela Manage Cloud Services?
+
+In KubeVela, the needed cloud services are claimed as *components* in an application, and consumed via the *Service Binding Trait* by other components.
+
+## Does KubeVela Talk to the Clouds?
+
+KubeVela relies on [Terraform Controller](https://github.com/oam-dev/terraform-controller) or [Crossplane](http://crossplane.io/) as providers to talk to the clouds. Please check the documentation below for detailed steps.
+
+- [Terraform](./components/component-terraform)
+- [Crossplane](./crossplane)
+
+## Can an Instance of a Cloud Service be Shared by Multiple Applications?
+
+Yes, though we currently defer this to providers, so by default cloud service instances are not shared and are dedicated per `Application`. A workaround for now is to use a separate `Application` to declare the cloud service only; other `Application`s can then consume it via the service binding trait in a shared approach, as sketched below.
+
+In the future, we are considering making this a standard feature of KubeVela so you could claim whether a given cloud service component should be shared or not.
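+
+Below is a minimal sketch of that workaround. It assumes the `alibaba-rds` component type and a `webservice` component; the application names, the image, and the `db-conn` secret name are illustrative only. The first `Application` declares the cloud service and writes its connection info to a secret, and the second consumes that secret through the `service-binding` trait:
+
+```yaml
+# Application 1: declares the cloud service only.
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: shared-db
+spec:
+  components:
+    - name: db
+      type: alibaba-rds
+      properties:
+        username: oamtest
+        secretName: db-conn    # connection info will be written to this secret
+---
+# Application 2: consumes the shared secret via the service-binding trait.
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: db-consumer
+spec:
+  components:
+    - name: web
+      type: webservice
+      properties:
+        image: my-app:v1       # illustrative image
+      traits:
+        - type: service-binding
+          properties:
+            envMappings:
+              DB_PASSWORD:       # env var injected into the container
+                secret: db-conn  # the secret written by Application 1
+                key: password    # key inside that secret
+```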
diff --git a/versioned_docs/version-v1.1/platform-engineers/components/component-terraform.md b/versioned_docs/version-v1.1/platform-engineers/components/component-terraform.md
new file mode 100644
index 00000000..4e530709
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/components/component-terraform.md
@@ -0,0 +1,119 @@
+---
+title: Terraform Component
+---
+
+To enable end users to [provision and consume cloud resources](../../end-user/components/cloud-services/provider-and-consume-cloud-services),
+platform engineers need to prepare ComponentDefinitions for cloud resources if end users' requirements are beyond the
+[built-in capabilities](../../end-user/components/cloud-services/provider-and-consume-cloud-services#supported-cloud-resource-list).
+
+## Develop a ComponentDefinition for a cloud resource
+
+### Alibaba Cloud
+
+Take [Elastic IP](https://www.alibabacloud.com/help/doc-detail/36016.htm) as an example.
+
+#### Develop a Terraform resource or module for the cloud resource
+
+We recommend searching for the required cloud resource module in the [Terraform official module registry](https://registry.terraform.io/browse/modules).
+If you have no luck there, you can create a Terraform resource or module yourself per the
+[Terraform Alibaba Cloud Provider specifications](https://registry.terraform.io/providers/aliyun/alicloud/latest/docs).
+
+```terraform
+module "eip" {
+  source = "github.com/zzxwill/terraform-alicloud-eip"
+  name = var.name
+  bandwidth = var.bandwidth
+}
+
+variable "name" {
+  description = "Name to be used on all resources as prefix. Default to 'TF-Module-EIP'."
+  default = "TF-Module-EIP"
+  type = string
+}
+
+variable "bandwidth" {
+  description = "Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second)."
+  type = number
+  default = 5
+}
+
+output "EIP_ADDRESS" {
+  description = "The elastic ip address."
+  value = module.eip.this_eip_address.0
+}
+```
+
+For Alibaba Cloud EIP, here is the complete ComponentDefinition. You are warmly welcomed to contribute this extended cloud
+resource ComponentDefinition to [oam-dev/kubevela](https://github.com/oam-dev/kubevela/tree/master/charts/vela-core/templates/definitions).
+
+```yaml
+apiVersion: core.oam.dev/v1alpha2
+kind: ComponentDefinition
+metadata:
+  name: alibaba-eip
+  namespace: {{.Values.systemDefinitionNamespace}}
+  annotations:
+    definition.oam.dev/description: Terraform configuration for Alibaba Cloud Elastic IP
+  labels:
+    type: terraform
+spec:
+  workload:
+    definition:
+      apiVersion: terraform.core.oam.dev/v1beta1
+      kind: Configuration
+  schematic:
+    terraform:
+      configuration: |
+        module "eip" {
+          source = "github.com/zzxwill/terraform-alicloud-eip"
+          name = var.name
+          bandwidth = var.bandwidth
+        }
+
+        variable "name" {
+          description = "Name to be used on all resources as prefix. Default to 'TF-Module-EIP'."
+          default = "TF-Module-EIP"
+          type = string
+        }
+
+        variable "bandwidth" {
+          description = "Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second)."
+          type = number
+          default = 5
+        }
+
+        output "EIP_ADDRESS" {
+          description = "The elastic ip address."
+          value = module.eip.this_eip_address.0
+        }
+
+```
+
+#### Verify
+
+You can quickly verify the ComponentDefinition with the `vela show` command.
+
+```shell
+$ vela show alibaba-eip
+# Properties
++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
+| NAME                       | DESCRIPTION                                                                                | TYPE                                                      | REQUIRED | DEFAULT |
++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
+| name                       | Name to be used on all resources as prefix. Default to 'TF-Module-EIP'.                   | string                                                    | true     |         |
+| bandwidth                  | Maximum bandwidth to the elastic public network, measured in Mbps (Mega bit per second).  | number                                                    | true     |         |
+| writeConnectionSecretToRef | The secret which the cloud resource connection will be written to                         | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false    |         |
++----------------------------+------------------------------------------------------------------------------------------+-----------------------------------------------------------+----------+---------+
+
+
+## writeConnectionSecretToRef
++-----------+-----------------------------------------------------------------------------+--------+----------+---------+
+| NAME      | DESCRIPTION                                                                 | TYPE   | REQUIRED | DEFAULT |
++-----------+-----------------------------------------------------------------------------+--------+----------+---------+
+| name      | The secret name which the cloud resource connection will be written to     | string | true     |         |
+| namespace | The secret namespace which the cloud resource connection will be written to | string | false   |         |
++-----------+-----------------------------------------------------------------------------+--------+----------+---------+
+```
+
+If the tables display, the ComponentDefinition should work. To take a step further, you can verify it by provisioning an actual EIP instance per the doc [Provision cloud resources](../../end-user/components/cloud-services/provider-and-consume-cloud-services#provision-cloud-resources).
diff --git a/versioned_docs/version-v1.1/platform-engineers/components/custom-component.md b/versioned_docs/version-v1.1/platform-engineers/components/custom-component.md
new file mode 100644
index 00000000..5de2cd46
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/components/custom-component.md
@@ -0,0 +1,479 @@
+---
+title: CUE Component
+---
+
+This section introduces how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
+
+> Before reading this part, please make sure you've learned about the [Definition CRD](../definition-and-templates) in KubeVela.
+
+## Declare `ComponentDefinition`
+
+First, generate `ComponentDefinition` scaffolds via `vela def init` from an existing YAML file.
+
+The YAML file:
+
+```yaml
+apiVersion: "apps/v1"
+kind: "Deployment"
+spec:
+  selector:
+    matchLabels:
+      "app.oam.dev/component": "name"
+  template:
+    metadata:
+      labels:
+        "app.oam.dev/component": "name"
+    spec:
+      containers:
+      - name: "name"
+        image: "image"
+```
+
+Generate a `ComponentDefinition` based on the YAML file:
+
+```shell
+vela def init stateless -t component --template-yaml ./stateless.yaml -o stateless.cue
+```
+
+It generates a file:
+
+```shell
+$ cat stateless.cue
+stateless: {
+    annotations: {}
+    attributes: workload: definition: {
+        apiVersion: " apps/v1"
+        kind:       " Deployment"
+    }
+    description: ""
+    labels: {}
+    type: "component"
+}
+
+template: {
+    output: {
+        spec: {
+            selector: matchLabels: "app.oam.dev/component": "name"
+            template: {
+                metadata: labels: "app.oam.dev/component": "name"
+                spec: containers: [{
+                    name:  "name"
+                    image: "image"
+                }]
+            }
+        }
+        apiVersion: "apps/v1"
+        kind:       "Deployment"
+    }
+    outputs: {}
+    parameter: {}
+}
+```
+
+In detail:
+- `.spec.workload` is required to indicate the workload type of this component.
+- `.spec.schematic.cue.template` is a CUE template, specifically:
+    * The `output` field defines the template for the abstraction.
+    * The `parameter` field defines the template parameters, i.e. the configurable properties exposed in the `Application` abstraction (and a JSON schema will be automatically generated based on them).
+
+Add parameters in this auto-generated custom component file:
+
+```
+stateless: {
+    annotations: {}
+    attributes: workload: definition: {
+        apiVersion: " apps/v1"
+        kind:       " Deployment"
+    }
+    description: ""
+    labels: {}
+    type: "component"
+}
+
+template: {
+    output: {
+        spec: {
+            selector: matchLabels: "app.oam.dev/component": parameter.name
+            template: {
+                metadata: labels: "app.oam.dev/component": parameter.name
+                spec: containers: [{
+                    name:  parameter.name
+                    image: parameter.image
+                }]
+            }
+        }
+        apiVersion: "apps/v1"
+        kind:       "Deployment"
+    }
+    outputs: {}
+    parameter: {
+        name:  string
+        image: string
+    }
+}
+```
+
+You can use `vela def vet` to validate the format:
+
+```shell
+$ vela def vet stateless.cue
+Validation succeed.
+```
+
+Declare another component named `task`, which is an abstraction for run-to-completion workloads.
+
+```shell
+vela def init task -t component -o task.cue
+```
+
+It generates a file:
+
+```shell
+$ cat task.cue
+task: {
+    annotations: {}
+    attributes: workload: definition: {
+        apiVersion: " apps/v1"
+        kind:       " Deployment"
+    }
+    description: ""
+    labels: {}
+    type: "component"
+}
+
+template: {
+    output: {}
+    parameter: {}
+}
+```
+
+Edit the generated component file:
+
+```
+task: {
+    annotations: {}
+    attributes: workload: definition: {
+        apiVersion: "batch/v1"
+        kind:       "Job"
+    }
+    description: ""
+    labels: {}
+    type: "component"
+}
+
+template: {
+    output: {
+        apiVersion: "batch/v1"
+        kind:       "Job"
+        spec: {
+            parallelism: parameter.count
+            completions: parameter.count
+            template: spec: {
+                restartPolicy: parameter.restart
+                containers: [{
+                    image: parameter.image
+                    if parameter["cmd"] != _|_ {
+                        command: parameter.cmd
+                    }
+                }]
+            }
+        }
+    }
+    parameter: {
+        count:   *1 | int
+        image:   string
+        restart: *"Never" | string
+        cmd?: [...string]
+    }
+}
+```
+
+Apply the above `ComponentDefinition` files to your Kubernetes cluster:
+
+```shell
+$ vela def apply stateless.cue
+ComponentDefinition stateless created in namespace vela-system.
+$ vela def apply task.cue
+ComponentDefinition task created in namespace vela-system.
+``` + +## Declare an `Application` + +The `ComponentDefinition` can be instantiated in `Application` abstraction as below: + + ```yaml + apiVersion: core.oam.dev/v1alpha2 + kind: Application + metadata: + name: website + spec: + components: + - name: hello + type: stateless + properties: + image: crccheck/hello-world + name: mysvc + - name: countdown + type: task + properties: + image: centos:7 + cmd: + - "bin/bash" + - "-c" + - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done" + ``` + +### Under The Hood +
+
+The application resource above will generate and manage the following Kubernetes resources in your target cluster, based on the `output` in the CUE template and the user input in the `Application` properties.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: backend
+  ... # skip tons of metadata info
+spec:
+  template:
+    spec:
+      containers:
+        - name: mysvc
+          image: crccheck/hello-world
+    metadata:
+      labels:
+        app.oam.dev/component: mysvc
+  selector:
+    matchLabels:
+      app.oam.dev/component: mysvc
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: countdown
+  ... # skip tons of metadata info
+spec:
+  parallelism: 1
+  completions: 1
+  template:
+    metadata:
+      name: countdown
+    spec:
+      containers:
+        - name: countdown
+          image: 'centos:7'
+          command:
+            - bin/bash
+            - '-c'
+            - for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
+      restartPolicy: Never
+```
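+
+If you want to inspect these rendered resources locally before deploying, one convenient way is the `vela dry-run` command (a minimal sketch; `app.yaml` here is a placeholder filename for the `Application` above):
+
+```shell
+# Render the Application locally and print the resulting
+# K8s manifests to stdout without applying anything.
+vela dry-run -f app.yaml
+```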
+
+## CUE `Context`
+
+KubeVela allows you to reference the runtime information of your application via the `context` keyword.
+
+The most widely used context variables are the application name (`context.appName`) and the component name (`context.name`).
+
+```cue
+context: {
+  appName: string
+  name: string
+}
+```
+
+For example, let's say you want to use the component name filled in by users as the container name in the workload instance:
+
+```cue
+parameter: {
+    image: string
+}
+output: {
+  ...
+    spec: {
+        containers: [{
+            name:  context.name
+            image: parameter.image
+        }]
+    }
+  ...
+}
+```
+
+> Note that the `context` information is auto-injected before resources are applied to the target cluster.
+
+### Full available information in CUE `context`
+
+| Context Variable | Description |
+| :--: | :---------: |
+| `context.appRevision` | The revision of the application |
+| `context.appRevisionNum` | The revision number (`int` type) of the application, e.g., `context.appRevisionNum` will be `1` if `context.appRevision` is `app-v1` |
+| `context.appName` | The name of the application |
+| `context.name` | The name of the component of the application |
+| `context.namespace` | The namespace of the application |
+| `context.output` | The rendered workload API resource of the component; this is usually used in traits |
+| `context.outputs.<resourceName>` | The rendered trait API resource of the component; this is usually used in traits |
+
+
+## Composition
+
+It's common that a component definition is composed of multiple API resources, for example, a `webserver` component that is composed of a Deployment and a Service. CUE is a great solution to achieve this in simplified primitives.
+
+> Another approach to do composition in KubeVela is of course [using Helm](../helm/component).
+
+## How-to
+
+KubeVela requires you to define the template of the workload type in the `output` section, and leave all the other resource templates in the `outputs` section with the format below:
+
+```cue
+outputs: <unique-name>: <full template>
+```
+
+> The reason for this requirement is that KubeVela needs to know it is currently rendering a workload, so it can do some "magic" like patching annotations/labels or other data during rendering.
+
+Below is the example for the `webserver` definition:
+
+```
+webserver: {
+    annotations: {}
+    attributes: workload: definition: {
+        apiVersion: "apps/v1"
+        kind:       "Deployment"
+    }
+    description: ""
+    labels: {}
+    type: "component"
+}
+
+template: {
+    output: {
+        apiVersion: "apps/v1"
+        kind:       "Deployment"
+        spec: {
+            selector: matchLabels: {
+                "app.oam.dev/component": context.name
+            }
+            template: {
+                metadata: labels: {
+                    "app.oam.dev/component": context.name
+                }
+                spec: {
+                    containers: [{
+                        name:  context.name
+                        image: parameter.image
+
+                        if parameter["cmd"] != _|_ {
+                            command: parameter.cmd
+                        }
+
+                        if parameter["env"] != _|_ {
+                            env: parameter.env
+                        }
+
+                        if context["config"] != _|_ {
+                            env: context.config
+                        }
+
+                        ports: [{
+                            containerPort: parameter.port
+                        }]
+
+                        if parameter["cpu"] != _|_ {
+                            resources: {
+                                limits:
+                                    cpu: parameter.cpu
+                                requests:
+                                    cpu: parameter.cpu
+                            }
+                        }
+                    }]
+                }
+            }
+        }
+    }
+    // an extra template
+    outputs: service: {
+        apiVersion: "v1"
+        kind:       "Service"
+        spec: {
+            selector: {
+                "app.oam.dev/component": context.name
+            }
+            ports: [
+                {
+                    port:       parameter.port
+                    targetPort: parameter.port
+                },
+            ]
+        }
+    }
+    parameter: {
+        image: string
+        cmd?: [...string]
+        port: *80 | int
+        env?: [...{
+            name:   string
+            value?: string
+            valueFrom?: {
+                secretKeyRef: {
+                    name: string
+                    key:  string
+                }
+            }
+        }]
+        cpu?: string
+    }
+}
+```
+
+Apply it to your Kubernetes cluster:
+
+```shell
+$ vela def apply webserver.cue
+ComponentDefinition webserver created in namespace vela-system.
+```
+
+The user could now declare an `Application` with it:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: webserver-demo
+  namespace: default
+spec:
+  components:
+    - name: hello-world
+      type: webserver
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+        env:
+        - name: "foo"
+          value: "bar"
+        cpu: "100m"
+```
+
+It will generate and manage the API resources below in the target cluster:
+
+```shell
+kubectl get deployment
+```
+```console
+NAME             READY   UP-TO-DATE   AVAILABLE   AGE
+hello-world-v1   1/1     1            1           15s
+```
+
+```shell
+kubectl get svc
+```
+```console
+NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+hello-world-trait-7bdcff98f7   ClusterIP   <cluster-ip>   <none>        8000/TCP   32s
+```
+
+## What's Next
+
+Please check the [Learning CUE](../cue/basic) documentation on why we support CUE as a first-class templating solution and for more details about using CUE efficiently.
diff --git a/versioned_docs/version-v1.1/platform-engineers/crossplane.md b/versioned_docs/version-v1.1/platform-engineers/crossplane.md
new file mode 100644
index 00000000..336339a2
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/crossplane.md
@@ -0,0 +1,156 @@
+---
+title: Crossplane
+---
+
+In this documentation, we will use Alibaba Cloud's RDS (Relational Database Service) and Alibaba Cloud's OSS (Object Storage Service) as examples to show how to enable cloud services as part of the application deployment.
+
+These cloud services are provided by Crossplane.
+
+## Prepare Crossplane
+
+Please refer to [Installation](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0)
+to install the Crossplane Alibaba provider v0.5.0.
+
+> If you'd like to configure any other Crossplane providers, please refer to [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration).
+
+```
+$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0
+
+# Note the xxx and yyy here are your own AccessKey and SecretKey to the cloud resources.
+$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy
+
+$ kubectl apply -f provider.yaml
+```
+
+`provider.yaml` is as below.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: crossplane-system
+
+---
+apiVersion: alibaba.crossplane.io/v1alpha1
+kind: ProviderConfig
+metadata:
+  name: default
+spec:
+  credentials:
+    source: Secret
+    secretRef:
+      namespace: crossplane-system
+      name: alibaba-account-creds
+      key: credentials
+  region: cn-beijing
+```
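+
+Before moving on, you may want to confirm the provider is up. A minimal check is sketched below; it assumes Crossplane's standard resource names, so adjust if your CRD names differ:
+
+```shell
+# The provider package should report INSTALLED and HEALTHY as True.
+kubectl get provider.pkg.crossplane.io
+
+# The ProviderConfig created from provider.yaml should exist.
+kubectl get providerconfig.alibaba.crossplane.io default
+```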
+
+## Register `alibaba-rds` Component
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: alibaba-rds
+  namespace: vela-system
+  annotations:
+    definition.oam.dev/description: "Alibaba Cloud RDS Resource"
+spec:
+  workload:
+    definition:
+      apiVersion: database.alibaba.crossplane.io/v1alpha1
+      kind: RDSInstance
+  schematic:
+    cue:
+      template: |
+        output: {
+            apiVersion: "database.alibaba.crossplane.io/v1alpha1"
+            kind:       "RDSInstance"
+            spec: {
+                forProvider: {
+                    engine:                parameter.engine
+                    engineVersion:         parameter.engineVersion
+                    dbInstanceClass:       parameter.instanceClass
+                    dbInstanceStorageInGB: 20
+                    securityIPList:        "0.0.0.0/0"
+                    masterUsername:        parameter.username
+                }
+                writeConnectionSecretToRef: {
+                    namespace: context.namespace
+                    name:      parameter.secretName
+                }
+                providerConfigRef: {
+                    name: "default"
+                }
+                deletionPolicy: "Delete"
+            }
+        }
+        parameter: {
+            // +usage=RDS engine
+            engine: *"mysql" | string
+            // +usage=The version of RDS engine
+            engineVersion: *"8.0" | string
+            // +usage=The instance class for the RDS
+            instanceClass: *"rds.mysql.c1.large" | string
+            // +usage=RDS username
+            username: string
+            // +usage=Secret name which RDS connection will write to
+            secretName: string
+        }
+
+```
+
+## Register `alibaba-oss` Component
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: alibaba-oss
+  namespace: vela-system
+  annotations:
+    definition.oam.dev/description: "Alibaba Cloud OSS Resource"
+spec:
+  workload:
+    definition:
+      apiVersion: oss.alibaba.crossplane.io/v1alpha1
+      kind: Bucket
+  schematic:
+    cue:
+      template: |
+        output: {
+            apiVersion: "oss.alibaba.crossplane.io/v1alpha1"
+            kind:       "Bucket"
+            spec: {
+                name:               parameter.name
+                acl:                parameter.acl
+                storageClass:       parameter.storageClass
+                dataRedundancyType: parameter.dataRedundancyType
+                writeConnectionSecretToRef: {
+                    namespace: context.namespace
+                    name:      parameter.secretName
+                }
+                providerConfigRef: {
+                    name: "default"
+                }
+                deletionPolicy: "Delete"
+            }
+        }
+        parameter: {
+            // +usage=OSS bucket name
+            name: string
+            // +usage=The access control list of the OSS bucket
+            acl: *"private" | string
+            // +usage=The storage type of OSS bucket
+            storageClass: *"Standard" | string
+            // +usage=The data redundancy type of OSS bucket
+            dataRedundancyType: *"LRS" | string
+            // +usage=Secret name which OSS connection will write to
+            secretName: string
+        }
+
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/advanced.md b/versioned_docs/version-v1.1/platform-engineers/cue/advanced.md
new file mode 100644
index 00000000..c97b30a3
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/advanced.md
@@ -0,0 +1,456 @@
+---
+title: CUE Advanced
+---
+
+This section will introduce how to use CUE to deliver KubeVela modules. You can dynamically expand the platform as user needs change, adapt to a growing number of users and scenarios, and meet the iterative demands of the company's long-term business development.
+
+## Convert Kubernetes API Objects Into Custom Components
+
+Let's take the [Kubernetes StatefulSet][5] as an example to show how to use KubeVela to build custom modules and provide capabilities.
+
+Save the StatefulSet YAML example from the official document locally, name it `my-stateful.yaml`, and then execute the command below:
+
--template-yaml ./my-stateful.yaml -o my-stateful.cue + +View the generated "my-stateful.cue" file: + + $ cat my-stateful.cue + "my-stateful": { + annotations: {} + attributes: workload: definition: { + apiVersion: " apps/v1" + kind: " Deployment" + } + description: "My StatefulSet component." + labels: {} + type: "component" + } + + template: { + output: { + apiVersion: "v1" + kind: "Service" + ... // omit non-critical info + } + outputs: web: { + apiVersion: "apps/v1" + kind: "StatefulSet" + ... // omit non-critical info + } + parameter: {} + } + +Modify the generated file as follows: + +1. The example of the official StatefulSet website is a composite component composed of two objects `StatefulSet` and `Service`. According to KubeVela [Rules for customize components] [6], in composite components, core workloads such as StatefulSet need to be represented by the `template.output` field, and other auxiliary objects are represented by `template.outputs`, so we make some adjustments and all the automatically generated output and outputs are switched. +2. Then we fill in the apiVersion and kind data of the core workload into the part marked as `` + +After modification, you can use `vela def vet` to do format check and verification. + + $ vela def vet my-stateful.cue + Validation succeed. + +The file after two steps of changes is as follows: + + $ cat my-stateful.cue + "my-stateful": { + annotations: {} + attributes: workload: definition: { + apiVersion: "apps/v1" + kind: "StatefulSet" + } + description: "My StatefulSet component." + labels: {} + type: "component" + } + + template: { + output: { + apiVersion: "apps/v1" + kind: "StatefulSet" + metadata: name: "web" + spec: { + selector: matchLabels: app: "nginx" + replicas: 3 + serviceName: "nginx" + template: { + metadata: labels: app: "nginx" + spec: { + containers: [{ + name: "nginx" + ports: [{ + name: "web" + containerPort: 80 + }] + image: "k8s.gcr.io/nginx-slim:0.8" + volumeMounts: [{ + name: "www" + mountPath: "/usr/share/nginx/html" + }] + }] + terminationGracePeriodSeconds: 10 + } + } + volumeClaimTemplates: [{ + metadata: name: "www" + spec: { + accessModes: ["ReadWriteOnce"] + resources: requests: storage: "1Gi" + storageClassName: "my-storage-class" + } + }] + } + } + outputs: web: { + apiVersion: "v1" + kind: "Service" + metadata: { + name: "nginx" + labels: app: "nginx" + } + spec: { + clusterIP: "None" + ports: [{ + name: "web" + port: 80 + }] + selector: app: "nginx" + } + } + parameter: {} + } + +Install ComponentDefinition into the Kubernetes cluster: + + $ vela def apply my-stateful.cue + ComponentDefinition my-stateful created in namespace vela-system. + +You can see that a `my-stateful` component via `vela components` command: + + $ vela components + NAME NAMESPACE WORKLOAD DESCRIPTION + ... + my-stateful vela-system statefulsets.apps My StatefulSet component. + ... 
+
+When you put this customized component into an `Application`, it looks like below (saved as `app-stateful.yaml`; the `image`, `replicas` and `name` parameters and the `sidecar` trait used here are explained in the following sections):
+
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: website
+    spec:
+      components:
+        - name: my-component
+          type: my-stateful
+          properties:
+            image: nginx:latest
+            replicas: 1
+            name: my-component
+          traits:
+            - type: sidecar
+              properties:
+                name: my-sidecar
+                image: saravak/fluentd:elastic
+
+Rendering it with `vela dry-run -f app-stateful.yaml` outputs the following resources:
+
+```
+# Application(website) -- Component(my-component)
+---
+
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: nginx
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: my-component
+    app.oam.dev/name: website
+    workload.oam.dev/type: my-stateful
+  name: nginx
+  namespace: default
+spec:
+  clusterIP: None
+  ports:
+  - name: web
+    port: 80
+  selector:
+    app: nginx
+  template:
+    spec:
+      containers:
+      - image: saravak/fluentd:elastic
+        name: my-sidecar
+
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  labels:
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: my-component
+    app.oam.dev/name: website
+    trait.oam.dev/resource: web
+    trait.oam.dev/type: AuxiliaryWorkload
+  name: web
+  namespace: default
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  serviceName: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - image: k8s.gcr.io/nginx-slim:0.8
+        name: nginx
+        ports:
+        - containerPort: 80
+          name: web
+        volumeMounts:
+        - mountPath: /usr/share/nginx/html
+          name: www
+      terminationGracePeriodSeconds: 10
+  volumeClaimTemplates:
+  - metadata:
+      name: www
+    spec:
+      accessModes:
+      - ReadWriteOnce
+      resources:
+        requests:
+          storage: 1Gi
+      storageClassName: my-storage-class
+```
+
+You can also use `vela dry-run -h` to view more available function parameters.
+
+
+## Use `context` to get runtime information
+
+In our Application example above, the `name` field in the properties and the name of the Component are the same. So we can use the `context` keyword that carries context information in the template, where `context.name` is the runtime component name; thus the `name` parameter in `parameter` is no longer needed.
+
+    ... # Omit other unmodified fields
+    template: {
+        output: {
+            apiVersion: "apps/v1"
+            kind:       "StatefulSet"
+            metadata: name: context.name
+            ... // omit other unmodified fields
+        }
+        parameter: {
+            image:    string
+            replicas: int
+        }
+    }
+
+KubeVela has a built-in application [required context][9]; you can configure it according to your needs.
+
+## Add Traits On Demand
+
+In addition to modifying ComponentDefinitions and adding parameters, you can also use a TraitDefinition to patch configurations onto Components. KubeVela has built-in operations to meet the following needs: adding labels and annotations, injecting environment variables and sidecars, adding volumes, and so on. You can also [customize a Trait][10] to do more flexible patching.
+
+You can use `vela traits` to view them; the traits marked with `*` are general traits, which can operate on common Kubernetes resource objects.
+
+    $ vela traits
+    NAME           NAMESPACE    APPLIES-TO  CONFLICTS-WITH  POD-DISRUPTIVE  DESCRIPTION
+    annotations    vela-system  *                           true            Add annotations on K8s pod for your workload which follows
+                                                                            the pod spec in path 'spec.template'.
+    configmap      vela-system  *                           true            Create/Attach configmaps on K8s pod for your workload which
+                                                                            follows the pod spec in path 'spec.template'.
+    env            vela-system  *                           false           add env on K8s pod for your workload which follows the pod
+                                                                            spec in path 'spec.template.'
+    hostalias      vela-system  *                           false           Add host aliases on K8s pod for your workload which follows
+                                                                            the pod spec in path 'spec.template'.
+    labels         vela-system  *                           true            Add labels on K8s pod for your workload which follows the
+                                                                            pod spec in path 'spec.template'.
+    lifecycle      vela-system  *                           true            Add lifecycle hooks for the first container of K8s pod for
+                                                                            your workload which follows the pod spec in path
+                                                                            'spec.template'.
+    node-affinity  vela-system  *                           true            affinity specify node affinity and toleration on K8s pod for
+                                                                            your workload which follows the pod spec in path
+                                                                            'spec.template'.
+    scaler         vela-system  *                           false           Manually scale K8s pod for your workload which follows the
+                                                                            pod spec in path 'spec.template'.
+    sidecar        vela-system  *                           true            Inject a sidecar container to K8s pod for your workload
+                                                                            which follows the pod spec in path 'spec.template'.
+
+
+Taking sidecar as an example, you can check the usage of sidecar:
+
+    $ vela show sidecar
+    # Properties
+    +---------+-----------------------------------------+-----------------------+----------+---------+
+    | NAME    | DESCRIPTION                             | TYPE                  | REQUIRED | DEFAULT |
+    +---------+-----------------------------------------+-----------------------+----------+---------+
+    | name    | Specify the name of sidecar container   | string                | true     |         |
+    | cmd     | Specify the commands run in the sidecar | []string              | false    |         |
+    | image   | Specify the image of sidecar container  | string                | true     |         |
+    | volumes | Specify the shared volume path          | [[]volumes](#volumes) | false    |         |
+    +---------+-----------------------------------------+-----------------------+----------+---------+
+
+
+    ## volumes
+    +------+-------------+--------+----------+---------+
+    | NAME | DESCRIPTION | TYPE   | REQUIRED | DEFAULT |
+    +------+-------------+--------+----------+---------+
+    | path |             | string | true     |         |
+    | name |             | string | true     |         |
+    +------+-------------+--------+----------+---------+
+
+Use the sidecar directly to inject a container; the application description is as follows:
+
+    apiVersion: core.oam.dev/v1beta1
+    kind: Application
+    metadata:
+      name: website
+    spec:
+      components:
+        - name: my-component
+          type: my-stateful
+          properties:
+            image: nginx:latest
+            replicas: 1
+            name: my-component
+          traits:
+            - type: sidecar
+              properties:
+                name: my-sidecar
+                image: saravak/fluentd:elastic
+
+Deploy and run the application, and you can see that a fluentd sidecar has been deployed and is running in the StatefulSet.
+
+You can also use `vela def` to get the CUE source file of the sidecar to modify it, add parameters, and so on.
+
+    vela def get sidecar
+
+The customization of operation and maintenance capabilities is similar to component customization, so we won't go into details here. You can read [Customize Trait][11] for more detailed functions.
+
+## Summary
+
+This section introduced how to deliver complete modular capabilities through CUE. The core idea is that you can dynamically add configuration capabilities as user needs grow, and gradually expose more functions and usages, so as to lower the overall learning threshold for users and ultimately improve R&D efficiency.
+The out-of-the-box capabilities provided by KubeVela, including components, traits, policies, and workflows, are also designed as pluggable and modifiable capabilities.
+
+## Next
+
+Get to know how to customize:
+* [Component](../components/custom-component)
+* [Trait](../traits/customize-trait)
+* [Workflow](../workflow/workflow)
+
+[1]: ../oam/oam-model
+[2]: ../oam/x-definition
+[3]: ../cue/basic
+[4]: ../cue/definition-edit
+[5]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+[6]: ../components/custom-component#composition
+[7]: ../cue/basic#cue-templating-and-references
+[9]: ../oam/x-definition#x-definition-runtime-context
+[10]: ../traits/patch-trait
+[11]: ../traits/customize-trait
+
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/basic.md b/versioned_docs/version-v1.1/platform-engineers/cue/basic.md
new file mode 100644
index 00000000..ce1ae184
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/basic.md
@@ -0,0 +1,568 @@
+---
+title: CUE Basic
+---
+
+This document explains in detail how to use CUE to encapsulate and abstract a given capability in Kubernetes.
+
+> Please make sure you have already learned about the `Application` custom resource before reading the following guide.
+
+## Overview
+
+The reasons why KubeVela supports CUE as a first-class solution to design abstractions can be concluded as below:
+
+- **CUE is designed for large-scale configuration.** CUE has the ability to understand a
+  configuration worked on by engineers across a whole company and to safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal to define and ship production-level applications at web scale.
+- **CUE supports first-class code generation and automation.** CUE can integrate with existing tools and workflows naturally while other tools would have to build complex custom solutions. For example, it can generate OpenAPI schemas from Go code. This is how KubeVela builds developer tools and GUI interfaces based on the CUE templates.
+- **CUE integrates very well with Go.**
+  KubeVela is built with Go just like most projects in the Kubernetes ecosystem. CUE is also implemented in Go and exposes a rich Go API. KubeVela integrates with CUE as its core library and works as a Kubernetes controller. With the help of CUE, KubeVela can easily handle data constraint problems.
+
+> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
+
+## Prerequisites
+
+Please make sure the CLIs below are present in your environment:
+* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
+* [`vela` (>v1.0.0)](../../install#3-get-kubevela-cli)
+
+## CUE CLI Basic
+
+Below is basic CUE data; you can define both schema and value in the same file with almost the same format:
+
+```
+a: 1.5
+a: float
+b: 1
+b: int
+d: [1, 2, 3]
+g: {
+    h: "abc"
+}
+e: string
+```
+
+CUE is a superset of JSON, so we can use it like JSON with the following conveniences:
+
+* C style comments,
+* quotes may be omitted from field names without special characters,
+* commas at the end of fields are optional,
+* a comma after the last element in a list is allowed,
+* outer curly braces are optional.
+
+CUE has powerful CLI commands. Let's keep the data in a file named `first.cue` and try them.
+
+* Format the CUE file. If you're using Goland or a similar JetBrains IDE,
+  you can [configure save on format](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
+  This command will not only format the CUE, but also point out any wrong schema. That's very useful.
+  ```shell
+  cue fmt first.cue
+  ```
+
+* Schema check. Besides `cue fmt`, you can also use `cue vet` to check the schema.
+  ```shell
+  cue vet first.cue
+  ```
+
+* Calculate/render the result. `cue eval` will calculate the CUE file and render out the result.
+  You can see the results don't contain `a: float` and `b: int`, because these two variables are calculated.
+  Meanwhile, `e: string` doesn't have a definitive result, so it is kept as-is.
+  ```shell
+  cue eval first.cue
+  ```
+  ```console
+  a: 1.5
+  b: 1
+  d: [1, 2, 3]
+  g: {
+      h: "abc"
+  }
+  e: string
+  ```
+
+* Render a specified result. For example, if we only want to know the result of `b` in the file, we can specify the parameter `-e`.
+  ```shell
+  cue eval -e b first.cue
+  ```
+  ```console
+  1
+  ```
+
+* Export the result. `cue export` will export the result with final values. It will report an error if some variables are not definitive.
+  ```shell
+  cue export first.cue
+  ```
+  ```console
+  e: cannot convert incomplete value "string" to JSON:
+      ./first.cue:9:4
+  ```
+  We can complete the value by giving a value to `e`, for example:
+  ```shell
+  echo "e: \"abc\"" >> first.cue
+  ```
+  Then, the command will work. By default, the result will be rendered in JSON format.
+  ```shell
+  cue export first.cue
+  ```
+  ```console
+  {
+      "a": 1.5,
+      "b": 1,
+      "d": [
+          1,
+          2,
+          3
+      ],
+      "g": {
+          "h": "abc"
+      },
+      "e": "abc"
+  }
+  ```
+
+* Export the result in YAML format.
+  ```shell
+  cue export first.cue --out yaml
+  ```
+  ```console
+  a: 1.5
+  b: 1
+  d:
+  - 1
+  - 2
+  - 3
+  g:
+    h: abc
+  e: abc
+  ```
+
+* Export the result for a specified variable.
+  ```shell
+  cue export -e g first.cue
+  ```
+  ```console
+  {
+      "h": "abc"
+  }
+  ```
+
+For now, you have learned all the useful CUE CLI operations.
+
+## CUE Language Basic
+
+* Data structures: below are the basic data structures of CUE.
+
+```shell
+// float
+a: 1.5
+
+// int
+b: 1
+
+// string
+c: "blahblahblah"
+
+// array
+d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
+
+// bool
+e: true
+
+// struct
+f: {
+    a: 1.5
+    b: 1
+    d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
+    g: {
+        h: "abc"
+    }
+}
+
+// null
+j: null
+```
+
+* Define a custom CUE type. You can use a `#` symbol to specify that a variable represents a CUE type.
+
+```
+#abc: string
+```
+
+Let's name it `second.cue`. Then `cue export` won't complain, as `#abc` is a type, not an incomplete value.
+
+```shell
+cue export second.cue
+```
+```console
+{}
+```
+
+You can also define a more complex custom struct, such as:
+
+```
+#abc: {
+  x: int
+  y: string
+  z: {
+    a: float
+    b: bool
+  }
+}
+```
+
+It's widely used in KubeVela to define templates and do validation.
+
+## CUE Templating and References
+
+Let's try to define a CUE template with the knowledge just learned.
+
+1. Define a struct variable `parameter`.
+
+```shell
+parameter: {
+    name:  string
+    image: string
+}
+```
+
+Let's save it in a file called `deployment.cue`.
+
+2. Define a more complex struct variable `template` and reference the variable `parameter`.
+
+```
+template: {
+    apiVersion: "apps/v1"
+    kind:       "Deployment"
+    spec: {
+        selector: matchLabels: {
+            "app.oam.dev/component": parameter.name
+        }
+        template: {
+            metadata: labels: {
+                "app.oam.dev/component": parameter.name
+            }
+            spec: {
+                containers: [{
+                    name:  parameter.name
+                    image: parameter.image
+                }]
+            }}}
+}
+```
+
+People who are familiar with Kubernetes may have recognized that this is a template of a K8s Deployment. The `parameter` part is the parameters of the template.
+
+3. Add it into the `deployment.cue`.
+
+4. Then, let's add the value by adding the following code block:
+
+```
+parameter: {
+    name:  "mytest"
+    image: "nginx:v1"
+}
+```
+
+5. Finally, let's export it in YAML:
+
+```shell
+cue export deployment.cue -e template --out yaml
+```
+```console
+apiVersion: apps/v1
+kind: Deployment
+spec:
+  template:
+    spec:
+      containers:
+      - name: mytest
+        image: nginx:v1
+    metadata:
+      labels:
+        app.oam.dev/component: mytest
+  selector:
+    matchLabels:
+      app.oam.dev/component: mytest
+```
+
+## Advanced CUE Schematic
+
+* Open structs and lists. Using `...` in a list or struct means the object is open.
+
+   - A list like `[...string]` means it can hold multiple string elements.
+     If we don't add `...`, then `[string]` means the list can only have one `string` element in it.
+   - A struct like below means the struct can contain unknown fields.
+     ```
+     {
+       abc: string
+       ...
+     }
+     ```
+
+* The `|` operator represents that a value could be either case. Below is an example where the variable `a` could be of type string or int.
+
+```shell
+a: string | int
+```
+
+* Default values. We can use the `*` symbol to represent a default value for a variable. That's usually used with `|`,
+  which represents a default value for some type. Below is an example where the variable `a` is an `int` whose default value is `1`.
+
+```shell
+a: *1 | int
+```
+
+* Optional variables. In some cases a variable may not be used; such variables are optional, and we can use `?:` to define them.
+  In the example below, `a` is an optional variable, `x` and `z` in `#my` are optional, while `y` is a required variable.
+
+```
+a ?: int
+
+#my: {
+x ?: string
+y : int
+z ?:float
+}
+```
+
+Optional variables can be skipped; that usually works together with conditional logic.
+Specifically, if some field does not exist, the CUE grammar is `if _variable_ != _|_`; an example is below:
+
+```
+parameter: {
+    name: string
+    image: string
+    config?: [...#Config]
+}
+output: {
+    ...
+    spec: {
+        containers: [{
+            name:  parameter.name
+            image: parameter.image
+            if parameter.config != _|_ {
+                config: parameter.config
+            }
+        }]
+    }
+    ...
+}
+```
+
+* The `&` operator is used to unify two variables.
+
+```shell
+a: *1 | int
+b: 3
+c: a & b
+```
+
+Save it in a `third.cue` file.
+
+You can evaluate the result by using `cue eval`:
+
+```shell
+cue eval third.cue
+```
+```console
+a: 1
+b: 3
+c: 3
+```
+
+* Conditional statements are really useful when you have cascade operations, where different values affect different results.
+  So you can do `if..else` logic in the template.
+
+```shell
+price: number
+feel: *"good" | string
+// Feel bad if price is too high
+if price > 100 {
+    feel: "bad"
+}
+price: 200
+```
+
+Save it in a `fourth.cue` file.
+
+You can evaluate the result by using `cue eval`:
+
+```shell
+cue eval fourth.cue
+```
+```console
+price: 200
+feel:  "bad"
+```
+
+Another example is to use the bool type as a parameter.
+
+```
+parameter: {
+    name:   string
+    image:  string
+    useENV: bool
+}
+output: {
+    ...
+    spec: {
+        containers: [{
+            name:  parameter.name
+            image: parameter.image
+            if parameter.useENV == true {
+                env: [{name: "my-env", value: "my-value"}]
+            }
+        }]
+    }
+    ...
+}
+```
+
+
+* For loops: if you want to avoid duplication, you may want to use a for loop.
+  - Loop for Map
+    ```cue
+    parameter: {
+        name:  string
+        image: string
+        env: [string]: string
+    }
+    output: {
+        spec: {
+            containers: [{
+                name:  parameter.name
+                image: parameter.image
+                env: [
+                    for k, v in parameter.env {
+                        name:  k
+                        value: v
+                    },
+                ]
+            }]
+        }
+    }
+    ```
+  - Loop for type
+    ```
+    #a: {
+        "hello": "Barcelona"
+        "nihao": "Shanghai"
+    }
+
+    for k, v in #a {
+        "\(k)": {
+            nameLen: len(v)
+            value:   v
+        }
+    }
+    ```
+  - Loop for Slice
+    ```cue
+    parameter: {
+        name:  string
+        image: string
+        env: [...{name:string,value:string}]
+    }
+    output: {
+      ...
+        spec: {
+            containers: [{
+                name:  parameter.name
+                image: parameter.image
+                env: [
+                    for _, v in parameter.env {
+                        name:  v.name
+                        value: v.value
+                    },
+                ]
+            }]
+        }
+    }
+    ```
+
+Note that we use `"\( _my-statement_ )"` for inner calculation (interpolation) in strings.
+
+## Import CUE Internal Packages
+
+CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which can also be used in KubeVela.
+
+Below is an example that uses `strings.Join` to concatenate a string list into one string.
+
+```cue
+import ("strings")
+
+parameter: {
+    outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
+}
+output: {
+    spec: {
+        if len(parameter.outputs) > 0 {
+            _x: [ for _, v in parameter.outputs {
+                "\(v.ip) \(v.hostname)"
+            }]
+            message: "Visiting URL: " + strings.Join(_x, "")
+        }
+    }
+}
+```
+
+## Import Kube Package
+
+KubeVela automatically generates internal packages for all K8s resources by reading the K8s OpenAPI from the
+installed K8s cluster.
+
+You can use these packages in the format `kube/<apiVersion>` in the CUE template of KubeVela, just like the
+CUE internal packages.
+
+For example, `Deployment` can be used as:
+
+```cue
+import (
+   apps "kube/apps/v1"
+)
+
+parameter: {
+    name: string
+}
+
+output: apps.#Deployment
+output: {
+    metadata: name: parameter.name
+}
+```
+
+Service can be used as (importing a package with an alias is not necessary):
+
+```cue
+import ("kube/v1")
+
+output: v1.#Service
+output: {
+    metadata: {
+        "name": parameter.name
+    }
+    spec: type: "ClusterIP",
+}
+
+parameter: {
+    name: "myapp"
+}
+```
+
+Even an installed CRD works:
+
+```
+import (
+  oam "kube/core.oam.dev/v1alpha2"
+)
+
+output: oam.#Application
+output: {
+    metadata: {
+        "name": parameter.name
+    }
+}
+
+parameter: {
+    name: "myapp"
+}
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/cross-namespace-resource.md b/versioned_docs/version-v1.1/platform-engineers/cue/cross-namespace-resource.md
new file mode 100644
index 00000000..21773bb1
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/cross-namespace-resource.md
@@ -0,0 +1,53 @@
+---
+title: Render Resource to Other Namespaces
+---
+
+In this section, we will introduce how to use a CUE template to create resources in a different namespace from the application.
+
+By default, the `metadata.namespace` of a K8s resource in the CUE template is automatically filled with the same namespace as the application.
+
+If you want to create K8s resources running in a specific namespace which is different from the application's, you can set the `metadata.namespace` field.
+KubeVela will create the resources in the specified namespace, and create a resourceTracker object as the owner of those resources.
+
+
+## Usage
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: worker
+spec:
+  definitionRef:
+    name: deployments.apps
+  schematic:
+    cue:
+      template: |
+        parameter: {
+            name: string
+            image: string
+            namespace: string # make this parameter `namespace` a keyword which indicates the resource may be located in a different namespace from the application
+        }
+        output: {
+            apiVersion: "apps/v1"
+            kind:       "Deployment"
+            metadata: {
+                namespace: parameter.namespace
+            }
+            spec: {
+                selector: matchLabels: {
+                    "app.oam.dev/component": parameter.name
+                }
+                template: {
+                    metadata: labels: {
+                        "app.oam.dev/component": parameter.name
+                    }
+                    spec: {
+                        containers: [{
+                            name:  parameter.name
+                            image: parameter.image
+                        }]
+                    }}}
+        }
+```
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/definition-edit.md b/versioned_docs/version-v1.1/platform-engineers/cue/definition-edit.md
new file mode 100644
index 00000000..283db30b
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/definition-edit.md
@@ -0,0 +1,244 @@
+---
+title: Manage X-Definition
+---
+
+In KubeVela CLI (>= v1.1.0), the `vela def` command group provides a series of convenient definition-writing tools. With these commands, users only need to write CUE files to generate and edit definitions, instead of composing Kubernetes YAML objects with mixed-in CUE strings.
+
+## init
+
+`vela def init` is a command that helps users bootstrap new definitions. To create an empty trait definition, run `vela def init my-trait -t trait --desc "My trait description."`
+
+```json
+"my-trait": {
+    annotations: {}
+    attributes: {
+        appliesToWorkloads: []
+        conflictsWith: []
+        definitionRef:   ""
+        podDisruptive:   false
+        workloadRefPath: ""
+    }
+    description: "My trait description."
+    labels: {}
+    type: "trait"
+}
+template: patch: {}
+```
+
+Or you can use `vela def init my-comp --interactive` to initialize definitions interactively.
+
+```bash
+$ vela def init my-comp --interactive
+Please choose one definition type from the following values: component, trait, policy, workload, scope, workflow-step
+> Definition type: component
+> Definition description: My component definition.
+Please enter the location the template YAML file to build definition. Leave it empty to generate default template.
+> Definition template filename:
+Please enter the output location of the generated definition. Leave it empty to print definition to stdout.
+> Definition output filename: my-component.cue
+Definition written to my-component.cue
+```
+
+In addition, users can create definitions from existing YAML files. For example, if a user wants to create a ComponentDefinition which is designed to generate a deployment, and this deployment has already been created elsewhere, he/she can use the `--template-yaml` flag to complete the transformation. The YAML file is as below:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hello-world
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: hello-world
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: hello-world
+    spec:
+      containers:
+        - name: hello-world
+          image: somefive/hello-world
+          ports:
+            - name: http
+              containerPort: 80
+              protocol: TCP
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: hello-world-service
+spec:
+  selector:
+    app: hello-world
+  ports:
+    - name: http
+      protocol: TCP
+      port: 80
+      targetPort: 8080
+  type: LoadBalancer
+```
+
+Run `vela def init my-comp -t component --desc "My component." --template-yaml ./my-deployment.yaml` to get the CUE-format ComponentDefinition:
--template-yaml ./my-deployment.yaml` to get the CUE-format ComponentDefinition:
+
+```json
+"my-comp": {
+	annotations: {}
+	attributes: workload: definition: {
+		apiVersion: " apps/v1"
+		kind:       " Deployment"
+	}
+	description: "My component."
+	labels: {}
+	type: "component"
+}
+template: {
+	output: {
+		metadata: name: "hello-world"
+		spec: {
+			replicas: 1
+			selector: matchLabels: "app.kubernetes.io/name": "hello-world"
+			template: {
+				metadata: labels: "app.kubernetes.io/name": "hello-world"
+				spec: containers: [{
+					name:  "hello-world"
+					image: "somefive/hello-world"
+					ports: [{
+						name:          "http"
+						containerPort: 80
+						protocol:      "TCP"
+					}]
+				}]
+			}
+		}
+		apiVersion: "apps/v1"
+		kind:       "Deployment"
+	}
+	outputs: "hello-world-service": {
+		metadata: name: "hello-world-service"
+		spec: {
+			ports: [{
+				name:       "http"
+				protocol:   "TCP"
+				port:       80
+				targetPort: 8080
+			}]
+			selector: app: "hello-world"
+			type: "LoadBalancer"
+		}
+		apiVersion: "v1"
+		kind:       "Service"
+	}
+	parameter: {}
+}
+```
+
+Then the user can make further modifications based on the definition file above, like removing the extra leading spaces in **workload.definition**.
+
+## vet
+
+After initializing definition files, run `vela def vet my-comp.cue` to validate whether there are any syntax errors in the definition file. It can be used to detect simple errors such as missing brackets.
+
+```bash
+$ vela def vet my-comp.cue
+Validation succeed.
+```
+
+## render / apply
+
+After confirming the definition file has correct syntax, users can run `vela def apply my-comp.cue --namespace my-namespace` to apply this definition in the `my-namespace` namespace. If you want to check the transformed Kubernetes YAML file, `vela def apply my-comp.cue --dry-run` or `vela def render my-comp.cue -o my-comp.yaml` can achieve that.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: My component.
+  labels: {}
+  name: my-comp
+  namespace: vela-system
+spec:
+  schematic:
+    cue:
+      template: |
+        output: {
+        	metadata: name: "hello-world"
+        	spec: {
+        		replicas: 1
+        		selector: matchLabels: "app.kubernetes.io/name": "hello-world"
+        		template: {
+        			metadata: labels: "app.kubernetes.io/name": "hello-world"
+        			spec: containers: [{
+        				name:  "hello-world"
+        				image: "somefive/hello-world"
+        				ports: [{
+        					name:          "http"
+        					containerPort: 80
+        					protocol:      "TCP"
+        				}]
+        			}]
+        		}
+        	}
+        	apiVersion: "apps/v1"
+        	kind:       "Deployment"
+        }
+        outputs: "hello-world-service": {
+        	metadata: name: "hello-world-service"
+        	spec: {
+        		ports: [{
+        			name:       "http"
+        			protocol:   "TCP"
+        			port:       80
+        			targetPort: 8080
+        		}]
+        		selector: app: "hello-world"
+        		type: "LoadBalancer"
+        	}
+        	apiVersion: "v1"
+        	kind:       "Service"
+        }
+        parameter: {}
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+```
+
+```bash
+$ vela def apply my-comp.cue -n my-namespace
+ComponentDefinition my-comp created in namespace my-namespace.
+```
+
+## get / list / edit / del
+
+While you can use native kubectl tools to confirm the results of the apply command, as mentioned above, the YAML object mixed with a raw CUE template string is complex. Using `vela def get` will automatically convert the YAML object into the CUE-format definition.
+
+```bash
+$ vela def get my-comp -t component
+```
+
+Or you can list all definitions installed through `vela def list`:
+
+```bash
+$ vela def list -n my-namespace -t component
+NAME   	TYPE               	NAMESPACE   	DESCRIPTION
+my-comp	ComponentDefinition	my-namespace	My component.
+```
+
+Similarly, use `vela def edit` to edit definitions in pure CUE format. The transformation between the CUE-format definition and the YAML object is done by the command. Besides, you can specify the `EDITOR` environment variable to use your favourite editor.
+
+```bash
+$ EDITOR=vim vela def edit my-comp
+```
+
+Finally, `vela def del` can be utilized to delete existing definitions.
+
+```bash
+$ vela def del my-comp -n my-namespace
+Are you sure to delete the following definition in namespace my-namespace?
+ComponentDefinition my-comp: My component.
+[yes|no] > yes
+ComponentDefinition my-comp in namespace my-namespace deleted.
+```
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/examples/app-stateful.yaml b/versioned_docs/version-v1.1/platform-engineers/cue/examples/app-stateful.yaml
new file mode 100644
index 00000000..2561d032
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/examples/app-stateful.yaml
@@ -0,0 +1,18 @@
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: my-component
+      type: my-stateful
+      properties:
+        image: nginx:latest
+        replicas: 1
+        name: my-component
+      traits:
+        - type: sidecar
+          properties:
+            name: my-sidecar
+            image: saravak/fluentd:elastic
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.cue b/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.cue
new file mode 100644
index 00000000..09bfcd4c
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.cue
@@ -0,0 +1,70 @@
+"my-stateful": {
+	annotations: {}
+	attributes: workload: definition: {
+		apiVersion: "apps/v1"
+		kind:       "StatefulSet"
+	}
+	description: "My StatefulSet component."
+	labels: {}
+	type: "component"
+}
+
+template: {
+	output: {
+		apiVersion: "apps/v1"
+		kind:       "StatefulSet"
+		metadata: name: context.name
+		spec: {
+			selector: matchLabels: app: "nginx"
+			replicas:    parameter.replicas
+			serviceName: parameter.name
+			template: {
+				metadata: labels: app: "nginx"
+				spec: {
+					containers: [{
+						name: "nginx"
+						ports: [{
+							name:          "web"
+							containerPort: 80
+						}]
+						image: parameter.image
+						volumeMounts: [{
+							name:      "www"
+							mountPath: "/usr/share/nginx/html"
+						}]
+					}]
+					terminationGracePeriodSeconds: 10
+				}
+			}
+			volumeClaimTemplates: [{
+				metadata: name: "www"
+				spec: {
+					accessModes: ["ReadWriteOnce"]
+					resources: requests: storage: "1Gi"
+					storageClassName: "my-storage-class"
+				}
+			}]
+		}
+	}
+	outputs: web: {
+		apiVersion: "v1"
+		kind:       "Service"
+		metadata: {
+			name: parameter.name
+			labels: app: "nginx"
+		}
+		spec: {
+			clusterIP: "None"
+			ports: [{
+				name: "web"
+				port: 80
+			}]
+			selector: app: "nginx"
+		}
+	}
+	parameter: {
+		image:    string
+		name:     string
+		replicas: int
+	}
+}
diff --git a/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.yaml b/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.yaml
new file mode 100644
index 00000000..f8b2d56d
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/cue/examples/my-stateful.yaml
@@ -0,0 +1,49 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  ports:
+    - port: 80
+      name: web
+  clusterIP: None
+  selector:
+    app: nginx
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: web
+spec:
+  selector:
+    matchLabels:
+      app: nginx # has to match .spec.template.metadata.labels
+  serviceName: "nginx"
+  replicas: 3 # by default is 1
+  template:
+    metadata:
+      labels:
+        app: nginx # has to match .spec.selector.matchLabels
+    spec:
+      terminationGracePeriodSeconds: 10
+      containers:
+        - name: nginx
+          image: k8s.gcr.io/nginx-slim:0.8
+          ports:
+            - containerPort: 80
+              name: web
+          volumeMounts:
+            - name: www
+              mountPath: /usr/share/nginx/html
+  volumeClaimTemplates:
+    - metadata:
+        name: www
+      spec:
+        accessModes: [ "ReadWriteOnce" ]
+        storageClassName: "my-storage-class"
+        resources:
+          requests:
+            storage: 1Gi
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/debug/dry-run.md b/versioned_docs/version-v1.1/platform-engineers/debug/dry-run.md
new file mode 100644
index 00000000..0eb86be6
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/debug/dry-run.md
@@ -0,0 +1,107 @@
+---
+title: Dry Run
+---
+
+Dry run helps you understand which resources will actually be expanded and deployed
+to the Kubernetes cluster. In other words, it mocks running the same logic as KubeVela's controller
+and outputs the results locally.
+
+For example, let's dry-run the following application:
+
+```yaml
+# app.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: vela-app
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              "/": 8000
+```
+
+```shell
+vela dry-run -f app.yaml
+---
+# Application(vela-app) -- Component(express-server)
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: express-server
+    app.oam.dev/name: vela-app
+    workload.oam.dev/type: webservice
+spec:
+  selector:
+    matchLabels:
+      app.oam.dev/component: express-server
+  template:
+    metadata:
+      labels:
+        app.oam.dev/component: express-server
+    spec:
+      containers:
+        - image: crccheck/hello-world
+          name: express-server
+          ports:
+            - containerPort: 8000
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: express-server
+    app.oam.dev/name: vela-app
+    trait.oam.dev/resource: service
+    trait.oam.dev/type: ingress
+  name: express-server
+spec:
+  ports:
+    - port: 8000
+      targetPort: 8000
+  selector:
+    app.oam.dev/component: express-server
+
+---
+apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+  labels:
+    app.oam.dev/appRevision: ""
+    app.oam.dev/component: express-server
+    app.oam.dev/name: vela-app
+    trait.oam.dev/resource: ingress
+    trait.oam.dev/type: ingress
+  name: express-server
+spec:
+  rules:
+    - host: testsvc.example.com
+      http:
+        paths:
+          - backend:
+              serviceName: express-server
+              servicePort: 8000
+            path: /
+
+---
+```
+
+In this example, the definitions (`webservice` and `ingress`) which `vela-app` depends on are the built-in
+components and traits of KubeVela.
+
+You can also use `-d` or `--definitions` to provide capability definitions used in the application from local files.
+The `dry-run` command will prioritize the provided capabilities over the ones living in the cluster.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/definition-and-templates.md b/versioned_docs/version-v1.1/platform-engineers/definition-and-templates.md
new file mode 100644
index 00000000..e9167cd9
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/definition-and-templates.md
@@ -0,0 +1,323 @@
+---
+title: Definition Objects
+---
+
+This documentation explains `ComponentDefinition` and `TraitDefinition` in detail.
+
+## Overview
+
+Essentially, a definition object in KubeVela is a programmable building block. A definition object normally includes several pieces of information to model a certain platform capability that will be used in further application deployment:
+- **Capability Indicator**
+  - `ComponentDefinition` uses `spec.workload` to indicate the workload type of this component.
+  - `TraitDefinition` uses `spec.definitionRef` to indicate the provider of this trait.
+- **Interoperability Fields**
+  - they are for the platform to ensure a trait can work with a given workload type. Hence only `TraitDefinition` has these fields.
+- **Capability Encapsulation and Abstraction** defined by `spec.schematic`
+  - this defines the **templating and parameterizing** (i.e. encapsulation) of this capability.
+
+Hence, the basic structure of a definition object is as below:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: XxxDefinition
+metadata:
+  name: <definition name>
+spec:
+  ...
+  schematic:
+    cue:
+      # cue template ...
+    helm:
+      # Helm chart ...
+  # ... interoperability fields
+```
+
+Let's explain these fields one by one.
+
+### Capability Indicator
+
+In `ComponentDefinition`, the indicator of workload type is declared as `spec.workload`.
+
+Below is a definition for *Web Service* in KubeVela:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: webservice
+  namespace: default
+  annotations:
+    definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  ...
+```
+
+In the above example, it claims to leverage Kubernetes Deployment (`apiVersion: apps/v1`, `kind: Deployment`) as the workload type for the component.
+
+### Interoperability Fields
+
+The interoperability fields are **trait only**. An overall view of the interoperability fields in a `TraitDefinition` is shown below.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  name: ingress
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  conflictsWith:
+    - service
+  workloadRefPath: spec.workloadRef
+  podDisruptive: false
+```
+
+Let's explain them in detail.
+
+#### `.spec.appliesToWorkloads`
+
+This field defines constraints on what kinds of workloads this trait is allowed to apply to.
+- It accepts an array of string as value.
+- Each item in the array refers to one or a group of workload types to which this trait is allowed to apply.
+
+There are three approaches to denote one or a group of workload types.
+
+- `ComponentDefinition` definition reference (CRD name), e.g., `deployments.apps`
+- Resource group of `ComponentDefinition` definition reference prefixed with `*.`, e.g., `*.apps`, `*.oam.dev`. This means the trait is allowed to apply to any workloads in this group.
+- `*` means this trait is allowed to apply to any workloads
+
+If this field is omitted, it means this trait is allowed to apply to any workload types.
+
+KubeVela will raise an error if a trait is applied to a workload type which is NOT included in the `appliesToWorkloads`.
+
+
+#### `.spec.conflictsWith`
+
+This field defines constraints on what kinds of traits conflict with this trait, if they are applied to the same workload.
+- It accepts an array of string as value.
+- Each item in the array refers to one or a group of traits.
+
+There are three approaches to denote one or a group of traits.
+
+- `TraitDefinition` name, e.g., `ingress`
+- Resource group of `TraitDefinition` definition reference prefixed with `*.`, e.g., `*.networking.k8s.io`. This means the trait is conflicting with any traits in this group.
+- `*` means this trait is conflicting with any other trait.
+
+If this field is omitted, it means this trait is NOT conflicting with any traits.
+
+#### `.spec.workloadRefPath`
+
+This field defines the field path of the trait which is used to store the reference of the workload to which the trait is applied.
+- It accepts a string as value, e.g., `spec.workloadRef`.
+
+If this field is set, KubeVela core will automatically fill the workload reference into the target field of the trait. Then the trait controller can get the workload reference from the trait later.
So this field usually accompanies traits whose controllers rely on the workload reference at runtime.
+
+Please check the [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/scaler.yaml) trait as a demonstration of how to set this field.
+
+#### `.spec.podDisruptive`
+
+This field defines whether adding or updating the trait will disrupt the pod.
+In this example the answer is no, so the field is `false`: the pod will not be affected when the trait is added or updated.
+If the field is `true`, adding or updating the trait will disrupt and restart the pod.
+By default, the value is `false`, which means this trait will not affect the pod.
+Please take care with this field; it is really important and useful for serious large-scale production scenarios.
+
+### Capability Encapsulation and Abstraction
+
+The programmable template of a given capability is defined in the `spec.schematic` field. For example, below is the full definition of the *Web Service* type in KubeVela:
+
+ +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: webservice + namespace: default + annotations: + definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers." +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + + spec: { + containers: [{ + name: context.name + image: parameter.image + + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + + if parameter["env"] != _|_ { + env: parameter.env + } + + if context["config"] != _|_ { + env: context.config + } + + ports: [{ + containerPort: parameter.port + }] + + if parameter["cpu"] != _|_ { + resources: { + limits: + cpu: parameter.cpu + requests: + cpu: parameter.cpu + } + } + }] + } + } + } + } + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + + // +usage=Commands to run in the container + cmd?: [...string] + + // +usage=Which port do you want customer traffic sent to + // +short=p + port: *80 | int + // +usage=Define arguments by using environment variables + env?: [...{ + // +usage=Environment variable name + name: string + // +usage=The value of the environment variable + value?: string + // +usage=Specifies a source the value of this var should come from + valueFrom?: { + // +usage=Selects a key of a secret in the pod's namespace + secretKeyRef: { + // +usage=The name of the secret in the pod's namespace to select from + name: string + // +usage=The key of the secret to select from. Must be a valid secret key + key: string + } + } + }] + // +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) + cpu?: string + } +``` +
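+
+To illustrate how this template surfaces to end users, below is a hypothetical `Application` that instantiates the `webservice` definition above; every key under `properties` maps to a field in the `parameter` section of the template:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: website
+spec:
+  components:
+    - name: frontend
+      type: webservice
+      properties:
+        image: nginx:1.14.0 # required parameter `image`
+        port: 8080          # overrides the default value *80
+        cpu: "0.5"          # optional parameter `cpu`
+        env:
+          - name: GREETING
+            value: hello
+```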
+
+The specification of `schematic` is explained in the following CUE and Helm specific documentation.
+
+Also, the `schematic` field enables you to render UI forms directly based on it; please check the [Generate Forms from Definitions](openapi-v3-json-schema) section for how to do that.
+
+## Definition Revisions
+
+In KubeVela, definition entities are mutable. Each time a `ComponentDefinition` or `TraitDefinition` is updated, a corresponding `DefinitionRevision` will be generated to snapshot this change. Hence, KubeVela allows users to reference a specific revision of a definition to declare an application.
+
+For example, we can design a new parameter named `args` for the `webservice` component definition by applying a new definition with the same name as below.
+
+```shell
+kubectl vela show webservice
+```
+```console
+# Properties
++-------+----------------------------------------------------+----------+----------+---------+
+| NAME  | DESCRIPTION                                        | TYPE     | REQUIRED | DEFAULT |
++-------+----------------------------------------------------+----------+----------+---------+
+| cmd   | Commands to run in the container                   | []string | false    |         |
+... // skip
+```
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/definition-revision/webservice-v2.yaml
+```
+
+The change will take effect immediately.
+
+```shell
+kubectl vela show webservice
+```
+```console
+# Properties
++-------+----------------------------------------------------+----------+----------+---------+
+| NAME  | DESCRIPTION                                        | TYPE     | REQUIRED | DEFAULT |
++-------+----------------------------------------------------+----------+----------+---------+
+| cmd   | Commands to run in the container                   | []string | false    |         |
+| args  | Arguments to the cmd                               | []string | false    |         |
+... // skip
+```
+
+We will see that a new definition revision is automatically generated: `v2` is the latest version, `v1` is the previous one.
+
+```shell
+kubectl get definitionrevision -l="componentdefinition.oam.dev/name=webservice" -n vela-system
+```
+```console
+NAME            REVISION   HASH               TYPE
+webservice-v1   1          3f6886d9832021ba   Component
+webservice-v2   2          b3b9978e7164d973   Component
+```
+
+### Specify Definition Revision in Application
+
+Users can specify the revision with the `@version` approach. For example, if a user wants to stick to using the `v1` revision of the `webservice` component:
+
+```yaml
+# testapp.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: testapp
+spec:
+  components:
+    - name: server
+      type: webservice@v1
+      properties:
+        image: foo
+        cmd:
+          - sleep
+          - '1000'
+```
+If no revision is specified, KubeVela will always use the latest revision for a given component definition.
+
+```yaml
+# testapp.yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: testapp
+spec:
+  components:
+    - name: server
+      type: webservice # type: webservice@v2
+      properties:
+        image: foo
+        cmd:
+          - sleep
+          - '1000'
+        args:
+          - wait
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/helm/component.md b/versioned_docs/version-v1.1/platform-engineers/helm/component.md
new file mode 100644
index 00000000..f7cbfed9
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/helm/component.md
@@ -0,0 +1,88 @@
+---
+title: How-to
+---
+
+This section introduces how to declare Helm charts as components via `ComponentDefinition`.
+
+> Before reading this part, please make sure you've learned [the definition and template concepts](../definition-and-templates).
+
+## Prerequisite
+
+* Make sure you have enabled Helm support in the [installation guide](../../install#4-enable-helm-support).
+
+## Declare `ComponentDefinition`
+
+Here is an example `ComponentDefinition` that uses Helm as the schematic module.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: webapp-chart
+  annotations:
+    definition.oam.dev/description: helm chart for webapp
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  schematic:
+    helm:
+      release:
+        chart:
+          spec:
+            chart: "podinfo"
+            version: "5.1.4"
+      repository:
+        url: "http://oam.dev/catalog/"
+```
+
+In detail:
+- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check the [known limitations](known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
+- `.spec.schematic.helm` contains information about the Helm `release` and `repository`, which leverages `fluxcd/flux2`.
+  - i.e. the spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and the spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
+
+## Declare an `Application`
+
+Here is an example `Application`.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.2"
+```
+
+The component `properties` is exactly the [overlay values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml) of the Helm chart.
+
+Deploy the application, and after several minutes (it may take time to fetch the Helm chart), you can check that the Helm release is installed.
+```shell
+helm ls -A
+```
+```console
+myapp-demo-podinfo	default  	1       	2021-03-05 02:02:18.692317102 +0000 UTC	deployed	podinfo-5.1.4   	5.1.4
+```
+Check that the workload defined in the chart has been created successfully.
+```shell
+kubectl get deploy
+```
+```console
+NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
+myapp-demo-podinfo   1/1     1            1           66m
+```
+
+Check that the values (`image.tag = 5.1.2`) from the application's `properties` are assigned to the chart.
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
+```
+```console
+"ghcr.io/stefanprodan/podinfo:5.1.2"
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/helm/known-issues.md b/versioned_docs/version-v1.1/platform-engineers/helm/known-issues.md
new file mode 100644
index 00000000..99e27a3a
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/helm/known-issues.md
@@ -0,0 +1,82 @@
+---
+title: Known Limitations
+---
+
+## Limitations
+
+Here are some known limitations for using a Helm chart as an application component.
+
+### Workload Type Indicator
+
+Following the best practices of microservices, KubeVela recommends that only one workload resource be present in one Helm chart. Please split your "super" Helm chart into multiple charts (i.e. components). Essentially, KubeVela relies on the `workload` field in the component definition to indicate the workload type it needs to take care of, for example:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+...
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+```
+```yaml
+...
+spec:
+  workload:
+    definition:
+      apiVersion: apps.kruise.io/v1alpha1
+      kind: Cloneset
+```
+
+Note that KubeVela won't fail if multiple workload types are packaged in one chart; the issue is that further operational behaviors such as rollout, revisions, and traffic management can only take effect on the indicated workload type.
+
+### Always Use Full Qualified Name
+
+The name of the workload should be templated with the [fully qualified application name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415) and please do NOT assign any value to `.Values.fullnameOverride`. As a best practice, Helm also highly recommends that new charts be created via the `$ helm create` command so the template names are automatically defined as per this best practice.
+
+### Control the Application Upgrade
+
+Changes made to the component `properties` will trigger a Helm release upgrade. This process is handled by the Flux v2 Helm controller, hence you can define remediation
+strategies in the schematic based on the [Helm Release
+documentation](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
+and [specification](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)
+in case failure happens during this upgrade.
+
+For example:
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: webapp-chart
+spec:
+...
+  schematic:
+    helm:
+      release:
+        chart:
+          spec:
+            chart: "podinfo"
+            version: "5.1.4"
+        upgrade:
+          remediation:
+            retries: 3
+            remediationStrategy: rollback
+      repository:
+        url: "http://oam.dev/catalog/"
+
+```
+
+One issue, though: for now it's hard to get helpful information about a living Helm release to figure out what happened if an upgrade failed. We will enhance observability to help users track the status of a Helm release at the application level.
+
+## Issues
+
+The known issues will be fixed in the following releases.
+
+### Rollout Strategy
+
+For now, Helm based components cannot benefit from the [rolling update API](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait#update-application), if the application is updated, it can only be rolled out directly, without a canary or blue-green approach.
+
+### Updating Trait Properties May Also Lead to Pod Restarts
+
+Changes to trait properties may impact the component instance, and Pods belonging to this workload instance will restart. In CUE based components this is avoidable as KubeVela has full control over the rendering process of the resources, though in Helm based components it's currently deferred to the Flux v2 controller.
diff --git a/versioned_docs/version-v1.1/platform-engineers/helm/trait.md b/versioned_docs/version-v1.1/platform-engineers/helm/trait.md
new file mode 100644
index 00000000..8ee3610c
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/helm/trait.md
@@ -0,0 +1,169 @@
+---
+title: Attach Traits
+---
+
+Traits in KubeVela can be attached to Helm based components seamlessly.
+
+In the sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/scaler.yaml)
+and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml), to a Helm based component.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.2"
+      traits:
+        - type: scaler
+          properties:
+            replicas: 4
+        - type: virtualgroup
+          properties:
+            group: "my-group1"
+            type: "cluster"
+```
+
+> Note: when using traits with a Helm based component, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
+
+> This is because KubeVela relies on the name to discover the workload; otherwise it cannot apply traits to the workload. KubeVela will generate a release name based on your `Application` name and component name automatically, so you need to make sure you never override the fullname template in your Helm chart.
+
+## Verify traits work correctly
+
+> You may need to wait a few seconds for the traits to attach because of the reconciliation interval.
+
+Check that the `scaler` trait takes effect.
+```shell
+kubectl get manualscalertrait
+```
+```console
+NAME                            AGE
+demo-podinfo-scaler-d8f78c6fc   13m
+```
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
+```
+```console
+4
+```
+
+Check the `virtualgroup` trait.
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
+```
+```console
+{
+  "app.cluster.virtual.group": "my-group1",
+  "app.kubernetes.io/name": "myapp-demo-podinfo"
+}
+```
+
+## Update Application
+
+After the application is deployed and the workloads/traits are created successfully,
+you can update the application, and corresponding changes will be applied to the
+workload instances.
+
+Let's make several changes to the configuration of the sample application.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.3" # 5.1.2 => 5.1.3
+      traits:
+        - type: scaler
+          properties:
+            replicas: 2 # 4 => 2
+        - type: virtualgroup
+          properties:
+            group: "my-group2" # my-group1 => my-group2
+            type: "cluster"
+```
+
+Apply the new configuration and check the results after several minutes.
+
+Check that the new values (`image.tag = 5.1.3`) from the application's `properties` are assigned to the chart.
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
+```
+```console
+"ghcr.io/stefanprodan/podinfo:5.1.3"
+```
+Under the hood, Helm makes an upgrade to the release (revision 1 => 2).
+```shell
+helm ls -A
+```
+```console
+NAME              	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART        	APP VERSION
+myapp-demo-podinfo	default  	2       	2021-03-15 08:52:00.037690148 +0000 UTC	deployed	podinfo-5.1.4	5.1.4
+```
+
+Check the `scaler` trait.
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
+```
+```console
+2
+```
+
+Check the `virtualgroup` trait.
+```shell
+kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
+```
+```console
+{
+  "app.cluster.virtual.group": "my-group2",
+  "app.kubernetes.io/name": "myapp-demo-podinfo"
+}
+```
+
+## Detach Trait
+
+Let's try detaching a trait from the application.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: demo-podinfo
+      type: webapp-chart
+      properties:
+        image:
+          tag: "5.1.3"
+      traits:
+        # - type: scaler
+        #   properties:
+        #     replicas: 2
+        - type: virtualgroup
+          properties:
+            group: "my-group2"
+            type: "cluster"
+```
+
+Apply the application and check that the `manualscalertrait` has been deleted.
+```shell
+kubectl get manualscalertrait
+```
+```console
+No resources found
+```
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/initializer-platform-eng.md b/versioned_docs/version-v1.1/platform-engineers/initializer-platform-eng.md
new file mode 100644
index 00000000..0596b911
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/initializer-platform-eng.md
@@ -0,0 +1,3 @@
+---
+title: Platform Environment Initialization
+---
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/initializer/advanced-initializer.md b/versioned_docs/version-v1.1/platform-engineers/initializer/advanced-initializer.md
new file mode 100644
index 00000000..4479b2c7
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/initializer/advanced-initializer.md
@@ -0,0 +1,5 @@
+---
+title: Custom Initializer
+---
+
+TBD
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/initializer/basic-initializer.md b/versioned_docs/version-v1.1/platform-engineers/initializer/basic-initializer.md
new file mode 100644
index 00000000..49a0712b
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/initializer/basic-initializer.md
@@ -0,0 +1,5 @@
+---
+title: Introduction
+---
+
+TBD
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/keda.md b/versioned_docs/version-v1.1/platform-engineers/keda.md
new file mode 100644
index 00000000..747c797a
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/keda.md
@@ -0,0 +1,112 @@
+---
+title: KEDA as Autoscaling Trait
+---
+
+> Before continuing, make sure you have learned about the concepts of [Definition Objects](definition-and-templates) and the [Defining Traits with CUE](./traits/customize-trait) section.
+
+In the following tutorial, you will learn to add [KEDA](https://keda.sh/) as a new autoscaling trait to your KubeVela based platform.
+
+> KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container based on resource metrics or the number of events needing to be processed.
+
+## Step 1: Install KEDA controller
+
+[Install the KEDA controller](https://keda.sh/docs/2.2/deploy/) into your K8s system.
+
+## Step 2: Create Trait Definition
+
+To register KEDA as a new capability (i.e. trait) in KubeVela, the only thing needed is to create a `TraitDefinition` object for it.
+
+A full example can be found in this [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda-scaler.yaml).
+Several highlights are listed below.
+
+### 1. Describe The Trait
+
+```yaml
+...
+name: keda-scaler
+annotations:
+  definition.oam.dev/description: "keda supports multiple events to elastically scale applications, this scaler only applies to deployment as an example"
+...
+```
+
+We use the annotation `definition.oam.dev/description` to add a one-line description for this trait.
+It will be shown in helper commands such as `$ vela traits`.
+
+### 2. Register API Resource
+
+```yaml
+...
+spec:
+  definitionRef:
+    name: scaledobjects.keda.sh
+...
+```
+
+This is how you claim and register KEDA `ScaledObject`'s API resource (`scaledobjects.keda.sh`) as a trait definition.
+
+### 3. Define `appliesToWorkloads`
+
+A trait can be attached to specified workload types or all (i.e. `"*"` or omitted means your trait can work with any workload type).
+
+For the case of KEDA, we will only allow users to attach it to the Kubernetes Deployment workload type. So we claim it as below:
+
+```yaml
+...
+spec:
+  ...
+  appliesToWorkloads:
+    - "deployments.apps" # claim the KEDA based autoscaling trait can only attach to the Kubernetes Deployment workload type.
+...
+```
+
+### 4. Define Schematic
+
+In this step, we will define the schematic of the KEDA based autoscaling trait, i.e. we will create an abstraction for KEDA `ScaledObject` with simplified primitives, so end users of this platform don't need to know what KEDA is at all.
+
+
+```yaml
+...
+schematic:
+  cue:
+    template: |-
+      outputs: kedaScaler: {
+        apiVersion: "keda.sh/v1alpha1"
+        kind:       "ScaledObject"
+        metadata: {
+          name: context.name
+        }
+        spec: {
+          scaleTargetRef: {
+            name: context.name
+          }
+          triggers: [{
+            type: parameter.triggerType
+            metadata: {
+              type:  "Utilization"
+              value: parameter.value
+            }
+          }]
+        }
+      }
+      parameter: {
+        // +usage=Types of triggering application elastic scaling, Optional: cpu, memory
+        triggerType: string
+        // +usage=Value to trigger scaling actions, represented as a percentage of the requested value of the resource for the pods. like: "60"(60%)
+        value: string
+      }
+```
+
+This is a CUE based template which only exposes `triggerType` and `value` as trait properties for users to set.
+
+> Please check the [Defining Trait with CUE](./traits/customize-trait) section for more details regarding CUE templating.
+
+## Step 3: Register New Trait to KubeVela
+
+As long as the definition file is ready, you just need to apply it to Kubernetes.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/keda-scaler.yaml
+```
+
+And the new trait will immediately become available for end users to use in the `Application` resource.
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/kube/component.md b/versioned_docs/version-v1.1/platform-engineers/kube/component.md
new file mode 100644
index 00000000..b4ae16d3
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/kube/component.md
@@ -0,0 +1,95 @@
+---
+title: How-to
+---
+
+This section introduces how to use a simple template to declare a Kubernetes API resource as a component.
+
+> Before reading this part, please make sure you've learned [the definition and template concepts](../definition-and-templates).
+
+## Declare `ComponentDefinition`
+
+Here is a simple template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: kube-worker
+  namespace: default
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  schematic:
+    kube:
+      template:
+        apiVersion: apps/v1
+        kind: Deployment
+        spec:
+          selector:
+            matchLabels:
+              app: nginx
+          template:
+            metadata:
+              labels:
+                app: nginx
+            spec:
+              containers:
+                - name: nginx
+                  ports:
+                    - containerPort: 80
+      parameters:
+        - name: image
+          required: true
+          type: string
+          fieldPaths:
+            - "spec.template.spec.containers[0].image"
+```
+
+In detail, `.spec.schematic.kube` contains the template of a workload resource and
+configurable parameters.
+- `.spec.schematic.kube.template` is the simple template in YAML format.
+- `.spec.schematic.kube.parameters` contains a set of configurable parameters. The `name`, `type`, and `fieldPaths` are required fields; `description` and `required` are optional fields.
+  - The parameter `name` must be unique in a `ComponentDefinition`.
+  - `type` indicates the data type of the value set to the field. This is a required field which will help KubeVela to generate an OpenAPI JSON schema for the parameters automatically. In a simple template, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
+  - `fieldPaths` in the parameter specifies an array of fields within the template that will be overwritten by the value of this parameter. Fields are specified as JSON field paths without a leading dot, for example
+`spec.replicas`, `spec.containers[0].image`.
+
+## Declare an `Application`
+
+Here is an example `Application`.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: mycomp
+      type: kube-worker
+      properties:
+        image: nginx:1.14.0
+```
+
+Since parameters only support basic data types, values in `properties` should be simple key-value pairs, `<parameterName>: <value>`.
+
+Deploy the `Application` and verify the running workload instance.
+
+```shell
+kubectl get deploy
+```
+```console
+NAME     READY   UP-TO-DATE   AVAILABLE   AGE
+mycomp   1/1     1            1           66m
+```
+And check that the parameter works.
+```shell
+kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
+```
+```console
+"nginx:1.14.0"
+```
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/kube/trait.md b/versioned_docs/version-v1.1/platform-engineers/kube/trait.md
new file mode 100644
index 00000000..02313221
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/kube/trait.md
@@ -0,0 +1,123 @@
+---
+title: Attach Traits
+---
+
+All traits in the KubeVela system work well with the simple template based Component.
+
+In this sample, we will attach two traits,
+[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/scaler.yaml)
+and
+[virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml), to a component.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: mycomp
+      type: kube-worker
+      properties:
+        image: nginx:1.14.0
+      traits:
+        - type: scaler
+          properties:
+            replicas: 2
+        - type: virtualgroup
+          properties:
+            group: "my-group1"
+            type: "cluster"
+```
+
+## Verify
+
+Deploy the application and verify that the traits work.
+
+Check the `scaler` trait.
+```shell
+kubectl get manualscalertrait
+```
+```console
+NAME                            AGE
+demo-podinfo-scaler-3x1sfcd34   2m
+```
+```shell
+kubectl get deployment mycomp -o json | jq .spec.replicas
+```
+```console
+2
+```
+
+Check the `virtualgroup` trait.
+```shell
+kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
+```
+```console
+{
+  "app.cluster.virtual.group": "my-group1",
+  "app.kubernetes.io/name": "myapp"
+}
+```
+
+## Update an Application
+
+After the application is deployed and the workloads/traits are created successfully,
+you can update the application, and corresponding changes will be applied to the
+workload.
+
+Let's make several changes to the configuration of the sample application.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: myapp
+  namespace: default
+spec:
+  components:
+    - name: mycomp
+      type: kube-worker
+      properties:
+        image: nginx:1.14.1 # 1.14.0 => 1.14.1
+      traits:
+        - type: scaler
+          properties:
+            replicas: 4 # 2 => 4
+        - type: virtualgroup
+          properties:
+            group: "my-group2" # my-group1 => my-group2
+            type: "cluster"
+```
+
+Apply the new configuration and check the results after several seconds.
+
+> After updating, the workload instance name will be updated from `mycomp-v1` to `mycomp-v2`.
+
+Check the new property value.
+```shell
+kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
+```
+```console
+"nginx:1.14.1"
+```
+
+Check the `scaler` trait.
+```shell
+kubectl get deployment mycomp -o json | jq .spec.replicas
+```
+```console
+4
+```
+
+Check the `virtualgroup` trait.
+```shell
+kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
+```
+```console
+{
+  "app.cluster.virtual.group": "my-group2",
+  "app.kubernetes.io/name": "myapp"
+}
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/oam/oam-model.md b/versioned_docs/version-v1.1/platform-engineers/oam/oam-model.md
new file mode 100644
index 00000000..5d1e1141
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/oam/oam-model.md
@@ -0,0 +1,207 @@
+---
+title: Introduction
+---
+
+This documentation will explain the core resource model of KubeVela, which is fully powered by the Open Application Model (OAM).
+
+## Application
+
+The *Application* is the core API of KubeVela. It allows end users to work with a single artifact to capture the complete application deployment with simplified primitives.
+
+This provides a simpler path for on-boarding end users to the platform without leaking low-level details of the runtime infrastructure. For example, they will be able to declare a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim auto-scaling requirements without referring to the underlying KEDA ScaleObject.
They can also declare a cloud database with the same API if they want.
+
+Every application is composed of multiple components with attachable operational behaviors (traits). For example:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: application-sample
+spec:
+  components:
+    - name: foo
+      type: webservice
+      properties:
+        image: crccheck/hello-world
+        port: 8000
+      traits:
+        - type: ingress
+          properties:
+            domain: testsvc.example.com
+            http:
+              "/": 8000
+        - type: sidecar
+          properties:
+            name: "logging"
+            image: "fluentd"
+    - name: bar
+      type: aliyun-oss # cloud service
+      properties:
+        bucket: "my-bucket"
+```
+
+The `Application` resource in KubeVela is a LEGO-style entity and does not even have a fixed schema. Instead, it is a composition object assembled from several *programmable building blocks* that are maintained by the platform engineers, as shown below.
+
+## Component
+
+The component model (`ComponentDefinition` API) is designed to allow *component providers* to encapsulate deployable/provisionable entities with a wide range of tools, and hence give end users an easier path to deploy complicated microservices across hybrid environments with ease. A component normally carries its workload type description (i.e. `WorkloadDefinition`) and an encapsulation module with a parameter list.
+
+> Hence, a component provider could be anyone who packages software components in the form of Helm charts or CUE modules. Think of a 3rd-party software distributor, a DevOps team, or even your CI pipeline.
+
+Components are shareable and reusable. For example, by referencing the same *Alibaba Cloud RDS* component and setting different parameter values, end users could easily provision Alibaba Cloud RDS instances of different sizes in different availability zones.
+
+End users use the `Application` entity to declare how they want to instantiate and deploy a group of certain components. The example above describes an application composed of a Kubernetes stateless workload (component `foo`) and an Alibaba Cloud OSS bucket (component `bar`) alongside it.
+
+### How Does it Work?
+
+As an example, a component of `type: worker` means the specification of this component (claimed in its `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: worker
+  annotations:
+    definition.oam.dev/description: "Describes long-running, scalable, containerized services that run at the backend. They do NOT have a network endpoint to receive external network traffic."
+spec:
+  workload:
+    definition:
+      apiVersion: apps/v1
+      kind: Deployment
+  schematic:
+    cue:
+      template: |
+        output: {
+        	apiVersion: "apps/v1"
+        	kind:       "Deployment"
+        	spec: {
+        		selector: matchLabels: {
+        			"app.oam.dev/component": context.name
+        		}
+        		template: {
+        			metadata: labels: {
+        				"app.oam.dev/component": context.name
+        			}
+        			spec: {
+        				containers: [{
+        					name:  context.name
+        					image: parameter.image
+
+        					if parameter["cmd"] != _|_ {
+        						command: parameter.cmd
+        					}
+        				}]
+        			}
+        		}
+        	}
+        }
+        parameter: {
+        	image: string
+        	cmd?: [...string]
+        }
+```
+
+
+Hence, the `properties` section of a `worker` component only exposes two parameters to fill: `image` and `cmd`. This is enforced by the `parameter` list in the `.spec.schematic.cue.template` field of the definition.
+
+## Traits
+
+Traits (`TraitDefinition` API) are operational features provided by the platform.
A trait augments the component instance with operational behaviors such as load balancing policies, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
+
+To attach a trait to a component instance, the user declares the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
+
+As an example, `type: hpa` in a component's traits means the specification (i.e. `properties` section) of this trait will be enforced by a `TraitDefinition` object named `hpa` as below:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "configure k8s HPA for Deployment"
+  name: hpa
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  schematic:
+    cue:
+      template: |
+        outputs: hpa: {
+        	apiVersion: "autoscaling/v2beta2"
+        	kind:       "HorizontalPodAutoscaler"
+        	metadata: name: context.name
+        	spec: {
+        		scaleTargetRef: {
+        			apiVersion: "apps/v1"
+        			kind:       "Deployment"
+        			name:       context.name
+        		}
+        		minReplicas: parameter.min
+        		maxReplicas: parameter.max
+        		metrics: [{
+        			type: "Resource"
+        			resource: {
+        				name: "cpu"
+        				target: {
+        					type:               "Utilization"
+        					averageUtilization: parameter.cpuUtil
+        				}
+        			}
+        		}]
+        	}
+        }
+        parameter: {
+        	min:     *1 | int
+        	max:     *10 | int
+        	cpuUtil: *50 | int
+        }
+```
+
+The sample application above also has a `sidecar` trait.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "add sidecar to the app"
+  name: sidecar
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  schematic:
+    cue:
+      template: |-
+        patch: {
+        	// +patchKey=name
+        	spec: template: spec: containers: [parameter]
+        }
+        parameter: {
+        	name:  string
+        	image: string
+        	command?: [...string]
+        }
+```
+
+Please note that end users do NOT need to know about definition objects; they learn how to use a given capability with visualized forms (or the JSON schema of parameters if they prefer). Please check the [Generate Forms from Definitions](../openapi-v3-json-schema) section about how this is achieved.
+
+## Standard Contract Behind The Abstractions
+
+Once the application is deployed, KubeVela will automatically index and manage the underlying instances with name, revision, labels, selector, etc. These metadata are shown below.
+
+| Label | Description |
+| :--------------------------------------------------------: | :-------------------------------------------------: |
+| `workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
+| `trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
+| `app.oam.dev/name=<application name>` | The name of the application it belongs to |
+| `app.oam.dev/component=<component name>` | The name of the component it belongs to |
+| `trait.oam.dev/resource=<trait resource name>` | The name of the trait resource instance |
+| `app.oam.dev/appRevision=<application revision name>` | The name of the application revision it belongs to |
+
+
+Consider these metadata as a standard contract for any "day 2" operation controller, such as a rollout controller, to work on KubeVela deployed applications. This is also the key to ensuring interoperability for KubeVela based platforms.
+
+## No Configuration Drift
+
+Despite the efficiency and extensibility in abstracting application deployment, IaC (Infrastructure-as-Code) tools may lead to an issue called *Infrastructure/Configuration Drift*, i.e.
the generated component instances are not in line with the expected configuration. This could be caused by incomplete coverage, less-than-perfect processes or emergency changes. This makes them barely usable as platform-level building blocks.
+
+Hence, KubeVela is designed to maintain all these programmable capabilities with the [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) and leverage the Kubernetes control plane to eliminate the issue of configuration drift, while still keeping the flexibility and velocity enabled by IaC.
diff --git a/versioned_docs/version-v1.1/platform-engineers/oam/x-definition.md b/versioned_docs/version-v1.1/platform-engineers/oam/x-definition.md
new file mode 100644
index 00000000..0e7db1b1
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/oam/x-definition.md
@@ -0,0 +1,400 @@
+---
+title: X-Definition
+---
+
+In the OAM model, the [Application][1] used by end users consists of many declarative modules such as Component, Trait, Policy and Workflow. These types are actually shaped by the definitions behind them. The module definitions (X-Definitions) supported by the current OAM model include ComponentDefinition, TraitDefinition, PolicyDefinition, WorkflowStepDefinition, etc.
+
+## ComponentDefinition
+
+The design goal of ComponentDefinition is to allow platform administrators to encapsulate any type of deployable product into a "component" to be delivered. Once defined, this type of component can be referenced, instantiated and delivered by users in the `Application`.
+
+Common component types include Helm charts, Kustomize directories, sets of Kubernetes YAML files, container images, cloud resource IaC files, CUE configuration modules, etc. The component supplier corresponds to a real-world role, generally a third-party software distributor (ISV), a DevOps team engineer, or the CI system you built that generates code packages and images.
+
+ComponentDefinition can be shared and reused. For example, for an `Alibaba Cloud RDS` component type, end users can select the same `Alibaba Cloud RDS` component type in different applications and instantiate them into cloud database instances with different specifications and different parameter configurations.
+
+Let's take a look at the basic format of ComponentDefinition:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: <ComponentDefinition name>
+  annotations:
+    definition.oam.dev/description: <description>
+spec:
+  workload: # Workload description
+    definition:
+      apiVersion: <Kubernetes workload apiVersion>
+      kind: <Kubernetes workload kind>
+  schematic: # Component description
+    cue: # Details of components defined by the CUE language
+      template: <CUE template>
+```
+Here is a complete example of how to use ComponentDefinition.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: ComponentDefinition
+metadata:
+  name: helm
+  namespace: vela-system
+  annotations:
+    definition.oam.dev/description: "helm release is a group of K8s resources from either git repository or helm repo"
+spec:
+  workload:
+    type: autodetects.core.oam.dev
+  schematic:
+    cue:
+      template: |
+        output: {
+        	apiVersion: "source.toolkit.fluxcd.io/v1beta1"
+        	metadata: {
+        		name: context.name
+        	}
+        	if parameter.repoType == "git" {
+        		kind: "GitRepository"
+        		spec: {
+        			url: parameter.repoUrl
+        			ref: branch: parameter.branch
+        			interval: parameter.pullInterval
+        		}
+        	}
+        	if parameter.repoType == "helm" {
+        		kind: "HelmRepository"
+        		spec: {
+        			interval: parameter.pullInterval
+        			url:      parameter.repoUrl
+        			if parameter.secretRef != _|_ {
+        				secretRef: {
+        					name: parameter.secretRef
+        				}
+        			}
+        		}
+        	}
+        }
+
+        outputs: release: {
+        	apiVersion: "helm.toolkit.fluxcd.io/v2beta1"
+        	kind:       "HelmRelease"
+        	metadata: {
+        		name: context.name
+        	}
+        	spec: {
+        		interval: parameter.pullInterval
+        		chart: {
+        			spec: {
+        				chart:   parameter.chart
+        				version: parameter.version
+        				sourceRef: {
+        					if parameter.repoType == "git" {
+        						kind: "GitRepository"
+        					}
+        					if parameter.repoType == "helm" {
+        						kind: "HelmRepository"
+        					}
+        					name:      context.name
+        					namespace: context.namespace
+        				}
+        				interval: parameter.pullInterval
+        			}
+        		}
+        		if parameter.targetNamespace != _|_ {
+        			targetNamespace: parameter.targetNamespace
+        		}
+        		if parameter.values != _|_ {
+        			values: parameter.values
+        		}
+        	}
+        }
+
+        parameter: {
+        	repoType: "git" | "helm"
+        	// +usage=The Git or Helm repository URL, accept HTTP/S or SSH address as git url.
+        	repoUrl: string
+        	// +usage=The interval at which to check for repository and release updates.
+        	pullInterval: *"5m" | string
+        	// +usage=1. The relative path to the helm chart for git source. 2. The chart name for helm repo source.
+        	chart: string
+        	// +usage=Chart version
+        	version?: string
+        	// +usage=The Git reference to checkout and monitor for changes, defaults to master branch.
+        	branch: *"master" | string
+        	// +usage=The name of the secret containing authentication credentials for the Helm repository.
+        	secretRef?: string
+        	// +usage=The namespace for helm chart
+        	targetNamespace?: string
+        	// +usage=Chart values
+        	values?: #nestedmap
+        }
+
+        #nestedmap: {
+        	...
+        }
+```
+
+## TraitDefinition
+
+TraitDefinition provides a series of DevOps actions for the component that can be bound on demand. These operation and maintenance actions are usually provided by the platform administrator, such as adding load balancing strategies, routing strategies, or performing scaling and gray-release strategies, etc.
+
+The format and field functions of the TraitDefinition are as follows:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  name: <TraitDefinition name>
+  annotations:
+    definition.oam.dev/description: <description of this trait type>
+spec:
+  definition:
+    apiVersion: <apiVersion of the trait resource>
+    kind: <kind of the trait resource>
+  workloadRefPath: <path to the workload reference field in the trait>
+  podDisruptive: <whether a parameter update of this trait causes the pod to restart>
+  manageWorkload: <whether the workload is managed by this trait>
+  skipRevisionAffect: <whether this trait is excluded from revision change calculation>
+  appliesToWorkloads:
+    - <workload types this trait can be applied to>
+  conflictsWith:
+    - <other traits that conflict with this trait>
+  revisionEnabled: <whether this trait is aware of component revision changes>
+  schematic: # Abstraction
+    cue: # one of several supported abstraction methods
+      template: <CUE template>
+```
+
+Let's look at a practical example:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: TraitDefinition
+metadata:
+  annotations:
+    definition.oam.dev/description: "configure k8s Horizontal Pod Autoscaler for the Component which uses Deployment as workload"
+  name: hpa
+spec:
+  appliesToWorkloads:
+    - deployments.apps
+  workloadRefPath: spec.scaleTargetRef
+  schematic:
+    cue:
+      template: |
+        outputs: hpa: {
+          apiVersion: "autoscaling/v2beta2"
+          kind:       "HorizontalPodAutoscaler"
+          spec: {
+            minReplicas: parameter.min
+            maxReplicas: parameter.max
+            metrics: [{
+              type: "Resource"
+              resource: {
+                name: "cpu"
+                target: {
+                  type:               "Utilization"
+                  averageUtilization: parameter.cpuUtil
+                }
+              }
+            }]
+          }
+        }
+        parameter: {
+          min:     *1 | int
+          max:     *10 | int
+          cpuUtil: *50 | int
+        }
+```
+
+## PolicyDefinition
+
+PolicyDefinition is similar to TraitDefinition; the difference is that a TraitDefinition acts on a single component while a PolicyDefinition acts on the entire application as a whole (multiple components).
+
+It can provide global policies for applications, commonly including global security policies (such as RBAC permissions, auditing, and key management) and application insights (such as application SLO management), etc.
+
+The format is as follows:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: PolicyDefinition
+metadata:
+  name: <PolicyDefinition name>
+  annotations:
+    definition.oam.dev/description: <description of this policy type>
+spec:
+  schematic: # policy description
+    cue:
+      template: <CUE template>
+```
+A specific example is shown below:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: PolicyDefinition
+metadata:
+  name: env-binding
+  annotations:
+    definition.oam.dev/description:
+spec:
+  schematic:
+    cue:
+      template: |
+        output: {
+          apiVersion: "core.oam.dev/v1alpha1"
+          kind: "EnvBinding"
+          spec: {
+            engine: parameter.engine
+            appTemplate: {
+              apiVersion: "core.oam.dev/v1beta1"
+              kind: "Application"
+              metadata: {
+                name:      context.appName
+                namespace: context.namespace
+              }
+              spec: {
+                components: context.components
+              }
+            }
+            envs: parameter.envs
+          }
+        }
+
+        #Env: {
+          name: string
+          patch: components: [...{
+            name: string
+            type: string
+            properties: {...}
+          }]
+          placement: clusterSelector: {
+            labels?: [string]: string
+            name?: string
+          }
+        }
+
+        parameter: {
+          engine: *"ocm" | string
+          envs: [...#Env]
+        }
+```
+
+## WorkflowStepDefinition
+
+WorkflowStepDefinition is used to describe a series of steps that can be declared in the workflow, such as applying resources, checking status, outputting data, taking dependent inputs, calling external scripts, etc.
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkflowStepDefinition
+metadata:
+  name: <WorkflowStepDefinition name>
+  annotations:
+    definition.oam.dev/description: <description of this workflow step>
+spec:
+  schematic: # step description
+    cue:
+      template: <CUE template>
+```
+
+An actual WorkflowStepDefinition is as follows:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkflowStepDefinition
+metadata:
+  name: apply-component
+spec:
+  schematic:
+    cue:
+      template: |
+        import ("vela/op")
+        parameter: {
+          component: string
+        }
+
+        // load component from application
+        component: op.#Load & {
+          component: parameter.component
+        }
+
+        // apply workload to kubernetes cluster
+        apply: op.#ApplyComponent & {
+          component: parameter.component
+        }
+
+        // wait until workload.status equals "Running"
+        wait: op.#ConditionalWait & {
+          continue: apply.status.phase == "Running"
+        }
+```
+
+
+## WorkloadDefinition
+
+WorkloadDefinition is a system-level feature. It is not a field that users need to care about; rather, it is metadata checked, verified, and used by the OAM system itself.
+
+The format is as follows:
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: WorkloadDefinition
+metadata:
+  name: <WorkloadDefinition name>
+spec:
+  definitionRef:
+    name: <name of the Kubernetes resource, like deployments.apps>
+    version: <version of the Kubernetes resource>
+  podSpecPath: <path to the pod spec field in the workload>
+  childResourceKinds:
+    - apiVersion: <apiVersion of the child resource>
+      kind: <kind of the child resource>
+```
+
+In addition, other Kubernetes resource types that need to be introduced into the OAM model in the future will also be added as fields of the WorkloadDefinition.
+
+## The Standard Protocol Behind Abstraction
+
+Once the application is created, KubeVela will label the created resources with a series of labels, which include the version, name, type, etc. of the application. Through these standard protocols, application components, traits and policies can be coordinated. The specific metadata list is as follows:
+
+|           Label           |                    Description                      |
+| :-----------------------: | :-------------------------------------------------: |
+|  `workload.oam.dev/type`  |  Corresponds to the name of `ComponentDefinition`   |
+|   `trait.oam.dev/type`    |    Corresponds to the name of `TraitDefinition`     |
+|    `app.oam.dev/name`     |                  Application name                   |
+|  `app.oam.dev/component`  |                   Component name                    |
+| `trait.oam.dev/resource`  | The name of `outputs.<resource name>` in the trait  |
+| `app.oam.dev/appRevision` |              Application revision name              |
+
+## X-Definition Runtime Context
+
+In the X-Definitions, some runtime context information can be obtained through the `context` variable. The specific list is as follows, where the scope indicates which module definitions the context variable can be used in:
+
+| Context Variable | Description | Scope |
+| :--------------: | :---------: | :---: |
+| `context.appRevision` | The app version name corresponding to the current instance of the application | ComponentDefinition, TraitDefinition |
+| `context.appRevisionNum` | The app version number corresponding to the current instance of the application | ComponentDefinition, TraitDefinition |
+| `context.appName` | The app name corresponding to the current instance of the application | ComponentDefinition, TraitDefinition |
+| `context.name` | The component name in ComponentDefinition and TraitDefinition; the policy name in PolicyDefinition | ComponentDefinition, TraitDefinition, PolicyDefinition |
+| `context.namespace` | The namespace of the current instance of the application | ComponentDefinition, TraitDefinition |
+| `context.revision` | The version name of the current component instance | ComponentDefinition, TraitDefinition |
+| `context.parameter` | The parameters of the current component instance; they can also be obtained in the trait | TraitDefinition |
+| `context.output` | The object structure of the current component after instantiation | ComponentDefinition, TraitDefinition |
+| `context.outputs.<resourceName>` | The structures of the current component and trait after instantiation | ComponentDefinition, TraitDefinition |
+
+At the same time, in the Workflow system, the `context` acts on the application level, so its usage is very different from the above. We introduce it separately:
+
+| Context Variable | Description | Scope |
+| :--------------: | :---------: | :---: |
+| `context.name` | The name of the current instance of the application | WorkflowStepDefinition |
+| `context.namespace` | The namespace of the current instance of the application | WorkflowStepDefinition |
+| `context.labels` | The labels of the current instance of the application | WorkflowStepDefinition |
+| `context.annotations` | The annotations of the current instance of the application | WorkflowStepDefinition |
+
+
+Please note that all the X-Definition concepts introduced in this section only need to be understood by platform administrators when they want to extend the functions of KubeVela; end users do not need to be aware of these concepts at all.
+
+[1]: ./oam-model
+[2]: ../cue/basic
+[3]: ../kube/component
+[4]: ../traits/customize-trait
+[5]: ../traits/advanced.md
+[6]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
+[7]: ../cue/basic.md
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md b/versioned_docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md
new file mode 100644
index 00000000..3025b857
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/openapi-v3-json-schema.md
@@ -0,0 +1,89 @@
+---
+title: Generating UI Forms
+---
+
+For any capabilities installed via [Definition Objects](./definition-and-templates),
+KubeVela will automatically generate an OpenAPI v3 JSON schema based on its parameter list, and store it in a `ConfigMap` in the same `namespace` as the definition object.
+
+> The default KubeVela system `namespace` is `vela-system`; the built-in capabilities and schemas are stored there.
+
+
+## List Schema
+KubeVela supports generating different versions of a Component/Trait Definition.
+Thus, we use `ConfigMap`s to store the parameter information of the different versions of a Definition.
+This `ConfigMap` will have a common label `definition.oam.dev=schema`; the default `ConfigMap` without a version suffix points to the latest version,
+you can find it easily by:
+```shell
+kubectl get configmap -n vela-system -l definition.oam.dev=schema
+```
+```console
+NAME                   DATA   AGE
+schema-ingress         1      46m
+schema-scaler          1      50m
+schema-webservice      1      2m26s
+schema-webservice-v1   1      40s
+schema-worker          1      1m45s
+schema-worker-v1       1      55s
+schema-worker-v2       1      20s
+```
+For the sake of convenience, we also specify a unified label for the `ConfigMap`s which store the parameter information of the same Definition.
+We can list the `ConfigMap`s which store the parameters of the same Definition by specifying a label like `definition.oam.dev/name=definitionName`, where `definitionName` is the specific name of your component or trait.
+```shell
+kubectl get configmap -l definition.oam.dev/name=worker
+```
+```console
+NAME               DATA   AGE
+schema-worker      1      1m50s
+schema-worker-v1   1      1m
+schema-worker-v2   1      25s
+```
+
+The `ConfigMap` name is in the format of `schema-<definition name>`,
+and the data key is `openapi-v3-json-schema`.
+
+For example, we can use the following command to get the JSON schema of `webservice`.
+
+```shell
+kubectl get configmap schema-webservice -n vela-system -o yaml
+```
+```console
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: schema-webservice
+  namespace: vela-system
+data:
+  openapi-v3-json-schema: '{"properties":{"cmd":{"description":"Commands to run in
+    the container","items":{"type":"string"},"title":"cmd","type":"array"},"cpu":{"description":"Number
+    of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)","title":"cpu","type":"string"},"env":{"description":"Define
+    arguments by using environment variables","items":{"properties":{"name":{"description":"Environment
+    variable name","title":"name","type":"string"},"value":{"description":"The value
+    of the environment variable","title":"value","type":"string"},"valueFrom":{"description":"Specifies
+    a source the value of this var should come from","properties":{"secretKeyRef":{"description":"Selects
+    a key of a secret in the pod''s namespace","properties":{"key":{"description":"The
+    key of the secret to select from. Must be a valid secret key","title":"key","type":"string"},"name":{"description":"The
+    name of the secret in the pod''s namespace to select from","title":"name","type":"string"}},"required":["name","key"],"title":"secretKeyRef","type":"object"}},"required":["secretKeyRef"],"title":"valueFrom","type":"object"}},"required":["name"],"type":"object"},"title":"env","type":"array"},"image":{"description":"Which
+    image would you like to use for your service","title":"image","type":"string"},"port":{"default":80,"description":"Which
+    port do you want customer traffic sent to","title":"port","type":"integer"}},"required":["image","port"],"type":"object"}'
+```
+
+Specifically, this schema is generated based on the `parameter` section in the capability definition:
+
+* For CUE based definitions: the `parameter` is a keyword in the CUE template.
+* For Helm based definitions: the `parameter` is generated from `values.yaml` in the Helm chart.
+
+## Render Form
+
+You can render the above schema into a form with [form-render](https://github.com/alibaba/form-render) or [React JSON Schema Form](https://github.com/rjsf-team/react-jsonschema-form) and integrate it with your dashboard easily.
+
+Below is a form rendered with `form-render`:
+
+![](../resources/json-schema-render-example.jpg)
+
+### Helm Based Components
+
+If a Helm based component definition is installed in KubeVela, it will also generate an OpenAPI v3 JSON schema based on the [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in the `ConfigMap` following the convention above. If `values.schema.json` is not provided by the chart author, KubeVela will automatically generate an OpenAPI v3 JSON schema based on its `values.yaml` file.
+
+## What's Next
+
+It's by design that KubeVela supports multiple ways to define the schematic. Hence, we will explain the `.schematic` field in detail in the following guides.
diff --git a/versioned_docs/version-v1.1/platform-engineers/operations/observability.md b/versioned_docs/version-v1.1/platform-engineers/operations/observability.md
new file mode 100644
index 00000000..f6b6ef2a
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/operations/observability.md
@@ -0,0 +1,5 @@
+---
+title: Observability
+---
+
+Work in progress.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/policy/custom-policy.md b/versioned_docs/version-v1.1/platform-engineers/policy/custom-policy.md
new file mode 100644
index 00000000..6ec75d24
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/policy/custom-policy.md
@@ -0,0 +1,5 @@
+---
+title: Custom Policy
+---
+
+TBD
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md b/versioned_docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md
new file mode 100644
index 00000000..4446a0f2
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/system-operation/bootstrap-parameters.md
@@ -0,0 +1,45 @@
+---
+title: Bootstrap Parameters
+---
+
+The bootstrap parameters of the KubeVela controller are listed below.
+
+| Parameter Name | Type | Default Value | Description |
+| :------------: | :--: | :-----------: | ----------- |
+| use-webhook | bool | false | Use Admission Webhook |
+| webhook-cert-dir | string | /k8s-webhook-server/serving-certs | The directory of the Admission Webhook cert/secret |
+| webhook-port | int | 9443 | The port of the Admission Webhook |
+| metrics-addr | string | :8080 | The address of Prometheus metrics |
+| enable-leader-election | bool | false | Enable Leader Election for the Controller Manager and ensure that only one replica is active |
+| leader-election-namespace | string | "" | The namespace of the Leader Election ConfigMap |
+| log-file-path | string | "" | The log file path |
+| log-file-max-size | int | 1024 | The maximum size (MB) of log files |
+| log-debug | bool | false | Set the logging level to DEBUG, used in development environments |
+| application-revision-limit | int | 10 | The maximum number of application revisions to keep. When the number of revisions exceeds this number, older versions will be discarded |
+| definition-revision-limit | int | 20 | The maximum number of definition revisions to keep |
+| autogen-workload-definition | bool | true | Generate a WorkloadDefinition for each ComponentDefinition automatically |
+| health-addr | string | :9440 | The address of the health check |
+| apply-once-only | string | false | The Workload and Trait will not change after being applied; used in specific scenarios |
+| disable-caps | string | "" | Disable internal capabilities |
+| storage-driver | string | Local | The storage driver for applications |
+| informer-re-sync-interval | time | 1h | The resync period for the controller informer, also the time for an application to be reconciled when no spec changes were made |
+| system-definition-namespace | string | vela-system | The namespace for storing system definitions |
+| concurrent-reconciles | int | 4 | The number of threads that the controller uses to process requests |
+| kube-api-qps | int | 50 | The QPS for the controller to access the apiserver |
+| kube-api-burst | int | 100 | The burst for the controller to access the apiserver |
+| oam-spec-var | string | v0.3 | The version of the OAM spec to use |
+| pprof-addr | string | "" | The address of pprof, defaults to empty to disable pprof |
+| perf-enabled | bool | false | Enable performance logging, working with monitoring tools like Loki and Grafana to discover performance bottlenecks |
+| enable-cluster-gateway | bool | false | Enable the multi-cluster feature |
+
+> Other parameters not listed in the table are old parameters used in previous versions; the latest version (v1.1) does not use them.
+
+## Key Parameters
+
+- **informer-re-sync-interval**: The resync time of applications when no changes were made. A short interval may cause the controller to reconcile frequently but uselessly. The regular reconciliation of applications helps ensure that an application and its components stay up-to-date in case of unexpected differences.
+- **concurrent-reconciles**: The number of threads the controller uses to handle requests. When rich CPU resources are available, a small number of working threads may lead to insufficient usage of CPU resources.
+- **kube-api-qps / kube-api-burst**: The rate limit for the KubeVela controller to access the apiserver. When managed applications are complex (containing multiple components and resources), if the access rate to the apiserver is limited, it will be hard to increase the concurrency of the KubeVela controller. However, a high access rate may place a huge burden on the apiserver. It is critical to keep a balance when handling massive numbers of applications.
+- **pprof-addr**: The pprof address to enable controller performance debugging.
+- **perf-enabled**: Use this flag if you would like to check time costs for different stages when reconciling applications. Switch it off to simplify logging.
+
+> Several sets of recommended parameter configurations are provided in the section [Performance Fine-tuning](./performance-finetuning).
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md b/versioned_docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md
new file mode 100644
index 00000000..f1a8567a
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/system-operation/managing-clusters.md
@@ -0,0 +1,39 @@
+---
+title: Managing Clusters
+---
+
+Users can manage clusters in KubeVela through a set of Vela CLI commands.
+
+### vela cluster list
+
+This command lists all clusters currently managed by KubeVela.
+```bash
+$ vela cluster list
+CLUSTER         TYPE    ENDPOINT
+cluster-prod    tls     https://47.88.4.97:6443
+cluster-staging tls     https://47.88.7.230:6443
+```
+
+### vela cluster join
+
+This command joins a new cluster into KubeVela and names it `cluster-prod`. The joined cluster can be used in [Multi-Environment Deployment](../../end-user/policies/envbinding).
+
+```shell script
+$ vela cluster join example-cluster.kubeconfig --name cluster-prod
+```
+
+### vela cluster detach
+
+This command detaches a cluster from KubeVela.
+
+```shell script
+$ vela cluster detach cluster-prod
+```
+
+### vela cluster rename
+
+This command renames a cluster managed by KubeVela.
+
+```shell script
+$ vela cluster rename cluster-prod cluster-production
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/system-operation/observability.md b/versioned_docs/version-v1.1/platform-engineers/system-operation/observability.md
new file mode 100644
index 00000000..0114e766
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/system-operation/observability.md
@@ -0,0 +1,160 @@
+---
+title: Observability
+---
+
+The Observability addon provides system-level monitoring for KubeVela core and business-level monitoring for applications,
+based on metrics, logging, and tracing data.
+
+The following describes the observability capabilities in detail, and how to enable the observability addon and view the various
+monitoring data.
+
+## Introduction to Observable Capabilities
+
+KubeVela's observable capabilities are demonstrated through [Grafana](https://grafana.com/) and provide system-level and
+application-level data monitoring.
+
+### Built-in metric category I: KubeVela Core system-level observability
+
+- KubeVela Core resource usage monitoring
+
+1) CPU, memory, and other usage and utilization data
+
+![](../../resources/observability-system-level-summary-of-source-usages.png)
+
+2) Graphical representation of CPU and memory usage and utilization over time (e.g. the last three hours), and network bandwidth per second
+
+![](../../resources/observability-system-level-summary-of-source-usages-chart.png)
+
+### Built-in metric category II: KubeVela Core log monitoring
+
+1) Log statistics
+
+The observability page displays the total number of KubeVela Core logs, as well as the number of `error` occurrences, their frequency,
+an overview of all logs that occur, and details by default.
+
+![](../../resources/observability-system-level-logging-statistics.png)
+
+It also shows the total number and frequency of `error` log occurrences over time.
+
+![](../../resources/observability-system-level-logging-statistics2.png)
+
+2) Logging filter
+
+You can also filter the logs by entering keywords at the top.
+
+![](../../resources/observability-system-level-logging-search.png)
+
+## Installing the addon
+
+The observability addon is installed with the `vela addon` command. Because this addon relies on Prometheus,
+Prometheus relies on a StorageClass, and the StorageClass varies across Kubernetes distributions, there are some
+differences in the installation command between Kubernetes distributions.
+
+### Self-built/regular Kubernetes clusters
+
+Execute the following command to install the observability addon. The steps are the same for similar clusters, like KinD.
+
+```shell
+$ vela addon enable observability alertmanager-pvc-enabled=false server-pvc-enabled=false grafana-domain=example.com
+```
+
+### Kubernetes clusters provided by cloud providers
+
+#### Alibaba Cloud ACK
+
+```shell
+$ vela addon enable observability alertmanager-pvc-class=alicloud-disk-available alertmanager-pvc-size=20Gi server-pvc-class=alicloud-disk-available server-pvc-size=20Gi grafana-domain=grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com
+```
+
+The meaning of each parameter is as follows.
+
+- alertmanager-pvc-class
+
+The type of PVC required by the Prometheus alert manager, which is the StorageClass. On Alibaba Cloud, pick one from the StorageClass list:
+
+```shell
+$ kubectl get storageclass
+NAME                       PROVISIONER     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+alicloud-disk-available    alicloud/disk   Delete          Immediate           true                   6d
+alicloud-disk-efficiency   alicloud/disk   Delete          Immediate           true                   6d
+alicloud-disk-essd         alicloud/disk   Delete          Immediate           true                   6d
+alicloud-disk-ssd          alicloud/disk   Delete          Immediate           true                   6d
+```
+
+We set the value to `alicloud-disk-available`.
+
+- alertmanager-pvc-size
+
+The size of the PVC needed by the Prometheus alert manager. On Alibaba Cloud, the minimum PV size is 20GB, so it takes the value 20Gi here.
+
+- server-pvc-class
+
+The type of PVC required by the Prometheus server, same as `alertmanager-pvc-class`.
+
+- server-pvc-size
+
+The size of the PVC required by the Prometheus server, same as `alertmanager-pvc-size`.
+
+- grafana-domain
+
+The domain name of Grafana. You can use either your custom domain name, or the cluster-level wildcard domain provided by ACK,
+`*.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`. You can set the value to `grafana.c276f4dac730c47b8b8988905e3c68fcf.cn-hongkong.alicontainer.com`.
+
+#### Kubernetes clusters offered by other cloud providers
+
+Please change the following parameters according to the name and size specifications of the PVCs provided by the
+cloud provider's Kubernetes clusters, and its domain rules.
+
+- alertmanager-pvc-class
+- alertmanager-pvc-size
+- server-pvc-class
+- server-pvc-size
+- grafana-domain
+
+## View monitoring data
+
+### Get an account for the monitoring dashboard
+
+```shell
+$ kubectl get secret grafana -o jsonpath="{.data.admin-password}" -n observability | base64 --decode ; echo
+```
+
+Use the username `admin` and the password above to log in to the monitoring dashboard below.
+
+### Get the monitoring URL
+
+- Self-built/regular clusters
+
+```shell
+$ kubectl get svc grafana -n vela-system
+NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
+grafana   ClusterIP   192.168.42.243   <none>        80/TCP    177m
+
+$ sudo kubectl port-forward service/grafana -n vela-system 80:80
+Password:
+Forwarding from 127.0.0.1:80 -> 3000
+Forwarding from [::1]:80 -> 3000
+```
+
+Visit [http://127.0.0.1/dashboards](http://127.0.0.1/dashboards) and click on the corresponding Dashboard to view the
+various monitoring data introduced earlier.
+
+![](../../resources/observability-system-level-dashboards.png)
+
+- Kubernetes clusters provided by cloud providers
+
+Access the Grafana domain set up above directly to view the various monitoring data described earlier.
+
+### View monitoring data for various categories
+
+On the Grafana home page, click on the console as shown to access the monitoring data for the appropriate category.
+
+The KubeVela Core System Monitoring Dashboard is the KubeVela Core system-level monitoring console.
+The KubeVela Core Logging Dashboard is the KubeVela Core logging monitoring console.
+
+![](../../resources/observability-dashboards.png)
+
diff --git a/versioned_docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md b/versioned_docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md
new file mode 100644
index 00000000..375a7db4
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/system-operation/performance-finetuning.md
@@ -0,0 +1,28 @@
+---
+title: Performance Fine-tuning
+---
+
+### Recommended Configurations
+
+When the cluster scale becomes large and more applications need to be managed, the KubeVela controller may hit performance bottlenecks due to inappropriate parameters.
+
+According to the KubeVela performance tests, three sets of parameters are recommended for clusters of different scales as below.
+
+| Scale  | #Nodes  | #Apps    | #Pods    | concurrent-reconciles | kube-api-qps | kube-api-burst | CPU  | Memory |
+| :----: | ------: | -------: | -------: | --------------------: | :----------: | -------------: | ---: | -----: |
+| Small  | < 200   | < 3,000  | < 18,000 | 2                     | 300          | 500            | 0.5  | 1Gi    |
+| Medium | < 500   | < 5,000  | < 30,000 | 4                     | 500          | 800            | 1    | 2Gi    |
+| Large  | < 1,000 | < 12,000 | < 72,000 | 4                     | 800          | 1,000          | 2    | 4Gi    |
+
+> The above configurations are based on medium-size applications (each application contains 2~3 components and 5~6 resources). If the applications in your scenario are generally larger, e.g., containing 20 resources, then you could reduce the application number accordingly to find the appropriate configuration and parameters.
+
+### Fine-tuning Methods
+
+You might encounter various performance bottlenecks. Read the following examples and try to find the proper solution for your problem.
+
+1. Applications can be created, and their managed resources are available, but indirect resources are not. For example, Deployments in a webservice are successfully created but Pods are not. Check kube-controller-manager and see if there are performance bottleneck problems with it.
+2. Applications can be created, but their managed resources are not available, and there is no rendering problem with the application. Check if the apiserver has lots of requests waiting in queue. The mutating requests for managed resources might be blocked at the apiserver.
+3. Applications can be found in the cluster, but no status information is displayed. If there is no problem with the application content, it might be caused by a KubeVela controller bottleneck, such as rate-limited requests to the apiserver. Increase **kube-api-qps / kube-api-burst** and check if the CPU is overloaded. If the CPU is not overloaded, check if the thread number is below the number of CPU cores.
+4. The KubeVela controller itself could crash frequently due to Out-Of-Memory. Increase the memory to solve it.
+
+> Read more details in the [KubeVela Performance Test Report](/blog/2021/08/30/kubevela-performance-test)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/traits/advanced.md b/versioned_docs/version-v1.1/platform-engineers/traits/advanced.md
new file mode 100644
index 00000000..90e2b57b
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/traits/advanced.md
@@ -0,0 +1,245 @@
+---
+title: Advanced Features
+---
+
+As a Data Configuration Language, CUE allows you to do some advanced templating magic in definition objects.
+ +## Render Multiple Resources With a Loop + +You can define the for-loop inside the `outputs`. + +> Note that in this case the type of `parameter` field used in the for-loop must be a map. + +Below is an example that will render multiple Kubernetes Services in one trait: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: expose +spec: + schematic: + cue: + template: | + parameter: { + http: [string]: int + } + + outputs: { + for k, v in parameter.http { + "\(k)": { + apiVersion: "v1" + kind: "Service" + spec: { + selector: + app: context.name + ports: [{ + port: v + targetPort: v + }] + } + } + } + } +``` + +The usage of this trait could be: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + ... + traits: + - type: expose + properties: + http: + myservice1: 8080 + myservice2: 8081 +``` + +## Execute HTTP Request in Trait Definition + +The trait definition can send a HTTP request and capture the response to help you rendering the resource with keyword `processing`. + +You can define HTTP request `method`, `url`, `body`, `header` and `trailer` in the `processing.http` section, and the returned data will be stored in `processing.output`. + +> Please ensure the target HTTP server returns a **JSON data**. `output`. + +Then you can reference the returned data from `processing.output` in `patch` or `output/outputs`. + +Below is an example: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: auth-service +spec: + schematic: + cue: + template: | + parameter: { + serviceURL: string + } + + processing: { + output: { + token?: string + } + // The target server will return a JSON data with `token` as key. + http: { + method: *"GET" | string + url: parameter.serviceURL + request: { + body?: bytes + header: {} + trailer: {} + } + } + } + + patch: { + data: token: processing.output.token + } +``` + +In above example, this trait definition will send request to get the `token` data, and then patch the data to given component instance. + +## Data Passing + +A trait definition can read the generated API resources (rendered from `output` and `outputs`) of given component definition. + +> KubeVela will ensure the component definitions are always rendered before traits definitions. + +Specifically, the `context.output` contains the rendered workload API resource (whose GVK is indicated by `spec.workload`in component definition), and use `context.outputs.` to contain all the other rendered API resources. 
+ +Below is an example for data passing: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +metadata: + name: worker +spec: + workload: + definition: + apiVersion: apps/v1 + kind: Deployment + schematic: + cue: + template: | + output: { + apiVersion: "apps/v1" + kind: "Deployment" + spec: { + selector: matchLabels: { + "app.oam.dev/component": context.name + } + + template: { + metadata: labels: { + "app.oam.dev/component": context.name + } + spec: { + containers: [{ + name: context.name + image: parameter.image + ports: [{containerPort: parameter.port}] + envFrom: [{ + configMapRef: name: context.name + "game-config" + }] + if parameter["cmd"] != _|_ { + command: parameter.cmd + } + }] + } + } + } + } + + outputs: gameconfig: { + apiVersion: "v1" + kind: "ConfigMap" + metadata: { + name: context.name + "game-config" + } + data: { + enemies: parameter.enemies + lives: parameter.lives + } + } + + parameter: { + // +usage=Which image would you like to use for your service + // +short=i + image: string + // +usage=Commands to run in the container + cmd?: [...string] + lives: string + enemies: string + port: int + } + + +--- +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: ingress +spec: + schematic: + cue: + template: | + parameter: { + domain: string + path: string + exposePort: int + } + // trait template can have multiple outputs in one trait + outputs: service: { + apiVersion: "v1" + kind: "Service" + spec: { + selector: + app: context.name + ports: [{ + port: parameter.exposePort + targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort + }] + } + } + outputs: ingress: { + apiVersion: "networking.k8s.io/v1beta1" + kind: "Ingress" + metadata: + name: context.name + labels: config: context.outputs.gameconfig.data.enemies + spec: { + rules: [{ + host: parameter.domain + http: { + paths: [{ + path: parameter.path + backend: { + serviceName: context.name + servicePort: parameter.exposePort + } + }] + } + }] + } + } +``` + +In detail, during rendering `worker` `ComponentDefinition`: +1. the rendered Kubernetes Deployment resource will be stored in the `context.output`, +2. all other rendered resources will be stored in `context.outputs.`, with `` is the unique name in every `template.outputs`. + +Thus, in `TraitDefinition`, it can read the rendered API resources (e.g. `context.outputs.gameconfig.data.enemies`) from the `context`. diff --git a/versioned_docs/version-v1.1/platform-engineers/traits/customize-trait.md b/versioned_docs/version-v1.1/platform-engineers/traits/customize-trait.md new file mode 100644 index 00000000..fd2ede93 --- /dev/null +++ b/versioned_docs/version-v1.1/platform-engineers/traits/customize-trait.md @@ -0,0 +1,146 @@ +--- +title: How-to +--- + +In this section we will introduce how to define a trait. + +## Simple Trait + +A trait in KubeVela can be defined by simply reference a existing Kubernetes API resource. 
+ +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: ingress +spec: + definitionRef: + name: ingresses.networking.k8s.io +``` + +Let's attach this trait to a component instance in `Application`: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:v1 + port: 8080 + traits: + - type: ingress + properties: + rules: + - http: + paths: + - path: /testpath + pathType: Prefix + backend: + service: + name: test + port: + number: 80 +``` + +Note that in this case, all fields in the referenced resource's `spec` will be exposed to end user and no metadata (e.g. `annotations` etc) are allowed to be set trait properties. Hence this approach is normally used when you want to bring your own CRD and controller as a trait, and it dose not rely on `annotations` etc as tuning knobs. + +## Using CUE as Trait Schematic + +The recommended approach is defining a CUE based schematic for trait as well. In this case, it comes with abstraction and you have full flexibility to templating any resources and fields as you want. Note that KubeVela requires all traits MUST be defined in `outputs` section (not `output`) in CUE template with format as below: + +```cue +outputs: : + +``` + +Below is an example for `ingress` trait. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + name: ingress +spec: + podDisruptive: false + schematic: + cue: + template: | + parameter: { + domain: string + http: [string]: int + } + + // trait template can have multiple outputs in one trait + outputs: service: { + apiVersion: "v1" + kind: "Service" + spec: { + selector: + app: context.name + ports: [ + for k, v in parameter.http { + port: v + targetPort: v + }, + ] + } + } + + outputs: ingress: { + apiVersion: "networking.k8s.io/v1beta1" + kind: "Ingress" + metadata: + name: context.name + spec: { + rules: [{ + host: parameter.domain + http: { + paths: [ + for k, v in parameter.http { + path: k + backend: { + serviceName: context.name + servicePort: v + } + }, + ] + } + }] + } + } +``` + +Let's attach this trait to a component instance in `Application`: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + cmd: + - node + - server.js + image: oamdev/testapp:v1 + port: 8080 + traits: + - type: ingress + properties: + domain: test.my.domain + http: + "/api": 8080 +``` + +CUE based trait definitions can also enable many other advanced scenarios such as patching and data passing. They will be explained in detail in the following documentations. \ No newline at end of file diff --git a/versioned_docs/version-v1.1/platform-engineers/traits/patch-trait.md b/versioned_docs/version-v1.1/platform-engineers/traits/patch-trait.md new file mode 100644 index 00000000..69903f32 --- /dev/null +++ b/versioned_docs/version-v1.1/platform-engineers/traits/patch-trait.md @@ -0,0 +1,509 @@ +--- +title: Patch Traits +--- + +**Patch** is a very common pattern of trait definitions, i.e. the app operators can amend/patch attributes to the component instance (normally the workload) to enable certain operational features such as sidecar or node affinity rules (and this should be done **before** the resources applied to target cluster). 
+ +This pattern is extremely useful when the component definition is provided by third-party component provider (e.g. software distributor) so app operators do not have privilege to change its template. + +> Note that even patch trait itself is defined by CUE, it can patch any component regardless how its schematic is defined (i.e. CUE, Helm, and any other supported schematic approaches). + +Below is an example for `node-affinity` trait: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "affinity specify node affinity and toleration" + name: node-affinity +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + if parameter.affinity != _|_ { + affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{ + matchExpressions: [ + for k, v in parameter.affinity { + key: k + operator: "In" + values: v + }, + ]}] + } + if parameter.tolerations != _|_ { + tolerations: [ + for k, v in parameter.tolerations { + effect: "NoSchedule" + key: k + operator: "Equal" + value: v + }] + } + } + } + + parameter: { + affinity?: [string]: [...string] + tolerations?: [string]: string + } +``` + +The patch trait above assumes the target component instance have `spec.template.spec.affinity` field. +Hence, we need to use `appliesToWorkloads` to enforce the trait only applies to those workload types have this field. + +Another important field is `podDisruptive`, this patch trait will patch to the pod template field, +so changes on any field of this trait will cause the pod to restart, We should add `podDisruptive` and make it to be true +to tell users that applying this trait will cause the pod to restart. + + +Now the users could declare they want to add node affinity rules to the component instance as below: + +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + image: oamdev/testapp:v1 + traits: + - type: "node-affinity" + properties: + affinity: + server-owner: ["owner1","owner2"] + resource-pool: ["pool1","pool2","pool3"] + tolerations: + resource-pool: "broken-pool1" + server-owner: "old-owner" +``` + +### Known Limitations + +By default, patch trait in KubeVela leverages the CUE `merge` operation. It has following known constraints though: + +- Can not handle conflicts. + - For example, if a component instance already been set with value `replicas=5`, then any patch trait to patch `replicas` field will fail, a.k.a you should not expose `replicas` field in its component definition schematic. +- Array list in the patch will be merged following the order of index. It can not handle the duplication of the array list members. This could be fixed by another feature below. + +### Strategy Patch + +Strategy Patch is effective by adding annotation, and supports the following two ways + +> Note that this is not a standard CUE feature, KubeVela enhanced CUE in this case. + +#### 1. With `+patchKey=` annotation + +This is useful for patching array list, merging logic of two array lists will not follow the CUE behavior. Instead, it will treat the list as object and use a strategy merge approach: + - if a duplicated key is found, the patch data will be merge with the existing values; + - if no duplication found, the patch will append into the array list. 
+ +The example of strategy patch trait with 'patchKey' will like below: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "add sidecar to the app" + name: sidecar +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + // +patchKey=name + spec: template: spec: containers: [parameter] + } + parameter: { + name: string + image: string + command?: [...string] + } +``` + +In above example we defined `patchKey` is `name` which is the parameter key of container name. In this case, if the workload don't have the container with same name, it will be a sidecar container append into the `spec.template.spec.containers` array list. If the workload already has a container with the same name of this `sidecar` trait, then merge operation will happen instead of append (which leads to duplicated containers). + +If `patch` and `outputs` both exist in one trait definition, the `patch` operation will be handled first and then render the `outputs`. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "expose the app" + name: expose +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: {spec: template: metadata: labels: app: context.name} + outputs: service: { + apiVersion: "v1" + kind: "Service" + metadata: name: context.name + spec: { + selector: app: context.name + ports: [ + for k, v in parameter.http { + port: v + targetPort: v + }, + ] + } + } + parameter: { + http: [string]: int + } +``` + +So the above trait which attaches a Service to given component instance will patch an corresponding label to the workload first and then render the Service resource based on template in `outputs`. + +#### 2. With `+patchStrategy=retainkeys` annotation + +Similar to strategy [retainkeys](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy) in K8s strategic merge patch + +In some scenarios that the entire object needs to be replaced, retainkeys strategy is the best choice. the example as follows: + +Assume the Deployment is the base resource +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: retainkeys-demo +spec: + selector: + matchLabels: + app: nginx + strategy: + type: rollingUpdate + rollingUpdate: + maxSurge: 30% + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: retainkeys-demo-ctr + image: nginx +``` +Now want to replace rollingUpdate strategy with a new strategy, you can write the patch trait like below + +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: TraitDefinition +metadata: + name: recreate +spec: + appliesToWorkloads: + - deployments.apps + extension: + template: |- + patch: { + spec: { + // +patchStrategy=retainKeys + strategy: type: "Recreate" + } + } +``` +Then the base resource becomes as follows + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: retainkeys-demo +spec: + selector: + matchLabels: + app: nginx + strategy: + type: Recreate + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: retainkeys-demo-ctr + image: nginx +``` +## More Use Cases of Patch Trait + +Patch trait is in general pretty useful to separate operational concerns from the component definition, here are some more examples. 
+ +### Add Labels + +For example, patch common label (virtual group) to the component instance. + +```yaml +apiVersion: core.oam.dev/v1alpha2 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "Add virtual group labels" + name: virtualgroup +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: { + metadata: labels: { + if parameter.scope == "namespace" { + "app.namespace.virtual.group": parameter.group + } + if parameter.scope == "cluster" { + "app.cluster.virtual.group": parameter.group + } + } + } + } + parameter: { + group: *"default" | string + scope: *"namespace" | string + } +``` + +Then it could be used like: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +spec: + ... + traits: + - type: virtualgroup + properties: + group: "my-group1" + scope: "cluster" +``` + +### Add Annotations + +Similar to common labels, you could also patch the component instance with annotations. The annotation value should be a JSON string. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "Specify auto scale by annotation" + name: kautoscale +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: false + schematic: + cue: + template: | + import "encoding/json" + + patch: { + metadata: annotations: { + "my.custom.autoscale.annotation": json.Marshal({ + "minReplicas": parameter.min + "maxReplicas": parameter.max + }) + } + } + parameter: { + min: *1 | int + max: *3 | int + } +``` + +### Add Pod Environments + +Inject system environments into Pod is also very common use case. + +> This case relies on strategy merge patch, so don't forget add `+patchKey=name` as below: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "add env into your pods" + name: env +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + // +patchKey=name + containers: [{ + name: context.name + // +patchKey=name + env: [ + for k, v in parameter.env { + name: k + value: v + }, + ] + }] + } + } + + parameter: { + env: [string]: string + } +``` + +### Inject `ServiceAccount` Based on External Auth Service + +In this example, the service account was dynamically requested from an authentication service and patched into the service. + +This example put UID token in HTTP header but you can also use request body if you prefer. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "dynamically specify service account" + name: service-account +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + processing: { + output: { + credentials?: string + } + http: { + method: *"GET" | string + url: parameter.serviceURL + request: { + header: { + "authorization.token": parameter.uidtoken + } + } + } + } + patch: { + spec: template: spec: serviceAccountName: processing.output.credentials + } + + parameter: { + uidtoken: string + serviceURL: string + } +``` + +The `processing.http` section is an advanced feature that allow trait definition to send a HTTP request during rendering the resource. Please refer to [Execute HTTP Request in Trait Definition](#Processing-Trait) section for more details. 
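+
+The usage of this trait could be as below (a sketch; the token and auth service URL values are illustrative placeholders, not real endpoints):
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: testapp
+spec:
+  components:
+    - name: express-server
+      type: webservice
+      properties:
+        image: oamdev/testapp:v1
+      traits:
+        - type: "service-account"
+          properties:
+            # both values below are hypothetical examples
+            uidtoken: "example-uid-token"
+            serviceURL: "https://auth.example.com/uids"
+```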
+ +### Add `InitContainer` + +[`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) is useful to pre-define operations in an image and run it before app container. + +Below is an example: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +metadata: + annotations: + definition.oam.dev/description: "add an init container and use shared volume with pod" + name: init-container +spec: + appliesToWorkloads: + - deployments.apps + podDisruptive: true + schematic: + cue: + template: | + patch: { + spec: template: spec: { + // +patchKey=name + containers: [{ + name: context.name + // +patchKey=name + volumeMounts: [{ + name: parameter.mountName + mountPath: parameter.appMountPath + }] + }] + initContainers: [{ + name: parameter.name + image: parameter.image + if parameter.command != _|_ { + command: parameter.command + } + + // +patchKey=name + volumeMounts: [{ + name: parameter.mountName + mountPath: parameter.initMountPath + }] + }] + // +patchKey=name + volumes: [{ + name: parameter.mountName + emptyDir: {} + }] + } + } + + parameter: { + name: string + image: string + command?: [...string] + mountName: *"workdir" | string + appMountPath: string + initMountPath: string + } +``` + +The usage could be: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: testapp +spec: + components: + - name: express-server + type: webservice + properties: + image: oamdev/testapp:v1 + traits: + - type: "init-container" + properties: + name: "install-container" + image: "busybox" + command: + - wget + - "-O" + - "/work-dir/index.html" + - http://info.cern.ch + mountName: "workdir" + appMountPath: "/usr/share/nginx/html" + initMountPath: "/work-dir" +``` diff --git a/versioned_docs/version-v1.1/platform-engineers/traits/status.md b/versioned_docs/version-v1.1/platform-engineers/traits/status.md new file mode 100644 index 00000000..7b09b22c --- /dev/null +++ b/versioned_docs/version-v1.1/platform-engineers/traits/status.md @@ -0,0 +1,137 @@ +--- +title: Status Write Back +--- + +This documentation will explain how to achieve status write back by using CUE templates in definition objects. + +## Health Check + +The spec of health check is `spec.status.healthPolicy`, they are the same for both Workload Type and Trait. + +If not defined, the health result will always be `true`. + +The keyword in CUE is `isHealth`, the result of CUE expression must be `bool` type. +KubeVela runtime will evaluate the CUE expression periodically until it becomes healthy. Every time the controller will get all the Kubernetes resources and fill them into the context field. + +So the context will contain following information: + +```cue +context:{ + name: + appName: + output: + outputs: { + : + : + } +} +``` + +Trait will not have the `context.output`, other fields are the same. + +The example of health check likes below: + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +spec: + status: + healthPolicy: | + isHealth: (context.output.status.readyReplicas > 0) && (context.output.status.readyReplicas == context.output.status.replicas) + ... +``` + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +spec: + status: + healthPolicy: | + isHealth: len(context.outputs.service.spec.clusterIP) > 0 + ... +``` + +> Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example. 
+ +The health check result will be recorded into the `Application` resource. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +spec: + components: + - name: myweb + type: worker + properties: + cmd: + - sleep + - "1000" + enemies: alien + image: busybox + lives: "3" + traits: + - type: ingress + properties: + domain: www.example.com + http: + /: 80 +status: + ... + services: + - healthy: true + message: "type: busybox,\t enemies:alien" + name: myweb + traits: + - healthy: true + message: 'Visiting URL: www.example.com, IP: 47.111.233.220' + type: ingress + status: running +``` + +## Custom Status + +The spec of custom status is `spec.status.customStatus`, they are the same for both Workload Type and Trait. + +The keyword in CUE is `message`, the result of CUE expression must be `string` type. + +The custom status has the same mechanism with health check. +Application CRD controller will evaluate the CUE expression after the health check succeed. + +The context will contain following information: + +```cue +context:{ + name: + appName: + output: + outputs: { + : + : + } +} +``` + +Trait will not have the `context.output`, other fields are the same. + + +Please refer to [this doc](https://github.com/oam-dev/kubevela/blob/master/docs/examples/app-with-status/template.yaml) for the complete example. + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: ComponentDefinition +spec: + status: + customStatus: |- + message: "type: " + context.output.spec.template.spec.containers[0].image + ",\t enemies:" + context.outputs.gameconfig.data.enemies + ... +``` + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: TraitDefinition +spec: + status: + customStatus: |- + message: "type: "+ context.outputs.service.spec.type +",\t clusterIP:"+ context.outputs.service.spec.clusterIP+",\t ports:"+ "\(context.outputs.service.spec.ports[0].port)"+",\t domain"+context.outputs.ingress.spec.rules[0].host + ... +``` diff --git a/versioned_docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md b/versioned_docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md new file mode 100644 index 00000000..a6b39123 --- /dev/null +++ b/versioned_docs/version-v1.1/platform-engineers/workflow/built-in-workflow-defs.md @@ -0,0 +1,269 @@ +--- +title: Appendix - Built-in Workflow Definitions +--- + +KubeVela provides some built-in workflow step definitions for better experience. + +## apply-application + +### Overview + +Apply all components and traits in Application. + +### Parameter + +No arguments, used for custom steps before or after application applied. + +### Example + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: express-server + type: apply-application +``` + +## depends-on-app + +### Overview + +Wait for the specified Application to complete. 
+ +### Parameter + +| Name | Type | Description | +| :-------: | :----: | :------------------------------: | +| name | string | The name of the Application | +| namespace | string | The namespace of the Application | + +### Example + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: express-server + type: depends-on-app + properties: + name: another-app + namespace: default +``` + +## multi-env + +### Overview + +Apply Application in different policies and envs. + +### Parameter + +| Name | Type | Description | +| :----: | :----: | :--------------------: | +| policy | string | The name of the policy | +| env | string | The name of the env | + +### Example + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: multi-env-demo + namespace: default +spec: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.21 + port: 80 + + policies: + - name: env + type: env-binding + properties: + created: false + envs: + - name: test + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + namespaceSelector: + name: test + - name: prod + patch: + components: + - name: nginx-server + type: webservice + properties: + image: nginx:1.20 + port: 80 + placement: + namespaceSelector: + name: prod + + workflow: + steps: + - name: deploy-test-server + type: deploy2env + properties: + policy: env + env: test + - name: deploy-prod-server + type: deploy2env + properties: + policy: env + env: prod +``` + +## webhook-notification + +### Overview + +Send messages to the webhook address. + +### Parameters + +| Name | Type | Description | +| :--------------: | :----: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| slack | Object | Optional, please fulfill its url and message if you want to send Slack messages | +| slack.url | String | Required, the webhook address of Slack | +| slack.message | Object | Required, the Slack messages you want to send, please follow [Slack messaging](https://api.slack.com/reference/messaging/payload) | +| dingding | Object | Optional, please fulfill its url and message if you want to send DingTalk messages | +| dingding.url | String | Required, the webhook address of DingTalk | +| dingding.message | Object | Required, the DingTalk messages you want to send, please follow [DingTalk messaging](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw) | | + +### Example + +```yaml +apiVersion: core.oam.dev/v1beta1 +kind: Application +metadata: + name: first-vela-workflow + namespace: default +spec: + components: + - name: express-server + type: webservice + properties: + image: crccheck/hello-world + port: 8000 + traits: + - type: ingress + properties: + domain: testsvc.example.com + http: + /: 8000 + workflow: + steps: + - name: dingtalk-message + type: webhook-notification + properties: + dingding: + # the DingTalk webhook address, please refer to: https://developers.dingtalk.com/document/robots/custom-robot-access + url: xxx + message: + msgtype: text + text: + context: Workflow starting... 
+    - name: application
+      type: apply-application
+    - name: slack-message
+      type: webhook-notification
+      properties:
+        slack:
+          # the Slack webhook address, please refer to: https://api.slack.com/messaging/webhooks
+          url: xxx
+          message:
+            text: Workflow ended.
+```
+
+## suspend
+
+### Overview
+
+Suspend the current workflow; you can use `vela workflow resume appname` to resume the suspended workflow.
+
+> For more information about `vela workflow`, please refer to [vela cli](../../cli/vela_workflow).
+
+### Parameter
+
+| Name | Type | Description |
+| :---: | :---: | :---------: |
+|  -   |  -   |      -      |
+
+### Example
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+  - name: express-server
+    type: webservice
+    properties:
+      image: crccheck/hello-world
+      port: 8000
+    traits:
+    - type: ingress
+      properties:
+        domain: testsvc.example.com
+        http:
+          /: 8000
+  workflow:
+    steps:
+    - name: slack-message
+      type: webhook-notification
+      properties:
+        slack:
+          # the Slack webhook address, please refer to: https://api.slack.com/messaging/webhooks
+          url: xxx
+          message:
+            text: Ready to apply the application, ask the administrator to approve and resume the workflow.
+    - name: manual-approval
+      type: suspend
+    - name: express-server
+      type: apply-application
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v1.1/platform-engineers/workflow/cue-actions.md b/versioned_docs/version-v1.1/platform-engineers/workflow/cue-actions.md
new file mode 100644
index 00000000..ff645e5c
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/workflow/cue-actions.md
@@ -0,0 +1,297 @@
+---
+title: Appendix - CUE Actions
+---
+
+This doc illustrates the CUE actions provided by the `vela/op` stdlib package.
+
+> To learn the syntax of CUE, read [CUE Basic](../cue/basic)
+
+## Apply
+
+---
+
+Create or update a resource in the Kubernetes cluster.
+
+### Action Parameter
+
+- value: the resource structure to be created or updated. After successful execution, `value` will be updated with the resource status.
+- patch: the content supports `Strategic Merge Patch`; you can define the list-merge strategy through comments.
+
+```
+#Apply: {
+  value: {...}
+  patch: {
+    //patchKey=$key
+    ...
+  }
+}
+```
+
+### Usage
+
+```
+import "vela/op"
+stepName: op.#Apply & {
+  value: {
+    kind:       "Deployment"
+    apiVersion: "apps/v1"
+    metadata: name: "test-app"
+    spec: {
+      replicas: 2
+      ...
+    }
+  }
+  patch: {
+    spec: template: spec: {
+      //patchKey=name
+      containers: [{name: "sidecar"}]
+    }
+  }
+}
+```
+
+## ConditionalWait
+
+---
+
+The step will be blocked until the condition is met.
+
+### Action Parameter
+
+- continue: the step will be blocked until the value becomes `true`.
+
+```
+#ConditionalWait: {
+  continue: bool
+}
+```
+
+### Usage
+
+```
+import "vela/op"
+
+apply: op.#Apply
+
+wait: op.#ConditionalWait & {
+  continue: apply.value.status.phase == "running"
+}
+```
+
+## Load
+
+---
+
+Get all components in the application.
+
+### Action Parameter
+
+No parameters.
+
+```
+#Load: {}
+```
+
+### Usage
+
+```
+import "vela/op"
+
+// You can use `load.value.[componentName]` after this action.
+load: op.#Load & {}
+```
+
+## Read
+
+---
+
+Get a resource in the Kubernetes cluster.
+
+### Action Parameter
+
+- value: the resource metadata to get. After successful execution, `value` will be updated with the resource definition in the cluster.
+- err: if an error occurs, `err` will contain the error message.
+
+```
+#Read: {
+  value: {}
+  err?: string
+}
+```
+
+### Usage
+
+```
+import "vela/op"
+
+// You can use configmap.value.data after this action.
+configmap: op.#Read & {
+  value: {
+    kind:       "ConfigMap"
+    apiVersion: "v1"
+    metadata: {
+      name:      "configmap-name"
+      namespace: "configmap-ns"
+    }
+  }
+}
+```
+
+## ApplyApplication
+
+---
+
+Create or update the resources corresponding to the application in the Kubernetes cluster.
+
+### Action Parameter
+
+No parameters.
+
+```
+#ApplyApplication: {}
+```
+
+### Usage
+
+```
+apply: op.#ApplyApplication & {}
+```
+
+## ApplyComponent
+
+---
+
+Create or update the resources corresponding to a component in the Kubernetes cluster.
+
+### Action Parameter
+
+- component: the component name.
+- workload: the workload resource of the component; its value is filled in after successful execution.
+- traits: the trait resources of the component; their values are filled in after successful execution.
+
+```
+#ApplyComponent: {
+  component: string
+  workload: {...}
+  traits: [string]: {...}
+}
+```
+
+### Usage
+
+```
+apply: op.#ApplyComponent & {
+  component: "component-name"
+}
+```
+
+## ApplyRemaining
+
+---
+
+Create or update the resources corresponding to all components of the application in the Kubernetes cluster. Use `exceptions` to specify which components should not be applied, or to skip only some resources of an excepted component.
+
+### Action Parameter
+
+- exceptions: the names of the excepted components.
+- skipApplyWorkload: whether to skip applying the workload resource.
+- skipAllTraits: whether to skip applying all trait resources.
+
+```
+#ApplyRemaining: {
+  exceptions?: [componentName=string]: {
+    skipApplyWorkload: *true | bool
+
+    skipAllTraits: *true | bool
+  }
+}
+```
+
+### Usage
+
+```
+apply: op.#ApplyRemaining & {
+  exceptions: {"applied-component-name": {}}
+}
+```
+
+## Slack
+
+---
+
+Send messages to Slack.
+
+### Action Parameter
+
+- url: the webhook address of Slack.
+- message: the messages that you want to send, please refer to [Slack messaging](https://api.slack.com/reference/messaging/payload).
+
+```
+#Slack: {
+  url: string
+  message: {...}
+}
+```
+
+### Usage
+
+```
+apply: op.#Slack & {
+  url: "your Slack webhook url"
+  message: text: "Hello KubeVela"
+}
+```
+
+## DingTalk
+
+---
+
+Send messages to DingTalk.
+
+### Action Parameter
+
+- url: the webhook address of DingTalk.
+- message: the messages that you want to send, please refer to [DingTalk messaging](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw).
+
+```
+#DingTalk: {
+  url: string
+  message: {...}
+}
+```
+
+### Usage
+
+```
+apply: op.#DingTalk & {
+  url: "your DingTalk webhook url"
+  message: {
+    msgtype: "text"
+    text: content: "Hello KubeVela"
+  }
+}
+```
+
+## Steps
+
+---
+
+Used to encapsulate a group of operations.
+
+- In steps, you need to specify the execution order with the `@step` tag, as shown in the usage below.
+
+### Usage
+
+```
+app: op.#Steps & {
+  load: op.#Load & {} @step(1)
+  apply: op.#Apply & {
+    // apply the workload of the component loaded in step 1
+    value: load.value["component-name"].workload
+  } @step(2)
+}
+```
diff --git a/versioned_docs/version-v1.1/platform-engineers/workflow/workflow.md b/versioned_docs/version-v1.1/platform-engineers/workflow/workflow.md
new file mode 100644
index 00000000..48a86c83
--- /dev/null
+++ b/versioned_docs/version-v1.1/platform-engineers/workflow/workflow.md
@@ -0,0 +1,147 @@
+---
+title: Workflow
+---
+
+## Overview
+
+`Workflow` allows you to customize steps in an `Application`, glue together additional delivery processes, and target arbitrary delivery environments.
+In short, `Workflow` provides customized control flow and flexibility on top of Kubernetes' original delivery model (apply). For example, `Workflow` can be used to implement complex operations such as pausing, manual approval, waiting on status, data passing, multi-environment gray release, A/B testing, and more.
+
+`Workflow` is a further exploration of and best practice for the OAM model in KubeVela, and it follows OAM's modular concept and reusable characteristics. Each workflow module is a "super glue" that can combine arbitrary tools and processes. In a modern, complex cloud-native application delivery environment, this lets you describe the entire delivery process with a single declarative configuration, ensuring the delivery is stable and convenient.
+
+## Using workflow
+
+`Workflow` consists of steps; you can either use KubeVela's [built-in workflow steps](./built-in-workflow-defs) or write your own `WorkflowStepDefinition` to complete the operation.
+
+We can use `vela def` to define workflow steps by writing CUE templates. Let's write an `Application` that applies a Tomcat with a Helm chart and automatically sends a message to Slack when Tomcat is running.
+
+### Workflow Steps
+
+KubeVela provides several CUE actions for writing workflow steps. These actions are provided by the `vela/op` package. To implement the above scenario, we need the following three CUE actions:
+
+| Action | Description | Parameter |
+| :---: | :--: | :-- |
+| [ApplyApplication](./cue-actions#applyapplication) | Apply all the resources in the Application. | - |
+| [Read](./cue-actions#read) | Read resources in the Kubernetes cluster. | value: the resource metadata to get; after successful execution, `value` will be updated with the resource definition in the cluster. <br /> err: if an error occurs, `err` will contain the error message. |
+| [ConditionalWait](./cue-actions#conditionalwait) | The workflow step will be blocked until the condition is met. | continue: the workflow step will be blocked until the value becomes `true`. |
+
+> For all the workflow actions, please refer to [CUE Actions](./cue-actions)
+
+After this, we need two `WorkflowStepDefinitions` to complete the Application:
+
+1. Apply Tomcat and wait until its status becomes running. We need to write a custom workflow step for this.
+2. Send Slack notifications. We can use the built-in [webhook-notification](./built-in-workflow-defs#webhook-notification) step for this.
+
+#### Step: Apply Tomcat
+
+First, use `vela def init` to generate a `WorkflowStepDefinition` template:
+
+```shell
+vela def init my-helm -t workflow-step --desc "Apply helm charts and wait till it's running." -o my-helm.cue
+```
+
+The result is as follows:
+
+```shell
+$ cat my-helm.cue
+
+"my-helm": {
+  annotations: {}
+  attributes: {}
+  description: "Apply helm charts and wait till it's running."
+  labels: {}
+  type: "workflow-step"
+}
+
+template: {
+}
+```
+
+Import `vela/op` and complete the CUE code in `template`:
+
+```
+import (
+  "vela/op"
+)
+
+"my-helm": {
+  annotations: {}
+  attributes: {}
+  description: "Apply helm charts and wait till it's running."
+  labels: {}
+  type: "workflow-step"
+}
+
+template: {
+  // Apply all the resources in the Application
+  apply: op.#ApplyApplication & {}
+
+  resource: op.#Read & {
+    value: {
+      kind:       "Deployment"
+      apiVersion: "apps/v1"
+      metadata: {
+        name: "tomcat"
+        // we can use context to get any metadata in the Application
+        namespace: context.namespace
+      }
+    }
+  }
+
+  workload: resource.value
+  // wait till the Deployment is ready
+  wait: op.#ConditionalWait & {
+    continue: workload.status.readyReplicas == workload.status.replicas && workload.status.observedGeneration == workload.metadata.generation
+  }
+}
+```
+
+Apply it to the cluster:
+
+```shell
+$ vela def apply my-helm.cue
+
+WorkflowStepDefinition my-helm in namespace vela-system updated.
+```
+
+#### Step: Send Slack notifications
+
+Use the built-in [webhook-notification](./built-in-workflow-defs#webhook-notification) step.
+
+### Apply the Application
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-workflow
+  namespace: default
+spec:
+  components:
+  - name: tomcat
+    type: helm
+    properties:
+      repoType: helm
+      url: https://charts.bitnami.com/bitnami
+      chart: tomcat
+      version: "9.2.20"
+  workflow:
+    steps:
+    - name: tomcat
+      # specify the step type
+      type: my-helm
+      outputs:
+      - name: msg
+        # get the value from the Deployment status in my-helm
+        valueFrom: resource.value.status.conditions[0].message
+    - name: send-message
+      type: webhook-notification
+      inputs:
+      - from: msg
+        # use the output value from the previous step and pass it into the property slack.message.text
+        parameterKey: slack.message.text
+      properties:
+        slack:
+          # the address of your Slack webhook, please refer to: https://api.slack.com/messaging/webhooks
+          url: <your slack webhook url>
+```
+
+Apply the Application to the cluster and you will see that all resources are successfully applied and Slack receives a message with the Deployment status.
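+
+For reference, the executed workflow is also recorded in the Application's status. Simplified, it looks roughly like the sketch below (field names are indicative and may differ between releases):
+
+```yaml
+status:
+  workflow:
+    steps:
+    - name: tomcat
+      type: my-helm
+      phase: succeeded
+    - name: send-message
+      type: webhook-notification
+      phase: succeeded
+  status: running
+```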
diff --git a/versioned_docs/version-v1.1/quick-start-appfile.md b/versioned_docs/version-v1.1/quick-start-appfile.md
new file mode 100644
index 00000000..3e42124f
--- /dev/null
+++ b/versioned_docs/version-v1.1/quick-start-appfile.md
@@ -0,0 +1,86 @@
+---
+title: Overview
+---
+
+To achieve the best user experience for your platform, we recommend that platform builders create a simple and user-friendly UI for end users instead of exposing full platform-level details to them. Some common practices include building a GUI console, adopting a DSL, or creating a user-friendly command-line tool.
+
+As a proof of concept of building developer experience with KubeVela, we also developed a client-side tool named `Appfile`. This tool enables developers to deploy any application with a single file and a single command: `vela up`.
+
+Now let's walk through its experience.
+
+## Step 1: Install
+
+Make sure you have finished and verified the installation following the quick-install guide.
+
+## Step 2: Deploy Your First Application
+
+```bash
+vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela.yaml
+```
+```console
+Parsing vela.yaml ...
+Loading templates ...
+
+Rendering configs for service (testsvc)...
+Writing deploy config to (.vela/deploy.yaml)
+
+Applying deploy configs ...
+Checking if app has been deployed...
+App has not been deployed, creating a new deployment...
+✅ App has been deployed 🚀🚀🚀
+    Port forward: vela port-forward first-vela-app
+             SSH: vela exec first-vela-app
+         Logging: vela logs first-vela-app
+      App status: vela status first-vela-app
+  Service status: vela status first-vela-app --svc testsvc
+```
+
+Check the status until we see `Routes` are ready:
+
+```bash
+vela status first-vela-app
+```
+```console
+About:
+
+  Name:       first-vela-app
+  Namespace:  default
+  Created at: ...
+  Updated at: ...
+
+Services:
+
+  - Name: testsvc
+    Type: webservice
+    HEALTHY Ready: 1/1
+    Last Deployment:
+      Created at: ...
+      Updated at: ...
+    Traits:
+      - ✅ ingress: Visiting URL: testsvc.example.com, IP: <your ip address>
+```
+
+**In a kind cluster setup, you can visit the service via localhost. In other setups, replace localhost with the ingress address accordingly.**
+
+```
+curl -H "Host:testsvc.example.com" http://localhost/
+```
+```console
+
+Hello World
+
+
+                                       ##         .
+                                 ## ## ##        ==
+                              ## ## ## ## ##    ===
+                           /""""""""""""""""\___/ ===
+                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
+                           \______ o          _,/
+                            \      \       _,'
+                             `'--.._\..--''
+```
+**Voila!** You are all set to go.
+
+## What's Next
+
+- Learn details about [`Appfile`](./developers/learn-appfile) and how it works.
diff --git a/versioned_docs/version-v1.1/quick-start.md b/versioned_docs/version-v1.1/quick-start.md
new file mode 100644
index 00000000..a65f1c9a
--- /dev/null
+++ b/versioned_docs/version-v1.1/quick-start.md
@@ -0,0 +1,64 @@
+---
+title: Deploy First Application
+---
+
+Welcome to KubeVela! In this guide, we'll walk you through how to install KubeVela and deploy your first simple application.
+
+## Step 1: Install
+
+Make sure you have finished and verified the installation following [this guide](install).
+
+## Step 2: Deploy Your First Application
+
+```bash
+$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/vela-app.yaml
+application.core.oam.dev/first-vela-app created
+```
+
+The command above applies an application to KubeVela and lets it distribute the application to the proper runtime infrastructure.
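+
+The manifest referenced above is an ordinary `Application` resource. Simplified, it looks roughly like the sketch below (see the file at the URL above for the authoritative version):
+
+```yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+metadata:
+  name: first-vela-app
+spec:
+  components:
+  - name: express-server
+    type: webservice
+    properties:
+      image: crccheck/hello-world
+      port: 8000
+    traits:
+    # expose the service through an ingress trait
+    - type: ingress
+      properties:
+        domain: testsvc.example.com
+        http:
+          /: 8000
+```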
+
+Check the status until we see `status` is `running` and the services are `healthy`:
+
+```bash
+$ kubectl get application first-vela-app -o yaml
+apiVersion: core.oam.dev/v1beta1
+kind: Application
+...
+status:
+  ...
+  services:
+  - healthy: true
+    name: express-server
+    traits:
+    - healthy: true
+      message: 'Visiting URL: testsvc.example.com, IP: your ip address'
+      type: ingress
+  status: running
+```
+
+You can now visit the application directly (regardless of where it is running).
+
+```
+$ curl -H "Host:testsvc.example.com" http://<your-ip-address>/
+
+Hello World
+
+
+                                       ##         .
+                                 ## ## ##        ==
+                              ## ## ## ## ##    ===
+                           /""""""""""""""""\___/ ===
+                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
+                           \______ o          _,/
+                            \      \       _,'
+                             `'--.._\..--''
+```
+**Voila!** You are all set to go.
+
+## What's Next
+
+Here are some recommended next steps:
+
+- Learn KubeVela's [Core Concepts](./core-concepts/application).
+- Learn KubeVela's [Architecture](./core-concepts/architecture).
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-01.png b/versioned_docs/version-v1.1/resources/KubeVela-01.png
new file mode 100644
index 00000000..f48cd487
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-01.png differ
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-02.png b/versioned_docs/version-v1.1/resources/KubeVela-02.png
new file mode 100644
index 00000000..1c297828
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-02.png differ
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-03.png b/versioned_docs/version-v1.1/resources/KubeVela-03.png
new file mode 100644
index 00000000..1e4f1171
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-03.png differ
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-04.png b/versioned_docs/version-v1.1/resources/KubeVela-04.png
new file mode 100644
index 00000000..bc50f5a9
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-04.png differ
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-05.png b/versioned_docs/version-v1.1/resources/KubeVela-05.png
new file mode 100644
index 00000000..cf9d3a33
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-05.png differ
diff --git a/versioned_docs/version-v1.1/resources/KubeVela-06.png b/versioned_docs/version-v1.1/resources/KubeVela-06.png
new file mode 100644
index 00000000..8e77d6b5
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/KubeVela-06.png differ
diff --git a/versioned_docs/version-v1.1/resources/api-arch.jpg b/versioned_docs/version-v1.1/resources/api-arch.jpg
new file mode 100644
index 00000000..e7007c13
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/api-arch.jpg differ
diff --git a/versioned_docs/version-v1.1/resources/api-workflow.png b/versioned_docs/version-v1.1/resources/api-workflow.png
new file mode 100644
index 00000000..bc43dc11
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/api-workflow.png differ
diff --git a/versioned_docs/version-v1.1/resources/apiserver-arch.jpg b/versioned_docs/version-v1.1/resources/apiserver-arch.jpg
new file mode 100644
index 00000000..de8974f2
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/apiserver-arch.jpg differ
diff --git a/versioned_docs/version-v1.1/resources/app-centric.png b/versioned_docs/version-v1.1/resources/app-centric.png
new file mode 100644
index 00000000..bd3e6d4b
Binary files /dev/null and b/versioned_docs/version-v1.1/resources/app-centric.png differ
diff --git
a/versioned_docs/version-v1.1/resources/appfile.png b/versioned_docs/version-v1.1/resources/appfile.png new file mode 100644 index 00000000..c21ad845 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/appfile.png differ diff --git a/versioned_docs/version-v1.1/resources/approllout-status-transition.jpg b/versioned_docs/version-v1.1/resources/approllout-status-transition.jpg new file mode 100644 index 00000000..ba275688 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/approllout-status-transition.jpg differ diff --git a/versioned_docs/version-v1.1/resources/arch.jpg b/versioned_docs/version-v1.1/resources/arch.jpg new file mode 100644 index 00000000..0512f0db Binary files /dev/null and b/versioned_docs/version-v1.1/resources/arch.jpg differ diff --git a/versioned_docs/version-v1.1/resources/arch.png b/versioned_docs/version-v1.1/resources/arch.png new file mode 100644 index 00000000..37aeffab Binary files /dev/null and b/versioned_docs/version-v1.1/resources/arch.png differ diff --git a/versioned_docs/version-v1.1/resources/book-info-struct.jpg b/versioned_docs/version-v1.1/resources/book-info-struct.jpg new file mode 100644 index 00000000..ca5fbb6f Binary files /dev/null and b/versioned_docs/version-v1.1/resources/book-info-struct.jpg differ diff --git a/versioned_docs/version-v1.1/resources/canary-pic-v2.jpg b/versioned_docs/version-v1.1/resources/canary-pic-v2.jpg new file mode 100644 index 00000000..11b83d4b Binary files /dev/null and b/versioned_docs/version-v1.1/resources/canary-pic-v2.jpg differ diff --git a/versioned_docs/version-v1.1/resources/canary-pic-v3.jpg b/versioned_docs/version-v1.1/resources/canary-pic-v3.jpg new file mode 100644 index 00000000..83a20380 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/canary-pic-v3.jpg differ diff --git a/versioned_docs/version-v1.1/resources/catalog-workflow.jpg b/versioned_docs/version-v1.1/resources/catalog-workflow.jpg new file mode 100644 index 00000000..33957a42 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/catalog-workflow.jpg differ diff --git a/versioned_docs/version-v1.1/resources/coa.png b/versioned_docs/version-v1.1/resources/coa.png new file mode 100644 index 00000000..e1062354 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/coa.png differ diff --git a/versioned_docs/version-v1.1/resources/concepts.png b/versioned_docs/version-v1.1/resources/concepts.png new file mode 100644 index 00000000..e805e745 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/concepts.png differ diff --git a/versioned_docs/version-v1.1/resources/crossplane-visit-application-v2.jpg b/versioned_docs/version-v1.1/resources/crossplane-visit-application-v2.jpg new file mode 100644 index 00000000..77ac4f1f Binary files /dev/null and b/versioned_docs/version-v1.1/resources/crossplane-visit-application-v2.jpg differ diff --git a/versioned_docs/version-v1.1/resources/crossplane-visit-application-v3.jpg b/versioned_docs/version-v1.1/resources/crossplane-visit-application-v3.jpg new file mode 100644 index 00000000..68ad952e Binary files /dev/null and b/versioned_docs/version-v1.1/resources/crossplane-visit-application-v3.jpg differ diff --git a/versioned_docs/version-v1.1/resources/crossplane-visit-application.jpg b/versioned_docs/version-v1.1/resources/crossplane-visit-application.jpg new file mode 100644 index 00000000..ac3802d5 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/crossplane-visit-application.jpg differ diff --git 
a/versioned_docs/version-v1.1/resources/gitops-commit.png b/versioned_docs/version-v1.1/resources/gitops-commit.png new file mode 100644 index 00000000..2ce9c8b3 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/gitops-commit.png differ diff --git a/versioned_docs/version-v1.1/resources/how-it-works.png b/versioned_docs/version-v1.1/resources/how-it-works.png new file mode 100644 index 00000000..e2940829 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/how-it-works.png differ diff --git a/versioned_docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg b/versioned_docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg new file mode 100644 index 00000000..56140887 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/install-metrics-server-in-ASK.jpg differ diff --git a/versioned_docs/version-v1.1/resources/json-schema-render-example.jpg b/versioned_docs/version-v1.1/resources/json-schema-render-example.jpg new file mode 100644 index 00000000..245563cc Binary files /dev/null and b/versioned_docs/version-v1.1/resources/json-schema-render-example.jpg differ diff --git a/versioned_docs/version-v1.1/resources/kubevela-runtime.png b/versioned_docs/version-v1.1/resources/kubevela-runtime.png new file mode 100644 index 00000000..a91b94ed Binary files /dev/null and b/versioned_docs/version-v1.1/resources/kubevela-runtime.png differ diff --git a/versioned_docs/version-v1.1/resources/kubewatch-notif.jpg b/versioned_docs/version-v1.1/resources/kubewatch-notif.jpg new file mode 100644 index 00000000..54eacf4d Binary files /dev/null and b/versioned_docs/version-v1.1/resources/kubewatch-notif.jpg differ diff --git a/versioned_docs/version-v1.1/resources/li-auto-inc.jpg b/versioned_docs/version-v1.1/resources/li-auto-inc.jpg new file mode 100644 index 00000000..a6008344 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/li-auto-inc.jpg differ diff --git a/versioned_docs/version-v1.1/resources/li.jpg b/versioned_docs/version-v1.1/resources/li.jpg new file mode 100644 index 00000000..a6008344 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/li.jpg differ diff --git a/versioned_docs/version-v1.1/resources/metrics.jpg b/versioned_docs/version-v1.1/resources/metrics.jpg new file mode 100644 index 00000000..af51b9ea Binary files /dev/null and b/versioned_docs/version-v1.1/resources/metrics.jpg differ diff --git a/versioned_docs/version-v1.1/resources/observability-dashboards.png b/versioned_docs/version-v1.1/resources/observability-dashboards.png new file mode 100644 index 00000000..52595944 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-dashboards.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-dashboards.png b/versioned_docs/version-v1.1/resources/observability-system-level-dashboards.png new file mode 100644 index 00000000..a451efee Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-dashboards.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-logging-search.png b/versioned_docs/version-v1.1/resources/observability-system-level-logging-search.png new file mode 100644 index 00000000..4dabf42c Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-logging-search.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics.png 
b/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics.png new file mode 100644 index 00000000..daf751ef Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics2.png b/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics2.png new file mode 100644 index 00000000..58169e43 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-logging-statistics2.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png b/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png new file mode 100644 index 00000000..b1a0e897 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages-chart.png differ diff --git a/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png b/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png new file mode 100644 index 00000000..251439bc Binary files /dev/null and b/versioned_docs/version-v1.1/resources/observability-system-level-summary-of-source-usages.png differ diff --git a/versioned_docs/version-v1.1/resources/openfaas.jpg b/versioned_docs/version-v1.1/resources/openfaas.jpg new file mode 100644 index 00000000..b97e81f1 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/openfaas.jpg differ diff --git a/versioned_docs/version-v1.1/resources/promotion.png b/versioned_docs/version-v1.1/resources/promotion.png new file mode 100644 index 00000000..a66f87d8 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/promotion.png differ diff --git a/versioned_docs/version-v1.1/resources/system-arch.png b/versioned_docs/version-v1.1/resources/system-arch.png new file mode 100644 index 00000000..92c0246e Binary files /dev/null and b/versioned_docs/version-v1.1/resources/system-arch.png differ diff --git a/versioned_docs/version-v1.1/resources/traffic-shifting-analysis.png b/versioned_docs/version-v1.1/resources/traffic-shifting-analysis.png new file mode 100644 index 00000000..1001d851 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/traffic-shifting-analysis.png differ diff --git a/versioned_docs/version-v1.1/resources/vela_show_autoscale.jpg b/versioned_docs/version-v1.1/resources/vela_show_autoscale.jpg new file mode 100644 index 00000000..e14896e1 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/vela_show_autoscale.jpg differ diff --git a/versioned_docs/version-v1.1/resources/vela_show_webservice.jpg b/versioned_docs/version-v1.1/resources/vela_show_webservice.jpg new file mode 100644 index 00000000..ee913061 Binary files /dev/null and b/versioned_docs/version-v1.1/resources/vela_show_webservice.jpg differ diff --git a/versioned_docs/version-v1.1/resources/what-is-kubevela.png b/versioned_docs/version-v1.1/resources/what-is-kubevela.png new file mode 100644 index 00000000..0e8092ee Binary files /dev/null and b/versioned_docs/version-v1.1/resources/what-is-kubevela.png differ diff --git a/versioned_docs/version-v1.1/resources/workflow-multi-env.png b/versioned_docs/version-v1.1/resources/workflow-multi-env.png new file mode 100644 index 00000000..dc44a7b4 Binary files /dev/null and 
b/versioned_docs/version-v1.1/resources/workflow-multi-env.png differ diff --git a/versioned_docs/version-v1.1/resources/workflow-with-ocm-demo.png b/versioned_docs/version-v1.1/resources/workflow-with-ocm-demo.png new file mode 100644 index 00000000..fcbe317f Binary files /dev/null and b/versioned_docs/version-v1.1/resources/workflow-with-ocm-demo.png differ diff --git a/versioned_docs/version-v1.1/roadmap/2020-12-roadmap.md b/versioned_docs/version-v1.1/roadmap/2020-12-roadmap.md new file mode 100644 index 00000000..6690d99a --- /dev/null +++ b/versioned_docs/version-v1.1/roadmap/2020-12-roadmap.md @@ -0,0 +1,29 @@ +--- +title: Roadmap +--- + +Date: 2020-10-01 to 2020-12-31 + +## Core Platform + +- [Merge CUE based abstraction into OAM user facing objects](https://github.com/oam-dev/kubevela/projects/1#card-48198530). +- [Compatibility checking between workload types and traits](https://github.com/oam-dev/kubevela/projects/1#card-48199349) and [`conflictsWith` feature](https://github.com/oam-dev/kubevela/projects/1#card-48199465) +- [Simplify revision mechanism in kubevela core](https://github.com/oam-dev/kubevela/projects/1#card-48199829) +- [Capability Center (i.e. addon registry)](https://github.com/oam-dev/kubevela/projects/1#card-48203470) +- [CRD registry to manage the third-party dependencies easier](https://github.com/oam-dev/kubevela/projects/1#card-48200758) +- [Dapr trait as built-in capability](https://github.com/oam-dev/kubevela/projects/1#card-49368484) + +## User Experience + +- [Smart Dashboard based on CUE schema](https://github.com/oam-dev/kubevela/projects/1#card-48200031) +- [Make defining CUE templates easier](https://github.com/oam-dev/kubevela/projects/1#card-48200509) +- [Generate reference doc automatically for capability based on CUE schema](https://github.com/oam-dev/kubevela/projects/1#card-48200195) +- [Better application observability](https://github.com/oam-dev/kubevela/projects/1#card-47134946) + +## Integration with other projects + +- Integrate with ArgoCD to do GitOps style application deployment + +## Project improvement + +- [Contributing the modularizing Flagger changes to upstream](https://github.com/oam-dev/kubevela/projects/1#card-48198830) diff --git a/versioned_docs/version-v1.1/roadmap/2021-03-roadmap.md b/versioned_docs/version-v1.1/roadmap/2021-03-roadmap.md new file mode 100644 index 00000000..bb53eccd --- /dev/null +++ b/versioned_docs/version-v1.1/roadmap/2021-03-roadmap.md @@ -0,0 +1,28 @@ +--- +title: Roadmap +--- + +Date: 2021-01-01 to 2021-03-30 + +## Core Platform + +- Add Application object as the deployment unit applied to k8s control plane. + - The new Application object will handle CUE template rendering on the server side. So the appfile would be translated to Application object directly without doing client side rendering. + - CLI/UI will be updated to replace ApplicationConfiguration and Component objects with Application object. +- Integrate Terraform as one of the core templating engines so that platform builders can add Terraform modules as Workloads/Traits into KubeVela. +- Re-architect API Server to have clean API and storage layer as [designed](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/APIServer-Catalog.md#2-api-design). +- Automatically sync Catalog server and display packages information as [designed](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/APIServer-Catalog.md#3-catalog-design). +- Add Rollout CRD to do native Workload and Application level application rollout management. 
+- Support intermediate store (e.g. ConfigMap) and JSON patch operations in data input/output.
+
+## User Experience
+
+- Rewrite the dashboard to support the up-to-date Vela object model.
+  - Support dynamic form rendering based on the OpenAPI schema generated from Definition objects.
+  - Support displaying pages of applications, capabilities, catalogs.
+- Automatically generate reference docs for capabilities and support displaying them in CLI/UI devtools.
+
+## Third-party integrations
+
+- Integrate with S2I (Source2Image) tooling like [Derrick](https://github.com/alibaba/derrick) to enable a more developer-friendly workflow in appfile.
+- Integrate with Dapr to enable an end-to-end microservice application development and deployment workflow.
diff --git a/versioned_docs/version-v1.1/roadmap/2021-06-roadmap.md b/versioned_docs/version-v1.1/roadmap/2021-06-roadmap.md
new file mode 100644
index 00000000..dfca869d
--- /dev/null
+++ b/versioned_docs/version-v1.1/roadmap/2021-06-roadmap.md
@@ -0,0 +1,36 @@
+---
+title: Roadmap
+---
+
+Date: 2021-04-01 to 2021-06-30
+
+## Core Platform
+
+1. Implement Application server-side Kustomize and Workflow.
+2. KubeVela as a control plane.
+   - The Application Controller deploys resources directly to remote clusters instead of using AppContext.
+   - AppRollout should be able to work in the runtime cluster or roll out remote cluster resources.
+3. Multi-cluster and multi-environment support: applications can be deployed to different environments, which
+   contain different clusters, with different strategies.
+4. Better Helm and Kustomize support: users can deploy a Helm chart or a git repo directly without any extra effort.
+5. Support built-in Application monitoring.
+6. Support more rollout strategies.
+   - blue-green
+   - traffic management rollout
+   - canary
+   - A/B
+7. Support a general CUE controller that can glue together more than K8s CRDs; it should support more protocols such as RESTful APIs,
+   Go function calls, etc.
+8. Discoverable capability registries with more backend integrations (file server/GitHub/OSS).
+
+## User Experience
+
+1. Developer tools and CI integration.
+2. Refine our docs and website.
+
+## Third-party integrations
+
+1. Integrate with Open Cluster Management.
+2. Integrate with Flux CD.
+3. Integrate with Dapr to enable an end-to-end microservice application development and deployment workflow.
+4. Integrate with Tilt for local development.
diff --git a/versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md b/versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md
new file mode 100644
index 00000000..c9aae325
--- /dev/null
+++ b/versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md
@@ -0,0 +1,21 @@
+---
+title: Roadmap
+---
+
+Date: 2021-07-01 to 2021-09-30
+
+## Core Platform
+
+1. Support more built-in capabilities and cloud resources with a unified experience, such as monitoring, auto-scaling, and middleware plugins.
+2. Auto binding for cloud resources.
+3. Support more security policies (integrate with OPA, CIS, Popeye).
+
+TBD: more features to be added
+
+## User Experience
+
+1. Support a Dashboard for deploying KubeVela Applications.
+2. Support velacp as a non-K8s APIServer for CI integration.
+ +## Third-party integrations + diff --git a/versioned_docs/version-v1.1/roadmap/README.md b/versioned_docs/version-v1.1/roadmap/README.md new file mode 100644 index 00000000..05a70285 --- /dev/null +++ b/versioned_docs/version-v1.1/roadmap/README.md @@ -0,0 +1,11 @@ +--- +title: KubeVela Roadmap +--- + +- [2021 Fall Roadmap](./2021-09-roadmap) +- [2021 Summer Roadmap](./2021-06-roadmap) +- [2021 Spring Roadmap](./2021-03-roadmap) +- [2020 Winter Roadmap](./2020-12-roadmap) + +To learn more details, please visit the [up to date github page](https://github.com/oam-dev/kubevela/tree/master/docs/en/roadmap/) +and [github issues list](https://github.com/oam-dev/kubevela/issues). \ No newline at end of file diff --git a/versioned_docs/version-v1.1/roadmap/template.md b/versioned_docs/version-v1.1/roadmap/template.md new file mode 100644 index 00000000..5acf3767 --- /dev/null +++ b/versioned_docs/version-v1.1/roadmap/template.md @@ -0,0 +1,41 @@ +--- +title: Roadmap +--- + +Date: 2021-01-01 to 2021-03-30 + +> Note: add roadmap entry to `roadmap/README.md` + +## Core Platform + +- K8s controllers +- Core workloads/traits +- Server components +- Architecture and model + +## User Experience + +- Devtools: CLI/UI/Appfile +- SDK/Framework + +## Deployment and Operations + +- Diagnostics and debugging + +## Third-party integrations + +- CI/CD, GitOps +- Application development framework +- Third party workloads/traits + +## Testing + +- Test infrastructure + - CI (Github Actions) and hosts + - codecov +- Unit/e2e test cases + +## Project elevation + +- Website/documentation improvement +- Contribution/development workflow improvement diff --git a/versioned_sidebars/version-v1.1-sidebars.json b/versioned_sidebars/version-v1.1-sidebars.json new file mode 100644 index 00000000..7f61975d --- /dev/null +++ b/versioned_sidebars/version-v1.1-sidebars.json @@ -0,0 +1,454 @@ +{ + "version-v1.1/docs": [ + { + "collapsed": false, + "type": "category", + "label": "Getting Started", + "items": [ + { + "type": "doc", + "id": "version-v1.1/getting-started/introduction" + }, + { + "type": "doc", + "id": "version-v1.1/install" + }, + { + "type": "doc", + "id": "version-v1.1/quick-start" + } + ] + }, + { + "collapsed": false, + "type": "category", + "label": "Core Concepts", + "items": [ + { + "type": "doc", + "id": "version-v1.1/core-concepts/architecture" + }, + { + "type": "doc", + "id": "version-v1.1/core-concepts/application" + } + ] + }, + { + "collapsed": false, + "type": "category", + "label": "Case Studies", + "items": [ + { + "type": "doc", + "id": "version-v1.1/case-studies/jenkins-cicd" + }, + { + "type": "doc", + "id": "version-v1.1/case-studies/gitops" + }, + { + "type": "doc", + "id": "version-v1.1/case-studies/canary-blue-green" + }, + { + "type": "doc", + "id": "version-v1.1/case-studies/multi-cluster" + } + ] + }, + { + "collapsed": false, + "type": "category", + "label": "User Manuals", + "items": [ + { + "collapsed": true, + "type": "category", + "label": "Components", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/components/helm" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/kustomize" + }, + { + "collapsed": true, + "type": "category", + "label": "Cloud Services", + "items": [ + { + "collapsed": true, + "type": "category", + "label": "Terraform", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/components/cloud-services/terraform/alibaba-ack" + }, + { + "type": "doc", + "id": 
"version-v1.1/end-user/components/cloud-services/terraform/alibaba-eip" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cloud-services/terraform/alibaba-rds" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cloud-services/terraform/alibaba-oss" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cloud-services/provider-and-consume-cloud-services" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "CUE Component", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/components/cue/webservice" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cue/worker" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cue/task" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/cue/raw" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/end-user/components/more" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Traits", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/traits/ingress" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/rollout" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/autoscaler" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/kustomize-patch" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/annotations-and-labels" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/service-binding" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/sidecar" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/traits/more" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Policies", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/policies/envbinding" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/policies/health" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Workflow", + "items": [ + { + "type": "doc", + "id": "version-v1.1/end-user/workflow/webhook-notification" + }, + { + "type": "doc", + "id": "version-v1.1/end-user/workflow/component-dependency-parameter" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/end-user/version-control" + } + ] + }, + { + "collapsed": false, + "type": "category", + "label": "Administrator Manuals", + "items": [ + { + "collapsed": false, + "type": "category", + "label": "Learning OAM", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/oam/oam-model" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/oam/x-definition" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Learning CUE", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/cue/basic" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/cue/definition-edit" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/cue/advanced" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Component System", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/components/custom-component" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/components/component-terraform" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Traits System", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/traits/customize-trait" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/traits/patch-trait" + }, + { + "type": "doc", + "id": 
"version-v1.1/platform-engineers/traits/status" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/traits/advanced" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Workflow System", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/workflow/workflow" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/workflow/built-in-workflow-defs" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/workflow/cue-actions" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "System Operation", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/system-operation/bootstrap-parameters" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/system-operation/managing-clusters" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/system-operation/observability" + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/system-operation/performance-finetuning" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Debugging", + "items": [ + { + "type": "doc", + "id": "version-v1.1/platform-engineers/debug/dry-run" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/platform-engineers/advanced-install" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "References", + "items": [ + { + "collapsed": true, + "type": "category", + "label": "CLI", + "items": [ + { + "type": "doc", + "id": "version-v1.1/cli/vela_components" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_config" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_env" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_init" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_up" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_version" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_exec" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_logs" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_ls" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_port-forward" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_show" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_status" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_workloads" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_traits" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_system" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_template" + }, + { + "type": "doc", + "id": "version-v1.1/cli/vela_cap" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/developers/references/kubectl-plugin" + } + ] + }, + { + "collapsed": true, + "type": "category", + "label": "Roadmap", + "items": [ + { + "type": "doc", + "id": "version-v1.1/roadmap/README" + } + ] + }, + { + "type": "doc", + "id": "version-v1.1/developers/references/devex/faq" + } + ] +} diff --git a/versions.json b/versions.json index b6dd69b1..7ae9bae7 100644 --- a/versions.json +++ b/versions.json @@ -1,3 +1,4 @@ [ + "v1.1", "v1.0" -] \ No newline at end of file +]