Feat: remove the unuseful documents (#599)

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
This commit is contained in:
barnettZQG 2022-04-14 11:04:27 +08:00 committed by GitHub
parent 31a2065af2
commit 5b72c29cdf
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 8 additions and 2365 deletions

View File

@ -1,209 +0,0 @@
---
title: Progressive Rollout with Istio
---
## Introduction
The application deployment model in KubeVela is designed and implemented with an extreme level of extensibility at heart. Hence, KubeVela can easily integrate with any existing tools to superpower your application delivery with modern technologies such as Service Mesh immediately, without writing dirty glue code/scripts.
This guide will introduce how to use KubeVela and [Istio](https://istio.io/latest/) to do an advanced canary release process. In this process, KubeVela will help you to:
- ship Istio capabilities to end users without asking them to become Istio experts (i.e., KubeVela provides you with a rollout trait as an abstraction);
- design canary release steps and do rollout/rollback in a declarative workflow, instead of managing the whole process manually or with ugly scripts.
We will use the well-known [bookinfo](https://istio.io/latest/docs/examples/bookinfo/?ie=utf-8&hl=en&docs-search=Canary) application as the sample.
## Preparation
If Istio is not installed in your cluster yet, enable the Istio cluster addon:
```shell
vela addon enable istio
```
Otherwise, you just need to apply the 4 YAML files under this [path](https://github.com/oam-dev/kubevela/tree/master/vela-templates/addons/istio/definitions).
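For example, assuming you have cloned the kubevela repository locally and are at its root, a sketch of applying the whole definitions directory in one go (the path is the one linked above):
```shell
# apply all Istio-related definition files from a local clone of the kubevela repo
kubectl apply -f vela-templates/addons/istio/definitions/
```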
The default namespace needs to be labeled so that Istio will automatically inject sidecars.
```shell
kubectl label namespace default istio-injection=enabled
```
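To confirm the label took effect, you can inspect the namespace labels; the output should include `istio-injection=enabled`:
```shell
kubectl get namespace default --show-labels
```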
## Initial deployment
Deploy the Application of `bookinfo`:
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/first-deploy.yaml
```
The component architecture and relationship of the application are as follows:
![book-info-struct](../resources/book-info-struct.jpg)
This Application has four Components. The `productpage`, `ratings`, and `details` components are configured with an `expose` Trait to expose cluster-level services,
and the `reviews` component has a `canary-traffic` Trait.
The `productpage` component is also configured with an `istio-gateway` Trait, allowing the Component to receive traffic coming from outside the cluster. The example below shows that it sets `gateway:ingressgateway` to use Istio's default gateway, and `hosts: "*"` to specify that any request can enter the gateway.
```yaml
...
- name: productpage
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
port: 9080
traits:
- type: expose
properties:
port:
- 9080
- type: istio-gateway
properties:
hosts:
- "*"
gateway: ingressgateway
match:
- exact: /productpage
- prefix: /static
- exact: /login
- prefix: /api/v1/products
port: 9080
...
```
You can port-forward to the gateway as follows:
```shell
kubectl port-forward service/istio-ingressgateway -n istio-system 19082:80
```
Visit http://127.0.0.1:19082/productpage through the browser and you will see the following page.
![pic-v2](../resources/canary-pic-v2.jpg)
## Canary Release
Next, we take the `reviews` Component as an example and simulate a complete canary release: first upgrade only part of the component instances while shifting a portion of the traffic, so as to achieve a progressive canary release.
Execute the following command to update the application.
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/rollout-v2.yaml
```
This operation updates the image of the `reviews` Component from v2 to v3. At the same time, the Rollout Trait of the `reviews` Component specifies a target of two instances to upgrade, in two batches of one instance each.
In addition, a canary-traffic Trait has been added to the Component.
```yaml
...
- name: reviews
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
port: 9080
volumes:
- name: wlp-output
type: emptyDir
mountPath: /opt/ibm/wlp/output
- name: tmp
type: emptyDir
mountPath: /tmp
traits:
- type: expose
properties:
port:
- 9080
- type: rollout
properties:
targetSize: 2
rolloutBatches:
- replicas: 1
- replicas: 1
- type: canary-traffic
properties:
port: 9080
...
```
This update also adds an upgrade Workflow to the Application, which contains three steps.
The first step upgrades only the first batch of instances by setting `batchPartition` to 0, and uses `traffic.weightedTargets` to shift 10% of the traffic to the new version of the instances.
After the first step completes, the Workflow suspends at the second step, waiting for the user to verify the service status.
The third step completes the upgrade of the remaining instances and shifts all traffic to the new component version.
```yaml
...
workflow:
steps:
- name: rollout-1st-batch
type: canary-rollout
properties:
# just upgrade first batch of component
batchPartition: 0
traffic:
weightedTargets:
- revision: reviews-v1
weight: 90 # 90% stays on the old version
- revision: reviews-v2
weight: 10 # 10% shift to new version
# give user time to verify part of traffic shifting to newRevision
- name: manual-approval
type: suspend
- name: rollout-rest
type: canary-rollout
properties:
# upgrade all batches of component
batchPartition: 1
traffic:
weightedTargets:
- revision: reviews-v2
weight: 100 # 100% shift to new version
...
```
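While the release is in progress (and while it is suspended at the `manual-approval` step), you can watch the workflow state from the application status; a quick check, assuming the Application is named `book-info` as in the resume command below:
```shell
vela status book-info
```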
After the update is complete, visit the previous URL multiple times in the browser. There is about a 10% chance that you will see the new page shown below.
![pic-v3](../resources/canary-pic-v3.jpg)
You can see that on the new version of the page the five-pointed stars have changed from black to red.
### Continue with Full Release
If the service meets expectations during manual verification, continue the Workflow to complete the full release by executing the following command.
```shell
vela workflow resume book-info
```
If you refresh the webpage several times in the browser, you will find that the five-pointed stars are always red.
### Rollback to The Old Version
During manual verification, if the service does not meet expectations, you can terminate the pre-defined release workflow and roll back the instances and the traffic to the previous version.
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/rollback.yaml
```
This basically updates the workflow to a single rollback step:
```yaml
...
workflow:
steps:
- name: rollback
type: canary-rollback
```
Under the hood, it changes (a rough sketch of the resulting Istio objects follows this list):
- the Rollout spec to target the old version, which rolls the new-version replicas back to the old version and keeps the existing old-version replicas as they are;
- the VirtualService spec to shift all traffic to the old version;
- the DestinationRule spec to update the subsets to contain only the old version.
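The sketch below is illustrative only; the actual resource names, subsets, and labels are generated by KubeVela and will differ:
```yaml
# hypothetical post-rollback state (names and labels are illustrative)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: reviews-v1   # all traffic routed back to the old revision
          weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: reviews-v1           # only the old revision is kept as a subset
      labels:
        app.oam.dev/revision: reviews-v1  # illustrative selector for the old revision's pods
```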
See? All the complexity of the work is kept away from users and exposed as one simple step!
If you keep visiting the website in the browser, you will find that the five-pointed stars have changed back to black.

View File

@ -1,3 +0,0 @@
---
title: Practical Case
---

View File

@ -1,5 +0,0 @@
---
title: Practical Case
---
TBD

View File

@ -1,211 +0,0 @@
---
title: Progressive Rollout with Istio
---
## Introduction
The application delivery model behind KubeVela (OAM) is highly extensible by design and by implementation. As a result, KubeVela can integrate with any cloud-native technology or tool (such as a Service Mesh) without any messy glue code or scripts, and let the advanced technologies in the community ecosystem immediately empower your application delivery.
This guide introduces how to use KubeVela together with [Istio](https://istio.io/latest/) to run an advanced canary release process. In this process, KubeVela will help you to:
- encapsulate and abstract Istio's capabilities before handing them to end users, so that users can run this canary release scenario without becoming Istio experts (KubeVela provides a packaged Rollout trait for you);
- design the canary release steps and perform the rollout/rollback through a declarative workflow, instead of managing the process with messy scripts or manual operations.
In this case we will use the classic microservice application [bookinfo](https://istio.io/latest/docs/examples/bookinfo/?ie=utf-8&hl=en&docs-search=Canary) to demonstrate the canary release process described above.
## Preparation
If Istio is not installed in your cluster yet, you can enable the Istio cluster addon with the following command:
```shell
vela addon enable istio
```
If Istio is already installed in your cluster, you only need to apply the four YAML files under [this directory](https://github.com/oam-dev/kubevela/tree/master/vela-templates/addons/istio/definitions) to achieve the same effect as enabling the addon above.
Because the following example runs in the default namespace, the default namespace needs to be labeled so that Istio automatically injects sidecars.
```shell
kubectl label namespace default istio-injection=enabled
```
## Initial Deployment
Execute the following command to deploy the bookinfo application.
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/first-deploy.yaml
```
The component architecture and access relationships of the application are as follows:
![book-info-struct](../resources/book-info-struct.jpg)
The application contains four components. The `productpage`, `details`, and `ratings` components are each configured with an expose trait to expose their services inside the cluster.
The `reviews` component is configured with a canary-traffic trait.
The `productpage` component is also configured with an istio-gateway trait, which lets the component receive traffic entering the cluster. This trait sets `gateway:ingressgateway` to use Istio's default gateway implementation, and `hosts: "*"` to specify that requests carrying any host header may enter the gateway.
```yaml
...
- name: productpage
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
port: 9080
traits:
- type: expose
properties:
port:
- 9080
- type: istio-gateway
properties:
hosts:
- "*"
gateway: ingressgateway
match:
- exact: /productpage
- prefix: /static
- exact: /login
- prefix: /api/v1/products
port: 9080
...
```
You can forward the gateway's port to your local machine by executing the following command.
```shell
kubectl port-forward service/istio-ingressgateway -n istio-system 19082:80
```
Visit http://127.0.0.1:19082/productpage in your browser and you will see the following page.
![pic-v2](../resources/canary-pic-v2.jpg)
## Canary Release
Next, we take the `reviews` component as an example and simulate a complete canary release: first upgrade only part of the component instances while adjusting the traffic at the same time, so as to achieve a progressive canary release.
Execute the following command to update the application.
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/rollout-v2.yaml
```
This operation updates the image of the `reviews` component from v2 to v3. At the same time, the Rollout trait of the `reviews` component specifies a target of 2 instances to upgrade, in two batches of 1 instance each.
```yaml
...
- name: reviews
type: webservice
properties:
image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
port: 9080
volumes:
- name: wlp-output
type: emptyDir
mountPath: /opt/ibm/wlp/output
- name: tmp
type: emptyDir
mountPath: /tmp
traits:
- type: expose
properties:
port:
- 9080
- type: rollout
properties:
targetSize: 2
rolloutBatches:
- replicas: 1
- replicas: 1
- type: canary-traffic
properties:
port: 9080
...
```
This update also adds an upgrade workflow to the application, which contains three steps.
The first step upgrades only the first batch of instances by setting `batchPartition` to 0, and uses `traffic.weightedTargets` to shift 10% of the traffic to the new version of the instances.
After the first step completes, the workflow suspends at the second step, waiting for the user to verify the service status.
The third step completes the upgrade of the remaining instances and shifts all traffic to the new component version.
```yaml
...
workflow:
steps:
- name: rollout-1st-batch
type: canary-rollout
properties:
# just upgrade first batch of component
batchPartition: 0
traffic:
weightedTargets:
- revision: reviews-v1
weight: 90 # 90% stays on the old version
- revision: reviews-v2
weight: 10 # 10% shift to new version
# give user time to verify part of traffic shifting to newRevision
- name: manual-approval
type: suspend
- name: rollout-rest
type: canary-rollout
properties:
# upgrade all batches of component
batchPartition: 1
traffic:
weightedTargets:
- revision: reviews-v2
weight: 100 # 100% shift to new version
...
```
After the update is complete, visit the previous URL multiple times in the browser. There is about a 10% chance that you will see the new page shown below.
![pic-v3](../resources/canary-pic-v3.jpg)
You can see that on the new version of the page the five-pointed stars have changed from black to red.
### Continue with the Full Release
If the service meets expectations during manual verification, continue the workflow to complete the full release by executing the following command.
```shell
vela workflow resume book-info
```
If you keep refreshing the page in the browser, you will find that the five-pointed stars are always red.
### Terminate the Release Workflow and Roll Back
If the service does not meet expectations during manual verification, terminate the pre-defined release workflow and switch the traffic and instances back to the previous version by executing the following command:
```shell
vela up -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/canary-rollout-use-case/rollback.yaml
```
This operation updates the Workflow definition to use the `canary-rollback` step:
```yaml
...
workflow:
steps:
- name: rollback
type: canary-rollback
```
Under the hood, this operation changes:
- the `targetRevisionName` of the Rollout object to the old version, which automatically rolls all released new-version instances back to the old version and keeps the not-yet-upgraded old-version instances as they are;
- the `route` field of the VirtualService object to direct all traffic to the old version;
- the `subset` field of the DestinationRule object to keep only the old version.
See? So many operations, yet all that is exposed to the user is one simple step definition; all the complexity is abstracted away and runs automatically behind the scenes!
If you keep visiting the URL in the browser, you will find that the five-pointed stars have changed back to black.

View File

@ -1,572 +0,0 @@
---
title: Practical Case - Li Auto
---
## Background
Li Auto's backend services use a microservice architecture. Although they are deployed on Kubernetes, day-to-day operations are still complex and have the following characteristics:
- For an application to run and serve traffic, it normally needs a supporting DB instance and a Redis cluster.
- There are dependencies between applications, with strong requirements on deployment order.
- The deployment process needs to interact with external systems (such as the configuration center).
Below, a classic Li Auto scenario is used as an example to show how KubeVela fulfills these requirements.
## A Typical Scenario
![scenario-architecture](../resources/li-auto-inc.jpg)
The scenario contains two applications, `base-server` and `proxy-server`, and the overall deployment must satisfy the following conditions:
- After base-server starts successfully (status ready), it needs to register information with the configuration center (Apollo).
- base-server needs to be bound to a Service and an Ingress for load balancing.
- proxy-server must start after base-server is running successfully, and it needs the clusterIP of the Service that belongs to base-server.
- proxy-server depends on the Redis middleware and must start after Redis is running successfully.
- proxy-server needs to read base-server's registration information from the configuration center (Apollo).
Clearly, the whole deployment process would be extremely difficult and error-prone if operated by hand. With KubeVela, this scenario can easily be automated and operated with one click.
## Solution
On KubeVela, the requirements above can be broken down into the following KubeVela concepts:
- Components: three in total, namely base-server, redis, and proxy-server.
- Trait: ingress (including a Service) as a general-purpose load-balancing trait.
- Workflow: deploys the components according to their dependencies and handles the interaction with the configuration center.
- Application: Li Auto developers release their applications through KubeVela's application deployment plan (Application).
The detailed process is as follows.
## Platform Customization
Li Auto's platform engineers build the capabilities involved in this solution through the following steps and expose them to developer users (implemented by writing definitions).
### 1. Define the Components
- Write the component definition for base-service, using `Deployment` as the workload and exposing the parameters `image` and `cluster` to end users. At release time, end users then only need to care about the image and the name of the target cluster.
- Write the component definition for proxy-service, using an Argo `Rollout` as the workload, likewise exposing the parameters `image` and `cluster`.
As shown below:
```
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: base-service
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
kube:
template:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
appId: BASE-SERVICE
appName: base-service
version: 0.0.1
name: base-service
spec:
replicas: 2
revisionHistoryLimit: 5
selector:
matchLabels:
app: base-service
template:
metadata:
labels:
antiAffinity: none
app: base-service
appId: BASE-SERVICE
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- base-service
- key: antiAffinity
operator: In
values:
- none
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: base-service
- name: LOG_BASE
value: /data/log
- name: RUNTIME_CLUSTER
value: default
image: base-service
imagePullPolicy: Always
name: base-service
ports:
- containerPort: 11223
protocol: TCP
- containerPort: 11224
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: base-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /logs/dev/base-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[6].value"
- "spec.template.metadata.labels.cluster"
---
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: proxy-service
spec:
workload:
definition:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
schematic:
kube:
template:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
labels:
appId: PROXY-SERVICE
appName: proxy-service
version: 0.0.0
name: proxy-service
spec:
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: proxy-service
strategy:
canary:
steps:
- setWeight: 50
- pause: {}
template:
metadata:
labels:
app: proxy-service
appId: PROXY-SERVICE
cluster: default
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- proxy-service
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: proxy-service
- name: LOG_BASE
value: /app/data/log
- name: RUNTIME_CLUSTER
value: default
image: proxy-service:0.1
imagePullPolicy: Always
name: proxy-service
ports:
- containerPort: 11024
protocol: TCP
- containerPort: 11025
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /app/data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: proxy-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /app/logs/dev/proxy-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[5].value"
- "spec.template.metadata.labels.cluster"
```
### 2. Define the Trait
Write the definition of the load-balancing trait, which implements load balancing by generating the native Kubernetes resources `Service` and `Ingress`.
The parameters exposed to end users are domain and http: domain specifies the domain name, and http configures the routing, mapping the ports of the deployed service to different URL paths.
As shown below:
```
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
http: [string]: int
}
outputs: {
"service": {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: app: context.name
ports: [for ph, pt in parameter.http{
protocol: "TCP"
port: pt
targetPort: pt
}]
}
}
"ingress": {
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata: {
name: "\(context.name)-ingress"
namespace: context.namespace
}
spec: rules: [{
host: parameter.domain
http: paths: [for ph, pt in parameter.http {
path: ph
pathType: "Prefix"
backend: service: {
name: context.name
port: number: pt
}
}]
}]
}
}
```
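For instance, with the parameter values used later in this case (`domain: base-service.dev.example.com` and `http: "/": 11001` on the `base-service` component), this template renders resources roughly like the sketch below; the namespace and KubeVela-managed metadata are omitted:
```yaml
# approximate rendering of the ingress trait for base-service
apiVersion: v1
kind: Service
metadata:
  name: base-service
spec:
  selector:
    app: base-service
  ports:
    - protocol: TCP
      port: 11001
      targetPort: 11001
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: base-service-ingress
spec:
  rules:
    - host: base-service.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: base-service
                port:
                  number: 11001
```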
### 3. Define the Workflow Steps
- Define the apply-base workflow step: deploys base-server, waits for the component to start successfully, and then registers information with the registry center. The exposed parameter is component; end users only need to specify the component name when using the apply-base step in a pipeline.
- Define the apply-helm workflow step: deploys the Redis Helm chart and waits for Redis to start successfully. The exposed parameter is component; end users only need to specify the component name when using the apply-helm step in a pipeline.
- Define the apply-proxy workflow step: deploys proxy-server and waits for the component to start successfully. The exposed parameters are component and backendIP, where component is the component name and backendIP is the IP of the component that proxy-server depends on.
As shown below:
```
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-base
namespace: vela-system
spec:
schematic:
cue:
template: |-
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
// wait until the deployment is ready
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
message: {...}
// write configuration to the third-party configuration center (Apollo)
notify: op.#HTTPPost & {
url: "appolo-address"
request: body: json.Marshal(message)
}
// expose the ClusterIP of the service
clusterIP: apply.traits["service"].value.spec.clusterIP
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-helm
namespace: vela-system
spec:
schematic:
cue:
template: |-
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
chart: op.#Read & {
value: {
// metadata of the redis chart
...
}
}
// wait until redis is ready
wait: op.#ConditionalWait & {
// todo
continue: chart.value.status.phase=="ready"
}
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-proxy
namespace: vela-system
spec:
schematic:
cue:
template: |-
import (
"vela/op"
"encoding/json"
)
parameter: {
component: string
backendIP: string
}
// read configuration from the third-party configuration center (Apollo)
// config: op.#HTTPGet
apply: op.#ApplyComponent & {
component: parameter.component
// inject BackendIP into the environment variables
workload: patch: spec: template: spec: {
containers: [{
// patchKey=name
env: [{name: "BackendIP",value: parameter.backendIP}]
},...]
}
}
// wait until the argo rollout is ready
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
```
### How Users Use It
Li Auto's development engineers can then use an Application to release the application.
Developers can directly use the general capabilities that the platform engineers customized on KubeVela as shown above, and easily write the application deployment plan.
> In the example below, the workflow's input/output data-passing mechanism is used to pass base-server's clusterIP to proxy-server.
```
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: lixiang-app
spec:
components:
- name: base-service
type: base-service
properties:
image: nginx:1.14.2
# used to distinguish the Apollo environment
cluster: default
traits:
- type: ingress
properties:
domain: base-service.dev.example.com
http:
"/": 11001
# redis has no dependencies; after it starts, the Service endpoints need to be written to Apollo through an HTTP API
- name: "redis"
type: helm
properties:
chart: "redis-cluster"
version: "6.2.7"
repoUrl: "https://charts.bitnami.com/bitnami"
repoType: helm
- name: proxy-service
type: proxy-service
properties:
image: nginx:1.14.2
# used to distinguish the Apollo environment
cluster: default
traits:
- type: ingress
properties:
domain: proxy-service.dev.example.com
http:
"/": 11002
workflow:
steps:
- name: apply-base-service
type: apply-base
outputs:
- name: baseIP
exportKey: clusterIP
properties:
component: base-service
- name: apply-redis
type: apply-helm
properties:
component: redis
- name: apply-proxy-service
type: apply-proxy
inputs:
- from: baseIP
parameterKey: backendIP
properties:
component: proxy-service
```
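From here, delivery is one command; a minimal sketch, assuming the Application above is saved locally as `lixiang-app.yaml`:
```shell
# create/update the application and let the workflow run
vela up -f lixiang-app.yaml
# watch the workflow steps (apply-base-service -> apply-redis -> apply-proxy-service)
vela status lixiang-app
```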

View File

@ -1,174 +0,0 @@
---
title: Multi-Cluster Deployment with Workflow
---
This case study walks through how to use KubeVela for multi-cluster application deployment, covering the complete process from cluster creation, cluster registration, environment initialization, and multi-cluster scheduling all the way to deploying an application across clusters.
- With KubeVela's environment initialization (Initializer) capability, we can create a Kubernetes cluster and register it with the central control cluster; the same capability can also install the system dependencies required for application management.
- With KubeVela's multi-cluster, multi-environment deployment (EnvBinding) capability, we can apply environment-specific configuration to the application and choose which clusters its resources are dispatched to.
## Before You Begin
- First, you need a Kubernetes cluster (version 1.20+) as the control cluster with KubeVela already installed, and the control cluster needs an APIServer address reachable from the public network. Unless otherwise stated, all steps in this case study are performed on the control cluster.
- In this scenario, KubeVela uses [OCM (open-cluster-management)](https://open-cluster-management.io/getting-started/quick-start/) under the hood to do the actual multi-cluster resource distribution.
- The YAML manifests and shell scripts used in this case study live under [docs/examples/workflow-with-ocm](https://github.com/oam-dev/kubevela/tree/master/docs/examples/workflow-with-ocm) in the KubeVela project. Please download the example and run the terminal commands below from that directory.
- This case study uses an Alibaba Cloud ACK cluster as the example. Creating Alibaba Cloud resources requires the corresponding credentials, so you need to store your Alibaba Cloud account's AK/SK in a Secret in the control cluster.
```shell
export ALICLOUD_ACCESS_KEY=xxx; export ALICLOUD_SECRET_KEY=yyy
```
```shell
# If you want to use Alibaba Cloud Security Token Service, also export the ALICLOUD_SECURITY_TOKEN environment variable.
export ALICLOUD_SECURITY_TOKEN=zzz
```
```shell
# The prepare-alibaba-credentials.sh script reads the environment variables and creates a secret in the current cluster.
sh hack/prepare-alibaba-credentials.sh
```
```shell
$ kubectl get secret -n vela-system
NAME TYPE DATA AGE
alibaba-account-creds Opaque 1 11s
```
## Initialize the Ability to Create Alibaba Cloud Resources
We can use KubeVela's environment initialization capability to enable the system-level ability to create Alibaba Cloud resources. This initialization mainly exposes the credentials configured earlier and initializes the Terraform system addon. We name this Initializer `terraform-alibaba` and deploy it:
```shell
kubectl apply -f initializers/init-terraform-alibaba.yaml
```
### Create the Initializer `terraform-alibaba`
```shell
kubectl apply -f initializers/init-terraform-alibaba.yaml
```
When the `PHASE` field of the Initializer `terraform-alibaba` shows `success`, the environment initialization has succeeded; this may take about 1 minute.
```shell
$ kubectl get initializers.core.oam.dev -n vela-system
NAMESPACE NAME PHASE AGE
vela-system terraform-alibaba success 94s
```
## Initialize the Multi-Cluster Scheduling Capability
We use KubeVela's environment initialization capability to enable multi-cluster scheduling. This initialization mainly creates a new ACK cluster and manages the newly created cluster with the OCM multi-cluster management solution. We name this Initializer `managed-cluster` and deploy it:
```shell
kubectl apply -f initializers/init-managed-cluster.yaml
```
In addition, for the newly created cluster to be usable by the control cluster, we also need to register it with the control cluster. We define one workflow step to pass the new cluster's certificate information, and another workflow step to complete the cluster registration.
**Define a custom workflow step named `create-ack` that creates the cluster**, and deploy it:
```shell
kubectl apply -f definitions/create-ack.yaml
```
**Define a custom workflow step named `register-cluster` that registers the cluster**, and deploy it:
```shell
kubectl apply -f definitions/register-cluster.yaml
```
### Create the Initializer
1. Install the workflow step definitions `create-ack` and `register-cluster`.
```shell
kubectl apply -f definitions/create-ack.yaml
kubectl apply -f definitions/register-cluster.yaml
```
2. Change the `hubAPIServer` parameter of the workflow step `register-ack` to the public address of the control cluster's APIServer.
```yaml
- name: register-ack
type: register-cluster
inputs:
- from: connInfo
parameterKey: connInfo
properties:
# fill in the public address of the control cluster's APIServer
hubAPIServer: {{ public network address of APIServer }}
env: prod
initNameSpace: default
patchLabels:
purpose: test
```
3. Create the Initializer `managed-cluster`.
```
kubectl apply -f initializers/init-managed-cluster.yaml
```
When the `PHASE` field of the Initializer `managed-cluster` shows `success`, the environment initialization has succeeded. You may need to wait about 15-20 minutes (it takes Alibaba Cloud roughly 15 minutes to create an ACK cluster).
```shell
$ kubectl get initializers.core.oam.dev -n vela-system
NAMESPACE NAME PHASE AGE
vela-system managed-cluster success 20m
```
After the Initializer `managed-cluster` succeeds, you can see that the new cluster `poc-01` has been registered with the control cluster.
```shell
$ kubectl get managedclusters.cluster.open-cluster-management.io
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
poc-01 true {{ APIServer address }} True True 30s
```
## Deploy the Application to a Specified Cluster
After the administrator finishes registering the clusters, users can specify in the application deployment plan which cluster the resources should be deployed to.
```shell
kubectl apply -f app.yaml
```
Check whether the application deployment plan `workflow-demo` has been created successfully.
```shell
$ kubectl get app workflow-demo
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
workflow-demo podinfo-server webservice running true 7s
```
You can switch to the newly created ACK cluster and check whether the resources have been deployed successfully.
```shell
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
podinfo-server 1/1 1 1 40s
```
```shell
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
podinfo-server-auxiliaryworkload-85d7b756f9 LoadBalancer 192.168.57.21 < EIP > 9898:31132/TCP 50s
```
The Service `podinfo-server` is bound to an EXTERNAL-IP, which allows users to access the application over the public network; you can open `http://<EIP>:9898` in a browser to visit the application you just created.
![workflow-with-ocm-demo](../resources/workflow-with-ocm-demo.png)
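You can also check reachability from the command line; replace `<EIP>` with the EXTERNAL-IP shown above:
```shell
curl http://<EIP>:9898
```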
The application deployment plan `workflow-demo` above uses the built-in `env-binding` policy to apply environment-specific configuration: it changes the image of the `podinfo-server` component and the type of the `expose` trait so that requests from outside the cluster are allowed, and the `env-binding` policy also specifies the placement strategy that deploys the resources into the newly registered ACK cluster.
The delivery workflow of the application deployment plan also uses the built-in `multi-env` workflow step definition to specify which configured component is deployed to the cluster.

View File

@ -16,7 +16,7 @@ module.exports = {
items: [
"end-user/quick-start-cli",
"case-studies/multi-cluster",
"case-studies/jenkins-cicd",
// "case-studies/jenkins-cicd",
"case-studies/gitops",
"case-studies/initialize-env",
],
@ -24,7 +24,7 @@ module.exports = {
{
type: "category",
label: "Basics",
collapsed: false,
collapsed: true,
items: [
"getting-started/core-concept",
"getting-started/architecture",
@ -90,25 +90,20 @@ module.exports = {
"tutorials/jenkins",
"tutorials/trigger",
"tutorials/workflows",
"tutorials/sso"
// "case-studies/jenkins-cicd",
// "case-studies/canary-blue-green",
"tutorials/sso",
],
},
{
type: "category",
label: "Basics",
collapsed: false,
items: [
"getting-started/velaux-concept",
],
collapsed: true,
items: ["getting-started/velaux-concept"],
},
{
type: "category",
label: "How-to Guides",
collapsed: true,
items: [
// TODO: complete the docs
{
"Manage applications": [
@ -235,7 +230,7 @@ module.exports = {
"reference/addons/overview",
"reference/addons/velaux",
"reference/addons/terraform",
"reference/addons/ai"
"reference/addons/ai",
],
},
"end-user/components/cloud-services/cloud-resources-list",

View File

@ -26,10 +26,6 @@
"type": "doc",
"id": "version-v1.3/case-studies/multi-cluster"
},
{
"type": "doc",
"id": "version-v1.3/case-studies/jenkins-cicd"
},
{
"type": "doc",
"id": "version-v1.3/case-studies/gitops"
@ -41,7 +37,7 @@
]
},
{
"collapsed": false,
"collapsed": true,
"type": "category",
"label": "Basics",
"items": [
@ -215,7 +211,7 @@
]
},
{
"collapsed": false,
"collapsed": true,
"type": "category",
"label": "Basics",
"items": [