Workflow doc: Core Concept + End User + Admin (#153)

* workflow doc structure

* add workflow core concept

* update workflow concept

* update title

* common step actions 

common step actions

* fix

fix

* cn docs (#157)

* Feat(workflow): add apply components and traits docs (#158)

* Feat(workflow): add apply components and traits docs

* resolve comments

* end user doc: add data flow

* add patch

* Feat(workflow): add apply remaining and multi envs docs (#160)

* Feat(workflow): add apply remaining and multi envs docs

* Update docs/end-user/workflow/apply-remaining.md

* Update docs/end-user/workflow/apply-remaining.md

* Update docs/end-user/workflow/apply-remaining.md

Co-authored-by: Hongchao Deng <hongchaodeng1@gmail.com>

* update

* Fix(workflow): optimize the document structure (#161)

* add input/output admin guide

* update workflow CN docs (#159)

* update workflow doc

* fix build

* upate zh doc: core concept

* Fix(workflow): delete workflow step definitions in docs (#167)

* update data flow

* update zh dataflow

* add zh steps

* update context

* upgrade workflow en docs (#171)

* comment

* comment

* fix(workflow): resolve some comments in workflow end users' docs (#172)

* comment

* update workflow

* Workflow cue-actions  docs format (#175)

* upgrade workflow en docs

* format

* en

* update context

* update workflow

* fix Li-Auto-Inc doc (#176)

* improve case

* action

* inc

* Fix(workflow): fix examples in workflow end users guide (#177)

* fix

Co-authored-by: Jian.Li <74582607+leejanee@users.noreply.github.com>
Co-authored-by: Tianxin Dong <dongtianxin.tx@alibaba-inc.com>
Hongchao Deng 2021-08-18 19:22:09 +08:00 committed by GitHub
parent bc5116a779
commit b877b516a6
33 changed files with 2360 additions and 70 deletions

.gitignore vendored
View File

@ -20,4 +20,6 @@ yarn-debug.log*
yarn-error.log*
# editor and IDE paraphernalia
.idea
.idea
_tmp/

View File

@ -0,0 +1,3 @@
---
title: Practical Case
---

View File

@ -2,4 +2,138 @@
title: Workflow
---
TBD
Workflow in KubeVela empowers users to glue any operational tasks to automate the delivery of applications to hybrid environments.
It is designed to customize the control logic -- not just blindly apply all resources, but provide more procedural flexibility.
This provides solutions to build more complex operations, e.g. workflow suspend, approval gate, data flow, multi-stage rollout, A/B testing.
Workflow is modular by design.
Each module is defined by a Definition CRD and exposed via K8s API.
Under the hood, it uses a powerful declarative language -- CUE as the superglue for your favourite tools and processes.
Here is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
components:
- name: database
type: helm
properties:
repoUrl: chart-repo-url
chart: mysql
- name: web
type: helm
properties:
repoUrl: chart-repo-url
chart: my-web
# Deploy the database first and then the web component.
# In each step, it ensures the resource has been deployed successfully before jumping to next step.
# The connection information will be emitted as output from database and input for web component.
workflow:
# Workflow contains multiple steps and each step instantiates from a Definition.
# By running a workflow of an application, KubeVela will orchestrate the flow of data between steps.
steps:
- name: deploy-database
type: apply-and-wait
outputs:
- name: db-conn
exportKey: outConn
properties:
component: database
resourceType: StatefulSet
resourceAPIVersion: apps/v1beta2
names:
- mysql
- name: deploy-web
type: apply-and-wait
inputs:
- from: db-conn # input comes from the output from `deploy-database` step
parameterKey: dbConn
properties:
component: web
resourceType: Deployment
resourceAPIVersion: apps/v1
names:
- my-web
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-and-wait
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
parameter: {
component: string
names: [...string]
resourceType: string
resourceAPIVersion: string
dbConn?: string
}
// apply the component
apply: op.#ApplyComponent & {
component: parameter.component
if dbConn != _|_ {
spec: containers: [{env: [{name: "DB_CONN",value: parameter.dbConn}]}]
}
}
// iterate through given resource names and wait for them
step: op.#Steps & {
for index, resource in parameter.names {
// read resource object
"resource-\(index)": op.#Read & {
value: {
kind: parameter.resourceType
apiVersion: parameter.resourceAPIVersion
metadata: {
name: resource
namespace: context.namespace
}
}
}
// wait until resource object satisfy given condition.
"wait-\(index)": op.#ConditionalWait & {
if step["resource-\(index)"].workload.status.ready == "true" {
continue: true
}
}
}
}
outConn: apply.status.address.ip
```
Here is a more detailed explanation of the above example:
- There is a WorkflowStepDefinition that defines the templated operation process:
- It applies the specified component.
It uses the `op.#ApplyComponent` action which applies all resources of a component.
- It then waits for all resources of the given names to be ready.
It uses the `op.#Read` action, which reads a resource into a specified key,
and then `op.#ConditionalWait`, which waits until the `continue` field becomes true.
- There is an Application that uses the predefined Definition to initiate delivery of two service components:
- It first does `apply-and-wait` on the `database` component.
This invokes the templated process defined above with the given properties.
- Once the first step is finished, it exports the value of the `outConn` key to an output named `db-conn`,
which means any later step can use the `db-conn` output as input.
- The second step takes the `db-conn` output as input and fills its value into the parameter key `dbConn`.
- It then does `apply-and-wait` on the `web` component.
This invokes the same templated process as before, except that this time the `dbConn` field has a value.
This means the container env field will be rendered as well.
- Once the second step is finished, the workflow runs to completion and stops.
So far we have introduced the basic concept of KubeVela Workflow. For next steps, you can:
- [Try out hands-on workflow scenarios](../end-user/workflow/apply-component).
- [Read how to create your own Definition module](../platform-engineers/workflow/steps).
- [Learn the design behind the workflow system](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/workflow_policy.md).

View File

@ -0,0 +1,98 @@
---
title: Apply Components and Traits
---
In this guide, you will learn how to apply components and traits in `Workflow`.
## How to use
Apply the following `Application`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-workflow
namespace: default
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8000
- name: nginx-server
type: webservice
properties:
image: nginx:1.21
port: 80
workflow:
steps:
- name: express-server
# specify the workflow step type
type: apply-component
properties:
# specify the component name
component: express-server
- name: manual-approval
# suspend is a built-in task of workflow used to suspend the workflow
type: suspend
- name: nginx-server
type: apply-component
properties:
component: nginx-server
```
If we want to suspend the workflow for manual approval before applying certain components, we can use the `suspend` step to pause the workflow.
In this case, the workflow will be suspended after applying the first component. The second component will not be applied until the `resume` command is called.
Check the status after applying the `Application`:
```shell
$ kubectl get app first-vela-workflow
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
first-vela-workflow express-server webservice workflowSuspending 2s
```
We can use `vela workflow resume` to resume the workflow.
```shell
$ vela workflow resume first-vela-workflow
Successfully resume workflow: first-vela-workflow
```
Check the status, the `Application` is now `runningWorkflow`:
```shell
$ kubectl get app first-vela-workflow
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
first-vela-workflow express-server webservice running true 10s
```
## Expected outcome
Check the component status in cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 3m28s
nginx-server 1/1 1 1 3s
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
express-server <none> testsvc.example.com 80 4m7s
```
We can see that all the components and traits have been applied to the cluster.

View File

@ -0,0 +1,81 @@
---
title: Apply Remaining
---
If we have applied some resources and do not want to specify the rest one by one, KubeVela provides the `apply-remaining` workflow step to filter out selected resources and apply the remaining ones.
In this guide, you will learn how to apply remaining resources via `apply-remaining` in `Workflow`.
## How to use
Apply the following `Application` with workflow step type of `apply-remaining`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-workflow
namespace: default
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8000
- name: express-server2
type: webservice
properties:
image: crccheck/hello-world
port: 8000
workflow:
steps:
- name: express-server
# specify the workflow step type
type: apply-remaining
properties:
exceptions:
# specify the configuration of the component
express-server:
# skipApplyWorkload indicates whether to skip apply the workload resource
skipApplyWorkload: false
# skipAllTraits indicates to skip apply all resources of the traits
# if this is true, skipApplyTraits will be ignored
skipAllTraits: false
# skipApplyTraits specifies the names of the traits to skip apply
skipApplyTraits:
- ingress
- name: express-server2
type: apply-remaining
properties:
exceptions:
express-server:
skipApplyWorkload: true
```
## Expected outcome
Check the component status in cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 3m28s
$ kubectl get ingress
No resources found in default namespace.
```
We can see that the first component `express-server` has been applied to the cluster, but its trait named `ingress` has been skipped.
The second component `express-server2` hasn't been applied to the cluster since it was skipped.
With `apply-remaining`, we can easily filter and apply resources by filling in the built-in parameters.
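For reference, a step type like `apply-remaining` is typically backed by a WorkflowStepDefinition that wraps the `op.#ApplyRemaining` action described in the CUE Actions reference for platform engineers. The following is only a minimal sketch under that assumption, not the exact built-in definition; the parameter shape simply mirrors the `exceptions` block used above:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
  # hypothetical name, for illustration only
  name: my-apply-remaining
spec:
  schematic:
    cue:
      template: |
        import ("vela/op")
        parameter: {
          // mirrors the `exceptions` structure used in the Application above
          exceptions?: [componentName=string]: {
            skipApplyWorkload: *true | bool
            skipAllTraits:     *true | bool
            skipApplyTraits: [...string]
          }
        }
        // apply every component of the Application except the configured exceptions
        apply: op.#ApplyRemaining & {
          if parameter.exceptions != _|_ {
            exceptions: parameter.exceptions
          }
        }
```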

View File

@ -1,4 +0,0 @@
---
title: Components Topology
---

View File

@ -1,5 +1,118 @@
---
title: Multi-Enviroment
title: Multi Environments
---
If we have multiple clusters, we want to apply our application to the test cluster first, and only apply it to the production cluster after the application in the test cluster is running. KubeVela provides the `multi-env` workflow step to manage multiple environments.
In this guide, you will learn how to manage multiple environments via `multi-env` in `Workflow`.
## How to use
Apply the following `Application` with workflow step type of `multi-env`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: multi-env-demo
namespace: default
spec:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.21
port: 80
policies:
- name: test-env
type: env-binding
properties:
created: false
envs:
- name: test
patch:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.20
port: 80
placement:
clusterSelector:
labels:
purpose: test
- name: prod-env
type: env-binding
properties:
created: false
envs:
- name: prod
patch:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.20
port: 80
placement:
clusterSelector:
labels:
purpose: prod
workflow:
steps:
- name: deploy-server
# specify the workflow step type
type: multi-env
properties:
# specify the component name
component: nginx-server
# specify the policy name
policy: test-env
# specify the env name in policy
env: test
- name: manual-approval
# suspend is a built-in task of workflow used to suspend the workflow
type: suspend
- name: deploy-prod-server
type: multi-env
properties:
component: nginx-server
policy: prod-env
env: prod
```
## Expected outcome
First, check the component status in `test` cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-server 1/1 1 1 1m10s
```
Use the `vela workflow resume` command after everything is ok in the test cluster:
```shell
$ vela workflow resume multi-env-demo
Successfully resume workflow: multi-env-demo
```
Then, check the component status in `prod` cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-server 1/1 1 1 1m10s
```
We can see that the component has been applied to both clusters.
With `multi-env`, we can easily manage applications in multiple environments.
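How you point `kubectl` at each cluster depends on your own setup; assuming your kubeconfig has one context per managed cluster (the context names below are placeholders), you can check them separately:
```shell
# context names are hypothetical entries in your kubeconfig
$ kubectl --context test-cluster get deployment nginx-server
$ kubectl --context prod-cluster get deployment nginx-server
```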

View File

@ -1,2 +0,0 @@
---
title: Custom Workflow

View File

@ -1,3 +0,0 @@
---
title: Introduction
---

View File

@ -0,0 +1,87 @@
---
title: Workflow Context
---
When defining the CUE template of the WorkflowStepDefinition,
you can use the `context` to get metadata of the Application.
Here is an example:
```yaml
kind: WorkflowStepDefinition
metadata:
name: my-step
spec:
cue:
template: |
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
workload: patch: {
metadata: {
labels: {
app: context.name
version: context.labels["version"]
}
}
}
}
```
When we deploy the following Application:
```yaml
kind: Application
metadata:
name: example-app
labels:
version: v1.0
spec:
workflow:
steps:
- name: example
type: my-step
properties:
component: example
```
Now `context.XXX` will be filled with the corresponding Application's metadata:
```
apply: op.#ApplyComponent & {
...
workload: patch: metadata: labels: {
app: "example-app" // context.name
version: "v1.0" // context.labels["version"]
}
}
```
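As a quick, non-exhaustive recap based only on the fields used in these guides, a step template can read the following pieces of Application metadata from `context`:
```
// a sketch for illustration; other context fields may exist depending on the KubeVela version
metadata: {
    name:      context.name                    // the Application name
    namespace: context.namespace               // the Application namespace
    labels: version: context.labels["version"] // any Application label, looked up by key
}
```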

View File

@ -0,0 +1,185 @@
---
title: CUE Actions
---
This doc illustrates the CUE actions provided in the `vela/op` stdlib package.
> To learn the syntax of CUE, read [CUE Basic](../cue/basic.md)
## Apply
---
Create or update resource in kubernetes cluster.
### Action Parameter
- value: the resource structure to be created or updated. After successful execution, `value` will be updated with the resource status.
- patch: the content supports strategic merge patch; you can define the list-merge strategy through comments such as `// patchKey=name`.
```
#Apply: {
value: {...}
patch: {
//patchKey=$key
...
}
}
```
### Usage
```
import "vela/op"
stepName: op.#Apply & {
value: {
kind: "Deployment"
apiVersion: "apps/v1"
metadata: name: "test-app"
spec: {
replicas: 2
...
}
}
patch: {
spec: template: spec: {
//patchKey=name
containers: [{name: "sidecar"}]
}
}
}
```
## ConditionalWait
---
The step will be blocked until the condition is met.
### Action Parameter
- continue: the step will be blocked until this value becomes `true`.
```
#ConditionalWait: {
continue: bool
}
```
### Usage
```
import "vela/op"
apply: op.#Apply
wait: op.#ConditionalWait & {
continue: apply.value.status.phase=="running"
}
```
## Load
---
Get component from application by component name.
### Action Parameter
- component: the component name.
- workload: the workload resource of the component.
- auxiliaries: the auxiliary resources of the component.
```
#Load: {
component: string
value: {
workload: {...}
auxiliaries: [string]: {...}
}
}
```
### Usage
```
import "vela/op"
// You can use load.value.workload and load.value.auxiliaries after this action.
load: op.#Load & {
component: "component-name"
}
```
## Read
---
Get resource in kubernetes cluster.
### Action Parameter
- value: the metadata of the resource to get. After successful execution, `value` will be updated with the resource definition from the cluster.
- err: if an error occurs, the `err` will contain the error message.
```
#Read: {
value: {}
err?: string
}
```
### Usage
```
// You can use configmap.value.data after this action.
configmap: op.#Read & {
value: {
kind: "ConfigMap"
apiVersion: "v1"
metadata: {
name: "configmap-name"
namespace: "configmap-ns"
}
}
}
```
## ApplyComponent
---
Create or update the resources corresponding to a component in the Kubernetes cluster.
### Action Parameter
- component: the component name.
- workload: the workload resource of the component. The value will be filled after successful execution.
- traits: the trait resources of the component. The values will be filled after successful execution.
```
#ApplyComponent: {
component: string
workload: {...}
traits: [string]: {...}
}
```
### Usage
```
apply: op.#ApplyComponent & {
component: "component-name"
}
```
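Other guides in this section also pass a `workload: patch:` block to `op.#ApplyComponent` to modify the rendered workload before it is applied (see the Workflow Context doc and the Li Auto Inc. case study). Here is a minimal sketch of that usage; the container name and environment variable are made up for illustration:
```
apply: op.#ApplyComponent & {
    component: "component-name"
    // patch the rendered workload before it is applied
    workload: patch: spec: template: spec: {
        // patchKey=name
        containers: [{name: "main", env: [{name: "EXAMPLE_ENV", value: "example"}]}]
    }
}
```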
## ApplyRemaining
---
Create or update the resources corresponding to all components in the application in the Kubernetes cluster. Use `exceptions` to specify which components should not be applied, or to skip some resources of an excluded component.
### Action Parameter
- exceptions: indicates the name of the exceptional component.
- skipApplyWorkload: indicates whether to skip apply the workload resource.
- skipAllTraits: indicates to skip apply all resources of the traits.
- skipApplyTraits: specifies the names of the traits to skip apply.
```
#ApplyRemaining: {
exceptions?: [componentName=string]: {
skipApplyWorkload: *true | bool
// If this is true, skipApplyTraits will be ignored
skipAllTraits: *true| bool
skipApplyTraits: [...string]
}
}
```
### Usage
```
apply: op.#ApplyRemaining & {
exceptions: {"applied-component-name": {}}
}
```
## Steps
---
Used to encapsulate a set of operations.
- Inside steps, you need to specify the execution order with the `@step` tag.
### Usage
```
app: op.#Steps & {
load: op.#Load & {
component: "component-name"
} @step(1)
apply: op.#Apply & {
value: load.value.workload
} @step(2)
}
```
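`#Steps` can also be combined with a CUE comprehension to fan out over a list, as the workflow core concept example does. A minimal sketch, assuming `parameter.names` holds the names of the resources to read (the kind and apiVersion here are placeholders):
```
step: op.#Steps & {
    for index, resource in parameter.names {
        // read each named resource from the cluster
        "resource-\(index)": op.#Read & {
            value: {
                kind:       "Deployment"
                apiVersion: "apps/v1"
                metadata: {
                    name:      resource
                    namespace: context.namespace
                }
            }
        }
    }
}
```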

View File

@ -0,0 +1,126 @@
---
title: Data Flow
---
## What's Data Flow
In KubeVela, data flow enables users to pass data from one workflow step to another.
You can orchestrate the data flow by specifying declarative config -- inputs and outputs of each step.
This doc will explain how to specify data inputs and outputs.
> Full example available at: https://github.com/oam-dev/kubevela/blob/master/docs/examples/workflow
## Outputs
An output exports the data corresponding to a key in the CUE template of a workflow step.
Once the workflow step has finished running, the output will have the data from the key.
Here is an example to specify the output in Application:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
workflow:
steps:
- name: deploy-server1
type: apply-component
properties:
component: "server1"
outputs:
- name: server1IP
# Any key can be exported from the CUE template of the Definition
exportKey: "myIP"
```
Here is an example CUE template that contains the export key:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-component
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
}
// load component from application
component: op.#Load & {
component: parameter.component
}
// apply workload to kubernetes cluster
apply: op.#ApplyComponent & {
component: parameter.component
}
// export podIP
myIP: apply.workload.status.podIP
```
The output can then be consumed as an input in the following steps.
## Inputs
An input takes the data of an output to fill a parameter in the CUE template of a workflow step.
The parameter will be filled before running the workflow step.
Here is an example to specify the input in Application:
```yaml
kind: Application
spec:
...
workflow:
steps:
...
- name: deploy-server2
type: apply-with-ip
inputs:
- from: server1IP
parameterKey: serverIP
properties:
component: "server2"
```
Here is an example CUE template that takes the input parameter:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-with-ip
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
// the input value will be used to fill this parameter
serverIP?: string
}
// load component from application
component: op.#Load & {
component: parameter.component
value: {}
}
// apply workload to kubernetes cluster
apply: op.#Apply & {
value: {
component.value.workload
metadata: name: parameter.component
if parameter.serverIP!=_|_{
// this data will override the env fields of the workload container
spec: containers: [{env: [{name: "PrefixIP",value: parameter.serverIP}]}]
}
}
}
// wait until workload.status equal "Running"
wait: op.#ConditionalWait & {
continue: apply.value.status.phase =="Running"
}
```
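Putting the two halves together, the wiring in the Application is just the matching output/input pair between the two steps; a condensed sketch using the same names as above:
```yaml
workflow:
  steps:
    - name: deploy-server1
      type: apply-component
      properties:
        component: "server1"
      outputs:
        - name: server1IP      # exported from the `myIP` key of apply-component
          exportKey: "myIP"
    - name: deploy-server2
      type: apply-with-ip
      inputs:
        - from: server1IP      # consumes the output above
          parameterKey: serverIP
      properties:
        component: "server2"
```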

View File

@ -0,0 +1,71 @@
---
title: Steps
---
## What's a Step
A Workflow Step instantiates from a Definition and runs the instance.
It finds the corresponding Definition by the `type` field:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
workflow:
steps:
- name: deploy-server1
# This is the name of the Definition to actually create an executable instance
type: apply-component
properties:
component: server1
...
```
## How to define a Step
The platform admin prepares the WorkflowStepDefinitions for developers to use.
Basically, the Definition provides a templated process to automate operational tasks.
This hides the complexities and exposes only high-level parameters to simplify the user experience.
Here's an example of a Definition:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-component
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
prefixIP?: string
}
// load component from application
component: op.#Load & {
component: parameter.component
}
// apply workload to kubernetes cluster
apply: op.#ApplyComponent & {
component: parameter.component
}
// wait until workload.status equal "Running"
wait: op.#ConditionalWait & {
continue: apply.workload.status.phase =="Running"
}
// export podIP
myIP: apply.workload.status.podIP
```
## User Parameters
Inside the CUE template, the parameters exposed to users are defined in the `parameter` key.
The workflow step properties from the Application will be used to fill the parameters.
Besides properties, we also support data flow to input parameter values from outputs of other steps.
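For example, with the `apply-component` definition above, a step can fill `component` from its properties and the optional `prefixIP` from another step's output via an input; a small sketch, assuming an earlier step exports an output named `server1IP`:
```yaml
workflow:
  steps:
    - name: deploy-server2
      type: apply-component
      # `properties` fills the `component` parameter of the CUE template
      properties:
        component: server2
      # an input fills the optional `prefixIP` parameter from a previous output
      inputs:
        - from: server1IP
          parameterKey: prefixIP
```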
## CUE Actions
The rest of the CUE keys are actions that will be executed in order by KubeVela.
To learn how to compose such actions, read the [CUE Actions Reference](./cue-actions).

Binary file not shown (new image, 14 KiB).

BIN
docs/resources/li.jpg Normal file

Binary file not shown (new image, 14 KiB).

View File

@ -0,0 +1,559 @@
---
title: Practical Case - Li Auto Inc.
---
## Background
The backend services of Li Auto Inc. follow a microservice architecture. Although they are deployed with Kubernetes, operations remain complex, with the following characteristics:
- To run and serve traffic, an application normally needs a dedicated database instance and a Redis cluster.
- Applications depend on each other, so there are strong requirements on deployment order.
- The deployment process needs to interact with external systems (such as a configuration center).
Below we take a classic scenario from Li Auto Inc. as an example and show how to meet these requirements with KubeVela.
## A Typical Scenario
![Scenario architecture](../resources/li-auto-inc.jpg)
The scenario involves two applications, base-server and proxy-server, and the overall deployment has to satisfy the following conditions:
- After base-server starts successfully (status ready), its information needs to be registered to the Apollo configuration center.
- base-server needs to be bound to a Service and an Ingress for load balancing.
- proxy-server must start after base-server is running successfully, and it needs the clusterIP of base-server's Service.
- proxy-server depends on the Redis middleware and must start after Redis is running.
- proxy-server needs to read base-server's registration information from the Apollo configuration center.
Doing all of this by hand would be difficult and error-prone; with KubeVela, the whole scenario can be automated and operated with one click.
## Solution
On KubeVela, the requirements above can be broken down into the following KubeVela models:
- Components: three of them, namely base-server, redis, and proxy-server.
- Trait: ingress (including a Service) as a general-purpose load-balancing trait.
- Workflow: deploys the components in dependency order and interacts with the configuration center.
- Application: developers at Li Auto Inc. release applications through a KubeVela Application.
The detailed process is as follows.
## Platform Customization
The platform engineers of Li Auto Inc. build the capabilities involved in the solution through the following steps and expose them to developer users (all of them are implemented by writing definitions).
### 1. Define the components
- Write the component definition for base-service, which uses a `Deployment` as the workload and exposes the parameters `image` and `cluster` to end users (see below). End users then only need to care about the image and the target cluster when releasing.
- Write the component definition for proxy-service, which uses an Argo `Rollout` as the workload and likewise exposes the parameters `image` and `cluster` (see below).
```
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: base-service
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
kube:
template:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
appId: BASE-SERVICE
appName: base-service
version: 0.0.1
name: base-service
spec:
replicas: 2
revisionHistoryLimit: 5
selector:
matchLabels:
app: base-service
template:
metadata:
labels:
antiAffinity: none
app: base-service
appId: BASE-SERVICE
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- base-service
- key: antiAffinity
operator: In
values:
- none
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: base-service
- name: LOG_BASE
value: /data/log
- name: RUNTIME_CLUSTER
value: default
image: base-service
imagePullPolicy: Always
name: base-service
ports:
- containerPort: 11223
protocol: TCP
- containerPort: 11224
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: base-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/base-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /logs/dev/base-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[6].value"
- "spec.template.metadata.labels.cluster"
---
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: proxy-service
spec:
workload:
definition:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
schematic:
kube:
template:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
labels:
appId: PROXY-SERVICE
appName: proxy-service
version: 0.0.0
name: proxy-service
spec:
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: proxy-service
strategy:
canary:
steps:
- setWeight: 50
- pause: {}
template:
metadata:
labels:
app: proxy-service
appId: PROXY-SERVICE
cluster: default
version: 0.0.1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- proxy-service
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_NAME
value: proxy-service
- name: LOG_BASE
value: /app/data/log
- name: RUNTIME_CLUSTER
value: default
image: proxy-service:0.1
imagePullPolicy: Always
name: proxy-service
ports:
- containerPort: 11024
protocol: TCP
- containerPort: 11025
protocol: TCP
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /app/data
name: sidecar-sre
- mountPath: /app/skywalking
name: skywalking
initContainers:
- args:
- 'echo "do something" '
command:
- /bin/sh
- -c
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_NAME
value: proxy-service
image: busybox
imagePullPolicy: Always
name: sidecar-sre
resources:
limits:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /tmp/data/log/proxy-service
name: log-volume
- mountPath: /scratch
name: sidecar-sre
terminationGracePeriodSeconds: 120
volumes:
- hostPath:
path: /app/logs/dev/proxy-service
type: DirectoryOrCreate
name: log-volume
- emptyDir: {}
name: sidecar-sre
- emptyDir: {}
name: skywalking
parameters:
- name: image
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].image"
- name: cluster
required: true
type: string
fieldPaths:
- "spec.template.spec.containers[0].env[5].value"
- "spec.template.metadata.labels.cluster"
```
### 2. Define the trait
Write the trait definition for load balancing (see below). It implements load balancing by generating the native Kubernetes resources `Service` and `Ingress`.
The parameters exposed to end users are domain and http: domain specifies the domain name, and http sets up the routing, mapping the ports of the deployed service to different URL paths.
```
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
http: [string]: int
}
outputs: {
"service": {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: app: context.name
ports: [for ph, pt in parameter.http{
protocol: "TCP"
port: pt
targetPort: pt
}]
}
}
"ingress": {
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata: {
name: "\(context.name)-ingress"
namespace: context.namespace
}
spec: rules: [{
host: parameter.domain
http: paths: [for ph, pt in parameter.http {
path: ph
pathType: "Prefix"
backend: service: {
name: context.name
port: number: pt
}
}]
}]
}
}
```
### 3. Define the workflow steps
- Define the apply-base workflow step: it deploys base-server, waits until the component is up, and then registers its information to the configuration center. The exposed parameter is component, so end users only need to specify the component name when using apply-base in the pipeline.
- Define the apply-helm workflow step: it deploys the redis Helm chart and waits until Redis is up. The exposed parameter is component, so end users only need to specify the component name when using apply-helm in the pipeline.
- Define the apply-proxy workflow step: it deploys proxy-server and waits until the component is up. The exposed parameters are component and backendIP, where component is the component name and backendIP is the IP of the service that proxy-server depends on.
```
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-base
namespace: vela-system
spec:
schematic:
cue:
template: |-
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
// wait until deployment ready
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
message: {...}
// write configuration to the third-party Apollo configuration center
notify: op.#HTTPPost & {
url: "appolo-address"
request: body: json.Marshal(message)
}
// export service ClusterIP
clusterIP: apply.traits["service"].value.spec.clusterIP
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-helm
namespace: vela-system
spec:
schematic:
cue:
template: |-
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
}
chart: op.#Read & {
value: {
// the metadata of redis resource
...
}
}
// wait redis ready
wait: op.#ConditionalWait & {
// todo
continue: chart.value.status.phase=="ready"
}
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-proxy
namespace: vela-system
spec:
schematic:
cue:
template: |-
import (
"vela/op"
"encoding/json"
)
parameter: {
component: string
backendIP: string
}
// read configuration from the third-party Apollo configuration center
// config: op.#HTTPGet
apply: op.#ApplyComponent & {
component: parameter.component
// inject BackendIP into the environment variables
workload: patch: spec: template: spec: {
containers: [{
// patchKey=name
env: [{name: "BackendIP",value: parameter.backendIP}]
},...]
}
}
// wait until argo.rollout ready
wait: op.#ConditionalWait & {
continue: apply.workload.status.readyReplicas == apply.workload.status.replicas && apply.workload.status.observedGeneration == apply.workload.metadata.generation
}
```
### How developers use it
Development engineers at Li Auto Inc. can then release applications with an Application resource.
They can directly use the generic capabilities customized by the platform engineers on KubeVela above and easily write the application deployment plan.
> In the example below, the workflow data-passing mechanism (input/output) is used to pass the clusterIP of base-server to proxy-server.
```
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: lixiang-app
spec:
components:
- name: base-service
type: base-service
properties:
image: nginx:1.14.2
# used to distinguish the Apollo environment
cluster: default
traits:
- type: ingress
properties:
domain: base-service.dev.example.com
http:
"/": 11001
# redis has no dependencies; after it starts, the endpoints of its Service need to be written to Apollo via an HTTP API
- name: "redis"
type: helm
properties:
chart: "redis-cluster"
version: "6.2.7"
repoUrl: "https://charts.bitnami.com/bitnami"
repoType: helm
- name: proxy-service
type: proxy-service
properties:
image: nginx:1.14.2
# used to distinguish the Apollo environment
cluster: default
traits:
- type: ingress
properties:
domain: proxy-service.dev.example.com
http:
"/": 11002
workflow:
steps:
- name: apply-base-service
type: apply-base
outputs:
- name: baseIP
exportKey: clusterIP
properties:
component: base-service
- name: apply-redis
type: apply-helm
properties:
component: redis
- name: apply-proxy-service
type: apply-proxy
inputs:
- from: baseIP
parameterKey: backendIP
properties:
component: proxy-service
```

View File

@ -0,0 +1,5 @@
---
title: vela workflow
---
Operate workflow

View File

@ -1,5 +1,134 @@
---
title: Delivery Workflow
title: Workflow
---
In KubeVela, Workflow lets users glue operational tasks together into a single process, automating the fast delivery of cloud-native applications to any hybrid environment.
By design, Workflow is about customizing the control logic: not just blindly applying all resources, but providing procedural flexibility.
For example, Workflow enables complex operations such as suspending, manual approval, waiting on status, data passing, multi-environment canary releases, and A/B testing.
Workflow is modular by design.
Each workflow module is defined by a Definition CRD and exposed to users through the Kubernetes API.
As a "super glue", workflow modules can combine any of your tools and processes through the CUE language.
This lets you create your own modules with a powerful declarative language and cloud-native APIs.
Here is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
components:
- name: database
type: helm
properties:
repoUrl: chart-repo-url
chart: mysql
- name: web
type: helm
properties:
repoUrl: chart-repo-url
chart: my-web
# Deploy the database component first, then the web component.
# Each step makes sure the resources are fully deployed before moving on to the next one.
# The connection information is emitted as output from the database component and passed as input to the web component.
workflow:
steps:
- name: deploy-database
type: apply-and-wait
outputs:
- name: db-conn
exportKey: outConn
properties:
component: database
resourceType: StatefulSet
resourceAPIVersion: apps/v1beta2
names:
- mysql
- name: deploy-web
type: apply-and-wait
inputs:
- from: db-conn
parameterKey: dbConn
properties:
component: web
resourceType: Deployment
resourceAPIVersion: apps/v1
names:
- my-web
---
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-and-wait
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
parameter: {
component: string
names: [...string]
resourceType: string
resourceAPIVersion: string
dbConn?: string
}
apply: op.#ApplyComponent & {
component: parameter.component
if dbConn != _|_ {
spec: containers: [{env: [{name: "DB_CONN",value: parameter.dbConn}]}]
}
}
// iterate over each specified resource and wait until its status is satisfied
step: op.#Steps & {
for index, resource in parameter.names {
"resource-\(index)": op.#Read & {
value: {
kind: parameter.resourceType
apiVersion: parameter.resourceAPIVersion
metadata: {
name: resource
namespace: context.namespace
}
}
}
"wait-\(index)": op.#ConditionalWait & {
if step["resource-\(index)"].workload.status.ready == "true" {
continue: true
}
}
}
}
outConn: apply.status.address.ip
```
Here is a more detailed explanation of the above example:
- There is a WorkflowStepDefinition that defines the templated operation process:
- The definition first applies the component specified by the user.
It uses the `op.#ApplyComponent` action to apply all resources of the specified component.
- It then waits for all the specified resources to be ready.
It uses the `op.#Read` action to read a resource into a specified key, and then `op.#ConditionalWait` to wait until the `continue` field becomes true.
- There is also an Application that shows how to use the predefined Definition to start the process of delivering two service components:
- The application first runs the `apply-and-wait` step for the `database` component.
This triggers the process in the step definition to run with the given configuration.
- Once the first step is finished, it exports the value of the `outConn` key to an output named `db-conn`,
which means any later step can use the `db-conn` output as its input.
- The second step takes the previously exported `db-conn` value as input and fills it into the parameter `dbConn`.
- The second step then runs; it is also of type `apply-and-wait`, but this time it targets the `web` component, and this time the `dbConn` parameter has an input value.
This means the corresponding container environment variable field is rendered as well.
- Once the second step is finished, the whole workflow runs to completion and stops.
So far we have introduced the basic concepts of KubeVela Workflow. As next steps, you can:
- [Try out hands-on workflow scenarios](../end-user/workflow/apply-component).
- [Learn how to create your own Definition module](../platform-engineers/workflow/steps).
- [Learn about the design behind the workflow system](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/workflow_policy.md).

View File

@ -0,0 +1,99 @@
---
title: Apply Components and Traits
---
In this guide, you will learn how to apply components and traits in a workflow.
## How to use
Apply the following application deployment plan:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-workflow
namespace: default
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8000
- name: nginx-server
type: webservice
properties:
image: nginx:1.21
port: 80
workflow:
steps:
- name: express-server
# specify the workflow step type
type: apply-component
properties:
# specify the component name
component: express-server
- name: manual-approval
# suspend is a built-in workflow task type used to suspend the workflow
type: suspend
- name: nginx-server
type: apply-component
properties:
component: nginx-server
```
In some cases, we need to suspend the whole workflow for manual approval before applying certain components.
In this example, the workflow is suspended after the first component is applied; the second component is not applied until the resume command is issued.
After applying the Application, check the workflow status:
```shell
$ kubectl get app first-vela-workflow
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
first-vela-workflow express-server webservice workflowSuspending 2s
```
You can resume the workflow with the `vela workflow resume` command.
```shell
$ vela workflow resume first-vela-workflow
Successfully resume workflow: first-vela-workflow
```
Check the Application again; its status has changed to running:
```shell
$ kubectl get app first-vela-workflow
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
first-vela-workflow express-server webservice running true 10s
```
## Expected outcome
Check the status of the components in the cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 3m28s
nginx-server 1/1 1 1 3s
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
express-server <none> testsvc.example.com 80 4m7s
```
As you can see, all the components and traits have been successfully applied to the cluster.

View File

@ -0,0 +1,80 @@
---
title: Apply Remaining
---
In some cases we do not want to apply every resource, but after skipping the unwanted ones, specifying the rest one by one is tedious. KubeVela provides an `apply-remaining` workflow step that lets users conveniently filter out unwanted resources and apply the remaining components.
In this guide, you will learn how to apply remaining resources via `apply-remaining` in a workflow.
## How to use
Apply the following application deployment plan, whose workflow step type is `apply-remaining`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: first-vela-workflow
namespace: default
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8000
- name: express-server2
type: webservice
properties:
image: crccheck/hello-world
port: 8000
workflow:
steps:
- name: express-server
# specify the workflow step type
type: apply-remaining
properties:
exceptions:
# configure the parameters for the component
express-server:
# skipApplyWorkload indicates whether to skip applying the workload
skipApplyWorkload: false
# skipAllTraits indicates whether to skip applying all trait resources
# if this is true, skipApplyTraits will be ignored
skipAllTraits: false
# skipApplyTraits specifies the traits to skip applying
skipApplyTraits:
- ingress
- name: express-server2
type: apply-remaining
properties:
exceptions:
express-server:
skipApplyWorkload: true
```
## Expected outcome
Check the status of the components in the cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 3m28s
$ kubectl get ingress
No resources found in default namespace.
```
As you can see, the first component `express-server` has been applied to the cluster, but its `ingress` trait has not been applied.
The second component `express-server2` was skipped and has not been applied to the cluster.
By filling in the parameters provided by `apply-remaining`, users can easily filter which resources get applied.

View File

@ -1,35 +0,0 @@
---
title: Blue-Green Release
---
## Description
Configures Canary deployment strategy for your application.
## Specification
List of all configuration options for a `Rollout` trait.
```yaml
...
rollout:
replicas: 2
stepWeight: 50
interval: "10s"
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
interval | Schedule interval time | string | true | 30s
stepWeight | Weight percent of every step in rolling update | int | true | 50
replicas | Total replicas of the workload | int | true | 2
## Conflicts With
### `Autoscale`
When `Rollout` and `Autoscle` traits are attached to the same service, they two will fight over the number of instances during rollout. Thus, it's by design that `Rollout` will take over replicas control (specified by `.replicas` field) during rollout.
> Note: in up coming releases, KubeVela will introduce a separate section in Appfile to define release phase configurations such as `Rollout`.

View File

@ -1,4 +0,0 @@
---
title: Components Topology
---

View File

@ -2,4 +2,118 @@
title: Multi-Environment Delivery
---
This guide introduces how to use the multi-environment feature in a workflow.
With multiple clusters, we first need to deploy the application to the test cluster, and only deploy it to the production cluster after everything works there. KubeVela provides a `multi-env` workflow step to help users conveniently manage multi-environment configurations.
In this guide, you will learn how to manage multiple environments via `multi-env` in a workflow.
## How to use
Apply the following application deployment plan, whose workflow step type is `multi-env`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: multi-env-demo
namespace: default
spec:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.21
port: 80
policies:
- name: test-env
type: env-binding
properties:
created: false
envs:
- name: test
patch:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.20
port: 80
placement:
clusterSelector:
labels:
purpose: test
- name: prod-env
type: env-binding
properties:
created: false
envs:
- name: prod
patch:
components:
- name: nginx-server
type: webservice
properties:
image: nginx:1.20
port: 80
placement:
clusterSelector:
labels:
purpose: prod
workflow:
steps:
- name: deploy-test-server
# specify the workflow step type
type: multi-env
properties:
# specify the component name
component: nginx-server
# specify the policy name
policy: test-env
# specify the env name in the policy
env: test
- name: manual-approval
# suspend is a built-in workflow task type used to suspend the workflow
type: suspend
- name: deploy-prod-server
type: multi-env
properties:
component: nginx-server
policy: prod-env
env: prod
```
## Expected outcome
First, check the status of the application in the `test` cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-server 1/1 1 1 1m10s
```
After everything works in the test cluster, resume the workflow with the command:
```shell
$ vela workflow resume multi-env-demo
Successfully resume workflow: multi-env-demo
```
Then, check the status of the application in the `prod` cluster:
```shell
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-server 1/1 1 1 1m10s
```
As you can see, the component with the latest configuration has been successfully deployed to both clusters.
With `multi-env`, we can easily manage applications across multiple environments.

View File

@ -336,7 +336,7 @@ spec:
A workflow step definition describes a step node that can be declared in a workflow. Capabilities such as applying resources, status checks, data outputs, dependency inputs, and external script calls can all be described through workflow step definitions.
The fields of a workflow step are relatively simple: apart from the basic name and functionality, its main behavior is configured in the CUE template. KubeVela ships many built-in operations for workflow templates; see the [workflow](../workflow/basic-workflow.md) documentation to learn how to use and write them.
The fields of a workflow step are relatively simple: apart from the basic name and functionality, its main behavior is configured in the CUE template. KubeVela ships many built-in operations for workflow templates; see the workflow documentation to learn how to use and write them.
```yaml
apiVersion: core.oam.dev/v1beta1

View File

@ -0,0 +1,61 @@
---
title: Workflow Context
---
When defining a WorkflowStepDefinition, you can use `context` to access the metadata of the Application.
Here is an example:
```yaml
kind: WorkflowStepDefinition
metadata:
name: my-step
spec:
cue:
template: |
import ("vela/op")
parameter: {
component: string
}
apply: op.#ApplyComponent & {
component: parameter.component
workload: patch: {
metadata: {
labels: {
app: context.name
version: context.labels["version"]
}
}
}
}
```
When we deploy the following Application:
```yaml
kind: Application
metadata:
name: example-app
labels:
version: v1.0
spec:
workflow:
steps:
- name: example
type: my-step
properties:
component: example
```
`context.XXX` will then be filled with the corresponding Application metadata:
```
apply: op.#ApplyComponent & {
...
workload: patch: metadata: labels: {
app: "example-app" // context.name
version: "v1.0" // context.labels["version"]
}
}
```

View File

@ -0,0 +1,186 @@
---
title: CUE Actions
---
This doc introduces the CUE action types that can be used when defining a step. These actions are all provided by the `vela/op` package.
> Read [CUE Basic](../cue/basic.md) to learn the basic syntax of CUE.
## Apply
--------
Create or update a resource in the Kubernetes cluster.
### Action Parameters
- value: the definition of the resource to be applied. After the action succeeds, `value` is re-rendered with the state of the resource in the cluster.
- patch: a patch applied to the content of `value`; strategic merge is supported, and you can define the list-merge strategy through comments such as `// patchKey=name`.
```
#Apply: {
value: {...}
patch: {
//patchKey=$key
...
}
}
```
### Usage
```
import "vela/op"
stepName: op.#Apply & {
value: {
kind: "Deployment"
apiVersion: "apps/v1"
metadata: name: "test-app"
spec: {
replicas: 2
...
}
}
patch: {
spec: template: spec: {
//patchKey=name
containers: [{name: "sidecar"}]
}
}
}
```
## ConditionalWait
---
Keeps the workflow step waiting until the condition is met.
### Action Parameters
- continue: the workflow step resumes execution only when this value becomes `true`.
```
#ConditionalWait: {
continue: bool
}
```
### Usage
```
import "vela/op"
apply: op.#Apply
wait: op.#ConditionalWait & {
continue: apply.value.status.phase=="running"
}
```
## Load
---
Get the resource data of a component from the application by component name.
### Action Parameters
- component: the name of the component.
- workload: the workload resource definition of the component.
- auxiliaries: the auxiliary resource definitions of the component (keyed by the resource names in the definition's outputs).
```
#Load: {
component: string
value: {
workload: {...}
auxiliaries: [string]: {...}
}
}
```
### Usage
```
import "vela/op"
// After this action, you can access the component's resources through load.value.workload and load.value.auxiliaries.
load: op.#Load & {
component: "component-name"
}
```
## Read
---
Read a resource from the Kubernetes cluster.
### Action Parameters
- value: the metadata of the resource to read, such as kind and name; after the action completes, the resource data from the cluster is filled into `value`.
- err: if the read fails, the error message is reported here as a string.
```
#Read: {
value: {}
err?: string
}
```
### Usage
```
// After this action, you can use the data in the ConfigMap through configmap.value.data.
configmap: op.#Read & {
value: {
kind: "ConfigMap"
apiVersion: "v1"
metadata: {
name: "configmap-name"
namespace: "configmap-ns"
}
}
}
```
## ApplyComponent
---
Create or update all resources corresponding to a component in the Kubernetes cluster.
### Action Parameters
- component: the name of the component to apply.
- workload: after the action completes, the status data of the component's workload resource fetched from the Kubernetes cluster.
- traits: after the action completes, the status data of the component's auxiliary resources fetched from the Kubernetes cluster; it is a map keyed by the names used in the definition's outputs.
```
#ApplyComponent: {
component: string
workload: {...}
traits: [string]: {...}
}
```
### Usage
```
apply: op.#ApplyComponent & {
component: "component-name"
}
```
## ApplyRemaining
---
Create or update the resources of all components in the application in the Kubernetes cluster, and use `exceptions` to specify which components, or which resources of a component, should skip creation and update.
### Action Parameters
- exceptions: the components to be excluded from this action.
- skipApplyWorkload: whether to skip syncing the component's workload resource.
- skipAllTraits: whether to skip syncing all auxiliary resources of the component.
- skipApplyTraits: an array of the names of the component's auxiliary resources to skip (the names used in the definition's outputs).
```
#ApplyRemaining: {
exceptions?: [componentName=string]: {
// skipApplyWorkload indicates whether to skip apply the workload resource
skipApplyWorkload: *true | bool
// skipAllTraits indicates to skip apply all resources of the traits.
// If this is true, skipApplyTraits will be ignored
skipAllTraits: *true| bool
// skipApplyTraits specifies the names of the traits to skip apply
skipApplyTraits: [...string]
}
}
```
### Usage
```
apply: op.#ApplyRemaining & {
exceptions: {"applied-component-name": {}}
}
```
## Steps
---
Used to encapsulate a group of operations.
### Action Parameters
- Inside steps, you need to specify the execution order via the `@step` tag; the smaller the number, the earlier the step runs.
### Usage
```
app: op.#Steps & {
load: op.#Load & {
component: "component-name"
} @step(1)
apply: op.#Apply & {
value: load.value.workload
} @step(2)
}
```

View File

@ -0,0 +1,134 @@
---
title: Data Flow
---
## What's Data Flow
In KubeVela, data flow enables users to pass data between different workflow steps.
You use data flow by writing declarative fields: the inputs and outputs of each step.
This doc explains how to write these fields to use data flow.
> Full example available at: https://github.com/oam-dev/kubevela/blob/master/docs/examples/workflow
## Outputs
An output exports the data of a key in the CUE template of a workflow step.
The exported data can then be used as input by the following steps of the workflow.
Here is an example of specifying outputs in an Application:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
workflow:
steps:
- name: deploy-server1
type: apply-component
properties:
component: "server1"
outputs:
- name: server1IP
# Any key can be exported from the CUE template of the Definition
exportKey: "myIP"
```
Correspondingly, here is the CUE template that provides the exported key:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-component
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
}
// load component from application
component: op.#Load & {
component: parameter.component
}
// apply workload to kubernetes cluster
apply: op.#ApplyComponent & {
component: parameter.component
}
// export podIP
myIP: apply.workload.status.podIP
```
As you can see, when we define the `myIP` key in the CUE template of the WorkflowStepDefinition, and the exportKey of the outputs in the Application also specifies `myIP`, its value is exported. Below we will see how the exported value can be used.
## Inputs
An input refers to a previously exported output and uses its value to fill a specified parameter in the step's CUE template.
The parameter is filled before the workflow step runs.
Here is an example of specifying inputs in an Application:
```yaml
kind: Application
spec:
...
workflow:
steps:
...
- name: deploy-server2
type: apply-with-ip
inputs:
- from: server1IP
parameterKey: serverIP
properties:
component: "server2"
```
As you can see, the inputs of the `deploy-server2` workflow step contain `from: server1IP`, which refers to an output of the previous `deploy-server1` step.
The previously exported value is then used to fill the `serverIP` parameter of `deploy-server2`.
Here is the definition of the `apply-with-ip` WorkflowStepDefinition used by `deploy-server2`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-with-ip
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
// the input value will be used to fill this parameter
serverIP?: string
}
// load component from application
component: op.#Load & {
component: parameter.component
value: {}
}
// apply workload to kubernetes cluster
apply: op.#Apply & {
value: {
component.value.workload
metadata: name: parameter.component
if parameter.serverIP!=_|_{
// this data will override the env fields of the workload container
spec: containers: [{env: [{name: "PrefixIP",value: parameter.serverIP}]}]
}
}
}
// wait until workload.status equal "Running"
wait: op.#ConditionalWait & {
continue: apply.value.status.phase =="Running"
}
```
As you can see, the object rendered by this step needs `serverIP`, i.e. the IP of the previously deployed service, passed in as an environment variable.
With that, we have walked through a complete data flow story.

View File

@ -0,0 +1,72 @@
---
title: Steps
---
## What's a Step
A workflow step instantiates an object from the template of a Definition and runs it.
A workflow step finds its corresponding Definition through the `type` field:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
workflow:
steps:
- name: deploy-server1
# This is the name of the Definition to actually create an executable instance
type: apply-component
properties:
component: server1
...
```
## How to define a Step
Usually, the platform admin prepares the WorkflowStepDefinitions in advance for developers to use.
These definitions contain templated, automated processes for executing procedural tasks.
This way, the complex low-level details are hidden, and only simple, easy-to-use parameters are exposed to users.
Here is an example of a Definition:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: apply-component
spec:
schematic:
cue:
template: |
import ("vela/op")
parameter: {
component: string
prefixIP?: string
}
// load component from application
component: op.#Load & {
component: parameter.component
}
// apply workload to kubernetes cluster
apply: op.#ApplyComponent & {
component: parameter.component
}
// wait until workload.status equal "Running"
wait: op.#ConditionalWait & {
continue: apply.workload.status.phase =="Running"
}
// export podIP
myIP: apply.workload.status.podIP
```
## User Parameters
Inside the CUE template, the parameters exposed to users are defined in the `parameter` key.
The properties of the workflow step in the Application are used to fill these parameters.
Besides properties, data flow is also supported, so a step's parameters can be filled with the outputs of other steps.
## CUE Actions
The remaining CUE fields are actions that will be executed step by step.
To learn more about how to write these CUE actions, read the [CUE Actions doc](./cue-actions).

Binary file not shown (new image, 14 KiB).

View File

@ -26,6 +26,13 @@ module.exports = {
collapsed: false,
items: [
'end-user/initializer-end-user',
{
'Workflow': [
'end-user/workflow/apply-component',
'end-user/workflow/apply-remaining',
'end-user/workflow/multi-env',
]
},
{
'Components': [
'end-user/components/helm',
@ -50,11 +57,6 @@ module.exports = {
},
]
},
{
'Workflow': [
'end-user/workflow/multi-env',
]
},
{
'Traits': [
'end-user/traits/ingress',
@ -89,7 +91,7 @@ module.exports = {
collapsed: false,
items: [
'case-studies/workflow-edge-computing', // to be completed
'case-studies/workflow-lixiang-auto',
'case-studies/li-auto-inc',
],
},
{
@ -116,6 +118,14 @@ module.exports = {
'platform-engineers/initializer/advanced-initializer',
]
},
{
'Worfklow System': [
'platform-engineers/workflow/steps',
'platform-engineers/workflow/context',
'platform-engineers/workflow/data-flow',
'platform-engineers/workflow/cue-actions',
]
},
{
type: 'category',
label: 'Component System',
@ -125,12 +135,6 @@ module.exports = {
'platform-engineers/components/component-terraform',
]
},
{
'Worfklow System': [
'platform-engineers/workflow/basic-workflow',
'platform-engineers/workflow/advanced-workflow',
]
},
{
type: 'category',
label: 'Traits System',