Merge pull request #994 from wonderflow/beautify
beautify docs and refactor the sidebar
|
|
@ -55,10 +55,10 @@ See [this document](https://kubevela.net/docs/install#1-install-velad) for more
|
|||
- Enable related addon
|
||||
|
||||
```shell
|
||||
$ vela addon enable fluxcd
|
||||
$ vela addon enable ingress-nginx
|
||||
$ vela addon enable kruise-rollout
|
||||
$ vela addon enable velaux
|
||||
vela addon enable fluxcd
|
||||
vela addon enable ingress-nginx
|
||||
vela addon enable kruise-rollout
|
||||
vela addon enable velaux
|
||||
```
|
||||
|
||||
In this step, the following addons are started:
|
||||
|
|
@ -70,7 +70,7 @@ In this step, the following addons are started:
|
|||
- Map the Nginx ingress-controller port to local
|
||||
|
||||
```shell
|
||||
$ vela port-forward addon-ingress-nginx -n vela-system
|
||||
vela port-forward addon-ingress-nginx -n vela-system
|
||||
```
|
||||
|
||||
### First Deployment
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Overview of GitOps
|
||||
---
|
||||
|
||||
> This section will introduce how to use KubeVela in GitOps area and why.
|
||||
:::note
|
||||
This section will introduce how to use KubeVela in the GitOps area and why.
|
||||
:::
|
||||
|
||||
GitOps is a continuous delivery method that allows developers to automatically deploy applications by changing code and declarative configurations in a Git repository, with "git-centric" operations such as PR and commit. For detailed benefits of GitOps, you can refer to [this blog](https://www.weave.works/blog/what-is-gitops-really).
|
||||
|
||||
|
|
|
|||
|
|
@ -1,16 +1,16 @@
|
|||
---
|
||||
title: Initialize/Destroy Environment
|
||||
title: Initialize/Destroy Infrastructure of Environment
|
||||
---
|
||||
|
||||
This section will introduce what is environment and how to initialize and destroy an environment with KubeVela easily.
|
||||
This section will introduce how to initialize and destroy the infrastructure of an environment easily with KubeVela.
|
||||
|
||||
## What is environment
|
||||
## What can be the infrastructure of an environment
|
||||
|
||||
An Application development team usually needs to initialize some shared environment for users. An environment is a logical concept that represents a set of common resources for Applications.
|
||||
An Application development team usually needs to initialize some shared environment for users. An environment is a logical concept that represents a set of common infrastructure resources for Applications.
|
||||
|
||||
For example, a team usually wants two environments: one for development, and one for production.
|
||||
|
||||
In general, the resource types that can be initialized include the following types:
|
||||
In general, the infra resources that can be initialized include the following types:
|
||||
|
||||
1. One or more Kubernetes clusters. Different environments may need different sizes and versions of Kubernetes clusters. Environment initialization can also manage multiple clusters.
|
||||
|
||||
|
|
@ -32,8 +32,6 @@ For example, if both the test and develop environments rely on the same controll
|
|||
|
||||
### Directly use Application for initialization
|
||||
|
||||
> Make sure your KubeVela version is `v1.1.6+`.
|
||||
|
||||
If we want to use some CRD controller like [OpenKruise](https://github.com/openkruise/kruise) in cluster, we can use `Helm` to initialize `kruise`.
|
||||
|
||||
We can directly use an Application to initialize a kruise environment. The application below will deploy a kruise controller in the cluster.
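A minimal sketch of what such an initializer Application could look like, assuming the fluxcd addon's `helm` component type; the chart URL, version and namespace below are illustrative assumptions, not the documentation's canonical example:

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: kruise
  namespace: vela-system
spec:
  components:
    - name: kruise
      # the `helm` component type is provided by the fluxcd addon
      type: helm
      properties:
        repoType: helm
        # chart location and version are illustrative, adjust to your environment
        url: https://openkruise.github.io/charts/
        chart: kruise
        version: "1.3.0"
EOF
```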
|
||||
|
|
@ -225,7 +223,7 @@ $ kubectl logs -f log-read-worker-774b58f565-ch8ch
|
|||
|
||||
We can see that both components are running. The two components share the same PVC and use the same ConfigMap.
|
||||
|
||||
## Destroy the Environment
|
||||
## Destroy the infrastructure of environment
|
||||
|
||||
As we have already modeled the environment as a KubeVela Application, we can destroy the environment easily by deleting the application.
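For instance, if the initializer application was deployed as `kruise` in `vela-system` (names are illustrative), the cleanup is a single delete:

```shell
# application name and namespace are illustrative; use the ones you created
vela delete kruise -n vela-system
```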
|
||||
|
||||
|
|
|
|||
|
|
@ -31,7 +31,9 @@ cluster-hangzhou-1 X509Certificate <ENDPOINT_HANGZHOU_1> true
|
|||
cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true
|
||||
```
|
||||
|
||||
> By default, the hub cluster where KubeVela locates is registered as the `local` cluster. You can use it like a managed cluster in spite that you cannot detach it or modify it.
|
||||
:::note
|
||||
By default, the hub cluster where KubeVela is located is registered as the `local` cluster. You can use it like a managed cluster, although you cannot detach it or modify it.
|
||||
:::
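For reference, the managed-cluster table above (including the implicit `local` entry) can be printed at any time with:

```shell
vela cluster list
```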
|
||||
|
||||
## Deliver Application to Clusters
|
||||
|
||||
|
|
@ -197,8 +199,9 @@ spec:
|
|||
namespace: examples-alternative
|
||||
```
|
||||
|
||||
> Sometimes, for security issues, you might want to disable this behavior and retrict the resources to be deployed within the same namespace of the application. This can be done by setting `--allow-cross-namespace-resource=false` in the bootstrap parameter of the KubeVela controller.
|
||||
|
||||
:::tip
|
||||
Sometimes, for security reasons, you might want to disable this behavior and restrict the resources to be deployed within the same namespace as the application. This can be done by setting `--allow-cross-namespace-resource=false` in the [bootstrap parameter](../platform-engineers/system-operation/bootstrap-parameters) of the KubeVela controller.
|
||||
:::
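One hedged way to apply such a bootstrap parameter is to add it to the container arguments of the KubeVela controller Deployment; the `vela-system` namespace is the default, and the exact deployment name depends on your installation, so check it first:

```shell
# find the controller deployment in your installation
kubectl -n vela-system get deployments
# then add --allow-cross-namespace-resource=false to the controller container args
kubectl -n vela-system edit deployment <vela-core-deployment-name>
```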
|
||||
|
||||
### Control the deploy workflow
|
||||
|
||||
|
|
@ -336,7 +339,9 @@ spec:
|
|||
policies: ["topology-hangzhou-clusters", "override-nginx-legacy-image", "override-high-availability"]
|
||||
```
|
||||
|
||||
> NOTE: The override policy is used to modify the basic configuration. Therefore, **it is designed to be used together with topology policy**. If you do not want to use topology policy, you can directly write configurations in the component part instead of using the override policy. *If you misuse the override policy in the deploy workflow step without topology policy, no error will be reported but you will find nothing is deployed.*
|
||||
:::note
|
||||
The override policy is used to modify the basic configuration. Therefore, **it is designed to be used together with topology policy**. If you do not want to use topology policy, you can directly write configurations in the component part instead of using the override policy. *If you misuse the override policy in the deploy workflow step without topology policy, no error will be reported but you will find nothing is deployed.*
|
||||
:::
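As a hedged illustration of that pairing, here is a minimal sketch where an override policy only takes effect because it is referenced together with a topology policy in the same `deploy` step (component name, cluster selector and image tag are illustrative):

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: nginx-with-override
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        # illustrative selector; use the labels of your managed clusters
        clusterLabelSelector:
          region: hangzhou
    - name: override-nginx-legacy-image
      type: override
      properties:
        components:
          - name: nginx
            properties:
              image: nginx:1.20
  workflow:
    steps:
      - name: deploy-hangzhou
        type: deploy
        properties:
          # the override policy is applied only through a deploy step that also references a topology policy
          policies: ["topology-hangzhou-clusters", "override-nginx-legacy-image"]
EOF
```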
|
||||
|
||||
The override policy has many advanced capabilities, such as adding new components or selecting which components to use.
|
||||
The following example will first deploy an nginx webservice with the `nginx:1.20` image to the local cluster. Then two nginx webservices with the `nginx` and `nginx:stable` images will be deployed to the hangzhou clusters respectively.
|
||||
|
|
@ -399,7 +404,9 @@ spec:
|
|||
Sometimes, you may want to use the same policy across multiple applications or reuse previous workflow to deploy different resources.
|
||||
To reduce the repeated code, you can leverage the external policies and workflow and refer to them in your applications.
|
||||
|
||||
> NOTE: you can only refer to Policy and Workflow within your application's namespace.
|
||||
:::caution
|
||||
You can only refer to Policy and Workflow within your application's namespace.
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha1
|
||||
|
|
@ -455,7 +462,11 @@ spec:
|
|||
ref: make-release-in-hangzhou
|
||||
```
|
||||
|
||||
> NOTE: The internal policies will be loaded first. External policies will only be used when there is no corresponding policy inside the application. In the following example, we can reuse `topology-hangzhou-clusters` policy and `make-release-in-hangzhou` workflow but modify the `override-high-availability-webservice` policy by injecting the same-named policy inside the new application.
|
||||
:::note
|
||||
The internal policies will be loaded first. External policies will only be used when there is no corresponding policy inside the application.
|
||||
:::
|
||||
|
||||
In the following example, we can reuse `topology-hangzhou-clusters` policy and `make-release-in-hangzhou` workflow but modify the `override-high-availability-webservice` policy by injecting the same-named policy inside the new application.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
|
|||
|
|
@ -167,7 +167,9 @@ To run vela-core locally for debugging with kubevela installed in the remote clu
|
|||
|
||||
Finally, you can use the commands in the above [Build](#build) and [Testing](#Testing) sections, such as `make run`, to code and debug on your local machine.
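For instance, with your KUBECONFIG pointing at the remote cluster prepared above, the local debug loop is simply (a sketch):

```shell
# run the vela-core controller locally against the cluster in your current KUBECONFIG
make run
```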
|
||||
|
||||
> Note you will not be able to test features relate with validating/mutating webhooks in this way.
|
||||
:::caution
|
||||
Note that you will not be able to test features related to validating/mutating webhooks in this way.
|
||||
:::
|
||||
|
||||
## Run VelaUX Locally
|
||||
|
||||
|
|
|
|||
|
|
@ -24,7 +24,7 @@ Building or installing addons is the most important way to extend KubeVela, ther
|
|||
KubeVela uses CUE as its core engine, and you can use CUE and CRD controllers to glue almost any infrastructure capability.
|
||||
As a result, you can extend more powerful features for your platform.
|
||||
|
||||
- Start to [Learn CUE in KubeVela](../platform-engineers/cue/basic).
|
||||
- Start to [Learn Manage Definition with CUE](../platform-engineers/cue/basic).
|
||||
- Learn what is [CRD Controller](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) in Kubernetes.
|
||||
|
||||
## Contribution Guide
|
||||
|
|
|
|||
|
|
@ -31,7 +31,9 @@ contains the following attributes: name, character_set, description.
|
|||
|
||||
Applying the following application can create more than one database in an RDS instance.
|
||||
|
||||
> ⚠️ This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::caution
|
||||
This section requires that your platform engineers have already enabled the [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
|
|||
|
|
@ -4,7 +4,9 @@ title: Provision and Binding Database
|
|||
|
||||
This tutorial will talk about how to provision and consume Alibaba Cloud RDS (and OSS) by Terraform.
|
||||
|
||||
> ⚠️ This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::caution
|
||||
This section requires that your platform engineers have already enabled the [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::
|
||||
|
||||
Let's deploy the [application](https://github.com/kubevela/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml)
|
||||
below to provision Alibaba Cloud OSS and RDS cloud resources, and consume them by the web component.
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Needs More?
|
||||
title: Needs More Capabilities?
|
||||
---
|
||||
|
||||
KubeVela is programmable; it can be extended easily with [definitions](../../getting-started/definition). You have the following ways to discover and extend the platform.
|
||||
|
|
@ -132,7 +132,9 @@ Once addon installed, end user can discover and use these capabilities immediate
|
|||
|
||||
### Uninstall Addon
|
||||
|
||||
> Please make sure the addon along with its capabilities is no longer used in any of your applications before uninstalling it.
|
||||
:::danger
|
||||
Please make sure the addon along with its capabilities is no longer used in any of your applications before uninstalling it.
|
||||
:::
|
||||
|
||||
```shell
|
||||
vela addon disable fluxcd
|
||||
|
|
@ -223,7 +225,7 @@ If you're a system infra or operator, you can refer to extension documents to le
|
|||
|
||||
If you're extremely interested in KubeVela, you can also extend more features as a developer.
|
||||
|
||||
- KubeVela use CUE as it's core engine, [learn CUE in KubeVela](../../platform-engineers/cue/basic) and try to extend with CUE configurations.
|
||||
- KubeVela uses CUE as its core engine; [learn Manage Definition with CUE](../../platform-engineers/cue/basic) and try to extend capabilities with definitions.
|
||||
- Read the [developer guide](../../contributor/overview) to learn how to contribute and extend capabilities for KubeVela.
|
||||
|
||||
Welcome to the KubeVela community! We're eager to see you contribute your extensions.
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Distribute Reference Objects
|
||||
---
|
||||
|
||||
> This section requires you to know the basics about how to deploy [multi-cluster application](../../case-studies/multi-cluster) with policy and workflow.
|
||||
:::note
|
||||
This section requires you to know the basics of how to deploy a [multi-cluster application](../../case-studies/multi-cluster) with policies and workflow.
|
||||
:::
|
||||
|
||||
You can reference and distribute existing Kubernetes objects with KubeVela in the following scenarios:
|
||||
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: One-time Delivery(Working With Other Controllers)
|
||||
title: One-time Delivery (Coordinate with Multi-Controllers)
|
||||
---
|
||||
|
||||
By default, the KubeVela controller will prevent configuration drift for applied resources by reconciling them routinely. This is useful if you want to keep your application always having the desired configuration to avoid some unintentional changes by external modifiers.
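For the one-time delivery behavior this page describes, a hedged sketch of opting out of that drift prevention with the `apply-once` policy (the component is illustrative):

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-app
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
  policies:
    - name: apply-once
      type: apply-once
      properties:
        # skip routine re-apply so other controllers can mutate the resources afterwards
        enable: true
EOF
```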
|
||||
|
|
|
|||
|
|
@ -10,8 +10,10 @@ In garbage-collect policy, there are two major capabilities you can use.
|
|||
|
||||
Suppose you want to keep the resources created by the old version of the application. Use the garbage-collect policy and enable the option `keepLegacyResource`.
|
||||
|
||||
```yaml
|
||||
# app.yaml
|
||||
1. create app
|
||||
|
||||
```shell
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
@ -24,9 +26,10 @@ spec:
|
|||
image: oamdev/hello-world
|
||||
port: 8000
|
||||
traits:
|
||||
- type: ingress-1-20
|
||||
- type: gateway
|
||||
properties:
|
||||
domain: testsvc.example.com
|
||||
class: traefik
|
||||
domain: 47.251.8.82.nip.io
|
||||
http:
|
||||
"/": 8000
|
||||
policies:
|
||||
|
|
@ -34,24 +37,31 @@ spec:
|
|||
type: garbage-collect
|
||||
properties:
|
||||
keepLegacyResource: true
|
||||
EOF
|
||||
```
|
||||
|
||||
1. create app
|
||||
|
||||
``` shell
|
||||
vela up -f app.yaml
|
||||
```
|
||||
Check the status:
|
||||
|
||||
```shell
|
||||
$ vela ls
|
||||
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
|
||||
first-vela-app express-server webservice ingress-1-20 running healthy Ready:1/1 2022-04-06 16:20:25 +0800 CST
|
||||
vela status first-vela-app --tree
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS
|
||||
local ─── default ─┬─ Service/express-server updated
|
||||
├─ Deployment/express-server updated
|
||||
└─ Ingress/express-server updated
|
||||
```
|
||||
</details>
|
||||
|
||||
|
||||
2. update the app
|
||||
|
||||
```yaml
|
||||
# app1.yaml
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
@ -64,9 +74,10 @@ spec:
|
|||
image: oamdev/hello-world
|
||||
port: 8000
|
||||
traits:
|
||||
- type: ingress-1-20
|
||||
- type: gateway
|
||||
properties:
|
||||
domain: testsvc.example.com
|
||||
class: traefik
|
||||
domain: 47.251.8.82.nip.io
|
||||
http:
|
||||
"/": 8000
|
||||
policies:
|
||||
|
|
@ -74,50 +85,30 @@ spec:
|
|||
type: garbage-collect
|
||||
properties:
|
||||
keepLegacyResource: true
|
||||
EOF
|
||||
```
|
||||
|
||||
``` shell
|
||||
vela up -f app1.yaml
|
||||
```
|
||||
Check the status again:
|
||||
|
||||
```shell
|
||||
$ vela ls
|
||||
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
|
||||
first-vela-app express-server-1 webservice ingress-1-20 running healthy Ready:1/1 2022-04-06 16:20:25 +0800 CST
|
||||
vela status first-vela-app --tree
|
||||
```
|
||||
|
||||
Check whether the legacy resources are reserved.
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
> In the following steps, we'll use `kubectl` command to do some verification. You can also use `vela status first-vela-app` to check the aggregated application status and see if components are healthy.
|
||||
```shell
|
||||
CLUSTER NAMESPACE RESOURCE STATUS
|
||||
local ─── default ─┬─ Service/express-server outdated
|
||||
├─ Service/express-server-1 updated
|
||||
├─ Deployment/express-server outdated
|
||||
├─ Deployment/express-server-1 updated
|
||||
├─ Ingress/express-server outdated
|
||||
└─ Ingress/express-server-1 updated
|
||||
```
|
||||
</details>
|
||||
|
||||
```
|
||||
$ kubectl get deploy
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
express-server 1/1 1 1 10m
|
||||
express-server-1 1/1 1 1 40s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get svc
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
express-server ClusterIP 10.96.102.249 <none> 8000/TCP 10m
|
||||
express-server-1 ClusterIP 10.96.146.10 <none> 8000/TCP 46s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get ingress
|
||||
NAME CLASS HOSTS ADDRESS PORTS AGE
|
||||
express-server <none> testsvc.example.com 80 10m
|
||||
express-server-1 <none> testsvc.example.com 80 50s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get resourcetracker
|
||||
NAME AGE
|
||||
first-vela-app-default 12m
|
||||
first-vela-app-v1-default 12m
|
||||
first-vela-app-v2-default 2m56s
|
||||
```
|
||||
You can see that the legacy resources are reserved but their status is outdated; they will not be synced by the periodic reconciliation.
|
||||
|
||||
3. delete the app
|
||||
|
||||
|
|
@ -127,20 +118,21 @@ $ vela delete first-vela-app
|
|||
|
||||
> If you want to delete the resources of one specific version, you can run `kubectl delete resourcetracker first-vela-app-v1-default`.
|
||||
|
||||
## Persist resources
|
||||
## Persist partial resources
|
||||
|
||||
You can also persist some resources, which skips the normal garbage-collect process when the application is updated.
|
||||
You can also persist part of the resources, which skips the normal garbage-collect process when the application is updated.
|
||||
|
||||
Take the following app as an example: in the garbage-collect policy, a rule is added which marks all the resources created by the `expose` trait to use the `onAppDelete` strategy. This will keep those services until the application is deleted.
|
||||
|
||||
```shell
|
||||
$ cat <<EOF | vela up -f -
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: garbage-collect-app
|
||||
spec:
|
||||
components:
|
||||
- name: hello-world
|
||||
- name: demo-gc
|
||||
type: webservice
|
||||
properties:
|
||||
image: oamdev/hello-world
|
||||
|
|
@ -161,6 +153,7 @@ EOF
|
|||
```
|
||||
|
||||
You can find deployment and service created.
|
||||
|
||||
```shell
|
||||
$ kubectl get deployment
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
|
|
@ -171,8 +164,9 @@ hello-world ClusterIP 10.96.160.208 <none> 8000/TCP 78s
|
|||
```
|
||||
|
||||
If you upgrade the application and use a different component, you will find the old versioned deployment is deleted but the service is kept.
|
||||
|
||||
```shell
|
||||
$ cat <<EOF | vela up -f -
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
|
|||
|
|
@ -105,6 +105,7 @@ Hello World
|
|||
</xmp>
|
||||
```
|
||||
|
||||
> ⚠️ This section requires your runtime cluster has a working ingress controller.
|
||||
|
||||
:::caution
|
||||
This section requires that your runtime cluster has a working ingress controller.
|
||||
:::
|
||||
|
||||
|
|
|
|||
|
|
@ -8,8 +8,7 @@ In this section, we will introduce how to canary rollout a container service.
|
|||
|
||||
1. Enable [`kruise-rollout`](../../reference/addons/kruise-rollout) addon, our canary rollout capability relies on the [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
2. Please make sure one of the [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/) is available in your cluster.
|
||||
|
|
|
|||
|
|
@ -67,6 +67,10 @@ And check the logging output of sidecar.
|
|||
```shell
|
||||
vela logs vela-app-with-sidecar -c count-log
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```console
|
||||
0: Fri Apr 16 11:08:45 UTC 2021
|
||||
1: Fri Apr 16 11:08:46 UTC 2021
|
||||
|
|
@ -80,3 +84,4 @@ vela logs vela-app-with-sidecar -c count-log
|
|||
9: Fri Apr 16 11:08:54 UTC 2021
|
||||
```
|
||||
|
||||
</details>
|
||||
|
|
@ -9,7 +9,7 @@ title: Application Version Control
|
|||
In KubeVela, ApplicationRevision keeps the snapshot of the application and all its runtime dependencies such as ComponentDefinition, external Policy or referred objects.
|
||||
This revision can be used to review the application changes and rollback to past configurations.
|
||||
|
||||
In KubeVela v1.3, for application which uses the `PublishVersion` feature, we support viewing the history revisions, checking the differences across revisions, rolling back to the latest succeeded revision and re-publishing past revisions.
|
||||
In KubeVela v1.3+, for applications that use the `PublishVersion` feature, we support viewing the history revisions, checking the differences across revisions, rolling back to the latest succeeded revision and re-publishing past revisions.
|
||||
|
||||
For applications with the `app.oam.dev/publishVersion` annotation, the workflow runs are strictly controlled.
|
||||
The annotation, which is noted as *publishVersion* in the following paragraphs, is used to identify a static version of the application and its dependencies.
|
||||
|
|
@ -17,13 +17,15 @@ The annotation, which is noted as *publishVersion* in the following paragraphs,
|
|||
When the annotation is updated to a new value, the application will generate a new revision regardless of whether the application spec or the dependencies are changed.
|
||||
It will then trigger a fresh new run of workflow after terminating the previous run.
|
||||
|
||||
During the running of workflow, all related data are retrieved from the ApplicationRevision, which means the changes to the application spec or the dependencies will not take effects until a newer `publishVerison` is annotated.
|
||||
During the running of workflow, all related data are retrieved from the ApplicationRevision, which means changes to the application spec or the dependencies will not take effect until a newer `publishVersion` is annotated.
|
||||
|
||||
## Use Guide
|
||||
|
||||
For example, let's start with an application with external workflow and policies to deploy podinfo in managed clusters.
|
||||
|
||||
> For external workflow and policies, please refer to [Multi-cluster Application Delivery](../case-studies/multi-cluster) for more details.
|
||||
:::tip
|
||||
We use references to external workflow and policies here; it works the same way. You can refer to [Multi-cluster Application Delivery](../case-studies/multi-cluster) for more details.
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
@ -78,10 +80,16 @@ steps:
|
|||
policies: ["topology-hangzhou-clusters", "override-high-availability"]
|
||||
```
|
||||
|
||||
You can check the application status by running `vela status podinfo -n examples` and view all the related real-time resources by `vela status podinfo -n examples --tree --detail`.
|
||||
You can check the application status by:
|
||||
|
||||
```shell
|
||||
$ vela status podinfo -n examples
|
||||
vela status podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: podinfo
|
||||
|
|
@ -116,23 +124,43 @@ Services:
|
|||
Unhealthy Ready:0/3
|
||||
Traits:
|
||||
✅ scaler
|
||||
```
|
||||
</details>
|
||||
|
||||
$ vela status podinfo -n examples --tree --detail
|
||||
View all the related real-time resources by:
|
||||
|
||||
```
|
||||
vela status podinfo -n examples --tree --detail
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS APPLY_TIME DETAIL
|
||||
hangzhou1 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 4m16s
|
||||
hangzhou2 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 4m16s
|
||||
```
|
||||
</details>
|
||||
|
||||
This application should be successful after a while.
|
||||
|
||||
Now let's edit the component image and set it to an invalid value, such as `stefanprodan/podinfo:6.0.xxx`.
|
||||
The application will not re-run the workflow to make this change take effect automatically.
|
||||
But the application spec has changed, which means the next workflow run will update the deployment image.
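One hedged way to make such an edit, assuming you keep the application manifest in a local file (the file name is illustrative); as described above, nothing is redeployed until the `publishVersion` annotation is bumped:

```shell
# change the image field in your local manifest, then re-apply it
vela up -f podinfo-app.yaml
```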
|
||||
|
||||
### Inspect Changes across Revisions
|
||||
|
||||
Now let's run `vela live-diff podinfo -n examples` to check this diff
|
||||
```bash
|
||||
$ vela live-diff podinfo -n examples
|
||||
You can run `vela live-diff` to check the revision differences:
|
||||
|
||||
```
|
||||
vela live-diff podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```yaml
|
||||
* Application (podinfo) has been modified(*)
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
|
|
@ -156,64 +184,129 @@ $ vela live-diff podinfo -n examples
|
|||
* External Policy (override-high-availability) has no change
|
||||
* External Workflow (make-release-in-hangzhou) has no change
|
||||
```
|
||||
</details>
|
||||
|
||||
We can see all the changes of the application spec and the dependencies.
|
||||
|
||||
Now let's make this change take effect.
|
||||
There are two ways to make it take effects. You can choose any one of them.
|
||||
|
||||
### Publish a new app with specified revision
|
||||
|
||||
There are two ways to publish an app with a specified revision. You can choose either of them.
|
||||
|
||||
1. Update the `publishVersion` annotation in the application to `alpha2` to trigger the re-run of workflow.
|
||||
2. Run `vela up podinfo -n examples --publish-version alpha2` to publish the new version.
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: examples
|
||||
annotations:
|
||||
- app.oam.dev/publishVersion: alpha1
|
||||
+ app.oam.dev/publishVersion: alpha2
|
||||
...
|
||||
```
|
||||
2. Run `vela up <app-name> --publish-version <new-version>` to publish the new version.
|
||||
```
|
||||
vela up podinfo -n examples --publish-version alpha2
|
||||
```
|
||||
|
||||
We will find the application stuck at `runningWorkflow` as the deployment cannot finish the update process due to the invalid image.
|
||||
|
||||
Now we can run `vela revision list podinfo -n examples` to list all the available revisions.
|
||||
Now we can run `vela revision list` to list all the available revisions.
|
||||
|
||||
```
|
||||
vela revision list podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```bash
|
||||
$ vela revision list podinfo -n examples
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
podinfo-v1 alpha1 true 65844934c2d07288 2022-04-13 19:32:02 Succeeded 23.7 KiB
|
||||
podinfo-v2 alpha2 false 44124fb1a5146a4d 2022-04-13 19:46:50 Executing 23.7 KiB
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### Rollback to Last Successful Revision
|
||||
|
||||
Before rolling back, we need to suspend the workflow of the application first. Run `vela workflow suspend podinfo -n examples`.
|
||||
Before rolling back, we need to suspend the workflow of the application first.
|
||||
|
||||
```
|
||||
vela workflow suspend podinfo -n examples
|
||||
```
|
||||
|
||||
After the application workflow is suspended, run `vela workflow rollback podinfo -n examples`. The workflow will be rolled back and the application resources will be restored to the last succeeded state.
|
||||
|
||||
```
|
||||
vela workflow rollback podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```shell
|
||||
$ vela workflow suspend podinfo -n examples
|
||||
Successfully suspend workflow: podinfo
|
||||
$ vela workflow rollback podinfo -n examples
|
||||
Find succeeded application revision podinfo-v1 (PublishVersion: alpha1) to rollback.
|
||||
Application spec rollback successfully.
|
||||
Application status rollback successfully.
|
||||
Application rollback completed.
|
||||
Application outdated revision cleaned up.
|
||||
```
|
||||
</details>
|
||||
|
||||
Now if we go back and check all the resources, we will find that the resources have been rolled back to use the valid image again.
|
||||
|
||||
```shell
|
||||
$ vela status podinfo -n examples --tree --detail --detail-format wide
|
||||
vela status podinfo -n examples --tree --detail --detail-format wide
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS APPLY_TIME DETAIL
|
||||
hangzhou1 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 17m Containers: podinfo Images: stefanprodan/podinfo:6.0.1 Selector: app.oam.dev/component=podinfo
|
||||
hangzhou2 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 17m Containers: podinfo Images: stefanprodan/podinfo:6.0.1 Selector: app.oam.dev/component=podinfo
|
||||
```
|
||||
</details>
|
||||
|
||||
### Re-publish a History Revision
|
||||
|
||||
> This feature is introduced after v1.3.1.
|
||||
|
||||
:::note
|
||||
This feature is introduced in v1.3.1+.
|
||||
:::
|
||||
|
||||
Rolling back revision allows you to directly go back to the latest successful state. An alternative way is to re-publish an old revision, which will re-run the workflow but can go back to any revision that is still available.
|
||||
|
||||
For example, you might have 2 successful revisions available to use.
|
||||
|
||||
Let's list the history revision by:
|
||||
|
||||
```shell
|
||||
vela revision list podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```shell
|
||||
$ vela revision list podinfo -n examples
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
podinfo-v1 alpha1 true 65844934c2d07288 2022-04-13 20:45:19 Succeeded 23.7 KiB
|
||||
podinfo-v2 alpha2 true 4acae1a66013283 2022-04-13 20:45:45 Succeeded 23.7 KiB
|
||||
podinfo-v3 alpha3 false 44124fb1a5146a4d 2022-04-13 20:46:28 Executing 23.7 KiB
|
||||
```
|
||||
|
||||
Alternatively, you can directly use `vela up podinfo -n examples --revision podinfo-v1 --publish-version beta1` to re-publish the earliest version. This process will let the application to use the past revision and re-run the whole workflow. A new revision that is totally same with the specified one will be generated.
|
||||
</details>
|
||||
|
||||
Alternatively, you can directly run the following command to re-publish a specified revision:
|
||||
|
||||
```
|
||||
vela up podinfo -n examples --revision podinfo-v1 --publish-version beta1
|
||||
```
|
||||
|
||||
This process will let the application use the past revision and re-run the whole workflow. A new revision identical to the specified one will be generated.
|
||||
|
||||
```shell
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
|
|
@ -225,4 +318,6 @@ podinfo-v4 beta1 true 65844934c2d07288 2022-04-
|
|||
|
||||
You can find that the *beta1* version shares the same hash as the *alpha1* version.
|
||||
|
||||
> By default, application will hold at most 10 revisions. If you want to modify this number, you can set it in the `--application-revision-limit` bootstrap parameter of KubeVela controller.
|
||||
:::info
|
||||
By default, an application will hold at most 10 revisions. If you want to modify this number, you can set it in the `--application-revision-limit` [bootstrap parameter](../platform-engineers/system-operation/bootstrap-parameters) of the KubeVela controller.
|
||||
:::
|
||||
|
|
@ -4,10 +4,9 @@ title: Component Orchestration
|
|||
|
||||
This section will introduce the dependencies in components and how to pass data between components.
|
||||
|
||||
> We use helm in the examples, make sure you enable the fluxcd addon:
|
||||
> ```shell
|
||||
> vela addon enable fluxcd
|
||||
> ```
|
||||
:::tip
|
||||
We use the `helm` component type in the following examples; make sure you have the `fluxcd` addon enabled (`vela addon enable fluxcd`).
|
||||
:::
|
||||
|
||||
## Dependency
|
||||
|
||||
|
|
@ -102,9 +101,9 @@ mysql mysql-secret raw running healthy 2021-10-14 12:09:55 +0
|
|||
|
||||
After a while, all components are running successfully. The `mysql-cluster` will be deployed after `mysql-controller` and `mysql-secret` are `healthy`.
|
||||
|
||||
> `dependsOn` use `healthy` to check status. If the component is `healthy`, then KubeVela will deploy the next component.
|
||||
> If you want to customize the healthy status of the component, please refer to [Status Write Back](../../platform-engineers/traits/status)
|
||||
|
||||
:::info
|
||||
`dependsOn` uses `healthy` to check the status. If the component is `healthy`, then KubeVela will deploy the next component. If you want to customize the healthy status of the component, please refer to [Status Write Back](../../platform-engineers/traits/status).
|
||||
:::
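A minimal sketch of the `dependsOn` field (component names and images are illustrative):

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: depends-on-demo
spec:
  components:
    - name: backend
      type: webservice
      properties:
        image: oamdev/hello-world
    - name: frontend
      type: webservice
      properties:
        image: oamdev/hello-world
      # frontend is deployed only after backend reports healthy
      dependsOn:
        - backend
EOF
```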
|
||||
|
||||
## Inputs and Outputs
|
||||
|
||||
|
|
|
|||
|
|
@ -4,9 +4,11 @@ title: Dependency
|
|||
|
||||
This section will introduce how to specify dependencies for workflow steps.
|
||||
|
||||
> Note: In the current version (1.4), the steps in the workflow are executed sequentially, which means that there is an implicit dependency between steps, ie: the next step depends on the successful execution of the previous step. At this point, specifying dependencies in the workflow may not make much sense.
|
||||
>
|
||||
> In future versions (1.5+), you will be able to display the execution method of the specified workflow steps (eg: change to DAG parallel execution). At this time, you can control the execution of the workflow by specifying the dependencies of the steps.
|
||||
:::note
|
||||
In versions <= 1.4, the steps in the workflow are executed sequentially, which means that there is an implicit dependency between steps, i.e. the next step depends on the successful execution of the previous step. In this case, specifying dependencies in the workflow may not make much sense.
|
||||
|
||||
In versions 1.5+, you can explicitly specify the execution mode of the workflow steps (e.g. change it to DAG-based parallel execution). In that case, you can control the execution of the workflow by specifying the dependencies of the steps.
|
||||
:::
|
||||
|
||||
## How to use
|
||||
|
||||
|
|
|
|||
|
|
@ -6,9 +6,11 @@ This section describes how to use sub steps in KubeVela.
|
|||
|
||||
There is a special step type `step-group` in KubeVela workflow where you can declare sub-steps when using `step-group` type steps.
|
||||
|
||||
> Note: In the current version (1.4), sub steps in a step group are executed concurrently.
|
||||
>
|
||||
> In future versions (1.5+), you will be able to specify the execution mode of steps and sub-steps.
|
||||
:::note
|
||||
In versions v1.4.x and earlier, sub-steps in a step group are executed concurrently.
|
||||
|
||||
In version 1.5+, you can specify the execution mode of steps and sub-steps.
|
||||
:::
|
||||
|
||||
Apply the following example:
|
||||
|
||||
|
|
|
|||
|
|
@ -10,13 +10,18 @@ In KubeVela, you can choose to use the `vela` command to manually suspend the ex
|
|||
|
||||
### Suspend Manually
|
||||
|
||||
If you have a running application and you want to suspend its execution, you can use `vela workflow suspend` to suspend the workflow.
|
||||
If you have an application in the `runningWorkflow` state and you want to stop the execution of the workflow, you can use `vela workflow suspend` to stop it and `vela workflow resume` to continue it later.
|
||||
|
||||
* Suspend the application
|
||||
|
||||
```bash
|
||||
$ vela workflow suspend my-app
|
||||
Successfully suspend workflow: my-app
|
||||
vela workflow suspend my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
Nothing will happen if you suspend an application whose workflow has already finished, i.e. one that is in the `running` status.
|
||||
:::
|
||||
|
||||
### Use Suspend Step
|
||||
|
||||
Apply the following example:
|
||||
|
|
@ -25,7 +30,7 @@ Apply the following example:
|
|||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: suspend
|
||||
name: suspend-demo
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
|
|
@ -56,10 +61,16 @@ spec:
|
|||
Use `vela status` to check the status of the Application:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: workflowSuspending
|
||||
|
|
@ -75,12 +86,10 @@ Workflow:
|
|||
name:apply1
|
||||
type:apply-component
|
||||
phase:succeeded
|
||||
message:
|
||||
- id:xvmda4he5e
|
||||
name:suspend
|
||||
type:suspend
|
||||
phase:running
|
||||
message:
|
||||
|
||||
Services:
|
||||
|
||||
|
|
@ -90,6 +99,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, when the first step is completed, the `suspend` step will be executed and this step will suspend the workflow.
|
||||
|
||||
|
|
@ -102,17 +112,22 @@ Once the workflow is suspended, you can use the `vela workflow resume` command t
|
|||
Take the above suspended application as an example:
|
||||
|
||||
```bash
|
||||
$ vela workflow resume suspend
|
||||
Successfully resume workflow: suspend
|
||||
vela workflow resume suspend-demo
|
||||
```
|
||||
|
||||
After successfully continuing the workflow, view the status of the app:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: running
|
||||
|
|
@ -154,6 +169,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, the workflow has continued to execute.
|
||||
|
||||
|
|
@ -161,11 +177,28 @@ As you can see, the workflow has continued to execute.
|
|||
|
||||
If you want to terminate a workflow while it is suspended, you can use the `vela workflow terminate` command to terminate the workflow.
|
||||
|
||||
* Terminate the application workflow
|
||||
|
||||
```bash
|
||||
$ vela workflow terminate my-app
|
||||
Successfully terminate workflow: my-app
|
||||
vela workflow terminate my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
Unlike suspend, a terminated application workflow can't be resumed; you can only restart the workflow. Restarting the workflow will execute the workflow steps from scratch, while resuming the workflow only continues the unfinished steps.
|
||||
:::
|
||||
|
||||
* Restart the application workflow
|
||||
|
||||
```bash
|
||||
vela workflow restart my-app
|
||||
```
|
||||
|
||||
:::caution
|
||||
Once the application is terminated, the KubeVela controller won't reconcile the application resources. Termination can also be useful when you want to manually operate the underlying resources, but please be careful about configuration drift.
|
||||
:::
|
||||
|
||||
Once the application comes into the `running` status, it can't be terminated or restarted.
|
||||
|
||||
### Resume the Workflow Automatically
|
||||
|
||||
If you want the workflow to continue automatically after a period of time has passed, you can add a `duration` parameter to the `suspend` step. When the `duration` elapses, the workflow will automatically continue execution.
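A minimal sketch of a `suspend` step with the `duration` parameter (the component and the five-second value are illustrative):

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: auto-resume
  namespace: default
spec:
  components:
    - name: comp1
      type: webservice
      properties:
        image: oamdev/hello-world
  workflow:
    steps:
      - name: apply
        type: apply-component
        properties:
          component: comp1
      - name: wait
        type: suspend
        properties:
          # the workflow resumes automatically after this period
          duration: "5s"
EOF
```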
|
||||
|
|
@ -209,7 +242,13 @@ spec:
|
|||
Use `vela status` to check the status of the Application:
|
||||
|
||||
```bash
|
||||
$ vela status auto-resume
|
||||
vela status auto-resume
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: auto-resume
|
||||
|
|
@ -254,5 +293,6 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, the `suspend` step is automatically executed successfully after five seconds, and the workflow is executed successfully.
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Timeout of Step
|
||||
---
|
||||
|
||||
> Note: You need to upgrade to version 1.5 or above to use the timeout.
|
||||
:::note
|
||||
You need to upgrade to version 1.5+ to use the timeout feature.
|
||||
:::
|
||||
|
||||
This section introduces how to add timeout to workflow steps in KubeVela.
|
||||
|
||||
|
|
|
|||
|
|
@ -164,7 +164,9 @@ Application is also one kind of Kubernetes CRD, you can also use `kubectl apply`
|
|||
|
||||
### Customize
|
||||
|
||||
> **⚠️ In most cases, you don't need to customize any definitions unless you're going to extend the capability of KubeVela. Before that, you should check the built-in definitions and addons to confirm if they can fit your needs.**
|
||||
:::caution
|
||||
In most cases, you don't need to customize any definitions **unless you're going to extend the capability of KubeVela**. Before that, you should check the built-in definitions and addons to confirm if they can fit your needs.
|
||||
:::
|
||||
|
||||
A new definition is built in a declarative template in [CUE configuration language](https://cuelang.org/). If you're not familiar with CUE, you can refer to [CUE Basic](../platform-engineers/cue/basic) for some knowledge.
|
||||
|
||||
|
|
|
|||
|
|
@ -5,7 +5,9 @@ description: Configure a helm repository
|
|||
|
||||
In this guide, we will introduce how to use Integration to configure a private helm repository and create a helm type application that uses this repo.
|
||||
|
||||
Notice: You must enable the `fluxcd` addon firstly.
|
||||
:::note
|
||||
You must enable the `fluxcd` addon first.
|
||||
:::
|
||||
|
||||
## Create a helm repo
|
||||
|
||||
|
|
|
|||
|
|
@ -7,7 +7,9 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
## Upgrade
|
||||
|
||||
> If you're trying to upgrade from a big version later (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guides.
|
||||
:::caution
|
||||
If you're trying to upgrade across multiple minor versions (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guides.
|
||||
:::
|
||||
|
||||
### 1. Upgrade CLI
|
||||
|
||||
|
|
@ -30,7 +32,9 @@ curl -fsSl https://kubevela.io/script/install.sh | bash
|
|||
|
||||
**Windows**
|
||||
|
||||
> Only the official release version is supported.
|
||||
:::tip
|
||||
Pre-release versions will not be listed.
|
||||
:::
|
||||
|
||||
```shell script
|
||||
powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
|
||||
|
|
@ -63,10 +67,12 @@ brew install kubevela
|
|||
sudo mv ./vela /usr/local/bin/vela
|
||||
```
|
||||
|
||||
> [Installation Tips](https://github.com/kubevela/kubevela/issues/625):
|
||||
> If you are using a Mac system, it will pop up a warning that "vela" cannot be opened because the package from the developer cannot be verified.
|
||||
>
|
||||
> MacOS imposes stricter restrictions on the software that can run in the system. You can temporarily solve this problem by opening `System Preference ->Security & Privacy -> General` and clicking on `Allow Anyway`.
|
||||
:::caution
|
||||
[Installation Tips](https://github.com/kubevela/kubevela/issues/625):
|
||||
If you are using a Mac, the system will pop up a warning that "vela" cannot be opened because the package from the developer cannot be verified.
|
||||
|
||||
macOS imposes stricter restrictions on the software that can run on the system. You can temporarily solve this problem by opening `System Preferences -> Security & Privacy -> General` and clicking `Allow Anyway`.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
|
||||
|
|
@ -83,7 +89,9 @@ docker pull oamdev/vela-cli:latest
|
|||
|
||||
### 2. Upgrade Vela Core
|
||||
|
||||
> Please make sure you already upgraded the Vela CLI to latest stable version.
|
||||
:::note
|
||||
Please make sure you have already upgraded the Vela CLI to the latest stable version.
|
||||
:::
|
||||
|
||||
```shell
|
||||
vela install
|
||||
|
|
@ -95,7 +103,9 @@ vela install
|
|||
vela addon enable velaux
|
||||
```
|
||||
|
||||
> If you set custom parameters during installation, be sure to include the corresponding parameters.
|
||||
:::tip
|
||||
You can use advanced parameters provided by [addons](../reference/addons/overview).
|
||||
:::
|
||||
|
||||
## Uninstall
|
||||
|
||||
|
|
|
|||
|
|
@ -2,4 +2,4 @@
|
|||
title: CUE advanced
|
||||
---
|
||||
|
||||
The docs has been migrated, please refer to [Learning CUE in KubeVela](./basic) sections for details.
|
||||
The docs have been migrated, please refer to the [Learn Manage Definition with CUE](./basic) sections for details.
|
||||
|
|
@ -23,36 +23,38 @@ More addons for logging and tracing will be introduced in later versions.
|
|||
|
||||
To enable the addon suites, you simply need to run the `vela addon enable` commands as below.
|
||||
|
||||
> If your KubeVela is multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
|
||||
:::tip
|
||||
If your KubeVela is running in a multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
|
||||
:::
|
||||
|
||||
1. Install the kube-state-metrics addon
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics
|
||||
vela addon enable kube-state-metrics
|
||||
```
|
||||
|
||||
2. Install the node-exporter addon
|
||||
|
||||
```shell
|
||||
> vela addon enable node-exporter
|
||||
vela addon enable node-exporter
|
||||
```
|
||||
|
||||
3. Install the prometheus-server
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server
|
||||
vela addon enable prometheus-server
|
||||
```
|
||||
|
||||
4. Install the grafana addon.
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana
|
||||
vela addon enable grafana
|
||||
```
|
||||
|
||||
5. Access your grafana through port-forward.
|
||||
|
||||
```shell
|
||||
> kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
```
|
||||
|
||||
Now you can access your grafana by visiting `http://localhost:8080` in your browser. The default username and password are `admin` and `kubevela` respectively.
|
||||
|
|
@ -178,7 +180,7 @@ If you want to install observability addons in multi-cluster scenario, make sure
|
|||
By default, the installation process for `kube-state-metrics`, `node-exporter` and `prometheus-server` is naturally multi-cluster supported (they will be automatically installed to all clusters). But to let the `grafana` on the control plane access prometheus-server in managed clusters, you need to use the following command to enable `prometheus-server`.
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
```
|
||||
|
||||
This will install the [thanos](https://github.com/thanos-io/thanos) sidecar & query along with prometheus-server. Then enable grafana and you will be able to see the aggregated prometheus metrics.
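That is, after enabling the thanos-enabled prometheus-server above, the remaining step is:

```shell
vela addon enable grafana
```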
|
||||
|
|
@ -186,7 +188,7 @@ This will install [thanos](https://github.com/thanos-io/thanos) sidecar & query
|
|||
You can also choose which clusters to install the addons in by using commands as below:
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
```
|
||||
|
||||
> If you add new clusters to your control plane after the addons are installed, you need to re-enable the addon to let it take effect.
|
||||
|
|
@ -234,7 +236,7 @@ spec:
|
|||
Then you need to add `customConfig` parameter to the enabling process of the prometheus-server addon, like
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
```
|
||||
|
||||
Then you will be able to see the recording rules configuration being delivered into all prome
|
||||
|
|
@ -263,7 +265,7 @@ data:
|
|||
If you want to change the default username and password for Grafana, you can run the following command
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
```
|
||||
|
||||
This will change your default admin user to `super-user` and its password to `PASSWORD`.
|
||||
|
|
@ -273,7 +275,7 @@ This will change your default admin user to `super-user` and its password to `PA
|
|||
If you want your prometheus-server and grafana to persist data in volumes, you can also specify `storage` parameter for your installation, like
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server storage=1G
|
||||
vela addon enable prometheus-server storage=1G
|
||||
```
|
||||
|
||||
This will create PersistentVolumeClaims and let the addon use the provided storage. The storage will not be automatically recycled even if the addon is disabled. You need to clean up the storage manually.
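A hedged sketch of the manual cleanup after disabling the addon; the `o11y-system` namespace comes from the examples above, and you should verify the PVC names in your own cluster first:

```shell
# list the PersistentVolumeClaims left behind by the addon, then delete the ones you no longer need
kubectl get pvc -n o11y-system
kubectl delete pvc <pvc-name> -n o11y-system
```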
|
||||
|
|
@ -357,9 +359,9 @@ Now you can manage your dashboard and datasource on your grafana instance throug
|
|||
|
||||
```shell
|
||||
# show all the dashboard you have
|
||||
> kubectl get grafanadashboard -l grafana=my-grafana
|
||||
kubectl get grafanadashboard -l grafana=my-grafana
|
||||
# show all the datasource you have
|
||||
> kubectl get grafanadatasource -l grafana=my-grafana
|
||||
kubectl get grafanadatasource -l grafana=my-grafana
|
||||
```
|
||||
|
||||
For more details, you can refer to [vela-prism](https://github.com/kubevela/prism#grafana-related-apis).
|
||||
|
|
|
|||
|
|
@ -8,7 +8,9 @@ KubeVela has [release cadence](../../contributor/release-process) for every 2-3
|
|||
|
||||
## From v1.4.x to v1.5.x
|
||||
|
||||
> ⚠️ Note: Please upgrade to v1.5.5+ to avoid application workflow rerun when controller upgrade.
|
||||
:::caution
|
||||
Please upgrade to v1.5.5+ to avoid application workflow reruns when the controller is upgraded.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs before upgrading the helm chart.
|
||||
|
||||
|
|
@ -42,7 +44,9 @@ vela addon upgrade velaux --version 1.5.5
|
|||
|
||||
## From v1.3.x to v1.4.x
|
||||
|
||||
> ⚠️ Note: It may cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
It may cause application workflow reruns when the controller is upgraded.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs before upgrading the helm chart.
|
||||
|
||||
|
|
@ -83,7 +87,9 @@ Please note if you're using terraform addon, you should upgrade the `terraform`
|
|||
|
||||
## From v1.2.x to v1.3.x
|
||||
|
||||
> ⚠️ Note: It may cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
It may cause application workflow reruns when the controller is upgraded.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs before upgrading the helm chart.
|
||||
|
||||
|
|
@ -123,7 +129,9 @@ Please note if you're using terraform addon, you should upgrade the `terraform`
|
|||
|
||||
## From v1.1.x to v1.2.x
|
||||
|
||||
> ⚠️ Note: It will cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
It will cause application workflow reruns when the controller is upgraded.
|
||||
:::
|
||||
|
||||
1. Check the service running normally
|
||||
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Deploy First Application
|
||||
---
|
||||
|
||||
> Before starting, please confirm that you've installed KubeVela and enabled the VelaUX addon according to [the installation guide](./install).
|
||||
:::note
|
||||
Before starting, please confirm that you've installed KubeVela and enabled the VelaUX addon according to [the installation guide](./install).
|
||||
:::
|
||||
|
||||
Welcome to KubeVela! This section will guide you to deliver your first app.
|
||||
|
||||
|
|
@ -217,10 +219,9 @@ Great! You have finished deploying your first KubeVela application, you can also
|
|||
After finishing [the installation of VelaUX](./install#2-install-velaux), you can view and manage the applications created.
|
||||
|
||||
* Port forward the UI if you don't have an endpoint for access:
|
||||
|
||||
```
|
||||
vela port-forward addon-velaux -n vela-system 8080:80
|
||||
```
|
||||
```
|
||||
vela port-forward addon-velaux -n vela-system 8080:80
|
||||
```
|
||||
|
||||
* VelaUX needs authentication. The default username is `admin` and the password is **`VelaUX12345`**.
|
||||
|
||||
|
|
|
|||
|
|
@ -9,14 +9,13 @@ For more details, please refer to: [Kruise Rollout](https://github.com/openkruis
|
|||
## Installation
|
||||
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
## Uninstallation
|
||||
|
||||
```shell
|
||||
$ vela addon disable kruise-rollout
|
||||
vela addon disable kruise-rollout
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
|
|
|||
|
|
@ -5,7 +5,9 @@ title: Nginx Ingress Controller
|
|||
|
||||
[Nginx Ingress controller](https://kubernetes.github.io/ingress-nginx/) is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
|
||||
|
||||
**Notice: If your cluster is already have any kinds of [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), you don't need to enable this addon.**
|
||||
:::note
|
||||
If your cluster already has any kind of [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), you don't need to enable this addon.
|
||||
:::
|
||||
|
||||
## Install
|
||||
|
||||
|
|
|
|||
|
|
@ -259,7 +259,10 @@ EOF
|
|||
```
|
||||
|
||||
This rollout trait means it will scale the workload up to 7 replicas. You can also set the number of each batch by setting `rolloutBatches`.
|
||||
Notice: A known issue exists if you scale up/down the workload twice or more times by not setting the `rolloutBatches`.So please set the `rolloutBatches` when scale up/down.
|
||||
|
||||
:::danger
|
||||
A known issue exists if you scale the workload up/down twice or more without setting `rolloutBatches`. So please set `rolloutBatches` when scaling up/down.
|
||||
:::
|
||||
|
||||
Check the status after the expansion has succeeded.
|
||||
```shell
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Custom Image Delivery
|
||||
title: Custom Container Delivery
|
||||
---
|
||||
|
||||
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before starting, you must have the platform manager's permission.
|
||||
|
|
|
|||
|
|
@ -0,0 +1,181 @@
|
|||
---
|
||||
title: Debugging Application
|
||||
---
|
||||
|
||||
KubeVela supports several CLI commands for debugging your applications. They work on the control plane and help you access resources across multiple clusters, which means you can work with pods in managed clusters directly from the hub cluster without switching the KubeConfig context. If you have multiple clusters in one application, the CLI command will ask you to choose one interactively.
|
||||
|
||||
## List Apps
|
||||
|
||||
List all your applications.
|
||||
|
||||
```
|
||||
vela ls
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
```
|
||||
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
|
||||
war war java-war running healthy Ready:1/1 2022-09-30 17:32:29 +0800 CST
|
||||
ck-instance ck-instance clickhouse running healthy 2022-09-30 17:38:13 +0800 CST
|
||||
kubecon-demo hello-world java-war gateway running healthy Ready:1/1 2022-10-08 11:32:47 +0800 CST
|
||||
ck-app my-ck clickhouse gateway running healthy Host not specified, visit the cluster or load balancer in 2022-10-08 17:55:20 +0800 CST
|
||||
front of the cluster with IP: 47.251.8.82
|
||||
demo2 catalog java-war workflowSuspending healthy Ready:1/1 2022-10-08 16:22:11 +0800 CST
|
||||
├─ customer java-war workflowSuspending healthy Ready:1/1 2022-10-08 16:22:11 +0800 CST
|
||||
└─ order-web java-war gateway workflowSuspending healthy Ready:1/1 2022-10-08 16:22:11 +0800 CST
|
||||
kubecon-demo2 hello-world2 java-war gateway workflowSuspending healthy Ready:1/1 2022-10-08 11:48:41 +0800 CST
|
||||
```
|
||||
</details>
|
||||
|
||||
## Show status of app
|
||||
|
||||
- `vela status` can give you an overview of your deployed multi-cluster application.
|
||||
|
||||
```
|
||||
vela up -f https://kubevela.net/example/applications/first-app.yaml
|
||||
vela status first-vela-app
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: first-vela-app
|
||||
Namespace: default
|
||||
Created at: 2022-10-09 12:10:30 +0800 CST
|
||||
Status: workflowSuspending
|
||||
|
||||
Workflow:
|
||||
|
||||
mode: StepByStep
|
||||
finished: false
|
||||
Suspend: true
|
||||
Terminated: false
|
||||
Steps
|
||||
- id: g1jtl5unra
|
||||
name: deploy2default
|
||||
type: deploy
|
||||
phase: succeeded
|
||||
message:
|
||||
- id: 6cq88ufzq5
|
||||
name: manual-approval
|
||||
type: suspend
|
||||
phase: running
|
||||
message:
|
||||
|
||||
Services:
|
||||
|
||||
- Name: express-server
|
||||
Cluster: local Namespace: default
|
||||
Type: webservice
|
||||
Healthy Ready:1/1
|
||||
Traits:
|
||||
✅ scaler
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
- `vela status --pod` can list the pod status of your application.
|
||||
|
||||
```
|
||||
vela status first-vela-app --pod
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER COMPONENT POD NAME NAMESPACE PHASE CREATE TIME REVISION HOST
|
||||
local express-server express-server-b768d95b7-qnwb4 default Running 2022-10-09T04:10:31Z izrj9f9wodrsepwyb9mcetz
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
- `vela status --endpoint` can list the access endpoints of your application.
|
||||
|
||||
```
|
||||
vela status first-vela-app --endpoint
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
Please access first-vela-app from the following endpoints:
|
||||
+---------+----------------+--------------------------------+-----------------------------+-------+
|
||||
| CLUSTER | COMPONENT | REF(KIND/NAMESPACE/NAME) | ENDPOINT | INNER |
|
||||
+---------+----------------+--------------------------------+-----------------------------+-------+
|
||||
| local | express-server | Service/default/express-server | express-server.default:8000 | true |
|
||||
+---------+----------------+--------------------------------+-----------------------------+-------+
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
- `vela status --tree --detail` can list resources of your application.
|
||||
|
||||
```
|
||||
vela status first-vela-app --tree --detail
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS APPLY_TIME DETAIL
|
||||
local ─── default ─┬─ Service/express-server updated 2022-10-09 12:10:30 Type: ClusterIP Cluster-IP: 10.43.212.235 External-IP: <none> Port(s): 8000/TCP Age: 6m44s
|
||||
└─ Deployment/express-server updated 2022-10-09 12:10:30 Ready: 1/1 Up-to-date: 1 Available: 1 Age: 6m44s
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
## Show logs of app
|
||||
|
||||
- `vela logs` shows pod logs in managed clusters.
|
||||
|
||||
```bash
|
||||
vela logs first-vela-app
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
+ express-server-b768d95b7-qnwb4 › express-server
|
||||
express-server 2022-10-09T12:10:33.785549770+08:00 httpd started
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
## Execute commands inside pod container
|
||||
|
||||
- `vela exec` helps you execute commands in pods in managed clusters.
|
||||
|
||||
```bash
|
||||
vela exec first-vela-app -it -- ls
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
bin dev etc home proc root sys tmp usr var www
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
## Access port locally
|
||||
|
||||
- `vela port-forward` can discover and forward ports of pods or services in managed clusters to your local endpoint.
|
||||
|
||||
```
|
||||
vela port-forward first-vela-app 8001:8000
|
||||
```
|
||||
|
||||
You can then access the app with `curl http://127.0.0.1:8001/` (see the sketch below).
|
||||
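Putting the two commands together, a minimal sketch for verifying the forwarded port from a single shell (assuming the port-forward does not prompt you to pick a cluster or service interactively):

```shell
# Start the port-forward in the background, give it a moment to connect,
# query the forwarded port, then stop the forward again.
vela port-forward first-vela-app 8001:8000 &
PF_PID=$!
sleep 3
curl http://127.0.0.1:8001/
kill $PF_PID
```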
|
||||
## More CLI Details
|
||||
|
||||
Please refer to the [CLI docs](../cli/vela).
|
||||
|
|
@ -8,8 +8,7 @@ title: Canary Rollout
|
|||
|
||||
2. Make sure you have already enabled the [`kruise-rollout`](../reference/addons/kruise-rollout) addon; our canary rollout capability relies on the [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
3. Please make sure one of the [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/) is available in your Kubernetes cluster.
|
||||
|
|
|
|||
|
|
@ -67,6 +67,9 @@ vela up -f https://kubevela.io/example/applications/app-with-chart-redis.yaml
|
|||
|
||||
Then check the deployment status of the application through `vela status helm-redis`
|
||||
|
||||
<details>
|
||||
<summary>expected output of vela status </summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
|
|
@ -96,6 +99,7 @@ Services:
|
|||
Healthy Fetch repository successfully, Create helm release successfully
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
> You can also check the application on the UI; applications created by the CLI will be synced automatically but are read-only.
|
||||
|
||||
|
|
@ -224,6 +228,8 @@ spec:
|
|||
version: '6.1.*'
|
||||
```
|
||||
|
||||
> Notice: your fluxcd addon version must be `>=1.3.1`.
|
||||
:::note
|
||||
Your fluxcd addon version must be `1.3.1+`.
|
||||
:::
|
||||
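If you are unsure which fluxcd addon version is installed, the addon list is the quickest place to check (a sketch; column layout may vary between CLI versions):

```shell
# Confirm that the installed fluxcd addon reports version 1.3.1 or newer.
vela addon list | grep fluxcd
```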
|
||||
Now you have learned the basics of Helm chart delivery. If you want to deliver Helm charts to multiple clusters, you can refer to [this blog](https://kubevela.io/blog/2022/07/07/helm-multi-cluster).
|
||||
|
|
@ -8,8 +8,7 @@ title: Canary Rollout
|
|||
|
||||
2. Make sure you have already enabled the [`kruise-rollout`](../reference/addons/kruise-rollout) addon; our canary rollout capability relies on the [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
3. Please make sure one of the [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/) is available in your Kubernetes cluster.
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Multi Environment Delivery
|
||||
title: Deploy across Multi Environments
|
||||
---
|
||||
|
||||
Environments represent your deployment targets logically (QA, Prod, etc). You can add the same Environment to as many Targets as you need.
|
||||
|
|
|
|||
|
|
@ -100,7 +100,7 @@ EOF
|
|||
```
|
||||
|
||||
:::note
|
||||
Currently, The application created by CLI will be synced to UI, but it will be readonly.
|
||||
The application created by the CLI will be synced to the UI automatically.
|
||||
:::
|
||||
|
||||
You can also save the YAML file as webservice-app.yaml and use the `vela up -f webservice-app.yaml` command to deploy.
|
||||
|
|
|
|||
|
|
@ -11,8 +11,8 @@
|
|||
"message": "核心概念",
|
||||
"description": "The label for Core Concepts in sidebar docs"
|
||||
},
|
||||
"sidebar.docs.category.CUE in KubeVela": {
|
||||
"message": "CUE 语言",
|
||||
"sidebar.docs.category.Manage Definition with CUE": {
|
||||
"message": "使用 CUE 扩展模块定义",
|
||||
"description": "The label for category Learning CUE in sidebar docs"
|
||||
},
|
||||
"sidebar.docs.category.Helm": {
|
||||
|
|
@ -310,8 +310,8 @@
|
|||
"sidebar.docs.category.Kubernetes Manifest CD": {
|
||||
"message": "Kubernetes 资源交付"
|
||||
},
|
||||
"sidebar.docs.category.General CD Features": {
|
||||
"message": "通用功能"
|
||||
"sidebar.docs.category.CD Policies": {
|
||||
"message": "资源交付策略"
|
||||
},
|
||||
"sidebar.docs.category.UX Customization": {
|
||||
"message": "定制 UI"
|
||||
|
|
@ -330,5 +330,8 @@
|
|||
},
|
||||
"sidebar.docs.doc.Kubernetes": {
|
||||
"message": "基于 Kubernetes 安装"
|
||||
},
|
||||
"sidebar.docs.category.Registry Integration": {
|
||||
"message": "注册中心集成"
|
||||
}
|
||||
}
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
|
||||
# KubeVela Dashboard (WIP)
|
||||
|
||||
KubeVela has a simple client side dashboard for you to interact with. The functionality is equivalent to the vela cli.
|
||||
|
||||
```bash
|
||||
$ vela dashboard
|
||||
```
|
||||
|
||||
> NOTE: this feature is still under development.
|
||||
|
|
@ -9,8 +9,7 @@ title: 金丝雀发布
|
|||
1. 通过如下命令启用 [`kruise-rollout`](../../reference/addons/kruise-rollout) 插件,金丝雀发布依赖于 [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
2. 请确保在集群中至少安装一种 [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/)。
|
||||
|
|
|
|||
|
|
@ -4,9 +4,11 @@ title: 依赖关系
|
|||
|
||||
本节将介绍如何在 KubeVela 中指定工作流步骤的依赖关系。
|
||||
|
||||
> 注意:在当前版本(1.4)中,工作流中的步骤是顺序执行的,这意味着步骤间有一个隐式的依赖关系,即:下一个步骤依赖上一个步骤的成功执行。此时,在工作流中指定依赖关系的意义可能不大。
|
||||
>
|
||||
> 在未来的版本(1.5+)中,你将可以显示指定工作流步骤的执行方式(如:改成 DAG 并行执行),此时,你可以通过指定步骤的依赖关系来控制工作流的执行。
|
||||
:::note
|
||||
在 1.4及以前版本中,工作流中的步骤是顺序执行的,这意味着步骤间有一个隐式的依赖关系,即:下一个步骤依赖上一个步骤的成功执行。此时,在工作流中指定依赖关系的意义可能不大。
|
||||
|
||||
在版本 1.5+ 中,你将可以显式指定工作流步骤的执行方式(如:改成 DAG 并行执行),此时,你可以通过指定步骤的依赖关系来控制工作流的执行。
|
||||
:::
|
||||
|
||||
## 如何使用
|
||||
|
||||
|
|
|
|||
|
|
@ -6,9 +6,10 @@ title: 子步骤
|
|||
|
||||
KubeVela 工作流中有一个特殊的步骤类型 `step-group`,在使用步骤组类型的步骤时,你可以在其中声明子步骤。
|
||||
|
||||
> 注意:在当前版本(1.4)中,步骤组中的子步骤们是并发执行的。
|
||||
>
|
||||
> 在未来的版本(1.5+)中,你将可以显示指定工作流步骤及子步骤的执行方式。
|
||||
:::note
|
||||
在 v1.4 及以前版本中,步骤组中的子步骤们是并发执行的。
|
||||
在 1.5+ 版本中,你将可以显式指定工作流步骤及子步骤的执行方式。
|
||||
:::
|
||||
|
||||
部署如下例子:
|
||||
|
||||
|
|
|
|||
|
|
@ -10,13 +10,18 @@ title: 暂停和继续
|
|||
|
||||
### 手动暂停
|
||||
|
||||
如果你有一个正在运行的应用,并且你希望暂停它的执行,你可以使用 `vela workflow suspend` 来暂停该工作流。
|
||||
如果你有一个正在运行工作流的应用,并且你希望暂停它的执行,你可以使用 `vela workflow suspend` 来暂停该工作流,在未来可以通过 `vela workflow resume` 继续工作流。
|
||||
|
||||
* 暂停工作流
|
||||
|
||||
```bash
|
||||
$ vela workflow suspend my-app
|
||||
Successfully suspend workflow: my-app
|
||||
vela workflow suspend my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
如果工作流已经执行完毕,对应用使用 `vela workflow suspend` 命令不会产生任何效果。
|
||||
:::
|
||||
|
||||
### 使用暂停步骤
|
||||
|
||||
部署如下例子:
|
||||
|
|
@ -25,7 +30,7 @@ Successfully suspend workflow: my-app
|
|||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: suspend
|
||||
name: suspend-demo
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
|
|
@ -56,10 +61,16 @@ spec:
|
|||
使用 vela status 命令查看应用状态:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: workflowSuspending
|
||||
|
|
@ -75,12 +86,10 @@ Workflow:
|
|||
name:apply1
|
||||
type:apply-component
|
||||
phase:succeeded
|
||||
message:
|
||||
- id:xvmda4he5e
|
||||
name:suspend
|
||||
type:suspend
|
||||
phase:running
|
||||
message:
|
||||
|
||||
Services:
|
||||
|
||||
|
|
@ -90,6 +99,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
可以看到,当第一个步骤执行完成之后,会开始执行 `suspend` 步骤。而这个步骤会让工作流进入暂停状态。
|
||||
|
||||
|
|
@ -102,17 +112,22 @@ Services:
|
|||
以上面处于暂停状态的应用为例:
|
||||
|
||||
```bash
|
||||
$ vela workflow resume suspend
|
||||
Successfully resume workflow: suspend
|
||||
vela workflow resume suspend-demo
|
||||
```
|
||||
|
||||
成功继续工作流后,查看应用的状态:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: running
|
||||
|
|
@ -154,6 +169,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
可以看到,工作流已经继续执行完毕。
|
||||
|
||||
|
|
@ -161,11 +177,28 @@ Services:
|
|||
|
||||
当工作流处于暂停状态时,如果你想终止它,你可以使用 `vela workflow terminate` 命令来终止工作流。
|
||||
|
||||
* 终止工作流
|
||||
|
||||
```bash
|
||||
$ vela workflow terminate my-app
|
||||
Successfully terminate workflow: my-app
|
||||
vela workflow terminate my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
区别于暂停,终止的工作流不能继续执行,只能重新运行工作流。重新运行意味着工作流会重新开始执行所有工作流步骤,而继续工作流则是从暂停的步骤后面继续执行。
|
||||
:::
|
||||
|
||||
* 重新运行工作流
|
||||
|
||||
```bash
|
||||
vela workflow restart my-app
|
||||
```
|
||||
|
||||
:::caution
|
||||
一旦应用被终止,KubeVela 控制器不会再对资源做状态维持,你可以对底层资源做手动修改但请注意防止配置漂移。
|
||||
:::
|
||||
|
||||
工作流执行完毕进入正常运行状态的应用无法被终止或重新运行。
|
||||
|
||||
### 自动继续工作流
|
||||
|
||||
如果你希望经过了一段时间后,工作流能够自动被继续。那么,你可以在 `suspend` 步骤中加上 `duration` 参数。当 `duration` 时间超过后,工作流将自动继续执行。
|
||||
|
|
@ -209,7 +242,13 @@ spec:
|
|||
查看应用状态:
|
||||
|
||||
```bash
|
||||
$ vela status auto-resume
|
||||
vela status auto-resume
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: auto-resume
|
||||
|
|
@ -255,4 +294,6 @@ Services:
|
|||
No trait applied
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
可以看到,`suspend` 步骤在五秒后自动执行成功,继续了工作流。
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: 步骤超时
|
||||
---
|
||||
|
||||
> 注意:你需要升级到 1.5 及以上版本来使用超时功能。
|
||||
:::note
|
||||
你需要升级到 1.5 及以上版本来使用超时功能。
|
||||
:::
|
||||
|
||||
本节将介绍如何在 KubeVela 中为工作流步骤添加超时时间。
|
||||
|
||||
|
|
|
|||
|
|
@ -66,7 +66,7 @@ template: {
|
|||
kind: "Deployment"
|
||||
}
|
||||
outputs: {}
|
||||
parameters: {}
|
||||
parameter: {}
|
||||
}
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -90,7 +90,7 @@ spec:
|
|||
|
||||
具体抽象方式和交付方式的编写可以查阅对应的文档,这里以一个完整的例子介绍组件定义的工作流程。
|
||||
|
||||
<detail>
|
||||
<details>
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
@ -192,7 +192,7 @@ spec:
|
|||
...
|
||||
}
|
||||
```
|
||||
</detail>
|
||||
</details>
|
||||
|
||||
如上所示,这个组件定义的名字叫 `helm`,一经注册,最终用户在 Application 的组件类型(`components[*].type`)字段就可以填写这个类型。
|
||||
|
||||
|
|
|
|||
|
|
@ -23,36 +23,38 @@ title: 自动化可观测性
|
|||
|
||||
要启用插件套件,只需运行 `vela addon enable` 命令,如下所示。
|
||||
|
||||
> 如果你的 KubeVela 是多集群场景,请参阅下面的 [多集群安装](#多集群安装) 章节。
|
||||
:::tip
|
||||
如果你的 KubeVela 是多集群场景,请参阅下面的 [多集群安装](#多集群安装) 章节。
|
||||
:::
|
||||
|
||||
1. 安装 kube-state-metrics 插件
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics
|
||||
vela addon enable kube-state-metrics
|
||||
```
|
||||
|
||||
2. 安装 node-exporter 插件
|
||||
|
||||
```shell
|
||||
> vela addon enable node-exporter
|
||||
vela addon enable node-exporter
|
||||
```
|
||||
|
||||
3. 安装 prometheus-server
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server
|
||||
vela addon enable prometheus-server
|
||||
```
|
||||
|
||||
4. 安装 grafana 插件
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana
|
||||
vela addon enable grafana
|
||||
```
|
||||
|
||||
5. 通过端口转发访问 grafana
|
||||
|
||||
```shell
|
||||
> kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
```
|
||||
|
||||
现在在浏览器中访问 `http://localhost:8080` 就可以访问你的 grafana。 默认的用户名和密码分别是 `admin` 和 `kubevela`。
|
||||
|
|
@ -178,7 +180,7 @@ URL: http://localhost:8080/d/kubernetes-apiserver/kubernetes-apiserver
|
|||
默认情况下,`kube-state-metrics`、`node-exporter` 和 `prometheus-server` 的安装过程原生支持多集群(它们将自动安装到所有集群)。 但是要让控制平面上的 `grafana` 能够访问托管集群中的 prometheus-server,你需要使用以下命令来启用 `prometheus-server`。
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
```
|
||||
|
||||
这将安装 [thanos](https://github.com/thanos-io/thanos) sidecar 和 prometheus-server。 然后启用 grafana,你将能够看到聚合的 prometheus 指标。
|
||||
|
|
@ -186,7 +188,7 @@ URL: http://localhost:8080/d/kubernetes-apiserver/kubernetes-apiserver
|
|||
你还可以使用以下命令选择要在哪个集群安装插件:
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
```
|
||||
|
||||
> 如果在安装插件后将新集群添加到控制平面,则需要重新启用插件才能使其生效。
|
||||
|
|
@ -234,7 +236,7 @@ spec:
|
|||
然后你需要在 prometheus-server 插件的启用过程中添加 `customConfig` 参数,比如:
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
```
|
||||
|
||||
然后你将看到记录规则配置被分发到所有 prometheus 实例。
|
||||
|
|
@ -263,7 +265,7 @@ data:
|
|||
如果要更改 Grafana 的默认用户名和密码,可以运行以下命令:
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
```
|
||||
|
||||
这会将你的默认管理员用户更改为 `super-user`,并将其密码更改为 `PASSWORD`。
|
||||
|
|
@ -273,7 +275,7 @@ data:
|
|||
如果你希望 prometheus-server 和 grafana 将数据持久化在卷中,可以在安装时指定 `storage` 参数,例如:
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server storage=1G
|
||||
vela addon enable prometheus-server storage=1G
|
||||
```
|
||||
|
||||
这将创建 PersistentVolumeClaims 并让插件使用提供的存储。 即使插件被禁用,存储也不会自动回收。 你需要手动清理存储。
|
||||
|
|
@ -357,9 +359,9 @@ my-grafana https://grafana-rngwzwnsuvl4s9p66m.grafana.aliyuncs.com:80/ Beare
|
|||
|
||||
```shell
|
||||
# 显示你拥有的所有 dashboard
|
||||
> kubectl get grafanadashboard -l grafana=my-grafana
|
||||
kubectl get grafanadashboard -l grafana=my-grafana
|
||||
# 显示你拥有的所有数据源
|
||||
> kubectl get grafanadatasource -l grafana=my-grafana
|
||||
kubectl get grafanadatasource -l grafana=my-grafana
|
||||
```
|
||||
|
||||
更多详情,你可以参考 [vela-prism](https://github.com/kubevela/prism#grafana-related-apis)。
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Custom Image Delivery
|
||||
title: Custom Container Delivery
|
||||
---
|
||||
|
||||
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before you start, you must have the platform manager's permission.
|
||||
|
|
|
|||
|
|
@ -90,7 +90,7 @@ spec:
|
|||
|
||||
具体抽象方式和交付方式的编写可以查阅对应的文档,这里以一个完整的例子介绍组件定义的工作流程。
|
||||
|
||||
<detail>
|
||||
<details>
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
@ -192,7 +192,7 @@ spec:
|
|||
...
|
||||
}
|
||||
```
|
||||
</detail>
|
||||
</details>
|
||||
|
||||
如上所示,这个组件定义的名字叫 `helm`,一经注册,最终用户在 Application 的组件类型(`components[*].type`)字段就可以填写这个类型。
|
||||
|
||||
|
|
|
|||
|
|
@ -11,8 +11,8 @@
|
|||
"message": "核心概念",
|
||||
"description": "The label for Core Concepts in sidebar docs"
|
||||
},
|
||||
"sidebar.docs.category.CUE in KubeVela": {
|
||||
"message": "CUE 语言",
|
||||
"sidebar.docs.category.Manage Definition with CUE": {
|
||||
"message": "使用 CUE 扩展模块定义",
|
||||
"description": "The label for category Learning CUE in sidebar docs"
|
||||
},
|
||||
"sidebar.docs.category.Helm": {
|
||||
|
|
@ -310,8 +310,8 @@
|
|||
"sidebar.docs.category.Kubernetes Manifest CD": {
|
||||
"message": "Kubernetes 资源交付"
|
||||
},
|
||||
"sidebar.docs.category.General CD Features": {
|
||||
"message": "通用功能"
|
||||
"sidebar.docs.category.CD Policies": {
|
||||
"message": "资源交付策略"
|
||||
},
|
||||
"sidebar.docs.category.UX Customization": {
|
||||
"message": "定制 UI"
|
||||
|
|
@ -330,5 +330,11 @@
|
|||
},
|
||||
"sidebar.docs.doc.Kubernetes": {
|
||||
"message": "基于 Kubernetes 安装"
|
||||
},
|
||||
"sidebar.docs.category.Day-2 Operations": {
|
||||
"message": "应用运维"
|
||||
},
|
||||
"sidebar.docs.category.Registry Integration": {
|
||||
"message": "注册中心集成"
|
||||
}
|
||||
}
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
|
||||
# KubeVela Dashboard (WIP)
|
||||
|
||||
KubeVela has a simple client side dashboard for you to interact with. The functionality is equivalent to the vela cli.
|
||||
|
||||
```bash
|
||||
$ vela dashboard
|
||||
```
|
||||
|
||||
> NOTE: this feature is still under development.
|
||||
|
|
@ -9,8 +9,7 @@ title: 金丝雀发布
|
|||
1. 通过如下命令启用 [`kruise-rollout`](../../reference/addons/kruise-rollout) 插件,金丝雀发布依赖于 [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
2. 请确保在集群中至少安装一种 [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/)。
|
||||
|
|
|
|||
|
|
@ -4,9 +4,11 @@ title: 依赖关系
|
|||
|
||||
本节将介绍如何在 KubeVela 中指定工作流步骤的依赖关系。
|
||||
|
||||
> 注意:在当前版本(1.4)中,工作流中的步骤是顺序执行的,这意味着步骤间有一个隐式的依赖关系,即:下一个步骤依赖上一个步骤的成功执行。此时,在工作流中指定依赖关系的意义可能不大。
|
||||
>
|
||||
> 在未来的版本(1.5+)中,你将可以显示指定工作流步骤的执行方式(如:改成 DAG 并行执行),此时,你可以通过指定步骤的依赖关系来控制工作流的执行。
|
||||
:::note
|
||||
在 1.4及以前版本中,工作流中的步骤是顺序执行的,这意味着步骤间有一个隐式的依赖关系,即:下一个步骤依赖上一个步骤的成功执行。此时,在工作流中指定依赖关系的意义可能不大。
|
||||
|
||||
在版本 1.5+ 中,你将可以显式指定工作流步骤的执行方式(如:改成 DAG 并行执行),此时,你可以通过指定步骤的依赖关系来控制工作流的执行。
|
||||
:::
|
||||
|
||||
## 如何使用
|
||||
|
||||
|
|
|
|||
|
|
@ -6,9 +6,10 @@ title: 子步骤
|
|||
|
||||
KubeVela 工作流中有一个特殊的步骤类型 `step-group`,在使用步骤组类型的步骤时,你可以在其中声明子步骤。
|
||||
|
||||
> 注意:在当前版本(1.4)中,步骤组中的子步骤们是并发执行的。
|
||||
>
|
||||
> 在未来的版本(1.5+)中,你将可以显示指定工作流步骤及子步骤的执行方式。
|
||||
:::note
|
||||
在 v1.4 及以前版本中,步骤组中的子步骤们是并发执行的。
|
||||
在 1.5+ 版本中,你将可以显式指定工作流步骤及子步骤的执行方式。
|
||||
:::
|
||||
|
||||
部署如下例子:
|
||||
|
||||
|
|
|
|||
|
|
@ -10,13 +10,18 @@ title: 暂停和继续
|
|||
|
||||
### 手动暂停
|
||||
|
||||
如果你有一个正在运行的应用,并且你希望暂停它的执行,你可以使用 `vela workflow suspend` 来暂停该工作流。
|
||||
如果你有一个正在运行工作流的应用,并且你希望暂停它的执行,你可以使用 `vela workflow suspend` 来暂停该工作流,在未来可以通过 `vela workflow resume` 继续工作流。
|
||||
|
||||
* 暂停工作流
|
||||
|
||||
```bash
|
||||
$ vela workflow suspend my-app
|
||||
Successfully suspend workflow: my-app
|
||||
vela workflow suspend my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
如果工作流已经执行完毕,对应用使用 `vela workflow suspend` 命令不会产生任何效果。
|
||||
:::
|
||||
|
||||
### 使用暂停步骤
|
||||
|
||||
部署如下例子:
|
||||
|
|
@ -25,7 +30,7 @@ Successfully suspend workflow: my-app
|
|||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: suspend
|
||||
name: suspend-demo
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
|
|
@ -56,10 +61,16 @@ spec:
|
|||
使用 vela status 命令查看应用状态:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: workflowSuspending
|
||||
|
|
@ -75,12 +86,10 @@ Workflow:
|
|||
name:apply1
|
||||
type:apply-component
|
||||
phase:succeeded
|
||||
message:
|
||||
- id:xvmda4he5e
|
||||
name:suspend
|
||||
type:suspend
|
||||
phase:running
|
||||
message:
|
||||
|
||||
Services:
|
||||
|
||||
|
|
@ -90,6 +99,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
可以看到,当第一个步骤执行完成之后,会开始执行 `suspend` 步骤。而这个步骤会让工作流进入暂停状态。
|
||||
|
||||
|
|
@ -102,17 +112,22 @@ Services:
|
|||
以上面处于暂停状态的应用为例:
|
||||
|
||||
```bash
|
||||
$ vela workflow resume suspend
|
||||
Successfully resume workflow: suspend
|
||||
vela workflow resume suspend-demo
|
||||
```
|
||||
|
||||
成功继续工作流后,查看应用的状态:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: running
|
||||
|
|
@ -154,6 +169,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
可以看到,工作流已经继续执行完毕。
|
||||
|
||||
|
|
@ -161,11 +177,28 @@ Services:
|
|||
|
||||
当工作流处于暂停状态时,如果你想终止它,你可以使用 `vela workflow terminate` 命令来终止工作流。
|
||||
|
||||
* 终止工作流
|
||||
|
||||
```bash
|
||||
$ vela workflow terminate my-app
|
||||
Successfully terminate workflow: my-app
|
||||
vela workflow terminate my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
区别于暂停,终止的工作流不能继续执行,只能重新运行工作流。重新运行意味着工作流会重新开始执行所有工作流步骤,而继续工作流则是从暂停的步骤后面继续执行。
|
||||
:::
|
||||
|
||||
* 重新运行工作流
|
||||
|
||||
```bash
|
||||
vela workflow restart my-app
|
||||
```
|
||||
|
||||
:::caution
|
||||
一旦应用被终止,KubeVela 控制器不会再对资源做状态维持,你可以对底层资源做手动修改但请注意防止配置漂移。
|
||||
:::
|
||||
|
||||
工作流执行完毕进入正常运行状态的应用无法被终止或重新运行。
|
||||
|
||||
### 自动继续工作流
|
||||
|
||||
如果你希望经过了一段时间后,工作流能够自动被继续。那么,你可以在 `suspend` 步骤中加上 `duration` 参数。当 `duration` 时间超过后,工作流将自动继续执行。
|
||||
|
|
@ -209,7 +242,13 @@ spec:
|
|||
查看应用状态:
|
||||
|
||||
```bash
|
||||
$ vela status auto-resume
|
||||
vela status auto-resume
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>期望输出</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: auto-resume
|
||||
|
|
@ -255,4 +294,6 @@ Services:
|
|||
No trait applied
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
可以看到,`suspend` 步骤在五秒后自动执行成功,继续了工作流。
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: 步骤超时
|
||||
---
|
||||
|
||||
> 注意:你需要升级到 1.5 及以上版本来使用超时功能。
|
||||
:::note
|
||||
你需要升级到 1.5 及以上版本来使用超时功能。
|
||||
:::
|
||||
|
||||
本节将介绍如何在 KubeVela 中为工作流步骤添加超时时间。
|
||||
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
title: 自定义组件
|
||||
---
|
||||
|
||||
> 在阅读本部分之前,请确保你已经了解 KubeVela 中 [组件定义(ComponentDefinition](../oam/x-definition#组件定义(ComponentDefinition)) 的概念且学习掌握了 [CUE 的基本知识](../cue/basic)
|
||||
> 在阅读本部分之前,请确保你已经了解 KubeVela 中 [组件定义(ComponentDefinition)](../oam/x-definition#组件定义(ComponentDefinition)) 的概念且学习掌握了 [CUE 的基本知识](../cue/basic)
|
||||
|
||||
本节将以组件定义的例子展开说明,介绍如何使用 [CUE](../cue/basic) 通过组件定义 `ComponentDefinition` 来自定义应用部署计划的组件。
|
||||
|
||||
|
|
@ -106,7 +106,7 @@ template: {
|
|||
kind: "Deployment"
|
||||
}
|
||||
outputs: {}
|
||||
parameter: {
|
||||
parameters: {
|
||||
name: string
|
||||
image: string
|
||||
}
|
||||
|
|
@ -477,5 +477,5 @@ output: {
|
|||
|
||||
## 下一步
|
||||
|
||||
* 了解如何基于 CUE [自定义运维特征](../traits/customize-trait) in CUE。
|
||||
* 了解如何基于 CUE [自定义运维特征](../traits/customize-trait)。
|
||||
* 了解如何为组件和运维特征模块[定义健康状态](../traits/status)。
|
||||
|
|
@ -6,7 +6,7 @@ title: 基础入门
|
|||
|
||||
## 概述
|
||||
|
||||
KubeVela 将 CUE 作为应用交付核心依赖和扩展方式的原因如下::
|
||||
KubeVela 将 CUE 作为应用交付核心依赖和扩展方式的原因如下:
|
||||
|
||||
- **CUE 本身就是为大规模配置而设计。** CUE 能够感知非常复杂的配置文件,并且能够安全地更改可修改配置中成千上万个对象的值。这非常符合 KubeVela 的目标,即以可编程的方式,去定义和交付生产级别的应用程序。
|
||||
- **CUE 支持一流的代码生成和自动化。** CUE 原生支持与现有工具以及工作流进行集成,反观其他工具则需要自定义复杂的方案才能实现。例如,需要手动使用 Go 代码生成 OpenAPI 模式。KubeVela 也是依赖 CUE 该特性进行构建开发工具和 GUI 界面的。
|
||||
|
|
|
|||
|
|
@ -56,7 +56,7 @@ spec:
|
|||
|
||||
在实际使用时,用户通过上述 Application 对象来引用预置的组件、运维特征、应用策略、以及工作流节点模块,填写这些模块暴露的用户参数即可完成一次对应用交付的建模。
|
||||
|
||||
> 注意:上诉可插拔模块在 OAM 中称为 X-Definitions,Application 对象负责引用 X-Definitions 并对用户输入进行校验,而各模块具体的可填写参数则是约束在相应的 X-Definition 文件当中的。具体请参考: [模块定义(Definition)](./x-definition) 章节。
|
||||
> 注意:上述可插拔模块在 OAM 中称为 X-Definitions,Application 对象负责引用 X-Definitions 并对用户输入进行校验,而各模块具体的可填写参数则是约束在相应的 X-Definition 文件当中的。具体请参考: [模块定义(Definition)](./x-definition) 章节。
|
||||
|
||||
## 组件(Component)
|
||||
|
||||
|
|
|
|||
|
|
@ -90,7 +90,7 @@ spec:
|
|||
|
||||
具体抽象方式和交付方式的编写可以查阅对应的文档,这里以一个完整的例子介绍组件定义的工作流程。
|
||||
|
||||
<detail>
|
||||
<details>
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
@ -192,7 +192,7 @@ spec:
|
|||
...
|
||||
}
|
||||
```
|
||||
</detail>
|
||||
</details>
|
||||
|
||||
如上所示,这个组件定义的名字叫 `helm`,一经注册,最终用户在 Application 的组件类型(`components[*].type`)字段就可以填写这个类型。
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,387 @@
|
|||
---
|
||||
title: 自动化可观测性
|
||||
---
|
||||
|
||||
可观测性对于基础架构和应用程序至关重要。 如果没有可观测性系统,就很难确定系统崩溃时发生了什么。 相反,强大的可观测性系统不仅可以为使用者提供信心,还可以帮助开发人员快速定位整个系统内部的性能瓶颈或薄弱环节。
|
||||
|
||||
为了帮助用户构建自己的可观测性系统,KubeVela 提供了一些插件,包括:
|
||||
|
||||
- prometheus-server: 以时间序列来记录指标的服务,支持灵活的查询。
|
||||
- kube-state-metrics: Kubernetes 系统的指标收集器。
|
||||
- node-exporter: Kubernetes 运行中的节点的指标收集器。
|
||||
- grafana: 提供分析和交互式可视化的 Web 应用程序。
|
||||
|
||||
以后的版本中将引入更多用于 logging 和 tracing 的插件。
|
||||
|
||||
## 前提条件
|
||||
|
||||
1. 可观测性套件包括几个插件,它们需要一些计算资源才能正常工作。 集群的推荐安装资源是 2 核 + 4 Gi 内存。
|
||||
|
||||
2. 安装所需的 KubeVela 版本(服务端控制器和客户端 CLI)**不低于** v1.5.0-beta.4。
|
||||
|
||||
## 快速开始
|
||||
|
||||
要启用插件套件,只需运行 `vela addon enable` 命令,如下所示。
|
||||
|
||||
:::tip
|
||||
如果你的 KubeVela 是多集群场景,请参阅下面的 [多集群安装](#多集群安装) 章节。
|
||||
:::
|
||||
|
||||
1. 安装 kube-state-metrics 插件
|
||||
|
||||
```shell
|
||||
vela addon enable kube-state-metrics
|
||||
```
|
||||
|
||||
2. 安装 node-exporter 插件
|
||||
|
||||
```shell
|
||||
vela addon enable node-exporter
|
||||
```
|
||||
|
||||
3. 安装 prometheus-server
|
||||
|
||||
```shell
|
||||
vela addon enable prometheus-server
|
||||
```
|
||||
|
||||
4. 安装 grafana 插件
|
||||
|
||||
```shell
|
||||
vela addon enable grafana
|
||||
```
|
||||
|
||||
5. 通过端口转发访问 grafana
|
||||
|
||||
```shell
|
||||
kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
```
|
||||
|
||||
现在在浏览器中访问 `http://localhost:8080` 就可以访问你的 grafana。 默认的用户名和密码分别是 `admin` 和 `kubevela`。
|
||||
|
||||
## 自动化的 Dashboard
|
||||
|
||||
有四个自动化的 Dashboard 可以浏览和查看你的系统。
|
||||
|
||||
### KubeVela Application
|
||||
|
||||
这个 dashboard 展示了应用的基本信息。
|
||||
|
||||
URL: http://localhost:8080/d/application-overview/kubevela-applications
|
||||
|
||||

|
||||
|
||||
<details>
|
||||
KubeVela Application dashboard 显示了应用的元数据概况。 它直接访问 Kubernetes API 来检索运行时应用的信息,你可以将其作为入口。
|
||||
|
||||
---
|
||||
|
||||
**Basic Information** 部分将关键信息提取到面板中,并提供当前应用最直观的视图。
|
||||
|
||||
---
|
||||
|
||||
**Related Resource** 部分显示了与应用本身一起工作的那些资源,包括托管资源、记录的 ResourceTracker 和修正。
|
||||
|
||||
</details>
|
||||
|
||||
### Kubernetes Deployment
|
||||
|
||||
这个 dashboard 显示原生 deployment 的概况。 你可以查看跨集群的 deployment 的信息。
|
||||
|
||||
URL: http://localhost:8080/d/deployment-overview/kubernetes-deployment
|
||||
|
||||

|
||||
|
||||
<details>
|
||||
Kubernetes Deployment dashboard 向你提供 deployment 的详细运行状态。
|
||||
|
||||
---
|
||||
|
||||
**Pods** 面板显示该 deployment 当前正在管理的 pod。
|
||||
|
||||
---
|
||||
|
||||
**Replicas** 面板显示副本数量如何变化,用于诊断你的 deployment 何时以及如何转变到不希望的状态。
|
||||
|
||||
---
|
||||
|
||||
**Pod** 部分包含资源的详细使用情况(包括 CPU / 内存 / 网络 / 存储),可用于识别 pod 是否面临资源压力或产生/接收意想不到的流量。
|
||||
|
||||
</details>
|
||||
|
||||
### KubeVela 系统
|
||||
|
||||
这个 dashboard 显示 KubeVela 系统的概况。 它可以用来查看 KubeVela 控制器是否健康。
|
||||
|
||||
URL: http://localhost:8080/d/kubevela-system/kubevela-system
|
||||
|
||||

|
||||
|
||||
<details>
|
||||
KubeVela System dashboard 提供 KubeVela 核心模块的运行详细信息,包括控制器和集群网关。 预计将来会添加其他模块,例如 velaux 或 prism。
|
||||
|
||||
---
|
||||
|
||||
**Computation Resource** 部分显示了核心模块的使用情况。 它可用于追踪是否存在内存泄漏(如果内存使用量不断增加)或处于高压状态(cpu 使用率总是很高)。 如果内存使用量达到资源限制,则相应的模块将被杀死并重新启动,这表明计算资源不足。你应该为它们添加更多的 CPU/内存。
|
||||
|
||||
---
|
||||
|
||||
**Controller** 部分包括各种面板,可帮助诊断你的 KubeVela 控制器的瓶颈。
|
||||
|
||||
**Controller Queue** 和 **Controller Queue Add Rate** 面板显示控制器工作队列的变化。 如果控制器队列不断增加,说明系统中应用过多或应用的变化过多,控制器已经不能及时处理。 那么这意味着 KubeVela 控制器存在性能问题。 控制器队列的临时增长是可以容忍的,但维持很长时间将会导致内存占用的增加,最终导致内存不足的问题。
|
||||
|
||||
**Reconcile Rate** 和 **Average Reconcile Time** 面板显示控制器状态的概况。 如果调和速率稳定且平均调和时间合理(例如低于 500 毫秒,具体取决于你的场景),则你的 KubeVela 控制器是健康的。 如果控制器队列入队速率在增加,但调和速率没有上升,会逐渐导致控制器队列增长并引发问题。 有多种情况表明你的控制器运行状况不佳:
|
||||
|
||||
1. Reconcile 是健康的,但是应用太多,你会发现一切都很好,除了控制器队列指标增加。 检查控制器的 CPU/内存使用情况。 你可能需要添加更多的计算资源。
|
||||
2. 由于错误太多,调和不健康。 你会在 **Reconcile Rate** 面板中发现很多错误。 这意味着你的系统正持续面临应用的处理错误。 这可能是由错误的应用配置或运行工作流时出现的意外错误引起的。 检查应用详细信息并查看哪些应用导致错误。
|
||||
3. 由于调和时间过长导致的调整不健康。 你需要检查 **ApplicationController Reconcile Time** 面板,看看它是常见情况(平均调和时间高),还是只有部分应用有问题(p95 调和时间高)。 对于前一种情况,通常是由于 CPU 不足(CPU 使用率高)或过多的请求和 kube-apiserver 限制了速率(检查 **ApplicationController Client Request Throughput** 和 **ApplicationController Client Request Average Time** 面板并查看哪些资源请求缓慢或过多)。 对于后一种情况,你需要检查哪个应用很大并且使用大量时间进行调和。
|
||||
|
||||
有时你可能需要参考 **ApplicationController Reconcile Stage Time**,看看是否有一些特殊的调和阶段异常。 例如,GCResourceTracker 使用大量时间意味着在 KubeVela 系统中可能存在阻塞回收资源的情况。
|
||||
|
||||
---
|
||||
|
||||
**Application** 部分显示了整个 KubeVela 系统中应用的概况。 可用于查看应用数量的变化和使用的工作流步骤。 **Workflow Initialize Rate** 是一个辅助面板,可用于查看启动新工作流执行的频率。 **Workflow Average Complete Time** 可以进一步显示完成整个工作流程所需的时间。
|
||||
|
||||
</details>
|
||||
|
||||
### Kubernetes APIServer
|
||||
|
||||
这个 dashboard 展示了所有 Kubernetes apiserver 的运行状态。
|
||||
|
||||
URL: http://localhost:8080/d/kubernetes-apiserver/kubernetes-apiserver
|
||||
|
||||

|
||||
|
||||
<details>
|
||||
Kubernetes APIServer dashboard 可帮助你查看 Kubernetes 系统最基本的部分。 如果你的 Kubernetes APIServer 运行不正常,你的 Kubernetes 系统中所有控制器和模块都会出现异常,无法成功处理请求。 因此务必确保此 dashboard 中的一切正常。
|
||||
|
||||
---
|
||||
|
||||
**Requests** 部分包括一系列面板,用来显示各种请求的 QPS 和延迟。 通常,如果请求过多, APIServer 可能无法响应。 这时候就可以看到是哪种类型的请求出问题了。
|
||||
|
||||
---
|
||||
|
||||
**WorkQueue** 部分显示 Kubernetes APIServer 的处理状态。 如果 **Queue Size** 很大,则表示请求数超出了你的 Kubernetes APIServer 的处理能力。
|
||||
|
||||
---
|
||||
|
||||
**Watches** 部分显示 Kubernetes APIServer 中的 watch 数量。 与其他类型的请求相比,WATCH 请求会持续消耗 Kubernetes APIServer 中的计算资源,因此限制 watch 的数量会有所帮助。
|
||||
|
||||
</details>
|
||||
|
||||
## 自定义
|
||||
|
||||
上述安装过程可以通过多种方式进行自定义。
|
||||
|
||||
### 多集群安装
|
||||
|
||||
如果你想在多集群场景中安装可观测性插件,请确保你的 Kubernetes 集群支持 LoadBalancer 服务并且可以相互访问。
|
||||
|
||||
默认情况下,`kube-state-metrics`、`node-exporter` 和 `prometheus-server` 的安装过程原生支持多集群(它们将自动安装到所有集群)。 但是要让控制平面上的 `grafana` 能够访问托管集群中的 prometheus-server,你需要使用以下命令来启用 `prometheus-server`。
|
||||
|
||||
```shell
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
```
|
||||
|
||||
这将安装 [thanos](https://github.com/thanos-io/thanos) sidecar 和 prometheus-server。 然后启用 grafana,你将能够看到聚合的 prometheus 指标。
|
||||
|
||||
你还可以使用以下命令选择要在哪个集群安装插件:
|
||||
|
||||
```shell
|
||||
vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
```
|
||||
|
||||
> 如果在安装插件后将新集群添加到控制平面,则需要重新启用插件才能使其生效。
|
||||
|
||||
### 自定义 Prometheus 配置
|
||||
|
||||
如果你想自定义安装 prometheus-server ,你可以把配置放到一个单独的 ConfigMap 中,比如在命名空间 o11y-system 中的 `my-prom`。 要将你的自定义配置分发到所有集群,你还可以使用 KubeVela Application 来完成这项工作。
|
||||
|
||||
#### 记录规则
|
||||
|
||||
例如,如果你想在所有集群中的所有 prometheus 服务配置中添加一些记录规则,你可以首先创建一个 Application 来分发你的记录规则,如下所示。
|
||||
|
||||
```yaml
|
||||
# my-prom.yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: my-prom
|
||||
namespace: o11y-system
|
||||
spec:
|
||||
components:
|
||||
- type: k8s-objects
|
||||
name: my-prom
|
||||
properties:
|
||||
objects:
|
||||
- apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: my-prom
|
||||
namespace: o11y-system
|
||||
data:
|
||||
my-recording-rules.yml: |
|
||||
groups:
|
||||
- name: example
|
||||
rules:
|
||||
- record: apiserver:requests:rate5m
|
||||
expr: sum(rate(apiserver_request_total{job="kubernetes-nodes"}[5m]))
|
||||
policies:
|
||||
- type: topology
|
||||
name: topology
|
||||
properties:
|
||||
clusterLabelSelector: {}
|
||||
```
|
||||
|
||||
然后你需要在 prometheus-server 插件的启用过程中添加 `customConfig` 参数,比如:
|
||||
|
||||
```shell
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
```
|
||||
|
||||
然后你将看到记录规则配置被分发到所有 prometheus 实例。
|
||||
|
||||
#### 告警规则和其他配置
|
||||
|
||||
要对告警规则等其他配置进行自定义,过程与上面显示的记录规则示例相同。 你只需要在 application 中更改/添加 prometheus 配置。
|
||||
|
||||
```yaml
|
||||
data:
|
||||
my-alerting-rules.yml: |
|
||||
groups:
|
||||
- name: example
|
||||
rules:
|
||||
- alert: HighApplicationQueueDepth
|
||||
expr: sum(workqueue_depth{app_kubernetes_io_name="vela-core",name="application"}) > 100
|
||||
for: 10m
|
||||
annotations:
|
||||
summary: High Application Queue Depth
|
||||
```
|
||||
|
||||

|
||||
|
||||
### 自定义 Grafana 凭证
|
||||
|
||||
如果要更改 Grafana 的默认用户名和密码,可以运行以下命令:
|
||||
|
||||
```shell
|
||||
vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
```
|
||||
|
||||
这会将你的默认管理员用户更改为 `super-user`,并将其密码更改为 `PASSWORD`。
|
||||
|
||||
### 自定义存储
|
||||
|
||||
如果你希望 prometheus-server 和 grafana 将数据持久化在卷中,可以在安装时指定 `storage` 参数,例如:
|
||||
|
||||
```shell
|
||||
vela addon enable prometheus-server storage=1G
|
||||
```
|
||||
|
||||
这将创建 PersistentVolumeClaims 并让插件使用提供的存储。 即使插件被禁用,存储也不会自动回收。 你需要手动清理存储。
|
||||
|
||||
## 集成其他 Prometheus 和 Grafana
|
||||
|
||||
有时,你可能已经拥有 Prometheus 和 Grafana 实例。 它们可能由其他工具构建,或者来自云提供商。 按照以下指南与现有系统集成。
|
||||
|
||||
### 集成 Prometheus
|
||||
|
||||
如果你已经有外部 prometheus 服务,并且希望将其连接到 Grafana(由 vela 插件创建),你可以使用 KubeVela Application 创建一个 GrafanaDatasource 从而注册这个外部的 prometheus 服务。
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: register-prometheus
|
||||
spec:
|
||||
components:
|
||||
- type: grafana-datasource
|
||||
name: my-prometheus
|
||||
properties:
|
||||
access: proxy
|
||||
basicAuth: false
|
||||
isDefault: false
|
||||
name: MyPrometheus
|
||||
readOnly: true
|
||||
withCredentials: true
|
||||
jsonData:
|
||||
httpHeaderName1: Authorization
|
||||
tlsSkipVerify: true
|
||||
secureJsonFields:
|
||||
httpHeaderValue1: <token of your prometheus access>
|
||||
type: prometheus
|
||||
url: <my-prometheus url>
|
||||
```
|
||||
|
||||
例如,如果你在阿里云(ARMS)上使用 Prometheus 服务,你可以进入 Prometheus 设置页面,找到访问的 url 和 token。
|
||||
|
||||

|
||||
|
||||
> 你需要确定你的 grafana 已经可以访问。你可以执行 `kubectl get grafana default` 查看它是否已经存在。
|
||||
|
||||
### 集成 Grafana
|
||||
|
||||
如果你已经有 Grafana,与集成 Prometheus 类似,你可以通过 KubeVela Application 注册 Grafana 的访问信息。
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: register-grafana
|
||||
spec:
|
||||
components:
|
||||
- type: grafana-access
|
||||
name: my-grafana
|
||||
properties:
|
||||
name: my-grafana
|
||||
endpoint: <my-grafana url>
|
||||
token: <access token>
|
||||
```
|
||||
|
||||
要获得 Grafana 访问权限,你可以进入 Grafana 实例并配置 API 密钥。
|
||||
|
||||

|
||||
|
||||
然后将 token 复制到你的 grafana 注册配置中。
|
||||
|
||||

|
||||
|
||||
Application 成功派发后,你可以通过运行以下命令检查注册情况。
|
||||
|
||||
```shell
|
||||
> kubectl get grafana
|
||||
NAME ENDPOINT CREDENTIAL_TYPE
|
||||
default http://grafana.o11y-system:3000 BasicAuth
|
||||
my-grafana https://grafana-rngwzwnsuvl4s9p66m.grafana.aliyuncs.com:80/ BearerToken
|
||||
```
|
||||
|
||||
现在,你也可以通过原生 Kubernetes API 在 grafana 实例上管理 dashboard 和数据源。
|
||||
|
||||
```shell
|
||||
# 显示你拥有的所有 dashboard
|
||||
kubectl get grafanadashboard -l grafana=my-grafana
|
||||
# 显示你拥有的所有数据源
|
||||
kubectl get grafanadatasource -l grafana=my-grafana
|
||||
```
|
||||
|
||||
更多详情,你可以参考 [vela-prism](https://github.com/kubevela/prism#grafana-related-apis)。
|
||||
|
||||
### 集成其他工具和系统
|
||||
|
||||
用户可以利用社区的各种工具或生态系统来构建自己的可观测性系统,例如 prometheus-operator 或 DataDog。 到目前为止,针对这些集成,KubeVela 并没有给出最佳实践。 未来我们可能会通过 KubeVela 插件集成那些流行的项目。 我们也欢迎社区贡献更广泛的探索和更多的联系。
|
||||
|
||||
## 对比
|
||||
|
||||
### 与 Helm 对比
|
||||
|
||||
尽管可以通过 Helm 将这些资源安装到你的 Kubernetes 系统中,但使用 KubeVela 插件安装它们的主要好处之一是它原生地支持多集群交付,这意味着,一旦你将托管集群添加到 KubeVela 控制面,你就能够通过一条命令在所有集群中安装、升级或卸载这些插件。
|
||||
|
||||
### 与以前的可观测性插件对比
|
||||
|
||||
旧的 [KubeVela 可观测性插件](https://github.com/kubevela/catalog/tree/master/experimental/addons/observability) 以一个整体的方式安装 prometheus、grafana 和其他一些组件。 最新的可观测性插件套件(KubeVela v1.5.0 之后)将其分为多个部分,允许用户只安装其中的一部分。
|
||||
|
||||
此外,旧的可观测性插件依赖于 Fluxcd 插件以 Helm Release 的方式安装组件。 最新版本使用 KubeVela 中的原生 webservice 组件,可以更灵活的进行自定义。
|
||||
|
||||
## 展望
|
||||
|
||||
KubeVela 将来会集成更多的可观测性插件,例如 logging 和 tracing 插件。 像 [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) 这样的社区运营商也提供了管理可观测性应用程序的替代方法,这些方法也打算包含在 KubeVela 插件中。 我们也欢迎通过 KubeVela 插件生态系统进行更多的集成。
|
||||
|
After Width: | Height: | Size: 276 KiB |
|
After Width: | Height: | Size: 70 KiB |
|
After Width: | Height: | Size: 85 KiB |
|
After Width: | Height: | Size: 488 KiB |
|
After Width: | Height: | Size: 279 KiB |
|
After Width: | Height: | Size: 322 KiB |
|
After Width: | Height: | Size: 473 KiB |
|
After Width: | Height: | Size: 135 KiB |
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Custom Image Delivery
|
||||
title: Custom Container Delivery
|
||||
---
|
||||
|
||||
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before you start, you must have the platform manager's permission.
|
||||
|
|
|
|||
90
sidebars.js
|
|
@ -77,8 +77,11 @@ module.exports = {
|
|||
type: 'category',
|
||||
label: 'Terraform',
|
||||
collapsed: false,
|
||||
link: {
|
||||
type: "doc",
|
||||
id: 'end-user/components/cloud-services/cloud-resource-scenarios',
|
||||
},
|
||||
items: [
|
||||
'end-user/components/cloud-services/cloud-resource-scenarios',
|
||||
'end-user/components/cloud-services/provision-and-consume-database',
|
||||
'end-user/components/cloud-services/provision-and-initiate-database',
|
||||
'end-user/components/cloud-services/secure-your-database-connection',
|
||||
|
|
@ -104,52 +107,72 @@ module.exports = {
|
|||
'end-user/components/ref-objects',
|
||||
],
|
||||
},
|
||||
'tutorials/multi-env',
|
||||
{
|
||||
type: 'category',
|
||||
label: 'Multi Environment Delivery',
|
||||
collapsed: true,
|
||||
items: [
|
||||
'case-studies/initialize-env',
|
||||
'tutorials/multi-env'
|
||||
],
|
||||
},
|
||||
{
|
||||
type: 'category',
|
||||
label: 'GitOps',
|
||||
collapsed: true,
|
||||
items: ['case-studies/gitops', 'end-user/gitops/fluxcd'],
|
||||
link: {
|
||||
type: "doc",
|
||||
id: 'case-studies/gitops',
|
||||
},
|
||||
items: ['end-user/gitops/fluxcd'],
|
||||
},
|
||||
{
|
||||
type: 'category',
|
||||
label: 'Declarative Workflow',
|
||||
collapsed: true,
|
||||
items: [
|
||||
'end-user/workflow/overview',
|
||||
'end-user/workflow/suspend',
|
||||
'end-user/workflow/step-group',
|
||||
'end-user/workflow/dependency',
|
||||
'end-user/workflow/inputs-outputs',
|
||||
'end-user/workflow/if-condition',
|
||||
'end-user/workflow/timeout',
|
||||
],
|
||||
},
|
||||
{
|
||||
'General CD Features': [
|
||||
'end-user/version-control',
|
||||
'tutorials/dry-run',
|
||||
'end-user/workflow/component-dependency-parameter',
|
||||
'CD Policies': [
|
||||
'end-user/policies/shared-resource',
|
||||
'case-studies/initialize-env',
|
||||
'end-user/policies/apply-once',
|
||||
'end-user/policies/gc',
|
||||
'how-to/dashboard/config/helm-repo',
|
||||
'how-to/dashboard/config/image-registry',
|
||||
'tutorials/access-application',
|
||||
'tutorials/cloud-shell',
|
||||
],
|
||||
},
|
||||
{
|
||||
type: 'category',
|
||||
label: 'CI Integration',
|
||||
collapsed: true,
|
||||
link: {
|
||||
type: "doc",
|
||||
id: 'how-to/dashboard/trigger/overview',
|
||||
},
|
||||
items: [
|
||||
'how-to/dashboard/trigger/overview',
|
||||
'tutorials/jenkins',
|
||||
'tutorials/trigger',
|
||||
],
|
||||
},
|
||||
{
|
||||
'Day-2 Operations': [
|
||||
'tutorials/dry-run',
|
||||
'tutorials/access-application',
|
||||
'tutorials/debug-app',
|
||||
'tutorials/cloud-shell',
|
||||
],
|
||||
},
|
||||
'end-user/version-control',
|
||||
'end-user/workflow/component-dependency-parameter',
|
||||
{
|
||||
type: 'category',
|
||||
label: 'Declarative Workflow',
|
||||
collapsed: true,
|
||||
link: {
|
||||
type: "doc",
|
||||
id: 'end-user/workflow/overview',
|
||||
},
|
||||
items: [
|
||||
'end-user/workflow/suspend',
|
||||
'end-user/workflow/step-group',
|
||||
'end-user/workflow/dependency',
|
||||
'end-user/workflow/inputs-outputs',
|
||||
'end-user/workflow/if-condition',
|
||||
'end-user/workflow/timeout',
|
||||
],
|
||||
},
|
||||
'platform-engineers/operations/observability',
|
||||
'end-user/components/more',
|
||||
],
|
||||
|
|
@ -175,6 +198,12 @@ module.exports = {
|
|||
'how-to/dashboard/config/dex-connectors',
|
||||
],
|
||||
},
|
||||
{
|
||||
'Registry Integration': [
|
||||
'how-to/dashboard/config/helm-repo',
|
||||
'how-to/dashboard/config/image-registry',
|
||||
],
|
||||
},
|
||||
'how-to/dashboard/user/project',
|
||||
{
|
||||
'Authentication and Authorization': [
|
||||
|
|
@ -229,7 +258,7 @@ module.exports = {
|
|||
],
|
||||
},
|
||||
{
|
||||
'CUE in KubeVela': [
|
||||
'Manage Definition with CUE': [
|
||||
'platform-engineers/cue/basic',
|
||||
'platform-engineers/cue/definition-edit',
|
||||
'platform-engineers/components/custom-component',
|
||||
|
|
@ -278,8 +307,11 @@ module.exports = {
|
|||
{
|
||||
type: 'category',
|
||||
label: 'Community Verified Addons',
|
||||
link: {
|
||||
type: "doc",
|
||||
id: 'reference/addons/overview'
|
||||
},
|
||||
items: [
|
||||
'reference/addons/overview',
|
||||
'reference/addons/velaux',
|
||||
'reference/addons/rollout',
|
||||
'reference/addons/fluxcd',
|
||||
|
|
|
|||
|
|
@ -5,7 +5,9 @@ description: Configure a helm repository
|
|||
|
||||
In this guide, we will introduce how to use Integration to configure a private Helm repository and create a Helm type application that uses this repo.
|
||||
|
||||
Notice: You must enable the `fluxcd` addon firstly.
|
||||
:::note
|
||||
You must enable the `fluxcd` addon first.
|
||||
:::
|
||||
|
||||
## Create a helm repo
|
||||
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Overview of GitOps
|
||||
---
|
||||
|
||||
> This section will introduce how to use KubeVela in GitOps area and why.
|
||||
:::note
|
||||
This section will introduce how to use KubeVela in GitOps area and why.
|
||||
:::
|
||||
|
||||
GitOps is a continuous delivery method that allows developers to automatically deploy applications by changing code and declarative configurations in a Git repository, with "git-centric" operations such as PR and commit. For detailed benefits of GitOps, you can refer to [this blog](https://www.weave.works/blog/what-is-gitops-really).
|
||||
|
||||
|
|
|
|||
|
|
@ -1,16 +1,16 @@
|
|||
---
|
||||
title: Initialize/Destroy Environment
|
||||
title: Initialize/Destroy Infrastructure of Environment
|
||||
---
|
||||
|
||||
This section will introduce what is environment and how to initialize and destroy an environment with KubeVela easily.
|
||||
This section will introduce how to initialize and destroy infrastructure of environment with KubeVela easily.
|
||||
|
||||
## What is environment
|
||||
## What can be infrastructure of environment
|
||||
|
||||
An Application development team usually needs to initialize some shared environment for users. An environment is a logical concept that represents a set of common resources for Applications.
|
||||
An Application development team usually needs to initialize some shared environment for users. An environment is a logical concept that represents a set of common infrastructure resources for Applications.
|
||||
|
||||
For example, a team usually wants two environments: one for development, and one for production.
|
||||
|
||||
In general, the resource types that can be initialized include the following types:
|
||||
In general, the infra resource types that can be initialized include the following types:
|
||||
|
||||
1. One or more Kubernetes clusters. Different environments may need different sizes and versions of Kubernetes clusters. Environment initialization can also manage multiple clusters .
|
||||
|
||||
|
|
@ -32,8 +32,6 @@ For example, if both the test and develop environments rely on the same controll
|
|||
|
||||
### Directly use Application for initialization
|
||||
|
||||
> Make sure your KubeVela version is `v1.1.6+`.
|
||||
|
||||
If we want to use some CRD controller like [OpenKruise](https://github.com/openkruise/kruise) in cluster, we can use `Helm` to initialize `kruise`.
|
||||
|
||||
We can directly use Application to initialize a kruise environment. The application below will deploy a kruise controller in cluster.
|
||||
|
|
@ -225,7 +223,7 @@ $ kubectl logs -f log-read-worker-774b58f565-ch8ch
|
|||
|
||||
We can see that both components are running. The two components share the same PVC and use the same ConfigMap.
|
||||
|
||||
## Destroy the Environment
|
||||
## Destroy the infrastructure of environment
|
||||
|
||||
As we have already modeled the environment as a KubeVela Application, we can destroy the environment easily by deleting the application.
|
||||
|
||||
|
|
|
|||
|
|
@ -31,7 +31,9 @@ cluster-hangzhou-1 X509Certificate <ENDPOINT_HANGZHOU_1> true
|
|||
cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true
|
||||
```
|
||||
|
||||
> By default, the hub cluster where KubeVela locates is registered as the `local` cluster. You can use it like a managed cluster in spite that you cannot detach it or modify it.
|
||||
:::note
|
||||
By default, the hub cluster where KubeVela is located is registered as the `local` cluster. You can use it like a managed cluster, although you cannot detach or modify it.
|
||||
:::
|
||||
|
||||
## Deliver Application to Clusters
|
||||
|
||||
|
|
@ -197,8 +199,9 @@ spec:
|
|||
namespace: examples-alternative
|
||||
```
|
||||
|
||||
> Sometimes, for security issues, you might want to disable this behavior and retrict the resources to be deployed within the same namespace of the application. This can be done by setting `--allow-cross-namespace-resource=false` in the bootstrap parameter of the KubeVela controller.
|
||||
|
||||
:::tip
|
||||
Sometimes, for security reasons, you might want to disable this behavior and restrict the resources to be deployed within the same namespace as the application. This can be done by setting `--allow-cross-namespace-resource=false` in the [bootstrap parameter](../platform-engineers/system-operation/bootstrap-parameters) of the KubeVela controller.
|
||||
:::
|
||||
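As an illustration only (the controller deployment name and namespace below are assumptions about a default installation; prefer setting this through your install method, and see the bootstrap parameters document for the supported options), the flag could be appended to the controller arguments like this:

```shell
# Append the bootstrap flag to the KubeVela controller arguments.
# Adjust the deployment name and namespace to match your installation.
kubectl -n vela-system patch deployment kubevela-vela-core --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--allow-cross-namespace-resource=false"}]'
```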
|
||||
### Control the deploy workflow
|
||||
|
||||
|
|
@ -336,7 +339,9 @@ spec:
|
|||
policies: ["topology-hangzhou-clusters", "override-nginx-legacy-image", "override-high-availability"]
|
||||
```
|
||||
|
||||
> NOTE: The override policy is used to modify the basic configuration. Therefore, **it is designed to be used together with topology policy**. If you do not want to use topology policy, you can directly write configurations in the component part instead of using the override policy. *If you misuse the override policy in the deploy workflow step without topology policy, no error will be reported but you will find nothing is deployed.*
|
||||
:::note
|
||||
The override policy is used to modify the basic configuration. Therefore, **it is designed to be used together with topology policy**. If you do not want to use topology policy, you can directly write configurations in the component part instead of using the override policy. *If you misuse the override policy in the deploy workflow step without topology policy, no error will be reported but you will find nothing is deployed.*
|
||||
:::
|
||||
|
||||
The override policy has many advanced capabilities, such as adding new component or selecting components to use.
|
||||
The following example will first deploy an nginx webservice with `nginx:1.20` image to local cluster. Then two nginx webservices with `nginx` and `nginx:stable` images will be deployed to hangzhou clusters respectively.
|
||||
|
|
@ -399,7 +404,9 @@ spec:
|
|||
Sometimes, you may want to use the same policy across multiple applications or reuse a previous workflow to deploy different resources.
|
||||
To reduce repeated code, you can leverage external policies and workflows and refer to them in your applications.
|
||||
|
||||
> NOTE: you can only refer to Policy and Workflow within your application's namespace.
|
||||
:::caution
|
||||
You can only refer to Policy and Workflow within your application's namespace.
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1alpha1
|
||||
|
|
@ -455,7 +462,11 @@ spec:
|
|||
ref: make-release-in-hangzhou
|
||||
```
|
||||
|
||||
> NOTE: The internal policies will be loaded first. External policies will only be used when there is no corresponding policy inside the application. In the following example, we can reuse `topology-hangzhou-clusters` policy and `make-release-in-hangzhou` workflow but modify the `override-high-availability-webservice` policy by injecting the same-named policy inside the new application.
|
||||
:::note
|
||||
The internal policies will be loaded first. External policies will only be used when there is no corresponding policy inside the application.
|
||||
:::
|
||||
|
||||
In the following example, we can reuse `topology-hangzhou-clusters` policy and `make-release-in-hangzhou` workflow but modify the `override-high-availability-webservice` policy by injecting the same-named policy inside the new application.
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
|
|||
|
|
@ -167,7 +167,9 @@ To run vela-core locally for debugging with kubevela installed in the remote clu
|
|||
|
||||
Finally, you can use the commands in the above [Build](#build) and [Testing](#Testing) sections, such as `make run`, to code and debug in your local machine.
|
||||
|
||||
> Note you will not be able to test features relate with validating/mutating webhooks in this way.
|
||||
:::caution
|
||||
Note that you will not be able to test features related to validating/mutating webhooks in this way.
|
||||
:::
|
||||
|
||||
## Run VelaUX Locally
|
||||
|
||||
|
|
|
|||
|
|
@ -24,7 +24,7 @@ Building or installing addons is the most important way to extend KubeVela, ther
|
|||
KubeVela uses CUE as its core engine, and you can use CUE and CRD controllers to glue together almost every infrastructure capability.
|
||||
As a result, you can extend your platform with more powerful features.
|
||||
|
||||
- Start to [Learn CUE in KubeVela](../platform-engineers/cue/basic).
|
||||
- Start to [Learn Manage Definition with CUE](../platform-engineers/cue/basic).
|
||||
- Learn what is [CRD Controller](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) in Kubernetes.
|
||||
|
||||
## Contribution Guide
|
||||
|
|
|
|||
|
|
@ -31,7 +31,9 @@ contains the following attributes: name, character_set, description.
|
|||
|
||||
Applying the following application can create more than one database in an RDS instance.
|
||||
|
||||
> ⚠️ This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::caution
|
||||
This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
|
|||
|
|
@ -4,7 +4,9 @@ title: Provision and Binding Database
|
|||
|
||||
This tutorial will talk about how to provision and consume Alibaba Cloud RDS (and OSS) by Terraform.
|
||||
|
||||
> ⚠️ This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::caution
|
||||
This section requires your platform engineers have already enabled [cloud resources addon](../../../reference/addons/terraform).
|
||||
:::
|
||||
|
||||
Let's deploy the [application](https://github.com/kubevela/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml)
|
||||
below to provision Alibaba Cloud OSS and RDS cloud resources, and consume them by the web component.
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Needs More?
|
||||
title: Needs More Capabilities?
|
||||
---
|
||||
|
||||
KubeVela is programmable; it can be extended easily with [definitions](../../getting-started/definition). You have the following ways to discover and extend the platform.
|
||||
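For example, you can discover the definitions already registered on your platform from the CLI (a sketch; assuming the `vela def` subcommand is available in your CLI version):

```shell
# List the component, trait, policy and workflow-step definitions
# currently registered in the cluster.
vela def list
```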
|
|
@ -132,7 +132,9 @@ Once addon installed, end user can discover and use these capabilities immediate
|
|||
|
||||
### Uninstall Addon
|
||||
|
||||
> Please make sure the addon along with its capabilities is no longer used in any of your applications before uninstalling it.
|
||||
:::danger
|
||||
Please make sure the addon along with its capabilities is no longer used in any of your applications before uninstalling it.
|
||||
:::
|
||||
|
||||
```shell
|
||||
vela addon disable fluxcd
|
||||
|
|
@ -223,7 +225,7 @@ If you're a system infra or operator, you can refer to extension documents to le
|
|||
|
||||
If you're extremely interested in KubeVela, you can also extend more features as a developer.
|
||||
|
||||
- KubeVela use CUE as it's core engine, [learn CUE in KubeVela](../../platform-engineers/cue/basic) and try to extend with CUE configurations.
|
||||
- KubeVela uses CUE as its core engine, so [learn Manage Definition with CUE](../../platform-engineers/cue/basic) and try to extend capabilities with definitions.
|
||||
- Read the [developer guide](../../contributor/overview) to learn how to contribute and extend capabilities for KubeVela.
|
||||
|
||||
Welcome to the KubeVela community! We're eager to see you contribute your extensions.
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Distribute Reference Objects
|
||||
---
|
||||
|
||||
> This section requires you to know the basics about how to deploy [multi-cluster application](../../case-studies/multi-cluster) with policy and workflow.
|
||||
:::note
|
||||
This section requires you to know the basics about how to deploy [multi-cluster application](../../case-studies/multi-cluster) with policy and workflow.
|
||||
:::
|
||||
|
||||
You can reference and distribute existing Kubernetes objects with KubeVela in the following scenarios:
|
||||
|
||||
|
|
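Whatever the scenario, the reference itself is expressed through a `ref-objects` typed component. A minimal sketch (the referenced deployment name is hypothetical):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-sample
spec:
  components:
    - name: refer-existing-deploy
      type: ref-objects
      properties:
        objects:
          # reference a Deployment that already exists in the cluster
          - resource: deployment
            name: my-existing-deployment
```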
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: One-time Delivery(Working With Other Controllers)
|
||||
title: One-time Delivery (Coordinate with Multi-Controllers)
|
||||
---
|
||||
|
||||
By default, the KubeVela controller will prevent configuration drift for applied resources by reconciling them routinely. This is useful if you want to keep your application always having the desired configuration to avoid some unintentional changes by external modifiers.
|
||||
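If you instead want the resources to be delivered only once and leave later mutations to other controllers, the `apply-once` policy is the switch for that behavior. A minimal sketch, assuming the built-in `webservice` component type:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-sample
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
  policies:
    - name: apply-once
      type: apply-once
      properties:
        enable: true   # skip drift prevention so external changes are not reverted
```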
|
|
|
|||
|
|
@ -10,8 +10,10 @@ In garbage-collect policy, there are two major capabilities you can use.
|
|||
|
||||
Suppose you want to keep the resources created by the old version of the application. Use the garbage-collect policy and enable the option `keepLegacyResource`.
|
||||
|
||||
```yaml
|
||||
# app.yaml
|
||||
1. create app
|
||||
|
||||
```shell
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
@ -24,9 +26,10 @@ spec:
|
|||
image: oamdev/hello-world
|
||||
port: 8000
|
||||
traits:
|
||||
- type: ingress-1-20
|
||||
- type: gateway
|
||||
properties:
|
||||
domain: testsvc.example.com
|
||||
class: traefik
|
||||
domain: 47.251.8.82.nip.io
|
||||
http:
|
||||
"/": 8000
|
||||
policies:
|
||||
|
|
@ -34,24 +37,31 @@ spec:
|
|||
type: garbage-collect
|
||||
properties:
|
||||
keepLegacyResource: true
|
||||
EOF
|
||||
```
|
||||
|
||||
1. create app
|
||||
|
||||
``` shell
|
||||
vela up -f app.yaml
|
||||
```
|
||||
Check the status:
|
||||
|
||||
```shell
|
||||
$ vela ls
|
||||
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
|
||||
first-vela-app express-server webservice ingress-1-20 running healthy Ready:1/1 2022-04-06 16:20:25 +0800 CST
|
||||
vela status first-vela-app --tree
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS
|
||||
local ─── default ─┬─ Service/express-server updated
|
||||
├─ Deployment/express-server updated
|
||||
└─ Ingress/express-server updated
|
||||
```
|
||||
</details>
|
||||
|
||||
|
||||
2. update the app
|
||||
|
||||
```yaml
|
||||
# app1.yaml
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
@ -64,9 +74,10 @@ spec:
|
|||
image: oamdev/hello-world
|
||||
port: 8000
|
||||
traits:
|
||||
- type: ingress-1-20
|
||||
- type: gateway
|
||||
properties:
|
||||
domain: testsvc.example.com
|
||||
class: traefik
|
||||
domain: 47.251.8.82.nip.io
|
||||
http:
|
||||
"/": 8000
|
||||
policies:
|
||||
|
|
@ -74,50 +85,30 @@ spec:
|
|||
type: garbage-collect
|
||||
properties:
|
||||
keepLegacyResource: true
|
||||
EOF
|
||||
```
|
||||
|
||||
``` shell
|
||||
vela up -f app1.yaml
|
||||
```
|
||||
Check the status again:
|
||||
|
||||
```shell
|
||||
$ vela ls
|
||||
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
|
||||
first-vela-app express-server-1 webservice ingress-1-20 running healthy Ready:1/1 2022-04-06 16:20:25 +0800 CST
|
||||
vela status first-vela-app --tree
|
||||
```
|
||||
|
||||
Check whether the legacy resources are reserved.
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
> In the following steps, we'll use `kubectl` command to do some verification. You can also use `vela status first-vela-app` to check the aggregated application status and see if components are healthy.
|
||||
```shell
|
||||
CLUSTER NAMESPACE RESOURCE STATUS
|
||||
local ─── default ─┬─ Service/express-server outdated
|
||||
├─ Service/express-server-1 updated
|
||||
├─ Deployment/express-server outdated
|
||||
├─ Deployment/express-server-1 updated
|
||||
├─ Ingress/express-server outdated
|
||||
└─ Ingress/express-server-1 updated
|
||||
```
|
||||
</details>
|
||||
|
||||
```
|
||||
$ kubectl get deploy
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
express-server 1/1 1 1 10m
|
||||
express-server-1 1/1 1 1 40s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get svc
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
express-server ClusterIP 10.96.102.249 <none> 8000/TCP 10m
|
||||
express-server-1 ClusterIP 10.96.146.10 <none> 8000/TCP 46s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get ingress
|
||||
NAME CLASS HOSTS ADDRESS PORTS AGE
|
||||
express-server <none> testsvc.example.com 80 10m
|
||||
express-server-1 <none> testsvc.example.com 80 50s
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get resourcetracker
|
||||
NAME AGE
|
||||
first-vela-app-default 12m
|
||||
first-vela-app-v1-default 12m
|
||||
first-vela-app-v2-default 2m56s
|
||||
```
|
||||
You can see the legacy resources are reserved but their status is outdated; they will not be synced by the periodic reconciliation.
|
||||
|
||||
3. delete the app
|
||||
|
||||
|
|
@ -127,20 +118,21 @@ $ vela delete first-vela-app
|
|||
|
||||
> If you want to delete the resources of one specific version, you can run `kubectl delete resourcetracker first-vela-app-v1-default`.
|
||||
|
||||
## Persist resources
|
||||
## Persist partial resources
|
||||
|
||||
You can also persist some resources, which skips the normal garbage-collect process when the application is updated.
|
||||
You can also persist part of the resources, which skips the normal garbage-collect process when the application is updated.
|
||||
|
||||
Take the following app as an example: in the garbage-collect policy, a rule is added which marks all the resources created by the `expose` trait to use the `onAppDelete` strategy. This will keep those services until the application is deleted.
|
||||
|
||||
```shell
|
||||
$ cat <<EOF | vela up -f -
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: garbage-collect-app
|
||||
spec:
|
||||
components:
|
||||
- name: hello-world
|
||||
- name: demo-gc
|
||||
type: webservice
|
||||
properties:
|
||||
image: oamdev/hello-world
|
||||
|
|
@ -161,6 +153,7 @@ EOF
|
|||
```
|
||||
|
||||
You can find the deployment and service created.
|
||||
|
||||
```shell
|
||||
$ kubectl get deployment
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
|
|
@ -171,8 +164,9 @@ hello-world ClusterIP 10.96.160.208 <none> 8000/TCP 78s
|
|||
```
|
||||
|
||||
If you upgrade the application and use a different component, you will find the old version's deployment is deleted but the service is kept.
|
||||
|
||||
```shell
|
||||
$ cat <<EOF | vela up -f -
|
||||
cat <<EOF | vela up -f -
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
|
|
|
|||
|
|
@ -105,6 +105,7 @@ Hello World
|
|||
</xmp>
|
||||
```
|
||||
|
||||
> ⚠️ This section requires your runtime cluster has a working ingress controller.
|
||||
|
||||
:::caution
|
||||
This section requires your runtime cluster has a working ingress controller.
|
||||
:::
|
||||
|
||||
|
|
|
|||
|
|
@ -8,8 +8,7 @@ In this section, we will introduce how to canary rollout a container service.
|
|||
|
||||
1. Enable the [`kruise-rollout`](../../reference/addons/kruise-rollout) addon; our canary rollout capability relies on the [rollouts from OpenKruise](https://github.com/openkruise/rollouts).
|
||||
```shell
|
||||
$ vela addon enable kruise-rollout
|
||||
Addon: kruise-rollout enabled Successfully.
|
||||
vela addon enable kruise-rollout
|
||||
```
|
||||
|
||||
2. Please make sure one of the [ingress controllers](https://kubernetes.github.io/ingress-nginx/deploy/) is available in your cluster.
|
||||
|
|
|
|||
|
|
@ -67,6 +67,10 @@ And check the logging output of sidecar.
|
|||
```shell
|
||||
vela logs vela-app-with-sidecar -c count-log
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```console
|
||||
0: Fri Apr 16 11:08:45 UTC 2021
|
||||
1: Fri Apr 16 11:08:46 UTC 2021
|
||||
|
|
@ -80,3 +84,4 @@ vela logs vela-app-with-sidecar -c count-log
|
|||
9: Fri Apr 16 11:08:54 UTC 2021
|
||||
```
|
||||
|
||||
</details>
|
||||
|
|
@ -9,7 +9,7 @@ title: Application Version Control
|
|||
In KubeVela, ApplicationRevision keeps the snapshot of the application and all its runtime dependencies such as ComponentDefinition, external Policy or referred objects.
|
||||
This revision can be used to review the application changes and rollback to past configurations.
|
||||
|
||||
In KubeVela v1.3, for application which uses the `PublishVersion` feature, we support viewing the history revisions, checking the differences across revisions, rolling back to the latest succeeded revision and re-publishing past revisions.
|
||||
In KubeVela v1.3+, for applications that use the `PublishVersion` feature, we support viewing the history revisions, checking the differences across revisions, rolling back to the latest succeeded revision, and re-publishing past revisions.
|
||||
|
||||
For applications with the `app.oam.dev/publishVersion` annotation, the workflow runs are strictly controlled.
|
||||
The annotation, which is noted as *publishVersion* in the following paragraphs, is used to identify a static version of the application and its dependencies.
|
||||
|
|
@ -17,13 +17,15 @@ The annotation, which is noted as *publishVersion* in the following paragraphs,
|
|||
When the annotation is updated to a new value, the application will generate a new revision regardless of whether the application spec or the dependencies have changed.
|
||||
It will then trigger a fresh run of the workflow after terminating the previous run.
|
||||
|
||||
During the running of workflow, all related data are retrieved from the ApplicationRevision, which means the changes to the application spec or the dependencies will not take effects until a newer `publishVerison` is annotated.
|
||||
During the workflow run, all related data are retrieved from the ApplicationRevision, which means changes to the application spec or the dependencies will not take effect until a newer `publishVersion` is annotated.
|
||||
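As a minimal illustration (the application name and version value are only examples, and the spec is omitted), the annotation lives in the application metadata:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: podinfo
  namespace: examples
  annotations:
    # bump this value to generate a new revision and re-run the workflow
    app.oam.dev/publishVersion: alpha1
```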
|
||||
## Use Guide
|
||||
|
||||
For example, let's start with an application with external workflow and policies to deploy podinfo in managed clusters.
|
||||
|
||||
> For external workflow and policies, please refer to [Multi-cluster Application Delivery](../case-studies/multi-cluster) for more details.
|
||||
:::tip
|
||||
We use references to external workflow and policies here; it works the same as defining them inline. You can refer to [Multi-cluster Application Delivery](../case-studies/multi-cluster) for more details.
|
||||
:::
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
|
|
@ -78,10 +80,16 @@ steps:
|
|||
policies: ["topology-hangzhou-clusters", "override-high-availability"]
|
||||
```
|
||||
|
||||
You can check the application status by running `vela status podinfo -n examples` and view all the related real-time resources by `vela status podinfo -n examples --tree --detail`.
|
||||
You can check the application status by:
|
||||
|
||||
```shell
|
||||
$ vela status podinfo -n examples
|
||||
vela status podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: podinfo
|
||||
|
|
@ -116,23 +124,43 @@ Services:
|
|||
Unhealthy Ready:0/3
|
||||
Traits:
|
||||
✅ scaler
|
||||
```
|
||||
</details>
|
||||
|
||||
$ vela status podinfo -n examples --tree --detail
|
||||
View all the related real-time resources by:
|
||||
|
||||
```
|
||||
vela status podinfo -n examples --tree --detail
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS APPLY_TIME DETAIL
|
||||
hangzhou1 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 4m16s
|
||||
hangzhou2 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 4m16s
|
||||
```
|
||||
</details>
|
||||
|
||||
This application should be successful after a while.
|
||||
|
||||
Now suppose we edit the component image and set it to an invalid value, such as `stefanprodan/podinfo:6.0.xxx`.
|
||||
The application will not automatically re-run the workflow to make this change take effect.
|
||||
But the application spec has changed, which means the next workflow run will update the deployment image.
|
||||
|
||||
### Inspect Changes across Revisions
|
||||
|
||||
Now let's run `vela live-diff podinfo -n examples` to check this diff
|
||||
```bash
|
||||
$ vela live-diff podinfo -n examples
|
||||
You can run `vela live-diff` to check the differences between revisions:
|
||||
|
||||
```
|
||||
vela live-diff podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```yaml
|
||||
* Application (podinfo) has been modified(*)
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
|
|
@ -156,64 +184,129 @@ $ vela live-diff podinfo -n examples
|
|||
* External Policy (override-high-availability) has no change
|
||||
* External Workflow (make-release-in-hangzhou) has no change
|
||||
```
|
||||
</details>
|
||||
|
||||
We can see all the changes of the application spec and the dependencies.
|
||||
|
||||
Now let's make this change take effect.
|
||||
There are two ways to make it take effects. You can choose any one of them.
|
||||
|
||||
### Publish a new app with specified revision
|
||||
|
||||
There are two ways to publish an app with specified revision. You can choose any one of them.
|
||||
|
||||
1. Update the `publishVersion` annotation in the application to `alpha2` to trigger the re-run of workflow.
|
||||
2. Run `vela up podinfo -n examples --publish-version alpha2` to publish the new version.
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: podinfo
|
||||
namespace: examples
|
||||
annotations:
|
||||
- app.oam.dev/publishVersion: alpha1
|
||||
+ app.oam.dev/publishVersion: alpha2
|
||||
...
|
||||
```
|
||||
2. Run `vela up --publish-version <publish-version>` to publish the new version.
|
||||
```
|
||||
vela up podinfo -n examples --publish-version alpha2
|
||||
```
|
||||
|
||||
We will find the application stuck at `runningWorkflow` as the deployment cannot finish the update process due to the invalid image.
|
||||
|
||||
Now we can run `vela revision list podinfo -n examples` to list all the available revisions.
|
||||
Now we can run `vela revision list` to list all the available revisions.
|
||||
|
||||
```
|
||||
vela revision list podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```bash
|
||||
$ vela revision list podinfo -n examples
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
podinfo-v1 alpha1 true 65844934c2d07288 2022-04-13 19:32:02 Succeeded 23.7 KiB
|
||||
podinfo-v2 alpha2 false 44124fb1a5146a4d 2022-04-13 19:46:50 Executing 23.7 KiB
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### Rollback to Last Successful Revision
|
||||
|
||||
Before rolling back, we need to suspend the workflow of the application first. Run `vela workflow suspend podinfo -n examples`.
|
||||
Before rolling back, we need to suspend the workflow of the application first.
|
||||
|
||||
```
|
||||
vela workflow suspend podinfo -n examples
|
||||
```
|
||||
|
||||
After the application workflow is suspended, run `vela workflow rollback podinfo -n examples`; the workflow will be rolled back and the application resources will be restored to the succeeded state.
|
||||
|
||||
```
|
||||
vela workflow rollback podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```shell
|
||||
$ vela workflow suspend podinfo -n examples
|
||||
Successfully suspend workflow: podinfo
|
||||
$ vela workflow rollback podinfo -n examples
|
||||
Find succeeded application revision podinfo-v1 (PublishVersion: alpha1) to rollback.
|
||||
Application spec rollback successfully.
|
||||
Application status rollback successfully.
|
||||
Application rollback completed.
|
||||
Application outdated revision cleaned up.
|
||||
```
|
||||
</details>
|
||||
|
||||
Now if we go back and check all the resources, we will find they have been reverted to use the valid image again.
|
||||
|
||||
```shell
|
||||
$ vela status podinfo -n examples --tree --detail --detail-format wide
|
||||
vela status podinfo -n examples --tree --detail --detail-format wide
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
CLUSTER NAMESPACE RESOURCE STATUS APPLY_TIME DETAIL
|
||||
hangzhou1 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 17m Containers: podinfo Images: stefanprodan/podinfo:6.0.1 Selector: app.oam.dev/component=podinfo
|
||||
hangzhou2 ─── examples ─── Deployment/podinfo updated 2022-04-13 19:32:03 Ready: 3/3 Up-to-date: 3 Available: 3 Age: 17m Containers: podinfo Images: stefanprodan/podinfo:6.0.1 Selector: app.oam.dev/component=podinfo
|
||||
```
|
||||
</details>
|
||||
|
||||
### Re-publish a History Revision
|
||||
|
||||
> This feature is introduced after v1.3.1.
|
||||
|
||||
:::note
|
||||
This feature is introduced in v1.3.1+.
|
||||
:::
|
||||
|
||||
Rolling back revision allows you to directly go back to the latest successful state. An alternative way is to re-publish an old revision, which will re-run the workflow but can go back to any revision that is still available.
|
||||
|
||||
For example, you might have 2 successful revisions available to use.
|
||||
|
||||
Let's list the history revision by:
|
||||
|
||||
```shell
|
||||
vela revision list podinfo -n examples
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```shell
|
||||
$ vela revision list podinfo -n examples
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
podinfo-v1 alpha1 true 65844934c2d07288 2022-04-13 20:45:19 Succeeded 23.7 KiB
|
||||
podinfo-v2 alpha2 true 4acae1a66013283 2022-04-13 20:45:45 Succeeded 23.7 KiB
|
||||
podinfo-v3 alpha3 false 44124fb1a5146a4d 2022-04-13 20:46:28 Executing 23.7 KiB
|
||||
```
|
||||
|
||||
Alternatively, you can directly use `vela up podinfo -n examples --revision podinfo-v1 --publish-version beta1` to re-publish the earliest version. This process will let the application to use the past revision and re-run the whole workflow. A new revision that is totally same with the specified one will be generated.
|
||||
</details>
|
||||
|
||||
Alternatively, you can directly run the following command to rollback to a specified revision:
|
||||
|
||||
```
|
||||
vela up podinfo -n examples --revision podinfo-v1 --publish-version beta1
|
||||
```
|
||||
|
||||
This process will let the application use the past revision and re-run the whole workflow. A new revision identical to the specified one will be generated.
|
||||
|
||||
```shell
|
||||
NAME PUBLISH_VERSION SUCCEEDED HASH BEGIN_TIME STATUS SIZE
|
||||
|
|
@ -225,4 +318,6 @@ podinfo-v4 beta1 true 65844934c2d07288 2022-04-
|
|||
|
||||
You can find that the *beta1* version shares the same hash with *alpha1* version.
|
||||
|
||||
> By default, application will hold at most 10 revisions. If you want to modify this number, you can set it in the `--application-revision-limit` bootstrap parameter of KubeVela controller.
|
||||
:::info
|
||||
By default, an application will hold at most 10 revisions. If you want to modify this number, you can set it via the `--application-revision-limit` [bootstrap parameter](../platform-engineers/system-operation/bootstrap-parameters) of the KubeVela controller.
|
||||
:::
|
||||
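For illustration only, the flag ends up in the vela-core controller arguments, roughly like the excerpt below (the container name and other flags depend on your installation):

```yaml
# excerpt of the vela-core controller Deployment spec (illustrative, not a full manifest)
containers:
  - name: kubevela-vela-core
    args:
      - --application-revision-limit=20   # keep at most 20 revisions per application
```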
|
|
@ -4,10 +4,9 @@ title: Component Orchestration
|
|||
|
||||
This section will introduce the dependencies in components and how to pass data between components.
|
||||
|
||||
> We use helm in the examples, make sure you enable the fluxcd addon:
|
||||
> ```shell
|
||||
> vela addon enable fluxcd
|
||||
> ```
|
||||
:::tip
|
||||
We use the `helm` component type in the following examples; make sure you have the `fluxcd` addon enabled (`vela addon enable fluxcd`).
|
||||
:::
|
||||
|
||||
## Dependency
|
||||
|
||||
|
|
@ -102,9 +101,9 @@ mysql mysql-secret raw running healthy 2021-10-14 12:09:55 +0
|
|||
|
||||
After a while, all components are running successfully. The `mysql-cluster` will be deployed after `mysql-controller` and `mysql-secret` are `healthy`.
|
||||
|
||||
> `dependsOn` use `healthy` to check status. If the component is `healthy`, then KubeVela will deploy the next component.
|
||||
> If you want to customize the healthy status of the component, please refer to [Status Write Back](../../platform-engineers/traits/status)
|
||||
|
||||
:::info
|
||||
`dependsOn` uses `healthy` to check status. If the component is `healthy`, then KubeVela will deploy the next component. If you want to customize the healthy status of the component, please refer to [Status Write Back](../../platform-engineers/traits/status).
|
||||
:::
|
||||
|
||||
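To make the ordering concrete, a component declares its dependencies with `dependsOn`. A minimal sketch with illustrative names, using the built-in `raw` and `webservice` component types:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: depends-on-sample
spec:
  components:
    - name: shared-config
      type: raw
      properties:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: shared-config
        data:
          greeting: hello
    - name: web
      type: webservice
      dependsOn:
        - shared-config   # web is deployed only after shared-config is healthy
      properties:
        image: oamdev/hello-world
```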
## Inputs and Outputs
|
||||
|
||||
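As a rough sketch of the idea (component names, the `valueFrom` expression and the `parameterKey` path are illustrative), one component exposes a value via `outputs` and another consumes it via `inputs`:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: input-output-sample
spec:
  components:
    - name: backend
      type: webservice
      properties:
        image: oamdev/hello-world
      outputs:
        # a CUE expression evaluated against the rendered workload
        - name: backend-host
          valueFrom: output.metadata.name
    - name: frontend
      type: webservice
      properties:
        image: oamdev/hello-world
        env:
          - name: BACKEND_HOST
            value: to-be-filled
      inputs:
        # fill the first env entry with the value of the backend-host output
        - from: backend-host
          parameterKey: env[0].value
```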
|
|
|
|||
|
|
@ -4,9 +4,11 @@ title: Dependency
|
|||
|
||||
This section will introduce how to specify dependencies for workflow steps.
|
||||
|
||||
> Note: In the current version (1.4), the steps in the workflow are executed sequentially, which means that there is an implicit dependency between steps, ie: the next step depends on the successful execution of the previous step. At this point, specifying dependencies in the workflow may not make much sense.
|
||||
>
|
||||
> In future versions (1.5+), you will be able to display the execution method of the specified workflow steps (eg: change to DAG parallel execution). At this time, you can control the execution of the workflow by specifying the dependencies of the steps.
|
||||
:::note
|
||||
In versions <= 1.4, the steps in the workflow are executed sequentially, which means that there is an implicit dependency between steps, i.e. the next step depends on the successful execution of the previous step. In this case, specifying dependencies in the workflow may not make much sense.
|
||||
|
||||
In versions 1.5+, you can explicitly specify the execution mode of the workflow steps (e.g. change it to DAG parallel execution). In that case, you can control the execution of the workflow by specifying the dependencies of the steps.
|
||||
:::
|
||||
|
||||
## How to use
|
||||
|
||||
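A minimal sketch of a step-level dependency (step and component names are illustrative); `apply-comp1` waits for `apply-comp2` even though it is declared first:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: step-dependency-sample
spec:
  components:
    - name: comp1
      type: webservice
      properties:
        image: oamdev/hello-world
    - name: comp2
      type: webservice
      properties:
        image: oamdev/hello-world
  workflow:
    steps:
      - name: apply-comp1
        type: apply-component
        dependsOn:
          - apply-comp2   # run only after apply-comp2 succeeds
        properties:
          component: comp1
      - name: apply-comp2
        type: apply-component
        properties:
          component: comp2
```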
|
|
|
|||
|
|
@ -6,9 +6,11 @@ This section describes how to use sub steps in KubeVela.
|
|||
|
||||
There is a special step type `step-group` in KubeVela workflows; when using a `step-group` type step, you can declare sub-steps inside it.
|
||||
|
||||
> Note: In the current version (1.4), sub steps in a step group are executed concurrently.
|
||||
>
|
||||
> In future versions (1.5+), you will be able to specify the execution mode of steps and sub-steps.
|
||||
:::note
|
||||
In versions v1.4.x and earlier, sub-steps in a step group are executed concurrently.
|
||||
|
||||
In version 1.5+, you can specify the execution mode of steps and sub-steps.
|
||||
:::
|
||||
|
||||
Apply the following example:
|
||||
|
||||
|
|
|
|||
|
|
@ -10,13 +10,18 @@ In KubeVela, you can choose to use the `vela` command to manually suspend the ex
|
|||
|
||||
### Suspend Manually
|
||||
|
||||
If you have a running application and you want to suspend its execution, you can use `vela workflow suspend` to suspend the workflow.
|
||||
If you have an application in the `runningWorkflow` state and you want to stop the execution of the workflow, you can use `vela workflow suspend` to suspend the workflow and `vela workflow resume` to continue it.
|
||||
|
||||
* Suspend the application
|
||||
|
||||
```bash
|
||||
$ vela workflow suspend my-app
|
||||
Successfully suspend workflow: my-app
|
||||
vela workflow suspend my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
Nothing will happen if you suspend an application whose workflow has already finished, i.e. an application in `running` status.
|
||||
:::
|
||||
|
||||
### Use Suspend Step
|
||||
|
||||
Apply the following example:
|
||||
|
|
@ -25,7 +30,7 @@ Apply the following example:
|
|||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: suspend
|
||||
name: suspend-demo
|
||||
namespace: default
|
||||
spec:
|
||||
components:
|
||||
|
|
@ -56,10 +61,16 @@ spec:
|
|||
Use `vela status` to check the status of the Application:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: workflowSuspending
|
||||
|
|
@ -75,12 +86,10 @@ Workflow:
|
|||
name:apply1
|
||||
type:apply-component
|
||||
phase:succeeded
|
||||
message:
|
||||
- id:xvmda4he5e
|
||||
name:suspend
|
||||
type:suspend
|
||||
phase:running
|
||||
message:
|
||||
|
||||
Services:
|
||||
|
||||
|
|
@ -90,6 +99,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, when the first step is completed, the `suspend` step will be executed and this step will suspend the workflow.
|
||||
|
||||
|
|
@ -102,17 +112,22 @@ Once the workflow is suspended, you can use the `vela workflow resume` command t
|
|||
Take the above suspended application as an example:
|
||||
|
||||
```bash
|
||||
$ vela workflow resume suspend
|
||||
Successfully resume workflow: suspend
|
||||
vela workflow resume suspend-demo
|
||||
```
|
||||
|
||||
After successfully continuing the workflow, view the status of the app:
|
||||
|
||||
```bash
|
||||
$ vela status suspend
|
||||
vela status suspend-demo
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: suspend
|
||||
Name: suspend-demo
|
||||
Namespace: default
|
||||
Created at: 2022-06-27 17:36:58 +0800 CST
|
||||
Status: running
|
||||
|
|
@ -154,6 +169,7 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, the workflow has continued to execute.
|
||||
|
||||
|
|
@ -161,11 +177,28 @@ As you can see, the workflow has continued to execute.
|
|||
|
||||
If you want to terminate a workflow while it is suspended, you can use the `vela workflow terminate` command to terminate the workflow.
|
||||
|
||||
* Terminate the application workflow
|
||||
|
||||
```bash
|
||||
$ vela workflow terminate my-app
|
||||
Successfully terminate workflow: my-app
|
||||
vela workflow terminate my-app
|
||||
```
|
||||
|
||||
:::tip
|
||||
Unlike suspend, a terminated application workflow can't be resumed; you can only restart it. This means restarting the workflow will execute the workflow steps from scratch, while resuming the workflow only continues the unfinished steps.
|
||||
:::
|
||||
|
||||
* Restart the application workflow
|
||||
|
||||
```bash
|
||||
vela workflow restart my-app
|
||||
```
|
||||
|
||||
:::caution
|
||||
Once the application is terminated, the KubeVela controller won't reconcile the application resources. This can also be useful when you want to manually operate the underlying resources, but beware of configuration drift.
|
||||
:::
|
||||
|
||||
Once an application comes into `running` status, it can't be terminated or restarted.
|
||||
|
||||
### Resume the Workflow Automatically
|
||||
|
||||
If you want the workflow to continue automatically after a period of time has passed, you can add a `duration` parameter to the `suspend` step. When the `duration` elapses, the workflow will automatically continue execution.
|
||||
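A trimmed excerpt of such a workflow, assuming the built-in `suspend` step type (the duration value is just an example):

```yaml
workflow:
  steps:
    - name: apply-comp
      type: apply-component
      properties:
        component: comp1
    - name: wait-a-moment
      type: suspend
      properties:
        duration: "5s"   # the workflow resumes automatically after 5 seconds
```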
|
|
@ -209,7 +242,13 @@ spec:
|
|||
Use `vela status` to check the status of the Application:
|
||||
|
||||
```bash
|
||||
$ vela status auto-resume
|
||||
vela status auto-resume
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>expected output</summary>
|
||||
|
||||
```
|
||||
About:
|
||||
|
||||
Name: auto-resume
|
||||
|
|
@ -254,5 +293,6 @@ Services:
|
|||
Healthy Ready:1/1
|
||||
No trait applied
|
||||
```
|
||||
</details>
|
||||
|
||||
As you can see, the `suspend` step automatically finishes after five seconds, and the workflow completes successfully.
|
||||
|
|
|
|||
|
|
@ -2,7 +2,9 @@
|
|||
title: Timeout of Step
|
||||
---
|
||||
|
||||
> Note: You need to upgrade to version 1.5 or above to use the timeout.
|
||||
:::note
|
||||
You need to upgrade to version 1.5+ to use the timeout feature.
|
||||
:::
|
||||
|
||||
This section introduces how to add timeout to workflow steps in KubeVela.
|
||||
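As a quick sketch (the step name and the timeout value are illustrative), a `timeout` is set directly on a step; the step is treated as failed if it does not finish in time:

```yaml
workflow:
  steps:
    - name: apply-comp
      type: apply-component
      timeout: 5m   # mark the step failed if it does not finish within 5 minutes
      properties:
        component: comp1
```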
|
||||
|
|
|
|||
|
|
@ -164,7 +164,9 @@ Application is also one kind of Kubernetes CRD, you can also use `kubectl apply`
|
|||
|
||||
### Customize
|
||||
|
||||
> **⚠️ In most cases, you don't need to customize any definitions unless you're going to extend the capability of KubeVela. Before that, you should check the built-in definitions and addons to confirm if they can fit your needs.**
|
||||
:::caution
|
||||
In most cases, you don't need to customize any definitions **unless you're going to extend the capability of KubeVela**. Before that, you should check the built-in definitions and addons to confirm if they can fit your needs.
|
||||
:::
|
||||
|
||||
A new definition is built as a declarative template in the [CUE configuration language](https://cuelang.org/). If you're not familiar with CUE, you can refer to [CUE Basic](../platform-engineers/cue/basic) to learn the basics.
|
||||
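To give a feel for the shape of such a template (a minimal sketch, not a definition you would normally need), a ComponentDefinition that renders a Deployment from an image parameter might look roughly like:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: my-worker
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        // the only user-facing parameter in this sketch
        parameter: {
          image: string
        }
        // the workload rendered for each component instance
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            selector: matchLabels: "app": context.name
            template: {
              metadata: labels: "app": context.name
              spec: containers: [{
                name:  context.name
                image: parameter.image
              }]
            }
          }
        }
```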
|
||||
|
|
|
|||
|
|
@ -5,7 +5,9 @@ description: Configure a helm repository
|
|||
|
||||
In this guide, we will introduce how to use Integration to create a private helm repository and create a helm-type application that uses this repo.
|
||||
|
||||
Notice: You must enable the `fluxcd` addon firstly.
|
||||
:::note
|
||||
You must enable the `fluxcd` addon first.
|
||||
:::
|
||||
|
||||
## Create a helm repo
|
||||
|
||||
|
|
|
|||
|
|
@ -7,7 +7,9 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
## Upgrade
|
||||
|
||||
> If you're trying to upgrade from a big version later (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guides.
|
||||
:::caution
|
||||
If you're trying to upgrade across multiple minor versions (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guides.
|
||||
:::
|
||||
|
||||
### 1. Upgrade CLI
|
||||
|
||||
|
|
@ -30,7 +32,9 @@ curl -fsSl https://kubevela.io/script/install.sh | bash
|
|||
|
||||
**Windows**
|
||||
|
||||
> Only the official release version is supported.
|
||||
:::tip
|
||||
Pre-release versions will not be listed.
|
||||
:::
|
||||
|
||||
```shell script
|
||||
powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
|
||||
|
|
@ -63,10 +67,12 @@ brew install kubevela
|
|||
sudo mv ./vela /usr/local/bin/vela
|
||||
```
|
||||
|
||||
> [Installation Tips](https://github.com/kubevela/kubevela/issues/625):
|
||||
> If you are using a Mac system, it will pop up a warning that "vela" cannot be opened because the package from the developer cannot be verified.
|
||||
>
|
||||
> MacOS imposes stricter restrictions on the software that can run in the system. You can temporarily solve this problem by opening `System Preference ->Security & Privacy -> General` and clicking on `Allow Anyway`.
|
||||
:::caution
|
||||
[Installation Tips](https://github.com/kubevela/kubevela/issues/625):
|
||||
If you are using a Mac, a warning may pop up saying that "vela" cannot be opened because the package from the developer cannot be verified.
|
||||
|
||||
MacOS imposes stricter restrictions on the software that can run in the system. You can temporarily solve this problem by opening `System Preference ->Security & Privacy -> General` and clicking on `Allow Anyway`.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
|
||||
|
|
@ -83,7 +89,9 @@ docker pull oamdev/vela-cli:latest
|
|||
|
||||
### 2. Upgrade Vela Core
|
||||
|
||||
> Please make sure you already upgraded the Vela CLI to latest stable version.
|
||||
:::note
|
||||
Please make sure you have already upgraded the Vela CLI to the latest stable version.
|
||||
:::
|
||||
|
||||
```shell
|
||||
vela install
|
||||
|
|
@ -95,7 +103,9 @@ vela install
|
|||
vela addon enable velaux
|
||||
```
|
||||
|
||||
> If you set custom parameters during installation, be sure to include the corresponding parameters.
|
||||
:::tip
|
||||
You can use advanced parameters provided by [addons](../reference/addons/overview).
|
||||
:::
|
||||
|
||||
## Uninstall
|
||||
|
||||
|
|
|
|||
|
|
@ -64,7 +64,7 @@ template: {
|
|||
kind: "Deployment"
|
||||
}
|
||||
outputs: {}
|
||||
parameters: {}
|
||||
parameter: {}
|
||||
}
|
||||
```
|
||||
|
||||
|
|
@ -104,7 +104,7 @@ template: {
|
|||
kind: "Deployment"
|
||||
}
|
||||
outputs: {}
|
||||
parameters: {
|
||||
parameter: {
|
||||
name: string
|
||||
image: string
|
||||
}
|
||||
|
|
|
|||
|
|
@ -2,4 +2,4 @@
|
|||
title: CUE advanced
|
||||
---
|
||||
|
||||
The docs has been migrated, please refer to [Learning CUE in KubeVela](./basic) sections for details.
|
||||
The docs have been migrated; please refer to the [Learning Manage Definition with CUE](./basic) sections for details.
|
||||
|
|
@ -23,36 +23,38 @@ More addons for logging and tracing will be introduced in later versions.
|
|||
|
||||
To enable the addon suites, you simply need to run the `vela addon enable` commands as below.
|
||||
|
||||
> If your KubeVela is multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
|
||||
:::tip
|
||||
If your KubeVela is multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
|
||||
:::
|
||||
|
||||
1. Install the kube-state-metrics addon
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics
|
||||
vela addon enable kube-state-metrics
|
||||
```
|
||||
|
||||
2. Install the node-exporter addon
|
||||
|
||||
```shell
|
||||
> vela addon enable node-exporter
|
||||
vela addon enable node-exporter
|
||||
```
|
||||
|
||||
3. Install the prometheus-server
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server
|
||||
vela addon enable prometheus-server
|
||||
```
|
||||
|
||||
4. Install the grafana addon.
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana
|
||||
vela addon enable grafana
|
||||
```
|
||||
|
||||
5. Access your grafana through port-forward.
|
||||
|
||||
```shell
|
||||
> kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
kubectl port-forward svc/grafana -n o11y-system 8080:3000
|
||||
```
|
||||
|
||||
Now you can access your grafana by visiting `http://localhost:8080` in your browser. The default username and password are `admin` and `kubevela` respectively.
|
||||
|
|
@ -178,7 +180,7 @@ If you want to install observability addons in multi-cluster scenario, make sure
|
|||
By default, the installation processes for `kube-state-metrics`, `node-exporter` and `prometheus-server` naturally support multi-cluster (they will be automatically installed to all clusters). But to let the `grafana` on the control plane access prometheus-server in managed clusters, you need to use the following command to enable `prometheus-server`.
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer
|
||||
```
|
||||
|
||||
This will install the [thanos](https://github.com/thanos-io/thanos) sidecar & query along with prometheus-server. Then enable grafana, and you will be able to see aggregated prometheus metrics.
|
||||
|
|
@ -186,7 +188,7 @@ This will install [thanos](https://github.com/thanos-io/thanos) sidecar & query
|
|||
You can also choose which clusters to install the addons to by using commands as below
|
||||
|
||||
```shell
|
||||
> vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
vela addon enable kube-state-metrics clusters=\{local,c2\}
|
||||
```
|
||||
|
||||
> If you add new clusters to your control plane after the addons are installed, you need to re-enable the addon to let it take effect.
|
||||
|
|
@ -234,7 +236,7 @@ spec:
|
|||
Then you need to add the `customConfig` parameter when enabling the prometheus-server addon, like
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
|
||||
```
|
||||
|
||||
Then you will be able to see the recording rules configuration being delivered into all prome
|
||||
|
|
@ -263,7 +265,7 @@ data:
|
|||
If you want to change the default username and password for Grafana, you can run the following command
|
||||
|
||||
```shell
|
||||
> vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
|
||||
```
|
||||
|
||||
This will change your default admin user to `super-user` and its password to `PASSWORD`.
|
||||
|
|
@ -273,7 +275,7 @@ This will change your default admin user to `super-user` and its password to `PA
|
|||
If you want your prometheus-server and grafana to persist data in volumes, you can also specify the `storage` parameter for your installation, like
|
||||
|
||||
```shell
|
||||
> vela addon enable prometheus-server storage=1G
|
||||
vela addon enable prometheus-server storage=1G
|
||||
```
|
||||
|
||||
This will create PersistentVolumeClaims and let the addon use the provided storage. The storage will not be automatically recycled even if the addon is disabled. You need to clean up the storage manually.
|
||||
|
|
@ -357,9 +359,9 @@ Now you can manage your dashboard and datasource on your grafana instance throug
|
|||
|
||||
```shell
|
||||
# show all the dashboard you have
|
||||
> kubectl get grafanadashboard -l grafana=my-grafana
|
||||
kubectl get grafanadashboard -l grafana=my-grafana
|
||||
# show all the datasource you have
|
||||
> kubectl get grafanadatasource -l grafana=my-grafana
|
||||
kubectl get grafanadatasource -l grafana=my-grafana
|
||||
```
|
||||
|
||||
For more details, you can refer to [vela-prism](https://github.com/kubevela/prism#grafana-related-apis).
|
||||
|
|
|
|||
|
|
@ -8,7 +8,9 @@ KubeVela has [release cadence](../../contributor/release-process) for every 2-3
|
|||
|
||||
## From v1.4.x to v1.5.x
|
||||
|
||||
> ⚠️ Note: Please upgrade to v1.5.5+ to avoid application workflow rerun when controller upgrade.
|
||||
:::caution
|
||||
Note: Please upgrade to v1.5.5+ to avoid an application workflow rerun during the controller upgrade.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs first, before upgrading the helm chart.
|
||||
|
||||
|
|
@ -42,7 +44,9 @@ vela addon upgrade velaux --version 1.5.5
|
|||
|
||||
## From v1.3.x to v1.4.x
|
||||
|
||||
> ⚠️ Note: It may cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
Note: It may cause an application workflow rerun during the controller upgrade.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs first, before upgrading the helm chart.
|
||||
|
||||
|
|
@ -83,7 +87,9 @@ Please note if you're using terraform addon, you should upgrade the `terraform`
|
|||
|
||||
## From v1.2.x to v1.3.x
|
||||
|
||||
> ⚠️ Note: It may cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
Note: It may cause an application workflow rerun during the controller upgrade.
|
||||
:::
|
||||
|
||||
1. Upgrade the CRDs. Please make sure you upgrade the CRDs first, before upgrading the helm chart.
|
||||
|
||||
|
|
@ -123,7 +129,9 @@ Please note if you're using terraform addon, you should upgrade the `terraform`
|
|||
|
||||
## From v1.1.x to v1.2.x
|
||||
|
||||
> ⚠️ Note: It will cause application workflow rerun when controller upgrade.
|
||||
:::danger
|
||||
Note: It will cause an application workflow rerun during the controller upgrade.
|
||||
:::
|
||||
|
||||
1. Check that the service is running normally
|
||||
|
||||
|
|
|
|||