Feat: add statement of syncing application to the project (#1205)

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
Tianxin Dong 2023-03-24 14:32:30 +08:00, committed by GitHub
parent 0e55564c9a
commit cc23c157d3
22 changed files with 183 additions and 14 deletions

View File

@@ -20,6 +20,12 @@ Users with project management permissions can go to `Platform/Projects` page for
Click the `New Project` button to create a project. Each project must set a name and an owner; the owner will be granted the project admin role automatically after the project is created.
### Creating Environments for the Project
A project can have multiple associated environments. An environment is a logical concept that points to a namespace in the cluster. By default, if not specified, the namespace of an environment has the same name as the environment itself. When creating an environment, you need to associate it with a project.
![](../../../resources/env-project.png)
## Updating Projects
The project owner, alias, and description fields can be updated. Click the project name to go to the project detail page, where you can manage the members and roles in this project.

View File

@@ -237,6 +237,11 @@ The UI console shares a different metadata layer with the controller. It's more
By default, if you're using the CLI to manage the applications directly from the Kubernetes API, we will sync the metadata to the UI backend automatically. Once you deploy the application from the UI console, the automatic sync process is stopped, as the source of truth may have changed.
:::tip
If the namespace of the application operated by the CLI has already been associated with the corresponding environment in the UI, the application will be automatically synchronized to the project associated with that environment in the UI. Otherwise, the application will be synchronized to the default project.
If you want to specify which project in the UI console an application should be synchronized to, please refer to [Creating environments for the project](how-to/dashboard/user/project#creating-environments-for-the-project).
:::
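For example, assuming a UI environment named `dev` that points at namespace `dev-ns` (both names hypothetical), an application deployed there via the CLI is synced into the project that owns that environment:

```shell
# deploy from the CLI into a namespace that a UI environment points to;
# the app will be synced into the project associated with that environment
vela up -f app.yaml -n dev-ns
```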
If any changes happen from the CLI after that, the UI console will detect the difference and show it to you. However, it's not recommended to modify the application properties from both sides.
In conclusion, if you're a CLI/YAML/GitOps user, you'd better just use the CLI to manage the application CRD and use the UI console (VelaUX) as a dashboard. Once you've managed an app from the UI console, you need to align with that behavior and manage apps through the UI, API, or webhooks provided by VelaUX.

Binary file not shown (new image, 519 KiB).

View File

@@ -20,6 +20,12 @@ After VelaUX is installed, a `Default` project is generated by default and granted to the admin
Click the `创建项目` (New Project) button in the upper right to enter the project creation page. Each project needs an owner; after the project is created, the owner is automatically granted the project admin role.
### Creating Environments Associated with the Project
A project can have multiple associated environments. An environment is a logical concept that points to a namespace in the cluster; by default, if not specified, the environment's namespace has the same name as the environment itself. When creating an environment, you need to associate it with a project.
![](../../../resources/env-project.png)
## Editing Projects
The project owner, alias, and description can be updated. Click the project name to enter the project detail page, where you can continue to manage the project's members, roles, and applications.

View File

@@ -187,6 +187,11 @@ KubeVela's UI console uses a different metadata store from the underlying controller
By default, applications operated via the CLI are automatically synced into the UI console's metadata, but once you deploy an application through the UI, the automatic sync of application metadata stops. You can still manage the application with the CLI afterwards, and the resulting differences can be viewed in the UI console. However, we do not recommend managing an application through both the CLI and the UI at the same time.
:::tip
If the namespace of an application operated via the CLI has already been pointed to a corresponding environment in the UI, the application will be automatically synced into the project associated with that environment in the UI. Otherwise, the application will be synced into the default project.
If you want to specify which project in the UI console an application should be synced to, please refer to [Creating Environments Associated with the Project](how-to/dashboard/user/project#创建项目关联的环境).
:::
Overall, if your scenario leans toward CLI/YAML/GitOps, we recommend managing the application CRD directly and treating the UI console as a dashboard. If you prefer managing applications through the UI console, keep the behavior consistent and deploy through the means the UI provides: the console, the API, and webhooks.
## Clean Up

Binary file not shown (new image, 519 KiB).

View File

@@ -20,6 +20,12 @@ After VelaUX is installed, a `Default` project is generated by default and granted to the admin
Click the `创建项目` (New Project) button in the upper right to enter the project creation page. Each project needs an owner; after the project is created, the owner is automatically granted the project admin role.
### Creating Environments Associated with the Project
A project can have multiple associated environments. An environment is a logical concept that points to a namespace in the cluster; by default, if not specified, the environment's namespace has the same name as the environment itself. When creating an environment, you need to associate it with a project.
![](../../../resources/env-project.png)
## Editing Projects
The project owner, alias, and description can be updated. Click the project name to enter the project detail page, where you can continue to manage the project's members, roles, and applications.

View File

@@ -5,7 +5,7 @@ title: Working Mechanism
This document gives a brief introduction to some of the core runtime mechanisms of KubeVela workflows.
## Running Mode
- Workflow execution has two modes: DAG mode and StepByStep mode. In DAG mode, the steps in a workflow run concurrently and form dependencies according to each step's Input/Output; steps whose preconditions are not yet met wait first. In StepByStep mode, the steps in a workflow execute one by one in order. In KubeVela v1.2+, when a workflow is configured, StepByStep mode is used by default; explicitly running a workflow in DAG mode is not yet supported.
+ Workflow execution has two modes: DAG mode and StepByStep mode. In DAG mode, the steps in a workflow run concurrently and form dependencies according to each step's Input/Output; steps whose preconditions are not yet met wait first. In StepByStep mode, the steps in a workflow execute one by one in order. In KubeVela v1.2+, when a workflow is configured, StepByStep mode is used by default; KubeVela v1.5+ supports explicitly running a workflow in DAG mode.
## Suspend and Retry
@@ -44,4 +44,4 @@ title: Working Mechanism
## State Keep
When a workflow is in a healthy running state (running) or is suspended while waiting for resources to become healthy (suspending), a KubeVela application under the default configuration periodically checks whether previously applied resources have drifted from their configuration and restores them to the originally applied configuration. The default check interval is 5 minutes, which can be adjusted by setting `--application-re-sync-period` in the KubeVela controller's [bootstrap parameters](../system-operation/bootstrap-parameters). If you want to disable the State Keep capability, you can also configure the [apply-once](https://github.com/kubevela/kubevela/blob/master/docs/examples/app-with-policy/apply-once-policy/apply-once.md) policy in the application.

View File

@@ -187,6 +187,11 @@ KubeVela's UI console uses a different metadata store from the underlying controller
By default, applications operated via the CLI are automatically synced into the UI console's metadata, but once you deploy an application through the UI, the automatic sync of application metadata stops. You can still manage the application with the CLI afterwards, and the resulting differences can be viewed in the UI console. However, we do not recommend managing an application through both the CLI and the UI at the same time.
:::tip
If the namespace of an application operated via the CLI has already been pointed to a corresponding environment in the UI, the application will be automatically synced into the project associated with that environment in the UI. Otherwise, the application will be synced into the default project.
If you want to specify which project in the UI console an application should be synced to, please refer to [Creating Environments Associated with the Project](how-to/dashboard/user/project#创建项目关联的环境).
:::
Overall, if your scenario leans toward CLI/YAML/GitOps, we recommend managing the application CRD directly and treating the UI console as a dashboard. If you prefer managing applications through the UI console, keep the behavior consistent and deploy through the means the UI provides: the console, the API, and webhooks.
## Clean Up

Binary file not shown (new image, 519 KiB).

View File

@@ -20,6 +20,12 @@ Users with project management permissions can go to `Platform/Projects` page for
Click the `New Project` button to create a project. Each project must set a name and an owner; the owner will be granted the project admin role automatically after the project is created.
### Creating Environments for the Project
A project can have multiple associated environments. An environment is a logical concept that points to a namespace in the cluster. By default, if not specified, the namespace of an environment has the same name as the environment itself. When creating an environment, you need to associate it with a project.
![](../../../resources/env-project.png)
## Updating Projects
The project owner, alias, and description fields can be updated. Click the project name to go to the project detail page, where you can manage the members and roles in this project.

View File

@@ -19,7 +19,7 @@ KubeVela relies on Kubernetes as a control plane. The control plane could be any
### 2. Install KubeVela CLI {#install-vela-cli}
- KubeVela CLI provides an easy to engage and manage your application delivery in command lines.
+ KubeVela CLI provides an easy way to engage and manage your application delivery in command lines.
<Tabs
className="unique-tabs"

View File

@@ -562,14 +562,14 @@ parameter: {
outputs: [{ip: "1.1.1.1", hostname: "xxx.com"}, {ip: "2.2.2.2", hostname: "yyy.com"}]
}
output: {
-  spec: {
-    if len(parameter.outputs) > 0 {
-      _x: [ for _, v in parameter.outputs {
-        "\(v.ip) \(v.hostname)"
-      }]
-      message: "Visiting URL: " + strings.Join(_x, "")
-    }
-  }
+  spec: {
+    if len(parameter.outputs) > 0 {
+      _x: [ for _, v in parameter.outputs {
+        "\(v.ip) \(v.hostname)"
+      }]
+      message: "Visiting URL:\n" + strings.Join(_x, "\n")
+    }
+  }
}
```

View File

@@ -16,7 +16,7 @@ When you deploy your application in a test environment and find problems with th
If your application uses workflow, make sure your app has the `debug` policy before using the `vela debug` command:
```yaml
- polices:
+ policies:
- name: debug
type: debug
```
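After redeploying the application with the policy in place, a typical invocation looks like this (the application name and namespace are placeholders):

```shell
# interactively inspect the workflow of the application
vela debug my-app -n my-namespace
```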

View File

@@ -2,6 +2,10 @@
title: Controller GrayScale Release
---
:::tip
If you are using KubeVela >= v1.8.0, [controller sharding](./controller-sharding.md) is supported. If you need to run multiple versions of KubeVela controllers (all with versions >= v1.8.0), you can refer to controller sharding.
:::
System upgrades can be a dangerous operation for system operators, and as a control-plane operator the KubeVela controller faces similar challenges: introducing new features or refactoring functions brings potential risks when higher-version controllers handle applications created by lower versions.
To help system operators overcome such difficulties, KubeVela provides a **controller grayscale release** mechanism which allows multiple controller versions to run concurrently. When an application is annotated with the key `app.oam.dev/controller-version-require`, only the controller with the matching version number will handle it.
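For example, a minimal sketch of pinning an application to a specific controller version (the application name, component, and version value are illustrative):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: legacy-app                                # hypothetical name
  annotations:
    # only a controller whose version matches will reconcile this app
    app.oam.dev/controller-version-require: v1.7.5
spec:
  components:
    - name: server
      type: webservice
      properties:
        image: nginx                              # placeholder image
```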

View File

@ -0,0 +1,110 @@
---
title: Controller Sharding
---
## Background
As many adopters start to choose KubeVela to manage large-scale applications in production, we have gathered more and more scenarios and experience around performance issues.
A typical Kubernetes operator usually runs one controller to manage all related custom resources, and its reconciliation is designed to be relatively lightweight. In KubeVela, supporting customized delivery processes and managing the lifecycles of tens of resources in one application makes the reconciliation process heavier than in typical controllers.
Although multiple replicas of the KubeVela controller provide high availability, to avoid concurrent reconciliation of the same application (which could lead to conflicts), only one replica can work at a time; this is achieved by acquiring the same leader-election lock.
So usually, users add more resources to the KubeVela controller (more CPU, memory, ...) and configure more threads and a larger rate limiter to support more applications in the system. This can lead to problems in different cases:
1. There are limits to how far the resources of a single KubeVela controller can grow. If the control-plane Kubernetes cluster uses lots of small machines, a single KubeVela controller can only run on one of them, instead of spreading across machines.
2. A failure of the single KubeVela controller blocks the delivery process of all applications. Failures can have various causes, and some frequently seen ones (like OOM or crashes due to unusual application input) are not recoverable by restarting the controller.
3. In multi-tenancy scenarios, fairness cannot be guaranteed if some users own a huge number of applications. They can make the controller run slowly, and since the single KubeVela controller is shared, users with only a small number of applications are affected as well.
Therefore, the KubeVela core controller supports sharding since v1.8, which allows horizontal scaling and provides a solution to the above challenges.
## Architecture
![image](../../resources/vela-core-sharding-arch.jpg)
When the KubeVela controller is running in sharding mode, multiple core controllers can work concurrently. Each one focuses on a specific subset of the applications, identified by `shard-id`. There is one master controller and multiple slave controllers.
**Master controller**: The master controller will enable all the components, such as ComponentDefinitionController, TraitDefinitionController, ApplicationController, Webhooks, etc.
The application controller will only handle applications with the label `controller.core.oam.dev/scheduled-shard-id: master`, and only the applications, ApplicationRevisions, and ResourceTrackers that carry this label will be watched and cached.
By default, it watches the pods within the namespace it runs in. Pods that have the label `app.kubernetes.io/name: vela-core` and carry the `controller.core.oam.dev/shard-id` label key are selected, and their health status is recorded. The selected ready pods are registered as schedulable shards. The mutating webhook automatically assigns a shard-id to unassigned applications when requests come in.
**Slave controller**: A slave mode controller only starts the ApplicationController and does not enable the others, such as the webhook or the ComponentDefinitionController. It is dedicated to applications that carry the matching label `controller.core.oam.dev/scheduled-shard-id=<shard-id>`.
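For instance, an application dedicated to the shard `shard-0` carries a label like the following sketch (the application name, component, and image are placeholders):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: sharded-app                # hypothetical name
  labels:
    # only the slave controller started with shard-id "shard-0" handles this app
    controller.core.oam.dev/scheduled-shard-id: shard-0
spec:
  components:
    - name: server
      type: webservice
      properties:
        image: nginx               # placeholder image
```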
## Usage Guide
### Installation
1. First, install KubeVela with sharding enabled. This deploys the KubeVela core controller in master mode.
```shell
helm install kubevela kubevela/vela-core -n vela-system --set sharding.enabled=true
```
2. Second, deploy the slave mode application controllers.
```shell
vela addon enable vela-core-shard-manager nShards=3
```
This will start 3 slave controllers that run with the same configuration as the master controller. For example, if the master uses the flag `max-workflow-wait-backoff-time=60`, all slave controllers will use the same value.
You can customize the resource usage with
```shell
vela addon enable vela-core-shard-manager nShards=3 cpu=1 memory=2Gi
```
:::tip
There are other ways to do it:
1. Use kubectl to copy the master's deployment and modify it, for example:
```shell
kubectl get deploy kubevela-vela-core -oyaml -n vela-system | \
sed 's/schedulable-shards=/shard-id=shard-0/g' | \
sed 's/instance: kubevela/instance: kubevela-shard/g' | \
sed 's/shard-id: master/shard-id: shard-0/g' | \
sed 's/name: kubevela/name: kubevela-shard/g' | \
kubectl apply -f -
```
This will create a copy of the master vela-core and run a slave replica with the shard-id *shard-0*.
2. Just create a new deployment and set the labels to match the above-mentioned requirements (see the label sketch after this tip).
KubeVela's controller sharding does not require every shard to have the same configuration. You can let different shards have different resource usage or rate limits, or even different image versions.
> In case you do not want dynamic discovery of available application controllers, you can specify which shards are schedulable by adding the arg `--set sharding.schedulableShards=shard-0,shard-1` to the installation of KubeVela.
:::
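As referenced in the tip above, a hand-rolled slave deployment mainly needs the discovery labels plus the `--shard-id` flag. A minimal sketch, assuming the default image repository and namespace (the name, version, and the rest of the pod spec are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubevela-shard-0                    # hypothetical name
  namespace: vela-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vela-core
      controller.core.oam.dev/shard-id: shard-0
  template:
    metadata:
      labels:
        # the labels the master controller uses to discover schedulable shards
        app.kubernetes.io/name: vela-core
        controller.core.oam.dev/shard-id: shard-0
    spec:
      containers:
        - name: kubevela
          image: oamdev/vela-core:v1.8.0    # placeholder version
          args:
            - --shard-id=shard-0            # dedicate this replica to shard-0
```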
### CLI
By default, the webhook in the master controller automatically assigns a shard-id to applications. It looks for available shards and sets the `controller.core.oam.dev/scheduled-shard-id: <shard-id>` label on the application. But there are cases it cannot handle:
1. If an application is created when no shard is available, the webhook leaves it unscheduled. Even if shards join later, the webhook will not schedule it automatically unless the application is updated manually.
2. If an application is assigned to a shard which later goes down, the webhook will not reschedule it automatically unless the application is recreated.
3. The user has disabled the webhook for performance reasons.
For these cases, you can use the following command to reschedule the application manually.
```shell
vela up <app-name> -n <app-namespace> --shard-id <shard-id>
```
> Notice that not only applications carry the shard-id label; related system resources, including ResourceTracker and ApplicationRevision, carry it as well. So when you run the `vela up --shard-id` command, it will relabel all these system resources too.
### Extension
Once you have understood the working mechanism of KubeVela's controller sharding, you will find that implementing your own application scheduler only requires attaching the correct labels to applications and their related system resources (ResourceTracker and ApplicationRevision). Therefore, it is possible to design your own scheduler and let it run in the background, scheduling applications with customized strategies, such as by application owner or by the CPU usage of the different shards.
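A minimal sketch of such an external rescheduling action, assuming the common `app.oam.dev/name` label links ApplicationRevisions and ResourceTrackers to their application (all names are placeholders):

```shell
# assign the app itself to shard-1
kubectl label application my-app -n my-ns \
  controller.core.oam.dev/scheduled-shard-id=shard-1 --overwrite
# relabel its revisions (namespaced) and resource trackers (cluster-scoped)
kubectl label applicationrevision -n my-ns -l app.oam.dev/name=my-app \
  controller.core.oam.dev/scheduled-shard-id=shard-1 --overwrite
kubectl label resourcetracker -l app.oam.dev/name=my-app \
  controller.core.oam.dev/scheduled-shard-id=shard-1 --overwrite
```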
## FAQ
Q: What happens when the master controller is down?
A: If the webhook is not enabled, only applications scheduled to the master shard are affected; users can still create or change applications in this case. If the webhook is enabled, it goes down together with the master controller; the mutating and validating webhooks then fail, so no new application creation or change is accepted. Old applications scheduled to master will not be processed anymore; applications scheduled to other shards are not affected.
Q: What happens when a slave mode controller is down?
A: For applications that are not scheduled to that shard, nothing happens. For applications scheduled to the broken shard: succeeded applications will not run state-keep or gc, but the delivered resources are not touched; applications still running workflows are not handled, but recover instantly once the controller is restarted.
Q: What happens if a user deletes the sharding label while the application is already in running state?
A: If the webhook is enabled, this behaviour is prevented: the sharding label is inherited from the original one. If the webhook is not enabled, the application will not be handled anymore (no state-keep, no gc, and no update).
### References
1. Design doc: https://github.com/kubevela/kubevela/blob/master/design/vela-core/sharding.md
2. Implementation: https://github.com/kubevela/kubevela/pull/5360

View File

@@ -6,7 +6,7 @@ This document will give a brief introduction to the core mechanisms of KubeVela
## Running Mode
- The execution of workflow has two different running modes: **DAG** mode and **StepByStep** mode. In DAG mode, all steps in the workflow will execute concurrently. They will form a dependency graph for running according to the Input/Output in the step configuration automatically. If one workflow step has not met all its dependencies, it will wait for the conditions. In StepByStep mode, all steps will be executed in order. In KubeVela v1.2+, the default running mode is StepByStep. Currently, we do not support using DAG mode explicitly.
+ The execution of workflow has two different running modes: **DAG** mode and **StepByStep** mode. In DAG mode, all steps in the workflow will execute concurrently. They will form a dependency graph for running according to the Input/Output in the step configuration automatically. If one workflow step has not met all its dependencies, it will wait for the conditions. In StepByStep mode, all steps will be executed in order. In KubeVela v1.2+, the default running mode is StepByStep. Using DAG mode is not supported in versions before KubeVela v1.5.
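A sketch of what explicitly enabling DAG mode looks like in v1.5+, assuming the `workflow.mode` field introduced around that release (the application, component, and step names are illustrative):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: dag-workflow-demo
spec:
  components:
    - name: comp-a
      type: webservice
      properties:
        image: nginx             # placeholder image
    - name: comp-b
      type: webservice
      properties:
        image: nginx             # placeholder image
  workflow:
    mode:
      steps: DAG                 # run top-level steps concurrently
    steps:
      - name: apply-a
        type: apply-component
        properties:
          component: comp-a
      - name: apply-b
        type: apply-component
        properties:
          component: comp-b
```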
## Suspend and Retry
@@ -45,4 +45,4 @@ For failure case, the workflow will retry at most 10 times by default and enter
## Avoid Configuration Drift
When the workflow enters the running state or suspends due to a condition wait, the KubeVela application will routinely re-apply the applied resources to prevent configuration drift. This process is called **State Keep** in KubeVela. By default, the interval of State Keep is 5 minutes, which can be configured in the [bootstrap parameter](../system-operation/bootstrap-parameters) of the KubeVela controller by setting `--application-re-sync-period`. If you want to disable the state keep capability, you can also use the [apply-once](https://github.com/kubevela/kubevela/blob/master/docs/examples/app-with-policy/apply-once-policy/apply-once.md) policy in the application.
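If you do want to disable State Keep, the apply-once policy mentioned above looks roughly like this sketch, based on the linked example (the application body is illustrative):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-demo           # hypothetical name
spec:
  components:
    - name: server
      type: webservice
      properties:
        image: nginx              # placeholder image
  policies:
    - name: apply-once
      type: apply-once
      properties:
        enable: true              # skip state-keep for this application
```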

View File

@@ -237,6 +237,11 @@ The UI console shares a different metadata layer with the controller. It's more
By default, if you're using the CLI to manage the applications directly from the Kubernetes API, we will sync the metadata to the UI backend automatically. Once you deploy the application from the UI console, the automatic sync process is stopped, as the source of truth may have changed.
:::tip
If the namespace of the application operated by the CLI has already been associated with the corresponding environment in the UI, the application will be automatically synchronized to the project associated with that environment in the UI. Otherwise, the application will be synchronized to the default project.
If you want to specify which project in the UI console an application should be synchronized to, please refer to [Creating environments for the project](how-to/dashboard/user/project#creating-environments-for-the-project).
:::
If any changes happen from the CLI after that, the UI console will detect the difference and show it to you. However, it's not recommended to modify the application properties from both sides.
In conclusion, if you're a CLI/YAML/GitOps user, you'd better just use the CLI to manage the application CRD and use the UI console (VelaUX) as a dashboard. Once you've managed an app from the UI console, you need to align with that behavior and manage apps through the UI, API, or webhooks provided by VelaUX.

View File

@@ -45,11 +45,21 @@ The following definitions will be enabled after the installation of fluxcd addon
| targetNamespace | optional, the namespace to install chart, decided by chart itself | your-ns |
| releaseName | optional, release name after installed | your-rn |
| values | optional, override the values.yaml in the chart, used for the rendering of Helm | |
| valuesFrom | optional, valuesFrom holds references to resources containing Helm values for this HelmRelease, and information about how they should be merged. It's a list of [ValueReference](#valuereference) | |
| installTimeout | optional, the timeout for the `helm install` operation, 10 minutes by default | 20m |
| interval | optional, the interval at which to reconcile the Helm release, defaults to 30s | 1m |
| oss | optional, The [oss](#oss) source configuration | |
| git | optional, The [git](#git) source configuration | dev |
##### ValueReference
| Parameters | Description | Example |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- |
| kind | required, kind of the values referent, valid values are ('Secret', 'ConfigMap'). | ConfigMap |
| name | required, name of the values referent. Should reside in the same namespace as the referring resource. | your-cm |
| valuesKey | optional, valuesKey is the data key where the values.yaml or a specific value can be found at. Defaults to 'values.yaml'. | values.yaml |
| targetPath | optional, targetPath is the YAML dot notation path the value should be merged at. When set, the ValuesKey is expected to be a single flat value. Defaults to 'None', which results in the values getting merged at the root. | |
| optional | optional, optional marks this ValuesReference as optional. When set, a not found error or the values reference is ignored, but any ValuesKey, TargetPath or transient error will still result in a reconciliation failure. | |
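Putting the table together, a sketch of a component using `valuesFrom`, assuming the `helm` component type from this addon (the chart URL, version, ConfigMap, and Secret names are placeholders):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-values-demo                        # hypothetical name
spec:
  components:
    - name: redis
      type: helm
      properties:
        repoType: helm
        url: https://charts.bitnami.com/bitnami # placeholder repository
        chart: redis
        version: "16.8.5"                       # placeholder version
        valuesFrom:
          - kind: ConfigMap
            name: redis-values                  # must live in the same namespace
            valuesKey: values.yaml              # default key, merged at the root
          - kind: Secret
            name: redis-auth                    # hypothetical Secret
            valuesKey: password
            targetPath: auth.password           # merge the flat value at this path
            optional: true                      # a missing Secret is ignored
```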
##### OSS
| Parameters | Description | Example |

Binary file not shown (new image, 519 KiB).

Binary file not shown (new image, 204 KiB).

View File

@@ -248,6 +248,7 @@
"collapsed": true,
"items": [
"platform-engineers/system-operation/performance-finetuning",
"platform-engineers/system-operation/controller-sharding",
"platform-engineers/system-operation/controller-grayscale-release",
"platform-engineers/system-operation/high-availability",
"platform-engineers/system-operation/migration-from-old-version"