Chore: update docs
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
parent 504b912172
commit 5f6b5f0937
@@ -2,157 +2,11 @@
title: FAQ
---

- [Compare to X](#Compare-to-X)
  * [What is the difference between KubeVela and Helm?](#What-is-the-difference-between-KubeVela-and-Helm?)

## What is the difference between KubeVela and project-X?

- [Issues](#issues)
  * [Error: unable to create new content in namespace cert-manager because it is being terminated](#error-unable-to-create-new-content-in-namespace-cert-manager-because-it-is-being-terminated)
  * [Error: ScopeDefinition exists](#error-scopedefinition-exists)
  * [You have reached your pull rate limit](#You-have-reached-your-pull-rate-limit)
  * [Warning: Namespace cert-manager exists](#warning-namespace-cert-manager-exists)
  * [How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?](#how-to-fix-issue-mutatingwebhookconfiguration-mutating-webhook-configuration-exists)
- [Operating](#operating)
  * [Autoscale: how to enable metrics server in various Kubernetes clusters?](#autoscale-how-to-enable-metrics-server-in-various-kubernetes-clusters)

Refer to the [comparison details](https://kubevela.io/docs/#kubevela-vs-other-software).

## Compare to X

### What is the difference between KubeVela and Helm?

KubeVela is a platform builder tool to create easy-to-use yet extensible app delivery/management systems with Kubernetes. KubeVela relies on Helm as a templating engine and package format for apps, but Helm is not the only templating module KubeVela supports; another first-class supported approach is CUE.

Also, KubeVela is by design a Kubernetes controller (i.e. it works on the server side); even for its Helm part, a Helm operator is installed.

## Issues

### Error: unable to create new content in namespace cert-manager because it is being terminated

Occasionally you might hit the issue below. It happens when the deletion of the last KubeVela release hasn't completed.

```
vela install
```

```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
failed to create resource
helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
...
Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
```

Take a break and try again in a few seconds.
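Instead of retrying by hand, a small helper can poll until the command succeeds. This is a sketch, not part of KubeVela; the `retry` function name and attempt count are illustrative:

```shell
# retry: run a command until it succeeds, sleeping between attempts.
# Usage: retry <max-attempts> <command...>
retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# e.g. wait out the terminating namespace, then install (assumes a cluster):
# retry 30 vela install
```

Alternatively, `kubectl wait --for=delete namespace/cert-manager --timeout=120s` blocks until the old namespace is fully gone before you retry.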

```
vela install
```

```console
- Installing Vela Core Chart:
Vela system along with OAM runtime already exist.
Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8)

TYPE        CATEGORY    DESCRIPTION
-task       workload    One-off task to run a piece of code or script to completion
-webservice workload    Long-running scalable service with stable endpoint to receive external traffic
-worker     workload    Long-running scalable backend worker without network endpoint
-autoscale  trait       Automatically scale the app following certain triggers or metrics
-metrics    trait       Configure metrics targets to be monitored for the app
-rollout    trait       Configure canary deployment strategy to release the app
-route      trait       Configure route policy to the app
-scaler     trait       Manually scale the app

- Finished successfully.
```

Then manually apply all WorkloadDefinition and TraitDefinition manifests to bring all capabilities back.

```
vela up -f charts/vela-core/templates/defwithtemplate
```

```console
traitdefinition.core.oam.dev/autoscale created
traitdefinition.core.oam.dev/scaler created
traitdefinition.core.oam.dev/metrics created
traitdefinition.core.oam.dev/rollout created
traitdefinition.core.oam.dev/route created
workloaddefinition.core.oam.dev/task created
workloaddefinition.core.oam.dev/webservice created
workloaddefinition.core.oam.dev/worker created
```

```
vela workloads
```

```console
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)

TYPE        CATEGORY    DESCRIPTION
+task       workload    One-off task to run a piece of code or script to completion
+webservice workload    Long-running scalable service with stable endpoint to receive external traffic
+worker     workload    Long-running scalable backend worker without network endpoint
+autoscale  trait       Automatically scale the app following certain triggers or metrics
+metrics    trait       Configure metrics targets to be monitored for the app
+rollout    trait       Configure canary deployment strategy to release the app
+route      trait       Configure route policy to the app
+scaler     trait       Manually scale the app

NAME        DESCRIPTION
task        One-off task to run a piece of code or script to completion
webservice  Long-running scalable service with stable endpoint to receive external traffic
worker      Long-running scalable backend worker without network endpoint
```

### Error: ScopeDefinition exists

Occasionally you might hit the issue below. It happens when there is an old OAM Kubernetes Runtime release, or you applied `ScopeDefinition` before.

```
vela install
```

```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
...
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
```

Delete the `ScopeDefinition` "healthscopes.core.oam.dev" and try again.

```
kubectl delete ScopeDefinition "healthscopes.core.oam.dev"
```

```console
scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted
```
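If more than one leftover definition conflicts, they can be removed in one pass; `--ignore-not-found` keeps the loop safe to re-run. This is a sketch; the resource names in the usage comment are examples taken from the errors above:

```shell
# cleanup_leftovers: delete a list of conflicting resources,
# skipping any that are already gone.
cleanup_leftovers() {
  local res
  for res in "$@"; do
    kubectl delete "$res" --ignore-not-found
  done
}

# cleanup_leftovers scopedefinition/healthscopes.core.oam.dev \
#   traitdefinition/manualscalertraits.core.oam.dev
```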

```
vela install
```

```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452
WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capability
WARN: handle trait template `manualscalertraits.core.oam.dev` failed: no template found, you will unable to use this trait capability
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)

TYPE        CATEGORY    DESCRIPTION
+task       workload    One-off task to run a piece of code or script to completion
+webservice workload    Long-running scalable service with stable endpoint to receive external traffic
+worker     workload    Long-running scalable backend worker without network endpoint
+autoscale  trait       Automatically scale the app following certain triggers or metrics
+metrics    trait       Configure metrics targets to be monitored for the app
+rollout    trait       Configure canary deployment strategy to release the app
+route      trait       Configure route policy to the app
+scaler     trait       Manually scale the app

- Finished successfully.
```

### You have reached your pull rate limit

When you look into the logs of the Pod kubevela-vela-core, you may find the issue below.

@@ -164,7 +18,7 @@ NAME READY STATUS RESTARTS AGE
kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m
```

>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by
>authenticating and upgrading: https://www.docker.com/increase-rate-limit

You can use the GitHub container registry instead.

@@ -173,165 +27,4 @@ You can use the GitHub container registry instead.
docker pull ghcr.io/kubevela/kubevela/vela-core:latest
```
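Pulling the mirror only helps the node you pull on; to make the cluster use it, point the deployment at the GHCR image. A sketch: the deployment name matches the Pod shown above, but the container name `kubevela` is an assumption, so verify yours with `kubectl get deploy -n vela-system -o yaml` first:

```shell
# use_ghcr_image: switch the vela-core deployment to the GHCR mirror image.
# Container name "kubevela" is an assumption; check it against your deployment.
use_ghcr_image() {
  local tag=${1:-latest}
  kubectl set image -n vela-system deployment/kubevela-vela-core \
    "kubevela=ghcr.io/kubevela/kubevela/vela-core:${tag}"
}

# use_ghcr_image latest
```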

### Warning: Namespace cert-manager exists

If you hit the issue below, a `cert-manager` release might exist whose namespace and RBAC-related resources conflict with KubeVela.

```
vela install
```

```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
...
	/opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
```

Try these steps to fix the problem.

- Delete the release `cert-manager`
- Delete the namespace `cert-manager`
- Install KubeVela again

```
helm delete cert-manager -n cert-manager
```

```console
release "cert-manager" uninstalled
```

```
kubectl delete ns cert-manager
```

```console
namespace "cert-manager" deleted
```

```
vela install
```

```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379
Automatically discover capabilities successfully ✅ (no changes)

TYPE        CATEGORY    DESCRIPTION
task        workload    One-off task to run a piece of code or script to completion
webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
worker      workload    Long-running scalable backend worker without network endpoint
autoscale   trait       Automatically scale the app following certain triggers or metrics
metrics     trait       Configure metrics targets to be monitored for the app
rollout     trait       Configure canary deployment strategy to release the app
route       trait       Configure route policy to the app
scaler      trait       Manually scale the app
- Finished successfully.
```

### How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?

If you deploy some other service that applies the MutatingWebhookConfiguration mutating-webhook-configuration, installing KubeVela will hit the issue below.

```shell
- Installing Vela Core Chart:
install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
github.com/kubevela/kubevela/pkg/commands.InstallOamRuntime
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
github.com/kubevela/kubevela/pkg/commands.(*initCmd).run
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
github.com/kubevela/kubevela/pkg/commands.NewInstallCommand.func2
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
github.com/spf13/cobra.(*Command).execute
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
	/home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
runtime.main
	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
runtime.goexit
	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
```

To fix this issue, please upgrade the KubeVela CLI `vela` to a version higher than `v0.2.2` from [KubeVela releases](https://github.com/kubevela/kubevela/releases).

## Operating

### Autoscale: how to enable metrics server in various Kubernetes clusters?

Autoscale depends on the metrics server, so it has to be enabled in your cluster. Check whether the metrics server is enabled with `kubectl top nodes` or `kubectl top pods`.

If the output is similar to the below, the metrics server is enabled.

```shell
kubectl top nodes
```
```console
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cn-hongkong.10.0.1.237   288m         7%     5378Mi          78%
cn-hongkong.10.0.1.238   351m         8%     5113Mi          74%
```
```
kubectl top pods
```
```console
NAME                          CPU(cores)   MEMORY(bytes)
php-apache-65f444bf84-cjbs5   0m           1Mi
wordpress-55c59ccdd5-lf59d    1m           66Mi
```

Otherwise you have to manually enable the metrics server in your Kubernetes cluster.

- ACK (Alibaba Cloud Container Service for Kubernetes)

The metrics server is already enabled.

- ASK (Alibaba Cloud Serverless Kubernetes)

The metrics server has to be enabled in the `Operations/Add-ons` section of the [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.



Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.

- Kind

Install the metrics server as below, or install the [latest version](https://github.com/kubernetes-sigs/metrics-server#installation).

```shell
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```

Also add the following part under `.spec.template.spec.containers` in the yaml file loaded by `kubectl edit deploy -n kube-system metrics-server`.

Note: this is just a workaround, not for production-level use.

```
command:
- /metrics-server
- --kubelet-insecure-tls
```
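The same edit can be applied non-interactively with a JSON patch. A sketch that assumes the metrics-server container is the first in the list and uses `command` exactly as in the snippet above:

```shell
# patch_metrics_server: apply the workaround above as a one-shot JSON patch
# (same effect as editing the deployment by hand).
patch_metrics_server() {
  kubectl patch deployment metrics-server -n kube-system --type=json -p '[
    {"op": "add",
     "path": "/spec/template/spec/containers/0/command",
     "value": ["/metrics-server", "--kubelet-insecure-tls"]}
  ]'
}

# patch_metrics_server
```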

- MiniKube

Enable it with the following command.

```shell
minikube addons enable metrics-server
```

Have fun [setting autoscale](../../extensions/set-autoscale) on your application.

@@ -1,10 +1,26 @@
---
title: Manage Clusters with UX
---

To deploy application components into different places, VelaUX provides **Target** for users to manage their deploy destinations like clusters or namespaces.
> This doc requires you to have [velaux](../../../reference/addons/velaux) installed.

> This document only applies to the UI.

## Manage Clusters

Currently, VelaUX supports managing two kinds of clusters:

* Connecting an existing Kubernetes cluster.
* Connecting an Alibaba ACK cluster.

Users with cluster management permissions can enter the cluster management page to add or detach managed clusters.



For connected ACK clusters, the platform saves some cloud information: region, VPC, dashboard address, etc. When users use the cluster to create a Target, the cloud information is automatically assigned to the Target, which cloud service applications can use.

## Manage Delivery Target

To deploy application components into different places, VelaUX provides a new concept, **Delivery Target**, for users to manage their deploy destinations: not only clusters or namespaces, but also cloud provider information such as region, VPC, and so on.

## Cluster

@@ -34,6 +50,8 @@ Now you can use the environment which was bound to the targets just created.



In the newly created application, you will see two targets contained in the workflow.



After you deploy this application, the component will be dispatched to both targets, into the specific namespace of each cluster.

@@ -1,5 +1,5 @@
---
title: Project Management
---

Project provides a logical separation of applications, environments, and delivery targets; this is helpful when VelaUX is used by multiple teams. Project provides the following features:

@@ -5,8 +5,8 @@ title: Installation
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

- For installation into an existing Kubernetes cluster, please read the [advanced installation guide](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster).
- For upgrading an existing KubeVela control plane, please read the [upgrade guide](./platform-engineers/advanced-install#upgrade).

## 1. Install VelaD

@@ -14,7 +14,7 @@ import TabItem from '@theme/TabItem';

- VelaD provides Kubernetes by leveraging [K3s](https://rancher.com/docs/k3s/latest/en/quick-start/) on Linux or [k3d](https://k3d.io/) in a Docker environment.
- KubeVela, all related images, and the `vela` command line are packaged together, which enables air-gapped installation.
- **VelaD suits local development and quick demos well, but we strongly recommend you [install KubeVela with managed Kubernetes services](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster) for production usage**.

### Prerequisites

@@ -108,7 +108,7 @@ VelaUX need authentication. Default username is `admin` and the password is `Vel
velad uninstall
```

This command will clean up the KubeVela controllers along with the Kubernetes cluster; refer to [the advanced guide](./platform-engineers/advanced-install#uninstall) for more detailed steps.

## Next Step

@@ -167,6 +167,8 @@ Please refer to [VelaUX Guide](../reference/addons/velaux).

## Upgrade

> If you're trying to upgrade across multiple minor versions (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guidance.

### 1. Upgrade CLI

<Tabs

@@ -1,84 +0,0 @@
---
title: Upgrade KubeVela from 1.1.5 to 1.2.5
---

## Introduction
This article is the operation manual for upgrading KubeVela from 1.1.5 to 1.2.5 with business applications already deployed. If you currently have no business applications deployed in KubeVela and no addons enabled, you can directly refer to the official [upgrade document](https://kubevela.io/en/docs/v1.2/platform-engineers/advanced-install#%E5%8D%87%E7%BA%A7) to upgrade.

## Introduction to upgrade steps
### Preparation before upgrading
The following inspection items need to be prepared before upgrading; check them against your actual situation.
- The 1.2.5 CRDs, which can be obtained from the [official website source files](https://github.com/kubevela/kubevela/tree/v1.2.5/charts/vela-core)
- The KubeVela [offline mirror](https://kubevela.io/zh/docs/v1.2/platform-engineers/system-operation/offline-installation) for offline deployment, or [velad](https://github.com/oam-dev/velad)
- Custom definitions
### Check the KubeVela service
This step checks whether KubeVela is running normally. If a pod is not in a ready state, the upgrade script should not be run automatically; manual intervention is required to fix KubeVela's running state first.
```
NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubevela-cluster-gateway-5bff6d564d-rhkp7   1/1     Running   0          16d
pod/kubevela-vela-core-b67b87c7-9w7d4           1/1     Running   1          16d

NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubevela-cluster-gateway-service   ClusterIP   172.16.236.150   <none>        9443/TCP   16d
service/vela-core-webhook                  ClusterIP   172.16.54.195    <none>        443/TCP    284d

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubevela-cluster-gateway   1/1     1            1           16d
deployment.apps/kubevela-vela-core         1/1     1            1           284d

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kubevela-cluster-gateway-5bff6d564d   1         1         1       16d
replicaset.apps/kubevela-vela-core-569669cfb5         0         0         0       136d
replicaset.apps/kubevela-vela-core-75bcc6b64          0         0         0       226d
replicaset.apps/kubevela-vela-core-78cf7cb5c7         0         0         0       225d
replicaset.apps/kubevela-vela-core-b67b87c7           1         1         1       16d
replicaset.apps/kubevela-vela-core-cfc54f68f          0         0         0       223d
replicaset.apps/kubevela-vela-core-ff5fcbc44          0         0         0       284d
```
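A quick way to gate the upgrade on pod health is to compare the two numbers in the READY column. A sketch; the namespace is an example:

```shell
# all_ready: succeed only if every pod in the namespace reports READY x/x.
all_ready() {
  kubectl get pods -n "$1" --no-headers | awk '
    { split($2, r, "/"); if (r[1] != r[2]) bad = 1 }
    END { exit bad }'
}

# all_ready vela-system && echo "safe to upgrade"
```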
In addition, check whether the status of the applications and workloads enabled by the corresponding addons is normal.
### Update the CRDs
Apply the 1.2.5 CRDs to the cluster; the CRD list is as follows.
```shell
kubectl apply -f /path/to/your/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_applications.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_envbindings.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f /path/to/your/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f /path/to/your/crds/standard.oam.dev_rollouts.yaml
```
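If you extracted the chart locally, the list above can be applied in one loop instead of line by line. A sketch; `/path/to/your/crds` is the same placeholder used above:

```shell
# apply_crds: apply every CRD manifest in a directory.
apply_crds() {
  local dir=$1 f
  for f in "$dir"/*.yaml; do
    kubectl apply -f "$f"
  done
}

# apply_crds /path/to/your/crds
```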
### Execute the upgrade command
This is the key step of the KubeVela upgrade. The command is as follows.
```shell
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.2.5 --wait
```
### Enable addons
After the upgrade succeeds, enable the addons you need as follows.
```shell
# View the list of addons
vela addon list
# Enable an addon
vela addon enable xxx
```
⚠️ Note: this step is not required if the addon was already enabled in the pre-upgrade version.
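When several addons need to be re-enabled, a loop keeps the step repeatable. The addon names in the usage comment are examples; check `vela addon list` for yours:

```shell
# enable_addons: enable a set of addons after the upgrade.
enable_addons() {
  local addon
  for addon in "$@"; do
    vela addon enable "$addon"
  done
}

# enable_addons velaux fluxcd
```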
### Update custom definitions
After the addons are enabled, apply your custom definitions to KubeVela. If there are none, you can skip this; the normal upgrade process is now complete.
## Q&A
The following are problems you may encounter during the upgrade from 1.1.5 to 1.2.5, with their solutions.
- Q: After upgrading from 1.1.5 to 1.2.5, the application status becomes workflowSuspending, and `vela workflow resume` has no effect.
- A: The reason is that the ResourceTracker (rt) already exists. Check the status of the application to see the specific reason. The solution is to delete the rt and then run `vela workflow resume xxx`.
- Q: How to handle business applications in the workflowSuspending state?
- A: After the update, the status of the apply changes to running, so if the application status does not affect the business process, it will resolve automatically once users operate on the application after the upgrade.
- Q: Applications associated with the flux addon stay in the workflowSuspending state.
- A: This can only be recovered by manually deleting the rt and then running `vela workflow resume`. It generally does not affect normal use and can be recovered during subsequent upgrades.
- Q: Rollouts of existing workloads may fail to deploy updated components.
- A: Fix this by replacing the ownerReferences of the existing controllerrevision, pointing the controllerrevision at the application.
- Q: An updated definition file does not take effect.
- A: You need to modify the flux application and remove the componentDefinition and traitDefinition.

@@ -1,21 +1,20 @@
---
title: Lifecycle of Managed Cluster
---

This section will introduce the lifecycle of managed clusters.

## Create clusters

KubeVela can generally adopt any Kubernetes cluster as a managed cluster; the control plane won't install anything into your managed cluster unless you enable addons.



If you don't have any clusters, you can refer to [VelaD](https://github.com/kubevela/velad) to create one:

```
velad install --name <cluster-name> --cluster-only
```

## Join Clusters

You can simply join an existing cluster into KubeVela by specifying its kubeconfig as below.

@@ -32,7 +31,7 @@ $ vela cluster join hangzhou-1.kubeconfig --name hangzhou-1
$ vela cluster join hangzhou-2.kubeconfig --name hangzhou-2
```
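Joining many clusters follows the same shape; deriving the cluster name from the kubeconfig filename keeps names consistent. A sketch; the directory layout is an assumption:

```shell
# join_all: join every kubeconfig in a directory,
# naming each cluster after its file.
join_all() {
  local cfg name
  for cfg in "$1"/*.kubeconfig; do
    name=$(basename "$cfg" .kubeconfig)
    vela cluster join "$cfg" --name "$name"
  done
}

# join_all ./kubeconfigs
```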
|
||||
|
||||
### vela cluster list
|
||||
## List clusters
|
||||
|
||||
After clusters joined, you could list all clusters managed by KubeVela.
|
||||
|
||||
|
@ -47,7 +46,7 @@ cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true
|
|||
|
||||
> By default, the hub cluster where KubeVela locates is registered as the `local` cluster. You can use it like a managed cluster in spite that you cannot detach it or modify it.
|
||||
|
||||
### label your cluster
|
||||
## Label your cluster
|
||||
|
||||
You can also give labels to your clusters, which helps you select clusters for deploying applications.
|
||||
|
||||
|
@ -62,7 +61,7 @@ cluster-hangzhou-1 X509Certificate <ENDPOINT_HANGZHOU_1> true
|
|||
cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true region=hangzhou
|
||||
```
|
||||
|
||||
### vela cluster detach
|
||||
## Detach your cluster
|
||||
|
||||
You can also detach a cluster if you do not want to use it anymore.
|
||||
|
||||
|
@ -72,10 +71,14 @@ $ vela cluster detach beijing
|
|||
|
||||
> It is dangerous to detach a cluster that is still in use. However, if you need to modify the stored cluster credentials, such as rotating certificates, it is possible to do so.
|
||||
|
||||
### vela cluster rename
|
||||
## Rename a cluster
|
||||
|
||||
This command renames a cluster managed by KubeVela.
|
||||
|
||||
```shell script
|
||||
$ vela cluster rename cluster-prod cluster-production
|
||||
```
|
||||
|
||||
|
||||
## Next
|
||||
|
||||
- Manage cluster with [UI console](../../how-to/dashboard/target/overview).
|
|
@ -0,0 +1,168 @@
|
|||
---
|
||||
title: Version Migration
|
||||
---
|
||||
|
||||
This doc provides a migration guide from old versions to new ones without disturbing running business applications. However, since scenarios differ from each other, we strongly recommend testing the migration in a simulation environment before the real migration of your production systems.
|
||||
|
||||
## From v1.3.x to v1.4.x
|
||||
|
||||
> ⚠️ Note: If you're on v1.2.x or older, you must upgrade to v1.3.x first before you upgrade to v1.4.x.
|
||||
|
||||
1. Upgrade the CRDs. Make sure you upgrade the CRDs before upgrading the Helm chart.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_applications.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
|
||||
```
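Since all five CRD manifests follow the same URL pattern, the commands above can equivalently be written as a loop (a sketch; it assumes `kubectl` is installed and the machine can reach raw.githubusercontent.com):

```shell
# Base URL for the release-1.4 CRD manifests
BASE=https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds
# Apply each CRD listed in the migration guide
for crd in applicationrevisions applications resourcetrackers componentdefinitions definitionrevisions; do
  kubectl apply -f "$BASE/core.oam.dev_${crd}.yaml"
done
```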
|
||||
|
||||
2. Upgrade your KubeVela chart
|
||||
|
||||
```shell
|
||||
helm repo add kubevela https://charts.kubevela.net/core
|
||||
helm repo update
|
||||
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.4.1 --wait
|
||||
```
|
||||
|
||||
3. Download and upgrade to the corresponding CLI
|
||||
```shell
|
||||
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.4.1
|
||||
```
|
||||
|
||||
4. Upgrade VelaUX or other addons
|
||||
|
||||
```shell
|
||||
vela addon upgrade velaux --version 1.4.0
|
||||
```
|
||||
|
||||
Please note that if you're using the terraform addon, you should upgrade it to version `1.0.6+` along with the vela-core upgrade, following these steps:
|
||||
|
||||
1. Upgrade vela-core to v1.3.4+; all existing Terraform-typed Applications won't be affected in this process.
|
||||
2. Upgrade the `terraform` addon, or newly provisioned Terraform-typed Applications won't succeed.
|
||||
- 2.1 Manually upgrade the Configuration CRD: https://github.com/oam-dev/terraform-controller/blob/v0.4.3/chart/crds/terraform.core.oam.dev_configurations.yaml.
|
||||
- 2.2 Upgrade add-on `terraform` to version `1.0.6+`.
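Steps 2.1 and 2.2 above can be run as follows (a sketch; it assumes `kubectl` and the `vela` CLI are installed, the raw URL is derived from the GitHub link above, and `1.0.6` stands in for any `1.0.6+` release):

```shell
# 2.1 Manually upgrade the Configuration CRD (raw form of the blob URL above)
kubectl apply -f https://raw.githubusercontent.com/oam-dev/terraform-controller/v0.4.3/chart/crds/terraform.core.oam.dev_configurations.yaml
# 2.2 Upgrade the terraform addon
vela addon upgrade terraform --version 1.0.6
```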
|
||||
|
||||
|
||||
## From v1.2.x to v1.3.x
|
||||
|
||||
1. Upgrade the CRDs. Make sure you upgrade the CRDs before upgrading the Helm chart.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_applications.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
|
||||
```
|
||||
|
||||
2. Upgrade your KubeVela chart
|
||||
|
||||
```shell
|
||||
helm repo add kubevela https://charts.kubevela.net/core
|
||||
helm repo update
|
||||
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.3.6 --wait
|
||||
```
|
||||
|
||||
3. Download and upgrade to the corresponding CLI
|
||||
```shell
|
||||
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.3.6
|
||||
```
|
||||
|
||||
4. Upgrade VelaUX or other addon
|
||||
|
||||
```shell
|
||||
vela addon upgrade velaux --version 1.3.6
|
||||
```
|
||||
|
||||
Please note that if you're using the terraform addon, you should upgrade it to version `1.0.6+` along with the vela-core upgrade, following these steps:
|
||||
|
||||
1. Upgrade vela-core to v1.3.4+; all existing Terraform-typed Applications won't be affected in this process.
|
||||
2. Upgrade the `terraform` addon, or newly provisioned Terraform-typed Applications won't succeed.
|
||||
- 2.1 Manually upgrade the Configuration CRD: https://github.com/oam-dev/terraform-controller/blob/v0.4.3/chart/crds/terraform.core.oam.dev_configurations.yaml.
|
||||
- 2.2 Upgrade add-on `terraform` to version `1.0.6+`.
|
||||
|
||||
## From v1.1.x to v1.2.x
|
||||
|
||||
1. Check that the services are running normally
|
||||
|
||||
Make sure all your services are running normally before migration.
|
||||
|
||||
```
|
||||
$ kubectl get all -n vela-system
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/kubevela-cluster-gateway-5bff6d564d-rhkp7 1/1 Running 0 16d
|
||||
pod/kubevela-vela-core-b67b87c7-9w7d4 1/1 Running 1 16d
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/kubevela-cluster-gateway-service ClusterIP 172.16.236.150 <none> 9443/TCP 16d
|
||||
service/vela-core-webhook ClusterIP 172.16.54.195 <none> 443/TCP 284d
|
||||
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/kubevela-cluster-gateway 1/1 1 1 16d
|
||||
deployment.apps/kubevela-vela-core 1/1 1 1 284d
|
||||
```
|
||||
|
||||
In addition, check that all KubeVela applications, including addons, are in a normal status.
|
||||
|
||||
2. Update the CRDs to v1.2.x
|
||||
|
||||
Update the CRDs in the cluster to v1.2.x. The CRD list is as follows; some of them can be omitted if you didn't have them before:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_applications.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_envbindings.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
|
||||
```
|
||||
|
||||
3. Execute the upgrade command
|
||||
|
||||
This step will upgrade the system to the new version:
|
||||
|
||||
```shell
|
||||
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.2.6 --wait
|
||||
```
|
||||
|
||||
Upgrade the CLI to the v1.2.x version corresponding to the core version:
|
||||
|
||||
```shell
|
||||
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.2.6
|
||||
```
|
||||
|
||||
4. Enable addons
|
||||
|
||||
After the upgrade succeeds, use the following commands to enable any addons you need:
|
||||
|
||||
```shell
|
||||
# View the list of addons
|
||||
vela addon list
|
||||
# Enable addon
|
||||
vela addon enable <addon name>
|
||||
```
|
||||
|
||||
⚠️ Note: This step is not required if the addon was already enabled and used in the pre-upgrade version.
|
||||
|
||||
5. Update Custom Definitions
|
||||
|
||||
Check whether your custom definitions work in the new version, and upgrade them if there are any issues.
|
||||
If you haven't defined any, the normal upgrade process is completed!
|
||||
|
||||
6. Common Questions for this migration
|
||||
|
||||
- Q: After upgrading from 1.1.x to 1.2.x, the application status becomes `workflowsuspending`, and using `vela workflow resume` doesn't work.
|
||||
- A: The resource tracker mechanism was migrated. Generally, you can delete the existing resourcetracker; after that, the `vela workflow resume` command will work.
|
||||
- Q: Why does my application's status become suspended after the upgrade?
|
||||
- A: Don't worry if your application becomes suspended after the upgrade; this won't affect the running business application. It will return to normal the next time you deploy the application. You can also manually change any annotation on the application to resolve it.
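The recovery flow from the first answer can be sketched as follows (the application name `my-app` and resourcetracker name are hypothetical; inspect the application status before deleting anything):

```shell
# List resourcetrackers to find the one held by the stuck application
kubectl get resourcetracker
# Delete the legacy resourcetracker for the application (hypothetical name)
kubectl delete resourcetracker my-app-default
# Resume the suspended workflow
vela workflow resume my-app
```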
|
|
@ -1,22 +1,20 @@
|
|||
---
|
||||
title: Config resource relationship Rule
|
||||
title: Customize Resource Topology
|
||||
---
|
||||
|
||||
The topology graph of VelaUX can show the resource tree of an application. As shown in this picture.
|
||||
The resource topology graph of VelaUX can automatically show the resource tree of an application for any workload, including Helm charts and cloud resources.
|
||||
|
||||

|
||||
|
||||
## Mechanism
|
||||
|
||||
There have been some built-in rules in system to specify the relationship between two types of resource. System will search all children resources by these rules.
|
||||
By default, the connections in the resource graph rely on the [ownerReference mechanism](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/), while it's also configurable for CRDs that don't set specific ownerReferences. The controller will search all child resources of a node according to these rules.
|
||||
|
||||
For example, the built-in rules has defined the [Deployment's](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) children resource only can be [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/), so when show the children resource of one deployment Vela will only care about the replicaSet.
|
||||
Vela will list all replicaSet in the same namespace with deployment and filter out those ownerReference isn't this deployment.
|
||||
These rules can also reduce redundant resources for better performance. For example, one of the built-in rules defines that a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) will only have [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) as its child, so when the resource graph is drawn, only resources of type ReplicaSet with a matching ownerReference are displayed.
|
||||
|
||||
## Add more rules
|
||||
|
||||
## Add relationship rules
|
||||
|
||||
The built-in rules is limited, you can add a customized rule by create a configmap like this:
|
||||
The built-in rules are limited; you can add customized rules by creating a ConfigMap like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -36,7 +34,8 @@ data:
|
|||
kind: Pod
|
||||
```
|
||||
|
||||
First, this configmap should have the special label `"rules.oam.dev/resources": "true"` the data key `rules` defined a list of relationship rules.
|
||||
One relationship rule define what children type a parent can have.
|
||||
In this example above, the parent type is `Cloneset` in group `apps.kruise.io`, his children resource type is `v1/Pod`
|
||||
All customize rules specified in these configmaps would be merged with built-in rules and take effect in searching children resources.
|
||||
1. The ConfigMap should have the special label `"rules.oam.dev/resources": "true"`. The data key `rules` defines a list of relationship rules.
|
||||
2. Each relationship rule defines which child types a parent resource can have.
|
||||
|
||||
In the example above, the parent type is `Cloneset` in the group `apps.kruise.io`, and its child resource type is `v1/Pod`.
|
||||
All customized rules specified in these ConfigMaps are merged with the built-in rules and take effect when searching for child resources.
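Putting the pieces together, a complete rule ConfigMap might look like this (a sketch; the ConfigMap name and namespace are hypothetical, while the label and the `rules` data key are the ones described above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloneset-relation       # hypothetical name
  namespace: vela-system
  labels:
    "rules.oam.dev/resources": "true"
data:
  rules: |-
    - parentResourceType:
        group: apps.kruise.io
        kind: Cloneset
      childrenResourceType:
        - apiVersion: v1
          kind: Pod
```

After applying the ConfigMap, the topology graph will traverse from `Cloneset` resources down to their Pods.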
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: VelaUX Extension
|
||||
title: Customize UX of Definition
|
||||
---
|
||||
|
||||
VelaUX uses the UI-Schema specification to extend components, workflow steps, and operational trait resources. When input parameters vary, this delivers a more native UI experience.
|
||||
|
|
|
@ -207,11 +207,11 @@
|
|||
"message": "触发器管理",
|
||||
"description": "通过 Dashboard 对应用触发器的管理操作说明"
|
||||
},
|
||||
"sidebar.docs.category.Manage integration configs": {
|
||||
"sidebar.docs.category.Manage Config of Integration": {
|
||||
"message": "集成配置管理",
|
||||
"description": "外部系统集成相关配置的管理说明"
|
||||
},
|
||||
"sidebar.docs.category.Manage resource": {
|
||||
"sidebar.docs.category.Cluster Management": {
|
||||
"message": "资源管理",
|
||||
"description": "管理集群和交付目标等资源"
|
||||
},
|
||||
|
@ -283,7 +283,7 @@
|
|||
"message": "开发者手册",
|
||||
"description": "developer guide for sidebar"
|
||||
},
|
||||
"sidebar.docs.category.User management": {
|
||||
"sidebar.docs.category.User Management": {
|
||||
"message": "用户管理",
|
||||
"description": "User management for sidebar"
|
||||
},
|
||||
|
|
|
@ -5,8 +5,8 @@ title: 快速安装
|
|||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
- 在 Kubernetes 集群上安装,请参考 [高级安装](./platform-engineers/advanced-install/#install-kubevela-with-existing-kubernetes-cluster).
|
||||
- 升级 KubeVela 请参考 [升级文档](./platform-engineers/advanced-install/#upgrade).
|
||||
- 在 Kubernetes 集群上安装,请参考 [高级安装](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster).
|
||||
- 升级 KubeVela 请参考 [升级文档](./platform-engineers/advanced-install#upgrade).
|
||||
|
||||
## 1. 安装 VelaD
|
||||
|
||||
|
@ -14,7 +14,7 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
- VelaD 继承了 [K3s](https://rancher.com/docs/k3s/latest/en/quick-start/) 和 [k3d](https://k3d.io/) 的能力,同时将 KubeVela 所需的制品整体打包。
|
||||
- VelaD 可以帮助你在离线环境中完成安装。
|
||||
- VelaD 目前仅适用于快速体验和测试开发, [生成环境安装请参考高级安装文档](./platform-engineers/advanced-install/#install-kubevela-with-existing-kubernetes-cluster)。
|
||||
- VelaD 目前仅适用于快速体验和测试开发, [生产环境安装请参考高级安装文档](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster)。
|
||||
|
||||
### 前提条件
|
||||
|
||||
|
@ -100,7 +100,7 @@ VelaUX 是需要登陆认证的,默认管理员账号为 `admin` 密码为 `Ve
|
|||
velad uninstall
|
||||
```
|
||||
|
||||
此命令将删除 VelaD 安装的环境, 其他自定义方式安装的请参考 [KubeVela 卸载文档](./platform-engineers/advanced-install/#uninstall)。
|
||||
此命令将删除 VelaD 安装的环境, 其他自定义方式安装的请参考 [KubeVela 卸载文档](./platform-engineers/advanced-install#uninstall)。
|
||||
|
||||
## 下一步
|
||||
|
||||
|
|
|
@ -1,82 +0,0 @@
|
|||
---
|
||||
title: Kubevela 1.1.5升级到1.2.5参考文档
|
||||
---
|
||||
## 一、前言
|
||||
此文章为已部署业务应用的KubeVela由1.1.5升级到1.2.5的操作手册,如果您当前KubeVela未部署业务应用并且未启用插件,可以直接参照官方[升级文档](https://kubevela.io/zh/docs/v1.2/platform-engineers/advanced-install#%E5%8D%87%E7%BA%A7)进行升级。
|
||||
## 二、升级步骤
|
||||
### 2.1 升级前准备
|
||||
以下是升级前需要准备的检查项,可以按照自己实际情况对照
|
||||
- 1.2.5 CRD,可以在[官网源码文件](https://github.com/kubevela/kubevela/tree/v1.2.5/charts/vela-core)中获取
|
||||
- KubeVela[离线镜像](https://kubevela.io/zh/docs/v1.2/platform-engineers/system-operation/offline-installation),用于离线化部署或使用[velad](https://github.com/oam-dev/velad)
|
||||
- 自定义Definition
|
||||
### 2.2 第一步 检查KubeVela服务
|
||||
该步骤主要是检查KubeVela运行是否正常,如果pod不为就绪状态,则理论上应该禁止自动执行升级脚本。需要人工介入先修复KubeVela运行状态。
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/kubevela-cluster-gateway-5bff6d564d-rhkp7 1/1 Running 0 16d
|
||||
pod/kubevela-vela-core-b67b87c7-9w7d4 1/1 Running 1 16d
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/kubevela-cluster-gateway-service ClusterIP 172.16.236.150 <none> 9443/TCP 16d
|
||||
service/vela-core-webhook ClusterIP 172.16.54.195 <none> 443/TCP 284d
|
||||
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
deployment.apps/kubevela-cluster-gateway 1/1 1 1 16d
|
||||
deployment.apps/kubevela-vela-core 1/1 1 1 284d
|
||||
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
replicaset.apps/kubevela-cluster-gateway-5bff6d564d 1 1 1 16d
|
||||
replicaset.apps/kubevela-vela-core-569669cfb5 0 0 0 136d
|
||||
replicaset.apps/kubevela-vela-core-75bcc6b64 0 0 0 226d
|
||||
replicaset.apps/kubevela-vela-core-78cf7cb5c7 0 0 0 225d
|
||||
replicaset.apps/kubevela-vela-core-b67b87c7 1 1 1 16d
|
||||
replicaset.apps/kubevela-vela-core-cfc54f68f 0 0 0 223d
|
||||
replicaset.apps/kubevela-vela-core-ff5fcbc44 0 0 0 284d
|
||||
```
|
||||
此外也需要检查开启的对应插件启用的application和workload状态是否正常。
|
||||
### 2.3 第二步 更新CRD
|
||||
将1.2.5版本的CRD更新到集群,CRD列表如下
|
||||
```shell
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_applicationrevisions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_applications.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_componentdefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_definitionrevisions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_envbindings.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_healthscopes.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_manualscalertraits.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_policydefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_resourcetrackers.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_scopedefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_traitdefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_workflowstepdefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/core.oam.dev_workloaddefinitions.yaml
|
||||
kubectl apply -f /path/to/your/crds/standard.oam.dev_rollouts.yaml
|
||||
```
|
||||
### 2.4 第三步 执行升级命令
|
||||
该步骤为升级KubeVela关键步骤,执行命令如下
|
||||
``` shell
|
||||
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.2.5 --wait
|
||||
```
|
||||
### 2.5 第四步 开启插件
|
||||
升级成功后,如果有需要启用的插件可以自行使用如下方式开启
|
||||
```shell
|
||||
# 查看插件列表
|
||||
vela addon list
|
||||
# 启用插件
|
||||
vela addon enable xxx
|
||||
```
|
||||
⚠️注意: 如果已经在升级前版本已经启用插件并使用,不需要执行此步骤
|
||||
### 2.6 第五步 更新自定义definition
|
||||
开启插件后,将自定义的definition更新到KubeVela,如果没有自定义definition可以跳过,自此正常的升级流程完成!
|
||||
## 三、问题与解决
|
||||
以下是在升级1.1.5到1.2.5过程中您可以遇到的问题与解决办法
|
||||
- Q: 1.1.5升级到1.2.5后,application状态变为workflowsuspending,使用workflow resume无效
|
||||
- A: 原因为rt已存在,查看application的status可看到具体原因,解决办法为删除rt后使用workflow resume xxx。
|
||||
- Q: workflowsuspending状态的业务application如何解决
|
||||
- A: 通过更新后apply状态变为running,因此如果application的状态不影响业务流程的话,可以在后续更新部署后自动解决。
|
||||
- Q: 使用的flux addon关联的application,状态一直为workflowsuspending
|
||||
- A: 这个只能通过手动删除rt,然后执行vela workflow resume才可以恢复,一般不影响正常使用,后续升级的时候再行恢复。
|
||||
- Q: 存量的rollout会存在更新组件部署失败的情况
|
||||
- A: 通过更换存量controllerrevision的ownereferences解决,将controllerrevision指向application
|
||||
- Q: 更新definition文件发现没有生效
|
||||
- A: 需要修改flux的application,去除componentDefinition和traitDefinition
|
|
@ -61,11 +61,38 @@ bucket-app bucket-comp kustomize running healthy 2021-0
|
|||
|
||||
bucket-app APP 的 PHASE 为 running,同时 STATUS 为 healthy。应用部署成功!
|
||||
|
||||
#### 参数说明
|
||||
|
||||
| 参数 | 是否可选 | 含义 | 例子 |
|
||||
| -------------- | -------- | ------------------------------------------------------------------------------------------------- | --------------------------- |
|
||||
| repoType | 必填 | 值为 oss 标志 kustomize 配置来自 OSS bucket | oss |
|
||||
| pullInterval | 可选 | 与 bucket 进行同步,与调谐 kustomize 的时间间隔 默认值5m(5分钟) | 10m |
|
||||
| url | 必填 | bucket 的 endpoint,无需填写 scheme | oss-cn-beijing.aliyuncs.com |
|
||||
| secretRef | 可选 | 保存一个 Secret 的名字,该Secret是读取 bucket 的凭证。Secret 包含 accesskey 和 secretkey 字段 | sec-name |
|
||||
| timeout | 可选 | 下载操作的超时时间,默认 20s | 60s |
|
||||
| path | 必填 | 包含 kustomization.yaml 文件的目录, 或者包含一组 YAML 文件(用以生成 kustomization.yaml )的目录。 | ./prod |
|
||||
| oss.bucketName | 必填 | bucket 名称 | your-bucket |
|
||||
| oss.provider | 可选 | 可选 generic 或 aws,若从 aws EC2 获取凭证则填 aws。默认 generic。 | generic |
|
||||
| oss.region | 可选 | bucket 地区 | |
|
||||
|
||||
|
||||
### 监听 Git 仓库中的文件
|
||||
|
||||
|
||||
#### 参数说明
|
||||
|
||||
### 参数说明
|
||||
|
||||
| 参数 | 是否可选 | 含义 | 例子 |
|
||||
| ------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------- |
|
||||
| repoType | 必填 | 值为 git 标志 kustomize 配置来自 Git 仓库 | git |
|
||||
| pullInterval | 可选 | 与 Git 仓库进行同步,与调谐 helm release 的时间间隔 默认值5m(5分钟) | 10m |
|
||||
| url | 必填 | Git 仓库地址 | https://github.com/oam-dev/terraform-controller |
|
||||
| secretRef | 可选 | 存有拉取 Git 仓库所需凭证的 Secret 对象名,对 HTTP/S 基本鉴权 Secret 中必须包含 username 和 password 字段。对 SSH 形式鉴权必须包含 identity, identity.pub 和 known_hosts 字段 | sec-name |
|
||||
| timeout | 可选 | 下载操作的超时时间,默认 20s | 60s |
|
||||
| git.branch | 可选 | Git 分支,默认为 master | dev |
|
||||
| git.provider | 可选 | Git 客户端类型,默认为 GitHub(会使用 go-git 客户端),也可以为 AzureDevOps(会使用 libgit2 客户端) | GitHub |
|
||||
|
||||
**使用示例**
|
||||
|
||||
```yaml
|
||||
|
@ -88,6 +115,19 @@ spec:
|
|||
|
||||
## 监听镜像仓库
|
||||
|
||||
### 参数说明
|
||||
|
||||
| 参数 | 是否可选 | 含义 | 例子 |
|
||||
| ------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------- |
|
||||
| image | 必填 | 表示监听的镜像地址 | oamdev/vela-core |
|
||||
| secretRef | 可选 | 表示关联的 secret。如果这是一个私有的镜像仓库,可以通过 `kubectl create secret docker-registry` 创建对应的镜像秘钥并相关联 | my-secret |
|
||||
| policy.alphabetical.order | 可选 | 表示用字母顺序来筛选最新的镜像。asc 会优先选择 `Z` 开头的镜像,desc 会优先选择 `A` 开头的镜像 | asc |
|
||||
| policy.numerical.order | 可选 | 表示用数字顺序来筛选最新的镜像。asc 会优先选择 `9` 开头的镜像,desc 会优先选择 `0` 开头的镜像 | asc |
|
||||
| policy.semver.range | 可选 | 表示在指定范围内找到最新的镜像 | '>=1.0.0 <2.0.0' |
|
||||
| filterTags.extract | 可选 | extract 允许从指定的正则表达式模式中提取 pattern | $timestamp |
|
||||
| filterTags.pattern | 可选 | pattern 是用于过滤镜像的正则表达式模式 pattern | '^master-[a-f0-9]' |
|
||||
| commitMessage | 可选 | 用于追加更新镜像时的提交信息 | 'Image: {{range .Updated.Images}}{{println .}}{{end}}' |
|
||||
|
||||
|
||||
**使用示例**
|
||||
|
||||
|
|
File diff suppressed because it is too large
|
@ -2,11 +2,11 @@
|
|||
title: 内置运维特征类型
|
||||
---
|
||||
|
||||
This documentation will walk through the built-in traits.
|
||||
本文档将展示所有内置运维特征的参数列表。
|
||||
|
||||
## gateway
|
||||
|
||||
The `gateway` trait exposes a component to public Internet via a valid domain.
|
||||
`gateway` 运维特征通过一个可用的域名在公网暴露一个组件的服务。
|
||||
|
||||
### 作用的 Component 类型
|
||||
|
||||
|
@ -14,12 +14,12 @@ The `gateway` trait exposes a component to public Internet via a valid domain.
|
|||
|
||||
### 参数说明
|
||||
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
| ----------- | -------------------------------------------------------------------------------------------------- | -------------- | -------- | ------- |
|
||||
| http | Specify the mapping relationship between the http path and the workload port | map[string]int | true | |
|
||||
| class | Specify the class of ingress to use | string | true | nginx |
|
||||
| classInSpec | Set ingress class in '.spec.ingressClassName' instead of 'kubernetes.io/ingress.class' annotation. | bool | false | false |
|
||||
| domain | Specify the domain you want to expose | string | true | |
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
| ----------- | -------------------------------------------------- | -------------- | -------- | ------- |
|
||||
| http | 定义一组网关路径到 Pod 服务端口的映射关系 | map[string]int | true | |
|
||||
| class | 所使用的 kubernetes ingress class | string | true | nginx |
|
||||
| classInSpec | 在 kubernetes ingress 的 '.spec.ingressClassName' 定义 ingress class 而不是在 'kubernetes.io/ingress.class' 注解中定义 | bool | false | false |
|
||||
| domain | 暴露服务所绑定的域名 | string | true | |
|
||||
|
||||
### 样例
|
||||
```yaml
|
||||
|
@ -43,58 +43,6 @@ spec:
|
|||
"/": 8000
|
||||
```
|
||||
|
||||
|
||||
## rollout
|
||||
|
||||
Rollout Trait performs a rolling update on Component.
|
||||
|
||||
### 作用的 Component 类型
|
||||
|
||||
* [webservice](../components/cue/webservice)
|
||||
* worker
|
||||
* clonset
|
||||
|
||||
### 参数说明
|
||||
|
||||
灰度发布运维特征的所有配置项
|
||||
|
||||
| 名称 | 描述 | 类型 | 是否必须 | 默认值 |
|
||||
| -------------- | ------------ | ---------------- | -------- | -------------------------------------- |
|
||||
| targetRevision | 目标组件版本 | string | 否 | 当该字段为空时,一直指向组件的最新版本 |
|
||||
| targetSize | 目标副本个数 | int | 是 | 无 |
|
||||
| rolloutBatches | 批次发布策略 | rolloutBatch数组 | 是 | 无 |
|
||||
| batchPartition | 发布批次 | int | 否 | 无,缺省为发布全部批次 |
|
||||
|
||||
rolloutBatch的属性
|
||||
|
||||
| 名称 | 描述 | 类型 | 是否必须 | 默认值 |
|
||||
| -------- | -------------- | ---- | -------- | ------ |
|
||||
| replicas | 批次的副本个数 | int | 是 | 无 |
|
||||
|
||||
|
||||
### 样例
|
||||
|
||||
```yaml
|
||||
apiVersion: core.oam.dev/v1beta1
|
||||
kind: Application
|
||||
metadata:
|
||||
name: rollout-trait-test
|
||||
spec:
|
||||
components:
|
||||
- name: express-server
|
||||
type: webservice
|
||||
externalRevision: express-server-v1
|
||||
properties:
|
||||
image: stefanprodan/podinfo:4.0.3
|
||||
traits:
|
||||
- type: rollout
|
||||
properties:
|
||||
targetSize: 5
|
||||
rolloutBatches:
|
||||
- replicas: 2
|
||||
- replicas: 3
|
||||
```
|
||||
|
||||
## Scaler
|
||||
|
||||
`scaler` 为组件配置副本数。
|
||||
|
@ -107,15 +55,9 @@ spec:
|
|||
|
||||
### 参数说明
|
||||
|
||||
```
|
||||
$ vela show scaler
|
||||
# Properties
|
||||
+----------+--------------------------------+------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+----------+--------------------------------+------+----------+---------+
|
||||
| replicas | Specify the number of workload | int | true | 1 |
|
||||
+----------+--------------------------------+------+----------+---------+
|
||||
```
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
| -------- | -------------- | ---- | -------- | ------ |
|
||||
| replicas | 工作负载的 Pod 个数 | int | true | 1 |
|
||||
|
||||
### 样例
|
||||
|
||||
|
@ -150,18 +92,11 @@ spec:
|
|||
|
||||
### 参数说明
|
||||
|
||||
|
||||
```
|
||||
$ vela show cpuscaler
|
||||
# Properties
|
||||
+---------+---------------------------------------------------------------------------------+------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+---------+---------------------------------------------------------------------------------+------+----------+---------+
|
||||
| min | Specify the minimal number of replicas to which the autoscaler can scale down | int | true | 1 |
|
||||
| max | Specify the maximum number of of replicas to which the autoscaler can scale up | int | true | 10 |
|
||||
| cpuUtil | Specify the average cpu utilization, for example, 50 means the CPU usage is 50% | int | true | 50 |
|
||||
+---------+---------------------------------------------------------------------------------+------+----------+---------+
|
||||
```
|
||||
| -------- | -------------- | ---- | -------- | ------ |
|
||||
| min | 能够将工作负载缩容到的最小副本个数 | int | true | 1 |
|
||||
| max | 能够将工作负载扩容到的最大副本个数 | int | true | 10 |
|
||||
| cpuUtil | 每个容器的平均 CPU 利用率 例如, 50 意味者 CPU 利用率为 50% | int | true | 50 |
|
||||
|
||||
### 样例
|
||||
|
||||
|
@ -200,14 +135,8 @@ spec:
|
|||
|
||||
### 参数说明
|
||||
|
||||
```
|
||||
$ vela show storage
|
||||
# Properties
|
||||
|
||||
## pvc
|
||||
+------------------+-------------+---------------------------------+----------+------------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+------------------+-------------+---------------------------------+----------+------------+
|
||||
| ---------------- | ----------- | ------------------------------- | -------- | ---------- |
|
||||
| name | | string | true | |
|
||||
| volumeMode | | string | true | Filesystem |
|
||||
| mountPath | | string | true | |
|
||||
|
@ -216,55 +145,46 @@ $ vela show storage
|
|||
| volumeName | | string | false | |
|
||||
| storageClassName | | string | false | |
|
||||
| resources | | [resources](#resources) | false | |
|
||||
| dataSourceRef | | [dataSourceRef](#dataSourceRef) | false | |
|
||||
| dataSource | | [dataSource](#dataSource) | false | |
|
||||
| dataSourceRef | | [dataSourceRef](#datasourceref) | false | |
|
||||
| dataSource | | [dataSource](#datasource) | false | |
|
||||
| selector | | [selector](#selector) | false | |
|
||||
+------------------+-------------+---------------------------------+----------+------------+
|
||||
|
||||
...
|
||||
#### emptyDir
|
||||
|
||||
## emptyDir
|
||||
+-----------+-------------+--------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-----------+-------------+--------+----------+---------+
|
||||
| --------- | ----------- | ------ | -------- | ------- |
|
||||
| name | | string | true | |
|
||||
| medium | | string | true | empty |
|
||||
| mountPath | | string | true | |
|
||||
+-----------+-------------+--------+----------+---------+
|
||||
|
||||
#### secret

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ----------- | ------------------------------------------------------ | -------- | ------- |
| name | | string | true | |
| defaultMode | | int | true | 420 |
| items | | [[]items](#items) | false | |
| mountPath | | string | true | |
| mountToEnv | | [mountToEnv](#mounttoenv) | false | |
| mountOnly | | bool | true | false |
| data | | map[string]{null|bool|string|bytes|{...}|[...]|number} | false | |
| stringData | | map[string]{null|bool|string|bytes|{...}|[...]|number} | false | |
| readOnly | | bool | true | false |

...

#### configMap

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ----------- | ------------------------------------------------------ | -------- | ------- |
| name | | string | true | |
| defaultMode | | int | true | 420 |
| items | | [[]items](#items) | false | |
| mountPath | | string | true | |
| mountToEnv | | [mountToEnv](#mounttoenv) | false | |
| mountOnly | | bool | true | false |
| data | | map[string]{null|bool|string|bytes|{...}|[...]|number} | false | |
| readOnly | | bool | true | false |

### Example

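Assuming the tables above document the `storage` trait, a minimal sketch of how these volume types are declared looks like the following; the application, component, and volume names here are illustrative, not from the original page:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: storage-app
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: crccheck/hello-world
      traits:
        - type: storage
          properties:
            # ephemeral scratch volume, backed by emptyDir
            emptyDir:
              - name: test1
                mountPath: /test/mount/emptydir
            # mount a secret as files; data values are base64-encoded
            secret:
              - name: test-secret
                mountPath: /test/mount/secret
```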
@ -319,7 +239,7 @@ spec:
## Labels

The `labels` trait allows us to attach labels to the Pods of a workload.

> Note: this trait is hidden in `VelaUX` by default; you can use it from the CLI.

@ -329,17 +249,9 @@ spec:
### Properties

```shell
vela show labels
```

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | ----------- | ----------------- | -------- | ------- |
| - | | map[string]string | true | |

They're all string key-value pairs.

### Example

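As a minimal sketch (the component name and image are illustrative), the trait is attached to a component like this:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: crccheck/hello-world
      traits:
        # each property entry becomes a label on the workload's Pods
        - type: labels
          properties:
            "release": "stable"
```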
@ -365,7 +277,7 @@ spec:
## Annotations

The `annotations` trait allows us to attach annotations to the Pods of a workload.

> Note: this trait is hidden in `VelaUX` by default; you can use it from the CLI.

@ -375,17 +287,9 @@ spec:
### Properties

```shell
vela show annotations
```

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | ----------- | ----------------- | -------- | ------- |
| - | | map[string]string | true | |

They're all string key-value pairs.

### Example

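As a minimal sketch (component name, image, and annotation value are illustrative), the trait is attached like this:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: myapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: crccheck/hello-world
      traits:
        # each property entry becomes an annotation on the workload's Pods
        - type: annotations
          properties:
            "description": "web application"
```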
@ -420,32 +324,25 @@ spec:
### Properties

```shell
vela show kustomize-patch
```

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | --------------------------------------------------------------- | --------------------- | -------- | ------- |
| patches | a list of StrategicMerge or JSON6902 patches to selected targets | [[]patches](#patches) | true | |

#### patches

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------ | ------------------------------------------------- | ----------------- | -------- | ------- |
| patch | Inline patch string, in YAML style | string | true | |
| target | Specify the target the patch should be applied to | [target](#target) | true | |

##### target

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------ | ----------- | ------ | -------- | ------- |
| name | | string | false | |
| group | | string | false | |
| version | | string | false | |
@ -453,8 +350,6 @@ vela show kustomize-patch
| namespace | | string | false | |
| annotationSelector | | string | false | |
| labelSelector | | string | false | |

### Example

@ -483,7 +378,7 @@ spec:
          labelSelector: "app=podinfo"
```

In this example, the `kustomize-patch` trait patches the content of all Pods with the label `app=podinfo`.

## kustomize-json-patch

@ -495,32 +390,28 @@ In this example, the `kustomize-patch` will patch the content for all Pods with
### Properties

```shell
vela show kustomize-json-patch
```

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | --------------------------- | ----------------------------- | -------- | ------- |
| patchesJson | A list of JSON6902 patches. | [[]patchesJson](#patchesjson) | true | |

#### patchesJson

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------ | ----------- | ----------------- | -------- | ------- |
| patch | | [patch](#patch) | true | |
| target | | [target](#target) | true | |

##### target

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------ | ----------- | ------ | -------- | ------- |
| name | | string | false | |
| group | | string | false | |
| version | | string | false | |
|
@ -528,18 +419,16 @@ vela show kustomize-json-patch
|
|||
| namespace | | string | false | |
|
||||
| annotationSelector | | string | false | |
|
||||
| labelSelector | | string | false | |
|
||||
+--------------------+-------------+--------+----------+---------+
|
||||
|
||||
|
||||
### patch
|
||||
+-------+-------------+--------+----------+---------+
|
||||
|
||||
##### patch
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-------+-------------+--------+----------+---------+
|
||||
| ----- | ----------- | ------ | -------- | ------- |
|
||||
| path | | string | true | |
|
||||
| op | | string | true | |
|
||||
| value | | string | false | |
|
||||
+-------+-------------+--------+----------+---------+
|
||||
```
|
||||
|
||||
|
||||
### 样例
|
||||
|
||||
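A sketch of how a JSON6902 patch might be attached, under the assumption that `patch` takes a list of `path`/`op`/`value` entries as in the tables above; the component, target names, and patched path are illustrative:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: bucket-app
spec:
  components:
    - name: podinfo
      type: kustomize
      # component properties elided; see the kustomize component docs
      traits:
        - type: kustomize-json-patch
          properties:
            patchesJson:
              - target:
                  version: v1
                  name: podinfo
                  labelSelector: "app=podinfo"
                # one JSON6902 operation: add an annotation
                patch:
                  - path: /metadata/annotations/key
                    op: add
                    value: value
```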
|
@ -577,25 +466,18 @@ spec:
|
|||
|
||||
### 参数说明
|
||||
|
||||
```shell
|
||||
vela show kustomize-json-patch
|
||||
```
|
||||
|
||||
```shell
|
||||
# Properties
|
||||
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
|
||||
| patchesStrategicMerge | a list of strategicmerge, defined as inline yaml objects. | [[]patchesStrategicMerge](#patchesStrategicMerge) | true | |
|
||||
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
|
||||
| --------------------- | --------------------------------------------------------- | ------------------------------------------------- | -------- | ------- |
|
||||
| patchesStrategicMerge | patchesStrategicMerge 列表 | [[]patchesStrategicMerge](#patchesStrategicMerge) | true | |
|
||||
|
||||
|
||||
## patchesStrategicMerge
|
||||
+-----------+-------------+--------------------------------------------------------+----------+---------+
|
||||
|
||||
#### patchesStrategicMerge
|
||||
|
||||
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
|
||||
+-----------+-------------+--------------------------------------------------------+----------+---------+
|
||||
| undefined | | map[string]{null|bool|string|bytes|{...}|[...]|number} | true | |
|
||||
```
|
||||
| -------- | -------------- | ---- | -------- | ------ |
|
||||
| | | map[string]{null|bool|string|bytes|{...}|[...]|number} | true | |
|
||||
|
||||
|
||||
### 样例
|
||||
|
||||
|
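A sketch of the trait in use, assuming the trait is named `kustomize-strategy-merge` and that each list entry is an inline Kubernetes object fragment merged into the target; the component and resource names are illustrative:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: bucket-app
spec:
  components:
    - name: podinfo
      type: kustomize
      # component properties elided; see the kustomize component docs
      traits:
        - type: kustomize-strategy-merge
          properties:
            patchesStrategicMerge:
              # inline YAML object merged into the matching Deployment
              - apiVersion: apps/v1
                kind: Deployment
                metadata:
                  name: podinfo
                spec:
                  template:
                    spec:
                      serviceAccountName: custom-service-account
```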
@ -625,7 +507,7 @@ spec:
## service-binding

The `service-binding` trait binds data from a Kubernetes `Secret` into the application container as environment variables.

### Apply To Component Types

@ -639,13 +521,13 @@ Service binding trait will bind data from Kubernetes `Secret` to the application
### Properties

| Name | Description | Type | Required | Default |
| ------------ | ------------- | ------------- | ------------- | ------------- |
| envMappings | The mapping of environment variables to secrets | map[string][KeySecret](#keysecret) | true | |

#### KeySecret

| Name | Description | Type | Required | Default |
| ------ | ----------- | ---- | -------- | -------- |
| key | if key is empty, the envMappings key is used instead | string | false | |
| secret | Kubernetes secret name | string | true | |

### Example

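A sketch of the binding, following the tables above; the secret name `db-conn-example` and its keys are illustrative, not from the original page:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: webapp
spec:
  components:
    - name: binding-test-comp
      type: webservice
      properties:
        image: crccheck/hello-world
      traits:
        - type: service-binding
          properties:
            envMappings:
              # env var DB_PASSWORD <- key "password" of secret db-conn-example
              DB_PASSWORD:
                secret: db-conn-example
                key: password
              # key omitted: the envMappings key "endpoint" is used as the secret key
              endpoint:
                secret: db-conn-example
```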
@ -675,7 +557,7 @@ spec:
## sidecar

The `sidecar` trait allows you to attach a sidecar container to the component.

### Apply To Component Types

@ -686,26 +568,19 @@ The `sidecar` trait allows you to attach a sidecar container to the component.
### Properties

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | ------------------------------------------ | --------------------- | -------- | ------- |
| name | Specify the name of the sidecar container | string | true | |
| cmd | Specify the commands run in the sidecar | []string | false | |
| image | Specify the image of the sidecar container | string | true | |
| volumes | Specify the shared volume paths | [[]volumes](#volumes) | false | |

#### volumes

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | ----------- | ------ | -------- | ------- |
| name | | string | true | |
| path | | string | true | |

### Example

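A sketch of a sidecar sharing a volume with the main container, following the tables above; the application, container, and volume names are illustrative:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: vela-app-with-sidecar
spec:
  components:
    - name: log-gen-worker
      type: worker
      properties:
        image: busybox
        cmd: ["/bin/sh", "-c", "while true; do date >> /var/log/date.log; sleep 1; done"]
        volumes:
          - name: varlog
            mountPath: /var/log
            type: emptyDir
      traits:
        # the sidecar tails the log file written by the main container
        - type: sidecar
          properties:
            name: count-log
            image: busybox
            cmd: ["/bin/sh", "-c", "tail -f /var/log/date.log"]
            volumes:
              - name: varlog
                path: /var/log
```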
@ -5,8 +5,8 @@ title: 快速安装
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

- To install KubeVela on an existing Kubernetes cluster, refer to the [advanced installation guide](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster).
- To upgrade KubeVela, refer to the [upgrade guide](./platform-engineers/advanced-install#upgrade).

## 1. Install VelaD

@ -14,7 +14,7 @@ import TabItem from '@theme/TabItem';
- VelaD inherits the capabilities of [K3s](https://rancher.com/docs/k3s/latest/en/quick-start/) and [k3d](https://k3d.io/), and packages all the artifacts KubeVela needs.
- VelaD can help you complete the installation in an offline environment.
- VelaD is currently only suitable for quick trials and for development and testing; for production installation, refer to the [advanced installation guide](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster).

### Prerequisites

@ -100,7 +100,7 @@ VelaUX 是需要登陆认证的,默认管理员账号为 `admin` 密码为 `Ve
velad uninstall
```

This command removes the environment installed by VelaD. For installations done in other custom ways, refer to the [KubeVela uninstallation guide](./platform-engineers/advanced-install#uninstall).

## Next Step

@ -11,6 +11,14 @@ title: 灰度发布和扩缩容
## How to Use

### Enable the Addon

Before using the rollout trait, enable the rollout addon with the following command:

```shell
vela addon enable rollout
```

### First Release

Apply the YAML below to create an application that contains a webservice component using the rollout trait, and specify the [component revision](../../end-user/version-control) name as express-server-v1. If you do not specify one, every modification to the component automatically produces a component revision (ControllerRevision), and the default revision name follows the pattern `<component-name>-<revision-number>`.

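A sketch of such an application, assuming the `externalRevision` field and the rollout trait's `targetSize`/`rolloutBatches` properties; the image and batch sizes are illustrative:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: rollout-demo
spec:
  components:
    - name: express-server
      type: webservice
      # pin the component revision name instead of the generated default
      externalRevision: express-server-v1
      properties:
        image: stefanprodan/podinfo:4.0.3
      traits:
        - type: rollout
          properties:
            targetSize: 5
            # release 2 replicas in the first batch, then the remaining 3
            rolloutBatches:
              - replicas: 2
              - replicas: 3
```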
@ -243,6 +251,7 @@ spec:
EOF
```

This rollout trait declares scaling up from the previous 5 replicas to the target of 7. You can also set `rolloutBatches` to control precisely how many replicas are added in each batch.
Note that there is a known issue when you omit the rollout strategy and scale more than twice in a row, so it is recommended to fill in the `rolloutBatches` field when scaling.

After scaling succeeds, check the resource status.
```shell
17
sidebars.js
@ -100,11 +100,11 @@ module.exports = {
'platform-engineers/system-operation/bootstrap-parameters',
'platform-engineers/advanced-install',
'platform-engineers/system-operation/vela-cli-image',
'platform-engineers/system-operation/migration-from-old-version'
],
},
{
'User Management': [
'how-to/dashboard/user/user',
'tutorials/sso',
],

@ -119,13 +119,13 @@ module.exports = {
],
},
{
'Cluster Management': [
'platform-engineers/system-operation/managing-clusters',
'how-to/dashboard/target/overview',
],
},
{
'Manage Config of Integration': [
'how-to/dashboard/config/dex-connectors',
'how-to/dashboard/config/helm-repo',
],

@ -133,6 +133,12 @@ module.exports = {
'how-to/cli/addon/addon',
'platform-engineers/system-operation/observability',
'platform-engineers/system-operation/performance-finetuning',
{
'UX Customization': [
'reference/ui-schema',
'reference/topology-rule',
],
},
],
},
{

@ -221,6 +227,7 @@ module.exports = {
items: [
'reference/addons/overview',
'reference/addons/velaux',
'reference/addons/rollout',
'reference/addons/fluxcd',
'reference/addons/terraform',
'reference/addons/ai',

@ -228,8 +235,6 @@ module.exports = {
],
},
'end-user/components/cloud-services/cloud-resources-list',
'reference/user-improvement-plan',
{
label: 'VelaUX API Doc',

@ -151,6 +151,16 @@ Start to test.
make e2e-test
```

### Debugging Locally with a Remote KubeVela Environment

To run vela-core locally for debugging while KubeVela is installed in a remote cluster:

- First, scale the replicas of `kubevela-vela-core` to 0 so that your local process can win the leader election of `controller-manager`, e.g. `kubectl scale deploy -n vela-system kubevela-vela-core --replicas=0`.
- Second, remove the `WebhookConfiguration`; otherwise an error like the following is reported when you apply an application with `vela-cli` or `kubectl`:
```shell
Internal error occurred: failed calling webhook 'validating.core.oam.dev.v1beta1.applications': Post "https://vela-core-webhook.vela-system.svc:443/validating-core-oam-dev-v1beta1-applications?timeout=10s"
```
- Finally, you can use the commands in the [Build](#build) and [Testing](#testing) sections above, such as `make run`, to code and debug on your local machine.

## Run VelaUX Locally

VelaUX is the UI console of KubeVela. It is also an addon, with its apiserver code in the `kubevela` repo and its frontend code in the `velaux` repo.

@ -2,157 +2,11 @@
title: FAQ
---

## What is the difference between KubeVela and project-X?

Refer to the [comparison details](https://kubevela.io/docs/#kubevela-vs-other-software).

## Compare to X

### What is the difference between KubeVela and Helm?

KubeVela is a platform builder tool for creating easy-to-use yet extensible application delivery and management systems on Kubernetes. KubeVela relies on Helm as a templating engine and package format for applications, but Helm is not the only templating module KubeVela supports: another first-class supported approach is CUE.

Also, KubeVela is by design a Kubernetes controller (i.e. it works on the server side); even for its Helm part, a Helm operator is installed.

## Issues

### Error: unable to create new content in namespace cert-manager because it is being terminated

Occasionally you might hit the issue below. It happens when the deletion of the previous KubeVela release hasn't completed.

```
vela install
```
```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
failed to create resource
helm.sh/helm/v3/pkg/kube.(*Client).Update.func1
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/kube/client.go:190
...
Error: failed to create resource: serviceaccounts "cert-manager-cainjector" is forbidden: unable to create new content in namespace cert-manager because it is being terminated
```

Take a break and try again in a few seconds.

```
vela install
```

```console
- Installing Vela Core Chart:
Vela system along with OAM runtime already exist.
Automatically discover capabilities successfully ✅ Add(0) Update(0) Delete(8)

TYPE        CATEGORY    DESCRIPTION
-task       workload    One-off task to run a piece of code or script to completion
-webservice workload    Long-running scalable service with stable endpoint to receive external traffic
-worker     workload    Long-running scalable backend worker without network endpoint
-autoscale  trait       Automatically scale the app following certain triggers or metrics
-metrics    trait       Configure metrics targets to be monitored for the app
-rollout    trait       Configure canary deployment strategy to release the app
-route      trait       Configure route policy to the app
-scaler     trait       Manually scale the app

- Finished successfully.
```

Then manually apply all WorkloadDefinition and TraitDefinition manifests to get all the capabilities back.

```
vela up -f charts/vela-core/templates/defwithtemplate
```

```console
traitdefinition.core.oam.dev/autoscale created
traitdefinition.core.oam.dev/scaler created
traitdefinition.core.oam.dev/metrics created
traitdefinition.core.oam.dev/rollout created
traitdefinition.core.oam.dev/route created
workloaddefinition.core.oam.dev/task created
workloaddefinition.core.oam.dev/webservice created
workloaddefinition.core.oam.dev/worker created
```
```
vela workloads
```
```console
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)

TYPE        CATEGORY    DESCRIPTION
+task       workload    One-off task to run a piece of code or script to completion
+webservice workload    Long-running scalable service with stable endpoint to receive external traffic
+worker     workload    Long-running scalable backend worker without network endpoint
+autoscale  trait       Automatically scale the app following certain triggers or metrics
+metrics    trait       Configure metrics targets to be monitored for the app
+rollout    trait       Configure canary deployment strategy to release the app
+route      trait       Configure route policy to the app
+scaler     trait       Manually scale the app

NAME        DESCRIPTION
task        One-off task to run a piece of code or script to completion
webservice  Long-running scalable service with stable endpoint to receive external traffic
worker      Long-running scalable backend worker without network endpoint
```

### Error: ScopeDefinition exists

Occasionally you might hit the issue below. It happens when an old OAM Kubernetes Runtime release exists, or you applied a `ScopeDefinition` before.

```
vela install
```
```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
...
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
```

Delete the `ScopeDefinition` "healthscopes.core.oam.dev" and try again.

```
kubectl delete ScopeDefinition "healthscopes.core.oam.dev"
```
```console
scopedefinition.core.oam.dev "healthscopes.core.oam.dev" deleted
```

```
vela install
```
```console
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 16:26:41.491426 +0800 CST m=+4.026069452
WARN: handle workload template `containerizedworkloads.core.oam.dev` failed: no template found, you will unable to use this workload capability
WARN: handle trait template `manualscalertraits.core.oam.dev` failed: no template found, you will unable to use this trait capability
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)

TYPE        CATEGORY    DESCRIPTION
+task       workload    One-off task to run a piece of code or script to completion
+webservice workload    Long-running scalable service with stable endpoint to receive external traffic
+worker     workload    Long-running scalable backend worker without network endpoint
+autoscale  trait       Automatically scale the app following certain triggers or metrics
+metrics    trait       Configure metrics targets to be monitored for the app
+rollout    trait       Configure canary deployment strategy to release the app
+route      trait       Configure route policy to the app
+scaler     trait       Manually scale the app

- Finished successfully.
```

### You have reached your pull rate limit

When you look into the logs of the Pod kubevela-vela-core, you may find the issue below.

@ -164,7 +18,7 @@ NAME READY STATUS RESTARTS AGE
kubevela-vela-core-f8b987775-wjg25 0/1 - 0 35m
```

>Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by
>authenticating and upgrading: https://www.docker.com/increase-rate-limit

You can use the GitHub container registry instead.

@ -173,165 +27,4 @@ You can use github container registry instead.
|
|||
docker pull ghcr.io/kubevela/kubevela/vela-core:latest
|
||||
```
|
||||
|
||||
### Warning: Namespace cert-manager exists
|
||||
|
||||
If you hit the issue as below, an `cert-manager` release might exist whose namespace and RBAC related resource conflict
|
||||
with KubeVela.
|
||||
|
||||
```
|
||||
vela install
|
||||
```
|
||||
```console
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
rendered manifests contain a resource that already exists. Unable to continue with install
|
||||
helm.sh/helm/v3/pkg/action.(*Install).Run
|
||||
/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
|
||||
...
|
||||
/opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
|
||||
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
|
||||
```
|
||||
|
||||
Try these steps to fix the problem.
|
||||
|
||||
- Delete release `cert-manager`
|
||||
- Delete namespace `cert-manager`
|
||||
- Install KubeVela again
|
||||
|
||||
```
|
||||
helm delete cert-manager -n cert-manager
|
||||
```
|
||||
```console
|
||||
release "cert-manager" uninstalled
|
||||
```
|
||||
```
|
||||
kubectl delete ns cert-manager
|
||||
```
|
||||
```console
|
||||
namespace "cert-manager" deleted
|
||||
```
|
||||
```
|
||||
vela install
|
||||
```
|
||||
```console
|
||||
- Installing Vela Core Chart:
|
||||
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
|
||||
Successfully installed the chart, status: deployed, last deployed time = 2020-12-04 10:46:46.782617 +0800 CST m=+4.248889379
|
||||
Automatically discover capabilities successfully ✅ (no changes)
|
||||
|
||||
TYPE CATEGORY DESCRIPTION
|
||||
task workload One-off task to run a piece of code or script to completion
|
||||
webservice workload Long-running scalable service with stable endpoint to receive external traffic
|
||||
worker workload Long-running scalable backend worker without network endpoint
|
||||
autoscale trait Automatically scale the app following certain triggers or metrics
|
||||
metrics trait Configure metrics targets to be monitored for the app
|
||||
rollout trait Configure canary deployment strategy to release the app
|
||||
route trait Configure route policy to the app
|
||||
scaler trait Manually scale the app
|
||||
- Finished successfully.
|
||||
```
|
||||
|
||||
### How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?
|
||||
|
||||
If you deploy some other services which will apply MutatingWebhookConfiguration mutating-webhook-configuration, installing
|
||||
KubeVela will hit the issue as below.
|
||||
|
||||
```console
- Installing Vela Core Chart:
install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
github.com/kubevela/kubevela/pkg/commands.InstallOamRuntime
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
github.com/kubevela/kubevela/pkg/commands.(*initCmd).run
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
github.com/kubevela/kubevela/pkg/commands.NewInstallCommand.func2
	/home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
github.com/spf13/cobra.(*Command).execute
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
	/home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
runtime.main
	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
runtime.goexit
	/opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
```

To fix this issue, upgrade the KubeVela CLI `vela` to a version higher than `v0.2.2` from [KubeVela releases](https://github.com/kubevela/kubevela/releases).

## Operating

### Autoscale: how to enable metrics server in various Kubernetes clusters?

Autoscale depends on the metrics server, so it has to be enabled in your Kubernetes cluster. Check whether the metrics server is enabled with the command `kubectl top nodes` or `kubectl top pods`.

If the output is similar to the following, the metrics server is enabled.

```shell
kubectl top nodes
```
```console
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cn-hongkong.10.0.1.237   288m         7%     5378Mi          78%
cn-hongkong.10.0.1.238   351m         8%     5113Mi          74%
```
```shell
kubectl top pods
```
```console
NAME                          CPU(cores)   MEMORY(bytes)
php-apache-65f444bf84-cjbs5   0m           1Mi
wordpress-55c59ccdd5-lf59d    1m           66Mi
```

Otherwise, you have to manually enable the metrics server in your Kubernetes cluster.

- ACK (Alibaba Cloud Container Service for Kubernetes)

Metrics server is already enabled.

- ASK (Alibaba Cloud Serverless Kubernetes)

Metrics server has to be enabled in the `Operations/Add-ons` section of the [Alibaba Cloud console](https://cs.console.aliyun.com/) as below.

![](/img/enable-metrics-server-in-ASK.jpg)

Please refer to the [metrics server debug guide](https://help.aliyun.com/document_detail/176515.html) if you hit more issues.

- Kind

Install the metrics server as below, or install the [latest version](https://github.com/kubernetes-sigs/metrics-server#installation).

```shell
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```

Also add the following part under `.spec.template.spec.containers` in the yaml file loaded by `kubectl edit deploy -n kube-system metrics-server`.

Note: this is just a workaround, not for production-level use.

```yaml
command:
- /metrics-server
- --kubelet-insecure-tls
```

- MiniKube

Enable it with the following command.

```shell
minikube addons enable metrics-server
```

Have fun [setting autoscale](../../extensions/set-autoscale) on your application.

---
title: Built-in Component Type
---

This documentation will walk through the built-in component types.

## Helm

### Parameters

| Parameters | Description | Example |
| --------------- | ----------- | ---------------------------------- |
| repoType | required, indicates where the chart is from | helm |
| pullInterval | optional, the interval to synchronize with the Helm repo, 5 minutes by default | 10m |
| url | required, Helm repo address, supports http/https | https://charts.bitnami.com/bitnami |
| secretRef | optional, the name of the Secret object that holds the credentials required to pull the repo. The username and password fields must be included in the HTTP/S basic authentication Secret. For TLS authentication, the Secret must contain a certFile / keyFile field and/or a caCert field. | sec-name |
| timeout | optional, timeout for pulling the repo index | 60s |
| chart | required, chart name | redis-cluster |
| version | optional, chart version, * by default | 6.2.7 |
| targetNamespace | optional, the namespace to install the chart into, decided by the chart itself by default | your-ns |
| releaseName | optional, release name after installation | your-rn |
| values | optional, overrides the Values.yaml in the chart, used for Helm rendering | |
| installTimeout | optional, timeout for the `helm install` operation, 10 minutes by default | 20m |

### Example

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-delivering-chart
spec:
  components:
    - name: redis-comp
      type: helm
      properties:
        chart: redis-cluster
        version: 6.2.7
        url: https://charts.bitnami.com/bitnami
        repoType: helm
```

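The `values` and `pullInterval` parameters from the table above can be combined with the same chart; a minimal sketch (the overridden value keys are hypothetical and depend on the chart's own Values.yaml):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-delivering-chart
spec:
  components:
    - name: redis-comp
      type: helm
      properties:
        chart: redis-cluster
        version: 6.2.7
        url: https://charts.bitnami.com/bitnami
        repoType: helm
        pullInterval: 10m      # re-sync with the repo every 10 minutes
        values:
          # hypothetical keys; check the chart's Values.yaml for real ones
          persistence:
            enabled: false
```
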
## Helm ( repoType: oss )

### Parameters

| Parameters | Description | Example |
| ---------- | ----------- | ------- |
| repoType | required, indicates where the chart is from | oss |
| pullInterval | optional, the interval to synchronize with the bucket, 5 minutes by default | 10m |
| url | required, the bucket's endpoint, without scheme | oss-cn-beijing.aliyuncs.com |
| secretRef | optional, the name of a Secret holding the credentials to read the bucket. The Secret must contain accesskey and secretkey fields | sec-name |
| timeout | optional, timeout of the download operation, 20s by default | 60s |
| chart | required, the chart storage path (key) | ./chart/podinfo-5.1.3.tgz |
| version | optional, has no effect for the OSS source | |
| targetNamespace | optional, the namespace to install the chart into, decided by the chart itself by default | your-ns |
| releaseName | optional, release name after installation | your-rn |
| values | optional, overrides the Values.yaml in the chart, used for Helm rendering | |
| oss.bucketName | required, bucket name | your-bucket |
| oss.provider | optional, generic or aws; fill in aws if the credential is obtained from AWS EC2, generic by default | generic |
| oss.region | optional, bucket region | |

### Example

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: bucket-app
spec:
  components:
    - name: bucket-comp
      type: helm
      properties:
        repoType: oss
        # required if the bucket is private
        secretRef: bucket-secret
        chart: ./chart/podinfo-5.1.3.tgz
        url: oss-cn-beijing.aliyuncs.com
        oss:
          bucketName: definition-registry
```

## Helm ( repoType: git )

### Parameters

| Parameters | Description | Example |
| ---------- | ----------- | ------- |
| repoType | required, indicates where the chart is from | git |
| pullInterval | optional, the interval to synchronize with the Git repo, 5 minutes by default | 10m |
| url | required, Git repo address | https://github.com/oam-dev/terraform-controller |
| secretRef | optional, the name of the Secret object that holds the credentials required to pull the Git repository. For HTTP/S basic authentication, the Secret must contain the username and password fields. For SSH authentication, the identity, identity.pub and known_hosts fields must be included | sec-name |
| timeout | optional, timeout of the download operation, 20s by default | 60s |
| chart | required, the chart storage path (key) | ./chart/podinfo-5.1.3.tgz |
| version | optional, has no effect for the Git source | |
| targetNamespace | optional, the namespace to install the chart into, decided by the chart itself by default | your-ns |
| releaseName | optional, release name after installation | your-rn |
| values | optional, overrides the Values.yaml in the chart, used for Helm rendering | |
| git.branch | optional, Git branch, master by default | dev |

### Example

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-delivering-chart
spec:
  components:
    - name: terraform-controller
      type: helm
      properties:
        repoType: git
        url: https://github.com/oam-dev/terraform-controller
        chart: ./chart
        git:
          branch: master
```

## Webservice

### Description

Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic.

### Parameters

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
cmd | Commands to run in the container | []string | false |
env | Define arguments by using environment variables | [[]env](#env) | false |
volumeMounts | | [volumeMounts](#volumemounts) | false |
labels | Specify the labels in the workload | map[string]string | false |
annotations | Specify the annotations in the workload | map[string]string | false |
image | Which image would you like to use for your service | string | true |
ports | Which ports do you want customer traffic sent to | [[]ports](#ports) | false |
imagePullPolicy | Specify image pull policy for your service. Should be "Always","Never" or "IfNotPresent". | string | false |
cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false |
memory | Specifies the attributes of the memory resource required for the container. | string | false |
volumes | Deprecated field, use volumeMounts instead. | [[]volumes](#volumes) | false |
livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessprobe) | false |
readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessprobe) | false |
imagePullSecrets | Specify image pull secrets for your service | []string | false |
hostAliases | Specify the hostAliases to add | [[]hostAliases](#hostaliases) | true |

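The core parameters above compose into an Application component; a minimal sketch (the application name, image, and values are illustrative, not from this document):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: nginx:1.21           # required: the container image
        imagePullPolicy: IfNotPresent
        cpu: "0.5"                  # half a CPU core
        memory: 512Mi
        env:
          - name: LOG_LEVEL
            value: info
```
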
#### readinessProbe

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false |
httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false |
tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false |
initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0
periodSeconds | How often, in seconds, to execute the probe. | int | true | 10
timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1
successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1
failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3


##### tcpSocket

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
port | The TCP socket within the container that should be probed to assess container health. | int | true |


##### httpGet

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true |
port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true |
httpHeaders | | [[]httpHeaders](#httpheaders) | false |

###### httpHeaders

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | | string | true |
value | | string | true |


##### exec

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true |


#### livenessProbe

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
hostAliases | Specify the hostAliases to add | [[]hostAliases](#hostaliases) | true |
exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false |
httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false |
tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false |
initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0
periodSeconds | How often, in seconds, to execute the probe. | int | true | 10
timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1
successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1
failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3

##### tcpSocket

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
port | The TCP socket within the container that should be probed to assess container health. | int | true |


##### httpGet

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true |
port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true |
httpHeaders | | [[]httpHeaders](#httpheaders) | false |

###### httpHeaders

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | | string | true |
value | | string | true |


##### exec

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true |


##### hostAliases

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
ip | | string | true |
hostnames | | []string | true |


#### volumes

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | | string | true |
mountPath | | string | true |
type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true |


#### ports

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------


#### volumeMounts

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
pvc | Mount PVC type volume | [[]pvc](#pvc) | false |
configMap | Mount ConfigMap type volume | [[]configMap](#configmap) | false |
secret | Mount Secret type volume | [[]secret](#secret) | false |
emptyDir | Mount EmptyDir type volume | [[]emptyDir](#emptydir) | false |
hostPath | Mount HostPath type volume | [[]hostPath](#hostpath) | false |

##### hostPath


#### env

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | Environment variable name | string | true |
value | The value of the environment variable | string | false |
valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valuefrom) | false |


##### valueFrom

Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretkeyref) | false |
configMapKeyRef | Selects a key of a config map in the pod's namespace | [configMapKeyRef](#configmapkeyref) | false |

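The `env` and `valueFrom` parameters above follow the standard Kubernetes environment sourcing shape; a minimal fragment (the Secret name and key are hypothetical):

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret      # hypothetical Secret in the pod's namespace
        key: password        # key inside that Secret
```
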
###### configMapKeyRef

## Worker

### Description

Describes long-running, scalable, containerized services that run at backend without a network endpoint to receive external traffic.

### Parameters

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| image | Which image would you like to use for your service | string | true | |
| imagePullPolicy | Specify image pull policy for your service. Should be "Always","Never" or "IfNotPresent". | string | false | |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| memory | Specifies the attributes of the memory resource required for the container. | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
| volumeMounts | | [volumeMounts](#volumemounts) | false | |
| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessprobe) | false | |
| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessprobe) | false | |
| imagePullSecrets | Specify image pull secrets for your service | []string | false | |

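The worker parameters above can be composed the same way as a webservice component; a minimal sketch (the application name, image, and command are hypothetical):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: queue-app
spec:
  components:
    - name: consumer
      type: worker
      properties:
        image: my-registry/consumer:v1          # hypothetical image
        cmd: ["/bin/consumer", "--queue", "jobs"]  # hypothetical command
        memory: 256Mi
```
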
### readinessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |

##### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |

#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |

##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| name | | string | true | |
| value | | string | true | |

##### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |

### livenessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |

#### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |

#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |

##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| name | | string | true | |
| value | | string | true | |

#### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |

#### volumes

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ----------- | ---- | -------- | ------- |
| name | | string | true | |
| mountPath | | string | true | |
| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |

#### env

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- |
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valuefrom) | false | |
#### valueFrom

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- |
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretkeyref) | true | |
#### secretKeyRef

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- |
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
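The `env`, `valueFrom`, and `secretKeyRef` fields above chain together. A minimal sketch — the component, Secret, and key names here are hypothetical:

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret   # hypothetical Secret in the pod's namespace
        key: password     # must be a valid key in that Secret
```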
### Examples

Describes jobs that run code or a script to completion.

### Parameters
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- |
| image | Which image would you like to use for your service | string | true | |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| memory | Specifies the attributes of the memory resource required for the container. | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
| volumeMounts | | [volumeMounts](#volumemounts) | false | |
| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessprobe) | false | |
| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessprobe) | false | |
| labels | Specify the labels in the workload | []string | false | |
| annotations | Specify the annotations in the workload | []string | false | |
| imagePullPolicy | Specify image pull policy for your service | string | false | |
| imagePullSecrets | Specify image pull secrets for your service | []string | false | |
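A minimal sketch of the properties above — the image and resource values here are hypothetical:

```yaml
properties:
  image: nginx:1.21           # hypothetical image
  cpu: "0.5"                  # half a CPU core
  memory: "512Mi"
  imagePullPolicy: IfNotPresent
```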
### readinessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |
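As a sketch of how the handlers above combine — the path, port, and header values here are hypothetical, and exactly one of `exec`, `httpGet`, or `tcpSocket` may be set:

```yaml
readinessProbe:
  httpGet:
    path: /healthz          # hypothetical health endpoint
    port: 8080
    httpHeaders:
      - name: X-Probe
        value: kubevela
  initialDelaySeconds: 5
  periodSeconds: 10
```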
#### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |
#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |
##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----- | ----------- | ------ | -------- | ------- |
| name | | string | true | |
| value | | string | true | |
#### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |
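The `exec` handler above can be sketched like this — the command and file path are hypothetical:

```yaml
exec:
  # each space-delimited token is a separate array element
  command: ["cat", "/tmp/healthy"]   # exit 0 means success; any other exit code means failure
```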
### livenessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |
#### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |
#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |
##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----- | ----------- | ------ | -------- | ------- |
| name | | string | true | |
| value | | string | true | |
#### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |
#### volumes

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- |
| name | | string | true | |
| mountPath | | string | true | |
| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |
#### env

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- |
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valuefrom) | false | |
#### valueFrom

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- |
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretkeyref) | true | |
#### secretKeyRef

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- |
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
### Examples

Describes cron jobs that run code or a script to completion.

### Parameters
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---------------- | ----------------------------------------------------------------------------------------- | --------------------------------- | -------- | ------- |
| image | Which image would you like to use for your service | string | true | |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| memory | Specifies the attributes of the memory resource required for the container. | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
| volumeMounts | | [volumeMounts](#volumemounts) | false | |
| livenessProbe | Instructions for assessing whether the container is alive. | [livenessProbe](#livenessprobe) | false | |
| readinessProbe | Instructions for assessing whether the container is in a suitable state to serve traffic. | [readinessProbe](#readinessprobe) | false | |
| labels | Specify the labels in the workload | []string | false | |
| annotations | Specify the annotations in the workload | []string | false | |
| imagePullPolicy | Specify image pull policy for your service | string | false | |
| imagePullSecrets | Specify image pull secrets for your service | []string | false | |
### readinessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |
#### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |
#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |
##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----- | ----------- | ------ | -------- | ------- |
| name | | string | true | |
| value | | string | true | |
#### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |
### livenessProbe

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------- | -------- | ------- |
| exec | Instructions for assessing container health by executing a command. Either this attribute or the httpGet attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the httpGet attribute and the tcpSocket attribute. | [exec](#exec) | false | |
| httpGet | Instructions for assessing container health by executing an HTTP GET request. Either this attribute or the exec attribute or the tcpSocket attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the tcpSocket attribute. | [httpGet](#httpget) | false | |
| tcpSocket | Instructions for assessing container health by probing a TCP socket. Either this attribute or the exec attribute or the httpGet attribute MUST be specified. This attribute is mutually exclusive with both the exec attribute and the httpGet attribute. | [tcpSocket](#tcpsocket) | false | |
| initialDelaySeconds | Number of seconds after the container is started before the first probe is initiated. | int | true | 0 |
| periodSeconds | How often, in seconds, to execute the probe. | int | true | 10 |
| timeoutSeconds | Number of seconds after which the probe times out. | int | true | 1 |
| successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | int | true | 1 |
| failureThreshold | Number of consecutive failures required to determine the container is not alive (liveness probe) or not ready (readiness probe). | int | true | 3 |
#### tcpSocket

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ------------------------------------------------------------------------------------- | ---- | -------- | ------- |
| port | The TCP socket within the container that should be probed to assess container health. | int | true | |
#### httpGet

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----------- | ------------------------------------------------------------------------------------- | ----------------------------- | -------- | ------- |
| path | The endpoint, relative to the port, to which the HTTP GET request should be directed. | string | true | |
| port | The TCP socket within the container to which the HTTP GET request should be directed. | int | true | |
| httpHeaders | | [[]httpHeaders](#httpheaders) | false | |
##### httpHeaders

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ----- | ----------- | ------ | -------- | ------- |
| name | | string | true | |
| value | | string | true | |
#### exec

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | --------------------------------------------------------------------------------------------------- | -------- | -------- | ------- |
| command | A command to be executed inside the container to assess its health. Each space delimited token of the command is a separate array element. Commands exiting 0 are considered to be successful probes, whilst all other exit codes are considered failures. | []string | true | |
#### volumes

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | ------------------------------------------------------------------- | ------ | -------- | ------- |
| name | | string | true | |
| mountPath | | string | true | |
| type | Specify volume type, options: "pvc","configMap","secret","emptyDir" | string | true | |
#### env

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| --------- | --------------------------------------------------------- | ----------------------- | -------- | ------- |
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valuefrom) | false | |
#### valueFrom

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------------ | ------------------------------------------------ | ----------------------------- | -------- | ------- |
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretkeyref) | true | |
#### secretKeyRef

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---- | ---------------------------------------------------------------- | ------ | -------- | ------- |
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
### Examples

```yaml
spec:
  schedule: "*/1 * * * *"
```
## Kustomize ( repoType: oss )

KubeVela's `kustomize` component lets you deliver YAML files and folders directly as component artifacts. Whether your YAML file/folder is stored in a Git repository or an OSS bucket, KubeVela can read and deliver it.
### Parameters

| Parameters | Description | Example |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------- |
| repoType | required, must be `oss`, indicating that the kustomize configuration comes from an OSS bucket | oss |
| pullInterval | optional, the interval at which to synchronize with the bucket and reconcile the result; the default value is 5m (5 minutes) | 10m |
| url | required, the bucket's endpoint, without the scheme | oss-cn-beijing.aliyuncs.com |
| secretRef | optional, the name of a Secret holding the credentials to read the bucket; the Secret must contain the `accesskey` and `secretkey` fields | sec-name |
| timeout | optional, the timeout of the download operation; the default is 20s | 60s |
| path | required, the directory containing the kustomization.yaml file, or the directory containing a set of YAML files (used to generate a kustomization.yaml) | ./prod |
| oss.bucketName | required, bucket name | your-bucket |
| oss.provider | optional, `generic` or `aws`; use `aws` if you obtain the credentials from AWS EC2; the default is `generic` | generic |
| oss.region | optional, bucket region | |

### Examples

Let's take a YAML folder component from an OSS bucket as an example. In the `Application` we will deliver a component named bucket-comp. The deployment files for the component are stored in the OSS bucket named definition-registry; the kustomization is fetched from the endpoint `oss-cn-beijing.aliyuncs.com` at the path `./app/prod/`.

1. (Optional) If your OSS bucket needs identity verification, create a Secret:

```shell
$ kubectl create secret generic bucket-secret --from-literal=accesskey=<your-ak> --from-literal=secretkey=<your-sk>
secret/bucket-secret created
```

2. Deploy it:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: bucket-app
spec:
  components:
    - name: bucket-comp
      type: kustomize
      properties:
        repoType: oss
        # If the bucket is private, you need to provide a secretRef
        secretRef: bucket-secret
        url: oss-cn-beijing.aliyuncs.com
        oss:
          bucketName: definition-registry
        path: ./app/prod/
```

## k8s-objects

### Parameters

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ------- | ---------------------------- | --------------------------- | -------- | ------- |
| objects | Kubernetes resource manifest | [[]K8s-Object](#k8s-object) | true | |

#### k8s-object

A plain Kubernetes resource manifest.
## Kustomize ( repoType: git )

### Parameters

| Parameters | Description | Example |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- |
| repoType | required, must be `git`, indicating that the kustomize configuration comes from a Git repository | git |
| pullInterval | optional, the interval at which to synchronize with the Git repository and reconcile the result; the default value is 5m (5 minutes) | 10m |
| url | required, Git repository address | https://github.com/oam-dev/terraform-controller |
| secretRef | optional, the name of the Secret that holds the credentials required to pull the Git repository. For HTTP/S basic authentication, the Secret must contain the `username` and `password` fields. For SSH authentication, it must contain the `identity`, `identity.pub` and `known_hosts` fields | sec-name |
| timeout | optional, the timeout of the download operation; the default is 20s | 60s |
| git.branch | optional, Git branch, `master` by default | dev |
| git.provider | optional, determines which Git client library to use. Defaults to `GitHub`, which uses go-git; `AzureDevOps` uses libgit2 | GitHub |
### Examples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: git-app
spec:
  components:
    - name: git-comp
      type: kustomize
      properties:
        repoType: git
        url: https://github.com/<path>/<to>/<repo>
        git:
          branch: master
          provider: GitHub
        path: ./app/dev/
```

A `k8s-objects` component delivers plain Kubernetes manifests directly, for example a Job:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-raw
spec:
  components:
    - name: myjob
      type: k8s-objects
      properties:
        objects:
          - apiVersion: batch/v1
            kind: Job
            metadata:
              name: pi
            spec:
              template:
                spec:
                  containers:
                    - name: pi
                      image: perl
                      command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
                  restartPolicy: Never
              backoffLimit: 4
```

**Override Kustomize**
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: bucket-app
spec:
  components:
    - name: bucket-comp
      type: kustomize
      properties:
        # ...omitted for brevity
        path: ./app/
```

## Kustomize ( Watch Image Registry )

### Parameters
| Parameter | Required | Description | Example |
| ------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| image | required | The image url | oamdev/vela-core |
| secretRef | optional | If it's a private image registry, use `kubectl create secret docker-registry` to create the secret | my-secret |
| policy.alphabetical.order | optional | Order specifies the sorting order of the tags. Given the letters of the alphabet as tags, ascending order would select Z, and descending order would select A | asc |
| policy.numerical.order | optional | Given the integer values from 0 to 9 as tags, ascending order would select 9, and descending order would select 0 | asc |
| policy.semver.range | optional | Range gives a semver range for the image tag; the highest version within the range that's a tag yields the latest image | '>=1.0.0 <2.0.0' |
| filterTags.extract | optional | Extract allows a capture group to be extracted from the specified regular expression pattern, useful before tag evaluation | $timestamp |
| filterTags.pattern | optional | Pattern specifies a regular expression pattern used to filter for image tags | '^master-[a-f0-9]' |
| commitMessage | optional | Extra content for the commit message | 'Image: {{range .Updated.Images}}{{println .}}{{end}}' |

### Example
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: image-app
spec:
  components:
    - name: image
      type: kustomize
      properties:
        imageRepository:
          image: <your image>
          secretRef: imagesecret
          filterTags:
            pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
            extract: '$ts'
          policy:
            numerical:
              order: asc
          commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"
```

The `scaler` trait allows you to change the replicas for the component.
### Parameters

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| -------- | ------------------------------ | ---- | -------- | ------- |
| replicas | Specify the number of workload | int | true | 1 |

### Examples

The `storage` trait allows you to manage storages for the component.
### Parameters

#### pvc

| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
| ---------- | ----------- | ------ | -------- | ---------- |
| name | | string | true | |
| volumeMode | | string | true | Filesystem |
| mountPath | | string | true | |
@ -160,23 +148,19 @@ $ vela show storage
| dataSourceRef    |             | [dataSourceRef](#dataSourceRef) | false    |            |
| dataSource       |             | [dataSource](#dataSource)       | false    |            |
| selector         |             | [selector](#selector)           | false    |            |
+------------------+-------------+---------------------------------+----------+------------+

...

## emptyDir
+-----------+-------------+--------+----------+---------+
|   NAME    | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
+-----------+-------------+--------+----------+---------+
| name      |             | string | true     |         |
| medium    |             | string | true     | empty   |
| mountPath |             | string | true     |         |
+-----------+-------------+--------+----------+---------+

## secret
+-------------+-------------+--------------------------------------------------------+----------+---------+
|    NAME     | DESCRIPTION |                          TYPE                          | REQUIRED | DEFAULT |
+-------------+-------------+--------------------------------------------------------+----------+---------+
| name        |             | string                                                 | true     |         |
| defaultMode |             | int                                                    | true     | 420     |
| items       |             | [[]items](#items)                                      | false    |         |
@ -186,14 +170,11 @@ $ vela show storage
| data        |             | map[string]{null|bool|string|bytes|{...}|[...]|number} | false    |         |
| stringData  |             | map[string]{null|bool|string|bytes|{...}|[...]|number} | false    |         |
| readOnly    |             | bool                                                   | true     | false   |
+-------------+-------------+--------------------------------------------------------+----------+---------+

...

## configMap
+-------------+-------------+--------------------------------------------------------+----------+---------+
|    NAME     | DESCRIPTION |                          TYPE                          | REQUIRED | DEFAULT |
+-------------+-------------+--------------------------------------------------------+----------+---------+
| name        |             | string                                                 | true     |         |
| defaultMode |             | int                                                    | true     | 420     |
| items       |             | [[]items](#items)                                      | false    |         |
@ -202,10 +183,6 @@ $ vela show storage
| mountOnly   |             | bool                                                   | true     | false   |
| data        |             | map[string]{null|bool|string|bytes|{...}|[...]|number} | false    |         |
| readOnly    |             | bool                                                   | true     | false   |
+-------------+-------------+--------------------------------------------------------+----------+---------+
```

### Examples
@ -270,15 +247,9 @@ spec:

### Parameters

```shell
$ vela show labels
# Properties
+-----------+-------------+-------------------+----------+---------+
|   NAME    | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
+-----------+-------------+-------------------+----------+---------+
| -         |             | map[string]string | true     |         |
+-----------+-------------+-------------------+----------+---------+
```

They are all string key-value pairs.
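A minimal sketch of attaching the `labels` trait (keys and values are illustrative):

```yaml
traits:
  - type: labels
    properties:
      release: "stable"    # illustrative label
      owner: "team-a"      # illustrative label
```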
@ -316,15 +287,9 @@ spec:

### Parameters

```shell
$ vela show annotations
# Properties
+-----------+-------------+-------------------+----------+---------+
|   NAME    | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
+-----------+-------------+-------------------+----------+---------+
| -         |             | map[string]string | true     |         |
+-----------+-------------+-------------------+----------+---------+
```

They are all string key-value pairs.
@ -361,32 +326,26 @@ Trait `kustomize-patch` will patch on the Kustomize component.

### Parameters

```shell
vela show kustomize-patch
```

```shell
# Properties
+---------+---------------------------------------------------------------+-----------------------+----------+---------+
|  NAME   |                          DESCRIPTION                          |         TYPE          | REQUIRED | DEFAULT |
+---------+---------------------------------------------------------------+-----------------------+----------+---------+
| patches | a list of StrategicMerge or JSON6902 patch to selected target | [[]patches](#patches) | true     |         |
+---------+---------------------------------------------------------------+-----------------------+----------+---------+

## patches
+--------+---------------------------------------------------+-------------------+----------+---------+
|  NAME  |                    DESCRIPTION                    |       TYPE        | REQUIRED | DEFAULT |
+--------+---------------------------------------------------+-------------------+----------+---------+
| patch  | Inline patch string, in yaml style                | string            | true     |         |
| target | Specify the target the patch should be applied to | [target](#target) | true     |         |
+--------+---------------------------------------------------+-------------------+----------+---------+

## target
+--------------------+-------------+--------+----------+---------+
|        NAME        | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
+--------------------+-------------+--------+----------+---------+
| name               |             | string | false    |         |
| group              |             | string | false    |         |
| version            |             | string | false    |         |
@ -394,8 +353,7 @@ vela show kustomize-patch
| namespace          |             | string | false    |         |
| annotationSelector |             | string | false    |         |
| labelSelector      |             | string | false    |         |
+--------------------+-------------+--------+----------+---------+
```

### Examples
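A short sketch of the `kustomize-patch` trait patching workloads selected by label; the selector and labels below are illustrative, following the StrategicMerge style described above:

```yaml
traits:
  - type: kustomize-patch
    properties:
      patches:
        - patch: |-
            apiVersion: v1
            kind: Pod
            metadata:
              name: not-used                         # placeholder; target selects the real objects
              labels:
                app.kubernetes.io/part-of: test-app  # illustrative label to add
          target:
            labelSelector: "app=podinfo"             # illustrative selector
```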
@ -436,32 +394,22 @@ You could use this trait in [JSON6902 format](https://kubectl.docs.kubernetes.io

### Parameters

```shell
vela show kustomize-json-patch
```

```shell
# Properties
+-------------+---------------------------+-------------------------------+----------+---------+
|    NAME     |        DESCRIPTION        |             TYPE              | REQUIRED | DEFAULT |
+-------------+---------------------------+-------------------------------+----------+---------+
| patchesJson | A list of JSON6902 patch. | [[]patchesJson](#patchesJson) | true     |         |
+-------------+---------------------------+-------------------------------+----------+---------+

## patchesJson
+--------+-------------+-------------------+----------+---------+
|  NAME  | DESCRIPTION |       TYPE        | REQUIRED | DEFAULT |
+--------+-------------+-------------------+----------+---------+
| patch  |             | [patch](#patch)   | true     |         |
| target |             | [target](#target) | true     |         |
+--------+-------------+-------------------+----------+---------+

## target
+--------------------+-------------+--------+----------+---------+
|        NAME        | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
+--------------------+-------------+--------+----------+---------+
| name               |             | string | false    |         |
| group              |             | string | false    |         |
| version            |             | string | false    |         |
@ -469,18 +417,15 @@ vela show kustomize-json-patch
| namespace          |             | string | false    |         |
| annotationSelector |             | string | false    |         |
| labelSelector      |             | string | false    |         |
+--------------------+-------------+--------+----------+---------+

## patch
+-------+-------------+--------+----------+---------+
| NAME  | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
+-------+-------------+--------+----------+---------+
| path  |             | string | true     |         |
| op    |             | string | true     |         |
| value |             | string | false    |         |
+-------+-------------+--------+----------+---------+
```

### Examples
@ -518,25 +463,16 @@ kustomize-strategy-merge trait provide strategy merge patch for kustomize compon

### Parameters

```shell
vela show kustomize-strategy-merge
```

```shell
# Properties
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
|         NAME          |                        DESCRIPTION                        |                       TYPE                        | REQUIRED | DEFAULT |
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+
| patchesStrategicMerge | a list of strategicmerge, defined as inline yaml objects. | [[]patchesStrategicMerge](#patchesStrategicMerge) | true     |         |
+-----------------------+-----------------------------------------------------------+---------------------------------------------------+----------+---------+

## patchesStrategicMerge
+-----------+-------------+--------------------------------------------------------+----------+---------+
|   NAME    | DESCRIPTION |                          TYPE                          | REQUIRED | DEFAULT |
+-----------+-------------+--------------------------------------------------------+----------+---------+
| undefined |             | map[string]{null|bool|string|bytes|{...}|[...]|number} | true     |         |
+-----------+-------------+--------------------------------------------------------+----------+---------+
```

### Examples
@ -575,7 +511,6 @@ Service binding trait will bind data from Kubernetes `Secret` to the application
* task
* cron-task

### Parameters

Name | Description | Type | Required | Default
@ -689,8 +624,6 @@ The `sidecar` trait allows you to attach a sidecar container to the component.

### Parameters

```console
# Properties
+---------+-----------------------------------------+-----------------------+----------+---------+
|  NAME   |               DESCRIPTION               |         TYPE          | REQUIRED | DEFAULT |
+---------+-----------------------------------------+-----------------------+----------+---------+
@ -700,15 +633,12 @@ The `sidecar` trait allows you to attach a sidecar container to the component.
| volumes | Specify the shared volume path          | [[]volumes](#volumes) | false    |         |
+---------+-----------------------------------------+-----------------------+----------+---------+

## volumes
+-----------+-------------+--------+----------+---------+
|   NAME    | DESCRIPTION |  TYPE  | REQUIRED | DEFAULT |
+-----------+-------------+--------+----------+---------+
| name      |             | string | true     |         |
| path      |             | string | true     |         |
+-----------+-------------+--------+----------+---------+
```

### Examples
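A brief sketch of attaching the `sidecar` trait with a shared volume (container name, image, and paths are illustrative):

```yaml
traits:
  - type: sidecar
    properties:
      name: sidecar-nginx          # illustrative sidecar container name
      image: nginx:1.21            # illustrative image
      volumes:
        - name: shared-data        # volume shared with the main container
          path: /var/log/shared    # illustrative mount path
```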
@ -1,10 +1,26 @@
---
title: Manage Clusters with UX
---

> This doc requires you to have [velaux](../../../reference/addons/velaux) installed.

> This document only applies to the UI.

## Manage Clusters

Currently, VelaUX supports managing two kinds of clusters:

* Connecting an existing Kubernetes cluster.
* Connecting an Alibaba Cloud ACK cluster.

Users with cluster management permissions can enter the cluster management page to add or detach managed clusters.

For ACK clusters, the platform saves some cloud information: Region, VPC, Dashboard Address, etc. When users use such a cluster to create a Target, the cloud information is automatically assigned to the Target, where cloud service applications can use it.

## Manage Delivery Target

To deploy application components into different places, VelaUX provides a new concept, **Delivery Target**, for users to manage their deploy destinations: not only clusters and namespaces, but also cloud provider information such as region, VPC, and so on.
@ -34,6 +50,8 @@ Now you can use the environent which was bound to the targets just created.

In the newly created application, you will see two targets contained in the workflow.

After you deploy this application, the component will be dispatched to both targets, into the specified namespace of each cluster.
@ -1,5 +1,5 @@
---
title: Project Management
---

Project provides a logical separation of applications, environments, and delivery targets. This is helpful when VelaUX is used by multiple teams. Project provides the following features:
@ -5,8 +5,8 @@ title: Installation
|
|||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
- For Installation from existing Kubernetes Cluster, please read the [advanced installation guide](./platform-engineers/advanced-install/#install-kubevela-with-existing-kubernetes-cluster).
|
||||
- For upgrading from existing KubeVela control plane, please read the [upgrade guide](./platform-engineers/advanced-install/#upgrade).
|
||||
- For Installation from existing Kubernetes Cluster, please read the [advanced installation guide](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster).
|
||||
- For upgrading from existing KubeVela control plane, please read the [upgrade guide](./platform-engineers/advanced-install#upgrade).
|
||||
|
||||
## 1. Install VelaD
|
||||
|
||||
|
@ -14,7 +14,7 @@ import TabItem from '@theme/TabItem';

- VelaD provides Kubernetes by leveraging [K3s](https://rancher.com/docs/k3s/latest/en/quick-start/) on Linux or [k3d](https://k3d.io/) in a Docker environment.
- KubeVela, all related images, and the `vela` command line are packaged together, which enables air-gapped installation.
- **VelaD suits local development and quick demos well, while we strongly recommend you to [install KubeVela with managed Kubernetes services](./platform-engineers/advanced-install#install-kubevela-with-existing-kubernetes-cluster) for production usage**.

### Prerequisites
@ -108,7 +108,7 @@ VelaUX need authentication. Default username is `admin` and the password is `Vel
velad uninstall
```

This command will clean up the KubeVela controllers along with the Kubernetes cluster; refer to [the advanced guide](./platform-engineers/advanced-install#uninstall) for more detailed steps.

## Next Step
@ -167,6 +167,8 @@ Please refer to [VelaUX Guide](../reference/addons/velaux).

## Upgrade

> If you're upgrading across multiple minor versions (e.g. from 1.2.x to 1.4.x), please refer to [version migration](./system-operation/migration-from-old-version) for more guidance.

### 1. Upgrade CLI

<Tabs
@ -1,21 +1,20 @@
---
title: Lifecycle of Managed Cluster
---

This section will introduce the lifecycle of managed clusters.

## Create clusters

KubeVela can generally adopt any Kubernetes cluster as a managed cluster. The control plane won't install anything into your managed cluster unless you enable addons there.

If you don't have any clusters, you can refer to [VelaD](https://github.com/kubevela/velad) to create one:

```
velad install --name <cluster-name> --cluster-only
```

## Join Clusters

You can simply join an existing cluster into KubeVela by specifying its kubeconfig, as below:
@ -32,7 +31,7 @@ $ vela cluster join hangzhou-1.kubeconfig --name hangzhou-1
$ vela cluster join hangzhou-2.kubeconfig --name hangzhou-2
```

## List clusters

After clusters are joined, you can list all clusters managed by KubeVela.
@ -47,7 +46,7 @@ cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true

> By default, the hub cluster where KubeVela is located is registered as the `local` cluster. You can use it like a managed cluster, except that you cannot detach or modify it.

## Label your cluster

You can also give labels to your clusters, which helps you select clusters for deploying applications.
@ -62,7 +61,7 @@ cluster-hangzhou-1 X509Certificate <ENDPOINT_HANGZHOU_1> true
cluster-hangzhou-2 X509Certificate <ENDPOINT_HANGZHOU_2> true region=hangzhou
```

## Detach your cluster

You can also detach a cluster if you do not want to use it anymore.
@ -72,10 +71,14 @@ $ vela cluster detach beijing

> It is dangerous to detach a cluster that is still in use. However, if you want to modify the held cluster credential, such as rotating certificates, it is possible to do so.

## Rename a cluster

This command renames a cluster managed by KubeVela:

```shell script
$ vela cluster rename cluster-prod cluster-production
```

## Next

- Manage clusters with the [UI console](../../how-to/dashboard/target/overview).
@ -0,0 +1,168 @@
---
title: Version Migration
---

This doc provides a migration guide from old versions to new ones without disturbing running workloads. However, since scenarios differ from each other, we strongly recommend you to test the migration in a simulation environment before migrating your production system.

## From v1.3.x to v1.4.x

> ⚠️ Note: You must upgrade to v1.3.x first before you upgrade to v1.4.x from version v1.2.x or older.

1. Upgrade the CRDs. Please make sure you upgrade the CRDs before upgrading the helm chart.

```
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.4/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
```

2. Upgrade your KubeVela chart:

```
helm repo add kubevela https://charts.kubevela.net/core
helm repo update
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.4.1 --wait
```

3. Download and upgrade to the corresponding CLI:

```
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.4.1
```

4. Upgrade VelaUX or other addons:

```
vela addon upgrade velaux --version 1.4.0
```

Please note that if you're using the terraform addon, you should upgrade it to version `1.0.6+` along with the vela-core upgrade, following these steps:

1. Upgrade vela-core to v1.3.4+; all existing Terraform-typed Applications won't be affected in this process.
2. Upgrade the `terraform` addon, otherwise newly provisioned Terraform-typed Applications won't succeed.
   - 2.1 Manually upgrade the CRD Configuration: https://github.com/oam-dev/terraform-controller/blob/v0.4.3/chart/crds/terraform.core.oam.dev_configurations.yaml.
   - 2.2 Upgrade the `terraform` addon to version `1.0.6+`.
## From v1.2.x to v1.3.x

1. Upgrade the CRDs. Please make sure you upgrade the CRDs before upgrading the helm chart.

```
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.3/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
```

2. Upgrade your KubeVela chart:

```
helm repo add kubevela https://charts.kubevela.net/core
helm repo update
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.3.6 --wait
```

3. Download and upgrade to the corresponding CLI:

```
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.3.6
```

4. Upgrade VelaUX or other addons:

```
vela addon upgrade velaux --version 1.3.6
```

Please note that if you're using the terraform addon, you should upgrade it to version `1.0.6+` along with the vela-core upgrade, following these steps:

1. Upgrade vela-core to v1.3.4+; all existing Terraform-typed Applications won't be affected in this process.
2. Upgrade the `terraform` addon, otherwise newly provisioned Terraform-typed Applications won't succeed.
   - 2.1 Manually upgrade the CRD Configuration: https://github.com/oam-dev/terraform-controller/blob/v0.4.3/chart/crds/terraform.core.oam.dev_configurations.yaml.
   - 2.2 Upgrade the `terraform` addon to version `1.0.6+`.
## From v1.1.x to v1.2.x

1. Check that the services are running normally

Make sure all your services are running normally before migration.

```
$ kubectl get all -n vela-system

NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubevela-cluster-gateway-5bff6d564d-rhkp7   1/1     Running   0          16d
pod/kubevela-vela-core-b67b87c7-9w7d4           1/1     Running   1          16d

NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubevela-cluster-gateway-service   ClusterIP   172.16.236.150   <none>        9443/TCP   16d
service/vela-core-webhook                  ClusterIP   172.16.54.195    <none>        443/TCP    284d

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubevela-cluster-gateway   1/1     1            1           16d
deployment.apps/kubevela-vela-core         1/1     1            1           284d
```

In addition, it's also necessary to check that all the KubeVela applications, including addons, are running normally.

2. Update the CRDs to v1.2.x

Update the CRDs in the cluster to v1.2.x. The CRD list is as follows; some of them can be omitted if you didn't have them before:

```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_applications.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_envbindings.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_healthscopes.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_manualscalertraits.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_policydefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_resourcetrackers.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_scopedefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_workflowstepdefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/release-1.2/charts/vela-core/crds/standard.oam.dev_rollouts.yaml
```

3. Execute the upgrade command

This step will upgrade the system to the new version:

``` shell
helm upgrade -n vela-system --install kubevela kubevela/vela-core --version 1.2.6 --wait
```

Upgrade the CLI to v1.2.x corresponding to the core version:

```
curl -fsSl https://kubevela.io/script/install.sh | bash -s 1.2.6
```

4. Enable addons

After the upgrade succeeds, users can use the following commands to enable addons if needed:

```shell
# View the list of addons
vela addon list
# Enable addon
vela addon enable <addon name>
```

⚠️ Note: This step is not required if the addon was already enabled and used in the pre-upgrade version.

5. Update Custom Definitions

Check whether your custom definitions work in the new version, and upgrade them if there are any issues.
If you haven't defined any, the normal upgrade process is complete!

6. Common questions for this migration

- Q: After upgrading from 1.1.x to 1.2.x, the application status becomes `workflowsuspending`, and using `vela workflow resume` doesn't work.
- A: There were changes to the resource tracker mechanism. Generally, you can delete the existing resourcetracker; after that, the `vela workflow resume` command works again.
- Q: Why does the status of my application become suspended after the upgrade?
- A: Don't worry if your application becomes suspended after the upgrade; this won't affect the running business application. It will become normal the next time you deploy the application. You can also manually change any annotation of the application to resolve it.
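A sketch of the resourcetracker cleanup mentioned in the first answer; the application name below is illustrative, and you should inspect the trackers before deleting anything:

```shell
# List resource trackers left over from the old version
kubectl get resourcetracker

# Delete the stale tracker for the affected application
# (tracker names are application-specific; verify before deleting)
kubectl delete resourcetracker my-app-default

# Then resume the suspended workflow
vela workflow resume my-app
```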
@ -1,5 +1,5 @@
---
title: FluxCD
---

This addon is built based on [FluxCD](https://fluxcd.io/).
@ -16,7 +16,6 @@ The following definitions will be enabled after the installation of fluxcd addon

|DEFINITION NAME |DEFINITION TYPE |DEFINITION DESCRIPTION|
| :----: | :----: | ---|
|config-helm-repository |ComponentDefinition |Config information to authenticate a helm chart repository|
|helm |ComponentDefinition |Helps to deploy a helm chart from a git repo, helm repo, or S3-compatible bucket|
|kustomize |ComponentDefinition |Helps to deploy a kustomize-style artifact, with GitOps capability to watch changes from a git repo or image registry|
|kustomize-json-patch |TraitDefinition |A list of JSON6902 patches to selected target|
@ -56,7 +55,7 @@ The following definitions will be enabled after the installation of fluxcd addon

| Parameters | Description | Example |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- |
| branch | optional, Git branch, master by default | your-branch |

#### Example
@ -132,7 +131,7 @@ spec:
|
|||
|
||||
| Parameters | Description | Example |
|
||||
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- |
|
||||
| repoType | required, The value of the Git. To indicate that kustomize configuration comes from the Git repository | oss |
|
||||
| repoType | required, indicates the type of repository, should be "helm","git" or "oss". | oss |
|
||||
| pullInterval | optional, the interval at which to sync with the Git repository and reconcile the Helm release. The default value is 5m (5 minutes) | 10m |
|
||||
| url | required, bucket's endpoint, no need to fill in with scheme | oss-cn-beijing.aliyuncs.com |
|
||||
| secretRef | optional, Save the name of a Secret, which is the credential to read the bucket. Secret contains accesskey and secretkey fields | sec-name |
|
||||
|
@ -173,7 +172,6 @@ spec:
|
|||
|
||||
1. If your kustomize style artifact is stored in oss, you can create the application by following these steps:
|
||||
|
||||
|
||||
(Optional) If your OSS bucket requires identity verification, create a Secret first:
|
||||
|
||||
```shell
|
||||
|
|
|
@ -9,4 +9,4 @@ There's an official addon registry (https://addons.kubevela.net) maintained by K
|
|||
* [Addon Cloud Resources](./terraform): Provide a bunch of addons to provision cloud resources for different cloud providers.
|
||||
* [Machine Learning Addon](./ai): Machine learning addon is divided into model-training addon and model-serving addon.
|
||||
* [Traefik](./traefik): Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.
|
||||
* [Rollout](./rollout): Provide the capability to rollout the application.
|
||||
|
||||
|
|
|
@ -11,6 +11,14 @@ The component supported for rollout is:
|
|||
|
||||
## How to
|
||||
|
||||
### Enable the rollout addon
|
||||
|
||||
Before using this trait, you must enable the `rollout` addon with the following command.
|
||||
|
||||
```shell
|
||||
vela addon enable rollout
|
||||
```
|
||||
|
||||
### First Deployment
|
||||
|
||||
Apply the Application YAML below, which includes a webservice-type workload with the Rollout Trait and [version control](../../end-user/version-control)
|
||||
|
@ -249,7 +257,9 @@ spec:
|
|||
targetSize: 7
|
||||
EOF
|
||||
```
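For reference, a complete Application of this shape might look like the following sketch (the application name, component name, and image are illustrative; `targetSize` and `rolloutBatches` follow the trait fields described in this guide):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: rollout-demo            # illustrative name
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world:v1   # illustrative image
        port: 8000
      traits:
        - type: rollout
          properties:
            targetSize: 7
            # set explicit batch sizes to avoid the known scale up/down issue
            rolloutBatches:
              - replicas: 3
              - replicas: 4
```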
|
||||
|
||||
This Rollout Trait means the workload will be scaled up to 7 replicas. You can also control the size of each batch by setting `rolloutBatches`.
|
||||
Notice: a known issue exists if you scale the workload up/down two or more times without setting `rolloutBatches`, so please set `rolloutBatches` when scaling up/down.
|
||||
|
||||
Check the status after the expansion has succeeded.
|
||||
```shell
|
||||
|
|
|
@ -1,22 +1,20 @@
|
|||
---
|
||||
title: Config resource relationship Rule
|
||||
title: Customize Resource Topology
|
||||
---
|
||||
|
||||
The topology graph of VelaUX can show the resource tree of an application. As shown in this picture.
|
||||
The resource topology graph of VelaUX can automatically show the resource tree of an application for any workloads including Helm charts and cloud resources.
|
||||
|
||||

|
||||
|
||||
## Mechanism
|
||||
|
||||
There have been some built-in rules in system to specify the relationship between two types of resource. System will search all children resources by these rules.
|
||||
By default, the connections in the resource graph rely on the [ownerReference mechanism](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/), but it's also configurable for CRDs that don't have specific ownerReferences. The controller will search all child resources for a node according to these rules.
|
||||
|
||||
For example, the built-in rules has defined the [Deployment's](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) children resource only can be [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/), so when show the children resource of one deployment Vela will only care about the replicaSet.
|
||||
Vela will list all replicaSet in the same namespace with deployment and filter out those ownerReference isn't this deployment.
|
||||
These rules can also reduce redundant resources for better performance. For example, one of the built-in rules has defined that a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) will only have [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) as its child, so when we draw the resource graph, it will only display resources whose type is ReplicaSet along with ownerReference info.
|
||||
|
||||
## Add more rules
|
||||
|
||||
## Add relationship rules
|
||||
|
||||
The built-in rules is limited, you can add a customized rule by create a configmap like this:
|
||||
The built-in rules are limited; you can add customized rules by creating a ConfigMap like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -36,7 +34,8 @@ data:
|
|||
kind: Pod
|
||||
```
|
||||
|
||||
First, this configmap should have the special label `"rules.oam.dev/resources": "true"` the data key `rules` defined a list of relationship rules.
|
||||
One relationship rule define what children type a parent can have.
|
||||
In this example above, the parent type is `Cloneset` in group `apps.kruise.io`, his children resource type is `v1/Pod`
|
||||
All customize rules specified in these configmaps would be merged with built-in rules and take effect in searching children resources.
|
||||
1. The ConfigMap must have the special label `"rules.oam.dev/resources": "true"`; the data key `rules` defines a list of relationship rules.
|
||||
2. A relationship rule defines what child types a parent can have.
|
||||
|
||||
In the example above, the parent type is `CloneSet` in group `apps.kruise.io`, and its child resource type is `v1/Pod`.
|
||||
All customized rules specified in these ConfigMaps will be merged with the built-in rules and take effect when searching for child resources.
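Putting these points together, a complete version of the rule ConfigMap might look like the following sketch (the `parentResourceType`/`childrenResourceType` key names and the metadata values are assumptions for illustration; only the label, the `rules` data key, and the `apps.kruise.io`/`CloneSet`/`v1/Pod` values are stated in this guide):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kruise-topology-rules   # illustrative name
  namespace: vela-system        # assumed namespace
  labels:
    "rules.oam.dev/resources": "true"   # required marker label
data:
  rules: |-
    - parentResourceType:
        group: apps.kruise.io
        kind: CloneSet
      childrenResourceType:
        - apiVersion: v1
          kind: Pod
```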
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: VelaUX Extension
|
||||
title: Customize UX of Definition
|
||||
---
|
||||
|
||||
VelaUX uses the UI-Schema specification to extend components, workflow steps, and operation and maintenance feature resources that take variable input parameters, achieving a more native UI experience.
|
||||
|
|
|
@ -207,13 +207,17 @@
|
|||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/platform-engineers/system-operation/vela-cli-image"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/platform-engineers/system-operation/migration-from-old-version"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"collapsed": true,
|
||||
"type": "category",
|
||||
"label": "User management",
|
||||
"label": "User Management",
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
|
@ -255,7 +259,7 @@
|
|||
{
|
||||
"collapsed": true,
|
||||
"type": "category",
|
||||
"label": "Manage resource",
|
||||
"label": "Cluster Management",
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
|
@ -270,7 +274,7 @@
|
|||
{
|
||||
"collapsed": true,
|
||||
"type": "category",
|
||||
"label": "Manage integration configs",
|
||||
"label": "Manage Config of Integration",
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
|
@ -293,6 +297,21 @@
|
|||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/platform-engineers/system-operation/performance-finetuning"
|
||||
},
|
||||
{
|
||||
"collapsed": true,
|
||||
"type": "category",
|
||||
"label": "UX Customization",
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/ui-schema"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/topology-rule"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
|
@ -504,6 +523,10 @@
|
|||
"type": "doc",
|
||||
"id": "version-v1.4/reference/addons/velaux"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/addons/rollout"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/addons/fluxcd"
|
||||
|
@ -526,14 +549,6 @@
|
|||
"type": "doc",
|
||||
"id": "version-v1.4/end-user/components/cloud-services/cloud-resources-list"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/ui-schema"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/topology-rule"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "version-v1.4/reference/user-improvement-plan"
|
||||
|
|