Docs: add some user guides (#924)

* Docs: add some user guides

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

* Docs: change some words

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

* Docs: change some titles

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

* Docs: change some titles

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

* Docs: change the traefik addon document

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
This commit is contained in:
barnettZQG 2022-08-20 10:34:56 +08:00 committed by GitHub
parent e8ba0dad6e
commit 928de09b8a
39 changed files with 2783 additions and 309 deletions

View File

@ -1,12 +1,17 @@
---
title: One-time Delivery (Working With Other Controllers)
---
By default, the KubeVela controller prevents configuration drift for applied resources by reconciling them routinely. This is useful if you want your application to keep its desired configuration and avoid unintentional changes by external modifiers.
However, sometimes you might want to use a KubeVela Application to dispatch and recycle jobs, while leaving those resources mutable after the workflow is finished. In this case, you can use the apply-once policy described below. Some use cases:
1. Working with HPA, which will change the deployment replicas.
2. Working with Istio, which will change the deployment containers.
3. Other controllers...
### How to use
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
@ -29,6 +34,39 @@ spec:
enable: true
```
In this case, if you change the replicas or other managed fields of the hello-world deployment after the application enters the running state, the change will not be reverted. On the contrary, if you disable the apply-once policy (the default), any changes to the replicas of the hello-world application will be brought back in the next reconcile loop.
### Specified fields once apply
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-app
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - name: apply-once
      type: apply-once
      properties:
        enable: true
        rules:
          - path:
              - spec.replicas
            selector:
              resourceTypes:
                - Deployment
```
In this case, only the `spec.replicas` field may be changed by other controllers, such as HPA. The selector supports the following rules: `componentNames`, `componentTypes`, `oamTypes`, `resourceNames`, `resourceTypes` and `traitTypes`.
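Selectors can be combined to narrow a rule further. Below is a minimal sketch (the component name `hello-world` is an assumption) where only `spec.replicas` of the Deployment generated by that component is allowed to drift:
```yaml
rules:
  - path:
      - spec.replicas        # only this field may be changed externally
    selector:
      componentNames:        # assumed component name, for illustration
        - hello-world
      resourceTypes:
        - Deployment
```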
The configuration drift check will be launched every 5 minutes after the application enters the running state or the suspending state without errors. You can configure the time by setting the `application-re-sync-period` bootstrap parameter in the controller. See [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) for details.

View File

@ -1,5 +1,5 @@
---
title: Canary Rollout
---
In this section, we will introduce how to perform a canary rollout of a container service.

View File

@ -1,11 +1,9 @@
---
title: Attaching Sidecar Container
---
The `sidecar` trait allows you to attach a sidecar container to the component.
> Note: the `sidecar` trait is hidden by default in `VelaUX`; you can use it via the CLI.
## How to use
In this Application, the component `log-gen-worker` and the sidecar share the data volume that saves the logs.

View File

@ -1,5 +1,5 @@
---
title: Connect Helm Repository
description: Configure a helm repository
---

View File

@ -1,5 +1,5 @@
---
title: Connect Image Registry
description: Configure an image registry
---

View File

@ -1,3 +0,0 @@
---
title: Overview
---

View File

@ -2,9 +2,10 @@
title: Automated Observability
---
Observability is critical for infrastructures and applications. Without an observability system, it is hard to identify what happens when a system breaks down. On the contrary, a strong observability system can not only provide confidence for operators but also help developers quickly locate performance bottlenecks or weak points in the whole system.
To help users build their own observability system, KubeVela provides a list of addons, including:
- prometheus-server: a server that records metrics in time series, with flexible queries supported.
- kube-state-metrics: a metrics collector for the Kubernetes system.
- node-exporter: a metrics collector for the running Kubernetes nodes.
@ -25,26 +26,31 @@ To enable the addon suites, you simply needs to run the `vela addon enable` comm
> If your KubeVela is in a multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
1. Install the kube-state-metrics addon
```shell
> vela addon enable kube-state-metrics
```
2. Install the node-exporter addon
```shell
> vela addon enable node-exporter
```
3. Install the prometheus-server
```shell
> vela addon enable prometheus-server
```
4. Install the grafana addon.
```shell
> vela addon enable grafana
```
5. Access your grafana through port-forward.
```shell
> kubectl port-forward svc/grafana -n o11y-system 8080:3000
```
@ -66,11 +72,14 @@ URL: http://localhost:8080/d/application-overview/kubevela-applications
<details>
The KubeVela Application dashboard shows the overview of the metadata for the application. It directly accesses the Kubernetes API to retrieve the runtime application information, and you can use it as an entrance.
---
The **Basic Information** section extracts key information into panels and gives you the most straightforward view of the current application.
---
The **Related Resource** section shows the resources that work together with the application itself, including the managed resources, the recorded ResourceTrackers and the revisions.
</details>
### Kubernetes Deployment
@ -84,14 +93,18 @@ URL: http://localhost:8080/d/deployment-overview/kubernetes-deployment
<details>
The Kubernetes Deployment dashboard gives you the detailed running status of the deployment.
---
The **Pods** panel shows the pods that the deployment itself is currently managing.
---
The **Replicas** panel shows how the number of replicas changes, which can be used to diagnose when and how your deployment shifted to an undesired state.
---
The **Pod** section includes details of resource usage (including CPU / Memory / Network / Storage), which can be used to identify whether a pod is facing resource pressure or making/receiving unexpected traffic.
</details>
### KubeVela System
@ -105,23 +118,27 @@ URL: http://localhost:8080/d/kubevela-system/kubevela-system
<details>
The KubeVela System dashboard gives you the running details of the KubeVela core modules, including the controller and the cluster-gateway. Other modules like velaux or prism are expected to be added in the future.
---
The **Computation Resource** section shows the usage of the core modules. It can be used to track whether there is a memory leak (the memory usage keeps increasing) or high pressure (the CPU usage is always very high). If the memory usage hits the resource limit, the corresponding module will be killed and restarted, which indicates a lack of computation resources. You should add more CPU/Memory for them.
---
The **Controller** section includes a wide range of panels which can help you diagnose the bottleneck of the KubeVela controller in your scenario.
The **Controller Queue** and **Controller Queue Add Rate** panels show the changes of the controller working queue. If the controller queue keeps increasing, it means there are too many applications or application changes in the system and the controller is unable to handle them in time, which indicates performance issues for the KubeVela controller. A temporary increase of the controller queue is tolerable, but a long-lasting one will lead to memory growth that will finally cause Out-Of-Memory problems.
The **Reconcile Rate** and **Average Reconcile Time** panels give you an overview of the controller status. If the reconcile rate is steady and the average reconcile time is reasonable (like under 500ms, depending on your scenario), your KubeVela controller is healthy. If the controller queue add rate is increasing but the reconcile rate does not go up, the controller queue will gradually grow and cause troubles. There are various cases in which your controller is unhealthy:
1. Reconcile is healthy but there are too many applications: you will find everything is okay except the controller queue metrics increasing. Check the CPU/Memory usage of the controller; you might need to add more computation resources.
2. Reconcile is not healthy due to too many errors: you will find lots of errors in the **Reconcile Rate** panel. This means your system is continuously facing process errors for applications, which could be caused by invalid application configurations or by unexpected errors while running workflows. Check the application details and see which applications are causing errors.
3. Reconcile is not healthy due to long reconcile times: check the **ApplicationController Reconcile Time** panel and see whether it is a common case (the average reconcile time is high) or only part of your applications have problems (the p95 reconcile time is high). For the former, it is usually caused by either insufficient CPU (CPU usage is high) or too many requests being rate-limited by kube-apiserver (check the **ApplicationController Client Request Throughput** and **ApplicationController Client Request Average Time** panels and see which resource requests are slow or excessive). For the latter, check which application is large and spends lots of time on reconciliation.
Sometimes you might need to refer to **ApplicationController Reconcile Stage Time** and see if some special reconcile stage is abnormal. For example, if GCResourceTrackers takes lots of time, there might be blockings for recycling resources in the KubeVela system.
---
The **Application** section shows the overview of the applications in your whole KubeVela system. It can be used to see the changes of application numbers and the used workflow steps. The **Workflow Initialize Rate** is an auxiliary panel showing how frequently new workflow executions are launched. The **Workflow Average Complete Time** further shows how much time it costs to finish the whole workflow.
</details>
@ -136,14 +153,18 @@ URL: http://localhost:8080/d/kubernetes-apiserver/kubernetes-apiserver
<details>
The Kubernetes APIServer dashboard helps you to see the most fundamental part of your Kubernetes system. If your Kubernetes APIServer is not running healthily, all controllers and modules in your Kubernetes system will be abnormal and unable to handle requests successfully. So it is important to make sure everything is fine in this dashboard.
---
The **Requests** section includes a series of panels showing the QPS and latency of various kinds of requests. Usually your APIServer could fail to respond if it is flooded by too many requests; in that case, you can see which type of requests is causing trouble.
---
The **WorkQueue** section shows the processing status of the Kubernetes APIServer. If the **Queue Size** is large, the number of requests has exceeded the processing capability of your Kubernetes APIServer.
---
The **Watches** section shows the number of watches in your Kubernetes APIServer. Compared to other types of requests, WATCH requests continuously consume computation resources in the APIServer, so it is helpful to keep the number of watches limited.
</details>
## Customization
@ -187,30 +208,31 @@ metadata:
namespace: o11y-system
spec:
components:
    - type: k8s-objects
      name: my-prom
      properties:
        objects:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: my-prom
              namespace: o11y-system
            data:
              my-recording-rules.yml: |
                groups:
                  - name: example
                    rules:
                      - record: apiserver:requests:rate5m
                        expr: sum(rate(apiserver_request_total{job="kubernetes-nodes"}[5m]))
  policies:
    - type: topology
      name: topology
      properties:
        clusterLabelSelector: {}
```
Then you need to add the `customConfig` parameter to the enabling process of the prometheus-server addon, like:
```shell
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
```
@ -239,6 +261,7 @@ data:
### Customize Grafana credential
If you want to change the default username and password for Grafana, you can run the following command
```shell
> vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
```
@ -270,22 +293,22 @@ metadata:
name: register-prometheus
spec:
components:
    - type: grafana-datasource
      name: my-prometheus
      properties:
        access: proxy
        basicAuth: false
        isDefault: false
        name: MyPrometheus
        readOnly: true
        withCredentials: true
        jsonData:
          httpHeaderName1: Authorization
          tlsSkipVerify: true
        secureJsonFields:
          httpHeaderValue1: <token of your prometheus access>
        type: prometheus
        url: <my-prometheus url>
```
For example, if you are using the Prometheus service on Alibaba Cloud (ARMS), you can go to the Prometheus setting page and find the access URL & access token.
@ -305,12 +328,12 @@ metadata:
name: register-grafana
spec:
components:
    - type: grafana-access
      name: my-grafana
      properties:
        name: my-grafana
        endpoint: <my-grafana url>
        token: <access token>
```
To get your grafana access, you can go into your Grafana instance and configure API keys.
@ -326,7 +349,7 @@ After the application is successfully dispatched, you can check the registration
```shell
> kubectl get grafana
NAME ENDPOINT CREDENTIAL_TYPE
default http://grafana.o11y-system:3000 BasicAuth
my-grafana https://grafana-rngwzwnsuvl4s9p66m.grafana.aliyuncs.com:80/ BearerToken
```
@ -359,4 +382,4 @@ Besides, the old observability addon relies on the fluxcd addon to install the w
## Future
KubeVela will integrate more observability addons in the future, such as logging and tracing addons. Community operators like [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) also provide alternative ways of managing observability applications, which are intended to be included in KubeVela addons as well. We also welcome more integrations through the KubeVela addon ecosystem.

View File

@ -4,6 +4,127 @@ title: Traefik
Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. You can use this addon as a cluster gateway or a microservices gateway.
## Install
```bash
vela addon enable traefik
```
### Visit Traefik dashboard by port-forward
Port forwarding works as a local proxy that lets you visit the Traefik dashboard on a local port.
```bash
vela port-forward -n vela-system addon-traefik
```
Expected output:
```bash
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Forward successfully! Opening browser ...
Handling connection for 9000
```
You can visit the Traefik dashboard at `http://127.0.0.1:9000/dashboard/`.
### Setup with Specified Service Type
If your cluster has cloud LoadBalancer available:
```bash
vela addon enable traefik serviceType=LoadBalancer
```
## How to use
1. Configure an HTTP domain for a component.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example
  namespace: e2e-test
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        ports:
          - port: 8000
            expose: true
      traits:
        - properties:
            domains:
              - example.domain.com
            rules:
              - path:
                  type: PathPrefix
                  value: /
                port: 8080
          type: http-route
```
2. Configure an HTTPS domain for a component.
You should first create a secret that includes the certificate.
```yaml
apiVersion: v1
type: Opaque
data:
  tls.crt: <BASE64>
  tls.key: <BASE64>
kind: Secret
metadata:
  annotations:
    config.oam.dev/alias: ""
    config.oam.dev/description: ""
  labels:
    config.oam.dev/catalog: velacore-config
    config.oam.dev/multi-cluster: "true"
    config.oam.dev/project: addons
    config.oam.dev/type: config-tls-certificate
    workload.oam.dev/type: config-tls-certificate
  name: example
```
The example application configuration:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-https
  namespace: e2e-test
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        ports:
          - port: 8000
            expose: true
      traits:
        - properties:
            domains:
              - example.domain.com
            rules:
              - path:
                  type: PathPrefix
                  value: /
                port: 8080
            secrets:
              - name: example
          type: https-route
```
## Definitions
### http-route(trait)
@ -120,39 +241,4 @@ This component definition is designed to manage the TLS certificate
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
cert | the certificate public key, base64 encoded | string | true |
key | the certificate private key, base64 encoded | string | true |

View File

@ -1,5 +1,5 @@
---
title: Access Application
---
After deploying the application, the next step is to publish your service.

View File

@ -0,0 +1,39 @@
---
title: Cloud Shell
---
The cloud shell feature helps you manage applications via CLI tools (vela, kubectl), for example:
* Creating applications in batches with YAML.
* Watching the logs of an application.
* Opening a shell TTY of an application instance.
* Uploading and downloading files from the shell environment.
* Others...
### Enable the addon
This operation requires addon management permission.
```bash
vela addon enable cloudshell
```
### Open the cloud shell environment
![open-cloud-shell](https://static.kubevela.net/images/1.5/cloud-shell.jpg)
On the top right of the UI, there is a cloud shell button. Users can open the cloud shell environment by clicking it. By default, all users have permission to open the cloud shell. If you want to disable this feature for some users, you can create a new platform role that has the `Disable CloudShell` permission and bind it to the users you want to restrict.
![cloud shell environment](https://static.kubevela.net/images/1.5/cloud-shell-environment.jpg)
After connecting to the cloud shell environment, you can execute `vela --help` to get the help message. In the console, users are automatically granted Kubernetes RBAC permissions based on their permissions in the UI. There are some rules:
1. If you have the `admin` permission, you are allowed to do anything in the cloud shell environment.
2. If you have the `deploy application` permission in some projects, you have all permissions on the Applications in the namespaces that belong to those projects.
3. If you have the `project view` permission in some projects, you only have read permission on the Applications in the namespaces that belong to those projects.
### Open the console of the application containers
![container](https://static.kubevela.net/images/1.5/container-console.jpg)
When you want to debug the application runtime, the container console can be pretty helpful. In KubeVela, this feature is one of the common use cases of the cloud shell. After clicking the button, a dialog window opens showing the command line; you can copy the commands and execute them in the cloud shell.

View File

@ -0,0 +1,304 @@
---
title: Custom Image Delivery
---
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before starting, you must have the platform manager's permission.
### Simplify the `webservice` type
The default [webservice](../end-user/components/references#webservice) component type has more than 10 properties. Maybe your developers only need to configure the image path and resource limits, while the team sets default values for the other properties.
If so, you could change the `webservice` definition.
1. Change the UI schema to hide some fields
> This approach only applies to UI users.
![image](https://static.kubevela.net/images/1.5/custom-ui-schema.jpg)
On the definition detail page, users can customize the UI schema to set up the UI forms. For example, if you want to hide the ExposeType field, you only need to set `disable` to `true`.
```yaml
...
- jsonKey: exposeType
  uiType: Select
  label: ExposeType
  disable: true
...
```
More references: [UI Schema](../reference/ui-schema)
2. Change the definition and remove or add some fields
If you want to completely remove or add some fields, you should edit the component definition.
> This approach requires knowledge of the CUE language.
```bash
vela def get webservice > custom-webservice.cue
```
Refer to the [CUE Basic](../platform-engineers/cue/basic) and [Component Definition](../platform-engineers/components/custom-component) documents to learn how to customize `custom-webservice.cue`.
After editing:
```bash
vela def apply custom-webservice.cue
```
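After the customized definition is applied, developers deploy with only the fields your team kept. A minimal sketch, assuming the team kept just the image and resource fields (the application and component names are made up):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app            # assumed name
spec:
  components:
    - name: demo            # assumed name
      type: webservice      # the customized definition
      properties:
        image: oamdev/hello-world   # image path
        cpu: "0.5"                  # resource limits
        memory: 512Mi
```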
### Create a new component type to deploy the WAR package
If your team deploys Java applications as WAR packages, you can create a new component type in KubeVela to deploy them.
```cue
"java-war": {
alias: ""
annotations: {}
attributes: {
workload: {
definition: {
apiVersion: "apps/v1"
kind: "Deployment"
}
type: "deployments.apps"
}
status: {
customStatus: #"""
ready: {
readyReplicas: *0 | int
} & {
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
}
message: "Ready:\(ready.readyReplicas)/\(context.output.spec.replicas)"
"""#
healthPolicy: #"""
ready: {
updatedReplicas: *0 | int
readyReplicas: *0 | int
replicas: *0 | int
observedGeneration: *0 | int
} & {
if context.output.status.updatedReplicas != _|_ {
updatedReplicas: context.output.status.updatedReplicas
}
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
if context.output.status.replicas != _|_ {
replicas: context.output.status.replicas
}
if context.output.status.observedGeneration != _|_ {
observedGeneration: context.output.status.observedGeneration
}
}
isHealth: (context.output.spec.replicas == ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas) && (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration == context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)
"""#
}
}
description: ""
labels: {}
type: "component"
}
template: {
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
replicas: parameter.replicas
selector: {
matchLabels: {
"app.oam.dev/component": context.name
}
}
template: {
metadata: {
labels: {
"app.oam.dev/name": context.appName
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
}
spec: {
initContainers: [{
name: "prepare-war"
image: "busybox"
if parameter["deployToRoot"] != _|_ {
if parameter["deployToRoot"] {
command: ["wget", "-O", "/usr/local/tomcat/webapps/ROOT.war", parameter["warURL"]]
}
}
if parameter["deployToRoot"] == _|_ {
command: ["wget", "-P", "/usr/local/tomcat/webapps/", parameter["warURL"]]
}
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
containers: [{
name: context.name
image: "tomcat:" + parameter["envVersion"]
if parameter["cpu"] != _|_ {
resources: {
limits: cpu: parameter.cpu
requests: cpu: parameter.cpu
}
}
if parameter["memory"] != _|_ {
resources: {
limits: memory: parameter.memory
requests: memory: parameter.memory
}
}
ports: [{
containerPort: 8080
name: "webapp"
}]
_envs: {
custom: *parameter["env"] | []
inner: [
if parameter["javaOpts"] != _|_ {
{
name: "JAVA_OPTS"
value: parameter.javaOpts
}
},
]
}
env: _envs.custom + _envs.inner
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
volumes: [{
name: "webapps"
emptyDir: {}
}]
}
}
}
}
outputs: {
services: {
kind: "Service"
apiVersion: "v1"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: "app.oam.dev/component": context.name
ports: [{
port: 8080
}]
type: "ClusterIP"
}
}
}
parameter: {
// +usage=The URL of the war package.
warURL: string
// +usage=Select a environment version([tomcat version]-[jdk version])
envVersion: *"8-jdk8" | "9-jdk8" | "10-jdk8" | "8-jdk11" | "9-jdk11" | "10-jdk11" | "8-jdk17" | "9-jdk17" | "10-jdk17"
// +usage=Specifies the number of replicas.
replicas: *1 | int
// +usage=Define arguments by using environment variables
env?: [...{
name: string
value?: string
}]
// +usage=Setting the Java Opts configuration.
javaOpts?: string
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
// +usage=Specifies the attributes of the memory resource required for the container.
memory?: =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
deployToRoot?: bool
}
}
```
Copy the definition and create a file `java-war.cue`, then:
```bash
vela def apply java-war.cue
```
Now, other developers could create the application with a war URL, for example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: nanoservice
  namespace: e2e-test
spec:
  components:
    - name: catalog
      properties:
        envVersion: 8-jdk8
        replicas: 1
        warURL: https://static.kubevela.net/example/java-example/nanoservice/catalog.war
      type: java-war
    - name: customer
      properties:
        envVersion: 8-jdk8
        replicas: 1
        warURL: https://static.kubevela.net/example/java-example/nanoservice/customer.war
      type: java-war
    - dependsOn:
        - catalog
        - customer
      name: order
      properties:
        env:
          - name: CATALOG_HOST
            value: catalog
          - name: CUSTOMER_HOST
            value: customer
        envVersion: 8-jdk8
        javaOpts: -Xms512m -Xmx512m -Xss256K
        replicas: 1
        warURL: https://static.kubevela.net/example/java-example/nanoservice/order.war
      traits:
        - properties:
            domains:
              - nanoservice.beijing.kubevela.net
            rules:
              - path:
                  type: PathPrefix
                  value: /order
                port: 8080
          type: http-route
      type: java-war
  policies:
    - name: e2e-test
      properties:
        clusters:
          - local
        namespace: e2e-test
      type: topology
  workflow:
    steps:
      - name: deploy2-e2e-test
        properties:
          policies:
            - e2e-test
        type: deploy
```
![java-app](https://static.kubevela.net/images/1.5/java-war.jpg)
This example includes three components, and the order service depends on the catalog and customer services. Developers only need to care about the WAR package URL and the Tomcat/JRE version, which are familiar to Java developers. They should upload the WAR package to a repository, such as JFrog, and get a download URL to assign to the `warURL` field.
In the same way, you could create component types to deploy JAR packages and other binary packages.

docs/tutorials/dry-run.md Normal file
View File

@ -0,0 +1,133 @@
---
title: Dry Run
---
Dry run is a very useful way to check the application configuration and the definitions. It renders the application and prints the output resources if the checks pass on the server side. If a definition (component or trait) is invalid, the dry run prints an error message.
> Policies and workflows cannot be dry-run currently; this is a known limitation.
Make sure the dry run passes when you do the following things:
* Create or update an application.
* Create or update the definition, and dry run some example applications.
### Dry run via CLI
Here is a simple application:
```yaml
kind: Application
apiVersion: core.oam.dev/v1beta1
metadata:
  name: webservice-app
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: oamdev/testapp:v1
        cmd: ["node", "server.js"]
        ports:
          - port: 8080
            expose: true
      traits:
        - type: scaler
          properties:
            replicas: 1
```
Copy it into a file `app.yaml`, then run:
```bash
vela dry-run app.yaml
```
The outputs:
```yaml
---
# Application(webservice-app) -- Component(frontend)
---

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    app.oam.dev/appRevision: ""
    app.oam.dev/component: frontend
    app.oam.dev/name: webservice-app
    app.oam.dev/namespace: default
    app.oam.dev/resourceType: WORKLOAD
    workload.oam.dev/type: webservice
  name: frontend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.oam.dev/component: frontend
  template:
    metadata:
      labels:
        app.oam.dev/component: frontend
        app.oam.dev/name: webservice-app
    spec:
      containers:
        - command:
            - node
            - server.js
          image: oamdev/testapp:v1
          name: frontend
          ports:
            - containerPort: 8080
              name: port-8080
              protocol: TCP

---
## From the auxiliary workload
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.oam.dev/appRevision: ""
    app.oam.dev/component: frontend
    app.oam.dev/name: webservice-app
    app.oam.dev/namespace: default
    app.oam.dev/resourceType: TRAIT
    trait.oam.dev/resource: webserviceExpose
    trait.oam.dev/type: AuxiliaryWorkload
  name: frontend
  namespace: default
spec:
  ports:
    - name: port-8080
      port: 8080
      targetPort: 8080
  selector:
    app.oam.dev/component: frontend
  type: ClusterIP
---
```
If we set the port's expose field to false, the output resources will not include the `Service`. If we set the replicas field to "1" (a string), the output will be an error message:
```bash
Error: validate application: ./app.yaml by dry-run: admission webhook "validating.core.oam.dev.v1beta1.applications" denied the request: field "schematic": Invalid value error encountered, cannot evaluate trait "scaler": invalid template of trait scaler after merge with parameter and context: parameter.replicas: conflicting values (*1 | int) and "1" (mismatched types int and string) (and 1 more errors).
```
This means the replicas field is of int type but we provided a string value, so this application configuration is not valid.
For more use cases, refer to the [dry-run command](../cli/vela_dry-run).
### Dry run via UI
![dry-run](https://static.kubevela.net/images/1.5/dry-run.jpg)
Click the `Deploy` button to open the workflow selector dialog. You can select a workflow (every workflow corresponds to an environment) and click the `DryRun` button to execute a dry run. If it passes, the result looks like this:
![dry-run-successfully](https://static.kubevela.net/images/1.5/dry-run-success.jpg)
The result report is different from the CLI: it contains the complete application configuration generated from the application metadata, which you can inspect. Different environments may have different application configurations.

View File

@ -0,0 +1,50 @@
---
title: Multi Environment Delivery
---
Environments logically represent your deployment targets (QA, Prod, etc.). You can add the same Environment to as many Targets as you need.
In KubeVela, an environment is bound to a namespace in the hub cluster, and the application CR is created in this namespace. If the application does not configure the target cluster and namespace (for example, via a topology policy), the pod and service resources are created, by default, in the same namespace as the environment.
If you want to deploy one application to multiple environments, you need to apply the application CR to multiple namespaces in the hub cluster. The UI provides a way to do this while managing only one application configuration.
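For illustration, a topology policy that sets the target explicitly looks like the sketch below (the cluster and namespace values are assumptions); without such a policy, the environment's namespace is used:
```yaml
policies:
  - name: topology-prod     # assumed name
    type: topology
    properties:
      clusters: ["local"]   # assumed target cluster
      namespace: prod       # assumed target namespace
```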
## Deploy the application to multi-environment
### 1. Create an environment
```bash
vela env init prod --namespace prod
```
You can create an environment via the CLI, but it does not sync to the UI until you create an application in this environment. A more elegant way is to create it directly in the UI.
![create-env](https://static.kubevela.net/images/1.5/create-env.jpg)
One environment can include multiple targets. An environment belongs to a project, and one project can include multiple environments. Only applications that belong to the project can bind its environments.
![environment](https://static.kubevela.net/images/1.5/environment.jpg)
### 2. Bind the application to an environment
You can select multiple environments when you create an application, or bind other environments at any time. The UI generates a workflow when the application is bound to an environment; the workflow includes some `deploy` type steps, with one step per target by default. You can add other steps, such as a `suspend` step between two `deploy` steps.
![env workflow](https://static.kubevela.net/images/1.5/env-workflow.jpg)
### 3. Set the different configuration
We often need different configurations for different environments, such as environment variables, replicas or resource limits. To achieve this, let's create some override policies. When we create an override policy, we need to select a workflow and its steps, which amounts to selecting the environment or targets. This way, the different configurations take effect only in the specified environment or targets (see the sketch below).
![override-policy](https://static.kubevela.net/images/1.5/override-policy.jpg)
Refer to [the override policy](../end-user/policies/references#override)
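As a rough sketch, an override policy that scales a component up only for one environment could look like this (the component name and replica count are assumptions):
```yaml
policies:
  - name: override-prod     # assumed name
    type: override
    properties:
      components:
        - name: frontend    # assumed component
          traits:
            - type: scaler
              properties:
                replicas: 5 # prod-only replica count
```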
### 4. Deploy
By default, every environment is deployed independently: when you deploy an application, you select a workflow, and that workflow deploys one environment. If you want to deploy to multiple environments in one flow, you can link the workflows.
For example, suppose you want to deploy the prod environment after the QA environment finishes deploying. You can create a trigger of the custom type for the prod environment and obtain a webhook URL. Then edit the workflow of the QA environment and add a webhook type step at the end that calls the trigger's webhook URL. Now the QA workflow will trigger the prod workflow, as sketched below.
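A hypothetical sketch of such a QA workflow, assuming a `webhook` step type is available and the URL placeholder is filled with the prod trigger's webhook URL:
```yaml
workflow:
  steps:
    - name: deploy-qa
      type: deploy
      properties:
        policies: ["qa"]                  # assumed policy name
    - name: trigger-prod
      type: webhook                       # assumed step type
      properties:
        url:
          value: https://<velaux-host>/api/v1/webhook/<token>   # placeholder
```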
## Deploy the helm chart to multi-environment
Some teams deploy applications to multiple environments through a Helm chart with multiple values files. You can also move this solution to KubeVela.
The operation is the same as in the previous section; the key point is to use different values files, as sketched below. Refer to [this section](./helm#specify-different-value-file).
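A rough sketch of a Helm component that picks a different values file per environment (the `valuesFiles` parameter and the repository URL are assumptions based on the fluxcd addon's helm component):
```yaml
components:
  - name: my-chart          # assumed name
    type: helm
    properties:
      repoType: helm
      url: https://charts.example.com    # assumed repository
      chart: my-chart
      version: 1.0.0
      valuesFiles:
        - values-prod.yaml  # swap per environment
```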

View File

@ -1,9 +1,9 @@
---
title: Deploy Container Image
description: Deploy the business application with KubeVela
---
In this section, we will introduce how to deploy a container-based application with KubeVela. The guide will run the whole process with the UI console, as it's quite the same as the [quick start](../quick-start) if you're using the CLI.
## Before starting

View File

@ -1,12 +1,17 @@
---
title: One-time Delivery (Working With Other Controllers)
---
By default, the KubeVela controller prevents configuration drift of the resources managed by an application by polling them continuously. If you want the resource states declared in the application to be final, this mechanism lets resources automatically recover to the previously declared state a short while after any external modification.
However, in some scenarios you may want the KubeVela application to be responsible only for deploying resources rather than maintaining them, i.e. to allow external modifications to change the target state of the resources. In that case, you can use the apply-once policy shown below; with it, resources will not be actively recovered after external modifications. This is very useful, for example, in the following cases:
1. Working with the HPA controller, which modifies the replicas field.
2. Working with Istio, which may inject new containers into the Deployment.
3. Working with other controllers that modify resource configurations.
### How to use
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
@ -31,4 +36,44 @@ spec:
In this example, if you modify the replica count of the hello-world Deployment after the application enters the running state, the application will not automatically change the replicas back to the declared state even if they differ, i.e. manual changes are accepted.
The configuration drift check runs every 5 minutes after the application enters the running or suspending state (without errors). You can change the interval by setting the `application-re-sync-period` bootstrap parameter of the controller. See [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) for more.
### Allow specified fields to change
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-app
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - name: apply-once
      type: apply-once
      properties:
        enable: true
        rules:
          - path:
              - spec.replicas
            selector:
              resourceTypes:
                - Deployment
```
In this example, only the replica count may be modified by other controllers, such as HPA. Ignore rules are configured through resource selectors and field paths. The supported selectors are:
* componentNames: select by component name
* componentTypes: select by component type
* oamTypes: select by the config type of the generated resource, e.g. COMPONENT or TRAIT
* resourceNames: select by resource name
* resourceTypes: select by resource type
* traitTypes: select by trait type
The configuration drift check runs every 5 minutes after the application enters the running or suspending state (without errors). You can change the interval by setting the `application-re-sync-period` bootstrap parameter of the controller. See [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) for more.

View File

@ -4,6 +4,130 @@ title: Traefik
Traefik is a modern, easy-to-use HTTP reverse proxy and load balancer for deploying microservices. You can use this addon as your cluster gateway or as the gateway of a microservice system.
## Install
```bash
vela addon enable traefik
```
### Visit the Traefik dashboard via port-forward
By default, Traefik is not directly accessible after installation. You can proxy it locally via port-forward; this is only suitable for debugging.
```bash
vela port-forward -n vela-system addon-traefik
```
Expected output:
```bash
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Forward successfully! Opening browser ...
Handling connection for 9000
```
You can visit the Traefik dashboard at `http://127.0.0.1:9000/dashboard/`.
### Set up the gateway traffic access mode
If you are using a cloud cluster, exposing Traefik through a LoadBalancer is the best option.
```bash
vela addon enable traefik serviceType=LoadBalancer
```
For a self-built cluster, choose the traffic access mode according to whether Traefik serves as the cluster gateway or the application gateway.
## How to use
1. Configure an HTTP domain for a component.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example
  namespace: e2e-test
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        ports:
          - port: 8000
            expose: true
      traits:
        - properties:
            domains:
              - example.domain.com
            rules:
              - path:
                  type: PathPrefix
                  value: /
                port: 8080
          type: http-route
```
2. Configure an HTTPS domain for a component.
First, prepare a certificate and create a Secret.
```yaml
apiVersion: v1
type: Opaque
data:
  tls.crt: <BASE64>
  tls.key: <BASE64>
kind: Secret
metadata:
  annotations:
    config.oam.dev/alias: ""
    config.oam.dev/description: ""
  labels:
    config.oam.dev/catalog: velacore-config
    config.oam.dev/multi-cluster: "true"
    config.oam.dev/project: addons
    config.oam.dev/type: config-tls-certificate
    workload.oam.dev/type: config-tls-certificate
  name: example
```
An example application configuration:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-https
  namespace: e2e-test
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        ports:
          - port: 8000
            expose: true
      traits:
        - properties:
            domains:
              - example.domain.com
            rules:
              - path:
                  type: PathPrefix
                  value: /
                port: 8080
            secrets:
              - name: example
          type: https-route
```
## XDefinitions
### http-route(trait)
@ -120,40 +244,3 @@ Traefik is a modern, easy-to-use HTTP reverse proxy and load balancer for deploying microservices.
------------ | ------------- | ------------- | ------------- | -------------
cert | the certificate public key, base64 encoded | string | true |
key | the certificate private key, base64 encoded | string | true |

View File

@ -0,0 +1,39 @@
---
title: Cloud Shell
---
The cloud shell feature helps you to manage the application via CLI(vela、kubectl). such as:
* Creating applications in batches with the YAML.
* Watch the logs of the application.
* Open the shell TTY of the application instance.
* Upload and download files from the shell environment.
* Others...
### Enable the addon
This operation needs to have the permission of the addon management.
```bash
vela addon enable cloudshell
```
### Open the cloud shell environment
![open-cloud-shell](https://static.kubevela.net/images/1.5/cloud-shell.jpg)
On the UI top right, there is a cloud shell button. Users can open the cloud shell environment by clicking it. By default, all users have permission to open the cloud shell. If you want to disable this feature for some users, you could create a new platform role that has `Disable CloudShell` permission. Bind this role to the user, who you want to disable.
![cloud shell environment](https://static.kubevela.net/images/1.5/cloud-shell-environment.jpg)
After connected to the cloud shell environment, you could exec `vela --help` to get the help message. In the console, users have some permissions(Kubernetes RBAC) that auto-grant base the permission in the UI. there are some rules:
1. If you have the `admin` permission, you are allowed to do anything in the cloud shell environment.
2. If you have the `deploy application` permission in the projects, you could have all permissions of the Application in the namespace that belongs to the project.
3. If you have the `project view` permission in the projects, you will only have the read permission of the Application in the namespace that belongs to the project.
### Open the console of the application containers
![container](https://static.kubevela.net/images/1.5/container-console.jpg)
When you want to debug the application runtime, using the container console could be pretty helpful. In KubeVela, this feature is one of the common use cases of the cloud shell. After clicking the button, you will open a dialog window and see the command line. You can copy the commands and execute them in the cloud shell.

View File

@ -0,0 +1,304 @@
---
title: Custom Image Delivery
---
If the default `webservice` component type is not suitable for your team, and you want to get a more simple way to deploy your business application. This guide will help you. Before, you must have the platform manager's permission.
### Simplify the `webservice` type
The default [webservice](../end-user/components/references#webservice) component type has more than 10 properties. Maybe your developer only needs to configure the image path and resource limits. For the other properties, the team could set the default values.
If so, you could change the `webservice` definition.
1. Change the UI schema to hide some fields
> This way is only suitable the UI users.
![image](https://static.kubevela.net/images/1.5/custom-ui-schema.jpg)
On the definition detail page, users could customize the UI schema to setting the UI forms. For example, if you want to hide the ExposeType field, only need to set the disable is `true`.
```yaml
...
- jsonKey: exposeType
uiType: Select
label: ExposeType
disable: true
...
```
More references: [UI Schema](../reference/ui-schema)
2. Change the definition and remove or add some fields
If you want to completely remove or add some fields, you should edit the component definition.
> This guide should learn the CUE language.
```bash
vela def get webservice > custom-webservice.cue
```
Refer to the [CUE Basic](../platform-engineers/cue/basic) and [Component Definition](../platform-engineers/components/custom-component) documents to learn how to custom the `custom-webservice.cue`.
After edit:
```bash
vela def apply custom-webservice.cue
```
### Create a new component type to deploy the war package
If your team uses the war package to deploy the Java application. In KubeVela you could create a new component type to deploy the War package.
```cue
"java-war": {
alias: ""
annotations: {}
attributes: {
workload: {
definition: {
apiVersion: "apps/v1"
kind: "Deployment"
}
type: "deployments.apps"
}
status: {
customStatus: #"""
ready: {
readyReplicas: *0 | int
} & {
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
}
message: "Ready:\(ready.readyReplicas)/\(context.output.spec.replicas)"
"""#
healthPolicy: #"""
ready: {
updatedReplicas: *0 | int
readyReplicas: *0 | int
replicas: *0 | int
observedGeneration: *0 | int
} & {
if context.output.status.updatedReplicas != _|_ {
updatedReplicas: context.output.status.updatedReplicas
}
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
if context.output.status.replicas != _|_ {
replicas: context.output.status.replicas
}
if context.output.status.observedGeneration != _|_ {
observedGeneration: context.output.status.observedGeneration
}
}
isHealth: (context.output.spec.replicas == ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas) && (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration == context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)
"""#
}
}
description: ""
labels: {}
type: "component"
}
template: {
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
replicas: parameter.replicas
selector: {
matchLabels: {
"app.oam.dev/component": context.name
}
}
template: {
metadata: {
labels: {
"app.oam.dev/name": context.appName
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
}
spec: {
initContainers: [{
name: "prepare-war"
image: "busybox"
if parameter["deployToRoot"] != _|_ {
if parameter["deployToRoot"] {
command: ["wget", "-O", "/usr/local/tomcat/webapps/ROOT.war", parameter["warURL"]]
}
}
if parameter["deployToRoot"] == _|_ {
command: ["wget", "-P", "/usr/local/tomcat/webapps/", parameter["warURL"]]
}
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
containers: [{
name: context.name
image: "tomcat:" + parameter["envVersion"]
if parameter["cpu"] != _|_ {
resources: {
limits: cpu: parameter.cpu
requests: cpu: parameter.cpu
}
}
if parameter["memory"] != _|_ {
resources: {
limits: memory: parameter.memory
requests: memory: parameter.memory
}
}
ports: [{
containerPort: 8080
name: "webapp"
}]
_envs: {
custom: *parameter["env"] | []
inner: [
if parameter["javaOpts"] != _|_ {
{
name: "JAVA_OPTS"
value: parameter.javaOpts
}
},
]
}
env: _envs.custom + _envs.inner
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
volumes: [{
name: "webapps"
emptyDir: {}
}]
}
}
}
}
outputs: {
services: {
kind: "Service"
apiVersion: "v1"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: "app.oam.dev/component": context.name
ports: [{
port: 8080
}]
type: "ClusterIP"
}
}
}
parameter: {
// +usage=The URL of the war package.
warURL: string
// +usage=Select a environment version([tomcat version]-[jdk version])
envVersion: *"8-jdk8" | "9-jdk8" | "10-jdk8" | "8-jdk11" | "9-jdk11" | "10-jdk11" | "8-jdk17" | "9-jdk17" | "10-jdk17"
// +usage=Specifies the number of replicas.
replicas: *1 | int
// +usage=Define arguments by using environment variables
env?: [...{
name: string
value?: string
}]
// +usage=Setting the Java Opts configuration.
javaOpts?: string
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
// +usage=Specifies the attributes of the memory resource required for the container.
memory?: =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
deployToRoot?: bool
}
}
```
Copy the definition and create a file `java-war.cue`, then:
```bash
vela def apply java-war.cue
```
Now, other developers could create the application with a war URL, for example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: nanoservice
namespace: e2e-test
spec:
components:
- name: catalog
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/catalog.war
type: java-war
- name: customer
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/customer.war
type: java-war
- dependsOn:
- catalog
- customer
name: order
properties:
env:
- name: CATALOG_HOST
value: catalog
- name: CUSTOMER_HOST
value: customer
envVersion: 8-jdk8
javaOpts: -Xms512m -Xmx512m -Xss256K
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/order.war
traits:
- properties:
domains:
- nanoservice.beijing.kubevela.net
rules:
- path:
type: PathPrefix
value: /order
port: 8080
type: http-route
type: java-war
policies:
- name: e2e-test
properties:
clusters:
- local
namespace: e2e-test
type: topology
workflow:
steps:
- name: deploy2-e2e-test
properties:
policies:
- e2e-test
type: deploy
```
![java-app](https://static.kubevela.net/images/1.5/java-war.jpg)
This example includes three components; the order service depends on the catalog and customer services. Developers only need to care about the war package URL and the Tomcat/JDK version, both of which are familiar to Java developers. Upload the war package to a repository such as JFrog, then assign its download URL to the `warURL` field.
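As a quick check, you can deploy this manifest with the CLI and watch the delivery progress. A minimal sketch, assuming the YAML above is saved as `nanoservice.yaml` (a hypothetical file name):
```bash
# Apply the application to the hub cluster.
vela up -f nanoservice.yaml

# Watch the workflow steps and component health of the delivery.
vela status nanoservice -n e2e-test
```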
In the same way, you could create component types to deploy Jar packages and other binary artifacts.
View File
@ -0,0 +1,133 @@
---
title: Dry Run
---
Dry run is a very useful way to check the application configuration and the definitions. It renders the application and prints the output resources that pass the server-side checks. If any definition (component or trait) is invalid, the dry run prints an error message.
> Policies and workflows cannot be dry-run currently; this is a known limitation.
Make sure the dry run passes whenever you:
* Create or update an application.
* Create or update a definition, then dry run some example applications.
### Dry run via CLI
There is a simple application:
```yaml
kind: Application
apiVersion: core.oam.dev/v1beta1
metadata:
name: webservice-app
spec:
components:
- name: frontend
type: webservice
properties:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
ports:
- port: 8080
expose: true
traits:
- type: scaler
properties:
replicas: 1
```
Copy it into a file named `app.yaml`, then run:
```bash
vela dry-run app.yaml
```
The output:
```yaml
---
# Application(webservice-app) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: WORKLOAD
workload.oam.dev/type: webservice
name: frontend
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app.oam.dev/component: frontend
template:
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
spec:
containers:
- command:
- node
- server.js
image: oamdev/testapp:v1
name: frontend
ports:
- containerPort: 8080
name: port-8080
protocol: TCP
---
## From the auxiliary workload
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: TRAIT
trait.oam.dev/resource: webserviceExpose
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
namespace: default
spec:
ports:
- name: port-8080
port: 8080
targetPort: 8080
selector:
app.oam.dev/component: frontend
type: ClusterIP
---
```
If we set the `expose` field of the port to `false`, the output resources will not include the `Service`. If we set the `replicas` field to the string `"1"`, the output will be an error message:
```bash
Error: validate application: ./app.yaml by dry-run: admission webhook "validating.core.oam.dev.v1beta1.applications" denied the request: field "schematic": Invalid value error encountered, cannot evaluate trait "scaler": invalid template of trait scaler after merge with parameter and context: parameter.replicas: conflicting values (*1 | int) and "1" (mismatched types int and string) (and 1 more errors).
```
This means the `replicas` field expects an integer but we provided a string, so the application configuration is invalid.
For more use cases, refer to the [Dry run command](../cli/vela_dry-run).
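The rendered output is useful beyond validation. A minimal sketch, assuming the `app.yaml` from above, that saves the manifests for review and optionally lets the API server validate them too:
```bash
# Render the application and keep the generated manifests for review.
vela dry-run app.yaml > rendered.yaml

# Optionally, ask the API server to validate the rendered manifests as well.
kubectl apply --dry-run=server -f rendered.yaml
```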
### Dry run via UI
![dry-run](https://static.kubevela.net/images/1.5/dry-run.jpg)
Click the `Deploy` button to open the workflow selector dialog. You can select a workflow (each workflow corresponds to an environment) and click the `DryRun` button to execute a dry run. If it passes, the result looks like this:
![dry-run-successfully](https://static.kubevela.net/images/1.5/dry-run-success.jpg)
The result report differs from the CLI output: it shows the complete application configuration generated from the application metadata, which you can inspect. Different environments may have different application configurations.
View File
@ -0,0 +1,50 @@
---
title: Multi Environment Delivery
---
Environments logically represent your deployment targets (QA, Prod, etc.). You can attach as many targets to an environment as you need.
In KubeVela, an environment is bound to a namespace in the hub cluster, and the application CR is created in this namespace. If the application does not configure a target cluster and namespace (for example, via a topology policy), the pod and service resources are created in the same namespace as the environment by default.
If you want to deploy one application to multiple environments, you need to apply the application CR to multiple namespaces in the hub cluster. With the UI, you only need to manage a single application configuration.
## Deploy the application to multi-environment
### 1. Create an environment
```bash
vela env init prod --namespace prod
```
You can create an environment via the CLI, but it will not be synced to the UI until you create an application in it. A more elegant way is to create it directly in the UI.
![create-env](https://static.kubevela.net/images/1.5/create-env.jpg)
One environment can include multiple targets. An environment belongs to a project, and one project can include multiple environments. Only the applications belonging to a project can be bound to its environments.
![environment](https://static.kubevela.net/images/1.5/environment.jpg)
### 2. Bind the application to an environment
You can select multiple environments when you create an application, or bind other environments at any time. The UI generates a workflow when the application is bound to an environment; it includes some `deploy` steps, one step per target by default. You can add other steps, such as a `suspend` step between two `deploy` steps.
![env workflow](https://static.kubevela.net/images/1.5/env-workflow.jpg)
### 3. Set the different configuration
We often need different configurations for different environments, such as environment variables, replicas, or resource limits. Let's create some override policies. When we create an override policy, we need to select a workflow and steps, which means selecting the environment or targets. This way, you can make different configurations take effect in a specific environment or target.
![override-policy](https://static.kubevela.net/images/1.5/override-policy.jpg)
Refer to [the override policy](../end-user/policies/references#override).
### 4. Deploy
By default, every environment is deployed independently. When you deploy an application, you select a workflow, which deploys one environment. If you want to deploy to multiple environments in one pass, you can link the workflows.
For example, suppose you want to deploy the prod environment after the QA environment finishes deploying. Create a custom-type trigger for the prod environment to get a webhook URL. Then edit the workflow of the QA environment and add a webhook-type step at the end that uses the trigger's webhook URL. Now the QA workflow will trigger the prod workflow.
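A minimal sketch of such a webhook step appended to the QA workflow is shown below; the URL is a placeholder for the trigger webhook copied from the UI, and the exact property names may differ across KubeVela versions:
```yaml
# Hypothetical final step of the QA environment workflow.
- name: trigger-prod
  type: webhook
  properties:
    url:
      value: https://<velaux-host>/api/v1/webhook/<prod-trigger-token>
```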
## Deploy the helm chart to multi-environment
Some teams deploy applications to multiple environments through a Helm chart with multiple values files. You can move this solution to KubeVela as well.
The operation is the same as in the previous section. The key point is to use different values files. Refer to [this section](./helm#specify-different-value-file)
View File
@ -1,12 +1,17 @@
---
title: One-time Delivery
title: One-time Delivery (Working With Other Controllers)
---
By default, the KubeVela controller prevents configuration drift by continuously reconciling the resources managed by an application. If you want the resource state declared in the application to be the final state, this mechanism restores resources to the previously declared state some time after they are changed by external modifiers.
However, in some scenarios, you may want a KubeVela application to only deploy resources rather than maintain them, i.e., to allow external modifications to change the target state of the resources. In this case, you can use the apply-once policy shown below. With this configuration, resources will not be actively restored after being modified externally.
However, in some scenarios, you may want a KubeVela application to only deploy resources rather than maintain them, i.e., to allow external modifications to change the target state of the resources. In this case, you can use the apply-once policy shown below. With this configuration, resources will not be actively restored after being modified externally. For example, it is very useful in the following cases:
1. Working with the HPA controller, which modifies the replicas field.
2. Working with Istio, which may inject new containers into the Deployment.
3. Other controllers that modify resource configurations.
### Allow all fields to change
### How to use
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
@ -31,4 +36,44 @@ spec:
In this example, if you modify the replica count of the hello-world Deployment after the application enters the running state, the application will not automatically restore it to the declared value even if the two differ; that is, manual changes are accepted.
The configuration drift check runs every 5 minutes after the application enters the running or suspending state (without errors). You can change the interval by setting the `application-re-sync-period` bootstrap parameter of the controller. See the [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) section for more details.
### Allow specified fields to change
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: apply-once-app
spec:
components:
- name: hello-world
type: webservice
properties:
image: oamdev/hello-world
traits:
- type: scaler
properties:
replicas: 3
policies:
- name: apply-once
type: apply-once
properties:
enable: true
rules:
- path:
- spec.replicas
selector:
resourceTypes:
- Deployment
```
In this example, only the replica count may be modified by other controllers, such as HPA. Ignore rules are configured through resource selectors and field paths. The supported selectors, combined in the sketch after this list, include:
* componentNames: select by component name
* componentTypes: select by component type
* oamTypes: select by the type of configuration that generated the resource, such as COMPONENT or TRAIT
* resourceNames: select by resource name
* resourceTypes: select by resource type
* traitTypes: select by trait type
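For example, to ignore replica drift only on the Deployment generated by one specific component, the selectors can be combined. A minimal sketch, assuming the component is named `hello-world`:
```yaml
rules:
  - path:
      - spec.replicas
    selector:
      componentNames:
        - hello-world
      resourceTypes:
        - Deployment
```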
The configuration drift check runs every 5 minutes after the application enters the running or suspending state (without errors). You can change the interval by setting the `application-re-sync-period` bootstrap parameter of the controller. See the [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) section for more details.
View File
@ -0,0 +1,39 @@
---
title: Cloud Shell
---
The cloud shell feature helps you manage applications via CLI tools (vela, kubectl), as sketched after this list. For example, you can:
* Create applications in batches with YAML.
* Watch the logs of an application.
* Open a shell TTY of an application instance.
* Upload and download files from the shell environment.
* And more...
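A typical session inside the cloud shell might look like the following sketch; the application and namespace names are placeholders:
```bash
# List the applications you are allowed to see across namespaces.
vela ls -A

# Stream the logs of one application.
vela logs my-app -n my-namespace

# Open a shell inside one of the application's containers.
vela exec my-app -n my-namespace -- /bin/sh
```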
### Enable the addon
This operation requires addon management permission.
```bash
vela addon enable cloudshell
```
### Open the cloud shell environment
![open-cloud-shell](https://static.kubevela.net/images/1.5/cloud-shell.jpg)
At the top right of the UI there is a cloud shell button; users can open the cloud shell environment by clicking it. By default, all users have permission to open the cloud shell. If you want to disable this feature for some users, create a new platform role that has the `Disable CloudShell` permission and bind it to the users you want to restrict.
![cloud shell environment](https://static.kubevela.net/images/1.5/cloud-shell-environment.jpg)
After connecting to the cloud shell environment, you can run `vela --help` to get the help message. In the console, users are automatically granted Kubernetes RBAC permissions based on their permissions in the UI. The rules are:
1. If you have the `admin` permission, you are allowed to do anything in the cloud shell environment.
2. If you have the `deploy application` permission in some projects, you have full permissions on the Applications in the namespaces that belong to those projects.
3. If you have the `project view` permission in some projects, you only have read permission on the Applications in the namespaces that belong to those projects.
### Open the console of the application containers
![container](https://static.kubevela.net/images/1.5/container-console.jpg)
When you want to debug the application runtime, the container console can be pretty helpful. In KubeVela, this feature is one of the common use cases of the cloud shell. After clicking the button, a dialog window opens and shows the command line. You can copy the commands and execute them in the cloud shell.
View File
@ -0,0 +1,304 @@
---
title: Custom Image Delivery
---
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before you start, you must have platform manager permission.
### Simplify the `webservice` type
The default [webservice](../end-user/components/references#webservice) component type has more than 10 properties. Maybe your developers only need to configure the image path and resource limits; the team can set default values for the other properties.
If so, you could change the `webservice` definition.
1. Change the UI schema to hide some fields
> This approach only works for UI users.
![image](https://static.kubevela.net/images/1.5/custom-ui-schema.jpg)
On the definition detail page, users can customize the UI schema to configure the UI forms. For example, if you want to hide the ExposeType field, you only need to set `disable` to `true`.
```yaml
...
- jsonKey: exposeType
uiType: Select
label: ExposeType
disable: true
...
```
More references: [UI Schema](../reference/ui-schema)
2. Change the definition and remove or add some fields
If you want to completely remove or add some fields, you should edit the component definition.
> This approach requires knowledge of the CUE language.
```bash
vela def get webservice > custom-webservice.cue
```
Refer to the [CUE Basic](../platform-engineers/cue/basic) and [Component Definition](../platform-engineers/components/custom-component) documents to learn how to customize `custom-webservice.cue`.
After editing:
```bash
vela def apply custom-webservice.cue
```
### Create a new component type to deploy the war package
If your team deploys Java applications as war packages, you can create a new component type in KubeVela to deploy them.
```cue
"java-war": {
alias: ""
annotations: {}
attributes: {
workload: {
definition: {
apiVersion: "apps/v1"
kind: "Deployment"
}
type: "deployments.apps"
}
status: {
customStatus: #"""
ready: {
readyReplicas: *0 | int
} & {
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
}
message: "Ready:\(ready.readyReplicas)/\(context.output.spec.replicas)"
"""#
healthPolicy: #"""
ready: {
updatedReplicas: *0 | int
readyReplicas: *0 | int
replicas: *0 | int
observedGeneration: *0 | int
} & {
if context.output.status.updatedReplicas != _|_ {
updatedReplicas: context.output.status.updatedReplicas
}
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
if context.output.status.replicas != _|_ {
replicas: context.output.status.replicas
}
if context.output.status.observedGeneration != _|_ {
observedGeneration: context.output.status.observedGeneration
}
}
isHealth: (context.output.spec.replicas == ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas) && (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration == context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)
"""#
}
}
description: ""
labels: {}
type: "component"
}
template: {
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
replicas: parameter.replicas
selector: {
matchLabels: {
"app.oam.dev/component": context.name
}
}
template: {
metadata: {
labels: {
"app.oam.dev/name": context.appName
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
}
spec: {
initContainers: [{
name: "prepare-war"
image: "busybox"
if parameter["deployToRoot"] != _|_ {
if parameter["deployToRoot"] {
command: ["wget", "-O", "/usr/local/tomcat/webapps/ROOT.war", parameter["warURL"]]
}
}
if parameter["deployToRoot"] == _|_ {
command: ["wget", "-P", "/usr/local/tomcat/webapps/", parameter["warURL"]]
}
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
containers: [{
name: context.name
image: "tomcat:" + parameter["envVersion"]
if parameter["cpu"] != _|_ {
resources: {
limits: cpu: parameter.cpu
requests: cpu: parameter.cpu
}
}
if parameter["memory"] != _|_ {
resources: {
limits: memory: parameter.memory
requests: memory: parameter.memory
}
}
ports: [{
containerPort: 8080
name: "webapp"
}]
_envs: {
custom: *parameter["env"] | []
inner: [
if parameter["javaOpts"] != _|_ {
{
name: "JAVA_OPTS"
value: parameter.javaOpts
}
},
]
}
env: _envs.custom + _envs.inner
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
volumes: [{
name: "webapps"
emptyDir: {}
}]
}
}
}
}
outputs: {
services: {
kind: "Service"
apiVersion: "v1"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: "app.oam.dev/component": context.name
ports: [{
port: 8080
}]
type: "ClusterIP"
}
}
}
parameter: {
// +usage=The URL of the war package.
warURL: string
// +usage=Select a environment version([tomcat version]-[jdk version])
envVersion: *"8-jdk8" | "9-jdk8" | "10-jdk8" | "8-jdk11" | "9-jdk11" | "10-jdk11" | "8-jdk17" | "9-jdk17" | "10-jdk17"
// +usage=Specifies the number of replicas.
replicas: *1 | int
// +usage=Define arguments by using environment variables
env?: [...{
name: string
value?: string
}]
// +usage=Setting the Java Opts configuration.
javaOpts?: string
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
// +usage=Specifies the attributes of the memory resource required for the container.
memory?: =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
 // +usage=Deploy the war package as ROOT.war so the app is served at the root path.
 deployToRoot?: bool
}
}
```
Copy the definition into a file named `java-war.cue`, then apply it:
```bash
vela def apply java-war.cue
```
Now, other developers can create an application simply by providing a war URL, for example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: nanoservice
namespace: e2e-test
spec:
components:
- name: catalog
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/catalog.war
type: java-war
- name: customer
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/customer.war
type: java-war
- dependsOn:
- catalog
- customer
name: order
properties:
env:
- name: CATALOG_HOST
value: catalog
- name: CUSTOMER_HOST
value: customer
envVersion: 8-jdk8
javaOpts: -Xms512m -Xmx512m -Xss256K
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/order.war
traits:
- properties:
domains:
- nanoservice.beijing.kubevela.net
rules:
- path:
type: PathPrefix
value: /order
port: 8080
type: http-route
type: java-war
policies:
- name: e2e-test
properties:
clusters:
- local
namespace: e2e-test
type: topology
workflow:
steps:
- name: deploy2-e2e-test
properties:
policies:
- e2e-test
type: deploy
```
![java-app](https://static.kubevela.net/images/1.5/java-war.jpg)
This example includes three components; the order service depends on the catalog and customer services. Developers only need to care about the war package URL and the Tomcat/JDK version, both of which are familiar to Java developers. Upload the war package to a repository such as JFrog, then assign its download URL to the `warURL` field.
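As a quick check, you can deploy this manifest with the CLI and watch the delivery progress. A minimal sketch, assuming the YAML above is saved as `nanoservice.yaml` (a hypothetical file name):
```bash
# Apply the application to the hub cluster.
vela up -f nanoservice.yaml

# Watch the workflow steps and component health of the delivery.
vela status nanoservice -n e2e-test
```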
In the same way, you could create component types to deploy Jar packages and other binary artifacts.
View File
@ -0,0 +1,133 @@
---
title: Dry Run
---
Dry run is a very useful way to check the application configuration and the definitions. It renders the application and prints the output resources that pass the server-side checks. If any definition (component or trait) is invalid, the dry run prints an error message.
> Policies and workflows cannot be dry-run currently; this is a known limitation.
Make sure the dry run passes whenever you:
* Create or update an application.
* Create or update a definition, then dry run some example applications.
### Dry run via CLI
There is a simple application:
```yaml
kind: Application
apiVersion: core.oam.dev/v1beta1
metadata:
name: webservice-app
spec:
components:
- name: frontend
type: webservice
properties:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
ports:
- port: 8080
expose: true
traits:
- type: scaler
properties:
replicas: 1
```
Copy it into a file named `app.yaml`, then run:
```bash
vela dry-run app.yaml
```
The output:
```yaml
---
# Application(webservice-app) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: WORKLOAD
workload.oam.dev/type: webservice
name: frontend
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app.oam.dev/component: frontend
template:
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
spec:
containers:
- command:
- node
- server.js
image: oamdev/testapp:v1
name: frontend
ports:
- containerPort: 8080
name: port-8080
protocol: TCP
---
## From the auxiliary workload
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: TRAIT
trait.oam.dev/resource: webserviceExpose
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
namespace: default
spec:
ports:
- name: port-8080
port: 8080
targetPort: 8080
selector:
app.oam.dev/component: frontend
type: ClusterIP
---
```
If we set the `expose` field of the port to `false`, the output resources will not include the `Service`. If we set the `replicas` field to the string `"1"`, the output will be an error message:
```bash
Error: validate application: ./app.yaml by dry-run: admission webhook "validating.core.oam.dev.v1beta1.applications" denied the request: field "schematic": Invalid value error encountered, cannot evaluate trait "scaler": invalid template of trait scaler after merge with parameter and context: parameter.replicas: conflicting values (*1 | int) and "1" (mismatched types int and string) (and 1 more errors).
```
This means the `replicas` field expects an integer but we provided a string, so the application configuration is invalid.
For more use cases, refer to the [Dry run command](../cli/vela_dry-run).
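The rendered output is useful beyond validation. A minimal sketch, assuming the `app.yaml` from above, that saves the manifests for review and optionally lets the API server validate them too:
```bash
# Render the application and keep the generated manifests for review.
vela dry-run app.yaml > rendered.yaml

# Optionally, ask the API server to validate the rendered manifests as well.
kubectl apply --dry-run=server -f rendered.yaml
```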
### Dry run via UI
![dry-run](https://static.kubevela.net/images/1.5/dry-run.jpg)
Click the `Deploy` button to open the workflow selector dialog. You can select a workflow (each workflow corresponds to an environment) and click the `DryRun` button to execute a dry run. If it passes, the result looks like this:
![dry-run-successfully](https://static.kubevela.net/images/1.5/dry-run-success.jpg)
The result report differs from the CLI output: it shows the complete application configuration generated from the application metadata, which you can inspect. Different environments may have different application configurations.
View File
@ -0,0 +1,50 @@
---
title: Multi Environment Delivery
---
Environments logically represent your deployment targets (QA, Prod, etc.). You can attach as many targets to an environment as you need.
In KubeVela, an environment is bound to a namespace in the hub cluster, and the application CR is created in this namespace. If the application does not configure a target cluster and namespace (for example, via a topology policy), the pod and service resources are created in the same namespace as the environment by default.
If you want to deploy one application to multiple environments, you need to apply the application CR to multiple namespaces in the hub cluster. With the UI, you only need to manage a single application configuration.
## Deploy the application to multi-environment
### 1. Create an environment
```bash
vela env init prod --namespace prod
```
You can create an environment via the CLI, but it will not be synced to the UI until you create an application in it. A more elegant way is to create it directly in the UI.
![create-env](https://static.kubevela.net/images/1.5/create-env.jpg)
One environment can include multiple targets. An environment belongs to a project, and one project can include multiple environments. Only the applications belonging to a project can be bound to its environments.
![environment](https://static.kubevela.net/images/1.5/environment.jpg)
### 2. Bind the application to an environment
You can select multiple environments when you create an application, or bind other environments at any time. The UI generates a workflow when the application is bound to an environment; it includes some `deploy` steps, one step per target by default. You can add other steps, such as a `suspend` step between two `deploy` steps.
![env workflow](https://static.kubevela.net/images/1.5/env-workflow.jpg)
### 3. Set the different configuration
We often need different configurations for different environments, such as environment variables, replicas, or resource limits. Let's create some override policies. When we create an override policy, we need to select a workflow and steps, which means selecting the environment or targets. This way, you can make different configurations take effect in a specific environment or target.
![override-policy](https://static.kubevela.net/images/1.5/override-policy.jpg)
Refer to [the override policy](../end-user/policies/references#override).
### 4. Deploy
By default, every environment is deployed independently. When you deploy an application, you select a workflow, which deploys one environment. If you want to deploy to multiple environments in one pass, you can link the workflows.
For example, suppose you want to deploy the prod environment after the QA environment finishes deploying. Create a custom-type trigger for the prod environment to get a webhook URL. Then edit the workflow of the QA environment and add a webhook-type step at the end that uses the trigger's webhook URL. Now the QA workflow will trigger the prod workflow.
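A minimal sketch of such a webhook step appended to the QA workflow is shown below; the URL is a placeholder for the trigger webhook copied from the UI, and the exact property names may differ across KubeVela versions:
```yaml
# Hypothetical final step of the QA environment workflow.
- name: trigger-prod
  type: webhook
  properties:
    url:
      value: https://<velaux-host>/api/v1/webhook/<prod-trigger-token>
```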
## Deploy the helm chart to multi-environment
Some teams deploy applications to multiple environments through a Helm chart with multiple values files. You can move this solution to KubeVela as well.
The operation is the same as in the previous section. The key point is to use different values files. Refer to [this section](./helm#specify-different-value-file)
View File
@ -29,9 +29,9 @@ module.exports = {
collapsed: true,
items: [
'tutorials/webservice',
'tutorials/access-application',
'end-user/traits/rollout',
'end-user/traits/sidecar',
'tutorials/custom-image-delivery',
],
},
{
@ -77,16 +77,7 @@ module.exports = {
'end-user/components/ref-objects',
],
},
{
type: 'category',
label: 'CI Integration',
collapsed: true,
items: [
'how-to/dashboard/trigger/overview',
'tutorials/jenkins',
'tutorials/trigger',
],
},
'tutorials/multi-env',
{
type: 'category',
label: 'GitOps',
@ -107,18 +98,32 @@ module.exports = {
'end-user/workflow/timeout',
],
},
'platform-engineers/operations/observability',
{
'General CD Features': [
'how-to/dashboard/application/create-application',
'end-user/version-control',
'tutorials/dry-run',
'end-user/workflow/component-dependency-parameter',
'end-user/policies/shared-resource',
'case-studies/initialize-env',
'end-user/policies/apply-once',
'end-user/policies/gc',
'end-user/policies/shared-resource',
'how-to/dashboard/config/helm-repo',
'how-to/dashboard/config/image-registry',
'tutorials/access-application',
'tutorials/cloud-shell',
],
},
{
type: 'category',
label: 'CI Integration',
collapsed: true,
items: [
'how-to/dashboard/trigger/overview',
'tutorials/jenkins',
'tutorials/trigger',
],
},
'platform-engineers/operations/observability',
'end-user/components/more',
],
},
@ -136,7 +141,11 @@ module.exports = {
],
},
{
'User Management': ['how-to/dashboard/user/user', 'tutorials/sso'],
'User Management': [
'how-to/dashboard/user/user',
'tutorials/sso',
'how-to/dashboard/config/dex-connectors',
],
},
'how-to/dashboard/user/project',
{
@ -154,13 +163,6 @@ module.exports = {
'platform-engineers/system-operation/working-with-ocm',
],
},
{
'Manage Config of Integration': [
'how-to/dashboard/config/dex-connectors',
'how-to/dashboard/config/helm-repo',
'how-to/dashboard/config/image-registry',
],
},
'platform-engineers/system-operation/performance-finetuning',
'platform-engineers/workflow/working-mechanism',
{
View File
@ -1,12 +1,17 @@
---
title: One-time delivery
title: One-time Delivery (Working With Other Controllers)
---
By default, the KubeVela controller will prevent configuration drift for applied resources by reconciling them routinely. This is useful if you want to keep your application always having the desired configuration in avoid of some unintentional changes by external modifiers.
By default, the KubeVela controller will prevent configuration drift for applied resources by reconciling them routinely. This is useful if you want to keep your application always having the desired configuration to avoid some unintentional changes by external modifiers.
However, sometimes, you might want to use KubeVela Application to dispatch jobs and recycle jobs, but also want to leave those resources mutable after workflow is finished. In this case, you can use the following apply-once policy.
However, sometimes, you might want to use KubeVela Application to dispatch jobs and recycle jobs, but also want to leave those resources mutable after the workflow is finished. In this case, you can use the following apply-once policy. Some use cases:
1. Working with HPA, which will change the deployment replicas.
2. Working with Istio, which will change the deployment containers.
3. Other controllers...
### All managed fields once apply
### How to use
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
@ -29,6 +34,39 @@ spec:
enable: true
```
In this case, if you change the replicas of the hello-world deployment after Application enters running state, the change will not be reverted. On the contrary, if you disable the apply-once policy (by default), any changes to the replicas of hello-world application will be brought back in the next reconcile loop.
In this case, if you change the replicas or other managed fields of the hello-world deployment after the application enters the running state, the change will not be reverted. On the contrary, if you disable the apply-once policy (by default), any changes to the replicas of the hello-world application will be brought back in the next reconcile loop.
The configuration drift check will be launched every 5 minutes after application enters the running state or the suspending state without errors. You can configure the time through setting the `application-re-sync-period` bootstrap parameter in the controller. See [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) for details.
### Specified fields once apply
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: apply-once-app
spec:
components:
- name: hello-world
type: webservice
properties:
image: oamdev/hello-world
traits:
- type: scaler
properties:
replicas: 3
policies:
- name: apply-once
type: apply-once
properties:
enable: true
rules:
- path:
- spec.replicas
selector:
resourceTypes:
- Deployment
```
In this case, only the replicas field may be changed by other controllers, such as HPA. The selector rules support: `componentNames`, `componentTypes`, `oamTypes`, `resourceNames`, `resourceTypes`, and `traitTypes`.
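For example, to ignore replica drift only on the Deployment generated by one specific component, the selectors can be combined. A minimal sketch, assuming the component is named `hello-world`:
```yaml
rules:
  - path:
      - spec.replicas
    selector:
      componentNames:
        - hello-world
      resourceTypes:
        - Deployment
```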
The configuration drift check will be launched every 5 minutes after the application enters the running state or the suspending state without errors. You can configure the time by setting the `application-re-sync-period` bootstrap parameter in the controller. See [bootstrap parameters](../../platform-engineers/system-operation/bootstrap-parameters) for details.
View File
@ -1,5 +1,5 @@
---
title: Canary rollout
title: Canary Rollout
---
In this section, we will introduce how to perform a canary rollout of a container service.
View File
@ -1,11 +1,9 @@
---
title: Attaching Sidecar
title: Attaching Sidecar Container
---
The `sidecar` trait allows you to attach a sidecar container to the component.
> Note: `sidecar` trait is hidden by default in `VelaUX`, you can use it in CLI.
## How to use
In this application, the component `log-gen-worker` and the sidecar share the data volume that stores the logs.
View File
@ -1,3 +0,0 @@
---
title: Overview
---
View File
@ -2,9 +2,10 @@
title: Automated Observability
---
Observability is critical for infrastructures. Without observability system, it is hard to identify what happens when system broke down. On contrary, a strong observabilty system can not only provide confidences for operators but can also help developers quickly locate the performance bottleneck or the weak points inside the whole system.
Observability is critical for infrastructures and applications. Without an observability system, it is hard to identify what happens when the system breaks down. On the contrary, a strong observability system can not only provide confidence for operators but also help developers quickly locate performance bottlenecks or weak points inside the whole system.
To help users build their own observability system, KubeVela provides a list of addons, including
- prometheus-server: A server records metrics in time series with flexible queries supported.
- kube-state-metrics: A metrics collector for the Kubernetes system.
- node-exporter: A metrics collector for the running Kubernetes nodes.
@ -25,26 +26,31 @@ To enable the addon suites, you simply needs to run the `vela addon enable` comm
> If your KubeVela is multi-cluster scenario, see the [multi-cluster installation](#multi-cluster-installation) section below.
1. Install the kube-state-metrics addon
```shell
> vela addon enable kube-state-metrics
```
2. Install the node-exporter addon
```shell
> vela addon enable node-exporter
```
3. Install the prometheus-server
```shell
> vela addon enable prometheus-server
```
4. Install the grafana addon.
```shell
> vela addon enable grafana
```
5. Access your grafana through port-forward.
```shell
> kubectl port-forward svc/grafana -n o11y-system 8080:3000
```
@ -66,11 +72,14 @@ URL: http://localhost:8080/d/application-overview/kubevela-applications
<details>
The KubeVela Application dashboard shows the overview of the metadata for the application. It directly accesses the Kubernetes API to retrieve the runtime application information, where you can use it as an entrance.
---
The **Basic Information** section extracts key information into panels and give you the most straightforward view for the current application.
---
The **Basic Information** section extracts key information into panels and give you the most straightforward view for the current application.
---
The **Related Resource** section shows those resources that work together with the application itself, including the managed resources, the recorded ResourceTrackers and the revisions.
---
The **Related Resource** section shows those resources that work together with the application itself, including the managed resources, the recorded ResourceTrackers and the revisions.
</details>
### Kubernetes Deployment
@ -84,14 +93,18 @@ URL: http://localhost:8080/d/deployment-overview/kubernetes-deployment
<details>
The Kubernetes Deployment dashboard gives you the detail running status for the deployment.
---
The **Pods** panel shows the pods that the deployment itself is currently managing.
---
---
The **Replicas** panel shows how the number of replicas changes, which can be used to diagnose when and how your deployment shifted to undesired state.
The **Pods** panel shows the pods that the deployment itself is currently managing.
---
The **Replicas** panel shows how the number of replicas changes, which can be used to diagnose when and how your deployment shifted to undesired state.
---
The **Pod** section includes the details for the resource usage (including the CPU / Memory / Network / Storage) which can be used to identify if the pod is facing resource pressure or making/receiving unexpected traffics.
---
The **Pod** section includes the details for the resource usage (including the CPU / Memory / Network / Storage) which can be used to identify if the pod is facing resource pressure or making/receiving unexpected traffics.
</details>
### KubeVela System
@ -105,23 +118,27 @@ URL: http://localhost:8080/d/kubevela-system/kubevela-system
<details>
The KubeVela System dashboard gives you the running details of the KubeVela core modules, including the controller and the cluster-gateway. Other modules like velaux or prism are expected to be added in the future.
---
The **Computation Resource** section shows the usage for core modules. It can be used to track if there is any memory leak (if the memory usage is continuously increasing) or under high pressure (the cpu usage is always very high). If the memory usage hits the resource limit, the corresponding module will be killed and restarted, which indicates the lack of computation resource. You should add more CPU/Memory for them.
---
---
The **Controller** section includes a wide range of panels which can help you to diagnose the bottleneck of the KubeVela controller in your scenario.
The **Computation Resource** section shows the usage for core modules. It can be used to track if there is any memory leak (if the memory usage is continuously increasing) or under high pressure (the cpu usage is always very high). If the memory usage hits the resource limit, the corresponding module will be killed and restarted, which indicates the lack of computation resource. You should add more CPU/Memory for them.
The **Controller Queue** and **Controller Queue Add Rate** panels show you the controller working queue changes. If the controller queue is increasing continuously, it means there are too much applications or application changes in the system, and the controller is unable to handle them in time. Then it means there is performance issues for KubeVela controller. A temporary increase for the controller queue is tolerable, but keeping for a long time will lead to memory increase which will finally cause Out-Of-Memory problems.
---
**Reconcile Rate** and **Average Reconcile Time** panels give you the overview of the controller status. If reconcile rate is steady and average reconcile time is reasonable (like under 500ms, depending on your scenario), your KubeVela controller is healthy. If the controller queue add rate is increasing but the reconcile rate does not go up, it will gradually lead to increase for the controller queue and cause troubles. There are various cases that your controller is unhealthy:
1. Reconcile is healthy but there are too much applications, you will find everything is okay except the controller queue metrics increasing. Check your CPU/Memory usage for the controller. You might need to add more computation resources.
2. Reconcile is not healthy due to too much errors. You will find lots of errors in the **Reconcile Rate** panel. This means your system is continuously facing process errors for application. It could be caused by invalid application configurations or unexpected errors while running workflows. Check application details and see which applications are causing errors.
3. Reconcile is not healthy due to long reconcile times. You need to check **ApplicationController Reconcile Time** panel and see whether it is a common case (the average reconcile time is high), or only part of your applications have problems (the p95 reconcile time is high). For the former case, usually it is caused by either insufficient CPU (CPU usage is high) or too much requests and rate limited by kube-apiserver (check **ApplicationController Client Request Throughput** and **ApplicationController Client Request Average Time** panel and see which resource requests is slow or excessive). For the later case you need to check which application is large and uses lots of time for reconciliations.
The **Controller** section includes a wide range of panels which can help you to diagnose the bottleneck of the KubeVela controller in your scenario.
Sometimes you might need refer to **ApplicationController Reconcile Stage Time** and see if there is some special reconcile stages are abnormal. For example, GCResourceTrackers use lots of time means there might be blockings for recycling resource in KubeVela system.
The **Controller Queue** and **Controller Queue Add Rate** panels show you the controller working queue changes. If the controller queue is increasing continuously, it means there are too much applications or application changes in the system, and the controller is unable to handle them in time. Then it means there is performance issues for KubeVela controller. A temporary increase for the controller queue is tolerable, but keeping for a long time will lead to memory increase which will finally cause Out-Of-Memory problems.
---
The **Application** section shows the overview of the applications in your whole KubeVela system. It can be used to see the changes of the application numbers and the used workflow steps. The **Workflow Initialize Rate** is an auxiliary panel which can be used to see how frequent new workflow execution is launched. The **Workflow Average Complete Time** can further show how much time it costs to finish the whole workflow.
**Reconcile Rate** and **Average Reconcile Time** panels give you the overview of the controller status. If reconcile rate is steady and average reconcile time is reasonable (like under 500ms, depending on your scenario), your KubeVela controller is healthy. If the controller queue add rate is increasing but the reconcile rate does not go up, it will gradually lead to increase for the controller queue and cause troubles. There are various cases that your controller is unhealthy:
1. Reconcile is healthy but there are too much applications, you will find everything is okay except the controller queue metrics increasing. Check your CPU/Memory usage for the controller. You might need to add more computation resources.
2. Reconcile is not healthy due to too much errors. You will find lots of errors in the **Reconcile Rate** panel. This means your system is continuously facing process errors for application. It could be caused by invalid application configurations or unexpected errors while running workflows. Check application details and see which applications are causing errors.
3. Reconcile is not healthy due to long reconcile times. You need to check **ApplicationController Reconcile Time** panel and see whether it is a common case (the average reconcile time is high), or only part of your applications have problems (the p95 reconcile time is high). For the former case, usually it is caused by either insufficient CPU (CPU usage is high) or too much requests and rate limited by kube-apiserver (check **ApplicationController Client Request Throughput** and **ApplicationController Client Request Average Time** panel and see which resource requests is slow or excessive). For the later case you need to check which application is large and uses lots of time for reconciliations.
Sometimes you might need refer to **ApplicationController Reconcile Stage Time** and see if there is some special reconcile stages are abnormal. For example, GCResourceTrackers use lots of time means there might be blockings for recycling resource in KubeVela system.
---
The **Application** section shows the overview of the applications in your whole KubeVela system. It can be used to see the changes of the application numbers and the used workflow steps. The **Workflow Initialize Rate** is an auxiliary panel which can be used to see how frequent new workflow execution is launched. The **Workflow Average Complete Time** can further show how much time it costs to finish the whole workflow.
</details>
@ -136,14 +153,18 @@ URL: http://localhost:8080/d/kubernetes-apiserver/kubernetes-apiserver
<details>
The Kubernetes APIServer dashboard helps you to see the most fundamental part for your Kubernetes system. If your Kubernetes APIServer is not running healthy, all of your controllers and modules in your Kubernetes system will be abnormal and unable to handle requests successfully. So it is important to make sure everything is fine in this dashboard.
---
The **Requests** section includes a series of panels which shows the QPS and latency for various kind of requests. Usually your APIServer could fail to respond if it is flooded by too much requests. At this time, you can see which type of requests is causing trouble.
---
---
The **WorkQueue** section shows the process status of the Kubernetes APIServer. If the **Queue Size** is large, it means the number of requests is out of the process capability of your Kubernetes APIServer.
The **Requests** section includes a series of panels which shows the QPS and latency for various kind of requests. Usually your APIServer could fail to respond if it is flooded by too much requests. At this time, you can see which type of requests is causing trouble.
---
The **WorkQueue** section shows the process status of the Kubernetes APIServer. If the **Queue Size** is large, it means the number of requests is out of the process capability of your Kubernetes APIServer.
---
The **Watches** section shows the number of watches in your Kubernetes APIServer. Compared to other types of requests, WATCH requests will continuously consume computation resources in Kubernetes APIServer, so it will be helpful to keep the number of watches limited.
---
The **Watches** section shows the number of watches in your Kubernetes APIServer. Compared to other types of requests, WATCH requests will continuously consume computation resources in Kubernetes APIServer, so it will be helpful to keep the number of watches limited.
</details>
## Customization
@ -187,30 +208,31 @@ metadata:
namespace: o11y-system
spec:
components:
- type: k8s-objects
name: my-prom
properties:
objects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: my-prom
namespace: o11y-system
data:
my-recording-rules.yml: |
groups:
- name: example
rules:
- record: apiserver:requests:rate5m
expr: sum(rate(apiserver_request_total{job="kubernetes-nodes"}[5m]))
- type: k8s-objects
name: my-prom
properties:
objects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: my-prom
namespace: o11y-system
data:
my-recording-rules.yml: |
groups:
- name: example
rules:
- record: apiserver:requests:rate5m
expr: sum(rate(apiserver_request_total{job="kubernetes-nodes"}[5m]))
policies:
- type: topology
name: topology
properties:
clusterLabelSelector: {}
- type: topology
name: topology
properties:
clusterLabelSelector: {}
```
Then you need to add `customConfig` parameter to the enabling process of the prometheus-server addon, like
```shell
> vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
```
@ -239,6 +261,7 @@ data:
### Customize Grafana credential
If you want to change the default username and password for Grafana, you can run the following command
```shell
> vela addon enable grafana adminUser=super-user adminPassword=PASSWORD
```
@ -270,22 +293,22 @@ metadata:
name: register-prometheus
spec:
components:
- type: grafana-datasource
name: my-prometheus
properties:
access: proxy
basicAuth: false
isDefault: false
name: MyPrometheus
readOnly: true
withCredentials: true
jsonData:
httpHeaderName1: Authorization
tlsSkipVerify: true
secureJsonFields:
httpHeaderValue1: <token of your prometheus access>
type: prometheus
url: <my-prometheus url>
- type: grafana-datasource
name: my-prometheus
properties:
access: proxy
basicAuth: false
isDefault: false
name: MyPrometheus
readOnly: true
withCredentials: true
jsonData:
httpHeaderName1: Authorization
tlsSkipVerify: true
secureJsonFields:
httpHeaderValue1: <token of your prometheus access>
type: prometheus
url: <my-prometheus url>
```
For example, if you are using the Prometheus service on Alibaba Cloud (ARMS), you can go to the Prometheus settings page and find the access URL and access token.
@ -305,12 +328,12 @@ metadata:
name: register-grafana
spec:
components:
- type: grafana-access
name: my-grafana
properties:
- type: grafana-access
name: my-grafana
endpoint: <my-grafana url>
token: <access token>
properties:
name: my-grafana
endpoint: <my-grafana url>
token: <access token>
```
To get your grafana access, you can go into your Grafana instance and configure API keys.
@ -326,7 +349,7 @@ After the application is successfully dispatched, you can check the registration
```shell
> kubectl get grafana
NAME ENDPOINT CREDENTIAL_TYPE
default http://grafana.o11y-system:3000 BasicAuth
default http://grafana.o11y-system:3000 BasicAuth
my-grafana https://grafana-rngwzwnsuvl4s9p66m.grafana.aliyuncs.com:80/ BearerToken
```
@ -359,4 +382,4 @@ Besides, the old observability addon relies on the fluxcd addon to install the w
## Future
KubeVela will integrate more observability addons in the future, such as logging and tracing addons. Community operators like [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) also provide alternative ways for managing observability application, which are intended to be included in KubeVela addons as well. We are also welcome to more integrations through KubeVela addon ecosystem.
KubeVela will integrate more observability addons in the future, such as logging and tracing addons. Community operators like [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) also provide alternative ways for managing observability application, which are intended to be included in KubeVela addons as well. We are also welcome to more integrations through KubeVela addon ecosystem.
View File
@ -0,0 +1,39 @@
---
title: Cloud Shell
---
The cloud shell feature helps you manage applications via CLI tools (vela, kubectl), as sketched after this list. For example, you can:
* Create applications in batches with YAML.
* Watch the logs of an application.
* Open a shell TTY of an application instance.
* Upload and download files from the shell environment.
* And more...
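A typical session inside the cloud shell might look like the following sketch; the application and namespace names are placeholders:
```bash
# List the applications you are allowed to see across namespaces.
vela ls -A

# Stream the logs of one application.
vela logs my-app -n my-namespace

# Open a shell inside one of the application's containers.
vela exec my-app -n my-namespace -- /bin/sh
```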
### Enable the addon
This operation requires addon management permission.
```bash
vela addon enable cloudshell
```
### Open the cloud shell environment
![open-cloud-shell](https://static.kubevela.net/images/1.5/cloud-shell.jpg)
At the top right of the UI there is a cloud shell button; users can open the cloud shell environment by clicking it. By default, all users have permission to open the cloud shell. If you want to disable this feature for some users, create a new platform role that has the `Disable CloudShell` permission and bind it to the users you want to restrict.
![cloud shell environment](https://static.kubevela.net/images/1.5/cloud-shell-environment.jpg)
After connecting to the cloud shell environment, you can run `vela --help` to get the help message. In the console, users are automatically granted Kubernetes RBAC permissions based on their permissions in the UI. The rules are:
1. If you have the `admin` permission, you are allowed to do anything in the cloud shell environment.
2. If you have the `deploy application` permission in some projects, you have full permissions on the Applications in the namespaces that belong to those projects.
3. If you have the `project view` permission in some projects, you only have read permission on the Applications in the namespaces that belong to those projects.
### Open the console of the application containers
![container](https://static.kubevela.net/images/1.5/container-console.jpg)
When you want to debug the application runtime, the container console can be pretty helpful. In KubeVela, this feature is one of the common use cases of the cloud shell. After clicking the button, a dialog window opens and shows the command line. You can copy the commands and execute them in the cloud shell.
View File
@ -0,0 +1,304 @@
---
title: Custom Image Delivery
---
If the default `webservice` component type is not suitable for your team and you want a simpler way to deploy your business applications, this guide will help you. Before you start, you must have platform manager permission.
### Simplify the `webservice` type
The default [webservice](../end-user/components/references#webservice) component type has more than 10 properties. Maybe your developers only need to configure the image path and resource limits; the team can set default values for the other properties.
If so, you could change the `webservice` definition.
1. Change the UI schema to hide some fields
> This approach only works for UI users.
![image](https://static.kubevela.net/images/1.5/custom-ui-schema.jpg)
On the definition detail page, users can customize the UI schema to configure the UI forms. For example, if you want to hide the ExposeType field, you only need to set `disable` to `true`.
```yaml
...
- jsonKey: exposeType
uiType: Select
label: ExposeType
disable: true
...
```
More references: [UI Schema](../reference/ui-schema)
2. Change the definition and remove or add some fields
If you want to completely remove or add some fields, you should edit the component definition.
> This approach requires knowledge of the CUE language.
```bash
vela def get webservice > custom-webservice.cue
```
Refer to the [CUE Basic](../platform-engineers/cue/basic) and [Component Definition](../platform-engineers/components/custom-component) documents to learn how to customize `custom-webservice.cue`.
After editing:
```bash
vela def apply custom-webservice.cue
```
### Create a new component type to deploy the war package
If your team deploys Java applications as war packages, you can create a new component type in KubeVela to deploy them.
```cue
"java-war": {
alias: ""
annotations: {}
attributes: {
workload: {
definition: {
apiVersion: "apps/v1"
kind: "Deployment"
}
type: "deployments.apps"
}
status: {
customStatus: #"""
ready: {
readyReplicas: *0 | int
} & {
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
}
message: "Ready:\(ready.readyReplicas)/\(context.output.spec.replicas)"
"""#
healthPolicy: #"""
ready: {
updatedReplicas: *0 | int
readyReplicas: *0 | int
replicas: *0 | int
observedGeneration: *0 | int
} & {
if context.output.status.updatedReplicas != _|_ {
updatedReplicas: context.output.status.updatedReplicas
}
if context.output.status.readyReplicas != _|_ {
readyReplicas: context.output.status.readyReplicas
}
if context.output.status.replicas != _|_ {
replicas: context.output.status.replicas
}
if context.output.status.observedGeneration != _|_ {
observedGeneration: context.output.status.observedGeneration
}
}
isHealth: (context.output.spec.replicas == ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas) && (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration == context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)
"""#
}
}
description: ""
labels: {}
type: "component"
}
template: {
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
replicas: parameter.replicas
selector: {
matchLabels: {
"app.oam.dev/component": context.name
}
}
template: {
metadata: {
labels: {
"app.oam.dev/name": context.appName
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
}
spec: {
initContainers: [{
name: "prepare-war"
image: "busybox"
if parameter["deployToRoot"] != _|_ {
if parameter["deployToRoot"] {
command: ["wget", "-O", "/usr/local/tomcat/webapps/ROOT.war", parameter["warURL"]]
}
}
if parameter["deployToRoot"] == _|_ {
command: ["wget", "-P", "/usr/local/tomcat/webapps/", parameter["warURL"]]
}
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
containers: [{
name: context.name
image: "tomcat:" + parameter["envVersion"]
if parameter["cpu"] != _|_ {
resources: {
limits: cpu: parameter.cpu
requests: cpu: parameter.cpu
}
}
if parameter["memory"] != _|_ {
resources: {
limits: memory: parameter.memory
requests: memory: parameter.memory
}
}
ports: [{
containerPort: 8080
name: "webapp"
}]
_envs: {
custom: *parameter["env"] | []
inner: [
if parameter["javaOpts"] != _|_ {
{
name: "JAVA_OPTS"
value: parameter.javaOpts
}
},
]
}
env: _envs.custom + _envs.inner
volumeMounts: [{
name: "webapps"
mountPath: "/usr/local/tomcat/webapps"
}]
}]
volumes: [{
name: "webapps"
emptyDir: {}
}]
}
}
}
}
outputs: {
services: {
kind: "Service"
apiVersion: "v1"
metadata: {
name: context.name
namespace: context.namespace
}
spec: {
selector: "app.oam.dev/component": context.name
ports: [{
port: 8080
}]
type: "ClusterIP"
}
}
}
parameter: {
// +usage=The URL of the war package.
warURL: string
 // +usage=Select an environment version([tomcat version]-[jdk version])
envVersion: *"8-jdk8" | "9-jdk8" | "10-jdk8" | "8-jdk11" | "9-jdk11" | "10-jdk11" | "8-jdk17" | "9-jdk17" | "10-jdk17"
// +usage=Specifies the number of replicas.
replicas: *1 | int
// +usage=Define arguments by using environment variables
env?: [...{
name: string
value?: string
}]
// +usage=Setting the Java Opts configuration.
javaOpts?: string
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
// +usage=Specifies the attributes of the memory resource required for the container.
memory?: =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
		// +usage=Deploy the WAR package as ROOT.war so it serves from the root path of the Tomcat webapps directory.
		deployToRoot?: bool
}
}
```
Copy the definition into a file named `java-war.cue`, then apply it:
```bash
vela def apply java-war.cue
```
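If the definition is applied successfully, the new component type is registered in the cluster. A quick way to confirm it (a sketch; the grep pattern is just the type name):
```bash
# List the registered definitions and look for the new component type
vela def list | grep java-war
```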
Now, other developers can create an application with a WAR URL. For example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: nanoservice
namespace: e2e-test
spec:
components:
- name: catalog
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/catalog.war
type: java-war
- name: customer
properties:
envVersion: 8-jdk8
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/customer.war
type: java-war
- dependsOn:
- catalog
- customer
name: order
properties:
env:
- name: CATALOG_HOST
value: catalog
- name: CUSTOMER_HOST
value: customer
envVersion: 8-jdk8
javaOpts: -Xms512m -Xmx512m -Xss256K
replicas: 1
warURL: https://static.kubevela.net/example/java-example/nanoservice/order.war
traits:
- properties:
domains:
- nanoservice.beijing.kubevela.net
rules:
- path:
type: PathPrefix
value: /order
port: 8080
type: http-route
type: java-war
policies:
- name: e2e-test
properties:
clusters:
- local
namespace: e2e-test
type: topology
workflow:
steps:
- name: deploy2-e2e-test
properties:
policies:
- e2e-test
type: deploy
```
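Save the manifest to a file and deploy it with the CLI (the file name here is arbitrary):
```bash
# Apply the application, then watch its delivery status
vela up -f nanoservice.yaml
vela status nanoservice -n e2e-test
```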
![java-app](https://static.kubevela.net/images/1.5/java-war.jpg)
This example includes three components, and the order service depends on the catalog and customer services. Developers only need to care about the WAR package URL and the Tomcat/JDK version, both of which are familiar to Java developers. Upload the WAR package to a repository, such as JFrog, and assign the resulting download URL to the `warURL` field.
In the same way, you could create component types to deploy JAR packages and other binary packages.
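For instance, a hypothetical `java-jar` component could reuse the same skeleton, swapping the Tomcat container for a plain JRE image that runs the JAR directly. A minimal pod-spec fragment, assuming illustrative `jarURL` and `jreVersion` parameters:
```cue
// Sketch of the pod spec for a hypothetical "java-jar" component;
// the surrounding definition would mirror java-war above.
spec: {
	initContainers: [{
		name:  "prepare-jar"
		image: "busybox"
		// Download the JAR into a shared volume
		command: ["wget", "-P", "/app/", parameter["jarURL"]]
		volumeMounts: [{name: "app", mountPath: "/app"}]
	}]
	containers: [{
		name:  context.name
		image: "eclipse-temurin:" + parameter["jreVersion"]
		// Run whatever JAR was downloaded into /app;
		// JAVA_OPTS can be wired in via env, as in java-war
		command: ["sh", "-c", "java $JAVA_OPTS -jar /app/*.jar"]
		volumeMounts: [{name: "app", mountPath: "/app"}]
	}]
	volumes: [{name: "app", emptyDir: {}}]
}
```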

View File

@ -0,0 +1,133 @@
---
title: Dry Run
---
Dry run is a very useful way to check the application configuration and the definitions. It renders the application and prints the output resources if the server-side validation passes. If a definition (component or trait) is invalid, the dry run prints an error message.
> Policies and workflows cannot be dry-run currently; this is a known limitation.
Make sure the dry run passes whenever you:
* Create or update an application.
* Create or update a definition, then dry-run some example applications against it.
### Dry run via CLI
Here is a simple application:
```yaml
kind: Application
apiVersion: core.oam.dev/v1beta1
metadata:
name: webservice-app
spec:
components:
- name: frontend
type: webservice
properties:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
ports:
- port: 8080
expose: true
traits:
- type: scaler
properties:
replicas: 1
```
Copy it into a file named `app.yaml`, then run:
```bash
vela dry-run app.yaml
```
The output:
```yaml
---
# Application(webservice-app) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: WORKLOAD
workload.oam.dev/type: webservice
name: frontend
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app.oam.dev/component: frontend
template:
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
spec:
containers:
- command:
- node
- server.js
image: oamdev/testapp:v1
name: frontend
ports:
- containerPort: 8080
name: port-8080
protocol: TCP
---
## From the auxiliary workload
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app.oam.dev/appRevision: ""
app.oam.dev/component: frontend
app.oam.dev/name: webservice-app
app.oam.dev/namespace: default
app.oam.dev/resourceType: TRAIT
trait.oam.dev/resource: webserviceExpose
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
namespace: default
spec:
ports:
- name: port-8080
port: 8080
targetPort: 8080
selector:
app.oam.dev/component: frontend
type: ClusterIP
---
```
If we set the `expose` field of the port to false, the output resources will not include the `Service`. If we set the `replicas` field to `"1"` (a string), the output will be an error message:
```bash
Error: validate application: ./app.yaml by dry-run: admission webhook "validating.core.oam.dev.v1beta1.applications" denied the request: field "schematic": Invalid value error encountered, cannot evaluate trait "scaler": invalid template of trait scaler after merge with parameter and context: parameter.replicas: conflicting values (*1 | int) and "1" (mismatched types int and string) (and 1 more errors).
```
This means the `replicas` field expects an integer but we provided a string value, so the application configuration is invalid.
For more use cases, refer to the [dry-run command](../cli/vela_dry-run).
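The dry-run command can also load definitions from local files, which is handy when you iterate on a definition before applying it to the cluster. A sketch (see the command reference above for the exact flags):
```bash
# Validate the application against a local, not-yet-applied definition
vela dry-run -f app.yaml -d java-war.cue
```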
### Dry run via UI
![dry-run](https://static.kubevela.net/images/1.5/dry-run.jpg)
Click the `Deploy` button to open the workflow selector dialog. Select a workflow (each workflow corresponds to an environment) and click the `DryRun` button to execute a dry run. If it passes, the result looks like this:
![dry-run-successfully](https://static.kubevela.net/images/1.5/dry-run-success.jpg)
The report differs from the CLI output: it contains the complete application configuration generated from the application metadata, which you can inspect. Different environments may generate different application configurations.

View File

@ -0,0 +1,50 @@
---
title: Multi Environment Delivery
---
Environments logically represent your deployment targets (QA, Prod, etc.). You can add as many Targets to an Environment as you need.
In KubeVela, an environment is bound to a namespace in the hub cluster, and the application CR is created in this namespace. If the application does not configure the target cluster and namespace (for example, via a topology policy), the pod and service resources are created in the same namespace as the environment by default.
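For reference, a topology policy that pins the target cluster and namespace looks like this (the policy name and values are illustrative):
```yaml
policies:
  - name: topology-prod
    type: topology
    properties:
      clusters: ["local"]
      namespace: prod
```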
If you want to deploy one application to multiple environments, you need to apply the application CR to multiple namespaces in the hub cluster. The UI provides a way to do this while managing only one application configuration.
## Deploy the application to multi-environment
### 1. Create an environment
```bash
vela env init prod --namespace prod
```
You can create an environment via the CLI, but it will not sync to the UI until you create an application in that environment. A more elegant way is to create it directly in the UI.
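You can list what already exists on the CLI side at any time:
```bash
# Show all environments and their namespaces
vela env ls
```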
![create-env](https://static.kubevela.net/images/1.5/create-env.jpg)
One environment can include multiple targets. An environment belongs to a project, and one project can include multiple environments. Only applications belonging to the project can bind its environments.
![environment](https://static.kubevela.net/images/1.5/environment.jpg)
### 2. Bind the application to an environment
You can select multiple environments when you create an application, or bind additional environments at any time. When an application is bound to an environment, the UI generates a workflow that includes some `deploy` steps; the default rule is one step per target. You can add other steps, such as a `suspend` step between two `deploy` steps.
![env workflow](https://static.kubevela.net/images/1.5/env-workflow.jpg)
### 3. Set the different configuration
We often need different configurations for different environments, such as environment variables, replicas, or resource limits. Let's create some override policies. When creating an override policy, we need to select a workflow and its steps, which means selecting the environment or targets. This way, the different configurations take effect only in the specified environment or targets.
![override-policy](https://static.kubevela.net/images/1.5/override-policy.jpg)
Refer to [the override policy](../end-user/policies/references#override).
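For example, an override policy that raises the replica count of a single component might look like this sketch (the component name and replica value are illustrative):
```yaml
policies:
  - name: override-prod
    type: override
    properties:
      components:
        - name: frontend
          traits:
            - type: scaler
              properties:
                replicas: 3
```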
### 4. Deploy
By default, every environment is deployed independently. When you deploy an application, you select a workflow, and each workflow deploys one environment. If you want to deploy to multiple environments in one pass, you can link the workflows together.
For example, suppose you want to deploy the prod environment after the QA environment finishes deploying. Create a custom-type trigger for the prod environment to obtain a webhook URL. Then edit the workflow of the QA environment and add a webhook step at the end that calls that URL. Now the QA workflow will trigger the prod workflow.
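The tail of the QA workflow might then look like the sketch below, where the URL is a placeholder for the one copied from the prod trigger:
```yaml
workflow:
  steps:
    - name: deploy-qa
      type: deploy
      properties:
        policies: ["qa"]
    - name: trigger-prod
      type: webhook
      properties:
        url:
          value: https://<velaux-host>/api/v1/webhook/<prod-trigger-token>
```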
## Deploy the helm chart to multi-environment
Some teams deploy an application to multiple environments with a Helm chart and multiple values files. You can move this solution to KubeVela as well.
The operation is the same as in the previous section; the key point is to use different values files. Refer to [this section](./helm#specify-different-value-file).

View File

@ -1,9 +1,9 @@
---
title: Deploy Container Image
description: deploy the business application by kubevela
description: Deploy the business application by kubevela
---
In this section, we will introduce how to deploy a container based application with KubeVela. The guide will run the whole process with UI console as it's quite the same with the [quick start](../quick-start) if you're using CLI.
In this section, we will introduce how to deploy a container-based application with KubeVela. The guide will run the whole process with UI console as it's quite the same with the [quick start](../quick-start) if you're using CLI.
## Before starting

View File

@ -31,9 +31,9 @@
"collapsed": true,
"items": [
"tutorials/webservice",
"tutorials/access-application",
"end-user/traits/rollout",
"end-user/traits/sidecar"
"end-user/traits/sidecar",
"tutorials/custom-image-delivery"
]
},
{
@ -85,16 +85,7 @@
"end-user/components/ref-objects"
]
},
{
"type": "category",
"label": "CI Integration",
"collapsed": true,
"items": [
"how-to/dashboard/trigger/overview",
"tutorials/jenkins",
"tutorials/trigger"
]
},
"tutorials/multi-env",
{
"type": "category",
"label": "GitOps",
@ -118,18 +109,32 @@
"end-user/workflow/timeout"
]
},
"platform-engineers/operations/observability",
{
"General CD Features": [
"how-to/dashboard/application/create-application",
"end-user/version-control",
"tutorials/dry-run",
"end-user/workflow/component-dependency-parameter",
"end-user/policies/shared-resource",
"case-studies/initialize-env",
"end-user/policies/apply-once",
"end-user/policies/gc",
"end-user/policies/shared-resource"
"how-to/dashboard/config/helm-repo",
"how-to/dashboard/config/image-registry",
"tutorials/access-application",
"tutorials/cloud-shell"
]
},
{
"type": "category",
"label": "CI Integration",
"collapsed": true,
"items": [
"how-to/dashboard/trigger/overview",
"tutorials/jenkins",
"tutorials/trigger"
]
},
"platform-engineers/operations/observability",
"end-user/components/more"
]
},
@ -149,7 +154,8 @@
{
"User Management": [
"how-to/dashboard/user/user",
"tutorials/sso"
"tutorials/sso",
"how-to/dashboard/config/dex-connectors"
]
},
"how-to/dashboard/user/project",
@ -168,13 +174,6 @@
"platform-engineers/system-operation/working-with-ocm"
]
},
{
"Manage Config of Integration": [
"how-to/dashboard/config/dex-connectors",
"how-to/dashboard/config/helm-repo",
"how-to/dashboard/config/image-registry"
]
},
"platform-engineers/system-operation/performance-finetuning",
"platform-engineers/workflow/working-mechanism",
{