rollout docs (#24)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
README.md
[](https://www.apache.org/licenses/LICENSE-2.0.html)
## Introduction
Kruise Rollouts is a **Bypass** component that provides advanced deployment capabilities, such as canary releases, traffic routing, and progressive delivery, for a variety of Kubernetes workloads, such as Deployment and CloneSet.

## Why Kruise Rollouts?
- **Functionality**:
- **Easy-integration**:
- Easily integrate with classic or GitOps-style Kubernetes-based PaaS.
## Documents
Coming soon ...
## Quick Start
- [Getting Started](docs/getting_started/introduction.md)
## Community
Active communication channels:
- Bi-weekly Community Meeting (*English*): TODO
## Acknowledgements
- The overall idea comes from both the OpenKruise and KubeVela communities, and the basic rollout code is inherited from KubeVela Rollout.
- This project is maintained by both contributors from [OpenKruise](https://openkruise.io/) and [KubeVela](https://kubevela.io).
## License
Kruise Rollout is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE.md) for the full license text.
# Installation
## Requirements
- A Kubernetes cluster, **version >= 1.16**.
- (Optional, only if using CloneSet) OpenKruise installed via Helm, **v1.1.0 or later**; see [Install OpenKruise](https://openkruise.io/docs/installation).
## Install with Helm
Kruise Rollout can be installed with Helm v3.1+, a command-line package manager that you can download from [here](https://github.com/helm/helm/releases).
```bash
# First, add the openkruise charts repository if you haven't done so.
$ helm repo add openkruise https://openkruise.github.io/charts/

# [Optional]
$ helm repo update

# Install the latest version.
$ helm install kruise-rollout openkruise/kruise-rollout --version 0.1.0
```
## Uninstall
Note that uninstalling will delete all resources created by Kruise Rollout, including webhook configurations, services, namespaces, CRDs, CR instances, and the Kruise Rollout controller!

Please do this ONLY when you fully understand the consequences.
To uninstall Kruise Rollout if it was installed with Helm:
```bash
$ helm uninstall kruise-rollout
release "kruise-rollout" uninstalled
```
## What's Next
Here are some recommended next steps:
- Learn Kruise Rollout's [Basic Usage](../tutorials/basic_usage.md).
# Introduction
## What is Kruise Rollout?
Kruise Rollouts is **a Bypass component that provides advanced deployment capabilities, such as canary releases, traffic routing, and progressive delivery, for a variety of Kubernetes workloads, such as Deployment and CloneSet**.
Kruise Rollout integrates with ingress controllers and service meshes, leveraging their traffic shaping abilities to gradually shift traffic to the new version during an update.
In addition, analysis of business Pod metrics can be used during a rollout to decide whether the release continues or is suspended.

## Why Kruise Rollout?
The native Kubernetes Deployment object supports the **RollingUpdate** strategy, which provides a basic set of safety guarantees (maxUnavailable, maxSurge) during an update. However, the rolling update strategy has many limitations:
- **The batch release process cannot be strictly controlled**, e.g. 20%, 40%, etc. Although maxUnavailable and maxSurge can control the release rate, the next batch starts as soon as the previous batch has been released.
- **Traffic cannot be precisely controlled during the release**, e.g. routing 20% of the traffic to the new version of Pods.
- **There is no ability to query external metrics to verify whether business indicators are normal during the upgrade**.
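For reference, the rolling update knobs mentioned above are configured in the Deployment spec itself; a minimal illustrative fragment (the percentage values here are arbitrary, not from this guide):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # how many Pods may be unavailable during the update
      maxSurge: 25%        # how many extra Pods may be created above the desired count
```

These two fields bound the release rate, but as noted, they cannot pause between batches or split traffic.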
## Features
- **Functionality**:
- Support multi-batch delivery for Deployment/CloneSet.
- Support Nginx/ALB/Istio traffic routing control during rollout.
- **Flexibility**:
- Support scaling workloads up/down during rollout.
- Can be applied directly to newly-created or existing workload objects.
- Can be removed at any time when it is no longer needed, without worrying about workload unavailability or traffic problems.
- Can cooperate with other native/third-party Kubernetes controllers/operators, such as HPA and WorkloadSpread.
- **Non-Invasion**:
- Does not invade native workload controllers.
- Does not replace user-defined workload and traffic configurations.
- **Extensibility**:
- Easily extendable to other traffic routing or workload types via plugin code.
- **Easy-integration**:
- Easily integrate with classic or GitOps-style Kubernetes-based PaaS.
## What's Next
Here are some recommended next steps:
- Start to [Install Kruise Rollout](./installation.md).
- Learn Kruise Rollout's [Basic Usage](../tutorials/basic_usage.md).
# Basic Usage
This guide demonstrates various concepts and features of Kruise Rollout by walking through a **canary release of a Deployment with Nginx Ingress**.

## Requirements
- Kruise Rollout installed via Helm; see [Install Kruise Rollout](../getting_started/installation.md).
- Nginx Ingress Controller installed via Helm (e.g. **helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx**).
## 1. Deploy Business Application (Contains Deployment, Service and Ingress)
The following is an example **echoserver application, consisting of Deployment, Service, and Ingress resources**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  labels:
    app: echoserver
spec:
  replicas: 5
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: cilium/echoserver:1.10.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: '8080'
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  labels:
    app: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoserver
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echoserver.example.com
    http:
      paths:
      - backend:
          service:
            name: echoserver
            port:
              number: 80
        path: /apis/echo
        pathType: Exact
```
After it is deployed to the Kubernetes cluster, the application can be accessed via the Nginx ingress, as follows:

## 2. Deploy Kruise Rollout CRD
**The Kruise Rollout CRD defines the deployment's rollout release process. The following is an example of a canary release whose first step releases 20% of the Pods and routes 5% of the traffic to the new version.**
```yaml
apiVersion: rollouts.kruise.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
  # The rollout resource needs to be in the same namespace as the corresponding workload (Deployment, CloneSet).
  # namespace: xxxx
spec:
  objectRef:
    type: workloadRef
    # the workload to roll out; currently only Deployment and CloneSet are supported
    workloadRef:
      apiVersion: apps/v1
      kind: Deployment
      name: echoserver
  strategy:
    type: canary
    canary:
      # canary release steps, e.g. 20%, 40%, 60% ...
      steps:
      # route 5% of the traffic to the new version
      - weight: 5
        # manual confirmation before releasing the remaining Pods
        pause: {}
        # optional, replicas released in the first step; if not set, 'weight' is used (5% here)
        replicas: 20%
      trafficRouting:
      # echoserver service name
      - service: echoserver
        # nginx ingress
        type: nginx
        # echoserver ingress name
        ingress:
          name: echoserver
```
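The example above defines a single canary step. Multi-batch releases, as hinted by the 20%/40%/60% comment, can be expressed by listing more steps. The following sketch is hypothetical and reuses only the fields shown above; the specific weights and replica percentages are illustrative:

```yaml
spec:
  strategy:
    type: canary
    canary:
      steps:
      - weight: 20     # 20% of traffic to the new version
        replicas: 20%  # first batch: 20% of the Pods
        pause: {}      # wait for manual approval
      - weight: 40
        replicas: 40%
        pause: {}
      - weight: 60
        replicas: 60%
        pause: {}
```

Each `pause: {}` step waits for approval before the next batch proceeds.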

## 3. Upgrade echoserver (Version 1.10.2 -> 1.10.3)
Change the image version in the deployment from 1.10.2 to 1.10.3, then apply it to the Kubernetes cluster with **kubectl apply -f deployment.yaml**, as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  ...
spec:
  ...
      containers:
      - name: echoserver
        image: cilium/echoserver:1.10.3
        imagePullPolicy: IfNotPresent
```
**The Kruise Rollout controller observes the above change, sets the deployment's paused=true via its webhook, and then generates the corresponding canary resources based on the user-defined deployment, service, and ingress configuration.**
As shown in the figure below, replicas(5) * replicas(20%) = 1 new-version Pod is published, and 5% of the traffic is routed to the new version.

## 4. Approve Rollout (Release Success)
**The Rollout status shows *StepPaused*, which means the first 20% of the Pods have been released successfully and 5% of the traffic is routed to the new version.**
After that, developers can use other methods, such as Prometheus business metrics, to verify that the release meets expectations, and then continue the subsequent release via **kubectl-kruise rollout approve rollout/rollouts-demo -n default** and wait for the deployment release to complete, as follows:

## 5. Release Failure
### Publish Abnormal Version
During the release process there are often **cases of failure**, such as the following image pull failure:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  ...
spec:
  ...
      containers:
      - name: echoserver
        # image not found
        image: cilium/echoserver:failed
        imagePullPolicy: IfNotPresent
```
At this point, the rollout remains in the **StepUpgrade** state, and checking the deployment and pod status shows that the image pull failed.

### a. Rollback To V1 Version
The most common remedy is to roll back. You don't need to change the Rollout CR at all; just roll back the deployment configuration to the previous version, i.e. **revert the image version in the deployment to 1.10.2 and kubectl apply -f it to the Kubernetes cluster**.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  ...
spec:
  ...
      containers:
      - name: echoserver
        image: cilium/echoserver:1.10.2
        imagePullPolicy: IfNotPresent
```

### b. Continuous Release V3 Version
For scenarios where you can't roll back, you can continuously release a v3 version: **change the deployment image to 1.10.3 and kubectl apply -f it to the Kubernetes cluster**.
Once the release is complete, just perform the **Approve Rollout** step.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  ...
spec:
  ...
      containers:
      - name: echoserver
        image: cilium/echoserver:1.10.3
        imagePullPolicy: IfNotPresent
```
