mirror of https://github.com/openkruise/kruise.git

Commit: md link style
Signed-off-by: Junjun Li <junjunli666@gmail.com>
Parent: f6b45e1412
This commit: 4417284d59

@@ -1,3 +1,4 @@
+```
 Apache License
 Version 2.0, January 2004
 http://www.apache.org/licenses/

@@ -199,3 +200,4 @@
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
+```

@@ -27,16 +27,17 @@ Several [tutorials](./docs/tutorial/README.md) are provided to demonstrate how t

 ### Install with YAML files

-##### Install CRDs
+#### Install CRDs

 ```
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_broadcastjob.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_sidecarset.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_statefulset.yaml
 ```

 Note that ALL three CRDs need to be installed for kruise-controller to run properly.

-##### Install kruise-controller-manager
+#### Install kruise-controller-manager

 `kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/manager/all_in_one.yaml`

@@ -53,6 +54,7 @@ The official kruise-controller-manager image is hosted under [docker hub](https:

 ## Usage examples

 ### Advanced StatefulSet

 ```yaml
 apiVersion: apps.kruise.io/v1alpha1
 kind: StatefulSet

@@ -84,7 +86,9 @@ spec:
       # Allow parallel updates with max number of unavailable instances equals to 2
       maxUnavailable: 2
 ```

 ### Broadcast Job

 Run a BroadcastJob in which each Pod computes pi, with `ttlSecondsAfterFinished` set to 30. The job
 will be deleted 30 seconds after it finishes.

@@ -105,6 +109,7 @@ spec:
     type: Always
   ttlSecondsAfterFinished: 30
 ```

 ### SidecarSet

 The yaml file below describes a SidecarSet that contains a sidecar container named `sidecar1`

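The BroadcastJob snippet above is truncated in this diff. A complete spec of the kind the text describes might look like the following sketch (the perl pi command is borrowed from the standard Kubernetes Job example and is an assumption, not the repo's actual file):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
metadata:
  name: broadcastjob-sample
spec:
  template:
    spec:
      restartPolicy: Never          # only Never or OnFailure is allowed
      containers:
      - name: pi
        image: perl                 # assumed image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
  completionPolicy:
    type: Always
    ttlSecondsAfterFinished: 30     # delete the job 30s after it finishes
```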
@@ -1,4 +1,4 @@
-## Overview
+# Overview

 Kubernetes provides a set of default controllers for workload management,
 like StatefulSet, Deployment and DaemonSet, for instance. Meanwhile, managed applications

@@ -19,19 +19,18 @@ Currently, Kruise supports the following three new workloads.
 - [BroadcastJob](./concepts/broadcastJob/README.md): A job that runs pods to completion across all the nodes in the cluster.
 - [SidecarSet](./concepts/sidecarSet/README.md): A controller that injects sidecar containers into the pod spec based on selectors

 ## Benefits

-* In addition to serving new workloads, Kruise also offers extensions to default
+- In addition to serving new workloads, Kruise also offers extensions to default
   controllers for new capabilities. Kruise owners will be responsible for porting
   any change to the default controller from upstream if it has an enhanced
   version inside (e.g., Advanced StatefulSet).

-* Kruise provides controllers for representative cloud native applications
+- Kruise provides controllers for representative cloud native applications
   with full Kubernetes API compatibility. Ideally, it can be the first option to
   consider when one wants to extend upstream Kubernetes for workload management.

-* Kruise plans to offer more Kubernetes automation solutions in the
+- Kruise plans to offer more Kubernetes automation solutions in the
   areas of scaling, QoS and operators, etc. Stay tuned!

 ## Tutorials

@@ -8,6 +8,7 @@
 This is done on purpose so that users can easily migrate workloads to the Advanced StatefulSet from the
 default StatefulSet. For example, one may simply replace the value of `apiVersion` in the StatefulSet yaml
 file from `apps/v1` to `apps.kruise.io/v1alpha1` after installing Kruise manager.

+```yaml
 - apiVersion: apps/v1
 + apiVersion: apps.kruise.io/v1alpha1

@@ -26,7 +27,8 @@
 ...
 ```

-### `MaxUnavailable` Rolling Update Strategy
+## `MaxUnavailable` Rolling Update Strategy

 This controller adds a `maxUnavailable` capability in the `RollingUpdateStatefulSetStrategy` to allow parallel Pod
 updates with the guarantee that the number of unavailable pods during the update cannot exceed this value.
 It can only be used when `podManagementPolicy` is `Parallel`.

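As an illustration of the fields just described, a minimal update strategy could be sketched as follows (field layout assumed from the `apps.kruise.io/v1alpha1` examples elsewhere in this diff):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: StatefulSet
metadata:
  name: sample
spec:
  replicas: 5
  podManagementPolicy: Parallel   # maxUnavailable only takes effect with Parallel
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2           # at most 2 Pods unavailable during the update
```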
@@ -64,9 +66,8 @@ v2, we can perform the following steps using the `MaxUnavailable` feature for fa
 Note that with the default StatefulSet, the Pods will be updated sequentially in the order of P3, P2, P1.
 4. Once one of P1, P2 and P3 finishes its update, P0 will be updated immediately.

-## `In-Place` Pod Update Strategy
-
+### `In-Place` Pod Update Strategy
 This controller adds a `podUpdatePolicy` field in `spec.updateStrategy.rollingUpdate`
 which controls recreate or in-place update for Pods.

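A sketch of where this field sits (the `InPlaceIfPossible` value is an assumption based on Kruise's v1alpha1 API, not quoted from this page):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      podUpdatePolicy: InPlaceIfPossible   # assumed value; selects recreate vs in-place
      maxUnavailable: 2
```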
@@ -150,5 +151,6 @@ spec:
       maxUnavailable: 2
 ```

-### Tutorial
+## Tutorial

 - [Use advanced StatefulSet to install Guestbook app](../../tutorial/advanced-statefulset.md)

@@ -1,4 +1,4 @@
-# BroadcastJob
+# BroadcastJob

 This controller distributes a Pod on every node in the cluster. Like a
 [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/),

@@ -20,12 +20,13 @@
 ## BroadcastJob Spec

 ### Template

 `Template` describes the Pod template used to run the job.
 Note that for the Pod restart policy, only `Never` or `OnFailure` is allowed for
 a BroadcastJob.

 ### Parallelism

 `Parallelism` specifies the maximal desired number of Pods that should be running at
 any given time. By default, there is no limit.

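For example, capping the job at three concurrent Pods could look like this sketch (image and command are illustrative, not from the docs):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
metadata:
  name: parallel-sample
spec:
  parallelism: 3          # at most 3 Pods run at any given time
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox    # illustrative
        command: ["sh", "-c", "echo done"]
```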
@@ -35,7 +36,9 @@ three pods running in parallel. A new Pod is created only after one running Pod

 ### CompletionPolicy

 `CompletionPolicy` specifies the controller behavior when reconciling the BroadcastJob.

 #### `Always`

 `Always` policy means the job will eventually complete with either a Failed or Succeeded
 condition. The following parameters take effect with this policy:
 - `ActiveDeadlineSeconds` specifies the duration in seconds relative to the startTime

@@ -58,6 +61,7 @@ three pods running in parallel. A new Pod is created only after one running Pod
   will be deleted.

 #### `Never`

 `Never` policy means the BroadcastJob will never be marked as Failed or Succeeded even if
 all Pods run to completion. This also means the above `ActiveDeadlineSeconds`, `BackoffLimit`
 and `TTLSecondsAfterFinished` parameters take no effect if the `Never` policy is used.

@@ -66,19 +70,23 @@ added node in the cluster, he can deploy a BroadcastJob with `Never` policy.

 ## Examples

-#### Monitor BroadcastJob status
+### Monitor BroadcastJob status

 Assuming the cluster has only one node, run `kubectl get bj` (shortcut name for BroadcastJob) and
 we will see the following:

 ```
 NAME                  DESIRED   ACTIVE   SUCCEEDED   FAILED
 broadcastjob-sample   1         0        1           0
 ```

 - `Desired`: The number of desired Pods. This equals the number of matched nodes in the cluster.
 - `Active`: The number of active Pods.
 - `Succeeded`: The number of succeeded Pods.
 - `Failed`: The number of failed Pods.

-#### Automatically delete the job after it completes for x seconds using `ttlSecondsAfterFinished`
+### Automatically delete the job after it completes for x seconds using `ttlSecondsAfterFinished`

 Run a BroadcastJob in which each Pod computes pi, with `ttlSecondsAfterFinished` set to 30.
 The job will be deleted 30 seconds after it is finished.

@@ -100,10 +108,11 @@ spec:
   ttlSecondsAfterFinished: 30
 ```

-#### Restrict the lifetime of a job using `activeDeadlineSeconds`
+### Restrict the lifetime of a job using `activeDeadlineSeconds`

 Run a BroadcastJob in which each Pod sleeps for 50 seconds, with `activeDeadlineSeconds` set to 10 seconds.
 The job will be marked as Failed after it runs for 10 seconds, and the running Pods will be deleted.

 ```
 apiVersion: apps.kruise.io/v1alpha1
 kind: BroadcastJob

@@ -122,9 +131,11 @@ spec:
   activeDeadlineSeconds: 10
 ```

-#### Automatically launch pods on newly added nodes by keeping the job active using `Never` completionPolicy
+### Automatically launch pods on newly added nodes by keeping the job active using `Never` completionPolicy

 Run a BroadcastJob with the `Never` completionPolicy. The job will continue to run even if all Pods
 have completed on all nodes. This is useful for automatically running Pods on newly added nodes.

 ```
 apiVersion: apps.kruise.io/v1alpha1
 kind: BroadcastJob

@@ -142,9 +153,11 @@ spec:
     type: Never
 ```

-#### Use pod template's `nodeSelector` to run on selected nodes
+### Use pod template's `nodeSelector` to run on selected nodes

 Users can set the `nodeSelector` or the `affinity` field in the pod template to restrict the job to run only on the selected nodes.
 For example, the spec below runs a job only on nodes with the label `nodeType=gpu`:

 ```
 apiVersion: apps.kruise.io/v1alpha1
 kind: BroadcastJob

@@ -160,5 +173,7 @@ spec:
       nodeSelector:
         nodeType: gpu
 ```

 ## Tutorial

 - [Use Broadcast Job to pre-download image](../../tutorial/broadcastjob.md)

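The gpu-selector spec is elided in this diff; a full sketch consistent with the description might be (image and command are assumptions, only the `nodeSelector` placement is taken from the text):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
metadata:
  name: gpu-job
spec:
  template:
    spec:
      nodeSelector:
        nodeType: gpu        # run only on nodes labeled nodeType=gpu
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox       # illustrative
        command: ["sh", "-c", "echo done"]
  completionPolicy:
    type: Always
```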
@@ -91,13 +91,18 @@ test-pod 2/2 Running 0 118s
 In the meantime, the SidecarSet status is updated:

 ```
 # kubectl get sidecarset test-sidecarset -o yaml | grep -A4 status
 status:
   matchedPods: 1
   observedGeneration: 1
   readyPods: 1
   updatedPods: 1
 ```

 ## Tutorial

 A more sophisticated tutorial is provided:

 - [Use SidecarSet to inject a sidecar container into the Guestbook application](../../tutorial/sidecarset.md)

@@ -21,15 +21,12 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
   during Pod creation. The rolling upgrade is done in a sequential manner, which means
   `MaxUnavailable` is equal to one.

 * **[M1] Paused Rollout**: Users can pause the current rollout process to avoid potential conflicts with
   other controllers by setting the `Paused` flag. The rollout can be resumed by setting `Paused` to false.

 * **[M2] Selective Upgrade**: An upgrade Pod selector is added. The new sidecar container version will
   only be applied to the Pods that match the upgrade selector.

 * **[M2] Parallel Upgrade**: Support the `MaxUnavailable` feature, which allows upgrading sidecar containers
   for multiple Pods simultaneously.

@@ -39,7 +36,6 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
   a `Paused` flag is introduced to allow users to pause the current rollout process.
   The rollout can be resumed by setting the `Paused` flag to false.

 * **[M2] Auto Remediation**: When creating new Pods in a scaling or rollout workflow, it is possible that
   a created Pod cannot reach the `Ready` state due to certain node problems. For example, node
   misconfiguration may cause constant failures on pulling images or starting the containers.

@@ -47,7 +43,6 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
   create a [SchedPatch](#SchedPatch) CRD which injects a Pod-to-node anti-affinity rule for
   all new Pods created by this controller.

 * **[M2] Condition Report**: Leverage the `StatefulSetConditionType` API to report the StatefulSet
   condition based on the scaling or rollout results.

@@ -1,8 +1,8 @@
 # Install Guestbook Application

 This tutorial walks you through an example to install a guestbook application using the Advanced StatefulSet.
 The guestbook app used is from this [repo](https://github.com/IBM/guestbook/tree/master/v1).

 ## Installing the Guestbook application using Helm

 To install the chart with a release name (application name) of `demo-v1` and a replica count of `20`:

@@ -20,6 +20,7 @@ If you don't use helm, you need to install with YAML files as below.

 ## Install the Guestbook application with YAML files

 The commands below install a redis cluster with 1 master and 2 replicas:

 ```
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/redis-master-deployment.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/redis-master-service.yaml

@@ -28,12 +29,14 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/t
 ```

 The commands below create a guestbook application using the advanced statefulset:

 ```
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-statefulset.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-service.yaml
 ```

 Several things to note in the `guestbook-statefulset.yaml`:

 ```yaml
 * apiVersion: apps.kruise.io/v1alpha1 # the kruise group version
   kind: StatefulSet

@@ -61,9 +64,11 @@ Several things to note in the `guestbook-statefulset.yaml`
 Now the app has been installed.

 ## Verify Guestbook Started

 Check that the guestbook has started. `statefulset.apps.kruise.io`, or the short name `sts.apps.kruise.io`, is the resource kind.
 The `apps.kruise.io` postfix needs to be appended due to the naming collision with the Kubernetes native `statefulset` kind.
 Verify that all pods are READY.

 ```
 kubectl get sts.apps.kruise.io

@@ -81,6 +86,7 @@ You can now view the Guestbook on browser.
 * **Remote Host:**
   To view the guestbook on a remote host, locate the external IP of the application in the **IP** column of the `kubectl get services` output.
   For example, run

 ```
 kubectl get svc

@@ -92,9 +98,10 @@ demo-v1-guestbook-kruise LoadBalancer 172.21.2.187 47.101.74.131 3000
 Visit `http://47.101.74.131:3000` for the guestbook UI.
 

 ## Inplace-update guestbook to the new image

 First, check the running pods.

 ```
 kubectl get pod -L controller-revision-hash -o wide | grep guestbook
 NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   CONTROLLER-REVISION-HASH

@@ -129,6 +136,7 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/t
 What this command does is change the image version to `v2` and change `partition` to `15`.
 This will update pods with ordinal number >= 15 (i.e., 15-19) to image version `v2`. The remaining pods (0-14) will stay at version `v1`.
 The YAML diff details are shown below:

 ```yaml
 spec:
   ...

@@ -148,6 +156,7 @@ spec:
 ```

 Check the statefulset and find that it has 5 pods updated:

 ```
 kubectl get sts.apps.kruise.io

@@ -156,7 +165,7 @@ demo-v1-guestbook-kruise 20 20 5 20 18h
 ```

 Check the pods again. `demo-v1-guestbook-kruise-15` to `demo-v1-guestbook-kruise-19` are updated with `RESTARTS` showing `1`,
-IPs remain the same, `CONTROLLER-REVISION-HASH` are updated from ` demo-v1-guestbook-kruise-7c947b5f94` to `demo-v1-guestbook-kruise-576bd76785`
+IPs remain the same, `CONTROLLER-REVISION-HASH` are updated from `demo-v1-guestbook-kruise-7c947b5f94` to `demo-v1-guestbook-kruise-576bd76785`

 ```
 kubectl get pod -L controller-revision-hash -o wide | grep guestbook

@@ -185,9 +194,11 @@ demo-v1-guestbook-kruise-9 1/1 Running 0 3m21s
 ```

 Now upgrade all the pods; run

 ```
 kubectl edit sts.apps.kruise.io demo-v1-guestbook-kruise
 ```

 and update `partition` to `0`. All pods will be updated to v2 this time, and all pods' IPs remain unchanged. You should also find
 that all 20 pods are updated fairly fast, because the `maxUnavailable` feature allows parallel updates instead of sequential updates.

@@ -198,6 +209,7 @@ demo-v1-guestbook-kruise 20 20 20 20 18h
 ```

 Describe a pod and find that the events show the original container being killed and a new container started. This verifies the `in-place` update.

 ```
 kubectl describe pod demo-v1-guestbook-kruise-0

@@ -212,6 +224,7 @@ Events:
 ```

 The pods should also be in the `Ready` state; the `InPlaceUpdateReady` condition is set to `False` right before the in-place update and back to `True` after the update is complete.

 ```yaml
 Readiness Gates:
   Type                 Status

@@ -244,6 +257,7 @@ helm uninstall demo-v1
 ```

 If you are not using helm, delete the application using the commands below:

 ```
 kubectl delete sts.apps.kruise.io demo-v1-guestbook-kruise
 kubectl delete svc demo-v1-guestbook-kruise redis-master redis-slave

@@ -2,9 +2,10 @@

 This tutorial walks you through an example to pre-download an image on nodes with a BroadcastJob.

 ## Verify nodes do not have images present

 The command below should output nothing.

 ```
 kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
 ```

@@ -14,12 +15,15 @@ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
 `kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/broadcastjob.yaml`

 Check that the broadcastjob is completed. `bj` is short for `broadcastjob`:

 ```
 $ kubectl get bj
 NAME             DESIRED   ACTIVE   SUCCEEDED   FAILED   AGE
 download-image   3         0        3           0        7s
 ```

 Check that the pods are completed.

 ```
 $ kubectl get pods
 NAME   READY   STATUS   RESTARTS   AGE

@@ -29,8 +33,10 @@ download-image-zc4t4 0/1 Completed 0 61s
 ```

 ## Verify images are downloaded on nodes

 Now run the same command and check that the images have been downloaded. The testing cluster has 3 nodes, so the command
 below outputs three entries:

 ```
 $ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
 - openkruise/guestbook:v3

@@ -38,6 +44,5 @@ $ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
 - openkruise/guestbook:v3
 ```

 The broadcastjob is configured with `ttlSecondsAfterFinished` set to `60`, meaning the job and its associated pods will be deleted
 `60` seconds after the job is finished.

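Based on that description, the `broadcastjob.yaml` referenced above presumably contains something like the following sketch (not the actual file contents; image and command are assumptions):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
metadata:
  name: download-image
spec:
  completionPolicy:
    type: Always
    ttlSecondsAfterFinished: 60   # job and its pods deleted 60s after completion
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: guestbook
        image: openkruise/guestbook:v3
        command: ["true"]         # illustrative; pulling the image is the point
```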
@@ -1,9 +1,11 @@
 # Install Kruise Controller Manager

 The steps below assume you have an existing Kubernetes cluster running properly.

 ## Install with YAML files

 ### Install Kruise CRDs

 ```
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_broadcastjob.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_sidecarset.yaml

@@ -17,6 +19,7 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config
 ## Verify Kruise-manager is running

 Check that the kruise-manager pod is running:

 ```
 kubectl get pods -n kruise-system

@@ -1,4 +1,5 @@
 # Inject Sidecar Container with SidecarSet

 This tutorial walks you through an example to automatically inject a sidecar container with a SidecarSet.

 ## Install Guestbook sidecarset

@@ -28,7 +29,6 @@ spec:
       containerPort: 4000 # different from main guestbook containerPort which is 3000
 ```

 ## Installing the application

 To install the chart with a release name (application name) of `demo-v1` and a replica count of `20`:

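The SidecarSet spec excerpted above (with the sidecar listening on port 4000) can be sketched in full as follows (illustrative only; the metadata name, selector label, and image are assumptions, not quoted from the tutorial):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: guestbook-sidecar            # assumed name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: guestbook-with-sidecar   # assumed label
  containers:
  - name: guestbook-sidecar
    image: openkruise/guestbook-sidecar:v1             # assumed image
    ports:
    - name: sidecar-server
      containerPort: 4000 # different from main guestbook containerPort which is 3000
```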
@@ -36,19 +36,22 @@ To install the chart with release name (application name) of `demo-v1`, replica
 ```bash
 helm install demo-v1 apphub/guestbook-kruise --set replicaCount=20,image.repository=openkruise/guestbook,image.tag=v2
 ```

 The Chart is located in [this repo](https://github.com/cloudnativeapp/workshop/tree/master/kubecon2019china/charts/guestbook-kruise).

 Alternatively, install the application using YAML files:

 ```
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-sts-for-sidecar-demo.yaml
 kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-service-for-sidecar-demo.yaml
 ```

 ## Check your application

 Check that the guestbook has started. `statefulset.apps.kruise.io`, or the short name `sts.apps.kruise.io`, is the resource kind.
 The `apps.kruise.io` postfix needs to be appended due to the naming collision with the Kubernetes native `statefulset` kind.
 Verify that all pods are READY.

 ```
 kubectl get sts.apps.kruise.io
 NAME   DESIRED   CURRENT   UPDATED   READY   AGE

@@ -103,7 +106,6 @@ Check that the sidecar container is injected.
 + Mounts: <none>
 ```

 ## View the Sidecar Guestbook

 You can now view the Sidecar Guestbook on a browser.

@@ -114,6 +116,7 @@ You can now view the Sidecar Guestbook on browser.
 * **Remote Host:**
   To view the sidecar guestbook on a remote host, locate the external IP of the application in the **IP** column of the `kubectl get services` output.
   For example, run

 ```
 kubectl get svc

@@ -123,7 +126,6 @@ demo-v1-guestbook-kruise LoadBalancer 172.21.2.187 47.101.74.131 3000

 `47.101.74.131` is the external IP.

 Visit `http://47.101.74.131:4000` for the sidecar guestbook.
 

@@ -146,7 +148,9 @@ Then uninstall it:
 ```
 helm uninstall demo-v1
 ```

 If you are not using helm, delete the application using the commands below:

 ```
 kubectl delete sts.apps.kruise.io demo-v1-guestbook-kruise
 kubectl delete svc demo-v1-guestbook-kruise redis-master redis-slave
