md link style

Signed-off-by: Junjun Li <junjunli666@gmail.com>
Junjun Li 2019-07-11 21:14:18 +08:00
parent f6b45e1412
commit 4417284d59
13 changed files with 174 additions and 125 deletions

View File

@ -1,3 +1,4 @@
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
@ -199,3 +200,4 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

View File

@ -5,7 +5,7 @@
[![codecov](https://codecov.io/gh/openkruise/kruise/branch/master/graph/badge.svg)](https://codecov.io/gh/openkruise/kruise)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/2908/badge)](https://bestpractices.coreinfrastructure.org/en/projects/2908)
Kruise is the core of the OpenKruise project. It is a set of controllers which extends and complements
[Kubernetes core controllers](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
on workload management.
@ -27,16 +27,17 @@ Several [tutorials](./docs/tutorial/README.md) are provided to demonstrate how t
### Install with YAML files
#### Install CRDs
```
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_broadcastjob.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_sidecarset.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_statefulset.yaml
```
Note that ALL three CRDs need to be installed for kruise-controller to run properly.
#### Install kruise-controller-manager
`kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/manager/all_in_one.yaml`
@ -53,6 +54,7 @@ The official kruise-controller-manager image is hosted under [docker hub](https:
## Usage examples
### Advanced StatefulSet
```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: StatefulSet
@ -71,7 +73,7 @@ spec:
spec:
readinessGates:
# A new condition must be added to ensure the pod remains NotReady while the in-place update is happening
- conditionType: InPlaceUpdateReady
containers:
- name: main
image: nginx:alpine
@ -84,7 +86,9 @@ spec:
# Allow parallel updates with the max number of unavailable instances equal to 2
maxUnavailable: 2
```
### Broadcast Job
Run a BroadcastJob in which each Pod computes pi, with `ttlSecondsAfterFinished` set to 30. The job
will be deleted in 30 seconds after the job is finished.
@ -105,6 +109,7 @@ spec:
type: Always
ttlSecondsAfterFinished: 30
```
### SidecarSet
The yaml file below describes a SidecarSet that contains a sidecar container named `sidecar1`
@ -122,7 +127,7 @@ spec:
containers:
- name: sidecar1
image: centos:7
command: ["sleep", "999d"] # do nothing at all
command: ["sleep", "999d"] # do nothing at all
```
## Developer Guide
@ -149,7 +154,7 @@ or just
Generate manifests e.g. CRD, RBAC etc.
`make manifests`
## Community

View File

@ -1,4 +1,4 @@
# Overview
Kubernetes provides a set of default controllers for workload management,
like StatefulSet, Deployment, and DaemonSet, for instance. At the same time, managed applications
@ -6,12 +6,12 @@ express more and more diverse requirements for workload upgrade and deployment,
in many cases, cannot be satisfied by the default workload controllers.
Kruise attempts to fill such a gap by offering a set of controllers as a supplement
to manage new workloads in Kubernetes. The target use cases are representative,
originally collected from the users of Alibaba cloud container services and the
developers of in-house large-scale online/offline container applications.
Most of the use cases can be easily applied to other similar cloud user scenarios.
Currently, Kruise supports the following three new workloads.
## Workloads
@ -19,21 +19,20 @@ Currently, Kruise supports the following three new workloads.
- [BroadcastJob](./concepts/broadcastJob/README.md): A job that runs pods to completion across all the nodes in the cluster.
- [SidecarSet](./concepts/sidecarSet/README.md): A controller that injects sidecar containers into the pod spec based on selectors
## Benefits
- In addition to serving new workloads, Kruise also offers extensions to default
controllers for new capabilities. Kruise owners will be responsible for porting
any change to the default controller from upstream if it has an enhanced
version inside (e.g., Advanced StatefulSet).
- Kruise provides controllers for representative cloud native applications
with full Kubernetes API compatibility. Ideally, it can be the first option to
consider when one wants to extend upstream Kubernetes for workload management.
- Kruise plans to offer more Kubernetes automation solutions in the
areas of scaling, QoS and operators, etc. Stay tuned!
## Tutorials
Several [Tutorials](./tutorial/README.md) are provided to demonstrate how to use the controllers

View File

@ -1,13 +1,14 @@
# Advanced StatefulSet
This controller enhances the rolling update workflow of default [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
controller from two aspects: adding [MaxUnavailable rolling update strategy](#maxunavailable-rolling-update-strategy)
and introducing [In-place Pod Update Strategy](#in-place-pod-update-strategy).
Note that Advanced StatefulSet extends the same CRD schema of default StatefulSet with newly added fields.
The CRD kind name is still `StatefulSet`.
This is done on purpose so that users can easily migrate workloads to the Advanced StatefulSet from the
default StatefulSet. For example, one may simply replace the value of `apiVersion` in the StatefulSet yaml
file from `apps/v1` to `apps.kruise.io/v1alpha1` after installing Kruise manager.
```yaml
- apiVersion: apps/v1
+ apiVersion: apps.kruise.io/v1alpha1
@ -23,16 +24,17 @@
metadata:
labels:
app: sample
...
```
## `MaxUnavailable` Rolling Update Strategy
This controller adds a `maxUnavailable` capability in the `RollingUpdateStatefulSetStrategy` to allow parallel Pod
updates with the guarantee that the number of unavailable pods during the update cannot exceed this value.
It can only be used when the podManagementPolicy is `Parallel`.
This feature achieves update efficiency similar to Deployment for cases where the order of
update is not critical to the workload. Without this feature, the native `StatefulSet` controller can only
update Pods one by one even if the podManagementPolicy is `Parallel`. The API change is described below:
```go
@ -64,12 +66,11 @@ v2, we can perform the following steps using the `MaxUnavailable` feature for fa
Note that with default StatefulSet, the Pods will be updated sequentially in the order of P3, P2, P1.
4. Once one of P1, P2 and P3 finishes update, P0 will be updated immediately.
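For reference, a minimal sketch of an update strategy that would produce the behavior described above with four replicas P0-P3; the field values are illustrative and not taken from the original steps:

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: StatefulSet
spec:
  replicas: 4
  # maxUnavailable only takes effect when podManagementPolicy is Parallel
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # P3, P2 and P1 can be updated at the same time; as soon as one of them
      # becomes available again, P0 starts updating
      maxUnavailable: 3
```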
## `In-Place` Pod Update Strategy
This controller adds a `podUpdatePolicy` field in `spec.updateStrategy.rollingUpdate`
which controls recreate or in-place update for Pods.
With this feature, a Pod will not be recreated if the container images are the only updated spec in
the Advanced StatefulSet Pod template.
Kubelet will handle the image-only update by downloading the new images and restart
@ -77,12 +78,12 @@ v2, we can perform the following steps using the `MaxUnavailable` feature for fa
in common container image update cases since all Pod namespace configurations
(e.g., Pod IP) are preserved after update. In addition, Pod rescheduling and reshuffling are avoided
during the update.
Note that currently, only container image update is supported for in-place update. Any other Pod
spec update such as changing the command or container ENV will be rejected by kube-apiserver.
The API change is described below:
```go
type PodUpdateStrategyType string
@ -105,13 +106,13 @@ const (
This is the same behavior as default StatefulSet.
- `InPlaceIfPossible` strategy implies that the controller will check if the current update is eligible
for in-place update. If so, an in-place update is performed by updating the Pod spec directly. Otherwise, the
controller falls back to the original Pod recreation mechanism. The `InPlaceIfPossible` strategy only
works when `Spec.UpdateStrategy.Type` is set to `RollingUpdate`.
- `InPlaceOnly` strategy implies that the controller will only update Pods in place. Note that `template.spec`
is only allowed to update `containers[x].image`; the api-server will return an error if you try to update other fields in
`template.spec`.
**More importantly**, a readiness-gate named `InPlaceUpdateReady` must be added into `template.spec.readinessGates`
when using `InPlaceIfPossible` or `InPlaceOnly`. The condition `InPlaceUpdateReady` in podStatus will be updated to False before in-place
update and updated to True after the update is finished. This ensures that the pod remains NotReady while the in-place
update is happening.
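Putting the pieces above together, a minimal sketch (values are illustrative) of an Advanced StatefulSet that opts into in-place updates and declares the required readiness gate:

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: StatefulSet
spec:
  template:
    spec:
      readinessGates:
      # required when using InPlaceIfPossible or InPlaceOnly
      - conditionType: InPlaceUpdateReady
      containers:
      - name: main
        image: nginx:alpine   # only containers[x].image may change for an in-place update
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      podUpdatePolicy: InPlaceIfPossible
```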
@ -150,5 +151,6 @@ spec:
maxUnavailable: 2
```
## Tutorial
- [Use advanced StatefulSet to install Guestbook app](../../tutorial/advanced-statefulset.md)

View File

@ -1,31 +1,32 @@
# BroadcastJob
This controller distributes a Pod on every node in the cluster. Like a
[DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/),
a BroadcastJob makes sure a Pod is created and run on all selected nodes once
in a cluster.
Like a [Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/),
a BroadcastJob is expected to run to completion.
In the end, BroadcastJob does not consume any resources after each Pod succeeds on
every node.
This controller is particularly useful for upgrading software, e.g., Kubelet, or running a validation check
on every node, which is typically needed only once within a long period of time, or for
running an ad-hoc full-cluster inspection script.
Optionally, a BroadcastJob can stay alive after all Pods on the desired nodes complete,
so that a Pod will be automatically launched for every new node after it is added to
the cluster.
## BroadcastJob Spec
### Template
`Template` describes the Pod template used to run the job.
Note that for the Pod restart policy, only `Never` or `OnFailure` is allowed for
BroadcastJob.
### Parallelism
`Parallelism` specifies the maximal desired number of Pods that should be run at
any given time. By default, there's no limit.
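As a rough sketch (names and values are illustrative), `template` and `parallelism` appear in the BroadcastJob spec like this:

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
metadata:
  name: sample-broadcastjob   # illustrative name
spec:
  parallelism: 3              # at most 3 Pods run at any given time
  template:
    spec:
      containers:
      - name: main
        image: busybox        # illustrative image
        command: ["echo", "hello"]
      restartPolicy: Never    # only Never or OnFailure is allowed
```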
@ -34,31 +35,34 @@ three pods running in parallel. A new Pod is created only after one running Pod
### CompletionPolicy
`CompletionPolicy` specifies the controller behavior when reconciling the BroadcastJob.
#### `Always`
`Always` policy means the job will eventually complete with either failed or succeeded
condition. The following parameters take effect with this policy:
- `ActiveDeadlineSeconds` specifies the duration in seconds relative to the startTime
that the job may be active before the system tries to terminate it.
For example, if `ActiveDeadlineSeconds` is set to 60 seconds, after the BroadcastJob starts
running for 60 seconds, all the running pods will be deleted and the job will be marked
as Failed.
- `BackoffLimit` specifies the number of retries before marking this job failed.
Currently, the number of retries is defined as the aggregated number of restart
counts across all Pods created by the job, i.e., the sum of the
[ContainerStatus.RestartCount](https://github.com/kruiseio/kruise/blob/d61c12451d6a662736c4cfc48682fa75c73adcbc/vendor/k8s.io/api/core/v1/types.go#L2314)
for all containers in every Pod. If this value exceeds `BackoffLimit`, the job is marked
as Failed and all running Pods are deleted. No limit is enforced if `BackoffLimit` is
not set.
- `TTLSecondsAfterFinished` limits the lifetime of a BroadcastJob that has finished execution
(either Complete or Failed). For example, if TTLSecondsAfterFinished is set to 10 seconds,
the job will be kept for 10 seconds after it finishes. Then the job along with all the Pods
will be deleted.
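The three parameters above can be combined under `completionPolicy`; a partial spec sketch with illustrative values:

```yaml
spec:
  completionPolicy:
    type: Always
    activeDeadlineSeconds: 60    # terminate the job 60s after it starts running
    backoffLimit: 3              # mark the job Failed after 3 aggregated container restarts
    ttlSecondsAfterFinished: 30  # delete the finished job and its Pods after 30s
```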
#### `Never`
`Never` policy means the BroadcastJob will never be marked as Failed or Succeeded even if
all Pods run to completion. This also means the above `ActiveDeadlineSeconds`, `BackoffLimit`,
and `TTLSecondsAfterFinished` parameters take no effect if the `Never` policy is used.
For example, if a user wants to perform an initial configuration validation for every newly
@ -66,20 +70,24 @@ added node in the cluster, he can deploy a BroadcastJob with `Never` policy.
## Examples
### Monitor BroadcastJob status
Assuming the cluster has only one node, run `kubectl get bj` (shortcut name for BroadcastJob) and
we will see the following:
```
NAME DESIRED ACTIVE SUCCEEDED FAILED
broadcastjob-sample 1 0 1 0
```
- `Desired`: The number of desired Pods. This equals the number of matched nodes in the cluster.
- `Active`: The number of active Pods.
- `SUCCEEDED`: The number of succeeded Pods.
- `FAILED`: The number of failed Pods.
### Automatically delete the job after it completes for x seconds using `ttlSecondsAfterFinished`
Run a BroadcastJob in which each Pod computes pi, with `ttlSecondsAfterFinished` set to 30.
The job will be deleted in 30 seconds after it is finished.
```
@ -100,10 +108,11 @@ spec:
ttlSecondsAfterFinished: 30
```
### Restrict the lifetime of a job using `activeDeadlineSeconds`
Run a BroadcastJob in which each Pod sleeps for 50 seconds, with `activeDeadlineSeconds` set to 10 seconds.
The job will be marked as Failed after it runs for 10 seconds, and the running Pods will be deleted.
```
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
@ -122,9 +131,11 @@ spec:
activeDeadlineSeconds: 10
```
### Automatically launch pods on newly added nodes by keeping the job active using `Never` completionPolicy
Run a BroadcastJob with `Never` completionPolicy. The job will continue to run even if all Pods
have completed on all nodes. This is useful for automatically running Pods on newly added nodes.
```
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
@ -142,9 +153,11 @@ spec:
type: Never
```
### Use pod template's `nodeSelector` to run on selected nodes
Users can set the `NodeSelector` or the `affinity` field in the pod template to restrict the job to run only on the selected nodes.
For example, the spec below will run a job only on nodes with the label `nodeType=gpu`:
```
apiVersion: apps.kruise.io/v1alpha1
kind: BroadcastJob
@ -160,5 +173,7 @@ spec:
nodeSelector:
nodeType: gpu
```
## Tutorial
- [Use Broadcast Job to pre-download image](../../tutorial/broadcastjob.md)

View File

@ -54,7 +54,7 @@ spec:
containers:
- name: sidecar1
image: centos:7
command: ["sleep", "999d"] # do nothing at all
command: ["sleep", "999d"] # do nothing at all
```
Create a SidecarSet based on the YAML file:
@ -91,13 +91,18 @@ test-pod 2/2 Running 0 118s
In the meantime, the SidecarSet status is updated:
```
# kubectl get sidecarset test-sidecarset -o yaml | grep -A4 status
status:
matchedPods: 1
observedGeneration: 1
readyPods: 1
updatedPods: 1
```
## Tutorial
A more sophisticated tutorial is provided:
- [Use SidecarSet to inject a sidecar container into the Guestbook application](../../tutorial/sidecarset.md)

View File

@ -1,12 +1,12 @@
# Kruise 2019 Q3 Roadmap
By the end of October, we plan to enhance the existing workload controllers with new features based
on our original development plan and feedback from users. We also plan to release a few new
workload controllers!
## Terminology
**M#** represents the milestone starting from **M1**. Each milestone corresponds to a three-to-four-week
release cycle. **M1** is expected to be released by the end of July 2019.
## Enhancements
@ -15,20 +15,17 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
* **[M1] Rolling Upgrade**: Support rolling upgrading the sidecar containers for matched Pods if
the sidecar container image is the only updated component, i.e., the upgrade can be done via
"InPlaceUpgrade" strategy. In other cases where Pod recreation is needed, SidecarSet
"InPlaceUpgrade" strategy. In other cases where Pod recreation is needed, SidecarSet
will not proactively delete the affecting Pods. Instead, it "lazily" relies on **user** to trigger
Pod workload upgrade to recreate the Pods. The new version of sidecar containers will be injected
during Pod creation. The rolling upgrade is done in a sequential manner, which means
`MaxUnavailable` is equal to one.
* **[M1] Paused Rollout**: User can pause the current rollout process to avoid potential conflicts with
other controllers by setting the `Paused` flag. The rollout can be resumed by setting `Paused` to false.
* **[M2] Selective Upgrade**: An upgrade Pod selector is added. The new sidecar container version will
only be applied to the Pods that match the upgrade selector.
* **[M2] Parallel Upgrade**: Support `MaxUnavailable` feature, which allows upgrading sidecar containers
for multiple Pods simultaneously.
@ -36,19 +33,17 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
### Advanced StatefulSet
* **[M1] Paused Rollout**: If the StatefulSet rollout takes a long time due to large number of replicas,
a `Paused` flag is introduced to allow the user to pause the current rollout process.
The rollout can be resumed by setting the `Paused` flag to false.
* **[M2] Auto Remediation**: When creating new Pods in scaling or rollout workflow, it is possible that
a created Pod cannot reach `Ready` state due to certain node problems. For example, node
misconfiguration may cause constant failures in pulling images or starting the containers.
The `AutoRemediation` flag enables the controller to delete a stuck pending Pod, and
create an [SchedPatch](#SchedPatch) CRD which injects a Pod-to-node anti-affinity rule for
all new Pods created by this controller.
* **[M2] Condition Report**: Leverage the `StatefulSetConditionType` API to report the StatefulSet
condition based on the scaling or roll out results.
## New Controller
@ -58,9 +53,9 @@ weeks release cycle. **M1** is expected to be released by the end of July 2019.
* **[M2]** This controller implements a Pod creation mutating webhook that adds auxiliary
scheduling rules for Pods created by a target workload. The purpose is to change the
Pod specification without modifying the target workload's Pod template, hence avoiding rolling
out all existing Pods, in order to accommodate the new scheduling requirements.
The auxiliary scheduling rules specified in the CRD include affinity, tolerations and node
selectors. The CRD status reports the changed Pods for record.
-- **TBD** --

View File

@ -2,8 +2,8 @@
These tutorials walk through several examples to demonstrate how to use the advanced StatefulSet, SidecarSet and BroadcastJob to deploy and manage applications
- [Install Helm](./helm-install.md)
- [Install Kruise Controller Manager](./kruise-install.md)
- [Use advanced StatefulSet to install Guestbook app](./advanced-statefulset.md)
- [Use SidecarSet to inject a sidecar container](./sidecarset.md)
- [Use Broadcast Job to pre-download image](./broadcastjob.md)

View File

@ -1,8 +1,8 @@
# Install Guestbook Application
This tutorial walks you through an example to install a guestbook application using advanced statefulset.
The guestbook app used is from this [repo](https://github.com/IBM/guestbook/tree/master/v1).
## Installing the Guestbook application using Helm
To install the chart with a release name (application name) of `demo-v1` and a replica count of `20`:
@ -20,6 +20,7 @@ If you don't use helm, you need to install with YAML files as below.
## Install the Guestbook application with YAML files
The commands below install a Redis cluster with 1 master and 2 replicas:
```
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/redis-master-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/redis-master-service.yaml
@ -28,12 +29,14 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/t
```
The commands below create a guestbook application using the advanced StatefulSet:
```
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-statefulset.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-service.yaml
```
Several things to note in the `guestbook-statefulset.yaml`
```yaml
* apiVersion: apps.kruise.io/v1alpha1 # the kruise group version
kind: StatefulSet
@ -61,9 +64,11 @@ Several things to note in the `guestbook-statefulset.yaml`
Now the app has been installed.
## Verify Guestbook Started
Check that the guestbook has started. `statefulset.apps.kruise.io` or the short name `sts.apps.kruise.io` is the resource kind.
`app.kruise.io` postfix needs to be appended due to naming collision with Kubernetes native `statefulset` kind.
Verify that all pods are READY.
```
kubectl get sts.apps.kruise.io
@ -76,11 +81,12 @@ guestbook-v1 20 20 20 20 6m
You can now view the Guestbook in a browser.
* **Local Host:**
If you are running Kubernetes locally, navigate to `http://localhost:3000` to view the guestbook.
* **Remote Host:**
To view the guestbook on a remote host, locate the external IP of the application in the **IP** column of the `kubectl get services` output.
For example, run
```
kubectl get svc
@ -88,13 +94,14 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT
demo-v1-guestbook-kruise LoadBalancer 172.21.2.187 47.101.74.131 3000:31459/TCP,4000:32099/TCP 35m
```
`47.101.74.131` is the external IP.
Visit `http://47.101.74.131:3000` for the guestbook UI.
![Guestbook](./v1/guestbook.jpg)
## In-place update the guestbook to the new image
First, check the running pods.
```
kubectl get pod -L controller-revision-hash -o wide | grep guestbook
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE CONTROLLER-REVISION-HASH
@ -129,6 +136,7 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/t
What this command does is change the image version to `v2` and change `partition` to `15`.
This will update pods with ordinal number >= 15 (i.e., 15 - 19) to image version `v2`. The rest of the pods (0 ~ 14) will remain at version `v1`.
The YAML diff details are shown below:
```yaml
spec:
...
@ -148,15 +156,16 @@ spec:
```
Check the statefulset and find that it has 5 pods updated:
```
kubectl get sts.apps.kruise.io
NAME DESIRED CURRENT UPDATED READY AGE
demo-v1-guestbook-kruise 20 20 5 20 18h
```
Check the pods again. `demo-v1-guestbook-kruise-15` to `demo-v1-guestbook-kruise-19` are updated with `RESTARTS` showing `1`,
IPs remain the same, and `CONTROLLER-REVISION-HASH` is updated from `demo-v1-guestbook-kruise-7c947b5f94` to `demo-v1-guestbook-kruise-576bd76785`:
```
kubectl get pod -L controller-revision-hash -o wide | grep guestbook
@ -185,10 +194,12 @@ demo-v1-guestbook-kruise-9 1/1 Running 0 3m21s
```
Now upgrade all the pods, run
```
kubectl edit sts.apps.kruise.io demo-v1-guestbook-kruise
```
and update `partition` to `0`. All pods will be updated to v2 this time, and all pods' IPs remain `unchanged`. You should also find
that all 20 pods are updated fairly fast because the `maxUnavailable` feature allows parallel updates instead of sequential update.
```
@ -198,6 +209,7 @@ demo-v1-guestbook-kruise 20 20 20 20 18h
```
Describe a pod and find that the events show the original container is killed and a new container is started. This verifies the `in-place` update:
```
kubectl describe pod demo-v1-guestbook-kruise-0
@ -212,6 +224,7 @@ Events:
```
The pods should also be in the `Ready` state; `InPlaceUpdateReady` will be set to `False` right before the in-place update and to `True` after the update is complete:
```yaml
Readiness Gates:
Type Status
@ -235,7 +248,7 @@ First you may want to list your helm apps:
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART
demo-v1 default 1 2019-06-23 13:33:21.278013 +0800 CST deployed guestbook-kruise-0.3.0
```
Then uninstall it:
@ -244,6 +257,7 @@ helm uninstall demo-v1
```
If you are not using helm, delete the application using the commands below:
```
kubectl delete sts.apps.kruise.io demo-v1-guestbook-kruise
kubectl delete svc demo-v1-guestbook-kruise redis-master redis-slave

View File

@ -2,9 +2,10 @@
This tutorial walks you through an example to pre-download an image on nodes with broadcastjob.
## Verify nodes do not have images present
The command below should output nothing.
```
kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
```
@ -14,12 +15,15 @@ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
`kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/broadcastjob.yaml`
Check that the broadcastjob has completed. `bj` is short for `broadcastjob`:
```
$ kubectl get bj
NAME DESIRED ACTIVE SUCCEEDED FAILED AGE
download-image 3 0 3 0 7s
```
Check that the pods have completed.
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -29,8 +33,10 @@ download-image-zc4t4 0/1 Completed 0 61s
```
## Verify images are downloaded on nodes
Now run the same command and check that the images have been downloaded. The testing cluster has 3 nodes, so the command below
will output three entries.
```
$ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
- openkruise/guestbook:v3
@ -38,6 +44,5 @@ $ kubectl get nodes -o yaml | grep "openkruise/guestbook:v3"
- openkruise/guestbook:v3
```
The broadcastjob is configured with `ttlSecondsAfterFinished` set to `60`, meaning the job and its associated pods will be deleted
in `60` seconds after the job is finished.

View File

@ -7,7 +7,7 @@ Or, some of Helm v3 Latest Release on Aliyun OSS:
* [MacOS amd64 tar.gz](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-darwin-amd64.tar.gz)
* [MacOS amd64 zip](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-darwin-amd64.zip)
* [Linux 386](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-linux-386.tar.gz)
* [Linux amd64](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-linux-amd64.tar.gz)
* [Linux arm64](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-linux-arm64.tar.gz)
* [Windows amd64](https://cloudnativeapphub.oss-cn-hangzhou.aliyuncs.com/helm-v3.0.0-alpha.1-windows-amd64.zip)

View File

@ -1,9 +1,11 @@
# Install Kruise Controller Manager
The steps below assume you have an existing Kubernetes cluster running properly.
## Install with YAML files
### Install Kruise CRDs
```
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_broadcastjob.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config/crds/apps_v1alpha1_sidecarset.yaml
@ -17,6 +19,7 @@ kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/config
## Verify Kruise-manager is running
Check that the kruise-manager pod is running:
```
kubectl get pods -n kruise-system

View File

@ -1,4 +1,5 @@
# Inject Sidecar Container with SidecarSet
This tutorial walks you through an example to automatically inject a sidecar container with sidecarset.
## Install Guestbook sidecarset
@ -26,8 +27,7 @@ spec:
ports:
- name: sidecar-server
containerPort: 4000 # different from main guestbook containerPort which is 3000
```
## Installing the application
@ -36,19 +36,22 @@ To install the chart with release name (application name) of `demo-v1`, replica
```bash
helm install demo-v1 apphub/guestbook-kruise --set replicaCount=20,image.repository=openkruise/guestbook,image.tag=v2
```
The Chart is located in [this repo](https://github.com/cloudnativeapp/workshop/tree/master/kubecon2019china/charts/guestbook-kruise).
Alternatively, install the application using YAML files:
```
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-sts-for-sidecar-demo.yaml
kubectl apply -f https://raw.githubusercontent.com/kruiseio/kruise/master/docs/tutorial/v1/guestbook-service-for-sidecar-demo.yaml
```
## Check your application
Check that the guestbook has started. `statefulset.apps.kruise.io` or the short name `sts.apps.kruise.io` is the resource kind.
`app.kruise.io` postfix needs to be appended due to naming collision with Kubernetes native `statefulset` kind.
Verify that all pods are READY.
```
kubectl get sts.apps.kruise.io
NAME DESIRED CURRENT UPDATED READY AGE
@ -103,17 +106,17 @@ Check that the sidecar container is injected.
+ Mounts: <none>
```
## View the Sidecar Guestbook
You can now view the Sidecar Guestbook in a browser.
* **Local Host:**
If you are running Kubernetes locally, navigate to `http://localhost:4000` to view the sidecar guestbook.
* **Remote Host:**
To view the sidecar guestbook on a remote host, locate the external IP of the application in the **IP** column of the `kubectl get services` output.
For example, run
```
kubectl get svc
@ -121,8 +124,7 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
demo-v1-guestbook-kruise LoadBalancer 172.21.2.187 47.101.74.131 3000:31459/TCP,4000:32099/TCP 35m
```
`47.101.74.131` is the external IP.
Visit `http://47.101.74.131:4000` for the sidecar guestbook.
![Guestbook](./v1/guestbook-sidecar.jpg)
@ -139,14 +141,16 @@ First you may want to list your helm apps:
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART
demo-v1 default 1 2019-06-23 13:33:21.278013 +0800 CST deployed guestbook-kruise-0.3.0
```
Then uninstall it:
```
helm uninstall demo-v1
```
If you are not using helm, delete the application using the commands below:
```
kubectl delete sts.apps.kruise.io demo-v1-guestbook-kruise
kubectl delete svc demo-v1-guestbook-kruise redis-master redis-slave