Consolidate YAML files [part-6] (#9261)

* Consolidate YAML files [part-6]

This PR relocates the YAML files used by the stateful application
examples.

* Update examples_test.go
This commit is contained in:
Qiming 2018-07-03 04:37:19 +08:00 committed by k8s-ci-robot
parent ea11ae29ac
commit 3a0c618734
13 changed files with 154 additions and 132 deletions


@@ -1,17 +0,0 @@
# This is an image with Percona XtraBackup, mysql-client and ncat installed.
FROM debian:jessie
RUN \
echo "deb http://repo.percona.com/apt jessie main" > /etc/apt/sources.list.d/percona.list \
&& echo "deb-src http://repo.percona.com/apt jessie main" >> /etc/apt/sources.list.d/percona.list \
&& apt-key adv --keyserver keys.gnupg.net --recv-keys 8507EFA5
RUN \
apt-get update && apt-get install -y --no-install-recommends \
percona-xtrabackup-24 \
mysql-client \
nmap \
&& rm -rf /var/lib/apt/lists/*
CMD ["bash"]


@@ -60,7 +60,7 @@ example presented in the
It creates a [Headless Service](/docs/concepts/services-networking/service/#headless-services),
`nginx`, to publish the IP addresses of Pods in the StatefulSet, `web`.
{{< code file="web.yaml" >}}
{{< codenew file="application/web/web.yaml" >}}
Download the example above, and save it to a file named `web.yaml`
@@ -929,9 +929,9 @@ terminate all Pods in parallel, and not to wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another
Pod.
{{< code file="webp.yaml" >}}
{{< codenew file="application/web/web-parallel.yaml" >}}
Download the example above, and save it to a file named `webp.yaml`
Download the example above, and save it to a file named `web-parallel.yaml`
This manifest is identical to the one you downloaded above except that the `.spec.podManagementPolicy`
of the `web` StatefulSet is set to `Parallel`.
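The change described above comes down to a single field on the StatefulSet spec; a minimal sketch of the relevant excerpt (surrounding fields elided, values illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  podManagementPolicy: Parallel  # default is OrderedReady
  serviceName: nginx
  replicas: 2
```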
@@ -945,7 +945,7 @@ kubectl get po -l app=nginx -w
In another terminal, create the StatefulSet and Service in the manifest.
```shell
kubectl create -f webp.yaml
kubectl create -f web-parallel.yaml
service "nginx" created
statefulset "web" created
```


@@ -15,7 +15,10 @@ Deploying stateful distributed applications, like Cassandra, within a clustered
The Pods use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile)
image from Google's [container registry](https://cloud.google.com/container-registry/docs/).
The docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) and includes OpenJDK 8. This image includes a standard Cassandra installation from the Apache Debian repo. By using environment variables you can change values that are inserted into `cassandra.yaml`.
The docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)
and includes OpenJDK 8.
This image includes a standard Cassandra installation from the Apache Debian repo.
By using environment variables you can change values that are inserted into `cassandra.yaml`.
| ENV VAR | DEFAULT VALUE |
| ------------- |:-------------: |
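For illustration, environment variables of this kind are set on the `cassandra` container in the StatefulSet manifest roughly as below (a sketch; the `CASSANDRA_DC` and `CASSANDRA_RACK` values match the nodetool output shown later in this tutorial, other values are illustrative):

```yaml
containers:
  - name: cassandra
    image: gcr.io/google-samples/cassandra:v13
    env:
      - name: CASSANDRA_CLUSTER_NAME
        value: K8Demo
      - name: CASSANDRA_DC
        value: DC1-K8Demo
      - name: CASSANDRA_RACK
        value: Rack1-K8Demo
      - name: MAX_HEAP_SIZE
        value: 512M
```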
@@ -38,7 +41,8 @@ To complete this tutorial, you should already have a basic familiarity with [Pod
* [Install and Configure](/docs/tasks/tools/install-kubectl/) the `kubectl` command line
* Download [cassandra-service.yaml](/docs/tutorials/stateful-application/cassandra/cassandra-service.yaml) and [cassandra-statefulset.yaml](/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml)
* Download [cassandra-service.yaml](/examples/application/cassandra/cassandra-service.yaml)
and [cassandra-statefulset.yaml](/examples/application/cassandra/cassandra-statefulset.yaml)
* Have a supported Kubernetes Cluster running
@@ -64,12 +68,14 @@ A Kubernetes [Service](/docs/concepts/services-networking/service/) describes a
The following `Service` is used for DNS lookups between Cassandra Pods and clients within the Kubernetes Cluster.
{{< codenew file="application/cassandra/cassandra-service.yaml" >}}
1. Launch a terminal window in the directory you downloaded the manifest files.
2. Create a `Service` to track all Cassandra StatefulSet Nodes from the `cassandra-service.yaml` file:
1. Create a `Service` to track all Cassandra StatefulSet Nodes from the `cassandra-service.yaml` file:
kubectl create -f cassandra-service.yaml
{{< code file="cassandra/cassandra-service.yaml" >}}
```shell
kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
```
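The `https://k8s.io/examples/...` URLs used above map one-to-one onto the relocated in-repo paths that this PR introduces; a quick sketch of the mapping (`example_url` is a hypothetical helper, not part of the tutorial):

```shell
# k8s.io/examples/<path> serves the file stored under examples/<path>
# in the website repository, so the URL is just the path with a prefix.
example_url() {
  printf 'https://k8s.io/%s\n' "$1"
}

example_url examples/application/cassandra/cassandra-service.yaml
# → https://k8s.io/examples/application/cassandra/cassandra-service.yaml
```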
### Validating (optional)
@@ -92,106 +98,127 @@ The StatefulSet manifest, included below, creates a Cassandra ring that consists
**Note:** This example uses the default provisioner for Minikube. Please update the following StatefulSet for the cloud you are working with.
{{< /note >}}
{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
1. Update the StatefulSet if necessary.
2. Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file:
1. Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file:
kubectl create -f cassandra-statefulset.yaml
{{< code file="cassandra/cassandra-statefulset.yaml" >}}
```shell
kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
```
## Validating The Cassandra StatefulSet
1. Get the Cassandra StatefulSet:
kubectl get statefulset cassandra
```
kubectl get statefulset cassandra
```
The response should be
The response should be
NAME DESIRED CURRENT AGE
cassandra 3 0 13s
```
NAME DESIRED CURRENT AGE
cassandra 3 0 13s
```
The StatefulSet resource deploys Pods sequentially.
The StatefulSet resource deploys Pods sequentially.
2. Get the Pods to see the ordered creation status:
1. Get the Pods to see the ordered creation status:
kubectl get pods -l="app=cassandra"
```shell
kubectl get pods -l="app=cassandra"
```
The response should be
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
The response should be
{{< note >}}
**Note:** It can take up to ten minutes for all three Pods to deploy.
{{< /note >}}
```
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
```
Once all Pods are deployed, the same command returns:
**Note:** It can take up to ten minutes for all three Pods to deploy.
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 10m
cassandra-1 1/1 Running 0 9m
cassandra-2 1/1 Running 0 8m
Once all Pods are deployed, the same command returns:
3. Run the Cassandra utility nodetool to display the status of the ring.
```
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 10m
cassandra-1 1/1 Running 0 9m
cassandra-2 1/1 Running 0 8m
```
kubectl exec cassandra-0 -- nodetool status
1. Run the Cassandra utility nodetool to display the status of the ring.
The response is:
```shell
kubectl exec cassandra-0 -- nodetool status
```
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo
UN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo
UN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo
The response is:
```
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo
UN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo
UN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo
```
## Modifying the Cassandra StatefulSet
Use `kubectl edit` to modify the size of a Cassandra StatefulSet.
1. Run the following command:
kubectl edit statefulset cassandra
```
kubectl edit statefulset cassandra
```
This command opens an editor in your terminal. The line you need to change is the `replicas` field.
This command opens an editor in your terminal. The line you need to change is the `replicas` field.
{{< note >}}
**Note:** The following sample is an excerpt of the StatefulSet file.
{{< /note >}}
**Note:** The following sample is an excerpt of the StatefulSet file.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "323"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "323"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
```
2. Change the number of replicas to 4, and then save the manifest.
1. Change the number of replicas to 4, and then save the manifest.
The StatefulSet now contains 4 Pods.
3. Get the Cassandra StatefulSet to verify:
1. Get the Cassandra StatefulSet to verify:
kubectl get statefulset cassandra
```shell
kubectl get statefulset cassandra
```
The response should be
The response should be
NAME DESIRED CURRENT AGE
cassandra 4 4 36m
```
NAME DESIRED CURRENT AGE
cassandra 4 4 36m
```
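For reference, after the edit the excerpt shown earlier in the editor differs only in the one field (sketch):

```yaml
# only this field changes relative to the excerpt shown in the editor
spec:
  replicas: 4
```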
{{% /capture %}}
@@ -204,19 +231,24 @@ Deleting or scaling a StatefulSet down does not delete the volumes associated wi
1. Run the following commands to delete everything in a `StatefulSet`:
grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
```shell
grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
&& kubectl delete statefulset -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
```
2. Run the following command to delete the Cassandra `Service`.
1. Run the following command to delete the Cassandra `Service`.
kubectl delete service -l app=cassandra
```
kubectl delete service -l app=cassandra
```
{{% /capture %}}
{{% capture whatsnext %}}
* Learn how to [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/).
* Learn more about the [KubernetesSeedProvider](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
* See more custom [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md)


@@ -36,9 +36,9 @@ A [PersistentVolume](/docs/concepts/storage/persistent-volumes/) (PV) is a piece
Download the following configuration files:
1. [mysql-deployment.yaml](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml)
1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml)
1. [wordpress-deployment.yaml](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml)
1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml)
{{% /capture %}}
@@ -96,12 +96,12 @@ A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piec
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The `MYSQL_ROOT_PASSWORD` environment variable sets the database password from the Secret.
{{< code file="mysql-wordpress-persistent-volume/mysql-deployment.yaml" >}}
{{< codenew file="application/wordpress/mysql-deployment.yaml" >}}
1. Deploy MySQL from the `mysql-deployment.yaml` file:
```shell
kubectl create -f mysql-deployment.yaml
kubectl create -f https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
```
2. Verify that a PersistentVolume got dynamically provisioned. Note that it can
@@ -137,12 +137,12 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c
The following manifest describes a single-instance WordPress Deployment and Service. It uses many of the same features like a PVC for persistent storage and a Secret for the password. But it also uses a different setting: `type: LoadBalancer`. This setting exposes WordPress to traffic from outside of the cluster.
{{< code file="mysql-wordpress-persistent-volume/wordpress-deployment.yaml" >}}
{{< codenew file="application/wordpress/wordpress-deployment.yaml" >}}
1. Create a WordPress Service and Deployment from the `wordpress-deployment.yaml` file:
```shell
kubectl create -f wordpress-deployment.yaml
kubectl create -f https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
```
2. Verify that a PersistentVolume got dynamically provisioned:
@@ -231,4 +231,3 @@ The following manifest describes a single-instance WordPress Deployment and Serv
{{% /capture %}}


@@ -76,14 +76,14 @@ a [Service](/docs/concepts/services-networking/service/),
a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget),
and a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
{{< code file="zookeeper.yaml" >}}
{{< codenew file="application/zookeeper/zookeeper.yaml" >}}
Open a terminal, and use the
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) command to create the
manifest.
```shell
kubectl apply -f https://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
```
This creates the `zk-hs` Headless Service, the `zk-cs` Service,
@@ -343,7 +343,7 @@ zk-0 0/1 Terminating 0 11m
Reapply the manifest in `zookeeper.yaml`.
```shell
kubectl apply -f https://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
```
This creates the `zk` StatefulSet object, but the other API objects in the manifest are not modified because they already exist.
@@ -792,14 +792,14 @@ For a ZooKeeper server, liveness implies readiness. Therefore, the readiness
probe from the `zookeeper.yaml` manifest is identical to the liveness probe.
```yaml
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 15
timeoutSeconds: 5
```
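Per the statement above that the two probes are identical, the matching livenessProbe stanza from the same manifest would read (sketch, differing only in its key):

```yaml
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
```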
Even though the liveness and readiness probes are identical, it is important
@@ -1065,7 +1065,11 @@ Attempt to drain the node on which `zk-2` is scheduled.
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```
The output:
```
node "kubernetes-minion-group-i4c4" already cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
pod "heapster-v1.2.0-2604621511-wht1r" deleted
@@ -1079,7 +1083,9 @@ Uncordon the second node to allow `zk-2` to be rescheduled.
```shell
kubectl uncordon kubernetes-minion-group-ixsl
```
```
node "kubernetes-minion-group-ixsl" uncordoned
```
@@ -1089,10 +1095,11 @@ You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure
{{% capture cleanup %}}
- Use `kubectl uncordon` to uncordon all the nodes in your cluster.
- You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
{{% /capture %}}
- Use `kubectl uncordon` to uncordon all the nodes in your cluster.
- You will need to delete the persistent storage media for the PersistentVolumes
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.
{{% /capture %}}


@@ -471,6 +471,21 @@ func TestExampleObjectSchemas(t *testing.T) {
"redis-slave-deployment": {&extensions.Deployment{}},
"redis-slave-service": {&api.Service{}},
},
"examples/application/cassandra": {
"cassandra-service": {&api.Service{}},
"cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}},
},
"examples/application/web": {
"web": {&api.Service{}, &apps.StatefulSet{}},
"web-parallel": {&api.Service{}, &apps.StatefulSet{}},
},
"examples/application/wordpress": {
"mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
"wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
},
"examples/application/zookeeper": {
"zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
},
"docs/tasks/run-application": {
"deployment-patch-demo": {&extensions.Deployment{}},
"hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}},
@@ -505,20 +520,6 @@ func TestExampleObjectSchemas(t *testing.T) {
"simple_deployment": {&extensions.Deployment{}},
"update_deployment": {&extensions.Deployment{}},
},
"docs/tutorials/stateful-application": {
"web": {&api.Service{}, &apps.StatefulSet{}},
"webp": {&api.Service{}, &apps.StatefulSet{}},
"zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
},
"docs/tutorials/stateful-application/cassandra": {
"cassandra-service": {&api.Service{}},
"cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}},
},
"docs/tutorials/stateful-application/mysql-wordpress-persistent-volume": {
"local-volumes": {&api.PersistentVolume{}, &api.PersistentVolume{}},
"mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
"wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
},
}
// Note a key in the following map has to be complete relative path
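The relocation above boils down to re-keying the fixture map by directory; a toy sketch (hypothetical `exampleFiles` helper, not code from the repo) of how a directory key and the file basenames combine into the manifest paths the test validates:

```go
package main

import (
	"fmt"
	"path"
)

// exampleFiles joins a fixture-map directory key with each file basename,
// adding the ".yaml" suffix, mirroring how the test locates manifests.
func exampleFiles(dir string, names []string) []string {
	paths := make([]string, 0, len(names))
	for _, n := range names {
		paths = append(paths, path.Join(dir, n+".yaml"))
	}
	return paths
}

func main() {
	for _, p := range exampleFiles("examples/application/web", []string{"web", "web-parallel"}) {
		fmt.Println(p)
	}
	// prints:
	// examples/application/web/web.yaml
	// examples/application/web/web-parallel.yaml
}
```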