Consolidate YAML files [part-8] (#9347)

* Consolidate YAML files [part-8]

This PR extracts the YAML files referenced from the following subdirs:

- docs/concepts/workloads
- docs/concepts/configuration
- docs/concepts/policy

The following problems are fixed:

- docs/concepts/workloads/controllers/ doesn't have a `cronjob.yaml` for
  the test
- exactly the same `pod.yaml` was used in both the tasks/configure-pod-container
  topic and the concepts/configuration topic.

* Update examples_test.go

* Add missing yaml file.
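The "missing yaml file" is presumably the `cronjob.yaml` for the controllers test called out above. For illustration only, a minimal CronJob manifest of the kind such a file would contain might look like this; the name, schedule, and image below are assumptions, not taken from this commit:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello              # illustrative name, not necessarily the one added here
spec:
  schedule: "*/1 * * * *"  # example schedule: once a minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```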
Qiming authored 2018-07-03 08:35:20 +08:00, committed by k8s-ci-robot
parent 1b914f4ac7
commit b5f6df9926
26 changed files with 58 additions and 58 deletions

View File

@@ -68,9 +68,12 @@ spec:
 Then add a nodeSelector like so:
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
-When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
+When you then run `kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml`,
+the Pod will get scheduled on the node that you attached the label to. You can
+verify that it worked by running `kubectl get pods -o wide` and looking at the
+"NODE" that the Pod was assigned to.
 ## Interlude: built-in node labels
@@ -133,7 +136,7 @@ Node affinity is specified as field `nodeAffinity` of field `affinity` in the Po
 Here's an example of a pod that uses node affinity:
-{{< code file="pod-with-node-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
 This node affinity rule says the pod can only be placed on a node with a label whose key is
 `kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
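For context, `pods/pod-with-node-affinity.yaml` is the standard docs example behind the new shortcode; a sketch is shown below. The pod name and container image are illustrative, but the affinity rule matches the prose in the hunk above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity   # illustrative name
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule onto nodes labelled with
      # kubernetes.io/e2e-az-name=e2e-az1 or e2e-az2, as described above.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0   # any small image works here
```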
@@ -188,7 +191,7 @@ And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `af
 #### An example of a pod that uses pod affinity:
-{{< code file="pod-with-pod-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
 The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the
 `podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
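Similarly, `pods/pod-with-pod-affinity.yaml` pairs one pod-affinity rule with one pod-anti-affinity rule, as the prose says. A sketch follows; the label keys, values, and topology keys are the usual docs ones, shown for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity   # illustrative name
spec:
  affinity:
    podAffinity:
      # Required rule: co-locate with pods labelled security=S1 in the same zone.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      # Preferred rule: avoid nodes already running pods labelled security=S2.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
```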

View File

@@ -194,7 +194,7 @@ $ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -
 Define the example PodSecurityPolicy object in a file. This is a policy that
 simply prevents the creation of privileged pods.
-{{< code file="example-psp.yaml" >}}
+{{< codenew file="policy/example-psp.yaml" >}}
 And create it with kubectl:
@@ -355,13 +355,13 @@ podsecuritypolicy "example" deleted
 This is the least restricted policy you can create, equivalent to not using the
 pod security policy admission controller:
-{{< code file="privileged-psp.yaml" >}}
+{{< codenew file="policy/privileged-psp.yaml" >}}
 This is an example of a restrictive policy that requires users to run as an
 unprivileged user, blocks possible escalations to root, and requires use of
 several security mechanisms.
-{{< code file="restricted-psp.yaml" >}}
+{{< codenew file="policy/restricted-psp.yaml" >}}
 ## Policy Reference
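The `policy/example-psp.yaml` referenced above is the minimal "no privileged pods" policy described in the hunk; roughly, as a sketch with the usual permissive defaults for the required fields:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false   # the only real restriction: no privileged pods
  # The remaining fields are required, so they are filled with permissive values.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```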

View File

@@ -39,11 +39,11 @@ different flags and/or different memory and cpu requests for different hardware
 You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
-{{< code file="daemonset.yaml" >}}
+{{< codenew file="controllers/daemonset.yaml" >}}
 * Create a DaemonSet based on the YAML file:
 ```
-kubectl create -f daemonset.yaml
+kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml
 ```
 ### Required Fields
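For reference, `controllers/daemonset.yaml` is the fluentd-elasticsearch DaemonSet mentioned in the prose. The following is a trimmed sketch: the real file also carries tolerations, resource limits, and an extra volume mount, and the image tag here is illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20   # illustrative tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```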

View File

@@ -40,7 +40,7 @@ The following are typical use cases for Deployments:
 The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
-{{< code file="nginx-deployment.yaml" >}}
+{{< codenew file="controllers/nginx-deployment.yaml" >}}
 In this example:
@@ -71,7 +71,7 @@ The `template` field contains the following instructions:
 To create this Deployment, run the following command:
 ```shell
-kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/docs/concepts/workloads/controllers/nginx-deployment.yaml
+kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml
 ```
 {{< note >}}
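The `controllers/nginx-deployment.yaml` that these commands now point to is the standard three-replica nginx Deployment; roughly, as a sketch with an illustrative image tag:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3              # the "three nginx Pods" from the prose
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9   # illustrative tag
        ports:
        - containerPort: 80
```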
@@ -417,7 +417,7 @@ First, check the revisions of this deployment:
 $ kubectl rollout history deployment/nginx-deployment
 deployments "nginx-deployment"
 REVISION CHANGE-CAUSE
-1 kubectl create -f docs/user-guide/nginx-deployment.yaml --record
+1 kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record
 2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
 3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
 ```

View File

@@ -36,17 +36,17 @@ setting the `ownerReference` field.
 Here's a configuration file for a ReplicaSet that has three Pods:
-{{< code file="my-repset.yaml" >}}
+{{< codenew file="controllers/replicaset.yaml" >}}
 If you create the ReplicaSet and then view the Pod metadata, you can see
 OwnerReferences field:
 ```shell
-kubectl create -f https://k8s.io/docs/concepts/controllers/my-repset.yaml
+kubectl create -f https://k8s.io/examples/controllers/replicaset.yaml
 kubectl get pods --output=yaml
 ```
-The output shows that the Pod owner is a ReplicaSet named my-repset:
+The output shows that the Pod owner is a ReplicaSet named `my-repset`:
 ```shell
 apiVersion: v1
@@ -110,15 +110,15 @@ field on the `deleteOptions` argument when deleting an Object. Possible values i
 Prior to Kubernetes 1.9, the default garbage collection policy for many controller resources was `orphan`.
 This included ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and
-Deployment. For kinds in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 group versions, unless you
-specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the apps/v1
+Deployment. For kinds in the `extensions/v1beta1`, `apps/v1beta1`, and `apps/v1beta2` group versions, unless you
+specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the `apps/v1`
 group version, dependent objects are deleted by default.
 Here's an example that deletes dependents in background:
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
   -H "Content-Type: application/json"
 ```
@@ -127,7 +127,7 @@ Here's an example that deletes dependents in foreground:
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
   -H "Content-Type: application/json"
 ```
@@ -136,7 +136,7 @@ Here's an example that orphans dependents:
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
   -H "Content-Type: application/json"
 ```
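The `controllers/replicaset.yaml` that these hunks switch to defines the `my-repset` ReplicaSet with three Pods used in the garbage-collection examples above; a sketch, with an illustrative pod label:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-repset
spec:
  replicas: 3
  selector:
    matchLabels:
      pod-is-for: garbage-collection-example   # illustrative label
  template:
    metadata:
      labels:
        pod-is-for: garbage-collection-example
    spec:
      containers:
      - name: nginx
        image: nginx
```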

View File

@@ -31,12 +31,12 @@ A Job can also be used to run multiple pods in parallel.
 Here is an example Job config. It computes π to 2000 places and prints it out.
 It takes around 10s to complete.
-{{< code file="job.yaml" >}}
+{{< codenew file="controllers/job.yaml" >}}
 Run the example job by downloading the example file and then running this command:
 ```shell
-$ kubectl create -f ./job.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/job.yaml
 job "pi" created
 ```
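`controllers/job.yaml` is the classic π-to-2000-places Job; roughly, as a sketch where the perl image and the backoff limit are the usual docs values:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        # Compute pi to 2000 decimal places and print it, per the prose above.
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```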

View File

@@ -51,13 +51,13 @@ use a Deployment instead, and define your application in the spec section.
 ## Example
-{{< code file="frontend.yaml" >}}
+{{< codenew file="controllers/frontend.yaml" >}}
 Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
 create the defined ReplicaSet and the pods that it manages.
 ```shell
-$ kubectl create -f frontend.yaml
+$ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
 replicaset "frontend" created
 $ kubectl describe rs/frontend
 Name: frontend
@@ -192,14 +192,14 @@ A ReplicaSet can also be a target for
 a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
 the ReplicaSet we created in the previous example.
-{{< code file="hpa-rs.yaml" >}}
+{{< codenew file="controllers/hpa-rs.yaml" >}}
 Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
 create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
 of the replicated pods.
 ```shell
-kubectl create -f hpa-rs.yaml
+kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml
 ```
 Alternatively, you can use the `kubectl autoscale` command to accomplish the same
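`controllers/hpa-rs.yaml` targets the `frontend` ReplicaSet from the earlier hunk; a sketch, where the HPA name, replica bounds, and CPU target are illustrative rather than taken from this diff:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler     # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend          # the ReplicaSet created in the previous example
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```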

View File

@@ -44,12 +44,12 @@ service, such as web servers.
 This example ReplicationController config runs three copies of the nginx web server.
-{{< code file="replication.yaml" >}}
+{{< codenew file="controllers/replication.yaml" >}}
 Run the example job by downloading the example file and then running this command:
 ```shell
-$ kubectl create -f ./replication.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml
 replicationcontroller "nginx" created
 ```
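`controllers/replication.yaml` is the three-replica nginx ReplicationController described above; roughly, as a sketch with the usual docs image and port:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3               # "three copies of the nginx web server"
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```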

View File

@@ -57,12 +57,12 @@ This pod configuration file describes a pod that has a node selector,
 `disktype: ssd`. This means that the pod will get scheduled on a node that has
 a `disktype=ssd` label.
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
 1. Use the configuration file to create a pod that will get scheduled on your
    chosen node:
-        kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/pod.yaml
+        kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
 1. Verify that the pod is running on your chosen node:
@@ -80,4 +80,3 @@ Learn more about
 [labels and selectors](/docs/concepts/overview/working-with-objects/labels/).
 {{% /capture %}}

View File

@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-  labels:
-    env: test
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    imagePullPolicy: IfNotPresent
-  nodeSelector:
-    disktype: ssd

View File

@@ -7,10 +7,8 @@ spec:
     matchLabels:
       app: nginx
   replicas: 2 # tells deployment to run 2 pods matching the template
-  template: # create pods using pod definition in this template
+  template:
     metadata:
-      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
-      # generated from the deployment name
       labels:
         app: nginx
     spec:

View File

@@ -306,12 +306,6 @@ func TestExampleObjectSchemas(t *testing.T) {
             "nginx-deployment": {&extensions.Deployment{}},
             "nginx-svc": {&api.Service{}},
         },
-        "docs/concepts/configuration": {
-            "commands": {&api.Pod{}},
-            "pod": {&api.Pod{}},
-            "pod-with-node-affinity": {&api.Pod{}},
-            "pod-with-pod-affinity": {&api.Pod{}},
-        },
         "docs/concepts/overview/working-with-objects": {
             "nginx-deployment": {&extensions.Deployment{}},
         },
@@ -400,7 +394,6 @@ func TestExampleObjectSchemas(t *testing.T) {
             "memory-request-limit-3": {&api.Pod{}},
             "oir-pod": {&api.Pod{}},
             "oir-pod-2": {&api.Pod{}},
-            "pod": {&api.Pod{}},
             "pod-redis": {&api.Pod{}},
             "private-reg-pod": {&api.Pod{}},
             "projected-volume": {&api.Pod{}},
@@ -486,6 +479,26 @@ func TestExampleObjectSchemas(t *testing.T) {
         "examples/application/zookeeper": {
             "zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
         },
+        "examples/controllers": {
+            "daemonset": {&extensions.DaemonSet{}},
+            "frontend": {&extensions.ReplicaSet{}},
+            "hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}},
+            "job": {&batch.Job{}},
+            "replicaset": {&extensions.ReplicaSet{}},
+            "replication": {&api.ReplicationController{}},
+            "nginx-deployment": {&extensions.Deployment{}},
+        },
+        "examples/pods": {
+            "commands": {&api.Pod{}},
+            "pod-nginx": {&api.Pod{}},
+            "pod-with-node-affinity": {&api.Pod{}},
+            "pod-with-pod-affinity": {&api.Pod{}},
+        },
+        "examples/policy": {
+            "privileged-psp": {&policy.PodSecurityPolicy{}},
+            "restricted-psp": {&policy.PodSecurityPolicy{}},
+            "example-psp": {&policy.PodSecurityPolicy{}},
+        },
         "docs/tasks/run-application": {
             "deployment-patch-demo": {&extensions.Deployment{}},
             "hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}},