Consolidate YAML files [part-8] (#9347)
* Consolidate YAML files [part-8]

  This PR extracts the YAML files referenced from the following subdirs:

  - docs/concepts/workloads
  - docs/concepts/configuration
  - docs/concepts/policy

  The following problems are fixed:

  - docs/concepts/workloads/controllers/ doesn't have a `cronjob.yaml` for test
  - exactly the same `pod.yaml` was used in both the docs/tasks/configure-pod-container topic and the docs/concepts/configuration topic

* Update examples_test.go

* Add missing yaml file.
parent 1b914f4ac7
commit b5f6df9926
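The commit message calls out a missing `cronjob.yaml` test fixture under docs/concepts/workloads/controllers/, but that file does not appear in the diff below. Purely as an illustration of what such a fixture typically contains (the name, schedule, and image here are assumptions, not the actual file added by this PR), a minimal CronJob manifest looks like:

```yaml
apiVersion: batch/v1beta1      # CronJob API version around the time of this PR (assumed)
kind: CronJob
metadata:
  name: hello                  # illustrative name
spec:
  schedule: "*/1 * * * *"      # run once per minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox     # illustrative image
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```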
@@ -68,9 +68,12 @@ spec:
 
 Then add a nodeSelector like so:
 
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
 
-When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
+When you then run `kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml`,
+the Pod will get scheduled on the node that you attached the label to. You can
+verify that it worked by running `kubectl get pods -o wide` and looking at the
+"NODE" that the Pod was assigned to.
 
 ## Interlude: built-in node labels
 
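For reference, the `pods/pod-nginx.yaml` that the new shortcode and URL point at presumably carries the same content as the `pod.yaml` deleted later in this commit: an nginx Pod pinned to nodes labelled `disktype: ssd`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd    # only schedules onto nodes carrying this label
```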
@@ -133,7 +136,7 @@ Node affinity is specified as field `nodeAffinity` of field `affinity` in the Po
 
 Here's an example of a pod that uses node affinity:
 
-{{< code file="pod-with-node-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
 
 This node affinity rule says the pod can only be placed on a node with a label whose key is
 `kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
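The rule described in this hunk translates to an affinity stanza like the one below; the label key and values come from the surrounding text, while the Pod name and image are placeholders rather than the actual `pods/pod-with-node-affinity.yaml`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity      # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:
  - name: with-node-affinity
    image: nginx                # placeholder image
```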
@@ -188,7 +191,7 @@ And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `af
 
 #### An example of a pod that uses pod affinity:
 
-{{< code file="pod-with-pod-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
 
 The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the
 `podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
@@ -344,4 +347,4 @@ as well, which allow a *node* to *repel* a set of pods.
 
 {{% capture whatsnext %}}
 
-{{% /capture %}}
+{{% /capture %}}
@@ -194,7 +194,7 @@ $ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -
 Define the example PodSecurityPolicy object in a file. This is a policy that
 simply prevents the creation of privileged pods.
 
-{{< code file="example-psp.yaml" >}}
+{{< codenew file="policy/example-psp.yaml" >}}
 
 And create it with kubectl:
 
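The text describes `policy/example-psp.yaml` as a policy whose only restriction is blocking privileged pods; a PodSecurityPolicy with that behaviour looks roughly like this (the remaining fields are permissive placeholders required by the API, not necessarily the exact file):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false        # the only real restriction: no privileged pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```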
@@ -355,13 +355,13 @@ podsecuritypolicy "example" deleted
 This is the least restricted policy you can create, equivalent to not using the
 pod security policy admission controller:
 
-{{< code file="privileged-psp.yaml" >}}
+{{< codenew file="policy/privileged-psp.yaml" >}}
 
 This is an example of a restrictive policy that requires users to run as an
 unprivileged user, blocks possible escalations to root, and requires use of
 several security mechanisms.
 
-{{< code file="restricted-psp.yaml" >}}
+{{< codenew file="policy/restricted-psp.yaml" >}}
 
 ## Policy Reference
 
@@ -574,4 +574,4 @@ default cannot be changed.
 Controlled via annotations on the PodSecurityPolicy. Refer to the [Sysctl documentation](
 /docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy-annotations).
 
-{{% /capture %}}
+{{% /capture %}}
@@ -39,11 +39,11 @@ different flags and/or different memory and cpu requests for different hardware
 
 You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
 
-{{< code file="daemonset.yaml" >}}
+{{< codenew file="controllers/daemonset.yaml" >}}
 
 * Create a DaemonSet based on the YAML file:
 ```
-kubectl create -f daemonset.yaml
+kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml
 ```
 
 ### Required Fields
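For context, a fluentd-elasticsearch DaemonSet of the kind `controllers/daemonset.yaml` describes looks roughly like the trimmed sketch below (the image tag and the hostPath mount are assumptions, not the file itself):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20   # assumed tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```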
@@ -252,4 +252,4 @@ number of replicas and rolling out updates are more important than controlling e
 the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always run on
 all or certain hosts, and when it needs to start before other Pods.
 
-{{% /capture %}}
+{{% /capture %}}
@@ -40,7 +40,7 @@ The following are typical use cases for Deployments:
 
 The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
 
-{{< code file="nginx-deployment.yaml" >}}
+{{< codenew file="controllers/nginx-deployment.yaml" >}}
 
 In this example:
 
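Per the surrounding text, `controllers/nginx-deployment.yaml` brings up three nginx Pods through a ReplicaSet; the manifest is along these lines (the image tag is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9   # assumed tag
        ports:
        - containerPort: 80
```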
@@ -71,7 +71,7 @@ The `template` field contains the following instructions:
 To create this Deployment, run the following command:
 
 ```shell
-kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/docs/concepts/workloads/controllers/nginx-deployment.yaml
+kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml
 ```
 
 {{< note >}}
@@ -417,7 +417,7 @@ First, check the revisions of this deployment:
 $ kubectl rollout history deployment/nginx-deployment
 deployments "nginx-deployment"
 REVISION    CHANGE-CAUSE
-1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
+1           kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record
 2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
 3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91
 ```
@@ -36,17 +36,17 @@ setting the `ownerReference` field.
 
 Here's a configuration file for a ReplicaSet that has three Pods:
 
-{{< code file="my-repset.yaml" >}}
+{{< codenew file="controllers/replicaset.yaml" >}}
 
 If you create the ReplicaSet and then view the Pod metadata, you can see
 OwnerReferences field:
 
 ```shell
-kubectl create -f https://k8s.io/docs/concepts/controllers/my-repset.yaml
+kubectl create -f https://k8s.io/examples/controllers/replicaset.yaml
 kubectl get pods --output=yaml
 ```
 
-The output shows that the Pod owner is a ReplicaSet named my-repset:
+The output shows that the Pod owner is a ReplicaSet named `my-repset`:
 
 ```shell
 apiVersion: v1
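The text says `controllers/replicaset.yaml` defines a ReplicaSet named `my-repset` with three Pods, so the manifest presumably resembles the following (the label key and pod template are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-repset
spec:
  replicas: 3
  selector:
    matchLabels:
      pod-is-for: garbage-collection-example   # illustrative label
  template:
    metadata:
      labels:
        pod-is-for: garbage-collection-example
    spec:
      containers:
      - name: nginx
        image: nginx
```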
@@ -110,15 +110,15 @@ field on the `deleteOptions` argument when deleting an Object. Possible values i
 
 Prior to Kubernetes 1.9, the default garbage collection policy for many controller resources was `orphan`.
 This included ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and
-Deployment. For kinds in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 group versions, unless you
-specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the apps/v1
+Deployment. For kinds in the `extensions/v1beta1`, `apps/v1beta1`, and `apps/v1beta2` group versions, unless you
+specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the `apps/v1`
 group version, dependent objects are deleted by default.
 
 Here's an example that deletes dependents in background:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
   -H "Content-Type: application/json"
 ```
@@ -127,7 +127,7 @@ Here's an example that deletes dependents in foreground:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
   -H "Content-Type: application/json"
 ```
@@ -136,7 +136,7 @@ Here's an example that orphans dependents:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  -H "Content-Type: application/json"
 ```
@@ -31,12 +31,12 @@ A Job can also be used to run multiple pods in parallel.
 Here is an example Job config. It computes π to 2000 places and prints it out.
 It takes around 10s to complete.
 
-{{< code file="job.yaml" >}}
+{{< codenew file="controllers/job.yaml" >}}
 
 Run the example job by downloading the example file and then running this command:
 
 ```shell
-$ kubectl create -f ./job.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/job.yaml
 job "pi" created
 ```
 
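From the description (a Job named `pi` that computes π to 2000 places), `controllers/job.yaml` is presumably the classic perl bpi example, roughly:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4    # assumed retry limit
```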
@@ -401,4 +401,4 @@ object, but complete control over what pods are created and how work is assigned
 
 Support for creating Jobs at specified times/dates (i.e. cron) is available in Kubernetes [1.4](https://github.com/kubernetes/kubernetes/pull/11980). More information is available in the [cron job documents](/docs/concepts/workloads/controllers/cron-jobs/)
 
-{{% /capture %}}
+{{% /capture %}}
@@ -51,13 +51,13 @@ use a Deployment instead, and define your application in the spec section.
 
 ## Example
 
-{{< code file="frontend.yaml" >}}
+{{< codenew file="controllers/frontend.yaml" >}}
 
 Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
 create the defined ReplicaSet and the pods that it manages.
 
 ```shell
-$ kubectl create -f frontend.yaml
+$ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
 replicaset "frontend" created
 $ kubectl describe rs/frontend
 Name: frontend
@@ -192,14 +192,14 @@ A ReplicaSet can also be a target for
 a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
 the ReplicaSet we created in the previous example.
 
-{{< code file="hpa-rs.yaml" >}}
+{{< codenew file="controllers/hpa-rs.yaml" >}}
 
 Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
 create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
 of the replicated pods.
 
 ```shell
-kubectl create -f hpa-rs.yaml
+kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml
 ```
 
 Alternatively, you can use the `kubectl autoscale` command to accomplish the same
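An HPA that targets the `frontend` ReplicaSet on CPU usage, as described above, looks roughly like this (the name and the min/max/target numbers are assumptions):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler        # assumed name
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend
  minReplicas: 3               # assumed bounds
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```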
@@ -44,12 +44,12 @@ service, such as web servers.
 
 This example ReplicationController config runs three copies of the nginx web server.
 
-{{< code file="replication.yaml" >}}
+{{< codenew file="controllers/replication.yaml" >}}
 
 Run the example job by downloading the example file and then running this command:
 
 ```shell
-$ kubectl create -f ./replication.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml
 replicationcontroller "nginx" created
 ```
 
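Per the text, `controllers/replication.yaml` runs three copies of the nginx web server; a ReplicationController to that effect (container details are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```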
@@ -57,12 +57,12 @@ This pod configuration file describes a pod that has a node selector,
 `disktype: ssd`. This means that the pod will get scheduled on a node that has
 a `disktype=ssd` label.
 
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
 
 1. Use the configuration file to create a pod that will get scheduled on your
 chosen node:
 
-    kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/pod.yaml
+    kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
 
 1. Verify that the pod is running on your chosen node:
 
@@ -80,4 +80,3 @@ Learn more about
 [labels and selectors](/docs/concepts/overview/working-with-objects/labels/).
 {{% /capture %}}
 
-
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-  labels:
-    env: test
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    imagePullPolicy: IfNotPresent
-  nodeSelector:
-    disktype: ssd
@@ -7,10 +7,8 @@ spec:
     matchLabels:
       app: nginx
   replicas: 2 # tells deployment to run 2 pods matching the template
-  template: # create pods using pod definition in this template
+  template:
     metadata:
-      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
-      # generated from the deployment name
       labels:
         app: nginx
     spec:
@@ -306,12 +306,6 @@ func TestExampleObjectSchemas(t *testing.T) {
     "nginx-deployment": {&extensions.Deployment{}},
     "nginx-svc": {&api.Service{}},
   },
-  "docs/concepts/configuration": {
-    "commands": {&api.Pod{}},
-    "pod": {&api.Pod{}},
-    "pod-with-node-affinity": {&api.Pod{}},
-    "pod-with-pod-affinity": {&api.Pod{}},
-  },
   "docs/concepts/overview/working-with-objects": {
     "nginx-deployment": {&extensions.Deployment{}},
   },
@@ -400,7 +394,6 @@ func TestExampleObjectSchemas(t *testing.T) {
     "memory-request-limit-3": {&api.Pod{}},
     "oir-pod": {&api.Pod{}},
     "oir-pod-2": {&api.Pod{}},
-    "pod": {&api.Pod{}},
     "pod-redis": {&api.Pod{}},
     "private-reg-pod": {&api.Pod{}},
     "projected-volume": {&api.Pod{}},
@@ -486,6 +479,26 @@ func TestExampleObjectSchemas(t *testing.T) {
   "examples/application/zookeeper": {
     "zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
   },
+  "examples/controllers": {
+    "daemonset": {&extensions.DaemonSet{}},
+    "frontend": {&extensions.ReplicaSet{}},
+    "hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}},
+    "job": {&batch.Job{}},
+    "replicaset": {&extensions.ReplicaSet{}},
+    "replication": {&api.ReplicationController{}},
+    "nginx-deployment": {&extensions.Deployment{}},
+  },
+  "examples/pods": {
+    "commands": {&api.Pod{}},
+    "pod-nginx": {&api.Pod{}},
+    "pod-with-node-affinity": {&api.Pod{}},
+    "pod-with-pod-affinity": {&api.Pod{}},
+  },
+  "examples/policy": {
+    "privileged-psp": {&policy.PodSecurityPolicy{}},
+    "restricted-psp": {&policy.PodSecurityPolicy{}},
+    "example-psp": {&policy.PodSecurityPolicy{}},
+  },
   "docs/tasks/run-application": {
     "deployment-patch-demo": {&extensions.Deployment{}},
     "hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}},