diff --git a/content/en/docs/concepts/configuration/assign-pod-node.md b/content/en/docs/concepts/configuration/assign-pod-node.md
index d6ceafc368..7bca696bb5 100644
--- a/content/en/docs/concepts/configuration/assign-pod-node.md
+++ b/content/en/docs/concepts/configuration/assign-pod-node.md
@@ -68,9 +68,12 @@ spec:
 
 Then add a nodeSelector like so:
 
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
 
-When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the pod was assigned to.
+When you then run `kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml`,
+the Pod will get scheduled on the node that you attached the label to. You can
+verify that it worked by running `kubectl get pods -o wide` and looking at the
+"NODE" that the Pod was assigned to.
 
 ## Interlude: built-in node labels
 
@@ -133,7 +136,7 @@ Node affinity is specified as field `nodeAffinity` of field `affinity` in the Po
 
 Here's an example of a pod that uses node affinity:
 
-{{< code file="pod-with-node-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
 
 This node affinity rule says the pod can only be placed on a node with a label whose key is
 `kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
@@ -188,7 +191,7 @@ And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `af
 
 #### An example of a pod that uses pod affinity:
 
-{{< code file="pod-with-pod-affinity.yaml" >}}
+{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
 
 The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this
 example, the `podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
@@ -344,4 +347,4 @@ as well, which allow a *node* to *repel* a set of pods.
 
 {{% capture whatsnext %}}
 
-{{% /capture %}}
\ No newline at end of file
+{{% /capture %}}
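For context while reviewing: the new `pods/pod-nginx.yaml` target is a rename of the old `configuration/pod.yaml`, and its content is presumably identical to the duplicate `pod.yaml` deleted from `docs/tasks/configure-pod-container/` later in this diff. Under that assumption, the manifest the shortcode now pulls in is:

```yaml
# Sketch of examples/pods/pod-nginx.yaml, assuming it matches the duplicate
# pod.yaml removed later in this diff: a Pod that is only schedulable on
# nodes carrying the disktype=ssd label.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
```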
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md
index 5205363187..9a1771980a 100644
--- a/content/en/docs/concepts/policy/pod-security-policy.md
+++ b/content/en/docs/concepts/policy/pod-security-policy.md
@@ -194,7 +194,7 @@ $ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -
 Define the example PodSecurityPolicy object in a file. This is a policy that
 simply prevents the creation of privileged pods.
 
-{{< code file="example-psp.yaml" >}}
+{{< codenew file="policy/example-psp.yaml" >}}
 
 And create it with kubectl:
 
@@ -355,13 +355,13 @@ podsecuritypolicy "example" deleted
 This is the least restricted policy you can create, equivalent to not using the
 pod security policy admission controller:
 
-{{< code file="privileged-psp.yaml" >}}
+{{< codenew file="policy/privileged-psp.yaml" >}}
 
 This is an example of a restrictive policy that requires users to run as an
 unprivileged user, blocks possible escalations to root, and requires use of
 several security mechanisms.
 
-{{< code file="restricted-psp.yaml" >}}
+{{< codenew file="policy/restricted-psp.yaml" >}}
 
 ## Policy Reference
 
@@ -574,4 +574,4 @@ default cannot be changed. Controlled via annotations on the PodSecurityPolicy.
 Refer to the [Sysctl documentation](
 /docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy-annotations).
 
-{{% /capture %}}
\ No newline at end of file
+{{% /capture %}}
diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md
index 76e44aedd0..880ad43961 100644
--- a/content/en/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -39,11 +39,11 @@ different flags and/or different memory and cpu requests for different hardware
 You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below
 describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
 
-{{< code file="daemonset.yaml" >}}
+{{< codenew file="controllers/daemonset.yaml" >}}
 
 * Create a DaemonSet based on the YAML file:
 ```
-kubectl create -f daemonset.yaml
+kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml
 ```
 
 ### Required Fields
 
@@ -252,4 +252,4 @@ number of replicas and rolling out updates are more important than controlling e
 the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod
 always run on all or certain hosts, and when it needs to start before other Pods.
 
-{{% /capture %}}
\ No newline at end of file
+{{% /capture %}}
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index ef79cd7d4c..92d7b67ddd 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -40,7 +40,7 @@ The following are typical use cases for Deployments:
 The following is an example of a Deployment. It creates a ReplicaSet to bring up
 three `nginx` Pods:
 
-{{< code file="nginx-deployment.yaml" >}}
+{{< codenew file="controllers/nginx-deployment.yaml" >}}
 
 In this example:
 
@@ -71,7 +71,7 @@ The `template` field contains the following instructions:
 To create this Deployment, run the following command:
 
 ```shell
-kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/docs/concepts/workloads/controllers/nginx-deployment.yaml
+kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml
 ```
 
 {{< note >}}
@@ -417,7 +417,7 @@ First, check the revisions of this deployment:
 $ kubectl rollout history deployment/nginx-deployment
 deployments "nginx-deployment"
 REVISION    CHANGE-CAUSE
-1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
+1           kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record
 2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
 3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91
 ```
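`controllers/nginx-deployment.yaml` is renamed here without its body appearing in the diff. As a reference point, a minimal sketch consistent with the surrounding docs — three replicas and a container named `nginx` that the later `kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1` command can target — with the label values and starting image tag being assumptions:

```yaml
# Minimal sketch of controllers/nginx-deployment.yaml based on the prose;
# the apiVersion, labels, and image tag are assumptions, not taken from
# this diff.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3          # "brings up three nginx Pods"
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx    # the name targeted by `kubectl set image ... nginx=nginx:1.9.1`
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```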
diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md
index 4fcf1fbc44..ed4aa90cfe 100644
--- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md
@@ -36,17 +36,17 @@ setting the `ownerReference` field.
 
 Here's a configuration file for a ReplicaSet that has three Pods:
 
-{{< code file="my-repset.yaml" >}}
+{{< codenew file="controllers/replicaset.yaml" >}}
 
 If you create the ReplicaSet and then view the Pod metadata, you can see
 OwnerReferences field:
 
 ```shell
-kubectl create -f https://k8s.io/docs/concepts/controllers/my-repset.yaml
+kubectl create -f https://k8s.io/examples/controllers/replicaset.yaml
 kubectl get pods --output=yaml
 ```
 
-The output shows that the Pod owner is a ReplicaSet named my-repset:
+The output shows that the Pod owner is a ReplicaSet named `my-repset`:
 
 ```shell
 apiVersion: v1
@@ -110,15 +110,15 @@ field on the `deleteOptions` argument when deleting an Object. Possible values i
 
 Prior to Kubernetes 1.9, the default garbage collection policy for many controller resources was `orphan`. This
 included ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and
-Deployment. For kinds in the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 group versions, unless you
-specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the apps/v1
+Deployment. For kinds in the `extensions/v1beta1`, `apps/v1beta1`, and `apps/v1beta2` group versions, unless you
+specify otherwise, dependent objects are orphaned by default. In Kubernetes 1.9, for all kinds in the `apps/v1`
 group version, dependent objects are deleted by default.
 
 Here's an example that deletes dependents in background:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
   -H "Content-Type: application/json"
 ```
 
@@ -127,7 +127,7 @@ Here's an example that deletes dependents in foreground:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
   -H "Content-Type: application/json"
 ```
 
@@ -136,7 +136,7 @@ Here's an example that orphans dependents:
 
 ```shell
 kubectl proxy --port=8080
-curl -X DELETE localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/my-repset \
+curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
   -H "Content-Type: application/json"
 ```
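As a reviewer aid: the `ownerReferences` stanza that the `kubectl get pods --output=yaml` step surfaces looks roughly like the following. The Pod name suffix is hypothetical, and the `apps/v1` group mirrors the corrected curl endpoints above rather than output captured from a cluster:

```yaml
# Illustrative metadata of a Pod owned by the my-repset ReplicaSet; the
# generated name and group version are assumptions, not copied from this diff.
apiVersion: v1
kind: Pod
metadata:
  name: my-repset-abcde        # hypothetical generated name
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-repset
    controller: true
    blockOwnerDeletion: true
```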
diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
index c02bf976ba..1b0c5a388b 100644
--- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
+++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -31,12 +31,12 @@ A Job can also be used to run multiple pods in parallel.
 
 Here is an example Job config. It computes π to 2000 places and prints it out.
 It takes around 10s to complete.
 
-{{< code file="job.yaml" >}}
+{{< codenew file="controllers/job.yaml" >}}
 
 Run the example job by downloading the example file and then running this command:
 
 ```shell
-$ kubectl create -f ./job.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/job.yaml
 job "pi" created
 ```
 
@@ -401,4 +401,4 @@ object, but complete control over what pods are created and how work is assigned
 Support for creating Jobs at specified times/dates (i.e. cron) is available in Kubernetes [1.4](https://github.com/kubernetes/kubernetes/pull/11980).
 More information is available in the [cron job documents](/docs/concepts/workloads/controllers/cron-jobs/)
 
-{{% /capture %}}
\ No newline at end of file
+{{% /capture %}}
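The renamed `controllers/job.yaml` also doesn't appear inline. A sketch matching the prose — a Job named `pi`, per the `job "pi" created` output, printing π to 2000 places — where the perl image, the command, and the `backoffLimit` value are assumptions:

```yaml
# Sketch of controllers/job.yaml consistent with the surrounding text;
# the image and command are assumptions, not taken from this diff.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        # bpi(2000) computes π to 2000 decimal places using Perl's bignum
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```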
diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index c0aee35a2b..ddf97a0e2a 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -51,13 +51,13 @@ use a Deployment instead, and define your application in the spec section.
 
 ## Example
 
-{{< code file="frontend.yaml" >}}
+{{< codenew file="controllers/frontend.yaml" >}}
 
 Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
 create the defined ReplicaSet and the pods that it manages.
 
 ```shell
-$ kubectl create -f frontend.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/frontend.yaml
 replicaset "frontend" created
 $ kubectl describe rs/frontend
 Name:           frontend
@@ -192,14 +192,14 @@ A ReplicaSet can also be a target for a
 ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
 the ReplicaSet we created in the previous example.
 
-{{< code file="hpa-rs.yaml" >}}
+{{< codenew file="controllers/hpa-rs.yaml" >}}
 
 Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
 create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
 of the replicated pods.
 
 ```shell
-kubectl create -f hpa-rs.yaml
+kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml
 ```
 
 Alternatively, you can use the `kubectl autoscale` command to accomplish the same
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index 5424cd1be1..c9e23e5bec 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -44,12 +44,12 @@ service, such as web servers.
 
 This example ReplicationController config runs three copies of the nginx web server.
 
-{{< code file="replication.yaml" >}}
+{{< codenew file="controllers/replication.yaml" >}}
 
 Run the example job by downloading the example file and then running this command:
 
 ```shell
-$ kubectl create -f ./replication.yaml
+$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml
 replicationcontroller "nginx" created
 ```
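Likewise for `controllers/replication.yaml`: a sketch matching the prose — a ReplicationController named `nginx`, per the `replicationcontroller "nginx" created` output, running three copies of the nginx web server — with the selector labels and container port assumed:

```yaml
# Sketch of controllers/replication.yaml: an nginx ReplicationController
# with three replicas, per the prose; label and port values are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```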
diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
index c572b2ff26..035d2aac61 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
@@ -57,12 +57,12 @@ This pod configuration file describes a pod that has a node selector,
 `disktype: ssd`. This means that the pod will get scheduled on a node that has
 a `disktype=ssd` label.
 
-{{< code file="pod.yaml" >}}
+{{< codenew file="pods/pod-nginx.yaml" >}}
 
 1. Use the configuration file to create a pod that will get scheduled on your
    chosen node:
 
-       kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/pod.yaml
+       kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
 
 1. Verify that the pod is running on your chosen node:
 
@@ -80,4 +80,3 @@ Learn more about
 [labels and selectors](/docs/concepts/overview/working-with-objects/labels/).
 
 {{% /capture %}}
-
diff --git a/content/en/docs/tasks/configure-pod-container/pod.yaml b/content/en/docs/tasks/configure-pod-container/pod.yaml
deleted file mode 100644
index 134ddae2aa..0000000000
--- a/content/en/docs/tasks/configure-pod-container/pod.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-  labels:
-    env: test
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    imagePullPolicy: IfNotPresent
-  nodeSelector:
-    disktype: ssd
diff --git a/content/en/examples/application/deployment.yaml b/content/en/examples/application/deployment.yaml
index c682fe12bd..0f526b16c0 100644
--- a/content/en/examples/application/deployment.yaml
+++ b/content/en/examples/application/deployment.yaml
@@ -7,10 +7,8 @@ spec:
     matchLabels:
       app: nginx
   replicas: 2 # tells deployment to run 2 pods matching the template
-  template: # create pods using pod definition in this template
+  template:
     metadata:
-      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
-      # generated from the deployment name
       labels:
         app: nginx
     spec:
diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.yaml b/content/en/examples/controllers/daemonset.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/daemonset.yaml
rename to content/en/examples/controllers/daemonset.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/frontend.yaml b/content/en/examples/controllers/frontend.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/frontend.yaml
rename to content/en/examples/controllers/frontend.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/hpa-rs.yaml b/content/en/examples/controllers/hpa-rs.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/hpa-rs.yaml
rename to content/en/examples/controllers/hpa-rs.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/job.yaml b/content/en/examples/controllers/job.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/job.yaml
rename to content/en/examples/controllers/job.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/nginx-deployment.yaml b/content/en/examples/controllers/nginx-deployment.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/nginx-deployment.yaml
rename to content/en/examples/controllers/nginx-deployment.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/my-repset.yaml b/content/en/examples/controllers/replicaset.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/my-repset.yaml
rename to content/en/examples/controllers/replicaset.yaml
diff --git a/content/en/docs/concepts/workloads/controllers/replication.yaml b/content/en/examples/controllers/replication.yaml
similarity index 100%
rename from content/en/docs/concepts/workloads/controllers/replication.yaml
rename to content/en/examples/controllers/replication.yaml
diff --git a/content/en/docs/concepts/configuration/commands.yaml b/content/en/examples/pods/commands.yaml
similarity index 100%
rename from content/en/docs/concepts/configuration/commands.yaml
rename to content/en/examples/pods/commands.yaml
diff --git a/content/en/docs/concepts/configuration/pod.yaml b/content/en/examples/pods/pod-nginx.yaml
similarity index 100%
rename from content/en/docs/concepts/configuration/pod.yaml
rename to content/en/examples/pods/pod-nginx.yaml
diff --git a/content/en/docs/concepts/configuration/pod-with-node-affinity.yaml b/content/en/examples/pods/pod-with-node-affinity.yaml
similarity index 100%
rename from content/en/docs/concepts/configuration/pod-with-node-affinity.yaml
rename to content/en/examples/pods/pod-with-node-affinity.yaml
diff --git a/content/en/docs/concepts/configuration/pod-with-pod-affinity.yaml b/content/en/examples/pods/pod-with-pod-affinity.yaml
similarity index 100%
rename from content/en/docs/concepts/configuration/pod-with-pod-affinity.yaml
rename to content/en/examples/pods/pod-with-pod-affinity.yaml
diff --git a/content/en/docs/concepts/policy/example-psp.yaml b/content/en/examples/policy/example-psp.yaml
similarity index 100%
rename from content/en/docs/concepts/policy/example-psp.yaml
rename to content/en/examples/policy/example-psp.yaml
diff --git a/content/en/docs/concepts/policy/privileged-psp.yaml b/content/en/examples/policy/privileged-psp.yaml
similarity index 100%
rename from content/en/docs/concepts/policy/privileged-psp.yaml
rename to content/en/examples/policy/privileged-psp.yaml
diff --git a/content/en/docs/concepts/policy/restricted-psp.yaml b/content/en/examples/policy/restricted-psp.yaml
similarity index 100%
rename from content/en/docs/concepts/policy/restricted-psp.yaml
rename to content/en/examples/policy/restricted-psp.yaml
diff --git a/test/examples_test.go b/test/examples_test.go
index 93ddcd8794..028222f681 100644
--- a/test/examples_test.go
+++ b/test/examples_test.go
@@ -306,12 +306,6 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"nginx-deployment":     {&extensions.Deployment{}},
 			"nginx-svc":            {&api.Service{}},
 		},
-		"docs/concepts/configuration": {
-			"commands":               {&api.Pod{}},
-			"pod":                    {&api.Pod{}},
-			"pod-with-node-affinity": {&api.Pod{}},
-			"pod-with-pod-affinity":  {&api.Pod{}},
-		},
 		"docs/concepts/overview/working-with-objects": {
 			"nginx-deployment": {&extensions.Deployment{}},
 		},
@@ -400,7 +394,6 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"memory-request-limit-3": {&api.Pod{}},
 			"oir-pod":                {&api.Pod{}},
 			"oir-pod-2":              {&api.Pod{}},
-			"pod":                    {&api.Pod{}},
 			"pod-redis":              {&api.Pod{}},
 			"private-reg-pod":        {&api.Pod{}},
 			"projected-volume":       {&api.Pod{}},
@@ -486,6 +479,26 @@ func TestExampleObjectSchemas(t *testing.T) {
 		"examples/application/zookeeper": {
 			"zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}},
 		},
+		"examples/controllers": {
+			"daemonset":        {&extensions.DaemonSet{}},
+			"frontend":         {&extensions.ReplicaSet{}},
+			"hpa-rs":           {&autoscaling.HorizontalPodAutoscaler{}},
+			"job":              {&batch.Job{}},
+			"replicaset":       {&extensions.ReplicaSet{}},
+			"replication":      {&api.ReplicationController{}},
+			"nginx-deployment": {&extensions.Deployment{}},
+		},
+		"examples/pods": {
+			"commands":               {&api.Pod{}},
+			"pod-nginx":              {&api.Pod{}},
+			"pod-with-node-affinity": {&api.Pod{}},
+			"pod-with-pod-affinity":  {&api.Pod{}},
+		},
+		"examples/policy": {
+			"privileged-psp": {&policy.PodSecurityPolicy{}},
+			"restricted-psp": {&policy.PodSecurityPolicy{}},
+			"example-psp":    {&policy.PodSecurityPolicy{}},
+		},
 		"docs/tasks/run-application": {
 			"deployment-patch-demo": {&extensions.Deployment{}},
 			"hpa-php-apache":        {&autoscaling.HorizontalPodAutoscaler{}},