From 1e8b524f26c1cdd3ce5e1e2094ac98d90dd8d043 Mon Sep 17 00:00:00 2001 From: shavidissa Date: Tue, 18 Jun 2019 12:51:49 -0700 Subject: [PATCH 01/26] Updates the What is Kubernetes page (#14657) * Updates the What is Kubernetes page * Adds content to the what is kubernetes used for section. * Updates the ordered list. * Formats the content. * Update the content as per the comments * Fix a broken link --- .../concepts/overview/what-is-kubernetes.md | 231 +++++------------- static/images/docs/Container_Evolution.svg | 1 + 2 files changed, 57 insertions(+), 175 deletions(-) create mode 100644 static/images/docs/Container_Evolution.svg diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md index 81d9e2ae49..bb1bc2e0c2 100644 --- a/content/en/docs/concepts/overview/what-is-kubernetes.md +++ b/content/en/docs/concepts/overview/what-is-kubernetes.md @@ -5,7 +5,7 @@ reviewers: title: What is Kubernetes content_template: templates/concept weight: 10 -card: +card: name: concepts weight: 10 --- @@ -15,196 +15,77 @@ This page is an overview of Kubernetes. {{% /capture %}} {{% capture body %}} -Kubernetes is a portable, extensible open-source platform for managing -containerized workloads and services, that facilitates both -declarative configuration and automation. It has a large, rapidly -growing ecosystem. Kubernetes services, support, and tools are widely available. +Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. -Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon -a [decade and a half of experience that Google has with running -production workloads at -scale](https://research.google.com/pubs/pub43438.html), combined with -best-of-breed ideas and practices from the community. +The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a [decade and a half of experience that Google has with running production workloads at scale](https://ai.google/research/pubs/pub43438), combined with best-of-breed ideas and practices from the community. + +## Going back in time +Let's take a look at why Kubernetes is so useful by going back in time. + +![Deployment evolution](/images/docs/Container_Evolution.svg) + +**Traditional deployment era:** +Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers. + +**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application. 
+
+Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily. It also reduces hardware costs, among other benefits.
+
+Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
+
+**Container deployment era:** Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
+
+Containers are becoming popular because they have many benefits. Some of the container benefits are listed below:
+
+* Agile application creation and deployment: Increased ease and efficiency of container image creation compared to VM image use.
+* Continuous development, integration, and deployment: Provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
+* Dev and Ops separation of concerns: Create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
+* Observability: Not only surfaces OS-level information and metrics, but also application health and other signals.
+* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
+* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
+* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
+* Loosely coupled, distributed, elastic, liberated micro-services: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
+* Resource isolation: Predictable application performance.
+* Resource utilization: High efficiency and density.

 ## Why do I need Kubernetes and what can it do

-Kubernetes has a number of features. It can be thought of as:
+Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior were handled by a system?

-- a container platform
-- a microservices platform
-- a portable cloud platform
-and a lot more.
+That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system (a minimal sketch follows below).

-Kubernetes provides a **container-centric** management environment. It
-orchestrates computing, networking, and storage infrastructure on
-behalf of user workloads. This provides much of the simplicity of
-Platform as a Service (PaaS) with the flexibility of Infrastructure as
-a Service (IaaS), and enables portability across infrastructure
-providers.
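+To make that canary pattern concrete, here is a minimal sketch of one common way to model it: a small "canary" Deployment runs the new image alongside the stable one, and a Service selecting only the shared `app` label splits traffic across both. All names, labels, and image tags below are illustrative placeholders, not a fixed Kubernetes convention.
+
+```yaml
+# Hypothetical canary Deployment; a stable Deployment (not shown) would
+# carry the same app label with track: stable, so a Service selecting
+# only app: my-app sends a slice of traffic to the canary Pods.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app-canary
+spec:
+  replicas: 1                 # keep the canary footprint small
+  selector:
+    matchLabels:
+      app: my-app
+      track: canary
+  template:
+    metadata:
+      labels:
+        app: my-app
+        track: canary
+    spec:
+      containers:
+      - name: my-app
+        image: my-app:2.0.0   # the new version under evaluation
+```
+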
+Kubernetes provides you with: -## How Kubernetes is a platform - -Even though Kubernetes provides a lot of functionality, there are -always new scenarios that would benefit from new -features. Application-specific workflows can be streamlined to -accelerate developer velocity. Ad hoc orchestration that is acceptable -initially often requires robust automation at scale. This is why -Kubernetes was also designed to serve as a platform for building an -ecosystem of components and tools to make it easier to deploy, scale, -and manage applications. - -[Labels](/docs/concepts/overview/working-with-objects/labels/) empower -users to organize their resources however they -please. [Annotations](/docs/concepts/overview/working-with-objects/annotations/) -enable users to decorate resources with custom information to -facilitate their workflows and provide an easy way for management -tools to checkpoint state. - -Additionally, the [Kubernetes control -plane](/docs/concepts/overview/components/) is built upon the same -[APIs](/docs/reference/using-api/api-overview/) that are available to developers -and users. Users can write their own controllers, such as -[schedulers](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), -with [their own -APIs](/docs/concepts/api-extension/custom-resources/) -that can be targeted by a general-purpose [command-line -tool](/docs/user-guide/kubectl-overview/). - -This -[design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) -has enabled a number of other systems to build atop Kubernetes. +* **Service discovery and load balancing** +Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable. +* **Storage orchestration** +Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more. +* **Automated rollouts and rollbacks** +You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container. +* **Automatic bin packing** +Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers. +* **Self-healing** +Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve. +* **Secret and configuration management** +Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. ## What Kubernetes is not -Kubernetes is not a traditional, all-inclusive PaaS (Platform as a -Service) system. Since Kubernetes operates at the container level -rather than at the hardware level, it provides some generally -applicable features common to PaaS offerings, such as deployment, -scaling, load balancing, logging, and monitoring. 
However, Kubernetes -is not monolithic, and these default solutions are optional and -pluggable. Kubernetes provides the building blocks for building developer -platforms, but preserves user choice and flexibility where it is -important. +Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important. Kubernetes: -* Does not limit the types of applications supported. Kubernetes aims - to support an extremely diverse variety of workloads, including - stateless, stateful, and data-processing workloads. If an - application can run in a container, it should run great on - Kubernetes. -* Does not deploy source code and does not build your - application. Continuous Integration, Delivery, and Deployment - (CI/CD) workflows are determined by organization cultures and preferences - as well as technical requirements. -* Does not provide application-level services, such as middleware - (e.g., message buses), data-processing frameworks (for example, - Spark), databases (e.g., mysql), caches, nor cluster storage systems (e.g., - Ceph) as built-in services. Such components can run on Kubernetes, and/or - can be accessed by applications running on Kubernetes through portable - mechanisms, such as the Open Service Broker. -* Does not dictate logging, monitoring, or alerting solutions. It provides - some integrations as proof of concept, and mechanisms to collect and - export metrics. -* Does not provide nor mandate a configuration language/system (e.g., - [jsonnet](https://github.com/google/jsonnet)). It provides a declarative - API that may be targeted by arbitrary forms of declarative specifications. -* Does not provide nor adopt any comprehensive machine configuration, - maintenance, management, or self-healing systems. - -Additionally, Kubernetes is not a mere *orchestration system*. In -fact, it eliminates the need for orchestration. The technical -definition of *orchestration* is execution of a defined workflow: -first do A, then B, then C. In contrast, Kubernetes is comprised of a -set of independent, composable control processes that continuously -drive the current state towards the provided desired state. It -shouldn't matter how you get from A to C. Centralized control is also -not required. This results in a system that is easier to use and more -powerful, robust, resilient, and extensible. - -## Why containers - -Looking for reasons why you should be using containers? - -![Why Containers?](/images/docs/why_containers.svg) - -The *Old Way* to deploy applications was to install the applications -on a host using the operating-system package manager. This had the -disadvantage of entangling the applications' executables, -configuration, libraries, and lifecycles with each other and with the -host OS. One could build immutable virtual-machine images in order to -achieve predictable rollouts and rollbacks, but VMs are heavyweight -and non-portable. - -The *New Way* is to deploy containers based on operating-system-level -virtualization rather than hardware virtualization. 
These containers -are isolated from each other and from the host: they have their own -filesystems, they can't see each others' processes, and their -computational resource usage can be bounded. They are easier to build -than VMs, and because they are decoupled from the underlying -infrastructure and from the host filesystem, they are portable across -clouds and OS distributions. - -Because containers are small and fast, one application can be packed -in each container image. This one-to-one application-to-image -relationship unlocks the full benefits of containers. With containers, -immutable container images can be created at build/release time rather -than deployment time, since each application doesn't need to be -composed with the rest of the application stack, nor married to the -production infrastructure environment. Generating container images at -build/release time enables a consistent environment to be carried from -development into production. Similarly, containers are vastly more -transparent than VMs, which facilitates monitoring and -management. This is especially true when the containers' process -lifecycles are managed by the infrastructure rather than hidden by a -process supervisor inside the container. Finally, with a single -application per container, managing the containers becomes tantamount -to managing deployment of the application. - -Summary of container benefits: - -* **Agile application creation and deployment**: - Increased ease and efficiency of container image creation compared to VM image use. -* **Continuous development, integration, and deployment**: - Provides for reliable and frequent container image build and - deployment with quick and easy rollbacks (due to image - immutability). -* **Dev and Ops separation of concerns**: - Create application container images at build/release time rather - than deployment time, thereby decoupling applications from - infrastructure. -* **Observability** - Not only surfaces OS-level information and metrics, but also application - health and other signals. -* **Environmental consistency across development, testing, and production**: - Runs the same on a laptop as it does in the cloud. -* **Cloud and OS distribution portability**: - Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else. -* **Application-centric management**: - Raises the level of abstraction from running an OS on virtual - hardware to running an application on an OS using logical resources. -* **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**: - Applications are broken into smaller, independent pieces and can - be deployed and managed dynamically -- not a monolithic stack - running on one big single-purpose machine. -* **Resource isolation**: - Predictable application performance. -* **Resource utilization**: - High efficiency and density. - -## What Kubernetes and K8s mean - -The name **Kubernetes** originates from Greek, meaning *helmsman* or -*pilot*, and is the root of *governor* and -[cybernetic](http://www.etymonline.com/index.php?term=cybernetics). *K8s* -is an abbreviation derived by replacing the 8 letters "ubernete" with -"8". +* Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes. 
+* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements. +* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, mysql), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker. +* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics. +* Does not provide nor mandate a configuration language/system (for example, jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications. +* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems. +* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible. {{% /capture %}} {{% capture whatsnext %}} +* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/) * Ready to [Get Started](/docs/setup/)? -* For more details, see the [Kubernetes Documentation](/docs/home/). {{% /capture %}} - - diff --git a/static/images/docs/Container_Evolution.svg b/static/images/docs/Container_Evolution.svg new file mode 100644 index 0000000000..3e6eca3fc7 --- /dev/null +++ b/static/images/docs/Container_Evolution.svg @@ -0,0 +1 @@ + \ No newline at end of file From b4738909281ee4f7fb69677edf373f03c7eb2c6f Mon Sep 17 00:00:00 2001 From: Alexandre Vilain Date: Tue, 18 Jun 2019 22:45:49 +0200 Subject: [PATCH 02/26] Fix 404 on link: "Download Kubernetes" (#14969) --- layouts/blog/baseof.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/layouts/blog/baseof.html b/layouts/blog/baseof.html index 4af931be26..358cb7bd3c 100644 --- a/layouts/blog/baseof.html +++ b/layouts/blog/baseof.html @@ -28,7 +28,7 @@ #kubernetes-users Stack Overflow Forum - Download Kubernetes + Download Kubernetes {{ partialCached "blog/archive.html" . 
}} From 82650353cb857239ee6b5a8ba4849d08b82fe66a Mon Sep 17 00:00:00 2001 From: mhamdi semah Date: Tue, 18 Jun 2019 22:49:50 +0200 Subject: [PATCH 03/26] =?UTF-8?q?Outdated=20terminology:=20=E2=80=9Ckubern?= =?UTF-8?q?etes-minion-group=E2=80=9D=20(#14859)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../administer-cluster/cluster-management.md | 2 +- .../run-replicated-stateful-application.md | 16 +++---- .../en/docs/tutorials/services/source-ip.md | 14 +++--- .../stateful-application/zookeeper.md | 44 +++++++++---------- 4 files changed, 38 insertions(+), 38 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/cluster-management.md b/content/en/docs/tasks/administer-cluster/cluster-management.md index 39c46c2e86..3aa57175e1 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-management.md +++ b/content/en/docs/tasks/administer-cluster/cluster-management.md @@ -85,7 +85,7 @@ If your cluster runs short on resources you can easily add more machines to it i If you're using GCE or Google Kubernetes Engine it's done by resizing Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI: ```shell -gcloud compute instance-groups managed resize kubernetes-minion-group --size=42 --zone=$ZONE +gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE ``` Instance Group will take care of putting appropriate image on new machines and start them, while Kubelet will register its Node with API server to make it available for scheduling. If you scale the instance group down, system will randomly choose Nodes to kill. diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index 58c7ded5a8..8b5db92a6d 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -363,7 +363,7 @@ The Node name should show up in the last column: ``` NAME READY STATUS RESTARTS AGE IP NODE -mysql-2 2/2 Running 0 15m 10.244.5.27 kubernetes-minion-group-9l2t +mysql-2 2/2 Running 0 15m 10.244.5.27 kubernetes-node-9l2t ``` Then drain the Node by running the following command, which cordons it so @@ -387,14 +387,14 @@ It should look something like this: ``` NAME READY STATUS RESTARTS AGE IP NODE -mysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-minion-group-9l2t +mysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t [...] 
-mysql-2 0/2 Pending 0 0s kubernetes-minion-group-fjlm -mysql-2 0/2 Init:0/2 0 0s kubernetes-minion-group-fjlm -mysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-minion-group-fjlm -mysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-minion-group-fjlm -mysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-minion-group-fjlm -mysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-minion-group-fjlm +mysql-2 0/2 Pending 0 0s kubernetes-node-fjlm +mysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm +mysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm +mysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-node-fjlm +mysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm +mysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm ``` And again, you should see server ID `102` disappear from the diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index c8ca0d4b78..e59f9058a0 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -67,13 +67,13 @@ kubectl get nodes The output is similar to this: ``` NAME STATUS ROLES AGE VERSION -kubernetes-minion-group-6jst Ready 2h v1.13.0 -kubernetes-minion-group-cx31 Ready 2h v1.13.0 -kubernetes-minion-group-jj1t Ready 2h v1.13.0 +kubernetes-node-6jst Ready 2h v1.13.0 +kubernetes-node-cx31 Ready 2h v1.13.0 +kubernetes-node-jj1t Ready 2h v1.13.0 ``` Get the proxy mode on one of the node ```console -kubernetes-minion-group-6jst $ curl localhost:10249/proxyMode +kubernetes-node-6jst $ curl localhost:10249/proxyMode ``` The output is: ``` @@ -326,18 +326,18 @@ kubectl get pod -o wide -l run=source-ip-app The output is similar to this: ``` NAME READY STATUS RESTARTS AGE IP NODE -source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-minion-group-6jst +source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-node-6jst ``` Curl the `/healthz` endpoint on different nodes. ```console -kubernetes-minion-group-6jst $ curl localhost:32122/healthz +kubernetes-node-6jst $ curl localhost:32122/healthz ``` The output is similar to this: ``` 1 Service Endpoints found ``` ```console -kubernetes-minion-group-jj1t $ curl localhost:32122/healthz +kubernetes-node-jj1t $ curl localhost:32122/healthz ``` The output is similar to this: ``` diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md index 2a2d72d3af..6079490eba 100644 --- a/content/en/docs/tutorials/stateful-application/zookeeper.md +++ b/content/en/docs/tutorials/stateful-application/zookeeper.md @@ -832,9 +832,9 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; All of the Pods in the `zk` `StatefulSet` are deployed on different nodes. ```shell -kubernetes-minion-group-cxpk -kubernetes-minion-group-a5aq -kubernetes-minion-group-2g2d +kubernetes-node-cxpk +kubernetes-node-a5aq +kubernetes-node-2g2d ``` This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` specified. 
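+
+The anti-affinity rule in the tutorial's manifest looks roughly like the excerpt below (reproduced here for illustration; refer to the `zookeeper.yaml` you actually applied):
+
+```yaml
+# Excerpt of the StatefulSet Pod template: never schedule two Pods
+# labeled app=zk onto the same node (nodes keyed by hostname).
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchExpressions:
+        - key: "app"
+          operator: In
+          values:
+          - zk
+      topologyKey: "kubernetes.io/hostname"
+```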
@@ -906,9 +906,9 @@ In another terminal, use this command to get the nodes that the Pods are current ```shell for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done -kubernetes-minion-group-pb41 -kubernetes-minion-group-ixsl -kubernetes-minion-group-i4c4 +kubernetes-node-pb41 +kubernetes-node-ixsl +kubernetes-node-i4c4 ``` Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) to cordon and @@ -916,11 +916,11 @@ drain the node on which the `zk-0` Pod is scheduled. ```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data -node "kubernetes-minion-group-pb41" cordoned +node "kubernetes-node-pb41" cordoned -WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-pb41, kube-proxy-kubernetes-minion-group-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz +WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz pod "zk-0" deleted -node "kubernetes-minion-group-pb41" drained +node "kubernetes-node-pb41" drained ``` As there are four nodes in your cluster, `kubectl drain`, succeeds and the @@ -947,11 +947,11 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node `zk-1` is scheduled. ```shell -kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-minion-group-ixsl" cordoned +kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned -WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74 +WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74 pod "zk-1" deleted -node "kubernetes-minion-group-ixsl" drained +node "kubernetes-node-ixsl" drained ``` The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state. 
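+
+As an aside, the Pending state follows from using the `required...` form of the rule. A softened variant, sketched below, would only prefer spreading and would let `zk-1` land on an already-occupied node instead of staying Pending; the tutorial deliberately uses the strict form to guarantee one ZooKeeper server per node:
+
+```yaml
+# Illustrative alternative only, not part of the tutorial's manifest:
+# a preferred (soft) anti-affinity rule that the scheduler may violate
+# when no other node is available.
+affinity:
+  podAntiAffinity:
+    preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 100
+      podAffinityTerm:
+        labelSelector:
+          matchLabels:
+            app: zk
+        topologyKey: "kubernetes.io/hostname"
+```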
@@ -986,10 +986,10 @@ Continue to watch the Pods of the stateful set, and drain the node on which ```shell kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data -node "kubernetes-minion-group-i4c4" cordoned +node "kubernetes-node-i4c4" cordoned -WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog -WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4 +WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog +WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4 There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget. pod/zk-2 ``` @@ -1025,9 +1025,9 @@ numChildren = 0 Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) to uncordon the first node. ```shell -kubectl uncordon kubernetes-minion-group-pb41 +kubectl uncordon kubernetes-node-pb41 -node "kubernetes-minion-group-pb41" uncordoned +node "kubernetes-node-pb41" uncordoned ``` `zk-1` is rescheduled on this node. Wait until `zk-1` is Running and Ready. @@ -1070,11 +1070,11 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae The output: ``` -node "kubernetes-minion-group-i4c4" already cordoned -WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog +node "kubernetes-node-i4c4" already cordoned +WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog pod "heapster-v1.2.0-2604621511-wht1r" deleted pod "zk-2" deleted -node "kubernetes-minion-group-i4c4" drained +node "kubernetes-node-i4c4" drained ``` This time `kubectl drain` succeeds. @@ -1082,11 +1082,11 @@ This time `kubectl drain` succeeds. Uncordon the second node to allow `zk-2` to be rescheduled. ```shell -kubectl uncordon kubernetes-minion-group-ixsl +kubectl uncordon kubernetes-node-ixsl ``` ``` -node "kubernetes-minion-group-ixsl" uncordoned +node "kubernetes-node-ixsl" uncordoned ``` You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled. 
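+
+For reference, a `PodDisruptionBudget` expressing "at most one ZooKeeper Pod down at a time" could look like the sketch below; the name and selector are illustrative and may differ from the manifest used in this tutorial:
+
+```yaml
+# Voluntary disruptions (such as node drains) may evict at most one
+# app=zk Pod at a time, which preserves the ensemble's quorum.
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+  name: zk-pdb
+spec:
+  maxUnavailable: 1
+  selector:
+    matchLabels:
+      app: zk
+```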
From cef94610c8423085d2dfb30f398632bd10efd9ce Mon Sep 17 00:00:00 2001 From: Weston Carlson Date: Tue, 18 Jun 2019 15:01:49 -0600 Subject: [PATCH 04/26] Improve mysql instance label value. (#14827) --- .../concepts/overview/working-with-objects/common-labels.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/concepts/overview/working-with-objects/common-labels.md b/content/en/docs/concepts/overview/working-with-objects/common-labels.md index d0132d92b9..08953cdc71 100644 --- a/content/en/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/common-labels.md @@ -144,7 +144,7 @@ kind: StatefulSet metadata: labels: app.kubernetes.io/name: mysql - app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/instance: mysql-abcxzy app.kubernetes.io/managed-by: helm app.kubernetes.io/component: database app.kubernetes.io/part-of: wordpress @@ -160,7 +160,7 @@ kind: Service metadata: labels: app.kubernetes.io/name: mysql - app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/instance: mysql-abcxzy app.kubernetes.io/managed-by: helm app.kubernetes.io/component: database app.kubernetes.io/part-of: wordpress @@ -170,4 +170,4 @@ metadata: With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and Wordpress, the broader application, are included. -{{% /capture %}} \ No newline at end of file +{{% /capture %}} From b2c496aec3ff6c25788120c8812600ad8a0459e3 Mon Sep 17 00:00:00 2001 From: Josiah Bjorgaard Date: Tue, 18 Jun 2019 16:18:35 -0600 Subject: [PATCH 05/26] Clarify mutation behavior with MustRunAsNonRoot (#14820) --- content/en/docs/concepts/policy/pod-security-policy.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 08805d5011..8890f4d7a5 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -485,8 +485,10 @@ spec: minimum value of the first range as the default. Validates against all ranges. - *MustRunAsNonRoot* - Requires that the pod be submitted with a non-zero `runAsUser` or have the `USER` directive defined (using a numeric UID) in the -image. No default provided. Setting `allowPrivilegeEscalation=false` is strongly -recommended with this strategy. +image. Pods which have specified neither `runAsNonRoot` nor `runAsUser` settings +will be mutated to set `runAsNonRoot=true`, thus requiring a defined non-zero +numeric `USER` directive in the container. No default provided. Setting +`allowPrivilegeEscalation=false` is strongly recommended with this strategy. - *RunAsAny* - No default provided. Allows any `runAsUser` to be specified. **RunAsGroup** - Controls which primary group ID the containers are run with. 
From cc4c4789ceae4bb01d3cd1114761e4f022986b50 Mon Sep 17 00:00:00 2001 From: Serdar Osman Onur Date: Wed, 19 Jun 2019 01:34:32 +0300 Subject: [PATCH 06/26] Update ingress-controllers.md (#14803) I think what is meant is "to provide a default ingress CONTROLLER" --- .../en/docs/concepts/services-networking/ingress-controllers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 57af46a01a..98e5bd7445 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -55,7 +55,7 @@ within a cluster. When you create an ingress, you should annotate each ingress w [`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) to indicate which ingress controller should be used if more than one exists within your cluster. -If you do not define a class, your cloud provider may use a default ingress provider. +If you do not define a class, your cloud provider may use a default ingress controller. Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently. From 658f5c498dde853b26bde1a9dc1ff984cbdd6537 Mon Sep 17 00:00:00 2001 From: Deepika Pandhi Date: Tue, 18 Jun 2019 15:36:32 -0700 Subject: [PATCH 07/26] Added --show-labels (#14779) * updated OR comments * updated comments from review for --show-labels --- content/en/docs/reference/kubectl/cheatsheet.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 665a84330e..7a52021515 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -182,6 +182,9 @@ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name # Also uses "jq" for item in $( kubectl get pod --output=name); do printf "Labels for %s\n" "$item" | grep --color -E '[^/]+$' && kubectl get "$item" --output=json | jq -r -S '.metadata.labels | to_entries | .[] | " \(.key)=\(.value)"' 2>/dev/null; printf "\n"; done +# Or this command can be used as well to get all the labels associated with pods +kubectl get pods --show-labels + # Check which nodes are ready JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" From d778644a2d567ea021af5dc8704d7381216786d6 Mon Sep 17 00:00:00 2001 From: Richard Marcum Date: Tue, 18 Jun 2019 18:42:34 -0400 Subject: [PATCH 08/26] Rename zones on dns-pod-service page (#14672) * Rename zones on dns-pod-service page Issue with k8s.io/docs/concepts/services-networking/dns-pod-service/ - cluster.local #14386 * Update custom-dns.yaml * Change remaining occurrences of cluster.local --- .../services-networking/dns-pod-service.md | 20 +++++++++---------- .../service/networking/custom-dns.yaml | 2 +- 2 files changed, 11 insertions(+), 11 deletions(-) diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index d3fba411ed..63c6b59e7b 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -41,11 +41,11 @@ 
For more up-to-date specification, see ### A records "Normal" (not headless) Services are assigned a DNS A record for a name of the -form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP +form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP of the Service. "Headless" (without a cluster IP) Services are also assigned a DNS A record for -a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal +a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set. @@ -55,12 +55,12 @@ selection from the set. SRV Records are created for named ports that are part of normal or [Headless Services](/docs/concepts/services-networking/service/#headless-services). For each named port, the SRV record would have the form -`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`. +`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`. For a regular service, this resolves to the port number and the domain name: -`my-svc.my-namespace.svc.cluster.local`. +`my-svc.my-namespace.svc.cluster-domain.example`. For a headless service, this resolves to multiple answers, one for each pod that is backing the service, and contains the port number and the domain name of the pod -of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`. +of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`. ## Pods @@ -76,7 +76,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to The Pod spec also has an optional `subdomain` field which can be used to specify its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain` set to "`bar`", in namespace "`my-namespace`", will have the fully qualified -domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`". +domain name (FQDN) "`foo.bar.my-namespace.svc.cluster-domain.example`". Example: @@ -133,7 +133,7 @@ record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to "`default-subdomain`", and a headless Service named "`default-subdomain`" in the same namespace, the pod will see its own FQDN as -"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an +"`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and "`busybox2`" can have their distinct A records. @@ -143,7 +143,7 @@ along with its IP. {{< note >}} Because A records are not created for Pod names, `hostname` is required for the Pod's A record to be created. A Pod with no `hostname` but with `subdomain` will only create the -A record for the headless service (`default-subdomain.my-namespace.svc.cluster.local`), +A record for the headless service (`default-subdomain.my-namespace.svc.cluster-domain.example`), pointing to the Pod's IP address. Also, Pod needs to become ready in order to have a record unless `publishNotReadyAddresses=True` is set on the Service. 
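+
+For example, a headless Service that publishes DNS records before its Pods are ready might set the field as in this sketch (the name, selector, and port are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: default-subdomain
+spec:
+  clusterIP: None                 # headless Service
+  publishNotReadyAddresses: true  # create records for not-yet-ready Pods
+  selector:
+    name: busybox
+  ports:
+  - name: foo
+    port: 1234
+```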
{{< /note >}} @@ -234,7 +234,7 @@ in its `/etc/resolv.conf` file: ``` nameserver 1.2.3.4 -search ns1.svc.cluster.local my.dns.search.suffix +search ns1.svc.cluster-domain.example my.dns.search.suffix options ndots:2 edns0 ``` @@ -246,7 +246,7 @@ kubectl exec -it dns-example -- cat /etc/resolv.conf The output is similar to this: ```shell nameserver fd00:79:30::a -search default.svc.cluster.local svc.cluster.local cluster.local +search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example options ndots:5 ``` diff --git a/content/en/examples/service/networking/custom-dns.yaml b/content/en/examples/service/networking/custom-dns.yaml index 3e5acd841a..02f77a9efe 100644 --- a/content/en/examples/service/networking/custom-dns.yaml +++ b/content/en/examples/service/networking/custom-dns.yaml @@ -12,7 +12,7 @@ spec: nameservers: - 1.2.3.4 searches: - - ns1.svc.cluster.local + - ns1.svc.cluster-domain.example - my.dns.search.suffix options: - name: ndots From a86bc5ca13a1a9daaaa7f24bbe147713b7e679f2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?R=C3=A9my=20L=C3=A9one?= Date: Wed, 19 Jun 2019 16:30:39 +0200 Subject: [PATCH 09/26] Translate components to French (#13436) * Translate components to French * Update content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md Co-Authored-By: Oussema CHERNI * Update content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md Co-Authored-By: Oussema CHERNI * Update content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md Co-Authored-By: Oussema CHERNI --- .../kubernetes-components.md | 204 ++++++++++++++++++ 1 file changed, 204 insertions(+) create mode 100644 content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md diff --git a/content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md b/content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md new file mode 100644 index 0000000000..b3efbe3f2e --- /dev/null +++ b/content/fr/docs/contribute/generate-ref-docs/kubernetes-components.md @@ -0,0 +1,204 @@ +--- +title: Génération de pages de référence pour les composants et les outils Kubernetes +content_template: templates/task +--- + +{{% capture overview %}} + +Cette page montre comment utiliser l'outil `update-importer-docs` pour générer une documentation de référence pour les outils et les composants des dépôts [Kubernetes](https://github.com/kubernetes/kubernetes) et [Federation](https://github.com/kubernetes/federation). + +{{% /capture %}} + +{{% capture prerequisites %}} + +* Vous avez besoin d'une machine qui exécute Linux ou macOS. + +* Ces logiciels doivent être installés: + + * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) + + * [Golang](https://golang.org/doc/install) version 1.9 ou ultérieure + + * [make](https://www.gnu.org/software/make/) + + * [gcc compiler/linker](https://gcc.gnu.org/) + +* Votre variable d'environnement `$GOPATH` doit être définie. + +* Vous devez savoir comment créer une pull request sur un dépôt GitHub. +Cela implique généralement la création d’un fork d'un dépôt. +Pour plus d'informations, consultez [Créer une Pull Request de documentation](/docs/home/contribute/create-pull-request/). 
+ +{{% /capture %}} + +{{% capture steps %}} + +## Obtenir deux dépôts + +Si vous n'avez pas déjà le dépôt `kubernetes/website`, obtenez le maintenant: + +```shell +mkdir $GOPATH/src +cd $GOPATH/src +go get github.com/kubernetes/website +``` + +Déterminez le répertoire de base de votre clone du dépôt [kubernetes/website](https://github.com/kubernetes/website). +Par exemple, si vous avez suivi l’étape précédente pour obtenir le dépôt, votre répertoire de base est `$GOPATH/src/github.com/kubernetes/website`. +Les étapes restantes se réfèrent à votre répertoire de base en tant que ``. + +Si vous envisagez d’apporter des modifications aux documents de référence et si vous ne disposez pas déjà du dépôt `kubernetes/kubernetes`, obtenez-le maintenant: + +```shell +mkdir $GOPATH/src +cd $GOPATH/src +go get github.com/kubernetes/kubernetes +``` + +Déterminez le répertoire de base de votre clone du dépôt [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes). +Par exemple, si vous avez suivi l’étape précédente pour obtenir le dépôt, votre répertoire de base est `$GOPATH/src/github.com/kubernetes/kubernetes`. +Les étapes restantes se réfèrent à votre répertoire de base en tant que ``. + +{{< note >}} +Si vous devez uniquement générer, sans modifier, les documents de référence, vous n'avez pas besoin d'obtenir manuellement le dépôt `kubernetes/kubernetes`. +Lorsque vous exécutez la commande `update-imported-docs`, il clone automatiquement le dépôt `kubernetes/kubernetes`. +{{< /note >}} + +## Modification du code source de Kubernetes + +La documentation de référence pour les composants et les outils Kubernetes est générée automatiquement à partir du code source de Kubernetes. +Si vous souhaitez modifier la documentation de référence, commencez par modifier un ou plusieurs commentaires dans le code source de Kubernetes. +Faites le changement dans votre dépôt local `kubernetes/kubernetes`, puis soumettez une pull request sur la branche master [github.com/kubernetes/kubernetes](https://github.com/kubernetes/kubernetes). + +[PR 56942](https://github.com/kubernetes/kubernetes/pull/56942) est un exemple de pull request qui modifie les commentaires dans le code source de Kubernetes. + +Surveillez votre pull request, et répondez aux commentaires des relecteurs. +Continuez à surveiller votre pull request jusqu'à ce qu'elle soit mergée dans la branche master du dépot `kubernetes/kubernetes`. + +## Selectionnez vos commits dans une branche release + +Vos commits sont sur la branche master, qui est utilisée pour le développement sur la prochaine sortie de Kubernetes. +Si vous souhaitez que vos commits apparaissent dans la documentation d'une version Kubernetes déjà publiée, vous devez proposer que vos commits soit sélectionnée dans la branche de publication. + +Par exemple, supposons que la branche master est utilisée pour développer Kubernetes 1.10, et vous voulez transférer vos commits sur la branche release-1.9. +Pour savoir comment faire cela, consultez [Propose a Cherry Pick](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md). + +Surveillez votre pull request cherry-pick jusqu'à ce qu'elle soit mergée dans la branche release. + +{{< note >}} +Proposer un cherry pick exige que vous ayez la permission de définir un label et un milestone dans votre pull request. +Si vous ne disposez pas de ces autorisations, vous devrez travailler avec une personne pouvant définir les paramètres de labels et de milestone pour vous. 
+{{< /note >}}
+
+## Vue générale de update-imported-docs
+
+L'outil `update-imported-docs` se trouve dans le répertoire `kubernetes/website/update-imported-docs/`.
+L'outil effectue les étapes suivantes:
+
+1. Effectuez un clone des différents dépôts spécifiés dans le fichier de configuration.
+   Afin de générer des documents de référence, les dépôts clonés par défaut sont: `kubernetes-incubator/reference-docs` et `kubernetes/federation`.
+1. Effectuez les commandes dans les dépôts clonés pour préparer le générateur de documentation et générer les fichiers Markdown.
+1. Copiez les fichiers Markdown générés dans une copie locale du dépôt `kubernetes/website`. Les fichiers doivent être mis dans les dossiers spécifiés dans le fichier de configuration.
+
+Quand les fichiers Markdown sont dans votre clone local du dépôt `kubernetes/website`, vous pouvez les soumettre dans une [pull request](/docs/home/contribute/create-pull-request/) vers `kubernetes/website`.
+
+## Personnaliser le fichier de configuration
+
+Ouvrez `/update-imported-docs/reference.yml` pour le modifier.
+Ne modifiez pas le contenu de l'entrée `generate-command` sauf si vous comprenez ce qu'elle fait et que vous devez modifier la branche de release spécifiée.
+
+```yaml
+repos:
+- name: reference-docs
+  remote: https://github.com/kubernetes-incubator/reference-docs.git
+  # Ceci et la commande generate ci-dessous nécessitent une modification lorsque les branches de reference-docs sont correctement définies
+  branch: master
+  generate-command: |
+    cd $GOPATH
+    git clone https://github.com/kubernetes/kubernetes.git src/k8s.io/kubernetes
+    cd src/k8s.io/kubernetes
+    git checkout release-1.11
+    make generated_files
+    cp -L -R vendor $GOPATH/src
+    rm -r vendor
+    cd $GOPATH
+    go get -v github.com/kubernetes-incubator/reference-docs/gen-compdocs
+    cd src/github.com/kubernetes-incubator/reference-docs/
+    make comp
+```
+
+Dans reference.yml, l'attribut `files` est une liste d'objets ayant des attributs `src` et `dst`.
+L'attribut `src` spécifie l'emplacement d'un fichier Markdown généré, et l'attribut `dst` spécifie où copier ce fichier dans le dépôt local `kubernetes/website`.
+Par exemple:
+
+```yaml
+repos:
+- name: reference-docs
+  remote: https://github.com/kubernetes-incubator/reference-docs.git
+  files:
+  - src: gen-compdocs/build/kube-apiserver.md
+    dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+  ...
+```
+
+Notez que lorsqu'il y a beaucoup de fichiers à copier du même répertoire source dans le même répertoire de destination, vous pouvez utiliser des caractères génériques dans la valeur donnée à `src` et vous pouvez simplement fournir le nom du répertoire comme valeur pour `dst`.
+Par exemple:
+
+```yaml
+  files:
+  - src: gen-compdocs/build/kubeadm*.md
+    dst: content/en/docs/reference/setup-tools/kubeadm/generated/
+```
+
+## Exécution de l'outil update-imported-docs
+
+Après avoir revu et/ou personnalisé le fichier `reference.yml`, vous pouvez exécuter l'outil `update-imported-docs`:
+
+```shell
+cd /update-imported-docs
+./update-imported-docs reference.yml
+```
+
+## Ajouter et valider des modifications dans kubernetes/website
+
+Répertoriez les fichiers générés et copiés dans le dépôt `kubernetes/website`:
+
+```
+cd 
+git status
+```
+
+La sortie affiche les fichiers nouveaux et modifiés.
+Par exemple, la sortie pourrait ressembler à ceci:
+
+```shell
+...
+ + modified: content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md + modified: content/en/docs/reference/command-line-tools-reference/federation-apiserver.md + modified: content/en/docs/reference/command-line-tools-reference/federation-controller-manager.md + modified: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md + modified: content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md + modified: content/en/docs/reference/command-line-tools-reference/kube-proxy.md + modified: content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +... +``` + +Exécutez `git add` et `git commit` pour faire un commit de ces fichiers. + +## Créer une pull request + +Créez une pull request vers le dépôt `kubernetes/website`. +Consultez votre pull request et répondez aux corrections suggérées par les rélecteurs jusqu'à ce que la pull request soit acceptée et mergée. + +Quelques minutes après le merge votre pull request, vos références mises à jour seront visibles dans la [documentation publiée](/docs/home/). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Génération de documentation de référence pour les commandes kubectl](/docs/home/contribute/generated-reference/kubectl/) +* [Génération de documentation de référence pour l'API Kubernetes](/fr/docs/contribute/generate-ref-docs/kubernetes-api/) +* [Génération de documentation de référence pour l'API de fédération Kubernetes](/docs/home/contribute/generated-reference/federation-api/) + +{{% /capture %}} From 31fce9bee193d8e86d1d6ebd616fdd972e89827f Mon Sep 17 00:00:00 2001 From: zwwhdls <33822635+zwwhdls@users.noreply.github.com> Date: Thu, 20 Jun 2019 03:02:34 +0800 Subject: [PATCH 10/26] Fix kubectl reference table of '--generator' (#14983) * fix `--generator` in `kubectl run` guide * distinguish api group from kubectl command in reference table --- .../en/docs/reference/kubectl/conventions.md | 22 ++++++++----------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/content/en/docs/reference/kubectl/conventions.md b/content/en/docs/reference/kubectl/conventions.md index cb084fda8e..558e4aa4e0 100644 --- a/content/en/docs/reference/kubectl/conventions.md +++ b/content/en/docs/reference/kubectl/conventions.md @@ -37,19 +37,15 @@ For `kubectl run` to satisfy infrastructure as code: You can create the following resources using `kubectl run` with the `--generator` flag: -| Resource | kubectl command | -|---------------------------------|---------------------------------------------------| -| Pod | `kubectl run --generator=run-pod/v1` | -| Replication controller | `kubectl run --generator=run/v1` | -| Deployment | `kubectl run --generator=extensions/v1beta1` | -| -for an endpoint (default) | `kubectl run --generator=deployment/v1beta1` | -| Deployment | `kubectl run --generator=apps/v1beta1` | -| -for an endpoint (recommended) | `kubectl run --generator=deployment/apps.v1beta1` | -| Job | `kubectl run --generator=job/v1` | -| CronJob | `kubectl run --generator=batch/v1beta1` | -| -for an endpoint (default) | `kubectl run --generator=cronjob/v1beta1` | -| CronJob | `kubectl run --generator=batch/v2alpha1` | -| -for an endpoint (deprecated) | `kubectl run --generator=cronjob/v2alpha1` | +| Resource | api group | kubectl command | +|---------------------------------|--------------------|---------------------------------------------------| +| Pod | v1 | `kubectl run --generator=run-pod/v1` | +| Replication controller | v1 | `kubectl run --generator=run/v1` 
| +| Deployment (deprecated) | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1` | +| Deployment (deprecated) | apps/v1beta1 | `kubectl run --generator=deployment/apps.v1beta1` | +| Job (deprecated) | batch/v1 | `kubectl run --generator=job/v1` | +| CronJob (default) | batch/v1beta1 | `kubectl run --generator=cronjob/v1beta1` | +| CronJob (deprecated) | batch/v2alpha1 | `kubectl run --generator=cronjob/v2alpha1` | If you do not specify a generator flag, other flags prompt you to use a specific generator. The following table lists the flags that force you to use specific generators, depending on the version of the cluster: From efe2d0fd99df85139cc5b5a285a605b12e875088 Mon Sep 17 00:00:00 2001 From: mohamed chiheb ben jemaa Date: Wed, 19 Jun 2019 20:06:36 +0100 Subject: [PATCH 11/26] change minion-group to node-pool (#14976) --- .../configuration/manage-compute-resources-container.md | 4 ++-- content/en/docs/tutorials/clusters/apparmor.md | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md index dacdc9617d..0b5b9ea008 100644 --- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md @@ -212,10 +212,10 @@ You can check node capacities and amounts allocated with the `kubectl describe nodes` command. For example: ```shell -kubectl describe nodes e2e-test-minion-group-4lw4 +kubectl describe nodes e2e-test-node-pool-4lw4 ``` ``` -Name: e2e-test-minion-group-4lw4 +Name: e2e-test-node-pool-4lw4 [ ... lines removed for clarity ...] Capacity: cpu: 2 diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 4e1fff809e..4de988270d 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -159,7 +159,7 @@ To verify that the profile was applied, you can look for the AppArmor security o kubectl get events | grep Created ``` ``` -22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-minion-group-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write] +22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write] ``` You can also verify directly that the container's root process is running with the correct profile by checking its proc attr: @@ -315,8 +315,8 @@ Tolerations: Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- - 23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned hello-apparmor-2 to e2e-test-stclair-minion-group-t1f5 - 23s 23s 1 {kubelet e2e-test-stclair-minion-group-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded + 23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5 + 23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded ``` Note the pod status is Failed, with a helpful error message: `Pod 
Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded`.

From 5a4b7a38bca186764f6c178ed9bce810c96df840 Mon Sep 17 00:00:00 2001
From: Rajesh Deshpande
Date: Thu, 20 Jun 2019 00:40:33 +0530
Subject: [PATCH 12/26] Adding pod configuration file with names (#14971)

Adding pod configuration file with pod and container name.
This explains the usage of names and how it looks in actual YAML.
---
 .../overview/working-with-objects/names.md        | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/content/en/docs/concepts/overview/working-with-objects/names.md b/content/en/docs/concepts/overview/working-with-objects/names.md
index 7499a96a66..dffff2d1a7 100644
--- a/content/en/docs/concepts/overview/working-with-objects/names.md
+++ b/content/en/docs/concepts/overview/working-with-objects/names.md
@@ -26,6 +26,21 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig
 
 By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions.
 
+For example, here’s a configuration file for a Pod named `nginx-demo` with a Container named `nginx`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-demo
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.7.9
+    ports:
+    - containerPort: 80
+```
+
 ## UIDs
 
 {{< glossary_definition term_id="uid" length="all" >}}

From c82a5e402c10e4c6821519020c8b46360ca91276 Mon Sep 17 00:00:00 2001
From: Rajesh Deshpande
Date: Thu, 20 Jun 2019 00:42:32 +0530
Subject: [PATCH 13/26] Adding alternative command to create namespace (#14974)

* Adding alternative command to create namespace

As this is the first place users look to find details on creating a namespace, added an alternative command to create a namespace.
Also, this is the most commonly used way to create a namespace, rather than YAML.

* Correcting Formatting

Correcting formatting for changes

* Update namespaces.md
---
 .../tasks/administer-cluster/namespaces.md    | 25 +++++++++++--------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md
index 6900d6d558..d8d7e9b827 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces.md
@@ -83,18 +83,23 @@ See the [design doc](https://git.k8s.io/community/contributors/design-proposals/
 
 1. Create a new YAML file called `my-namespace.yaml` with the contents:
 
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: <insert-namespace-name-here>
-```
+    ```yaml
+    apiVersion: v1
+    kind: Namespace
+    metadata:
+      name: <insert-namespace-name-here>
+    ```
+    Then run:
+
+    ```
+    kubectl create -f ./my-namespace.yaml
+    ```
 
-Then run:
+2. Alternatively, you can create a namespace using the following command:
 
-```shell
-kubectl create -f ./my-namespace.yaml
-```
+    ```
+    kubectl create namespace <insert-namespace-name-here>
+    ```
 
 Note that the name of your namespace must be a DNS compatible label.
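
To see how the two creation paths introduced by this patch behave side by side, here is a minimal shell sketch. It assumes `kubectl` is pointed at a working cluster, and the namespace name `demo-namespace` is an illustrative placeholder rather than anything taken from the patch:

```shell
# Minimal sketch, assuming kubectl is configured against a working cluster.
# "demo-namespace" is an illustrative placeholder name, not part of the patch.
kubectl create namespace demo-namespace

# Verify that the namespace exists and reports an Active status.
kubectl get namespaces

# Remove the example namespace when finished.
kubectl delete namespace demo-namespace
```

Both paths produce the same `v1` Namespace object; the YAML route is simply easier to keep under version control.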
From 01bd547d3ec0f4f16d84d182f8a83e6bb84c11c8 Mon Sep 17 00:00:00 2001 From: Supriya Sirbi Date: Thu, 20 Jun 2019 00:44:31 +0530 Subject: [PATCH 14/26] Issue #14768- update Resource-quotas.md (#14963) Remove "cpu" and "memory" description since they are no longer used --- content/en/docs/concepts/policy/resource-quotas.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index a88bc3d468..dee980efe1 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -66,10 +66,8 @@ The following resource types are supported: | Resource Name | Description | | --------------------- | ----------------------------------------------------------- | -| `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. | | `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. | | `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. | -| `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. | | `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. | | `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. | From b9a5c8d7b0fd355ad17a683cbfeeed753e742f0f Mon Sep 17 00:00:00 2001 From: mohamed chiheb ben jemaa Date: Wed, 19 Jun 2019 20:18:31 +0100 Subject: [PATCH 15/26] update tutorial to use deployment yaml file (#14959) * update run command for deployment * recommended label name --- .../expose-external-ip-address.md | 13 ++++++++---- .../service/load-balancer-example.yaml | 21 +++++++++++++++++++ 2 files changed, 30 insertions(+), 4 deletions(-) create mode 100644 content/en/examples/service/load-balancer-example.yaml diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md index 97ad5df58f..ff7917dccc 100644 --- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -42,9 +42,14 @@ external IP address. 1. Run a Hello World application in your cluster: - kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080 +{{< codenew file="service/load-balancer-example.yaml" >}} - The preceding command creates a +```shell +kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml +``` + + +The preceding command creates a [Deployment](/docs/concepts/workloads/controllers/deployment/) object and an associated [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) @@ -86,9 +91,9 @@ external IP address. 
Name: my-service Namespace: default - Labels: run=load-balancer-example + Labels: app.kubernetes.io/name=load-balancer-example Annotations: - Selector: run=load-balancer-example + Selector: app.kubernetes.io/name=load-balancer-example Type: LoadBalancer IP: 10.3.245.137 LoadBalancer Ingress: 104.198.205.71 diff --git a/content/en/examples/service/load-balancer-example.yaml b/content/en/examples/service/load-balancer-example.yaml new file mode 100644 index 0000000000..ea88fd1548 --- /dev/null +++ b/content/en/examples/service/load-balancer-example.yaml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: load-balancer-example + name: hello-world +spec: + replicas: 5 + selector: + matchLabels: + app.kubernetes.io/name: load-balancer-example + template: + metadata: + labels: + app.kubernetes.io/name: load-balancer-example + spec: + containers: + - image: gcr.io/google-samples/node-hello:1.0 + name: hello-world + ports: + - containerPort: 8080 From a0aa15ec7f48d8a110709cfd437037ab8ff5675d Mon Sep 17 00:00:00 2001 From: mhamdi semah Date: Wed, 19 Jun 2019 21:20:34 +0200 Subject: [PATCH 16/26] Issue with k8s.io/docs/setup/production-environment/container-runtimes/ (#14945) --- .../en/docs/setup/production-environment/container-runtimes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 5db75e8d53..e6362d9264 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -183,7 +183,7 @@ add-apt-repository ppa:projectatomic/ppa apt-get update # Install CRI-O -apt-get install cri-o-1.11 +apt-get install cri-o-1.13 {{< /tab >}} {{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} From 0c4d5b7506e0cea607083ba7d29ef90732047cc6 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Wed, 19 Jun 2019 20:22:33 +0100 Subject: [PATCH 17/26] Update link to Minikube from glossary (#14942) --- content/en/docs/reference/glossary/minikube.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/glossary/minikube.md b/content/en/docs/reference/glossary/minikube.md index 26e47f3412..175a493ebf 100755 --- a/content/en/docs/reference/glossary/minikube.md +++ b/content/en/docs/reference/glossary/minikube.md @@ -2,7 +2,7 @@ title: Minikube id: minikube date: 2018-04-12 -full_link: /docs/getting-started-guides/minikube/ +full_link: /docs/setup/learning-environment/minikube/ short_description: > A tool for running Kubernetes locally. @@ -16,4 +16,5 @@ tags: Minikube runs a single-node cluster inside a VM on your computer. - +You can use Minikube to +[try Kubernetes in a learning environment](/docs/setup/learning-environment/). 
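
As context for the glossary link updated above, here is a short sketch of the learning-environment flow it points to, assuming Minikube and `kubectl` are already installed locally:

```shell
# Minimal sketch, assuming Minikube and kubectl are installed locally.
minikube start        # boots a single-node cluster inside a local VM
kubectl get nodes     # should list a single node named "minikube" in Ready state
minikube stop         # shuts the cluster VM down when finished
```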

From 70aa8fa692b890766602423847398696bc7b1813 Mon Sep 17 00:00:00 2001
From: Naoki Oketani
Date: Thu, 20 Jun 2019 04:48:23 +0900
Subject: [PATCH 18/26] specify Pod not to display redundant messages (#14936)

---
 .../debug-application-cluster/determine-reason-pod-failure.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md
index 1353420d63..44b4225fdc 100644
--- a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md
+++ b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md
@@ -53,7 +53,7 @@ the container starts.
 
 1. Display detailed information about the Pod:
 
-       kubectl get pod --output=yaml
+       kubectl get pod termination-demo --output=yaml
 
    The output includes the "Sleep expired" message:

From 8d88bd0b39f6c9aae68c3f191e3bcdb8045f8317 Mon Sep 17 00:00:00 2001
From: Fabian Deutsch
Date: Wed, 19 Jun 2019 21:52:20 +0200
Subject: [PATCH 19/26] addons: Add KubeVirt (#14900)

KubeVirt is a virtualization add-on for Kubernetes; by now it's pretty mature, and it may be a good time to mention it on this page.
---
 content/en/docs/concepts/cluster-administration/addons.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md
index 7a32a7023c..2dd59dd00c 100644
--- a/content/en/docs/concepts/cluster-administration/addons.md
+++ b/content/en/docs/concepts/cluster-administration/addons.md
@@ -44,6 +44,10 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
 
 * [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
 * [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
 
+## Infrastructure
+
+* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) is an add-on to run virtual machines on Kubernetes. Usually run on bare-metal clusters.
+
 ## Legacy Add-ons
 
 There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.

From f2808242e6578eb44f267292ca6229de0269b0bc Mon Sep 17 00:00:00 2001
From: Joseph Irving
Date: Wed, 19 Jun 2019 20:54:22 +0100
Subject: [PATCH 20/26] clarify that job backofflimit causes running job pods to be terminated (#14899)

---
 .../workloads/controllers/jobs-run-to-completion.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
index 4d87586445..3f676c578d 100644
--- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
+++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -230,12 +230,13 @@ allows you to still view the logs of completed pods to check for errors, warning
The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`).
When you delete the job using `kubectl`, all the pods it created are deleted too. -By default, a Job will run uninterrupted unless a Pod fails, at which point the Job defers to the -`.spec.backoffLimit` described above. Another way to terminate a Job is by setting an active deadline. -Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds. +By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the +`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated. +Another way to terminate a Job is by setting an active deadline. +Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds. The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created. -Once a Job reaches `activeDeadlineSeconds`, all of its Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`. +Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`. Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached. From 4bc7a5f8b1f439d50e569072165057c0226a4867 Mon Sep 17 00:00:00 2001 From: James Woodall Date: Wed, 19 Jun 2019 20:56:18 +0100 Subject: [PATCH 21/26] Minikube IP Address fixup for ingress demonstration (#14906) * Updated IP Address for /etc/hosts Tested on Minikube for macOS. When using Minikube, the IP address listed in `kubectl get ingress` is the internal Minikube IP address and is not available on the web browser. Added advice to the user that when using Minikube, add the Minikube IP address to the Hosts file instead of the IP address displayed in `kubectl get ingress`. * Update ingress-minikube.md --- .../docs/tasks/access-application-cluster/ingress-minikube.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 9408c34561..0335c0978c 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -178,6 +178,8 @@ The following file is an Ingress resource that sends traffic to your Service via 1. Add the following line to the bottom of the `/etc/hosts` file. + {{< note >}}If you are running Minikube locally, use `minikube ip` to get the external IP. 
The IP address displayed within the ingress list will be the internal IP.{{< /note >}} + ``` 172.17.0.15 hello-world.info ``` From 52073165cbd08a42b6b7b0676d658f2e2b7c4ee9 Mon Sep 17 00:00:00 2001 From: 839 <8398a7@gmail.com> Date: Thu, 20 Jun 2019 05:02:20 +0900 Subject: [PATCH 22/26] update the upgrade method of kubelet / kubectl of worker nodes (#14871) * [#14870] update the upgrade method of kubelet / kubectl of worker nodes * [#14870] add link of how to update from 1.13 to 1.14 --- .../en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md | 3 ++- .../tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md | 5 +++-- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md index 5c2300214b..2e9b9613e5 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md @@ -22,7 +22,8 @@ For more version-specific upgrade guidance, see the following resources: * [1.10 to 1.11 upgrades](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/) * [1.11 to 1.12 upgrades](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12/) * [1.12 to 1.13 upgrades](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13/) - + * [1.13 to 1.14 upgrades](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/) + _For older versions, please refer to older documentation sets on the Kubernetes website._ In Kubernetes v1.11.0 and later, you can use `kubeadm upgrade diff` to see the changes that would be diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md index 6df1d9d0a9..014ac86383 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md @@ -316,8 +316,9 @@ without compromising the minimum required capacity for running your workloads. 
{{< tabs name="k8s_kubelet_and_kubectl" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} # replace x in 1.14.x-00 with the latest patch version - apt-get update - apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 + apt-mark unhold kubelet kubectl && \ + apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \ + apt-mark hold kubelet kubectl {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} # replace x in 1.14.x-0 with the latest patch version From 58f108a489d20af228391ae8b42e561586016662 Mon Sep 17 00:00:00 2001 From: John Law Date: Thu, 20 Jun 2019 04:04:20 +0800 Subject: [PATCH 23/26] Update shell-demo.yaml (#14892) --- content/en/examples/application/shell-demo.yaml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/en/examples/application/shell-demo.yaml b/content/en/examples/application/shell-demo.yaml index 2a7d274a64..9eb140d80f 100644 --- a/content/en/examples/application/shell-demo.yaml +++ b/content/en/examples/application/shell-demo.yaml @@ -12,3 +12,5 @@ spec: volumeMounts: - name: shared-data mountPath: /usr/share/nginx/html + hostNetwork: true + dnsPolicy: Default From 70eca876a4931edf0f7ee20910c2237bf657cba4 Mon Sep 17 00:00:00 2001 From: Julien Brochet <556303+aerialls@users.noreply.github.com> Date: Wed, 19 Jun 2019 22:06:50 +0200 Subject: [PATCH 24/26] feat(configMap): add a warning message for ConfigMap with a specific path (#14896) --- .../configure-pod-configmap.md | 54 ++++++++++--------- 1 file changed, 29 insertions(+), 25 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index c143946817..cd90e908b3 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -22,7 +22,7 @@ ConfigMaps allow you to decouple configuration artifacts from image content to k ## Create a ConfigMap -You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml` to create a ConfigMap. Note that `kubectl` starts to support `kustomization.yaml` since 1.14. +You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml` to create a ConfigMap. Note that `kubectl` starts to support `kustomization.yaml` since 1.14. ### Create a ConfigMap Using kubectl create configmap @@ -444,21 +444,21 @@ configmap/special-config-2-c92b5mmcf2 created {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}} Create the Pod: - + ```shell kubectl create -f https://kubernetes.io/examples/pods/pod-single-configmap-env-variable.yaml ``` - - Now, the Pod's output includes environment variable `SPECIAL_LEVEL_KEY=very`. - + + Now, the Pod's output includes environment variable `SPECIAL_LEVEL_KEY=very`. + ### Define container environment variables with data from multiple ConfigMaps - + * As with the previous example, create the ConfigMaps first. {{< codenew file="configmap/configmaps.yaml" >}} Create the ConfigMap: - + ```shell kubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml ``` @@ -468,43 +468,43 @@ configmap/special-config-2-c92b5mmcf2 created {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}} Create the Pod: - + ```shell kubectl create -f https://kubernetes.io/examples/pods/pod-multiple-configmap-env-variable.yaml ``` - Now, the Pod's output includes environment variables `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`. 
+ Now, the Pod's output includes environment variables `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`. -## Configure all key-value pairs in a ConfigMap as container environment variables +## Configure all key-value pairs in a ConfigMap as container environment variables {{< note >}} This functionality is available in Kubernetes v1.6 and later. {{< /note >}} -* Create a ConfigMap containing multiple key-value pairs. +* Create a ConfigMap containing multiple key-value pairs. {{< codenew file="configmap/configmap-multikeys.yaml" >}} Create the ConfigMap: - + ```shell kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml ``` * Use `envFrom` to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod. - + {{< codenew file="pods/pod-configmap-envFrom.yaml" >}} Create the Pod: - + ```shell kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml ``` - Now, the Pod's output includes environment variables `SPECIAL_LEVEL=very` and `SPECIAL_TYPE=charm`. + Now, the Pod's output includes environment variables `SPECIAL_LEVEL=very` and `SPECIAL_TYPE=charm`. -## Use ConfigMap-defined environment variables in Pod commands +## Use ConfigMap-defined environment variables in Pod commands You can use ConfigMap-defined environment variables in the `command` section of the Pod specification using the `$(VAR_NAME)` Kubernetes substitution syntax. @@ -524,23 +524,23 @@ produces the following output in the `test-container` container: very charm ``` -## Add ConfigMap data to a Volume +## Add ConfigMap data to a Volume -As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value. +As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value. The examples in this section refer to a ConfigMap named special-config, shown below. {{< codenew file="configmap/configmap-multikeys.yaml" >}} Create the ConfigMap: - + ```shell kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml ``` ### Populate a Volume with data stored in a ConfigMap -Add the ConfigMap name under the `volumes` section of the Pod specification. +Add the ConfigMap name under the `volumes` section of the Pod specification. This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this case, `/etc/config`). The `command` section references the `special.level` item stored in the ConfigMap. @@ -565,7 +565,7 @@ If there are some files in the `/etc/config/` directory, they will be deleted. ### Add ConfigMap data to a specific path in the Volume -Use the `path` field to specify the desired file path for specific ConfigMap items. +Use the `path` field to specify the desired file path for specific ConfigMap items. In this case, the `SPECIAL_LEVEL` item will be mounted in the `config-volume` volume at `/etc/config/keys`. {{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}} @@ -582,6 +582,10 @@ When the pod runs, the command `cat /etc/config/keys` produces the output below: very ``` +{{< caution >}} +Like before, all previous files in the `/etc/config/` directory will be deleted. 
+{{< /caution >}} + ### Project keys to specific paths and file permissions You can project keys to specific paths and specific permissions on a per-file @@ -636,9 +640,9 @@ data: ```shell kubectl get events ``` - + The output is similar to this: - ``` + ``` LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names. ``` @@ -646,11 +650,11 @@ data: - ConfigMaps reside in a specific [namespace](/docs/concepts/overview/working-with-objects/namespaces/). A ConfigMap can only be referenced by pods residing in the same namespace. - Kubelet doesn't support the use of ConfigMaps for pods not found on the API server. This includes pods created via the Kubelet's `--manifest-url` flag, `--config` flag, or the Kubelet REST API. - + {{< note >}} These are not commonly-used ways to create pods. {{< /note >}} - + {{% /capture %}} {{% capture whatsnext %}} From 3780c56243cc93ad4afe3391d59574f05cdbd28a Mon Sep 17 00:00:00 2001 From: CJ Cullen Date: Wed, 19 Jun 2019 13:08:51 -0700 Subject: [PATCH 25/26] Update vuln reporting language to match reality (#14885) We don't need to be the dispatch for all vulns, now that other projects are starting to have their own processes. But we don't want to discourage reports about stuff that isn't directly in k/k either. Saying that we usually disclose vuln reports in 7 days is just not true. But, I think it's still good to aim for 7 days when we aren't blocked on and coordinating release of patches. --- content/en/docs/reference/issues-security/security.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/issues-security/security.md b/content/en/docs/reference/issues-security/security.md index 15554958b3..d8105fa7cb 100644 --- a/content/en/docs/reference/issues-security/security.md +++ b/content/en/docs/reference/issues-security/security.md @@ -33,7 +33,9 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur - You think you discovered a potential security vulnerability in Kubernetes - You are unsure how a vulnerability affects Kubernetes -- You think you discovered a vulnerability in another project that Kubernetes depends on (e.g. docker, rkt, etcd) +- You think you discovered a vulnerability in another project that Kubernetes depends on + - For projects with their own vulnerability reporting and disclosure process, please report it directly there + ### When Should I NOT Report a Vulnerability? @@ -51,5 +53,5 @@ As the security issue moves from triage, to identified fix, to release planning ## Public Disclosure Timing -A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. As a basic default, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date. 
+A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date. {{% /capture %}} From 0e1e971f1381264b96319b7cf5ae13e1aa03cb6a Mon Sep 17 00:00:00 2001 From: Guy Templeton Date: Wed, 19 Jun 2019 21:10:51 +0100 Subject: [PATCH 26/26] Add links out to different metrics APIs (#14865) There's been a number of questions around the difference between the external.metrics.k8s.io and custom.metrics.k8s.io in #sig-autoscaling referring back to the HPA docs recently. Added links out to the design proposals for each and the relevant sections of the existing walkthrough docs. --- .../tasks/run-application/horizontal-pod-autoscale.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index e613c8a185..2ec086b20d 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -271,6 +271,14 @@ APIs, cluster administrators must ensure that: * The `--horizontal-pod-autoscaler-use-rest-clients` is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated. +For more information on these different metrics paths and how they differ please see the relevant design proposals for +[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md), +[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) +and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md). + +For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics) +and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). + {{% /capture %}} {{% capture whatsnext %}}
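
To make the `custom.metrics.k8s.io` / `external.metrics.k8s.io` distinction above concrete, here is a hedged sketch of a HorizontalPodAutoscaler that scales on an external metric. The Deployment name, metric name, and target value are illustrative assumptions, not taken from the patch:

```yaml
# Illustrative sketch only: scale a hypothetical "queue-worker" Deployment on
# an external metric served through external.metrics.k8s.io by a metrics
# adapter. The metric name and target value are assumptions for this example.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready
      target:
        type: AverageValue
        averageValue: "30"
```

With a target type of `AverageValue`, the controller divides the external metric's value by the current replica count before comparing it against the target, which suits queue-length style metrics.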