diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 01894721d1..c31b04d5b0 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -10,6 +10,7 @@ aliases: - kbarnard10 - mrbobbytables - onlydole + - sftim sig-docs-de-owners: # Admins for German content - bene2k1 - mkorbi @@ -175,14 +176,20 @@ aliases: # zhangxiaoyu-zidif sig-docs-pt-owners: # Admins for Portuguese content - femrtnz + - jailton - jcjesus - devlware - jhonmike + - rikatz + - yagonobre sig-docs-pt-reviews: # PR reviews for Portugese content - femrtnz + - jailton - jcjesus - devlware - jhonmike + - rikatz + - yagonobre sig-docs-vi-owners: # Admins for Vietnamese content - huynguyennovem - ngtuna diff --git a/README-es.md b/README-es.md index f5b1e870dd..3f00746125 100644 --- a/README-es.md +++ b/README-es.md @@ -30,6 +30,17 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io > Si prefiere levantar el sitio web sin utilizar **Docker**, puede seguir las instrucciones disponibles en la sección [Levantando kubernetes.io en local con Hugo](#levantando-kubernetesio-en-local-con-hugo). +**`Nota`: Para el procedimiento de construir una imagen de Docker e iniciar el servidor.** +El sitio web de Kubernetes utiliza Docsy Hugo theme. Se sugiere que se instale si aún no se ha hecho, los **submódulos** y otras dependencias de herramientas de desarrollo ejecutando el siguiente comando de `git`: + +```bash +# pull de los submódulos del repositorio +git submodule update --init --recursive --depth 1 + +``` + +Si identifica que `git` reconoce una cantidad innumerable de cambios nuevos en el proyecto, la forma más simple de solucionarlo es cerrando y volviendo a abrir el proyecto en el editor. Los submódulos son automáticamente detectados por `git`, pero los plugins usados por los editores pueden tener dificultades para ser cargados. + Una vez tenga Docker [configurado en su máquina](https://www.docker.com/get-started), puede construir la imagen de Docker `kubernetes-hugo` localmente ejecutando el siguiente comando en la raíz del repositorio: ```bash @@ -73,4 +84,4 @@ La participación en la comunidad de Kubernetes está regulada por el [Código d Kubernetes es posible gracias a la participación de la comunidad y la documentación es vital para facilitar el acceso al proyecto. -Agradecemos muchísimo sus contribuciones a nuestro sitio web y nuestra documentación. \ No newline at end of file +Agradecemos muchísimo sus contribuciones a nuestro sitio web y nuestra documentación. diff --git a/README-pt.md b/README-pt.md index 0992f6c045..e27bf544d1 100644 --- a/README-pt.md +++ b/README-pt.md @@ -144,6 +144,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist Esta solução funciona tanto para o MacOS Catalina quanto para o MacOS Mojave. +### Erro de "Out of Memory" + +Se você executar o comando `make container-serve` e retornar o seguinte erro: +``` +make: *** [container-serve] Error 137 +``` + +Verifique a quantidade de memória disponível para o agente de execução de contêiner. No caso do Docker Desktop para macOS, abra o menu "Preferences..." -> "Resources..." e tente disponibilizar mais memória. + # Comunidade, discussão, contribuição e apoio Saiba mais sobre a comunidade Kubernetes SIG Docs e reuniões na [página da comunidade](http://kubernetes.io/community/). 
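As a quick sanity check before raising the limit in Docker Desktop, a sketch of how one might confirm how much memory the container runtime currently has available; the `make container-serve` target is the one referenced in the README above, and the commands are illustrative rather than part of the documented procedure:

```bash
# Show the total memory (in bytes) available to the Docker daemon
docker info --format '{{.MemTotal}}'

# After allocating more memory in Docker Desktop (Preferences -> Resources),
# re-run the build that previously failed with "Error 137"
make container-serve
```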
diff --git a/assets/scss/_base.scss b/assets/scss/_base.scss index b1b112cb38..4113b49bee 100644 --- a/assets/scss/_base.scss +++ b/assets/scss/_base.scss @@ -869,3 +869,22 @@ body.td-documentation { display: none; } } + +// nav-tabs and tab-content +.nav-tabs { + border-bottom: none !important; +} + +.td-content .tab-content .highlight { + margin: 0; +} + +.tab-pane { + border-radius: 0.25rem; + padding: 0 16px 16px; + + border: 1px solid #dee2e6; + &:first-of-type.active { + border-top-left-radius: 0; + } +} diff --git a/config.toml b/config.toml index d77c315331..04329284d8 100644 --- a/config.toml +++ b/config.toml @@ -91,7 +91,7 @@ blog = "/:section/:year/:month/:day/:slug/" [outputs] home = [ "HTML", "RSS", "HEADERS" ] page = [ "HTML"] -section = [ "HTML"] +section = [ "HTML", "print" ] # Add a "text/netlify" media type for auto-generating the _headers file [mediaTypes] diff --git a/content/de/docs/setup/_index.md b/content/de/docs/setup/_index.md index d7f074efb3..2203fcc19c 100644 --- a/content/de/docs/setup/_index.md +++ b/content/de/docs/setup/_index.md @@ -9,7 +9,7 @@ content_type: concept Diese Sektion umfasst verschiedene Optionen zum Einrichten und Betrieb von Kubernetes. -Verschiedene Kubernetes Lösungen haben verschiedene Anforderungen: Einfache Wartung, Sicherheit, Kontrolle, verfügbare Resourcen und erforderliches Fachwissen zum Betrieb und zur Verwaltung dess folgende Diagramm zeigt die möglichen Abstraktionen eines Kubernetes-Clusters und ob eine Abstraktion selbst verwaltet oder von einem Anbieter verwaltet wird. +Verschiedene Kubernetes Lösungen haben verschiedene Anforderungen: Einfache Wartung, Sicherheit, Kontrolle, verfügbare Resourcen und erforderliches Fachwissen zum Betrieb und zur Verwaltung. Das folgende Diagramm zeigt die möglichen Abstraktionen eines Kubernetes-Clusters und ob eine Abstraktion selbst verwaltet oder von einem Anbieter verwaltet wird. Sie können einen Kubernetes-Cluster auf einer lokalen Maschine, Cloud, On-Prem Datacenter bereitstellen; oder wählen Sie einen verwalteten Kubernetes-Cluster. Sie können auch eine individuelle Lösung über eine grosse Auswahl an Cloud Anbietern oder Bare-Metal-Umgebungen nutzen. 
diff --git a/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md b/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md index 5d7f0383c5..722b1e59b0 100644 --- a/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md +++ b/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md @@ -20,21 +20,14 @@ For example, if we want to require scheduling on a node that is in the us-centra ``` -affinity: - - nodeAffinity: - - requiredDuringSchedulingIgnoredDuringExecution: - - nodeSelectorTerms: - - - matchExpressions: - - - key: "failure-domain.beta.kubernetes.io/zone" - - operator: In - - values: ["us-central1-a"] + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: "failure-domain.beta.kubernetes.io/zone" + operator: In + values: ["us-central1-a"] ``` @@ -44,21 +37,14 @@ Preferred rules mean that if nodes match the rules, they will be chosen first, a ``` -affinity: - - nodeAffinity: - - preferredDuringSchedulingIgnoredDuringExecution: - - nodeSelectorTerms: - - - matchExpressions: - - - key: "failure-domain.beta.kubernetes.io/zone" - - operator: In - - values: ["us-central1-a"] + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: "failure-domain.beta.kubernetes.io/zone" + operator: In + values: ["us-central1-a"] ``` @@ -67,21 +53,14 @@ Node anti-affinity can be achieved by using negative operators. So for instance ``` -affinity: - - nodeAffinity: - - requiredDuringSchedulingIgnoredDuringExecution: - - nodeSelectorTerms: - - - matchExpressions: - - - key: "failure-domain.beta.kubernetes.io/zone" - - operator: NotIn - - values: ["us-central1-a"] + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: "failure-domain.beta.kubernetes.io/zone" + operator: NotIn + values: ["us-central1-a"] ``` @@ -99,7 +78,7 @@ The kubectl command allows you to set taints on nodes, for example: ``` kubectl taint nodes node1 key=value:NoSchedule - ``` +``` creates a taint that marks the node as unschedulable by any pods that do not have a toleration for taint with key key, value value, and effect NoSchedule. (The other taint effects are PreferNoSchedule, which is the preferred version of NoSchedule, and NoExecute, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this @@ -107,15 +86,11 @@ creates a taint that marks the node as unschedulable by any pods that do not hav ``` -tolerations: - -- key: "key" - - operator: "Equal" - - value: "value" - - effect: "NoSchedule" + tolerations: + - key: "key" + operator: "Equal" + value: "value" + effect: "NoSchedule" ``` @@ -138,21 +113,13 @@ Let’s look at an example. 
Say you have front-ends in service S1, and they comm ``` affinity: - podAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: service - operator: In - values: [“S1”] - topologyKey: failure-domain.beta.kubernetes.io/zone ``` @@ -172,25 +139,15 @@ Here we have a Pod where we specify the schedulerName field: ``` apiVersion: v1 - kind: Pod - metadata: - name: nginx - labels: - app: nginx - spec: - schedulerName: my-scheduler - containers: - - name: nginx - image: nginx:1.10 ``` diff --git a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md index 024506a2de..a28196d568 100644 --- a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md +++ b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md @@ -176,7 +176,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti [Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location. -Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git. +Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git. 
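For instance, one way (among many) to pair an automated installer with version-controlled settings is to drive `kubeadm init` from a configuration file kept in a Git repository; the repository URL and file name below are purely illustrative:

```bash
# Illustrative only: fetch the repository that holds your cluster settings
git clone https://example.com/your-org/cluster-config.git

# Bootstrap the control plane from the version-controlled kubeadm configuration
sudo kubeadm init --config cluster-config/kubeadm-config.yaml
```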
## Outage recovery diff --git a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md index 247bfa2c8d..8aba0dc232 100644 --- a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md +++ b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md @@ -17,7 +17,7 @@ Let’s dive into the key features of this release: ## Simplified Kubernetes Cluster Management with kubeadm in GA -Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware and configuring the core Kubernetes components in a best-practice-manner to providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level system and this release is a significant step in that direction. +Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/) handles the bootstrapping of production clusters on existing hardware and configuring the core Kubernetes components in a best-practice-manner to providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level system and this release is a significant step in that direction. ## Container Storage Interface (CSI) Goes GA diff --git a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md index 8b31d1df0b..247748b6f0 100644 --- a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md +++ b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md @@ -66,6 +66,7 @@ Vagrant.configure("2") do |config| end end end +end ``` ### Step 2: Create an Ansible playbook for Kubernetes master. diff --git a/content/en/blog/_posts/2020-12-02-dockershim-faq.md b/content/en/blog/_posts/2020-12-02-dockershim-faq.md index f8dbe7f7c7..918a969e51 100644 --- a/content/en/blog/_posts/2020-12-02-dockershim-faq.md +++ b/content/en/blog/_posts/2020-12-02-dockershim-faq.md @@ -114,7 +114,7 @@ will have strictly better performance and less overhead. However, we encourage y to explore all the options from the [CNCF landscape] in case another would be an even better fit for your environment. -[CNCF landscape]: https://landscape.cncf.io/category=container-runtime&format=card-mode&grouping=category +[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category ### What should I look out for when changing CRI implementations? 
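One small, concrete check that can help when planning such a change (a sketch, not part of the FAQ itself) is to confirm which runtime each node reports before and after the switch:

```bash
# The CONTAINER-RUNTIME column reports each node's runtime (e.g. docker://, containerd://)
kubectl get nodes -o wide
```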
diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png new file mode 100644 index 0000000000..e83656a624 Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png differ diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png new file mode 100644 index 0000000000..7d96058165 Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png differ diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png new file mode 100644 index 0000000000..ba7fec5408 Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png differ diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md new file mode 100644 index 0000000000..345394f809 --- /dev/null +++ b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md @@ -0,0 +1,63 @@ +--- +layout: blog +title: "The Evolution of Kubernetes Dashboard" +date: 2021-03-09 +slug: the-evolution-of-kubernetes-dashboard +--- + +Authors: Marcin Maciaszczyk, Kubermatic & Sebastian Florek, Kubermatic + +In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it’s due time to celebrate “our baby” with a short recap. + +## How It All Began + +The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit from using the UI is to be able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users that are new to the Kubernetes ecosystem. + +The very [first commit](https://github.com/kubernetes/dashboard/commit/5861187fa807ac1cc2d9b2ac786afeced065076c) to the Kubernetes Dashboard was made by Filip Grządkowski from Google on 16th October 2015 – just a few months from the initial commit to the Kubernetes repository. Our initial commits go back to November 2015 ([Sebastian committed on 16 November 2015](https://github.com/kubernetes/dashboard/commit/09e65b6bb08c49b926253de3621a73da05e400fd); [Marcin committed on 23 November 2015](https://github.com/kubernetes/dashboard/commit/1da4b1c25ef040818072c734f71333f9b4733f55)). Since that time, we’ve become regular contributors to the project. For the next two years, we worked closely with the Googlers, eventually becoming main project maintainers ourselves. 
+ +{{< figure src="first-ui.png" caption="The First Version of the User Interface" >}} + +{{< figure src="along-the-way-ui.png" caption="Prototype of the New User Interface" >}} + +{{< figure src="current-ui.png" caption="The Current User Interface" >}} + +As you can see, the initial look and feel of the project were completely different from the current one. We have changed the design multiple times. The same has happened with the code itself. + +## Growing Up - The Big Migration + +At [the beginning of 2018](https://github.com/kubernetes/dashboard/pull/2727), we reached a point where AngularJS was getting closer to the end of its life, while the new Angular versions were published quite often. A lot of the libraries and the modules that we were using were following the trend. That forced us to spend a lot of the time rewriting the frontend part of the project to make it work with newer technologies. + +The migration came with many benefits like being able to refactor a lot of the code, introduce design patterns, reduce code complexity, and benefit from the new modules. However, you can imagine that the scale of the migration was huge. Luckily, there were a number of contributions from the community helping us with the resource support, new Kubernetes version support, i18n, and much more. After many long days and nights, we finally released the [first beta version](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0-beta1) in July 2019, followed by the [2.0 release](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0) in April 2020 — our baby had grown up. + +## Where Are We Standing in 2021? + +Due to limited resources, unfortunately, we were not able to offer extensive support for many different Kubernetes versions. So, we’ve decided to always try and support the latest Kubernetes version available at the time of the Kubernetes Dashboard release. The latest release, [Dashboard v2.2.0](https://github.com/kubernetes/dashboard/releases/tag/v2.2.0) provides support for Kubernetes v1.20. + +On top of that, we put in a great deal of effort into [improving resource support](https://github.com/kubernetes/dashboard/issues/5232). Meanwhile, we do offer support for most of the Kubernetes resources. Also, the Kubernetes Dashboard supports multiple languages: English, German, French, Japanese, Korean, Chinese (Traditional, Simplified, Traditional Hong Kong). Persian and Russian localizations are currently in progress. Moreover, we are working on the support for 3rd party themes and the design of the app in general. As you can see, quite a lot of things are going on. + +Luckily, we do have regular contributors with domain knowledge who are taking care of the project, updating the Helm charts, translations, Go modules, and more. But as always, there could be many more hands on deck. So if you are thinking about contributing to Kubernetes, keep us in mind ;) + +## What’s Next + +The Kubernetes Dashboard has been growing and prospering for more than 5 years now. It provides the community with an intuitive Web UI, thereby decreasing the complexity of Kubernetes and increasing its accessibility to new community members. We are proud of what the project has achieved so far, but this is by far not the end. 
These are our priorities for the future: + +* Keep providing support for the new Kubernetes versions +* Keep improving the support for the existing resources +* Keep working on auth system improvements +* [Rewrite the API to use gRPC and shared informers](https://github.com/kubernetes/dashboard/pull/5449): This will allow us to improve the performance of the application but, most importantly, to support live updates coming from the Kubernetes project. It is one of the most requested features from the community. +* Split the application into two containers, one with the UI and the second with the API running inside. + +## The Kubernetes Dashboard in Numbers + +* Initial commit made on October 16, 2015 +* Over 100 million pulls from Dockerhub since the v2 release +* 8 supported languages and the next 2 in progress +* Over 3360 closed PRs +* Over 2260 closed issues +* 100% coverage of the supported core Kubernetes resources +* Over 9000 stars on GitHub +* Over 237 000 lines of code + +## Join Us + +As mentioned earlier, we are currently looking for more people to help us further develop and grow the project. We are open to contributions in multiple areas, i.e., [issues with help wanted label](https://github.com/kubernetes/dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22). Please feel free to reach out via GitHub or the #sig-ui channel in the [Kubernetes Slack](https://slack.k8s.io/). diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index 2e7235a89f..a4814aab4b 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -11,20 +11,20 @@ aliases: -This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). +This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). ## Node to Control Plane -Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminate at the apiserver (none of the other control plane components are designed to expose remote services). The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. +Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. 
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. -The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. +The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. The control plane components also communicate with the cluster apiserver over the secure port. @@ -42,7 +42,7 @@ The connections from the apiserver to the kubelet are used for: * Attaching (through kubectl) to running pods. * Providing the kubelet's port-forwarding functionality. -These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and **unsafe** to run over untrusted and/or public networks. +These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks. To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. @@ -53,20 +53,20 @@ Finally, [Kubelet authentication and/or authorization](/docs/reference/command-l ### apiserver to nodes, pods, and services -The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks. +The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while the connection will be encrypted, it will not provide any guarantees of integrity. 
These connections **are not currently safe** to run over untrusted or public networks. ### SSH tunnels Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. -SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. +SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. ### Konnectivity service {{< feature-state for_k8s_version="v1.18" state="beta" >}} -As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server and the Konnectivity agents, running in the control plane network and the nodes network respectively. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. +As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections. Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index de08e48d8c..57414a415f 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -17,7 +17,7 @@ and contains the services necessary to run {{< glossary_tooltip text="Pods" term_id="pod" >}} Typically you have several nodes in a cluster; in a learning or resource-limited -environment, you might have just one. +environment, you might have only one node. The [components](/docs/concepts/overview/components/#node-components) on a node include the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a @@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}: 1. The kubelet on a node self-registers to the control plane -2. You, or another human user, manually add a Node object +2. You (or another human user) manually add a Node object After you create a Node object, or the kubelet on a node self-registers, the control plane checks whether the new Node object is valid. For example, if you @@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest: Kubernetes creates a Node object internally (the representation). Kubernetes checks that a kubelet has registered to the API server that matches the `metadata.name` -field of the Node. 
If the node is healthy (if all necessary services are running), -it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity +field of the Node. If the node is healthy (i.e. all necessary services are running), +then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity until it becomes healthy. {{< note >}} @@ -67,6 +67,16 @@ delete the Node object to stop that health checking. The name of a Node object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +### Node name uniqueness + +The [name](/docs/concepts/overview/working-with-objects/names#names) identifies a Node. Two Nodes +cannot have the same name at the same time. Kubernetes also assumes that a resource with the same +name is the same object. In case of a Node, it is implicitly assumed that an instance using the +same name will have the same state (e.g. network settings, root disk contents). This may lead to +inconsistencies if an instance was modified without changing its name. If the Node needs to be +replaced or updated significantly, the existing Node object needs to be removed from API server +first and re-added after the update. + ### Self-registration of Nodes When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to @@ -96,14 +106,14 @@ You can create and modify Node objects using When you want to create Node objects manually, set the kubelet flag `--register-node=false`. You can modify Node objects regardless of the setting of `--register-node`. -For example, you can set labels on an existing Node, or mark it unschedulable. +For example, you can set labels on an existing Node or mark it unschedulable. You can use labels on Nodes in conjunction with node selectors on Pods to control scheduling. For example, you can constrain a Pod to only be eligible to run on a subset of the available nodes. Marking a node as unschedulable prevents the scheduler from placing new pods onto -that Node, but does not affect existing Pods on the Node. This is useful as a +that Node but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance. To mark a Node unschedulable, run: @@ -179,14 +189,14 @@ The node condition is represented as a JSON object. For example, the following s ] ``` -If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node. +If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. 
In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node. The node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. You can see the pods that might be running on an unreachable node as being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator -may need to delete the node object by hand. Deleting the node object from Kubernetes causes -all the Pod objects running on the node to be deleted from the API server, and frees up their +may need to delete the node object by hand. Deleting the node object from Kubernetes causes +all the Pod objects running on the node to be deleted from the API server and frees up their names. The node lifecycle controller automatically creates @@ -199,7 +209,7 @@ for more details. ### Capacity and Allocatable {#capacity} -Describes the resources available on the node: CPU, memory and the maximum +Describes the resources available on the node: CPU, memory, and the maximum number of pods that can be scheduled onto the node. The fields in the capacity block indicate the total amount of resources that a @@ -225,18 +235,20 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on). The second is keeping the node controller's internal list of nodes up to date with the cloud provider's list of available machines. When running in a cloud -environment, whenever a node is unhealthy, the node controller asks the cloud +environment and whenever a node is unhealthy, the node controller asks the cloud provider if the VM for that node is still available. If not, the node controller deletes the node from its list of nodes. The third is monitoring the nodes' health. The node controller is -responsible for updating the NodeReady condition of NodeStatus to -ConditionUnknown when a node becomes unreachable (i.e. the node controller stops -receiving heartbeats for some reason, for example due to the node being down), and then later evicting -all the pods from the node (using graceful termination) if the node continues -to be unreachable. (The default timeouts are 40s to start reporting -ConditionUnknown and 5m after that to start evicting pods.) The node controller -checks the state of each node every `--node-monitor-period` seconds. +responsible for: +- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node + becomes unreachable, as the node controller stops receiving heartbeats for some + reason such as the node being down. +- Evicting all the pods from the node using graceful termination if + the node continues to be unreachable. The default timeouts are 40s to start + reporting ConditionUnknown and 5m after that to start evicting pods. + +The node controller checks the state of each node every `--node-monitor-period` seconds. #### Heartbeats @@ -252,13 +264,14 @@ of the node heartbeats as the cluster scales. The kubelet is responsible for creating and updating the `NodeStatus` and a Lease object. 
-- The kubelet updates the `NodeStatus` either when there is change in status, +- The kubelet updates the `NodeStatus` either when there is change in status or if there has been no update for a configured interval. The default interval - for `NodeStatus` updates is 5 minutes (much longer than the 40 second default - timeout for unreachable nodes). + for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default + timeout for unreachable nodes. - The kubelet creates and then updates its Lease object every 10 seconds (the default update interval). Lease updates occur independently from the - `NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds. + `NodeStatus` updates. If the Lease update fails, the kubelet retries with + exponential backoff starting at 200 milliseconds and capped at 7 seconds. #### Reliability @@ -269,23 +282,25 @@ from more than 1 node per 10 seconds. The node eviction behavior changes when a node in a given availability zone becomes unhealthy. The node controller checks what percentage of nodes in the zone are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at -the same time. If the fraction of unhealthy nodes is at least -`--unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced: -if the cluster is small (i.e. has less than or equal to -`--large-cluster-size-threshold` nodes - default 50) then evictions are -stopped, otherwise the eviction rate is reduced to -`--secondary-node-eviction-rate` (default 0.01) per second. The reason these -policies are implemented per availability zone is because one availability zone -might become partitioned from the master while the others remain connected. If -your cluster does not span multiple cloud provider availability zones, then -there is only one availability zone (the whole cluster). +the same time: +- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold` + (default 0.55), then the eviction rate is reduced. +- If the cluster is small (i.e. has less than or equal to + `--large-cluster-size-threshold` nodes - default 50), then evictions are stopped. +- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate` + (default 0.01) per second. + +The reason these policies are implemented per availability zone is because one +availability zone might become partitioned from the master while the others remain +connected. If your cluster does not span multiple cloud provider availability zones, +then there is only one availability zone (i.e. the whole cluster). A key reason for spreading your nodes across availability zones is so that the workload can be shifted to healthy zones when one entire zone goes down. -Therefore, if all nodes in a zone are unhealthy then the node controller evicts at +Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at the normal rate of `--node-eviction-rate`. The corner case is when all zones are completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a -case, the node controller assumes that there's some problem with master +case, the node controller assumes that there is some problem with master connectivity and stops all evictions until some connectivity is restored. 
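As a practical aside (not part of the rewritten text above), the heartbeat objects and node conditions described in this section can be inspected directly; `<node-name>` below is a placeholder:

```bash
# Node heartbeat Leases live in the kube-node-lease namespace
kubectl get leases --namespace kube-node-lease

# Inspect the Ready condition that the node controller reacts to
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
```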
The node controller is also responsible for evicting pods running on nodes with @@ -303,8 +318,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo ### Node capacity -Node objects track information about the Node's resource capacity (for example: the amount -of memory available, and the number of CPUs). +Node objects track information about the Node's resource capacity: for example, the amount +of memory available and the number of CPUs. Nodes that [self register](#self-registration-of-nodes) report their capacity during registration. If you [manually](#manual-node-administration) add a Node, then you need to set the node's capacity information when you add it. @@ -338,7 +353,7 @@ for more information. If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node. Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown. -When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases: +When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases: 1. Terminate regular pods running on the node. 2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node. @@ -359,4 +374,3 @@ For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods= * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. * Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). - diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index cac156b53b..7d5aec5078 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -45,7 +45,7 @@ Before choosing a guide, here are some considerations: ## Securing a cluster -* [Certificates](/docs/concepts/cluster-administration/certificates/) describes the steps to generate certificates using different tool chains. +* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains. * [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node. 
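Referring back to the graceful node shutdown discussion above, a minimal sketch of what the corresponding kubelet configuration might look like; the durations mirror the example in the text and are illustrative, so check the KubeletConfiguration reference for your version:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
# Total time the node delays shutdown; critical pods get the final 10s
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```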
diff --git a/content/en/docs/concepts/cluster-administration/certificates.md b/content/en/docs/concepts/cluster-administration/certificates.md index 6314420c01..6cce47f13c 100644 --- a/content/en/docs/concepts/cluster-administration/certificates.md +++ b/content/en/docs/concepts/cluster-administration/certificates.md @@ -4,249 +4,6 @@ content_type: concept weight: 20 --- - -When using client certificate authentication, you can generate certificates -manually through `easyrsa`, `openssl` or `cfssl`. - - - - - - -### easyrsa - -**easyrsa** can manually generate certificates for your cluster. - -1. Download, unpack, and initialize the patched version of easyrsa3. - - curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz - tar xzf easy-rsa.tar.gz - cd easy-rsa-master/easyrsa3 - ./easyrsa init-pki -1. Generate a new certificate authority (CA). `--batch` sets automatic mode; - `--req-cn` specifies the Common Name (CN) for the CA's new root certificate. - - ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass -1. Generate server certificate and key. - The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will - be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR - that is specified as the `--service-cluster-ip-range` argument for both the API server and - the controller manager component. The argument `--days` is used to set the number of days - after which the certificate expires. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. - - ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ - "IP:${MASTER_CLUSTER_IP},"\ - "DNS:kubernetes,"\ - "DNS:kubernetes.default,"\ - "DNS:kubernetes.default.svc,"\ - "DNS:kubernetes.default.svc.cluster,"\ - "DNS:kubernetes.default.svc.cluster.local" \ - --days=10000 \ - build-server-full server nopass -1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory. -1. Fill in and add the following parameters into the API server start parameters: - - --client-ca-file=/yourdirectory/ca.crt - --tls-cert-file=/yourdirectory/server.crt - --tls-private-key-file=/yourdirectory/server.key - -### openssl - -**openssl** can manually generate certificates for your cluster. - -1. Generate a ca.key with 2048bit: - - openssl genrsa -out ca.key 2048 -1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time): - - openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt -1. Generate a server.key with 2048bit: - - openssl genrsa -out server.key 2048 -1. Create a config file for generating a Certificate Signing Request (CSR). - Be sure to substitute the values marked with angle brackets (e.g. ``) - with real values before saving this to a file (e.g. `csr.conf`). - Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the - API server as described in previous subsection. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. 
- - [ req ] - default_bits = 2048 - prompt = no - default_md = sha256 - req_extensions = req_ext - distinguished_name = dn - - [ dn ] - C = - ST = - L = - O = - OU = - CN = - - [ req_ext ] - subjectAltName = @alt_names - - [ alt_names ] - DNS.1 = kubernetes - DNS.2 = kubernetes.default - DNS.3 = kubernetes.default.svc - DNS.4 = kubernetes.default.svc.cluster - DNS.5 = kubernetes.default.svc.cluster.local - IP.1 = - IP.2 = - - [ v3_ext ] - authorityKeyIdentifier=keyid,issuer:always - basicConstraints=CA:FALSE - keyUsage=keyEncipherment,dataEncipherment - extendedKeyUsage=serverAuth,clientAuth - subjectAltName=@alt_names -1. Generate the certificate signing request based on the config file: - - openssl req -new -key server.key -out server.csr -config csr.conf -1. Generate the server certificate using the ca.key, ca.crt and server.csr: - - openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ - -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf -1. View the certificate: - - openssl x509 -noout -text -in ./server.crt - -Finally, add the same parameters into the API server start parameters. - -### cfssl - -**cfssl** is another tool for certificate generation. - -1. Download, unpack and prepare the command line tools as shown below. - Note that you may need to adapt the sample commands based on the hardware - architecture and cfssl version you are using. - - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl - chmod +x cfssl - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson - chmod +x cfssljson - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo - chmod +x cfssl-certinfo -1. Create a directory to hold the artifacts and initialize cfssl: - - mkdir cert - cd cert - ../cfssl print-defaults config > config.json - ../cfssl print-defaults csr > csr.json -1. Create a JSON config file for generating the CA file, for example, `ca-config.json`: - - { - "signing": { - "default": { - "expiry": "8760h" - }, - "profiles": { - "kubernetes": { - "usages": [ - "signing", - "key encipherment", - "server auth", - "client auth" - ], - "expiry": "8760h" - } - } - } - } -1. Create a JSON config file for CA certificate signing request (CSR), for example, - `ca-csr.json`. Be sure to replace the values marked with angle brackets with - real values you want to use. - - { - "CN": "kubernetes", - "key": { - "algo": "rsa", - "size": 2048 - }, - "names":[{ - "C": "", - "ST": "", - "L": "", - "O": "", - "OU": "" - }] - } -1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`): - - ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca -1. Create a JSON config file for generating keys and certificates for the API - server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with - real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster - IP for the API server as described in previous subsection. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. - - { - "CN": "kubernetes", - "hosts": [ - "127.0.0.1", - "", - "", - "kubernetes", - "kubernetes.default", - "kubernetes.default.svc", - "kubernetes.default.svc.cluster", - "kubernetes.default.svc.cluster.local" - ], - "key": { - "algo": "rsa", - "size": 2048 - }, - "names": [{ - "C": "", - "ST": "", - "L": "", - "O": "", - "OU": "" - }] - } -1. 
Generate the key and certificate for the API server, which are by default - saved into file `server-key.pem` and `server.pem` respectively: - - ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ - --config=ca-config.json -profile=kubernetes \ - server-csr.json | ../cfssljson -bare server - - -## Distributing Self-Signed CA Certificate - -A client node may refuse to recognize a self-signed CA certificate as valid. -For a non-production deployment, or for a deployment that runs behind a company -firewall, you can distribute a self-signed CA certificate to all clients and -refresh the local list for valid certificates. - -On each client, perform the following operations: - -```bash -sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt -sudo update-ca-certificates -``` - -``` -Updating certificates in /etc/ssl/certs... -1 added, 0 removed; done. -Running hooks in /etc/ca-certificates/update.d.... -done. -``` - -## Certificates API - -You can use the `certificates.k8s.io` API to provision -x509 certificates to use for authentication as documented -[here](/docs/tasks/tls/managing-tls-in-a-cluster). - - +To learn how to generate certificates for your cluster, see [Certificates](/docs/tasks/administer-cluster/certificates/). diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 6cb4c386d3..3e94277d93 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -59,7 +59,7 @@ kube-apiserver \ ``` Alternatively, you can enable the v1alpha1 version of the API group -with `--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=true`. +with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`. The command-line flag `--enable-priority-and-fairness=false` will disable the API Priority and Fairness feature, even if other flags have enabled it. @@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health. histogram vector of queue lengths for the queues, broken down by the labels `priority_level` and `flow_schema`, as sampled by the enqueued requests. Each request that gets queued contributes one - sample to its histogram, reporting the length of the queue just + sample to its histogram, reporting the length of the queue immediately after the request was added. Note that this produces different statistics than an unbiased survey would. {{< note >}} diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index d9814a887a..f51911116d 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -45,9 +45,9 @@ kubectl apply -f https://k8s.io/examples/application/nginx/ `kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`. -It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse. +It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. 
If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together. -A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github: +A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub: ```shell kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml @@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git ## Updating labels Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. -For example, if you want to label all your nginx pods as frontend tier, simply run: +For example, if you want to label all your nginx pods as frontend tier, run: ```shell kubectl label pods -l app=nginx tier=fe @@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled ``` This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". -To see the pods you just labeled, run: +To see the pods you labeled, run: ```shell kubectl get pods -l app=nginx -L tier @@ -411,7 +411,7 @@ and ## Disruptive updates -In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file: +In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file: ```shell kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force @@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 deployment.apps/my-nginx scaled ``` -To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above. +To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands. ```shell kubectl edit deployment/my-nginx diff --git a/content/en/docs/concepts/cluster-administration/proxies.md b/content/en/docs/concepts/cluster-administration/proxies.md index 9bf204bd9f..ba86c969b8 100644 --- a/content/en/docs/concepts/cluster-administration/proxies.md +++ b/content/en/docs/concepts/cluster-administration/proxies.md @@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP, TCP and SCTP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1. 
A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md index 9a134dfc99..74281e1ec1 100644 --- a/content/en/docs/concepts/configuration/configmap.md +++ b/content/en/docs/concepts/configuration/configmap.md @@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData` fields. These fields accept key-value pairs as their values. Both the `data` field and the `binaryData` are optional. The `data` field is designed to contain UTF-8 byte sequences while the `binaryData` field is designed to -contain binary data. +contain binary data as base64-encoded strings. The name of a ConfigMap must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). @@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). -A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting +A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 2668050d26..fa683e97f3 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources. This is different from the `memory` and `cpu` resources. {{< /note >}} -CPU and memory are collectively referred to as *compute resources*, or just -*resources*. Compute +CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and @@ -554,7 +553,7 @@ extender. ### Consuming extended resources -Users can consume extended resources in Pod specs just like CPU and memory. +Users can consume extended resources in Pod specs like CPU and memory. The scheduler takes care of the resource accounting so that no more than the available amount is simultaneously allocated to Pods. diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index fce8a0c7a8..25cfb2e7f1 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -81,9 +81,9 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the - `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. 
If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container. -- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied. +- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag changes value. -- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `IfNotPresent` is applied. +- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`. - `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image. @@ -96,7 +96,7 @@ You should avoid using the `:latest` tag when deploying containers in production {{< /note >}} {{< note >}} -The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed. +The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed. {{< /note >}} ## Using kubectl diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 5a6a0dd092..7f47aeeaf5 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s ``` The `DATA` column shows the number of data items stored in the Secret. -In this case, `0` means we have just created an empty Secret. +In this case, `0` means we have created an empty Secret. ### Service account token Secrets @@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the Secret. The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). -A Secret can be either propagated by watch (default), ttl-based, or simply redirecting +A Secret can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the Secret is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache @@ -718,7 +718,7 @@ spec: #### Consuming Secret Values from environment variables -Inside a container that consumes a secret in an environment variables, the secret keys appear as +Inside a container that consumes a secret in the environment variables, the secret keys appear as normal environment variables containing the base64 decoded values of the secret data. 
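The pod manifest that the next sentence calls "the example above" is not reproduced in this excerpt; a minimal sketch of that pattern, with an illustrative Secret name and key, might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo \"$SECRET_USERNAME\" && sleep 3600"]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret   # the Secret to read from (illustrative)
          key: username    # the key whose decoded value becomes the variable
```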
This is the result of commands executed inside the container from the example above: diff --git a/content/en/docs/concepts/containers/container-environment.md b/content/en/docs/concepts/containers/container-environment.md index 7ec28e97b4..a1eba4d96d 100644 --- a/content/en/docs/concepts/containers/container-environment.md +++ b/content/en/docs/concepts/containers/container-environment.md @@ -40,6 +40,7 @@ as are any environment variables specified statically in the Docker image. ### Cluster information A list of all services that were running when a Container was created is available to that Container as environment variables. +This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services. Those environment variables match the syntax of Docker links. For a service named *foo* that maps to a Container named *bar*, diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index 96569f9518..49cc25ffbd 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -50,10 +50,11 @@ A more detailed description of the termination behavior can be found in ### Hook handler implementations Containers can access a hook by implementing and registering a handler for that hook. -There are two types of hook handlers that can be implemented for Containers: +There are three types of hook handlers that can be implemented for Containers: * Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container. +* TCP - Opens a TCP connection against a specific port on the Container. * HTTP - Executes an HTTP request against a specific endpoint on the Container. ### Hook handler execution diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 99698668c4..6d0db16fe8 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -49,16 +49,32 @@ Instead, specify a meaningful tag such as `v1.42.0`. ## Updating images -The default pull policy is `IfNotPresent` which causes the -{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip -pulling an image if it already exists. If you would like to always force a pull, -you can do one of the following: +When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}}, +{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}, Pod, or other +object that includes a Pod template, then by default the pull policy of all +containers in that pod will be set to `IfNotPresent` if it is not explicitly +specified. This policy causes the +{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an +image if it already exists. + +If you would like to always force a pull, you can do one of the following: - set the `imagePullPolicy` of the container to `Always`. -- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use. +- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use; + Kubernetes will set the policy to `Always`. - omit the `imagePullPolicy` and the tag for the image to use. - enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
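As a minimal sketch of the first option above (the image name is illustrative), the policy is set per container in the Pod template:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: force-pull-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1.42.0
    # Query the registry and pull the resolved image every time the container starts.
    imagePullPolicy: Always
```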
+{{< note >}} +The value of `imagePullPolicy` of the container is always set when the object is +first _created_, and is not updated if the image's tag later changes. + +For example, if you create a Deployment with an image whose tag is _not_ +`:latest`, and later update that Deployment's image to a `:latest` tag, the +`imagePullPolicy` field will _not_ change to `Always`. You must manually change +the pull policy of any object after its initial creation. +{{< /note >}} + When `imagePullPolicy` is defined without a specific value, it is also set to `Always`. ## Multi-architecture images with image indexes @@ -119,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private regist example, run these on your desktop/laptop: 1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC. - 1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use. + 1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use. 1. Get a list of your nodes; for example: - if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )` - if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )` diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md index 429912a9ed..cc5ba809ec 100644 --- a/content/en/docs/concepts/extend-kubernetes/_index.md +++ b/content/en/docs/concepts/extend-kubernetes/_index.md @@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 74147624f5..d9fe184f85 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -28,9 +28,7 @@ The most common way to implement the APIService is to run an *extension API serv Extension API servers should have low latency networking to and from the kube-apiserver. Discovery requests are required to round-trip from the kube-apiserver in five seconds or less. 
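For context on the aggregation layer discussed here, an extension API server is registered through an APIService object; a sketch with illustrative group, version, and Service names:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  # The in-cluster Service that fronts the extension API server.
  service:
    name: wardle-api
    namespace: wardle
    port: 443
  groupPriorityMinimum: 1000
  versionPriority: 15
  # In a real deployment, prefer supplying caBundle over skipping TLS verification.
  insecureSkipTLSVerify: true
```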
-If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it. You can also set the -`EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver -to disable the timeout restriction. This deprecated feature gate will be removed in a future release. +If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it. ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index dcfef3f6b6..5457dc9204 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi ## Custom controllers -On their own, custom resources simply let you store and retrieve structured data. +On their own, custom resources let you store and retrieve structured data. When you combine a custom resource with a *custom controller*, custom resources provide a true _declarative API_. @@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster: Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised. -Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended. +Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended. CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs. diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 7b53fa326f..0ec8bf81b1 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors: The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins: * `cni-bin-dir`: Kubelet probes this directory for plugins on startup -* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni". +* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`. 
## Network Plugin Requirements diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md index 84d14cee3e..2bdc74e7e9 100644 --- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md @@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index 0e9d227a53..323200ec3a 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -103,27 +103,28 @@ as well as keeping the existing service in good shape. ## Writing your own Operator {#writing-operator} If there isn't an Operator in the ecosystem that implements the behavior you -want, you can code your own. In [What's next](#what-s-next) you'll find a few -links to libraries and tools you can use to write your own cloud native -Operator. +want, you can code your own. You also implement an Operator (that is, a Controller) using any language / runtime that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/). +Following are a few libraries and tools you can use to write your own cloud native +Operator. 
+{{% thirdparty-content %}} + +* [kubebuilder](https://book.kubebuilder.io/) +* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) +* [Metacontroller](https://metacontroller.app/) along with WebHooks that + you implement yourself +* [Operator Framework](https://operatorframework.io) ## {{% heading "whatsnext" %}} * Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case -* Use existing tools to write your own operator, eg: - * using [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) - * using [kubebuilder](https://book.kubebuilder.io/) - * using [Metacontroller](https://metacontroller.app/) along with WebHooks that - you implement yourself - * using the [Operator Framework](https://operatorframework.io) * [Publish](https://operatorhub.io/) your operator for other people to use -* Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern +* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article). * Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators diff --git a/content/en/docs/concepts/extend-kubernetes/service-catalog.md b/content/en/docs/concepts/extend-kubernetes/service-catalog.md index 3aa9675788..af0271d9ab 100644 --- a/content/en/docs/concepts/extend-kubernetes/service-catalog.md +++ b/content/en/docs/concepts/extend-kubernetes/service-catalog.md @@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. -The application can simply use it as a service. +The application can access the message queue as a service. ## Architecture diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md index eb17e2dd7e..763308887d 100644 --- a/content/en/docs/concepts/overview/components.md +++ b/content/en/docs/concepts/overview/components.md @@ -51,11 +51,11 @@ the same machine, and do not run user containers on this machine. See {{< glossary_definition term_id="kube-controller-manager" length="all" >}} -These controllers include: +Some types of these controllers are: * Node controller: Responsible for noticing and responding when nodes go down. - * Replication controller: Responsible for maintaining the correct number of pods for every replication - controller object in the system. + * Job controller: Watches for Job objects that represent one-off tasks, then creates + Pods to run those tasks to completion. * Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods). * Service Account & Token controllers: Create default accounts and API access tokens for new namespaces. 
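To make the Job controller bullet above concrete, this is the kind of object it watches; a minimal Job sketch (the command is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never   # Job Pods run to completion instead of restarting
  backoffLimit: 4            # retry a failed Pod up to four times
```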
diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index 2feec43801..811d9fb3f7 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -52,7 +52,10 @@ If the prefix is omitted, the label Key is presumed to be private to the user. A The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components. -Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. +Valid label value: +* must be 63 characters or less (cannot be empty), +* must begin and end with an alphanumeric character (`[a-z0-9A-Z]`), +* could contain dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` : @@ -98,7 +101,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`) ### _Equality-based_ requirement _Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well. -Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example: +Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example: ``` environment = production diff --git a/content/en/docs/concepts/overview/working-with-objects/names.md b/content/en/docs/concepts/overview/working-with-objects/names.md index bc89d1c30a..8e74eb5c0b 100644 --- a/content/en/docs/concepts/overview/working-with-objects/names.md +++ b/content/en/docs/concepts/overview/working-with-objects/names.md @@ -24,6 +24,10 @@ For non-unique user-provided attributes, Kubernetes provides [labels](/docs/conc {{< glossary_definition term_id="name" length="all" >}} +{{< note >}} +In cases when objects represent a physical entity, like a Node representing a physical host, when the host is re-created under the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to inconsistencies. +{{< /note >}} + Below are three types of commonly used name constraints for resources. ### DNS Subdomain Names @@ -86,4 +90,3 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. * Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes. * See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document. - diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index f078cb8636..b7ae176d7c 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -28,7 +28,7 @@ resource can only be in one namespace. Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)). 
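For orientation, a namespace is itself a small API object; a sketch with an illustrative name and label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    # Optional label; useful when selecting namespaces by purpose.
    purpose: development
```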
-It is not necessary to use multiple namespaces just to separate slightly different +It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use [labels](/docs/concepts/overview/working-with-objects/labels) to distinguish resources within the same namespace. @@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `..svc.cluster.local`, which means -that if a container just uses ``, it will resolve to the service which +that if a container only uses ``, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 17f30906bf..f355a8f539 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n ### Create a policy and a pod Define the example PodSecurityPolicy object in a file. This is a policy that -simply prevents the creation of privileged pods. +prevents the creation of privileged pods. The name of a PodSecurityPolicy object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 0edb1be338..bc5eca8754 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -58,7 +58,7 @@ Neither contention nor changes to quota will affect already created resources. ## Enabling Resource Quota Resource Quota support is enabled by default for many Kubernetes distributions. It is -enabled when the API server `--enable-admission-plugins=` flag has `ResourceQuota` as +enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} `--enable-admission-plugins=` flag has `ResourceQuota` as one of its arguments. A resource quota is enforced in a particular namespace when there is a @@ -610,17 +610,28 @@ plugins: values: ["cluster-services"] ``` -Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present. -For example: +Then, create a resource quota object in the `kube-system` namespace: -```yaml - scopeSelector: - matchExpressions: - - scopeName: PriorityClass - operator: In - values: ["cluster-services"] +{{< codenew file="policy/priority-class-resourcequota.yaml" >}} + +```shell +$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system ``` +``` +resourcequota/pods-cluster-services created +``` + +In this case, a pod creation will be allowed if: + +1. the Pod's `priorityClassName` is not specified. +1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`. +1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created + in the `kube-system` namespace, and it has passed the resource quota check. 
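The `priority-class-resourcequota.yaml` example referenced above is not reproduced in this diff; a sketch of such a quota, scoped to the `cluster-services` priority class (the `pods` limit is illustrative), might be:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-cluster-services
spec:
  hard:
    pods: "10"   # illustrative limit
  # Only Pods in the cluster-services priority class count against this quota.
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["cluster-services"]
```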
+ +A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services` +and it is to be created in a namespace other than `kube-system`. + ## {{% heading "whatsnext" %}} - See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index abe4f4b9eb..c2c332afb0 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -5,24 +5,23 @@ reviewers: - bsalamat title: Assigning Pods to Nodes content_type: concept -weight: 50 +weight: 20 --- -You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} to only be able to run on particular -{{< glossary_tooltip text="Node(s)" term_id="node" >}}, or to prefer to run on particular nodes. -There are several ways to do this, and the recommended approaches all use -[label selectors](/docs/concepts/overview/working-with-objects/labels/) to make the selection. +You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of +{{< glossary_tooltip text="Node(s)" term_id="node" >}}. +There are several ways to do this and the recommended approaches all use +[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement -(e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.) -but there are some circumstances where you may want more control on a node where a pod lands, for example to ensure +(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.) +but there are some circumstances where you may want to control which node the pod deploys to - for example to ensure that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone. - ## nodeSelector @@ -120,12 +119,12 @@ pod is eligible to be scheduled on, based on labels on the node. There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively, -in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like +in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to `nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer -met, the pod will still continue to run on the node. In the future we plan to offer -`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution` +met, the pod continues to run on the node. 
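As a sketch of the "hard" variant described above (label key and values are illustrative), node affinity is declared in the Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: the Pod is only scheduled onto nodes matching these terms.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```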
In the future we plan to offer +`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution` except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements. Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs" @@ -261,7 +260,7 @@ for performance and security reasons, there are some constraints on topologyKey: and `preferredDuringSchedulingIgnoredDuringExecution`. 2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. -3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it. +3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it. 4. Except for the above cases, the `topologyKey` can be any legal label-key. In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces` diff --git a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md index a327f1de24..94bfaa1280 100644 --- a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md +++ b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md @@ -5,7 +5,7 @@ reviewers: - ahg-g title: Resource Bin Packing for Extended Resources content_type: concept -weight: 50 +weight: 30 --- diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index 932e076dfc..7936f9dedc 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded minimum value of 50 nodes. {{< note >}}In clusters with less than 50 feasible nodes, the scheduler still -checks all the nodes, simply because there are not enough feasible nodes to stop +checks all the nodes because there are not enough feasible nodes to stop the scheduler's search early. 
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your diff --git a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md index ae32f840fd..06ed901c2a 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -183,7 +183,7 @@ the three things: {{< note >}} While any plugin can access the list of "waiting" Pods and approve them -(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit +(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod is approved, it is sent to the [PreBind](#pre-bind) phase. {{< /note >}} diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md index 62dc273cf7..9d6c2b9617 100644 --- a/content/en/docs/concepts/security/controlling-access.md +++ b/content/en/docs/concepts/security/controlling-access.md @@ -28,7 +28,7 @@ a private certificate authority (CA), or based on a public key infrastructure li to a generally recognized CA. If your cluster uses a private certificate authority, you need a copy of that CA -certifcate configured into your `~/.kube/config` on the client, so that you can +certificate configured into your `~/.kube/config` on the client, so that you can trust the connection and be confident it was not intercepted. Your client can present a TLS client certificate at this stage. @@ -43,7 +43,7 @@ Authenticators are described in more detail in [Authentication](/docs/reference/access-authn-authz/authentication/). The input to the authentication step is the entire HTTP request; however, it typically -just examines the headers and/or client certificate. +examines the headers and/or client certificate. Authentication modules include client certificates, password, and plain tokens, bootstrap tokens, and JSON Web Tokens (used for service accounts). @@ -135,7 +135,7 @@ for the corresponding API object, and then written to the object store (shown as The previous discussion applies to requests sent to the secure port of the API server (the typical case). The API server can actually serve on 2 ports: -By default the Kubernetes API server serves HTTP on 2 ports: +By default, the Kubernetes API server serves HTTP on 2 ports: 1. `localhost` port: diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md index fe9129c109..b23a07c79a 100644 --- a/content/en/docs/concepts/security/overview.md +++ b/content/en/docs/concepts/security/overview.md @@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation | Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities. Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers. Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container. 
+Use a container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation ## Code @@ -152,3 +153,4 @@ Learn about related Kubernetes security topics: * [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane * [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) * [Secrets in Kubernetes](/docs/concepts/configuration/secret/) +* [Runtime class](/docs/concepts/containers/runtime-class) diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 18c1b7e862..a3c9ee138e 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -32,7 +32,7 @@ should range from highly restricted to highly flexible: - **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations. -- **_Baseline/Default_** - Minimally restrictive policy while preventing known privilege +- **_Baseline_** - Minimally restrictive policy while preventing known privilege escalations. Allows the default (minimally specified) Pod configuration. - **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices. @@ -48,9 +48,9 @@ mechanisms (such as gatekeeper), the privileged profile may be an absence of app rather than an instantiated policy. In contrast, for a deny-by-default mechanism (such as Pod Security Policy) the privileged policy should enable all controls (disable all restrictions). -### Baseline/Default +### Baseline -The Baseline/Default policy is aimed at ease of adoption for common containerized workloads while +The Baseline policy is aimed at ease of adoption for common containerized workloads while preventing known privilege escalations. This policy is targeted at application operators and developers of non-critical applications. The following listed controls should be enforced/disallowed: @@ -115,7 +115,9 @@ enforced/disallowed: AppArmor (optional) - On supported hosts, the 'runtime/default' AppArmor profile is applied by default. The default policy should prevent overriding or disabling the policy, or restrict overrides to an allowed set of profiles.
+ On supported hosts, the 'runtime/default' AppArmor profile is applied by default. + The baseline policy should prevent overriding or disabling the default AppArmor + profile, or restrict overrides to an allowed set of profiles.

Restricted Fields:
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']

Allowed Values: 'runtime/default', undefined
@@ -175,7 +177,7 @@ well as lower-trust users.The following listed controls should be enforced/disal Policy - Everything from the default profile. + Everything from the baseline profile. Volume Types @@ -275,7 +277,7 @@ of individual policies are not defined here. ## FAQ -### Why isn't there a profile between privileged and default? +### Why isn't there a profile between privileged and baseline? The three profiles defined here have a clear linear progression from most secure (restricted) to least secure (privileged), and cover a broad set of workloads. Privileges required above the baseline diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 402c3c57ca..14bc98101f 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -387,7 +387,7 @@ $ curl https://: -k

Welcome to nginx!

``` -Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: +Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell kubectl edit svc my-nginx diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 93474f24fa..2888064c2e 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -7,8 +7,8 @@ content_type: concept weight: 20 --- -This page provides an overview of DNS support by Kubernetes. - +Kubernetes creates DNS records for services and pods. You can contact +services with consistent DNS names instead of IP addresses. @@ -18,19 +18,47 @@ Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names. -### What things get DNS names? - Every Service defined in the cluster (including the DNS server itself) is -assigned a DNS name. By default, a client Pod's DNS search list will -include the Pod's own namespace and the cluster's default domain. This is best -illustrated by example: +assigned a DNS name. By default, a client Pod's DNS search list includes the +Pod's own namespace and the cluster's default domain. -Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running -in namespace `bar` can look up this service by simply doing a DNS query for -`foo`. A Pod running in namespace `quux` can look up this service by doing a -DNS query for `foo.bar`. +### Namespaces of Services -The following sections detail the supported record types and layout that is +A DNS query may return different results based on the namespace of the pod making +it. DNS queries that don't specify a namespace are limited to the pod's +namespace. Access services in other namespaces by specifying it in the DNS query. + +For example, consider a pod in a `test` namespace. A `data` service is in +the `prod` namespace. + +A query for `data` returns no results, because it uses the pod's `test` namespace. + +A query for `data.prod` returns the intended result, because it specifies the +namespace. + +DNS queries may be expanded using the pod's `/etc/resolv.conf`. Kubelet +sets this file for each pod. For example, a query for just `data` may be +expanded to `data.test.cluster.local`. The values of the `search` option +are used to expand queries. To learn more about DNS queries, see +[the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html) + +``` +nameserver 10.32.0.10 +search .svc.cluster.local svc.cluster.local cluster.local +options ndots:5 +``` + +In summary, a pod in the _test_ namespace can successfully resolve either +`data.prod` or `data.prod.cluster.local`. + +### DNS Records + +What objects get DNS records? + +1. Services +2. Pods + +The following sections detail the supported DNS record types and layout that is supported. Any other layout or names or queries that happen to work are considered implementation details and are subject to change without warning. 
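Tying back to the `data`/`prod` example above, the Service being resolved is an ordinary object in the `prod` namespace; a minimal sketch (ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: data
  namespace: prod
spec:
  selector:
    app: data
  ports:
  - port: 80          # port the DNS name resolves clients to
    targetPort: 8080  # port the backing Pods listen on
```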
For more up-to-date specification, see diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index d8b900847f..20cbcb5f33 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -163,7 +163,7 @@ status: loadBalancer: {} ``` -1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`. +1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`. {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 119958b915..d0405a060d 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -49,6 +49,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet * [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy. * The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an ingress controller for the [Traefik](https://traefik.io/traefik/) proxy. +* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane. * [Voyager](https://appscode.com/products/voyager) is an ingress controller for [HAProxy](https://www.haproxy.org/#desc). 
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 7a189a401b..b6be91cb9a 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service {{< codenew file="service/networking/test-ingress.yaml" >}} If you create it using `kubectl apply -f` you should be able to view the state -of the Ingress you just added: +of the Ingress you added: ```bash kubectl get ingress test-ingress diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index d36b76f55f..66976b23fb 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is no backend for the Service on a matching Node, then the second label will be considered, and so forth, until no labels remain. -If no match is found, the traffic will be rejected, just as if there were no +If no match is found, the traffic will be rejected, as if there were no backends for the Service at all. That is, endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no @@ -87,7 +87,7 @@ traffic as follows. * Service topology is not compatible with `externalTrafficPolicy=Local`, and therefore a Service cannot use both of these features. It is possible to use - both features in the same cluster on different Services, just not on the same + both features in the same cluster on different Services, only not on the same Service. * Valid topology keys are currently limited to `kubernetes.io/hostname`, diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 348c1616ce..c62d29ea51 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -74,8 +74,8 @@ a new instance. The name of a Service object must be a valid [DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names). -For example, suppose you have a set of Pods that each listen on TCP port 9376 -and carry a label `app=MyApp`: +For example, suppose you have a set of Pods where each listens on TCP port 9376 +and contains a label `app=MyApp`: ```yaml apiVersion: v1 @@ -430,7 +430,7 @@ Services by their DNS name. For example, if you have a Service called `my-service` in a Kubernetes namespace `my-ns`, the control plane and the DNS Service acting together create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace -should be able to find it by simply doing a name lookup for `my-service` +should be able to find the service by doing a name lookup for `my-service` (`my-service.my-ns` would also work). Pods in other namespaces must qualify the name as `my-service.my-ns`. These names @@ -463,7 +463,7 @@ selectors defined: For headless Services that define selectors, the endpoints controller creates `Endpoints` records in the API, and modifies the DNS configuration to return -records (addresses) that point directly to the `Pods` backing the `Service`. 
+A records (IP addresses) that point directly to the `Pods` backing the `Service`. ### Without selectors @@ -527,7 +527,7 @@ for NodePort use. Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even -to just expose one or more nodes' IPs directly. +to expose one or more nodes' IPs directly. Note that this Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) @@ -785,8 +785,7 @@ you can use the following annotations: ``` In the above example, if the Service contained three ports, `80`, `443`, and -`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just -be proxied HTTP. +`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP. From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: @@ -958,7 +957,8 @@ groups are modified with the following IP rules: | Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | |------|----------|---------|------------|---------------------| -| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\ | +| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\ | + | Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | | MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | @@ -1107,7 +1107,7 @@ but the current API requires it. ## Virtual IP implementation {#the-gory-details-of-virtual-ips} -The previous information should be sufficient for many people who just want to +The previous information should be sufficient for many people who want to use Services. However, there is a lot going on behind the scenes that may be worth understanding. @@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port. The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend. This means that Service owners can choose any port they want without risk of -collision. Clients can simply connect to an IP and port, without being aware +collision. Clients can connect to an IP and port, without being aware of which Pods they are actually accessing. #### iptables diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md index bedd431dc9..63263fb370 100644 --- a/content/en/docs/concepts/storage/dynamic-provisioning.md +++ b/content/en/docs/concepts/storage/dynamic-provisioning.md @@ -80,7 +80,7 @@ parameters: Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation. However, this annotation -is deprecated since v1.6. Users now can and should instead use the +is deprecated since v1.9. 
Users now can and should instead use the `storageClassName` field of the `PersistentVolumeClaim` object. The value of this field must match the name of a `StorageClass` configured by the administrator (see [below](#enabling-dynamic-provisioning)). diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index 9b0b9464f5..bc391a3f36 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -135,8 +135,9 @@ As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/pol This feature requires the `GenericEphemeralVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. Because this is an alpha feature, it is disabled by default. -Generic ephemeral volumes are similar to `emptyDir` volumes, just more +Generic ephemeral volumes are similar to `emptyDir` volumes, except more flexible: + - Storage can be local or network-attached. - Volumes can have a fixed size that Pods are not able to exceed. - Volumes may have some initial data, depending on the driver and diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 3423c575db..54e42bae9e 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)). -While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. +While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). @@ -487,7 +487,7 @@ The following volume types support mount options: * VsphereVolume * iSCSI -Mount options are not validated, so mount will simply fail if one is invalid. +Mount options are not validated. If a mount option is invalid, the mount fails. In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead of the `mountOptions` attribute. 
This annotation is still working; however, diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index e6846c7ea4..0abdf6b545 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created. -Administrators can specify a default StorageClass just for PVCs that don't +Administrators can specify a default StorageClass only for PVCs that don't request any particular class to bind to: see the [PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) for details. @@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class. If the volume plugin does not support mount options but mount options are specified, provisioning will fail. Mount options are not validated on either -the class or PV, so mount of the PV will simply fail if one is invalid. +the class or PV. If a mount option is invalid, the PV mount fails. ### Volume Binding Mode @@ -569,7 +569,7 @@ parameters: `"http(s)://api-server:7860"` * `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``:`` pair or if you want to specify multiple - registries you just have to put a comma between them e.q. + registries, put a comma between them. ``:,:,:``. The host can be an IP address or if you have a working DNS you can also provide the DNS names. diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md index ac8d16041d..9e59560d1d 100644 --- a/content/en/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md @@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume. -The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use). +The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use). Users need to be aware of the following when using this feature: @@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature: ## Provisioning -Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. +Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. ```yaml apiVersion: v1 diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index f8475d6284..9743ec34f8 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -34,10 +34,11 @@ Kubernetes supports many types of volumes. 
A {{< glossary_tooltip term_id="pod" >}} can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. Consequently, a volume outlives any containers -that run within the pod, and data is preserved across container restarts. When a -pod ceases to exist, the volume is destroyed. +that run within the pod, and data is preserved across container restarts. When a pod +ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not +destroy persistent volumes. -At its core, a volume is just a directory, possibly with some data in it, which +At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. @@ -106,6 +107,8 @@ spec: fsType: ext4 ``` +If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on. #### AWS EBS CSI migration {{< feature-state for_k8s_version="v1.17" state="beta" >}} @@ -929,7 +932,7 @@ GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte ### rbd An `rbd` volume allows a -[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/) (RBD) volume to mount into your +[Rados Block Device](https://docs.ceph.com/en/latest/rbd/) (RBD) volume to mount into your Pod. Unlike `emptyDir`, which is erased when a pod is removed, the contents of an `rbd` volume are preserved and the volume is unmounted. This means that a RBD volume can be pre-populated with data, and that data can diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index d8e7646aed..22b95255c5 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -47,14 +47,14 @@ In this example: * A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. * The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field. * The `.spec.selector` field defines how the Deployment finds which Pods to manage. - In this case, you simply select a label that is defined in the Pod template (`app: nginx`). + In this case, you select a label that is defined in the Pod template (`app: nginx`). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule. {{< note >}} The `.spec.selector.matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`, - whose key field is "key" the operator is "In", and the values array contains only "value". + whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}} @@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment: ```shell kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` - or simply use the following command: - + + or use the following command: + ```shell kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record ``` - The output is similar to this: + The output is similar to: + ``` deployment.apps/nginx-deployment image updated ``` @@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment: kubectl edit deployment.v1.apps/nginx-deployment ``` - The output is similar to this: + The output is similar to: + ``` deployment.apps/nginx-deployment edited ``` @@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment: ``` The output is similar to this: + ``` Waiting for rollout to finish: 2 out of 3 new replicas have been updated... ``` + or + ``` deployment "nginx-deployment" successfully rolled out ``` @@ -212,10 +218,11 @@ Get more details on your updated Deployment: * After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`. The output is similar to this: - ``` - NAME READY UP-TO-DATE AVAILABLE AGE - nginx-deployment 3/3 3 3 36s - ``` + + ```ini + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 36s + ``` * Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. @@ -701,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. -* For example, with a Deployment that was just created: +* For example, with a Deployment that was created: Get the Deployment details: ```shell kubectl get deploy diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 2c99a704d1..be4393d891 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -99,7 +99,7 @@ pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression -that just gets the name from each Pod in the returned list. +with the name from each Pod in the returned list. View the standard output of one of the pods: diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index e45d20c8f7..4766c56578 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods ## Writing a ReplicaSet manifest As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. -For ReplicaSets, the kind is always just ReplicaSet. +For ReplicaSets, the `kind` is always a ReplicaSet. In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. Refer to the first lines of the `frontend.yaml` example for guidance. 
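As a minimal sketch of what those first lines look like in practice (the object name, labels, and image below are placeholders, not values taken from `frontend.yaml`):

```shell
# Hypothetical ReplicaSet skeleton showing the required apiVersion, kind, and
# metadata fields, plus a matching selector and Pod template.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-replicaset
  labels:
    tier: example
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: example
  template:
    metadata:
      labels:
        tier: example
    spec:
      containers:
      - name: app
        image: nginx
EOF
```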
@@ -237,7 +237,7 @@ The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-temp required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod. -For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) field, +For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field, `.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default. ### Pod Selector diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index a6427bedb3..9bfb1264a4 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m Here, the selector is the same as the selector for the ReplicationController (seen in the `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option -specifies an expression that just gets the name from each pod in the returned list. - +specifies an expression with the name from each pod in the returned list. ## Writing a ReplicationController Spec @@ -180,16 +179,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted. -When using the REST API or go client library, you need to do the steps explicitly (scale replicas to +When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete the ReplicationController). -### Deleting just a ReplicationController +### Deleting only a ReplicationController You can delete a ReplicationController without affecting any of its pods. Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). -When using the REST API or go client library, simply delete the ReplicationController object. +When using the REST API or Go client library, you can delete the ReplicationController object. Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. @@ -240,7 +239,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic ## Responsibilities of the ReplicationController -The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. +The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. 
In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)). diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 3d4248443d..d0a8bbf5a9 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -75,7 +75,7 @@ Here are some ways to mitigate involuntary disruptions: and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) - For even higher availability when running replicated applications, spread applications across racks (using - [anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) + [anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)) or across zones (if using a [multi-zone cluster](/docs/setup/multiple-zones).) @@ -104,7 +104,7 @@ ensure that the number of replicas serving load never falls below a certain percentage of the total. Cluster managers and hosting providers should use tools which -respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) +respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#eviction-api) instead of directly deleting pods or deployments. 
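As a rough sketch of such a budget (the name, label, and replica count are placeholders; on clusters newer than roughly v1.21 the `apiVersion` would be `policy/v1`):

```shell
# Hypothetical PodDisruptionBudget: voluntary disruptions (evictions) are only
# allowed while at least 2 Pods labelled app=example remain available.
kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example
EOF
```

An eviction that would violate this budget is refused rather than carried out.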
For example, the `kubectl drain` subcommand lets you mark a node as going out of diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md index 49894937e3..011fa4adf4 100644 --- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md @@ -103,7 +103,7 @@ the ephemeral container to add as an `EphemeralContainers` list: "apiVersion": "v1", "kind": "EphemeralContainers", "metadata": { - "name": "example-pod" + "name": "example-pod" }, "ephemeralContainers": [{ "command": [ diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index df83f7c5f3..778bee6c02 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -38,8 +38,7 @@ If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that no are [scheduled for deletion](#pod-garbage-collection) after a timeout period. Pods do not, by themselves, self-heal. If a Pod is scheduled to a -{{< glossary_tooltip text="node" term_id="node" >}} that then fails, -or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't +{{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a {{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of @@ -313,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe. {{< note >}} -If you just want to be able to drain requests when the Pod is deleted, you do not +If you want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the containers in the Pod diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 48c48b6ed3..ad0b9420c5 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -61,7 +61,7 @@ Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs. -Members of `@kubernetes/website-maintainers` can create new development branches to coordinate translation efforts. +Members of `@kubernetes/website-maintainers` can create new localization branches to coordinate translation efforts. Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. @@ -205,14 +205,20 @@ To ensure accuracy in grammar and meaning, members of your localization team sho ### Source files -Localizations must be based on the English files from the most recent release, {{< latest-version >}}. +Localizations must be based on the English files from a specific release targeted by the localization team. +Each localization team can decide which release to target which is referred to as the _target version_ below. 
-To find source files for the most recent release: +To find source files for your target version: 1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website. -2. Select the `release-1.X` branch for the most recent version. +2. Select a branch for your target version from the following table: + Target version | Branch + -----|----- + Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}}) + Latest version | [`master`](https://github.com/kubernetes/website/tree/master) + Previous version | `release-*.**` -The latest version is {{< latest-version >}}, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}). +The `master` branch holds content for the current release `{{< latest-version >}}`. The release team will create `{{< release-branch >}}` branch shortly before the next release: v{{< skew nextMinorVersion >}}. ### Site strings in i18n @@ -239,11 +245,11 @@ Some language teams have their own language-specific style guide and glossary. F ## Branching strategy -Because localization projects are highly collaborative efforts, we encourage teams to work in shared development branches. +Because localization projects are highly collaborative efforts, we encourage teams to work in shared localization branches. -To collaborate on a development branch: +To collaborate on a localization branch: -1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a development branch from a source branch on https://github.com/kubernetes/website. +1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a localization branch from a source branch on https://github.com/kubernetes/website. Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository. @@ -251,25 +257,31 @@ To collaborate on a development branch: `dev--.` - For example, an approver on a German localization team opens the development branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12. + For example, an approver on a German localization team opens the localization branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12. -2. Individual contributors open feature branches based on the development branch. +2. Individual contributors open feature branches based on the localization branch. For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`. -3. Approvers review and merge feature branches into the development branch. +3. Approvers review and merge feature branches into the localization branch. -4. Periodically, an approver merges the development branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request. +4. Periodically, an approver merges the localization branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request. -Repeat steps 1-4 as needed until the localization is complete. 
For example, subsequent German development branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc. +Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc. -Teams must merge localized content into the same release branch from which the content was sourced. For example, a development branch sourced from {{< release-branch >}} must be based on {{< release-branch >}}. +Teams must merge localized content into the same branch from which the content was sourced. -An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch. +For example: +- a localization branch sourced from `master` must be merged into `master`. +- a localization branch sourced from `release-1.19` must be merged into `release-1.19`. -At the beginning of every team milestone, it's helpful to open an issue [comparing upstream changes](https://github.com/kubernetes/website/blob/master/scripts/upstream_changes.py) between the previous development branch and the current development branch. +{{< note >}} +If your localization branch was created from `master` branch but it is not merged into `master` before new release branch `{{< release-branch >}}` created, merge it into both `master` and new release branch `{{< release-branch >}}`. To merge your localization branch into new release branch `{{< release-branch >}}`, you need to switch upstream branch of your localization branch to `{{< release-branch >}}`. +{{< /note >}} - While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required. +At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch. + +While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required. For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo). @@ -290,5 +302,3 @@ Once a localization meets requirements for workflow and minimum output, SIG docs - Enable language selection on the website - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). 
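Looking back at the branching strategy above, a contributor-side sketch of that flow might look like the following (the remote name `upstream` and the branch `dev-1.20-de.1` are only examples):

```shell
# Hypothetical workflow: base a feature branch on a shared localization branch,
# assuming "upstream" points at https://github.com/kubernetes/website.
git fetch upstream
git checkout -b translate-concepts-overview upstream/dev-1.20-de.1

# ...translate files under content/de/...

git commit -a -m "Translate concepts/overview"
git push origin translate-concepts-overview
# Then open a pull request from this branch against kubernetes:dev-1.20-de.1.
```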
- - diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 6628925694..8f2f6baaf7 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -39,8 +39,8 @@ Anyone can write a blog post and submit it for review. - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft. - Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog. - Blog posts should be original content - - The official blog is not for repurposing existing content from a third party as new content. - - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog does allow commercial use of the content for commercial purposes, just not the other way around. + - The official blog is not for repurposing existing content from a third party as new content. + - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around. - Blog posts should aim to be future proof - Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader. - It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post. diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md index a0e3600562..268c447402 100644 --- a/content/en/docs/contribute/new-content/new-features.md +++ b/content/en/docs/contribute/new-content/new-features.md @@ -77,9 +77,8 @@ merged. Keep the following in mind: Alpha features. - It's hard to test (and therefore to document) a feature that hasn't been merged, or is at least considered feature-complete in its PR. -- Determining whether a feature needs documentation is a manual process and - just because a feature is not marked as needing docs doesn't mean it doesn't - need them. +- Determining whether a feature needs documentation is a manual process. Even if + a feature is not marked as needing docs, you may need to document the feature. ## For developers or other SIG members diff --git a/content/en/docs/contribute/participate/roles-and-responsibilities.md b/content/en/docs/contribute/participate/roles-and-responsibilities.md index 8ebe7a1303..4e8632ac0b 100644 --- a/content/en/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/en/docs/contribute/participate/roles-and-responsibilities.md @@ -52,7 +52,7 @@ Members can: {{< note >}} Using `/lgtm` triggers automation. If you want to provide non-binding - approval, simply commenting "LGTM" works too! + approval, commenting "LGTM" works too! {{< /note >}} - Use the `/hold` comment to block merging for a pull request diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 0047799fb0..5931422e95 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -17,8 +17,6 @@ Changes to the style guide are made by SIG Docs as a group. 
To propose a change or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the discussion. - - {{< note >}} @@ -48,12 +46,11 @@ When you refer specifically to interacting with an API object, use [UpperCamelCa When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization). -You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence. +You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence. -Don't split the API object name into separate words. For example, use -PodTemplateList, not Pod Template List. +Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List. -The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects. +The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code). {{< table caption = "Do and Don't - Use Pascal case for API objects" >}} Do | Don't @@ -65,17 +62,18 @@ Every ConfigMap object is part of a namespace. | Every configMap object is part For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API. {{< /table >}} - ### Use angle brackets for placeholders Use angle brackets for placeholders. Tell the reader what a placeholder -represents. +represents, for example: -1. Display information about a pod: +Display information about a pod: - kubectl describe pod -n +```shell +kubectl describe pod -n +``` - If the namespace of the pod is `default`, you can omit the '-n' parameter. +If the namespace of the pod is `default`, you can omit the '-n' parameter. ### Use bold for user interface elements @@ -189,7 +187,6 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1 Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`. {{< /table >}} - ## Code snippet formatting ### Don't include the command prompt @@ -200,17 +197,20 @@ Do | Don't kubectl get pods | $ kubectl get pods {{< /table >}} - ### Separate commands from output Verify that the pod is running on your chosen node: - kubectl get pods --output=wide +```shell +kubectl get pods --output=wide +``` The output is similar to this: - NAME READY STATUS RESTARTS AGE IP NODE - nginx 1/1 Running 0 13s 10.200.0.4 worker0 +```console +NAME READY STATUS RESTARTS AGE IP NODE +nginx 1/1 Running 0 13s 10.200.0.4 worker0 +``` ### Versioning Kubernetes examples @@ -263,17 +263,17 @@ Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create d 2. Use the following syntax to apply a style: - ``` - {{}} - No need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.) - {{}} - ``` + ```none + {{}} + No need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.) + {{}} + ``` -The output is: + The output is: -{{< note >}} -The prefix you choose is the same text for the tag. -{{< /note >}} + {{< note >}} + The prefix you choose is the same text for the tag. 
+ {{< /note >}} ### Note @@ -403,7 +403,7 @@ The output is: 1. Prepare the batter, and pour into springform pan. - {{< note >}}Grease the pan for best results.{{< /note >}} + {{< note >}}Grease the pan for best results.{{< /note >}} 1. Bake for 20-25 minutes or until set. @@ -417,13 +417,14 @@ Shortcodes inside include statements will break the build. You must insert them {{}} ``` - ## Markdown elements ### Line breaks + Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level (or the title) without any preceding paragraphs or texts. A two line spacing helps visualize the overall structure of content in a code editor better. ### Headings + People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices, they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest. {{< table caption = "Do and Don't - Headings" >}} @@ -453,24 +454,24 @@ Write hyperlinks that give you context for the content they link to. For example Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `Visit our tutorial!`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}` {{< /table >}} - ### Lists + Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list. Website navigation links can also be marked up as list items; after all they are nothing but a group of related links. - - End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences. +- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences. - {{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}} + {{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}} - - Use the number one (`1.`) for ordered lists. +- Use the number one (`1.`) for ordered lists. - - Use (`+`), (`*`), or (`-`) for unordered lists. +- Use (`+`), (`*`), or (`-`) for unordered lists. - - Leave a blank line after each list. +- Leave a blank line after each list. - - Indent nested lists with four spaces (for example, ⋅⋅⋅⋅). +- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅). - - List items may consist of multiple paragraphs. 
Each subsequent paragraph in a list item must be indented by either four spaces or one tab. +- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab. ### Tables @@ -490,7 +491,6 @@ Do | Don't This command starts a proxy. | This command will start a proxy. {{< /table >}} - Exception: Use future or past tense if it is required to convey the correct meaning. @@ -503,7 +503,6 @@ You can explore the API using a browser. | The API can be explored using a brows The YAML file specifies the replica count. | The replica count is specified in the YAML file. {{< /table >}} - Exception: Use passive voice if active voice leads to an awkward construction. ### Use simple and direct language @@ -527,7 +526,6 @@ You can create a Deployment by ... | We'll create a Deployment by ... In the preceding output, you can see... | In the preceding output, we can see ... {{< /table >}} - ### Avoid Latin phrases Prefer English terms over Latin abbreviations. @@ -539,7 +537,6 @@ For example, ... | e.g., ... That is, ...| i.e., ... {{< /table >}} - Exception: Use "etc." for et cetera. ## Patterns to avoid @@ -557,7 +554,6 @@ Kubernetes provides a new feature for ... | We provide a new feature ... This page teaches you how to use pods. | In this page, we are going to learn about pods. {{< /table >}} - ### Avoid jargon and idioms Some readers speak English as a second language. Avoid jargon and idioms to help them understand better. @@ -569,13 +565,16 @@ Internally, ... | Under the hood, ... Create a new cluster. | Turn up a new cluster. {{< /table >}} - ### Avoid statements about the future Avoid making promises or giving hints about the future. If you need to talk about an alpha feature, put the text under a heading that identifies it as alpha information. +An exception to this rule is documentation about announced deprecations +targeting removal in future versions. One example of documentation like this +is the [Deprecated API migration guide](/docs/reference/using-api/deprecation-guide/). + ### Avoid statements that will soon be out of date Avoid words like "currently" and "new." A feature that is new today might not be @@ -588,6 +587,18 @@ In version 1.4, ... | In the current version, ... The Federation feature provides ... | The new Federation feature provides ... {{< /table >}} +### Avoid words that assume a specific level of understanding + +Avoid words such as "just", "simply", "easy", "easily", or "simple". These words do not add value. + +{{< table caption = "Do and Don't - Avoid insensitive words" >}} +Do | Don't +:--| :----- +Include one command in ... | Include just one command in ... +Run the container ... | Simply run the container ... +You can easily remove ... | You can remove ... +These simple steps ... | These steps ... +{{< /table >}} ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 5885618102..a261b8a947 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -6,8 +6,10 @@ linkTitle: "Reference" main_menu: true weight: 70 content_type: concept +no_list: true --- + This section of the Kubernetes documentation contains references. @@ -18,11 +20,17 @@ This section of the Kubernetes documentation contains references. 
## API Reference +* [Glossary](/docs/reference/glossary/) - a comprehensive, standardized list of Kubernetes terminology + + + * [Kubernetes API Reference](/docs/reference/kubernetes-api/) * [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) * [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes. +* [API access control](/docs/reference/access-authn-authz/) - details on how Kubernetes controls API access +* [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/) -## API Client Libraries +## Officially supported client libraries To call the Kubernetes API from a programming language, you can use [client libraries](/docs/reference/using-api/client-libraries/). Officially supported @@ -32,22 +40,28 @@ client libraries: - [Kubernetes Python client library](https://github.com/kubernetes-client/python) - [Kubernetes Java client library](https://github.com/kubernetes-client/java) - [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript) +- [Kubernetes Dotnet client library](https://github.com/kubernetes-client/csharp) +- [Kubernetes Haskell Client library](https://github.com/kubernetes-client/haskell) -## CLI Reference +## CLI * [kubectl](/docs/reference/kubectl/overview/) - Main CLI tool for running commands and managing Kubernetes clusters. * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. * [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster. -## Components Reference +## Components * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy. * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends. -* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. - * [kube-scheduler Policies](/docs/reference/scheduling/policies) - * [kube-scheduler Profiles](/docs/reference/scheduling/config#profiles) +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. 
+ +## Scheduling + +* [Scheduler Policies](/docs/reference/scheduling/policies) +* [Scheduler Profiles](/docs/reference/scheduling/config#profiles) + ## Design Docs diff --git a/content/en/docs/reference/access-authn-authz/_index.md b/content/en/docs/reference/access-authn-authz/_index.md index d999e52bf5..86d06488a8 100644 --- a/content/en/docs/reference/access-authn-authz/_index.md +++ b/content/en/docs/reference/access-authn-authz/_index.md @@ -1,6 +1,6 @@ --- title: API Access Control -weight: 20 +weight: 15 no_list: true --- diff --git a/content/en/docs/reference/access-authn-authz/abac.md b/content/en/docs/reference/access-authn-authz/abac.md index 99fce41aba..3e2aea6b36 100644 --- a/content/en/docs/reference/access-authn-authz/abac.md +++ b/content/en/docs/reference/access-authn-authz/abac.md @@ -19,7 +19,7 @@ Attribute-based access control (ABAC) defines an access control paradigm whereby To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup. The file format is [one JSON object per line](https://jsonlines.org/). There -should be no enclosing list or map, just one map per line. +should be no enclosing list or map, only one map per line. Each line is a "policy object", where each such object is a map with the following properties: diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 3ff113bb63..9e45e57d37 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -110,7 +110,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a multitenant cluster so that users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission controller, once an image has been pulled to a -node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is +node, any pod from any user can use it by knowing the image's name (assuming the Pod is scheduled onto the right node), without any authorization check against the image. When this admission controller is enabled, images are always pulled prior to starting containers, which means valid credentials are required. @@ -176,7 +176,7 @@ The default value for `default-not-ready-toleration-seconds` and `default-unreac This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container. This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec). -The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.18. +The DenyExecOnPrivileged admission plugin is deprecated. Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods @@ -190,7 +190,7 @@ This admission controller will deny exec and attach commands to pods that run wi allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. -The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.18. 
+The DenyEscalatingExec admission plugin is deprecated. Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods @@ -565,6 +565,8 @@ Starting from 1.11, this admission controller is disabled by default. ### PodNodeSelector {#podnodeselector} +{{< feature-state for_k8s_version="v1.5" state="alpha" >}} + This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration. #### Configuration File Format @@ -675,6 +677,8 @@ for more information. ### PodTolerationRestriction {#podtolerationrestriction} +{{< feature-state for_k8s_version="v1.7" state="alpha" >}} + The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace. It rejects the pod request if there is a conflict. It then merges the tolerations annotated on the namespace into the tolerations of the pod. diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 8c2c4fa520..0a830586f0 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -99,7 +99,7 @@ openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2". -See [Managing Certificates](/docs/concepts/cluster-administration/certificates/) for how to generate a client cert. +See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for how to generate a client cert. ### Static Token File @@ -205,8 +205,10 @@ spec: ``` Service account bearer tokens are perfectly valid to use outside the cluster and + can be used to create identities for long standing jobs that wish to talk to the -Kubernetes API. To manually create a service account, simply use the `kubectl +Kubernetes API. To manually create a service account, simply use the `kubectl` + create serviceaccount (NAME)` command. This creates a service account in the current namespace and an associated secret. @@ -320,12 +322,13 @@ sequenceDiagram 8. Once authorized the API server returns a response to `kubectl` 9. `kubectl` provides feedback to the user + Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to "phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges: 1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first. 2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes. -3. To authenticate to the Kubernetes dashboard, you must the `kubectl proxy` command or a reverse proxy that injects the `id_token`. +3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`. 
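As one possible sketch of that first option (the port and the Dashboard namespace and Service name below assume a default Dashboard installation):

```shell
# kubectl proxy authenticates to the API server with the credentials in your
# kubeconfig (including an OIDC id_token) and serves the API on localhost
# without requiring further authentication from the browser.
kubectl proxy --port=8001 &

# With a default Dashboard install, the UI is then reachable through the proxy:
curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```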
#### Configuring the API Server @@ -420,12 +423,12 @@ users: refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq name: oidc ``` -Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`. +Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`. ##### Option 2 - Use the `--token` Option -The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option: +The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option: ```bash kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes @@ -455,7 +458,7 @@ clusters: - name: name-of-remote-authn-service cluster: certificate-authority: /path/to/ca.pem # CA for verifying the remote service. - server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'. + server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production. # users refers to the API server's webhook configuration. users: @@ -731,7 +734,7 @@ to the impersonated user info. The following HTTP headers can be used to performing an impersonation request: * `Impersonate-User`: The username to act as. -* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User" +* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User". * `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` should be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1). {{< note >}} diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 04963e10ee..af73a23350 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -138,7 +138,7 @@ no exposes the API server authorization to external services. Other resources in this group include: -* `SubjectAccessReview` - Access review for any user, not just the current one. 
Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. +* `SubjectAccessReview` - Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. * `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace. * `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions. diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index 856669a5d8..f128c14a7a 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -167,7 +167,7 @@ data: users: [] ``` -The `kubeconfig` member of the ConfigMap is a config file with just the cluster +The `kubeconfig` member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the `certificate-authority-data`. This may be expanded in the future. diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 6d05d0436a..450bedf541 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -196,8 +196,8 @@ O is the group that this user will belong to. You can refer to [RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups. ```shell -openssl genrsa -out john.key 2048 -openssl req -new -key john.key -out john.csr +openssl genrsa -out myuser.key 2048 +openssl req -new -key myuser.key -out myuser.csr ``` ### Create CertificateSigningRequest @@ -209,7 +209,7 @@ cat < myuser.crt +``` + ### Create Role and RoleBinding With the certificate created. it is time to define the Role and RoleBinding for @@ -266,31 +272,30 @@ kubectl create role developer --verb=create --verb=get --verb=list --verb=update This is a sample command to create a RoleBinding for this new user: ```shell -kubectl create rolebinding developer-binding-john --role=developer --user=john +kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser ``` ### Add to kubeconfig The last step is to add this user into the kubeconfig file. -This example assumes the key and certificate files are located at "/home/vagrant/work/". 
First, you need to add new credentials: ``` -kubectl config set-credentials john --client-key=/home/vagrant/work/john.key --client-certificate=/home/vagrant/work/john.crt --embed-certs=true +kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true ``` Then, you need to add the context: ``` -kubectl config set-context john --cluster=kubernetes --user=john +kubectl config set-context myuser --cluster=kubernetes --user=myuser ``` -To test it, change the context to `john`: +To test it, change the context to `myuser`: ``` -kubectl config use-context john +kubectl config use-context myuser ``` ## Approval or rejection {#approval-rejection} @@ -363,7 +368,7 @@ status: It's usual to set `status.conditions.reason` to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything -you like. If you want to add a note just for human consumption, use the +you like. If you want to add a note for human consumption, use the `status.conditions.message` field. ## Signing @@ -438,4 +443,3 @@ status: * View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go) * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1 * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986) - diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 4bc2b86dd6..bd9aba1aa8 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -219,7 +219,7 @@ the role that is granted to those subjects. 1. A binding to a different role is a fundamentally different binding. Requiring a binding to be deleted/recreated in order to change the `roleRef` ensures the full list of subjects in the binding is intended to be granted -the new role (as opposed to enabling accidentally modifying just the roleRef +the new role (as opposed to enabling or accidentally modifying only the roleRef without verifying all of the existing subjects should be given the new role's permissions). @@ -333,7 +333,7 @@ as a cluster administrator, include rules for custom resources, such as those se or aggregated API servers, to extend the default roles. For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource -named CronTab, whereas the "view" role can perform just read actions on CronTab resources. +named CronTab, whereas the "view" role can perform only read actions on CronTab resources. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. 
```yaml diff --git a/content/en/docs/reference/command-line-tools-reference/_index.md b/content/en/docs/reference/command-line-tools-reference/_index.md index 6698fe66c0..8f9cf74a0e 100644 --- a/content/en/docs/reference/command-line-tools-reference/_index.md +++ b/content/en/docs/reference/command-line-tools-reference/_index.md @@ -1,4 +1,4 @@ --- -title: Command line tools reference +title: Component tools weight: 60 --- diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index d9754afb56..daf525c781 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -242,6 +242,7 @@ different Kubernetes components. | `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | | `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | | `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | +| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - | | `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | | `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | | `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | @@ -351,7 +352,7 @@ different Kubernetes components. | `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | | `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | | `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeSubpath` | `true` | GA | 1.13 | - | +| `VolumeSubpath` | `true` | GA | 1.10 | - | | `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | | `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 | | `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | @@ -634,8 +635,8 @@ Each feature gate is designed for enabling/disabling a specific feature: - `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials. - `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi). -- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See - [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md) +- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See + [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md) for more details. - `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the @@ -728,7 +729,7 @@ Each feature gate is designed for enabling/disabling a specific feature: [ServiceTopology](/docs/concepts/services-networking/service-topology/) for more details. - `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes. - See [volumes](docs/concepts/storage/volumes) for more details. + See [volumes](/docs/concepts/storage/volumes) for more details. - `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain Name(FQDN) as the hostname of a pod. See [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field). 
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md index 6e87bff44c..edf25ad835 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md @@ -2,8 +2,21 @@ title: kube-apiserver content_type: tool-reference weight: 30 +auto_generated: true --- + + + ## {{% heading "synopsis" %}} @@ -29,1099 +42,1099 @@ kube-apiserver [flags] --add-dir-header -If true, adds the file directory to the header of the log messages +

If true, adds the file directory to the header of the log messages

--admission-control-config-file string -File with admission control configuration. +

File with admission control configuration.

---advertise-address ip +--advertise-address string -The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used. +

The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.

--allow-privileged -If true, allow privileged containers. [default=false] +

If true, allow privileged containers. [default=false]

--alsologtostderr -log to standard error as well as files +

log to standard error as well as files

--anonymous-auth     Default: true -Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. +

Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.

---api-audiences stringSlice +--api-audiences strings -Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL. +

Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.

--apiserver-count int     Default: 1 -The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) +

The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)

--audit-log-batch-buffer-size int     Default: 10000 -The size of the buffer to store events before batching and writing. Only used in batch mode. +

The size of the buffer to store events before batching and writing. Only used in batch mode.

--audit-log-batch-max-size int     Default: 1 -The maximum size of a batch. Only used in batch mode. +

The maximum size of a batch. Only used in batch mode.

--audit-log-batch-max-wait duration -The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. +

The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.

--audit-log-batch-throttle-burst int -Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. +

Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.

--audit-log-batch-throttle-enable -Whether batching throttling is enabled. Only used in batch mode. +

Whether batching throttling is enabled. Only used in batch mode.

---audit-log-batch-throttle-qps float32 +--audit-log-batch-throttle-qps float -Maximum average number of batches per second. Only used in batch mode. +

Maximum average number of batches per second. Only used in batch mode.

--audit-log-compress -If set, the rotated log files will be compressed using gzip. +

If set, the rotated log files will be compressed using gzip.

--audit-log-format string     Default: "json" -Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. +

Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.

--audit-log-maxage int -The maximum number of days to retain old audit log files based on the timestamp encoded in their filename. +

The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.

--audit-log-maxbackup int -The maximum number of old audit log files to retain. +

The maximum number of old audit log files to retain.

--audit-log-maxsize int -The maximum size in megabytes of the audit log file before it gets rotated. +

The maximum size in megabytes of the audit log file before it gets rotated.

--audit-log-mode string     Default: "blocking" -Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. +

Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.

--audit-log-path string -If set, all requests coming to the apiserver will be logged to this file. '-' means standard out. +

If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.

--audit-log-truncate-enabled -Whether event and batch truncating is enabled. +

Whether event and batch truncating is enabled.

--audit-log-truncate-max-batch-size int     Default: 10485760 -Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. +

Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.

--audit-log-truncate-max-event-size int     Default: 102400 -Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. +

Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.

--audit-log-version string     Default: "audit.k8s.io/v1" -API group and version used for serializing audit events written to log. +

API group and version used for serializing audit events written to log.

--audit-policy-file string -Path to the file that defines the audit policy configuration. +

Path to the file that defines the audit policy configuration.
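For illustration, a minimal sketch of how the audit logging flags documented here might be combined on a kube-apiserver command line; the policy path, log path, and rotation values are assumptions, not defaults from this reference:

```bash
# Sketch only: write audit events as JSON to a local file and rotate it.
# The file paths and retention numbers are example choices.
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-format=json \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100
```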

--audit-webhook-batch-buffer-size int     Default: 10000 -The size of the buffer to store events before batching and writing. Only used in batch mode. +

The size of the buffer to store events before batching and writing. Only used in batch mode.

--audit-webhook-batch-max-size int     Default: 400 -The maximum size of a batch. Only used in batch mode. +

The maximum size of a batch. Only used in batch mode.

--audit-webhook-batch-max-wait duration     Default: 30s -The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. +

The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.

--audit-webhook-batch-throttle-burst int     Default: 15 -Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. +

Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.

--audit-webhook-batch-throttle-enable     Default: true -Whether batching throttling is enabled. Only used in batch mode. +

Whether batching throttling is enabled. Only used in batch mode.

---audit-webhook-batch-throttle-qps float32     Default: 10 +--audit-webhook-batch-throttle-qps float     Default: 10 -Maximum average number of batches per second. Only used in batch mode. +

Maximum average number of batches per second. Only used in batch mode.

--audit-webhook-config-file string -Path to a kubeconfig formatted file that defines the audit webhook configuration. +

Path to a kubeconfig formatted file that defines the audit webhook configuration.

--audit-webhook-initial-backoff duration     Default: 10s -The amount of time to wait before retrying the first failed request. +

The amount of time to wait before retrying the first failed request.

--audit-webhook-mode string     Default: "batch" -Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. +

Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.

--audit-webhook-truncate-enabled -Whether event and batch truncating is enabled. +

Whether event and batch truncating is enabled.

--audit-webhook-truncate-max-batch-size int     Default: 10485760 -Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. +

Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.

--audit-webhook-truncate-max-event-size int     Default: 102400 -Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. +

Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.

--audit-webhook-version string     Default: "audit.k8s.io/v1" -API group and version used for serializing audit events written to webhook. +

API group and version used for serializing audit events written to webhook.

--authentication-token-webhook-cache-ttl duration     Default: 2m0s -The duration to cache responses from the webhook token authenticator. +

The duration to cache responses from the webhook token authenticator.

--authentication-token-webhook-config-file string -File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens. +

File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.

--authentication-token-webhook-version string     Default: "v1beta1" -The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook. +

The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook.

---authorization-mode stringSlice     Default: [AlwaysAllow] +--authorization-mode strings     Default: "AlwaysAllow" -Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node. +

Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node.
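As an illustrative example of the comma-delimited syntax above (an example ordering, not a recommendation made by this page):

```bash
# Evaluate the Node authorizer first, then RBAC.
kube-apiserver --authorization-mode=Node,RBAC
```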

--authorization-policy-file string -File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port. +

File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.

--authorization-webhook-cache-authorized-ttl duration     Default: 5m0s -The duration to cache 'authorized' responses from the webhook authorizer. +

The duration to cache 'authorized' responses from the webhook authorizer.

--authorization-webhook-cache-unauthorized-ttl duration     Default: 30s -The duration to cache 'unauthorized' responses from the webhook authorizer. +

The duration to cache 'unauthorized' responses from the webhook authorizer.

--authorization-webhook-config-file string -File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port. +

File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.

--authorization-webhook-version string     Default: "v1beta1" -The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook. +

The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.

--azure-container-registry-config string -Path to the file containing Azure container registry configuration information. +

Path to the file containing Azure container registry configuration information.

---bind-address ip     Default: 0.0.0.0 +--bind-address string     Default: 0.0.0.0 -The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. +

The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.

--cert-dir string     Default: "/var/run/kubernetes" -The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. +

The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.

--client-ca-file string -If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. +

If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.

--cloud-config string -The path to the cloud provider configuration file. Empty string for no configuration file. +

The path to the cloud provider configuration file. Empty string for no configuration file.

--cloud-provider string -The provider for cloud services. Empty string for no provider. +

The provider for cloud services. Empty string for no provider.

--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16 -CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks +

CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks

--contention-profiling -Enable lock contention profiling, if profiling is enabled +

Enable lock contention profiling, if profiling is enabled

---cors-allowed-origins stringSlice +--cors-allowed-origins strings -List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled. +

List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.

--default-not-ready-toleration-seconds int     Default: 300 -Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. +

Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.

--default-unreachable-toleration-seconds int     Default: 300 -Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. +

Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.

--default-watch-cache-size int     Default: 100 -Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. +

Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.

--delete-collection-workers int     Default: 1 -Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. +

Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.

---disable-admission-plugins stringSlice +--disable-admission-plugins strings -admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter. +

admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

--egress-selector-config-file string -File with apiserver egress selector configuration. +

File with apiserver egress selector configuration.

---enable-admission-plugins stringSlice +--enable-admission-plugins strings -admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter. +

admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
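A short sketch combining the two admission-plugin flags; the plugin names are taken from the list above purely as examples:

```bash
# Example only: enable two extra admission plugins and disable one default plugin.
kube-apiserver \
  --enable-admission-plugins=NodeRestriction,PodSecurityPolicy \
  --disable-admission-plugins=DefaultStorageClass
```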

--enable-aggregator-routing -Turns on aggregator routing requests to endpoints IP rather than cluster IP. +

Turns on aggregator routing requests to endpoints IP rather than cluster IP.

--enable-bootstrap-token-auth -Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication. +

Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.

--enable-garbage-collector     Default: true -Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. +

Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.

--enable-priority-and-fairness     Default: true -If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness +

If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness

--encryption-provider-config string -The file containing configuration for encryption providers to be used for storing secrets in etcd +

The file containing configuration for encryption providers to be used for storing secrets in etcd

--endpoint-reconciler-type string     Default: "lease" -Use an endpoint reconciler (master-count, lease, none) +

Use an endpoint reconciler (master-count, lease, none)

--etcd-cafile string -SSL Certificate Authority file used to secure etcd communication. +

SSL Certificate Authority file used to secure etcd communication.

--etcd-certfile string -SSL certification file used to secure etcd communication. +

SSL certification file used to secure etcd communication.

--etcd-compaction-interval duration     Default: 5m0s -The interval of compaction requests. If 0, the compaction request from apiserver is disabled. +

The interval of compaction requests. If 0, the compaction request from apiserver is disabled.

--etcd-count-metric-poll-period duration     Default: 1m0s -Frequency of polling etcd for number of resources per type. 0 disables the metric collection. +

Frequency of polling etcd for number of resources per type. 0 disables the metric collection.

--etcd-db-metric-poll-interval duration     Default: 30s -The interval of requests to poll etcd and update metric. 0 disables the metric collection +

The interval of requests to poll etcd and update metric. 0 disables the metric collection

--etcd-healthcheck-timeout duration     Default: 2s -The timeout to use when checking etcd health. +

The timeout to use when checking etcd health.

--etcd-keyfile string -SSL key file used to secure etcd communication. +

SSL key file used to secure etcd communication.

--etcd-prefix string     Default: "/registry" -The prefix to prepend to all resource paths in etcd. +

The prefix to prepend to all resource paths in etcd.

---etcd-servers stringSlice +--etcd-servers strings -List of etcd servers to connect with (scheme://ip:port), comma separated. +

List of etcd servers to connect with (scheme://ip:port), comma separated.
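An illustrative sketch of pointing the API server at an etcd cluster over TLS; the endpoints and certificate paths are assumptions, not values prescribed here:

```bash
# Example only: three etcd endpoints secured with client certificates.
kube-apiserver \
  --etcd-servers=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```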

---etcd-servers-overrides stringSlice +--etcd-servers-overrides strings -Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. +

Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.

--event-ttl duration     Default: 1h0m0s -Amount of time to retain events. +

Amount of time to retain events.

--experimental-logging-sanitization -[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. +

[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.

--external-hostname string -The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery). +

The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).

---feature-gates mapStringBool +--feature-gates <comma-separated 'key=True|False' pairs> -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false) +

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
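A minimal sketch of the `--feature-gates` syntax using gate names from the list above; the chosen gates and values are examples only, not recommended settings:

```bash
# Example only: enable one beta gate explicitly and disable another.
kube-apiserver --feature-gates=APIPriorityAndFairness=true,RemoveSelfLink=false
```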

--goaway-chance float -To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point. +

To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.

-h, --help -help for kube-apiserver +

help for kube-apiserver

--http2-max-streams-per-connection int -The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. +

The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.

--identity-lease-duration-seconds int     Default: 3600 -The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.) +

The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

--identity-lease-renew-interval-seconds int     Default: 10 -The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.) +

The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

--kubelet-certificate-authority string -Path to a cert file for the certificate authority. +

Path to a cert file for the certificate authority.

--kubelet-client-certificate string -Path to a client cert file for TLS. +

Path to a client cert file for TLS.

--kubelet-client-key string -Path to a client key file for TLS. +

Path to a client key file for TLS.

---kubelet-preferred-address-types stringSlice     Default: [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP] +--kubelet-preferred-address-types strings     Default: "Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP" -List of the preferred NodeAddressTypes to use for kubelet connections. +

List of the preferred NodeAddressTypes to use for kubelet connections.

--kubelet-timeout duration     Default: 5s -Timeout for kubelet operations. +

Timeout for kubelet operations.

--kubernetes-service-node-port int -If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP. +

If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.

--livez-grace-period duration -This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true. +

This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.

---log-backtrace-at traceLocation     Default: :0 +--log-backtrace-at <a string in the form 'file:N'>     Default: :0 -when logging hits line file:N, emit a stack trace +

when logging hits line file:N, emit a stack trace

--log-dir string -If non-empty, write log files in this directory +

If non-empty, write log files in this directory

--log-file string -If non-empty, use this log file +

If non-empty, use this log file

--log-file-max-size uint     Default: 1800 -Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. +

Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

--log-flush-frequency duration     Default: 5s -Maximum number of seconds between log flushes +

Maximum number of seconds between log flushes

--logging-format string     Default: "text" -Sets the log format. Permitted formats: "json", "text".
Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency.
Non-default choices are currently alpha and subject to change without warning. +

Sets the log format. Permitted formats: "json", "text".
Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency.
Non-default choices are currently alpha and subject to change without warning.

--logtostderr     Default: true -log to standard error instead of files +

log to standard error instead of files

--master-service-namespace string     Default: "default" -DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods. +

DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.

--max-connection-bytes-per-sec int -If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests. +

If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.

--max-mutating-requests-inflight int     Default: 200 -The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. +

The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.

--max-requests-inflight int     Default: 400 -The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. +

The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.

--min-request-timeout int     Default: 1800 -An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. +

An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.

--oidc-ca-file string -If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used. +

If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.

--oidc-client-id string -The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set. +

The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.

--oidc-groups-claim string -If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details. +

If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.

--oidc-groups-prefix string -If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies. +

If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.

--oidc-issuer-url string -The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT). +

The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).

---oidc-required-claim mapStringString +--oidc-required-claim <comma-separated 'key=value' pairs> -A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. +

A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.

---oidc-signing-algs stringSlice     Default: [RS256] +--oidc-signing-algs strings     Default: "RS256" -Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1. +

Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.

--oidc-username-claim string     Default: "sub" -The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details. +

The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.

--oidc-username-prefix string -If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'. +

If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
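A sketch pulling the OIDC flags above together; the issuer URL, client ID, claim names, and prefix are placeholders, not values suggested by this reference:

```bash
# Placeholder values throughout; adjust to the identity provider in use.
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-username-prefix="oidc:" \
  --oidc-groups-claim=groups \
  --oidc-required-claim=hd=example.com
```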

--one-output -If true, only write logs to their native severity level (vs also writing to each lower severity level +

If true, only write logs to their native severity level (vs also writing to each lower severity level)

--permit-port-sharing -If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false] +

If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]

--profiling     Default: true -Enable profiling via web interface host:port/debug/pprof/ +

Enable profiling via web interface host:port/debug/pprof/

--proxy-client-cert-file string -Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification. +

Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.

--proxy-client-key-file string -Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. +

Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.

--request-timeout duration     Default: 1m0s -An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests. +

An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.

---requestheader-allowed-names stringSlice +--requestheader-allowed-names strings -List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed. +

List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.

--requestheader-client-ca-file string -Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests. +

Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.

---requestheader-extra-headers-prefix stringSlice +--requestheader-extra-headers-prefix strings -List of request header prefixes to inspect. X-Remote-Extra- is suggested. +

List of request header prefixes to inspect. X-Remote-Extra- is suggested.

---requestheader-group-headers stringSlice +--requestheader-group-headers strings -List of request headers to inspect for groups. X-Remote-Group is suggested. +

List of request headers to inspect for groups. X-Remote-Group is suggested.

---requestheader-username-headers stringSlice +--requestheader-username-headers strings -List of request headers to inspect for usernames. X-Remote-User is common. +

List of request headers to inspect for usernames. X-Remote-User is common.
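
Taken together, the `--requestheader-*` flags tell the API server how to trust an authenticating front proxy. A minimal sketch, assuming a kubeadm-style front-proxy CA and client certificate name (paths and names are illustrative; all other required kube-apiserver flags are omitted):

```bash
# Trust requests proxied by a client whose certificate is signed by the
# front-proxy CA, and read the authenticated identity from these headers.
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```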

---runtime-config mapStringString +--runtime-config <comma-separated 'key=value' pairs> -A set of key=value pairs that enable or disable built-in APIs. Supported options are:
v1=true|false for the core API group
<group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true)
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version +

A set of key=value pairs that enable or disable built-in APIs. Supported options are:
v1=true|false for the core API group
<group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true)
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version
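
As a hedged sketch of how these key=value pairs combine on one command line (the group/version chosen is illustrative; all other required flags are omitted):

```bash
# Disable every alpha API version, while keeping apps/v1 explicitly enabled.
kube-apiserver \
  --runtime-config=api/alpha=false,apps/v1=true
```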

--secure-port int     Default: 6443 -The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0. +

The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.

--service-account-extend-token-expiration     Default: true -Turns on projected service account expiration extension during token generation, which helps safe transition from legacy token to bound service account token feature. If this flag is enabled, admission injected tokens would be extended up to 1 year to prevent unexpected failure during transition, ignoring value of service-account-max-token-expiration. +

Turns on projected service account expiration extension during token generation, which helps a safe transition from the legacy token to the bound service account token feature. If this flag is enabled, admission-injected tokens would be extended up to 1 year to prevent unexpected failure during transition, ignoring the value of service-account-max-token-expiration.

--service-account-issuer string -Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration. +

Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.
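
A minimal sketch of an issuer that follows the recommendation above, assuming an illustrative operator-controlled HTTPS URL:

```bash
# The issuer is asserted in the "iss" claim of issued tokens; discovery documents
# are then expected at https://kubernetes.example.com/.well-known/openid-configuration.
kube-apiserver \
  --service-account-issuer=https://kubernetes.example.com
```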

--service-account-jwks-uri string -Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery docand key set are served to relying parties from a URL other than the API server's external (as auto-detected or overridden with external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled. +

Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.

---service-account-key-file stringArray +--service-account-key-file strings -File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided +

File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided

--service-account-lookup     Default: true -If true, validate ServiceAccount tokens exist in etcd as part of authentication. +

If true, validate ServiceAccount tokens exist in etcd as part of authentication.

--service-account-max-token-expiration duration -The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value. +

The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.

--service-account-signing-key-file string -Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. +

Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key.

--service-cluster-ip-range string -A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods. +

A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods.

---service-node-port-range portRange     Default: 30000-32767 +--service-node-port-range <a string in the form 'N1-N2'>     Default: 30000-32767 -A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. +

A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
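
For example, explicitly setting the default range (both ends are inclusive; all other required flags are omitted):

```bash
# Reserve ports 30000-32767 on every node for NodePort Services.
kube-apiserver --service-node-port-range=30000-32767
```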

--show-hidden-metrics-for-version string -The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. +

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--shutdown-delay-duration duration -Time to delay the termination. During that time the server keeps serving requests normally. The endpoints /healthz and /livez will return success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow load balancer to stop sending traffic to this server. +

Time to delay the termination. During that time the server keeps serving requests normally. The endpoints /healthz and /livez will return success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow a load balancer to stop sending traffic to this server.

--skip-headers -If true, avoid header prefixes in the log messages +

If true, avoid header prefixes in the log messages

--skip-log-headers -If true, avoid headers when opening log files +

If true, avoid headers when opening log files

---stderrthreshold severity     Default: 2 +--stderrthreshold int     Default: 2 -logs at or above this threshold go to stderr +

logs at or above this threshold go to stderr

--storage-backend string -The storage backend for persistence. Options: 'etcd3' (default). +

The storage backend for persistence. Options: 'etcd3' (default).

--storage-media-type string     Default: "application/vnd.kubernetes.protobuf" -The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. +

The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.

--tls-cert-file string -File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir. +

File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.

---tls-cipher-suites stringSlice +--tls-cipher-suites strings -Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. +

Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.

--tls-min-version string -Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 +

Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
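
`--tls-min-version` is often set together with `--tls-cipher-suites`; a hedged sketch using two suites from the "preferred" list above (all other required flags are omitted):

```bash
# Require TLS 1.2 or newer and allow only two modern ECDHE suites.
kube-apiserver \
  --tls-min-version=VersionTLS12 \
  --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```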

--tls-private-key-file string -File containing the default x509 private key matching --tls-cert-file. +

File containing the default x509 private key matching --tls-cert-file.

---tls-sni-cert-key namedCertKey     Default: [] +--tls-sni-cert-key string -A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". +

A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
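
Because the flag may be repeated, serving different certificates per SNI name looks like the following sketch, reusing the examples from the description (file paths and domains are illustrative):

```bash
# Default cert plus a dedicated cert for foo.com and *.foo.com.
kube-apiserver \
  --tls-sni-cert-key=example.crt,example.key \
  --tls-sni-cert-key=foo.crt,foo.key:*.foo.com,foo.com
```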

--token-auth-file string -If set, the file that will be used to secure the secure port of the API server via token authentication. +

If set, the file that will be used to secure the secure port of the API server via token authentication.

--v, --v Level +-v, --v int -number for the log level verbosity +

number for the log level verbosity

--version version[=true] -Print version information and quit +

Print version information and quit

---vmodule moduleSpec +--vmodule <comma-separated 'pattern=N' settings> -comma-separated list of pattern=N settings for file-filtered logging +

comma-separated list of pattern=N settings for file-filtered logging
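
A small sketch of file-filtered logging; the file pattern and verbosity levels here are illustrative assumptions:

```bash
# Log most files at verbosity 2, but files matching httplog* at verbosity 4.
kube-apiserver --v=2 --vmodule='httplog*=4'
```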

--watch-cache     Default: true -Enable watch caching in the apiserver +

Enable watch caching in the apiserver

---watch-cache-sizes stringSlice +--watch-cache-sizes strings -Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size +

Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
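
A sketch of the `resource[.group]#size` format (the sizes are illustrative; watch caching is on by default via `--watch-cache`):

```bash
# Core-group resources omit the group; other resources include it.
kube-apiserver \
  --watch-cache-sizes=pods#1000,nodes#100,ingresses.networking.k8s.io#50
```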

diff --git a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md
index 79af1bc81a..a7ad248e94 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md
@@ -2,8 +2,21 @@
title: kube-controller-manager
content_type: tool-reference
weight: 30
+auto_generated: true
---

## {{% heading "synopsis" %}}

@@ -33,980 +46,980 @@ kube-controller-manager [flags]

--add-dir-header -If true, adds the file directory to the header of the log messages +

If true, adds the file directory to the header of the log messages

--allocate-node-cidrs -Should CIDRs for Pods be allocated and set on the cloud provider. +

Should CIDRs for Pods be allocated and set on the cloud provider.

--alsologtostderr -log to standard error as well as files +

log to standard error as well as files

--attach-detach-reconcile-sync-period duration     Default: 1m0s -The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods. +

The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.

--authentication-kubeconfig string -kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster. +

kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.

--authentication-skip-lookup -If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster. +

If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.

--authentication-token-webhook-cache-ttl duration     Default: 10s -The duration to cache responses from the webhook token authenticator. +

The duration to cache responses from the webhook token authenticator.

--authentication-tolerate-lookup-failure -If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous. +

If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.

---authorization-always-allow-paths stringSlice     Default: [/healthz] +--authorization-always-allow-paths strings     Default: "/healthz" -A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. +

A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.

--authorization-kubeconfig string -kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden. +

kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.

--authorization-webhook-cache-authorized-ttl duration     Default: 10s -The duration to cache 'authorized' responses from the webhook authorizer. +

The duration to cache 'authorized' responses from the webhook authorizer.

--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s -The duration to cache 'unauthorized' responses from the webhook authorizer. +

The duration to cache 'unauthorized' responses from the webhook authorizer.

--azure-container-registry-config string -Path to the file containing Azure container registry configuration information. +

Path to the file containing Azure container registry configuration information.

---bind-address ip     Default: 0.0.0.0 +--bind-address string     Default: 0.0.0.0 -The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. +

The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.

--cert-dir string -The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. +

The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.

--cidr-allocator-type string     Default: "RangeAllocator" -Type of CIDR allocator to use +

Type of CIDR allocator to use

--client-ca-file string -If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. +

If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.

--cloud-config string -The path to the cloud provider configuration file. Empty string for no configuration file. +

The path to the cloud provider configuration file. Empty string for no configuration file.

--cloud-provider string -The provider for cloud services. Empty string for no provider. +

The provider for cloud services. Empty string for no provider.

--cluster-cidr string -CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true +

CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
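
`--cluster-cidr` only takes effect with `--allocate-node-cidrs`; a hedged sketch combining it with the node mask flag described further below (the CIDR and mask values are illustrative):

```bash
# Allocate a /24 per node out of the cluster-wide 10.244.0.0/16 pod range.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --node-cidr-mask-size=24
```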

--cluster-name string     Default: "kubernetes" -The instance prefix for the cluster. +

The instance prefix for the cluster.

--cluster-signing-cert-file string -Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified. +

Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.

--cluster-signing-duration duration     Default: 8760h0m0s -The length of duration signed certificates will be given. +

The length of duration signed certificates will be given.

--cluster-signing-key-file string -Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified. +

Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.

--cluster-signing-kube-apiserver-client-cert-file string -Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-kube-apiserver-client-key-file string -Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-kubelet-client-cert-file string -Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-kubelet-client-key-file string -Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-kubelet-serving-cert-file string -Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-kubelet-serving-key-file string -Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-legacy-unknown-cert-file string -Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--cluster-signing-legacy-unknown-key-file string -Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set. +

Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.

--concurrent-deployment-syncs int32     Default: 5 -The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load +

The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load

--concurrent-endpoint-syncs int32     Default: 5 -The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load +

The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load

--concurrent-gc-syncs int32     Default: 20 -The number of garbage collector workers that are allowed to sync concurrently. +

The number of garbage collector workers that are allowed to sync concurrently.

--concurrent-namespace-syncs int32     Default: 10 -The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load +

The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load

--concurrent-replicaset-syncs int32     Default: 5 -The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load +

The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load

--concurrent-resource-quota-syncs int32     Default: 5 -The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load +

The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load

--concurrent-service-endpoint-syncs int32     Default: 5 -The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5. +

The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.

--concurrent-service-syncs int32     Default: 1 -The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load +

The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load

--concurrent-serviceaccount-token-syncs int32     Default: 5 -The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load +

The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load

--concurrent-statefulset-syncs int32     Default: 5 -The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load +

The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load

--concurrent-ttl-after-finished-syncs int32     Default: 5 -The number of TTL-after-finished controller workers that are allowed to sync concurrently. +

The number of TTL-after-finished controller workers that are allowed to sync concurrently.

--concurrent_rc_syncs int32     Default: 5 -The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load +

The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load

--configure-cloud-routes     Default: true -Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. +

Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.

--contention-profiling -Enable lock contention profiling, if profiling is enabled +

Enable lock contention profiling, if profiling is enabled

--controller-start-interval duration -Interval between starting controller managers. +

Interval between starting controller managers.

---controllers stringSlice     Default: [*] +--controllers strings     Default: "*" -A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished
Disabled-by-default controllers: bootstrapsigner, tokencleaner +

A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished
Disabled-by-default controllers: bootstrapsigner, tokencleaner
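
As a sketch of the enable/disable syntax described above (controller names are taken from the list; the particular selection is illustrative):

```bash
# All on-by-default controllers, plus bootstrapsigner and tokencleaner, minus ttl.
kube-controller-manager \
  --controllers='*,bootstrapsigner,tokencleaner,-ttl'
```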

--deployment-controller-sync-period duration     Default: 30s -Period for syncing the deployments. +

Period for syncing the deployments.

--disable-attach-detach-reconcile-sync -Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely. +

Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.

--enable-dynamic-provisioning     Default: true -Enable dynamic provisioning for environments that support it. +

Enable dynamic provisioning for environments that support it.

--enable-garbage-collector     Default: true -Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver. +

Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.

--enable-hostpath-provisioner -Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development. +

Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.

--enable-taint-manager     Default: true -WARNING: Beta feature. If set to true enables NoExecute Taints and will evict all not-tolerating Pod running on Nodes tainted with this kind of Taints. +

WARNING: Beta feature. If set to true, enables NoExecute Taints and will evict all non-tolerating Pods running on Nodes tainted with this kind of Taint.

--endpoint-updates-batch-period duration -The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated +

The length of the endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint updates. Larger number = higher endpoint programming latency, but a lower number of endpoint revisions generated

--endpointslice-updates-batch-period duration -The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated +

The length of the endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint slice updates. Larger number = higher endpoint programming latency, but a lower number of endpoint slice revisions generated

--experimental-logging-sanitization -[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. +

[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.

--external-cloud-volume-plugin string -The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers. +

The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.

---feature-gates mapStringBool +--feature-gates <comma-separated 'key=True|False' pairs> -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false) +

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
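
A hedged sketch of the flag's syntax, picking two gates from the list above (the particular choices are illustrative):

```bash
# Disable a beta gate and enable an alpha gate.
kube-controller-manager \
  --feature-gates=EndpointSlice=false,TTLAfterFinished=true
```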

--flex-volume-plugin-dir string     Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" -Full path of the directory in which the flex volume plugin should search for additional third party volume plugins. +

Full path of the directory in which the flex volume plugin should search for additional third party volume plugins.

-h, --help -help for kube-controller-manager +

help for kube-controller-manager

--horizontal-pod-autoscaler-cpu-initialization-period duration     Default: 5m0s -The period after pod start when CPU samples might be skipped. +

The period after pod start when CPU samples might be skipped.

--horizontal-pod-autoscaler-downscale-stabilization duration     Default: 5m0s -The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period. +

The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.

--horizontal-pod-autoscaler-initial-readiness-delay duration     Default: 30s -The period after pod start during which readiness changes will be treated as initial readiness. +

The period after pod start during which readiness changes will be treated as initial readiness.

--horizontal-pod-autoscaler-sync-period duration     Default: 15s -The period for syncing the number of pods in horizontal pod autoscaler. +

The period for syncing the number of pods in horizontal pod autoscaler.

--horizontal-pod-autoscaler-tolerance float     Default: 0.1 -The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling. +

The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling.

--http2-max-streams-per-connection int -The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. +

The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.

--kube-api-burst int32     Default: 30 -Burst to use while talking with kubernetes apiserver. +

Burst to use while talking with kubernetes apiserver.

--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf" -Content type of requests sent to apiserver. +

Content type of requests sent to apiserver.

---kube-api-qps float32     Default: 20 +--kube-api-qps float     Default: 20 -QPS to use while talking with kubernetes apiserver. +

QPS to use while talking with kubernetes apiserver.

--kubeconfig string -Path to kubeconfig file with authorization and master location information. +

Path to kubeconfig file with authorization and master location information.

--large-cluster-size-threshold int32     Default: 50 -Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller. +

Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.

--leader-elect     Default: true -Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. +

Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.

--leader-elect-lease-duration duration     Default: 15s -The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. +

The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.

--leader-elect-renew-deadline duration     Default: 10s -The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. +

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.

--leader-elect-resource-lock string     Default: "leases" -The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. +

The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.

--leader-elect-resource-name string     Default: "kube-controller-manager" -The name of resource object that is used for locking during leader election. +

The name of resource object that is used for locking during leader election.

--leader-elect-resource-namespace string     Default: "kube-system" -The namespace of resource object that is used for locking during leader election. +

The namespace of resource object that is used for locking during leader election.

--leader-elect-retry-period duration     Default: 2s -The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. +

The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.

---log-backtrace-at traceLocation     Default: :0 +--log-backtrace-at <a string in the form 'file:N'>     Default: :0 -when logging hits line file:N, emit a stack trace +

when logging hits line file:N, emit a stack trace
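
For example, with an illustrative file name and line number:

```bash
# Emit a stack trace whenever a log statement at controller.go:123 is executed.
kube-controller-manager --log-backtrace-at=controller.go:123
```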

--log-dir string -If non-empty, write log files in this directory +

If non-empty, write log files in this directory

--log-file string -If non-empty, use this log file +

If non-empty, use this log file

--log-file-max-size uint     Default: 1800 -Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. +

Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

--log-flush-frequency duration     Default: 5s -Maximum number of seconds between log flushes +

Maximum number of seconds between log flushes

--logging-format string     Default: "text" -Sets the log format. Permitted formats: "json", "text".
Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency.
Non-default choices are currently alpha and subject to change without warning. +

Sets the log format. Permitted formats: "json", "text".
Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency.
Non-default choices are currently alpha and subject to change without warning.

--logtostderr     Default: true -log to standard error instead of files +

log to standard error instead of files

--master string -The address of the Kubernetes API server (overrides any value in kubeconfig). +

The address of the Kubernetes API server (overrides any value in kubeconfig).

--max-endpoints-per-slice int32     Default: 100 -The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100. +

The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in fewer EndpointSlices, but larger resources. Defaults to 100.

--min-resync-period duration     Default: 12h0m0s -The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod. +

The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.

--mirroring-concurrent-service-endpoint-syncs int32     Default: 5 -The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5. +

The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.

--mirroring-endpointslice-updates-batch-period duration -The length of EndpointSlice updates batching period for EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated +

The length of EndpointSlice updates batching period for the EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but fewer EndpointSlice revisions generated

--mirroring-max-endpoints-per-subset int32     Default: 1000 -The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100. +

The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in fewer endpoint slices, but larger resources. Defaults to 1000.

--namespace-sync-period duration     Default: 5m0s -The period for syncing namespace life-cycle updates +

The period for syncing namespace life-cycle updates

--node-cidr-mask-size int32 -Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6. +

Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.

--node-cidr-mask-size-ipv4 int32 -Mask size for IPv4 node cidr in dual-stack cluster. Default is 24. +

Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.

--node-cidr-mask-size-ipv6 int32 -Mask size for IPv6 node cidr in dual-stack cluster. Default is 64. +

Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.
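A hypothetical dual-stack example that simply makes the documented default mask sizes explicit:

```bash
# Sketch only: the values shown are the documented defaults (24 for IPv4, 64 for IPv6).
kube-controller-manager \
  --node-cidr-mask-size-ipv4=24 \
  --node-cidr-mask-size-ipv6=64
```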

---node-eviction-rate float32     Default: 0.1 +--node-eviction-rate float     Default: 0.1 -Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. +

Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.

--node-monitor-grace-period duration     Default: 40s -Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. +

Amount of time which we allow a running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.

--node-monitor-period duration     Default: 5s -The period for syncing NodeStatus in NodeController. +

The period for syncing NodeStatus in NodeController.

--node-startup-grace-period duration     Default: 1m0s -Amount of time which we allow starting Node to be unresponsive before marking it unhealthy. +

Amount of time which we allow a starting Node to be unresponsive before marking it unhealthy.

--one-output -If true, only write logs to their native severity level (vs also writing to each lower severity level +

If true, only write logs to their native severity level (vs also writing to each lower severity level)

--permit-port-sharing -If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false] +

If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]

--pod-eviction-timeout duration     Default: 5m0s -The grace period for deleting pods on failed nodes. +

The grace period for deleting pods on failed nodes.

--profiling     Default: true -Enable profiling via web interface host:port/debug/pprof/ +

Enable profiling via web interface host:port/debug/pprof/

--pv-recycler-increment-timeout-nfs int32     Default: 30 -the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod +

the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod

--pv-recycler-minimum-timeout-hostpath int32     Default: 60 -The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster. +

The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.

--pv-recycler-minimum-timeout-nfs int32     Default: 300 -The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod +

The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod

--pv-recycler-pod-template-filepath-hostpath string -The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster. +

The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.

--pv-recycler-pod-template-filepath-nfs string -The file path to a pod definition used as a template for NFS persistent volume recycling +

The file path to a pod definition used as a template for NFS persistent volume recycling

--pv-recycler-timeout-increment-hostpath int32     Default: 30 -the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster. +

the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.

--pvclaimbinder-sync-period duration     Default: 15s -The period for syncing persistent volumes and persistent volume claims +

The period for syncing persistent volumes and persistent volume claims

---requestheader-allowed-names stringSlice +--requestheader-allowed-names strings -List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed. +

List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.

--requestheader-client-ca-file string -Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests. +

Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.

---requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-] +--requestheader-extra-headers-prefix strings     Default: "x-remote-extra-" -List of request header prefixes to inspect. X-Remote-Extra- is suggested. +

List of request header prefixes to inspect. X-Remote-Extra- is suggested.

---requestheader-group-headers stringSlice     Default: [x-remote-group] +--requestheader-group-headers strings     Default: "x-remote-group" -List of request headers to inspect for groups. X-Remote-Group is suggested. +

List of request headers to inspect for groups. X-Remote-Group is suggested.

---requestheader-username-headers stringSlice     Default: [x-remote-user] +--requestheader-username-headers strings     Default: "x-remote-user" -List of request headers to inspect for usernames. X-Remote-User is common. +

List of request headers to inspect for usernames. X-Remote-User is common.
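A hedged sketch of how the request-header (front proxy) authentication flags above fit together; the CA file path and allowed common name are placeholders, and the header names are the documented defaults:

```bash
# Sketch only: placeholder CA path and common name; header names are the
# documented defaults for these flags.
kube-controller-manager \
  --requestheader-client-ca-file=/path/to/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```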

--resource-quota-sync-period duration     Default: 5m0s -The period for syncing quota usage status in the system +

The period for syncing quota usage status in the system

--root-ca-file string -If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle. +

If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.

--route-reconciliation-period duration     Default: 10s -The period for reconciling routes created for Nodes by cloud provider. +

The period for reconciling routes created for Nodes by cloud provider.

---secondary-node-eviction-rate float32     Default: 0.01 +--secondary-node-eviction-rate float     Default: 0.01 -Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold. +

Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.

--secure-port int     Default: 10257 -The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. +

The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.

--service-account-private-key-file string -Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens. +

Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.

--service-cluster-ip-range string -CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true +

CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true

--show-hidden-metrics-for-version string -The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. +

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--skip-headers -If true, avoid header prefixes in the log messages +

If true, avoid header prefixes in the log messages

--skip-log-headers -If true, avoid headers when opening log files +

If true, avoid headers when opening log files

---stderrthreshold severity     Default: 2 +--stderrthreshold int     Default: 2 -logs at or above this threshold go to stderr +

logs at or above this threshold go to stderr

--terminated-pod-gc-threshold int32     Default: 12500 -Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled. +

Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.

--tls-cert-file string -File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir. +

File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.

---tls-cipher-suites stringSlice +--tls-cipher-suites strings -Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. +

Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.

--tls-min-version string -Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 +

Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13

--tls-private-key-file string -File containing the default x509 private key matching --tls-cert-file. +

File containing the default x509 private key matching --tls-cert-file.

---tls-sni-cert-key namedCertKey     Default: [] +--tls-sni-cert-key string -A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". +

A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
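Reusing the examples from the description above, the flag is passed once per key/certificate pair; file names and domain patterns are placeholders:

```bash
# Sketch only: two SNI key/certificate pairs, taken from the examples
# in the flag description.
kube-controller-manager \
  --tls-sni-cert-key=example.crt,example.key \
  --tls-sni-cert-key=foo.crt,foo.key:*.foo.com,foo.com
```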

---unhealthy-zone-threshold float32     Default: 0.55 +--unhealthy-zone-threshold float     Default: 0.55 -Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy. +

Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for the zone to be treated as unhealthy.

--use-service-account-credentials -If true, use individual service account credentials for each controller. +

If true, use individual service account credentials for each controller.

--v, --v Level +-v, --v int -number for the log level verbosity +

number for the log level verbosity

--version version[=true] -Print version information and quit +

Print version information and quit

---vmodule moduleSpec +--vmodule <comma-separated 'pattern=N' settings> -comma-separated list of pattern=N settings for file-filtered logging +

comma-separated list of pattern=N settings for file-filtered logging
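A hypothetical sketch combining `-v` with `--vmodule`; the file pattern is a placeholder:

```bash
# Sketch only: global verbosity 2, with verbosity 4 for files matching the
# placeholder pattern 'gc_controller*'.
kube-controller-manager --v=2 --vmodule=gc_controller*=4
```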

--volume-host-allow-local-loopback     Default: true -If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist +

If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist

---volume-host-cidr-denylist stringSlice +--volume-host-cidr-denylist strings -A comma-separated list of CIDR ranges to avoid from volume plugins. +

A comma-separated list of CIDR ranges to avoid from volume plugins.

diff --git a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md index a13368ed01..ad9d8b022d 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md @@ -2,8 +2,21 @@ title: kube-proxy content_type: tool-reference weight: 30 +auto_generated: true --- + + + ## {{% heading "synopsis" %}} @@ -32,308 +45,308 @@ kube-proxy [flags] --azure-container-registry-config string -Path to the file containing Azure container registry configuration information. +

Path to the file containing Azure container registry configuration information.

---bind-address ip     Default: 0.0.0.0 +--bind-address string     Default: 0.0.0.0 -The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces) +

The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)

--bind-address-hard-fail -If true kube-proxy will treat failure to bind to a port as fatal and exit +

If true kube-proxy will treat failure to bind to a port as fatal and exit

--cleanup -If true cleanup iptables and ipvs rules and exit. +

If true cleanup iptables and ipvs rules and exit.

--cluster-cidr string -The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead +

The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead

--config string -The path to the configuration file. +

The path to the configuration file.

--config-sync-period duration     Default: 15m0s -How often configuration from the apiserver is refreshed. Must be greater than 0. +

How often configuration from the apiserver is refreshed. Must be greater than 0.

--conntrack-max-per-core int32     Default: 32768 -Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min). +

Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).

--conntrack-min int32     Default: 131072 -Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is). +

Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).

--conntrack-tcp-timeout-close-wait duration     Default: 1h0m0s -NAT timeout for TCP connections in the CLOSE_WAIT state +

NAT timeout for TCP connections in the CLOSE_WAIT state

--conntrack-tcp-timeout-established duration     Default: 24h0m0s -Idle timeout for established TCP connections (0 to leave as-is) +

Idle timeout for established TCP connections (0 to leave as-is)

--detect-local-mode LocalMode -Mode to use to detect local traffic +

Mode to use to detect local traffic

---feature-gates mapStringBool +--feature-gates <comma-separated 'key=True|False' pairs> -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false) +

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
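For illustration only, enabling two gates from the list above; the gate names are taken from that list and the values are examples:

```bash
# Sketch only: both gate names appear in the feature-gate list above.
kube-proxy --feature-gates=EndpointSliceProxying=true,IPv6DualStack=true
```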

--healthz-bind-address ipport     Default: 0.0.0.0:10256 -The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable. +

The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.

-h, --help -help for kube-proxy +

help for kube-proxy

--hostname-override string -If non-empty, will use this string as identification instead of the actual hostname. +

If non-empty, will use this string as identification instead of the actual hostname.

--iptables-masquerade-bit int32     Default: 14 -If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31]. +

If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].

--iptables-min-sync-period duration     Default: 1s -The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m'). +

The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').

--iptables-sync-period duration     Default: 30s -The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0. +

The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.

---ipvs-exclude-cidrs stringSlice +--ipvs-exclude-cidrs strings -A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules. +

A comma-separated list of CIDRs which the ipvs proxier should not touch when cleaning up IPVS rules.

--ipvs-min-sync-period duration -The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m'). +

The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').

--ipvs-scheduler string -The ipvs scheduler type when proxy mode is ipvs +

The ipvs scheduler type when proxy mode is ipvs

--ipvs-strict-arp -Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2 +

Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2

--ipvs-sync-period duration     Default: 30s -The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0. +

The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.

--ipvs-tcp-timeout duration -The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m'). +

The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').

--ipvs-tcpfin-timeout duration -The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m'). +

The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').

--ipvs-udp-timeout duration -The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m'). +

The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
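A hedged sketch of an IPVS-mode invocation using the flags above; `rr` (round-robin) is used as a placeholder scheduler name, and the sync periods shown are the documented defaults:

```bash
# Sketch only: IPVS mode with a placeholder round-robin scheduler and the
# documented default sync periods.
kube-proxy \
  --proxy-mode=ipvs \
  --ipvs-scheduler=rr \
  --ipvs-sync-period=30s \
  --ipvs-min-sync-period=5s
```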

--kube-api-burst int32     Default: 10 -Burst to use while talking with kubernetes apiserver +

Burst to use while talking with kubernetes apiserver

--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf" -Content type of requests sent to apiserver. +

Content type of requests sent to apiserver.

---kube-api-qps float32     Default: 5 +--kube-api-qps float     Default: 5 -QPS to use while talking with kubernetes apiserver +

QPS to use while talking with kubernetes apiserver

--kubeconfig string -Path to kubeconfig file with authorization information (the master location can be overridden by the master flag). +

Path to kubeconfig file with authorization information (the master location can be overridden by the master flag).

--log-flush-frequency duration     Default: 5s -Maximum number of seconds between log flushes +

Maximum number of seconds between log flushes

--masquerade-all -If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this not commonly needed) +

If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this is not commonly needed)

--master string -The address of the Kubernetes API server (overrides any value in kubeconfig) +

The address of the Kubernetes API server (overrides any value in kubeconfig)

--metrics-bind-address ipport     Default: 127.0.0.1:10249 -The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable. +

The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.

---nodeport-addresses stringSlice +--nodeport-addresses strings -A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses. +

A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
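Using the example blocks from the description above to restrict NodePort listening to specific address ranges:

```bash
# Sketch only: placeholder CIDR blocks in the format shown in the description.
kube-proxy --nodeport-addresses=1.2.3.0/24,1.2.3.4/32
```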

--oom-score-adj int32     Default: -999 -The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000] +

The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]

--profiling -If true enables profiling via web interface on /debug/pprof handler. +

If true enables profiling via web interface on /debug/pprof handler.

--proxy-mode ProxyMode -Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' or 'kernelspace' (windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy. +

Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' or 'kernelspace' (windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.

--proxy-port-range port-range -Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen. +

Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.

--show-hidden-metrics-for-version string -The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. +

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--udp-timeout duration     Default: 250ms -How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace +

How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace

--version version[=true] -Print version information and quit +

Print version information and quit

--write-config-to string -If set, write the default configuration values to this file and exit. +

If set, write the default configuration values to this file and exit.
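A small sketch combining `--write-config-to` with the `--config` flag documented earlier in this table; the file path (and its extension) is a placeholder:

```bash
# Sketch only: dump the defaults to a placeholder path, then start from that file.
kube-proxy --write-config-to=/tmp/kube-proxy-config.yaml
kube-proxy --config=/tmp/kube-proxy-config.yaml
```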

diff --git a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md index 7aa8ccf5d0..913d7eb925 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -2,8 +2,21 @@ title: kube-scheduler content_type: tool-reference weight: 30 +auto_generated: true --- + + + ## {{% heading "synopsis" %}} @@ -33,504 +46,504 @@ kube-scheduler [flags] --add-dir-header -If true, adds the file directory to the header of the log messages +

If true, adds the file directory to the header of the log messages

--address string     Default: "0.0.0.0" -DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead. +

DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.

--algorithm-provider string -DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider +

DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider

--alsologtostderr -log to standard error as well as files +

log to standard error as well as files

--authentication-kubeconfig string -kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster. +

kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.

--authentication-skip-lookup -If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster. +

If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.

--authentication-token-webhook-cache-ttl duration     Default: 10s -The duration to cache responses from the webhook token authenticator. +

The duration to cache responses from the webhook token authenticator.

--authentication-tolerate-lookup-failure     Default: true -If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous. +

If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.

---authorization-always-allow-paths stringSlice     Default: [/healthz] +--authorization-always-allow-paths strings     Default: "/healthz" -A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. +

A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.

--authorization-kubeconfig string -kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden. +

kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.

--authorization-webhook-cache-authorized-ttl duration     Default: 10s -The duration to cache 'authorized' responses from the webhook authorizer. +

The duration to cache 'authorized' responses from the webhook authorizer.

--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s -The duration to cache 'unauthorized' responses from the webhook authorizer. +

The duration to cache 'unauthorized' responses from the webhook authorizer.
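A hedged sketch of delegated authentication and authorization for kube-scheduler using the flags above; the kubeconfig path is a placeholder, and `/healthz` is the documented default for `--authorization-always-allow-paths`:

```bash
# Sketch only: placeholder kubeconfig path; /healthz is the documented default
# allow-path.
kube-scheduler \
  --authentication-kubeconfig=/path/to/scheduler.kubeconfig \
  --authorization-kubeconfig=/path/to/scheduler.kubeconfig \
  --authorization-always-allow-paths=/healthz
```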

--azure-container-registry-config string -Path to the file containing Azure container registry configuration information. +

Path to the file containing Azure container registry configuration information.

---bind-address ip     Default: 0.0.0.0 +--bind-address string     Default: 0.0.0.0 -The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. +

The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.

--cert-dir string -The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. +

The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.

--client-ca-file string -If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. +

If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.

--config string -The path to the configuration file. The following flags can overwrite fields in this file:
--address
--port
--use-legacy-policy-config
--policy-configmap
--policy-config-file
--algorithm-provider +

The path to the configuration file. The following flags can overwrite fields in this file:
--address
--port
--use-legacy-policy-config
--policy-configmap
--policy-config-file
--algorithm-provider

--contention-profiling     Default: true -DEPRECATED: enable lock contention profiling, if profiling is enabled +

DEPRECATED: enable lock contention profiling, if profiling is enabled

--experimental-logging-sanitization -[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. +

[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.

---feature-gates mapStringBool +--feature-gates <comma-separated 'key=True|False' pairs> -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false) +

A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
APIServerIdentity=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (BETA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
CSIMigrationvSphereComplete=true|false (BETA - default=false)
CSIServiceAccountToken=true|false (ALPHA - default=false)
CSIStorageCapacity=true|false (ALPHA - default=false)
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)
ConfigurableFSGroupPolicy=true|false (BETA - default=true)
CronJobControllerV2=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DownwardAPIHugePages=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EfficientWatchResumption=true|false (ALPHA - default=false)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceNodeName=true|false (ALPHA - default=false)
EndpointSliceProxying=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GenericEphemeralVolume=true|false (ALPHA - default=false)
GracefulNodeShutdown=true|false (ALPHA - default=false)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (BETA - default=true)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (BETA - default=true)
KubeletCredentialProviders=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MixedProtocolLBService=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (BETA - default=true)
NonPreemptingPriority=true|false (BETA - default=true)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (BETA - default=true)
RootCAConfigMap=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)
ServiceLBNodePortControl=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
WarningHeaders=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
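
These gates are passed to the scheduler as a single comma-separated list via `--feature-gates`. A minimal sketch; the two gates picked here are arbitrary examples from the list above, not a recommendation:

```bash
# Illustrative only: toggle two of the feature gates listed above when starting kube-scheduler.
kube-scheduler --feature-gates=GracefulNodeShutdown=true,DefaultPodTopologySpread=false
```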

--hard-pod-affinity-symmetric-weight int32     Default: 1

DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of the implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100. This option was moved to the policy configuration file.

-h, --help

help for kube-scheduler

--http2-max-streams-per-connection int

The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.

--kube-api-burst int32     Default: 100

DEPRECATED: burst to use while talking with kubernetes apiserver

--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"

DEPRECATED: content type of requests sent to apiserver.

--kube-api-qps float     Default: 50

DEPRECATED: QPS to use while talking with kubernetes apiserver

--kubeconfig string

DEPRECATED: path to kubeconfig file with authorization and master location information.

--leader-elect     Default: true

Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.

--leader-elect-lease-duration duration     Default: 15s

The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.

--leader-elect-renew-deadline duration     Default: 10s

The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.

--leader-elect-resource-lock string     Default: "leases"

The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.

--leader-elect-resource-name string     Default: "kube-scheduler"

The name of the resource object that is used for locking during leader election.

--leader-elect-resource-namespace string     Default: "kube-system"

The namespace of the resource object that is used for locking during leader election.

--leader-elect-retry-period duration     Default: 2s

The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
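
The leader-election flags above work together. As a hedged sketch, a replicated scheduler might spell out its election settings like this; the values shown are simply the documented defaults made explicit:

```bash
# Illustrative only: run a replicated scheduler with leader election spelled out.
kube-scheduler \
  --leader-elect=true \
  --leader-elect-resource-lock=leases \
  --leader-elect-resource-name=kube-scheduler \
  --leader-elect-resource-namespace=kube-system \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s
```

As noted above, the renew deadline must stay less than or equal to the lease duration.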

--lock-object-name string     Default: "kube-scheduler"

DEPRECATED: define the name of the lock object. Will be removed in favor of leader-elect-resource-name

--lock-object-namespace string     Default: "kube-system"

DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.

--log-backtrace-at <a string in the form 'file:N'>     Default: :0

when logging hits line file:N, emit a stack trace

--log-dir string

If non-empty, write log files in this directory

--log-file string

If non-empty, use this log file

--log-file-max-size uint     Default: 1800

Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

--log-flush-frequency duration     Default: 5s

Maximum number of seconds between log flushes

--logging-format string     Default: "text"

Sets the log format. Permitted formats: "json", "text".
Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency.
Non-default choices are currently alpha and subject to change without warning.
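
As a brief, hedged example combining the logging flags above (JSON output is alpha, so this is illustrative rather than recommended for production):

```bash
# Illustrative only: emit structured JSON logs at verbosity level 2.
kube-scheduler --logging-format=json --v=2
```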

--logtostderr     Default: true

log to standard error instead of files

--master string

The address of the Kubernetes API server (overrides any value in kubeconfig)

--one-output

If true, only write logs to their native severity level (vs also writing to each lower severity level).

--permit-port-sharing

If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]

--policy-config-file string

DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true. Note: The scheduler will fail if this is combined with Plugin configs

--policy-configmap string

DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'. Note: The scheduler will fail if this is combined with Plugin configs

--policy-configmap-namespace string     Default: "kube-system"

DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty. Note: The scheduler will fail if this is combined with Plugin configs

--port int     Default: 10251

DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.

--profiling     Default: true

DEPRECATED: enable profiling via web interface host:port/debug/pprof/

--requestheader-allowed-names strings

List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.

--requestheader-client-ca-file string

Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.

--requestheader-extra-headers-prefix strings     Default: "x-remote-extra-"

List of request header prefixes to inspect. X-Remote-Extra- is suggested.

--requestheader-group-headers strings     Default: "x-remote-group"

List of request headers to inspect for groups. X-Remote-Group is suggested.

--requestheader-username-headers strings     Default: "x-remote-user"

List of request headers to inspect for usernames. X-Remote-User is common.
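
A sketch of how the `--requestheader-*` flags above combine when an authenticating front proxy sits in front of the secure port. The CA file path and allowed common name follow common kubeadm conventions and are assumptions, not requirements:

```bash
# Illustrative only: trust usernames and groups asserted by an authenticating front proxy.
kube-scheduler \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```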

--scheduler-name string     Default: "default-scheduler"

DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".

--secure-port int     Default: 10259

The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.

--show-hidden-metrics-for-version string

The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--skip-headers

If true, avoid header prefixes in the log messages

--skip-log-headers

If true, avoid headers when opening log files

--stderrthreshold int     Default: 2

logs at or above this threshold go to stderr

--tls-cert-file string

File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.

--tls-cipher-suites strings

Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.

--tls-min-version string

Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13

--tls-private-key-file string

File containing the default x509 private key matching --tls-cert-file.

--tls-sni-cert-key string

A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches, and explicit domain patterns take precedence over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
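
The TLS serving flags above combine roughly as follows. This is a sketch: the certificate and key paths are placeholders, and the SNI pair reuses the example from the flag description:

```bash
# Illustrative only: serve HTTPS on the default secure port with an explicit cert/key pair
# and one extra SNI certificate scoped to specific domains.
kube-scheduler \
  --secure-port=10259 \
  --tls-cert-file=/var/lib/kube-scheduler/serving.crt \
  --tls-private-key-file=/var/lib/kube-scheduler/serving.key \
  --tls-min-version=VersionTLS12 \
  --tls-sni-cert-key=foo.crt,foo.key:*.foo.com,foo.com
```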

--use-legacy-policy-config

DEPRECATED: when set to true, the scheduler will ignore the policy ConfigMap and use the policy config file. Note: The scheduler will fail if this is combined with Plugin configs

-v, --v int

number for the log level verbosity

--version version[=true]

Print version information and quit

--vmodule <comma-separated 'pattern=N' settings>

comma-separated list of pattern=N settings for file-filtered logging

--write-config-to string

If set, write the configuration values to this file and exit.
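
Finally, a one-line sketch of `--write-config-to`; the output path is arbitrary:

```bash
# Illustrative only: write the scheduler's effective configuration to a file and exit.
kube-scheduler --write-config-to=/tmp/kube-scheduler-config.yaml
```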

diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md index 89ad711a56..a441f8ce5b 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md @@ -185,9 +185,9 @@ systemd unit file perhaps) to enable the token file. See docs further details. ### Authorize kubelet to create CSR -Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and just these) permissions, `system:node-bootstrapper`. +Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you just need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. ``` # enable bootstrapping nodes to create CSR @@ -345,7 +345,7 @@ The important elements to note are: * `token`: the token to use The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. -As stated earlier, _any_ valid authentication method can be used, not just tokens. +As stated earlier, _any_ valid authentication method can be used, not only tokens. Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file: diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index b569177dda..0e4ed2216e 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -302,7 +302,7 @@ kubelet [flags] --enable-cadvisor-json-endpoints     Default: `false` -Enable cAdvisor json `/spec` and `/stats/*` endpoints. (DEPRECATED: will be removed in a future version) +Enable cAdvisor json `/spec` and `/stats/*` endpoints. This flag has no effect on the /stats/summary endpoint. (DEPRECATED: will be removed in a future version) @@ -917,7 +917,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
--pod-infra-container-image string     Default: `k8s.gcr.io/pause:3.2` -The image whose network/IPC namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to `docker`. + Specified image will not be pruned by the image garbage collector. When container-runtime is set to `docker`, all containers in each pod will use the network/ipc namespaces from this image. Other CRI implementations have their own configuration to set this image. diff --git a/content/en/docs/reference/glossary/cloud-controller-manager.md b/content/en/docs/reference/glossary/cloud-controller-manager.md index c78bf393cb..874d0925cf 100755 --- a/content/en/docs/reference/glossary/cloud-controller-manager.md +++ b/content/en/docs/reference/glossary/cloud-controller-manager.md @@ -14,7 +14,7 @@ tags: A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact -with that cloud platform from components that just interact with your cluster. +with that cloud platform from components that only interact with your cluster. diff --git a/content/en/docs/reference/glossary/cluster-operator.md b/content/en/docs/reference/glossary/cluster-operator.md index c897343830..48bdd4d3df 100755 --- a/content/en/docs/reference/glossary/cluster-operator.md +++ b/content/en/docs/reference/glossary/cluster-operator.md @@ -17,6 +17,6 @@ tags: Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.
{{< note >}} -Cluster operators are different from the [Operator pattern](https://coreos.com/operators) that extends the Kubernetes API. +Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API. {{< /note >}} diff --git a/content/en/docs/reference/glossary/index.md b/content/en/docs/reference/glossary/index.md index 1fb8799a16..29bd54bd21 100755 --- a/content/en/docs/reference/glossary/index.md +++ b/content/en/docs/reference/glossary/index.md @@ -2,7 +2,7 @@ approvers: - chenopis - abiogenesis-now -title: Standardized Glossary +title: Glossary layout: glossary noedit: true default_active_tag: fundamental diff --git a/content/en/docs/reference/issues-security/_index.md b/content/en/docs/reference/issues-security/_index.md index 530e98bf61..50c3f29333 100644 --- a/content/en/docs/reference/issues-security/_index.md +++ b/content/en/docs/reference/issues-security/_index.md @@ -1,4 +1,4 @@ --- title: Kubernetes Issues and Security -weight: 10 +weight: 40 --- \ No newline at end of file diff --git a/content/en/docs/reference/kubectl/_index.md b/content/en/docs/reference/kubectl/_index.md index 7b6c2d720b..765adb6fe8 100755 --- a/content/en/docs/reference/kubectl/_index.md +++ b/content/en/docs/reference/kubectl/_index.md @@ -1,5 +1,5 @@ --- -title: "kubectl CLI" +title: "kubectl" weight: 60 --- diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 79ca330bd5..f5a971d3bd 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -320,6 +320,18 @@ kubectl top pod POD_NAME --containers # Show metrics for a given p kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory' ``` +## Interacting with Deployments and Services +```bash +kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case) +kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case) + +kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend +kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name + +kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by +kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases) +``` + ## Interacting with Nodes and cluster ```bash @@ -348,7 +360,7 @@ Other operations for exploring API resources: ```bash kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources -kubectl api-resources -o name # All resources with simple output (just the resource name) +kubectl api-resources -o name # All resources with simple output (only the resource name) kubectl api-resources -o wide # All resources with expanded (aka "wide") output kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs kubectl api-resources --api-group=extensions # All resources in the "extensions" API group @@ -375,6 +387,9 @@ Examples using `-o=custom-columns`: # All images running in a cluster kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' +# All images running in namespace: default, grouped by 
Pod +kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image" + # All images excluding "k8s.gcr.io/coredns:1.6.2" kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' diff --git a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md index 6c214513a0..ac7b7a49f9 100644 --- a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -7,7 +7,7 @@ reviewers: --- -You can use the Kubernetes command line tool kubectl to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the docker commands and the kubectl commands. The following sections show a docker sub-command and describe the equivalent kubectl command. +You can use the Kubernetes command line tool `kubectl` to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent `kubectl` command. diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index a9f1550659..f8ec7e5603 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the This overview covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation. -For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/). +For installation instructions see [installing kubectl](/docs/tasks/tools/). @@ -69,7 +69,7 @@ for example `create`, `get`, `describe`, `delete`. Flags that you specify from the command line override default values and any corresponding environment variables. {{< /caution >}} -If you need help, just run `kubectl help` from the terminal window. +If you need help, run `kubectl help` from the terminal window. 
## Operations diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/_index.md b/content/en/docs/reference/kubernetes-api/policies-resources/_index.md deleted file mode 100644 index 251e411647..0000000000 --- a/content/en/docs/reference/kubernetes-api/policies-resources/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: "Policies Resources" -weight: 6 ---- diff --git a/content/en/docs/reference/kubernetes-api/policy-resources/_index.md b/content/en/docs/reference/kubernetes-api/policy-resources/_index.md new file mode 100644 index 0000000000..06a9e27fee --- /dev/null +++ b/content/en/docs/reference/kubernetes-api/policy-resources/_index.md @@ -0,0 +1,4 @@ +--- +title: "Policy Resources" +weight: 6 +--- diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/limit-range-v1.md b/content/en/docs/reference/kubernetes-api/policy-resources/limit-range-v1.md similarity index 90% rename from content/en/docs/reference/kubernetes-api/policies-resources/limit-range-v1.md rename to content/en/docs/reference/kubernetes-api/policy-resources/limit-range-v1.md index d73c4d3630..00982c5ab8 100644 --- a/content/en/docs/reference/kubernetes-api/policies-resources/limit-range-v1.md +++ b/content/en/docs/reference/kubernetes-api/policy-resources/limit-range-v1.md @@ -30,7 +30,7 @@ LimitRange sets resource usage limits for each kind of resource in a Namespace. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">LimitRangeSpec) +- **spec** (}}">LimitRangeSpec) Spec defines the limits enforced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -95,7 +95,7 @@ LimitRangeList is a list of LimitRange items. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">LimitRange), required +- **items** ([]}}">LimitRange), required Items is a list of LimitRange objects. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ @@ -142,7 +142,7 @@ GET /api/v1/namespaces/{namespace}/limitranges/{name} #### Response -200 (}}">LimitRange): OK +200 (}}">LimitRange): OK 401: Unauthorized @@ -215,7 +215,7 @@ GET /api/v1/namespaces/{namespace}/limitranges #### Response -200 (}}">LimitRangeList): OK +200 (}}">LimitRangeList): OK 401: Unauthorized @@ -283,7 +283,7 @@ GET /api/v1/limitranges #### Response -200 (}}">LimitRangeList): OK +200 (}}">LimitRangeList): OK 401: Unauthorized @@ -302,7 +302,7 @@ POST /api/v1/namespaces/{namespace}/limitranges }}">namespace -- **body**: }}">LimitRange, required +- **body**: }}">LimitRange, required @@ -326,11 +326,11 @@ POST /api/v1/namespaces/{namespace}/limitranges #### Response -200 (}}">LimitRange): OK +200 (}}">LimitRange): OK -201 (}}">LimitRange): Created +201 (}}">LimitRange): Created -202 (}}">LimitRange): Accepted +202 (}}">LimitRange): Accepted 401: Unauthorized @@ -354,7 +354,7 @@ PUT /api/v1/namespaces/{namespace}/limitranges/{name} }}">namespace -- **body**: }}">LimitRange, required +- **body**: }}">LimitRange, required @@ -378,9 +378,9 @@ PUT /api/v1/namespaces/{namespace}/limitranges/{name} #### Response -200 (}}">LimitRange): OK +200 (}}">LimitRange): OK -201 (}}">LimitRange): Created +201 (}}">LimitRange): Created 401: Unauthorized @@ -433,7 +433,7 @@ PATCH /api/v1/namespaces/{namespace}/limitranges/{name} #### Response -200 (}}">LimitRange): OK +200 (}}">LimitRange): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/network-policy-v1.md b/content/en/docs/reference/kubernetes-api/policy-resources/network-policy-v1.md similarity index 93% rename from content/en/docs/reference/kubernetes-api/policies-resources/network-policy-v1.md rename to content/en/docs/reference/kubernetes-api/policy-resources/network-policy-v1.md index 798a3dc891..78e4868d94 100644 --- a/content/en/docs/reference/kubernetes-api/policies-resources/network-policy-v1.md +++ b/content/en/docs/reference/kubernetes-api/policy-resources/network-policy-v1.md @@ -30,7 +30,7 @@ NetworkPolicy describes what network traffic is allowed for a set of Pods Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">NetworkPolicySpec) +- **spec** (}}">NetworkPolicySpec) Specification of the desired behavior for this NetworkPolicy. @@ -190,7 +190,7 @@ NetworkPolicyList is a list of NetworkPolicy objects. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">NetworkPolicy), required +- **items** ([]}}">NetworkPolicy), required Items is a list of schema objects. 
@@ -237,7 +237,7 @@ GET /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} #### Response -200 (}}">NetworkPolicy): OK +200 (}}">NetworkPolicy): OK 401: Unauthorized @@ -310,7 +310,7 @@ GET /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies #### Response -200 (}}">NetworkPolicyList): OK +200 (}}">NetworkPolicyList): OK 401: Unauthorized @@ -378,7 +378,7 @@ GET /apis/networking.k8s.io/v1/networkpolicies #### Response -200 (}}">NetworkPolicyList): OK +200 (}}">NetworkPolicyList): OK 401: Unauthorized @@ -397,7 +397,7 @@ POST /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies }}">namespace -- **body**: }}">NetworkPolicy, required +- **body**: }}">NetworkPolicy, required @@ -421,11 +421,11 @@ POST /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies #### Response -200 (}}">NetworkPolicy): OK +200 (}}">NetworkPolicy): OK -201 (}}">NetworkPolicy): Created +201 (}}">NetworkPolicy): Created -202 (}}">NetworkPolicy): Accepted +202 (}}">NetworkPolicy): Accepted 401: Unauthorized @@ -449,7 +449,7 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} }}">namespace -- **body**: }}">NetworkPolicy, required +- **body**: }}">NetworkPolicy, required @@ -473,9 +473,9 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} #### Response -200 (}}">NetworkPolicy): OK +200 (}}">NetworkPolicy): OK -201 (}}">NetworkPolicy): Created +201 (}}">NetworkPolicy): Created 401: Unauthorized @@ -528,7 +528,7 @@ PATCH /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} #### Response -200 (}}">NetworkPolicy): OK +200 (}}">NetworkPolicy): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/pod-disruption-budget-v1beta1.md b/content/en/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1beta1.md similarity index 86% rename from content/en/docs/reference/kubernetes-api/policies-resources/pod-disruption-budget-v1beta1.md rename to content/en/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1beta1.md index 47b9d83a7e..9120b19468 100644 --- a/content/en/docs/reference/kubernetes-api/policies-resources/pod-disruption-budget-v1beta1.md +++ b/content/en/docs/reference/kubernetes-api/policy-resources/pod-disruption-budget-v1beta1.md @@ -29,11 +29,11 @@ PodDisruptionBudget is an object to define the max disruption that can be caused - **metadata** (}}">ObjectMeta) -- **spec** (}}">PodDisruptionBudgetSpec) +- **spec** (}}">PodDisruptionBudgetSpec) Specification of the desired behavior of the PodDisruptionBudget. -- **status** (}}">PodDisruptionBudgetStatus) +- **status** (}}">PodDisruptionBudgetStatus) Most recently observed status of the PodDisruptionBudget. @@ -121,7 +121,7 @@ PodDisruptionBudgetList is a collection of PodDisruptionBudgets. 
- **metadata** (}}">ListMeta) -- **items** ([]}}">PodDisruptionBudget), required +- **items** ([]}}">PodDisruptionBudget), required @@ -167,7 +167,7 @@ GET /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name} #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK 401: Unauthorized @@ -200,7 +200,7 @@ GET /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name}/stat #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK 401: Unauthorized @@ -273,7 +273,7 @@ GET /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets #### Response -200 (}}">PodDisruptionBudgetList): OK +200 (}}">PodDisruptionBudgetList): OK 401: Unauthorized @@ -341,7 +341,7 @@ GET /apis/policy/v1beta1/poddisruptionbudgets #### Response -200 (}}">PodDisruptionBudgetList): OK +200 (}}">PodDisruptionBudgetList): OK 401: Unauthorized @@ -360,7 +360,7 @@ POST /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets }}">namespace -- **body**: }}">PodDisruptionBudget, required +- **body**: }}">PodDisruptionBudget, required @@ -384,11 +384,11 @@ POST /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK -201 (}}">PodDisruptionBudget): Created +201 (}}">PodDisruptionBudget): Created -202 (}}">PodDisruptionBudget): Accepted +202 (}}">PodDisruptionBudget): Accepted 401: Unauthorized @@ -412,7 +412,7 @@ PUT /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name} }}">namespace -- **body**: }}">PodDisruptionBudget, required +- **body**: }}">PodDisruptionBudget, required @@ -436,9 +436,9 @@ PUT /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name} #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK -201 (}}">PodDisruptionBudget): Created +201 (}}">PodDisruptionBudget): Created 401: Unauthorized @@ -462,7 +462,7 @@ PUT /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name}/stat }}">namespace -- **body**: }}">PodDisruptionBudget, required +- **body**: }}">PodDisruptionBudget, required @@ -486,9 +486,9 @@ PUT /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name}/stat #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK -201 (}}">PodDisruptionBudget): Created +201 (}}">PodDisruptionBudget): Created 401: Unauthorized @@ -541,7 +541,7 @@ PATCH /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name} #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK 401: Unauthorized @@ -594,7 +594,7 @@ PATCH /apis/policy/v1beta1/namespaces/{namespace}/poddisruptionbudgets/{name}/st #### Response -200 (}}">PodDisruptionBudget): OK +200 (}}">PodDisruptionBudget): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/pod-security-policy-v1beta1.md b/content/en/docs/reference/kubernetes-api/policy-resources/pod-security-policy-v1beta1.md similarity index 91% rename from content/en/docs/reference/kubernetes-api/policies-resources/pod-security-policy-v1beta1.md rename to content/en/docs/reference/kubernetes-api/policy-resources/pod-security-policy-v1beta1.md index 1d2eac8837..59444cbbcf 100644 --- a/content/en/docs/reference/kubernetes-api/policies-resources/pod-security-policy-v1beta1.md +++ b/content/en/docs/reference/kubernetes-api/policy-resources/pod-security-policy-v1beta1.md @@ -30,7 +30,7 @@ PodSecurityPolicy governs the ability to make 
requests that affect the Security Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">PodSecurityPolicySpec) +- **spec** (}}">PodSecurityPolicySpec) spec defines the policy enforced. @@ -331,7 +331,7 @@ PodSecurityPolicyList is a list of PodSecurityPolicy objects. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">PodSecurityPolicy), required +- **items** ([]}}">PodSecurityPolicy), required items is a list of schema objects. @@ -373,7 +373,7 @@ GET /apis/policy/v1beta1/podsecuritypolicies/{name} #### Response -200 (}}">PodSecurityPolicy): OK +200 (}}">PodSecurityPolicy): OK 401: Unauthorized @@ -441,7 +441,7 @@ GET /apis/policy/v1beta1/podsecuritypolicies #### Response -200 (}}">PodSecurityPolicyList): OK +200 (}}">PodSecurityPolicyList): OK 401: Unauthorized @@ -455,7 +455,7 @@ POST /apis/policy/v1beta1/podsecuritypolicies #### Parameters -- **body**: }}">PodSecurityPolicy, required +- **body**: }}">PodSecurityPolicy, required @@ -479,11 +479,11 @@ POST /apis/policy/v1beta1/podsecuritypolicies #### Response -200 (}}">PodSecurityPolicy): OK +200 (}}">PodSecurityPolicy): OK -201 (}}">PodSecurityPolicy): Created +201 (}}">PodSecurityPolicy): Created -202 (}}">PodSecurityPolicy): Accepted +202 (}}">PodSecurityPolicy): Accepted 401: Unauthorized @@ -502,7 +502,7 @@ PUT /apis/policy/v1beta1/podsecuritypolicies/{name} name of the PodSecurityPolicy -- **body**: }}">PodSecurityPolicy, required +- **body**: }}">PodSecurityPolicy, required @@ -526,9 +526,9 @@ PUT /apis/policy/v1beta1/podsecuritypolicies/{name} #### Response -200 (}}">PodSecurityPolicy): OK +200 (}}">PodSecurityPolicy): OK -201 (}}">PodSecurityPolicy): Created +201 (}}">PodSecurityPolicy): Created 401: Unauthorized @@ -576,7 +576,7 @@ PATCH /apis/policy/v1beta1/podsecuritypolicies/{name} #### Response -200 (}}">PodSecurityPolicy): OK +200 (}}">PodSecurityPolicy): OK 401: Unauthorized @@ -624,9 +624,9 @@ DELETE /apis/policy/v1beta1/podsecuritypolicies/{name} #### Response -200 (}}">PodSecurityPolicy): OK +200 (}}">PodSecurityPolicy): OK -202 (}}">PodSecurityPolicy): Accepted +202 (}}">PodSecurityPolicy): Accepted 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/policies-resources/resource-quota-v1.md b/content/en/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md similarity index 86% rename from content/en/docs/reference/kubernetes-api/policies-resources/resource-quota-v1.md rename to content/en/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md index 66b0451d2d..93b8cbc9e8 100644 --- a/content/en/docs/reference/kubernetes-api/policies-resources/resource-quota-v1.md +++ b/content/en/docs/reference/kubernetes-api/policy-resources/resource-quota-v1.md @@ -30,11 +30,11 @@ ResourceQuota sets aggregate quota restrictions enforced per namespace Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">ResourceQuotaSpec) +- **spec** (}}">ResourceQuotaSpec) Spec defines the desired quota. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">ResourceQuotaStatus) +- **status** (}}">ResourceQuotaStatus) Status defines the actual enforced quota and its current usage. 
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -120,7 +120,7 @@ ResourceQuotaList is a list of ResourceQuota items. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">ResourceQuota), required +- **items** ([]}}">ResourceQuota), required Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ @@ -167,7 +167,7 @@ GET /api/v1/namespaces/{namespace}/resourcequotas/{name} #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK 401: Unauthorized @@ -200,7 +200,7 @@ GET /api/v1/namespaces/{namespace}/resourcequotas/{name}/status #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK 401: Unauthorized @@ -273,7 +273,7 @@ GET /api/v1/namespaces/{namespace}/resourcequotas #### Response -200 (}}">ResourceQuotaList): OK +200 (}}">ResourceQuotaList): OK 401: Unauthorized @@ -341,7 +341,7 @@ GET /api/v1/resourcequotas #### Response -200 (}}">ResourceQuotaList): OK +200 (}}">ResourceQuotaList): OK 401: Unauthorized @@ -360,7 +360,7 @@ POST /api/v1/namespaces/{namespace}/resourcequotas }}">namespace -- **body**: }}">ResourceQuota, required +- **body**: }}">ResourceQuota, required @@ -384,11 +384,11 @@ POST /api/v1/namespaces/{namespace}/resourcequotas #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK -201 (}}">ResourceQuota): Created +201 (}}">ResourceQuota): Created -202 (}}">ResourceQuota): Accepted +202 (}}">ResourceQuota): Accepted 401: Unauthorized @@ -412,7 +412,7 @@ PUT /api/v1/namespaces/{namespace}/resourcequotas/{name} }}">namespace -- **body**: }}">ResourceQuota, required +- **body**: }}">ResourceQuota, required @@ -436,9 +436,9 @@ PUT /api/v1/namespaces/{namespace}/resourcequotas/{name} #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK -201 (}}">ResourceQuota): Created +201 (}}">ResourceQuota): Created 401: Unauthorized @@ -462,7 +462,7 @@ PUT /api/v1/namespaces/{namespace}/resourcequotas/{name}/status }}">namespace -- **body**: }}">ResourceQuota, required +- **body**: }}">ResourceQuota, required @@ -486,9 +486,9 @@ PUT /api/v1/namespaces/{namespace}/resourcequotas/{name}/status #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK -201 (}}">ResourceQuota): Created +201 (}}">ResourceQuota): Created 401: Unauthorized @@ -541,7 +541,7 @@ PATCH /api/v1/namespaces/{namespace}/resourcequotas/{name} #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK 401: Unauthorized @@ -594,7 +594,7 @@ PATCH /api/v1/namespaces/{namespace}/resourcequotas/{name}/status #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK 401: Unauthorized @@ -647,9 +647,9 @@ DELETE /api/v1/namespaces/{namespace}/resourcequotas/{name} #### Response -200 (}}">ResourceQuota): OK +200 (}}">ResourceQuota): OK -202 (}}">ResourceQuota): Accepted +202 (}}">ResourceQuota): Accepted 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/service-resources/_index.md b/content/en/docs/reference/kubernetes-api/service-resources/_index.md new file mode 100644 index 0000000000..0ab511b5a4 --- /dev/null +++ b/content/en/docs/reference/kubernetes-api/service-resources/_index.md @@ -0,0 +1,4 @@ +--- +title: "Service Resources" +weight: 2 +--- diff --git a/content/en/docs/reference/kubernetes-api/services-resources/endpoint-slice-v1beta1.md 
b/content/en/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1beta1.md similarity index 92% rename from content/en/docs/reference/kubernetes-api/services-resources/endpoint-slice-v1beta1.md rename to content/en/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1beta1.md index bb8c7213bb..1ad467b022 100644 --- a/content/en/docs/reference/kubernetes-api/services-resources/endpoint-slice-v1beta1.md +++ b/content/en/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1beta1.md @@ -136,7 +136,7 @@ EndpointSliceList represents a list of endpoint slices Standard list metadata. -- **items** ([]}}">EndpointSlice), required +- **items** ([]}}">EndpointSlice), required List of endpoint slices @@ -183,7 +183,7 @@ GET /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices/{name} #### Response -200 (}}">EndpointSlice): OK +200 (}}">EndpointSlice): OK 401: Unauthorized @@ -256,7 +256,7 @@ GET /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices #### Response -200 (}}">EndpointSliceList): OK +200 (}}">EndpointSliceList): OK 401: Unauthorized @@ -324,7 +324,7 @@ GET /apis/discovery.k8s.io/v1beta1/endpointslices #### Response -200 (}}">EndpointSliceList): OK +200 (}}">EndpointSliceList): OK 401: Unauthorized @@ -343,7 +343,7 @@ POST /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices }}">namespace -- **body**: }}">EndpointSlice, required +- **body**: }}">EndpointSlice, required @@ -367,11 +367,11 @@ POST /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices #### Response -200 (}}">EndpointSlice): OK +200 (}}">EndpointSlice): OK -201 (}}">EndpointSlice): Created +201 (}}">EndpointSlice): Created -202 (}}">EndpointSlice): Accepted +202 (}}">EndpointSlice): Accepted 401: Unauthorized @@ -395,7 +395,7 @@ PUT /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices/{name} }}">namespace -- **body**: }}">EndpointSlice, required +- **body**: }}">EndpointSlice, required @@ -419,9 +419,9 @@ PUT /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices/{name} #### Response -200 (}}">EndpointSlice): OK +200 (}}">EndpointSlice): OK -201 (}}">EndpointSlice): Created +201 (}}">EndpointSlice): Created 401: Unauthorized @@ -474,7 +474,7 @@ PATCH /apis/discovery.k8s.io/v1beta1/namespaces/{namespace}/endpointslices/{name #### Response -200 (}}">EndpointSlice): OK +200 (}}">EndpointSlice): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/services-resources/endpoints-v1.md b/content/en/docs/reference/kubernetes-api/service-resources/endpoints-v1.md similarity index 92% rename from content/en/docs/reference/kubernetes-api/services-resources/endpoints-v1.md rename to content/en/docs/reference/kubernetes-api/service-resources/endpoints-v1.md index b0c73533f8..ce5c582b30 100644 --- a/content/en/docs/reference/kubernetes-api/services-resources/endpoints-v1.md +++ b/content/en/docs/reference/kubernetes-api/service-resources/endpoints-v1.md @@ -144,7 +144,7 @@ EndpointsList is a list of endpoints. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">Endpoints), required +- **items** ([]}}">Endpoints), required List of endpoints. 
@@ -191,7 +191,7 @@ GET /api/v1/namespaces/{namespace}/endpoints/{name} #### Response -200 (}}">Endpoints): OK +200 (}}">Endpoints): OK 401: Unauthorized @@ -264,7 +264,7 @@ GET /api/v1/namespaces/{namespace}/endpoints #### Response -200 (}}">EndpointsList): OK +200 (}}">EndpointsList): OK 401: Unauthorized @@ -332,7 +332,7 @@ GET /api/v1/endpoints #### Response -200 (}}">EndpointsList): OK +200 (}}">EndpointsList): OK 401: Unauthorized @@ -351,7 +351,7 @@ POST /api/v1/namespaces/{namespace}/endpoints }}">namespace -- **body**: }}">Endpoints, required +- **body**: }}">Endpoints, required @@ -375,11 +375,11 @@ POST /api/v1/namespaces/{namespace}/endpoints #### Response -200 (}}">Endpoints): OK +200 (}}">Endpoints): OK -201 (}}">Endpoints): Created +201 (}}">Endpoints): Created -202 (}}">Endpoints): Accepted +202 (}}">Endpoints): Accepted 401: Unauthorized @@ -403,7 +403,7 @@ PUT /api/v1/namespaces/{namespace}/endpoints/{name} }}">namespace -- **body**: }}">Endpoints, required +- **body**: }}">Endpoints, required @@ -427,9 +427,9 @@ PUT /api/v1/namespaces/{namespace}/endpoints/{name} #### Response -200 (}}">Endpoints): OK +200 (}}">Endpoints): OK -201 (}}">Endpoints): Created +201 (}}">Endpoints): Created 401: Unauthorized @@ -482,7 +482,7 @@ PATCH /api/v1/namespaces/{namespace}/endpoints/{name} #### Response -200 (}}">Endpoints): OK +200 (}}">Endpoints): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/services-resources/ingress-class-v1.md b/content/en/docs/reference/kubernetes-api/service-resources/ingress-class-v1.md similarity index 87% rename from content/en/docs/reference/kubernetes-api/services-resources/ingress-class-v1.md rename to content/en/docs/reference/kubernetes-api/service-resources/ingress-class-v1.md index 121c9551eb..92ad085310 100644 --- a/content/en/docs/reference/kubernetes-api/services-resources/ingress-class-v1.md +++ b/content/en/docs/reference/kubernetes-api/service-resources/ingress-class-v1.md @@ -30,7 +30,7 @@ IngressClass represents the class of the Ingress, referenced by the Ingress Spec Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">IngressClassSpec) +- **spec** (}}">IngressClassSpec) Spec is the desired state of the IngressClass. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -72,7 +72,7 @@ IngressClassList is a collection of IngressClasses. Standard list metadata. -- **items** ([]}}">IngressClass), required +- **items** ([]}}">IngressClass), required Items is the list of IngressClasses. 
@@ -114,7 +114,7 @@ GET /apis/networking.k8s.io/v1/ingressclasses/{name} #### Response -200 (}}">IngressClass): OK +200 (}}">IngressClass): OK 401: Unauthorized @@ -182,7 +182,7 @@ GET /apis/networking.k8s.io/v1/ingressclasses #### Response -200 (}}">IngressClassList): OK +200 (}}">IngressClassList): OK 401: Unauthorized @@ -196,7 +196,7 @@ POST /apis/networking.k8s.io/v1/ingressclasses #### Parameters -- **body**: }}">IngressClass, required +- **body**: }}">IngressClass, required @@ -220,11 +220,11 @@ POST /apis/networking.k8s.io/v1/ingressclasses #### Response -200 (}}">IngressClass): OK +200 (}}">IngressClass): OK -201 (}}">IngressClass): Created +201 (}}">IngressClass): Created -202 (}}">IngressClass): Accepted +202 (}}">IngressClass): Accepted 401: Unauthorized @@ -243,7 +243,7 @@ PUT /apis/networking.k8s.io/v1/ingressclasses/{name} name of the IngressClass -- **body**: }}">IngressClass, required +- **body**: }}">IngressClass, required @@ -267,9 +267,9 @@ PUT /apis/networking.k8s.io/v1/ingressclasses/{name} #### Response -200 (}}">IngressClass): OK +200 (}}">IngressClass): OK -201 (}}">IngressClass): Created +201 (}}">IngressClass): Created 401: Unauthorized @@ -317,7 +317,7 @@ PATCH /apis/networking.k8s.io/v1/ingressclasses/{name} #### Response -200 (}}">IngressClass): OK +200 (}}">IngressClass): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/services-resources/ingress-v1.md b/content/en/docs/reference/kubernetes-api/service-resources/ingress-v1.md similarity index 93% rename from content/en/docs/reference/kubernetes-api/services-resources/ingress-v1.md rename to content/en/docs/reference/kubernetes-api/service-resources/ingress-v1.md index 003ad959ea..72a0383f96 100644 --- a/content/en/docs/reference/kubernetes-api/services-resources/ingress-v1.md +++ b/content/en/docs/reference/kubernetes-api/service-resources/ingress-v1.md @@ -30,11 +30,11 @@ Ingress is a collection of rules that allow inbound connections to reach the end Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">IngressSpec) +- **spec** (}}">IngressSpec) Spec is the desired state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">IngressStatus) +- **status** (}}">IngressStatus) Status is the current state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -274,7 +274,7 @@ IngressList is a collection of Ingress. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">Ingress), required +- **items** ([]}}">Ingress), required Items is the list of Ingress. 
@@ -321,7 +321,7 @@ GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK 401: Unauthorized @@ -354,7 +354,7 @@ GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK 401: Unauthorized @@ -427,7 +427,7 @@ GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses #### Response -200 (}}">IngressList): OK +200 (}}">IngressList): OK 401: Unauthorized @@ -495,7 +495,7 @@ GET /apis/networking.k8s.io/v1/ingresses #### Response -200 (}}">IngressList): OK +200 (}}">IngressList): OK 401: Unauthorized @@ -514,7 +514,7 @@ POST /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses }}">namespace -- **body**: }}">Ingress, required +- **body**: }}">Ingress, required @@ -538,11 +538,11 @@ POST /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK -201 (}}">Ingress): Created +201 (}}">Ingress): Created -202 (}}">Ingress): Accepted +202 (}}">Ingress): Accepted 401: Unauthorized @@ -566,7 +566,7 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} }}">namespace -- **body**: }}">Ingress, required +- **body**: }}">Ingress, required @@ -590,9 +590,9 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK -201 (}}">Ingress): Created +201 (}}">Ingress): Created 401: Unauthorized @@ -616,7 +616,7 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status }}">namespace -- **body**: }}">Ingress, required +- **body**: }}">Ingress, required @@ -640,9 +640,9 @@ PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK -201 (}}">Ingress): Created +201 (}}">Ingress): Created 401: Unauthorized @@ -695,7 +695,7 @@ PATCH /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK 401: Unauthorized @@ -748,7 +748,7 @@ PATCH /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status #### Response -200 (}}">Ingress): OK +200 (}}">Ingress): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/services-resources/service-v1.md b/content/en/docs/reference/kubernetes-api/service-resources/service-v1.md similarity index 94% rename from content/en/docs/reference/kubernetes-api/services-resources/service-v1.md rename to content/en/docs/reference/kubernetes-api/service-resources/service-v1.md index bfb2c819bd..3ed9e93d96 100644 --- a/content/en/docs/reference/kubernetes-api/services-resources/service-v1.md +++ b/content/en/docs/reference/kubernetes-api/service-resources/service-v1.md @@ -30,11 +30,11 @@ Service is a named abstraction of software service (for example, mysql) consisti Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">ServiceSpec) +- **spec** (}}">ServiceSpec) Spec defines the behavior of a service. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">ServiceStatus) +- **status** (}}">ServiceStatus) Most recently observed status of the service. Populated by the system. Read-only. 
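The Ingress operations above accept an Ingress object in the request body. As a hedged illustration of the networking.k8s.io/v1 shape (host, path, backing Service name, and port are placeholders), a minimal manifest could be:

```bash
# Hypothetical sketch: a minimal Ingress routing one host to one Service.
# Host, Service name, and port are placeholders for illustration only.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
EOF
```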
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -290,7 +290,7 @@ ServiceList holds a list of services. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">Service), required +- **items** ([]}}">Service), required List of services @@ -337,7 +337,7 @@ GET /api/v1/namespaces/{namespace}/services/{name} #### Response -200 (}}">Service): OK +200 (}}">Service): OK 401: Unauthorized @@ -370,7 +370,7 @@ GET /api/v1/namespaces/{namespace}/services/{name}/status #### Response -200 (}}">Service): OK +200 (}}">Service): OK 401: Unauthorized @@ -443,7 +443,7 @@ GET /api/v1/namespaces/{namespace}/services #### Response -200 (}}">ServiceList): OK +200 (}}">ServiceList): OK 401: Unauthorized @@ -511,7 +511,7 @@ GET /api/v1/services #### Response -200 (}}">ServiceList): OK +200 (}}">ServiceList): OK 401: Unauthorized @@ -530,7 +530,7 @@ POST /api/v1/namespaces/{namespace}/services }}">namespace -- **body**: }}">Service, required +- **body**: }}">Service, required @@ -554,11 +554,11 @@ POST /api/v1/namespaces/{namespace}/services #### Response -200 (}}">Service): OK +200 (}}">Service): OK -201 (}}">Service): Created +201 (}}">Service): Created -202 (}}">Service): Accepted +202 (}}">Service): Accepted 401: Unauthorized @@ -582,7 +582,7 @@ PUT /api/v1/namespaces/{namespace}/services/{name} }}">namespace -- **body**: }}">Service, required +- **body**: }}">Service, required @@ -606,9 +606,9 @@ PUT /api/v1/namespaces/{namespace}/services/{name} #### Response -200 (}}">Service): OK +200 (}}">Service): OK -201 (}}">Service): Created +201 (}}">Service): Created 401: Unauthorized @@ -632,7 +632,7 @@ PUT /api/v1/namespaces/{namespace}/services/{name}/status }}">namespace -- **body**: }}">Service, required +- **body**: }}">Service, required @@ -656,9 +656,9 @@ PUT /api/v1/namespaces/{namespace}/services/{name}/status #### Response -200 (}}">Service): OK +200 (}}">Service): OK -201 (}}">Service): Created +201 (}}">Service): Created 401: Unauthorized @@ -711,7 +711,7 @@ PATCH /api/v1/namespaces/{namespace}/services/{name} #### Response -200 (}}">Service): OK +200 (}}">Service): OK 401: Unauthorized @@ -764,7 +764,7 @@ PATCH /api/v1/namespaces/{namespace}/services/{name}/status #### Response -200 (}}">Service): OK +200 (}}">Service): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/services-resources/_index.md b/content/en/docs/reference/kubernetes-api/services-resources/_index.md deleted file mode 100644 index 1c4c64040d..0000000000 --- a/content/en/docs/reference/kubernetes-api/services-resources/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: "Services Resources" -weight: 2 ---- diff --git a/content/en/docs/reference/kubernetes-api/workload-resources/_index.md b/content/en/docs/reference/kubernetes-api/workload-resources/_index.md new file mode 100644 index 0000000000..85d1bfa44f --- /dev/null +++ b/content/en/docs/reference/kubernetes-api/workload-resources/_index.md @@ -0,0 +1,4 @@ +--- +title: "Workload Resources" +weight: 1 +--- diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/container.md b/content/en/docs/reference/kubernetes-api/workload-resources/container.md similarity index 100% rename from content/en/docs/reference/kubernetes-api/workloads-resources/container.md rename to content/en/docs/reference/kubernetes-api/workload-resources/container.md diff --git 
a/content/en/docs/reference/kubernetes-api/workloads-resources/controller-revision-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/controller-revision-v1.md similarity index 89% rename from content/en/docs/reference/kubernetes-api/workloads-resources/controller-revision-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/controller-revision-v1.md index 950e9d729c..5caea2a5d9 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/controller-revision-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/controller-revision-v1.md @@ -88,7 +88,7 @@ ControllerRevisionList is a resource containing a list of ControllerRevision obj More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">ControllerRevision), required +- **items** ([]}}">ControllerRevision), required Items is the list of ControllerRevisions @@ -135,7 +135,7 @@ GET /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} #### Response -200 (}}">ControllerRevision): OK +200 (}}">ControllerRevision): OK 401: Unauthorized @@ -208,7 +208,7 @@ GET /apis/apps/v1/namespaces/{namespace}/controllerrevisions #### Response -200 (}}">ControllerRevisionList): OK +200 (}}">ControllerRevisionList): OK 401: Unauthorized @@ -276,7 +276,7 @@ GET /apis/apps/v1/controllerrevisions #### Response -200 (}}">ControllerRevisionList): OK +200 (}}">ControllerRevisionList): OK 401: Unauthorized @@ -295,7 +295,7 @@ POST /apis/apps/v1/namespaces/{namespace}/controllerrevisions }}">namespace -- **body**: }}">ControllerRevision, required +- **body**: }}">ControllerRevision, required @@ -319,11 +319,11 @@ POST /apis/apps/v1/namespaces/{namespace}/controllerrevisions #### Response -200 (}}">ControllerRevision): OK +200 (}}">ControllerRevision): OK -201 (}}">ControllerRevision): Created +201 (}}">ControllerRevision): Created -202 (}}">ControllerRevision): Accepted +202 (}}">ControllerRevision): Accepted 401: Unauthorized @@ -347,7 +347,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} }}">namespace -- **body**: }}">ControllerRevision, required +- **body**: }}">ControllerRevision, required @@ -371,9 +371,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} #### Response -200 (}}">ControllerRevision): OK +200 (}}">ControllerRevision): OK -201 (}}">ControllerRevision): Created +201 (}}">ControllerRevision): Created 401: Unauthorized @@ -426,7 +426,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} #### Response -200 (}}">ControllerRevision): OK +200 (}}">ControllerRevision): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v1beta1.md b/content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v1beta1.md similarity index 88% rename from content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v1beta1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v1beta1.md index 22a0135bf8..c99aa57998 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v1beta1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v1beta1.md @@ -30,11 +30,11 @@ CronJob represents the configuration of a single cron job. Standard object's metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">CronJobSpec) +- **spec** (}}">CronJobSpec) Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">CronJobStatus) +- **status** (}}">CronJobStatus) Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -59,7 +59,7 @@ CronJobSpec describes how the job execution will look like and when it will actu Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata - - **jobTemplate.spec** (}}">JobSpec) + - **jobTemplate.spec** (}}">JobSpec) Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -128,7 +128,7 @@ CronJobList is a collection of cron jobs. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">CronJob), required +- **items** ([]}}">CronJob), required items is the list of CronJobs. @@ -175,7 +175,7 @@ GET /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -208,7 +208,7 @@ GET /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -281,7 +281,7 @@ GET /apis/batch/v1beta1/namespaces/{namespace}/cronjobs #### Response -200 (}}">CronJobList): OK +200 (}}">CronJobList): OK 401: Unauthorized @@ -349,7 +349,7 @@ GET /apis/batch/v1beta1/cronjobs #### Response -200 (}}">CronJobList): OK +200 (}}">CronJobList): OK 401: Unauthorized @@ -368,7 +368,7 @@ POST /apis/batch/v1beta1/namespaces/{namespace}/cronjobs }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -392,11 +392,11 @@ POST /apis/batch/v1beta1/namespaces/{namespace}/cronjobs #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created -202 (}}">CronJob): Accepted +202 (}}">CronJob): Accepted 401: Unauthorized @@ -420,7 +420,7 @@ PUT /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name} }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -444,9 +444,9 @@ PUT /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created 401: Unauthorized @@ -470,7 +470,7 @@ PUT /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name}/status }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -494,9 +494,9 @@ PUT /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created 401: Unauthorized @@ -549,7 +549,7 @@ PATCH /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -602,7 +602,7 @@ PATCH /apis/batch/v1beta1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized diff 
--git a/content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v2alpha1.md b/content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v2alpha1.md similarity index 88% rename from content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v2alpha1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v2alpha1.md index c7c2ba7a81..fc6a933131 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/cron-job-v2alpha1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/cron-job-v2alpha1.md @@ -30,11 +30,11 @@ CronJob represents the configuration of a single cron job. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">CronJobSpec) +- **spec** (}}">CronJobSpec) Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">CronJobStatus) +- **status** (}}">CronJobStatus) Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -59,7 +59,7 @@ CronJobSpec describes how the job execution will look like and when it will actu Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata - - **jobTemplate.spec** (}}">JobSpec) + - **jobTemplate.spec** (}}">JobSpec) Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -128,7 +128,7 @@ CronJobList is a collection of cron jobs. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">CronJob), required +- **items** ([]}}">CronJob), required items is the list of CronJobs. 
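To make the CronJob schema referenced above concrete, here is a minimal sketch against the batch/v1beta1 version documented in this page; the name, schedule, image, and command are placeholders and not taken from this change:

```bash
# Hypothetical sketch: a minimal batch/v1beta1 CronJob.
# Name, schedule, image, and command are placeholders.
kubectl apply -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cronjob
  namespace: default
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              command: ["sh", "-c", "date"]
          restartPolicy: OnFailure
EOF
```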
@@ -175,7 +175,7 @@ GET /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -208,7 +208,7 @@ GET /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -281,7 +281,7 @@ GET /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs #### Response -200 (}}">CronJobList): OK +200 (}}">CronJobList): OK 401: Unauthorized @@ -349,7 +349,7 @@ GET /apis/batch/v2alpha1/cronjobs #### Response -200 (}}">CronJobList): OK +200 (}}">CronJobList): OK 401: Unauthorized @@ -368,7 +368,7 @@ POST /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -392,11 +392,11 @@ POST /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created -202 (}}">CronJob): Accepted +202 (}}">CronJob): Accepted 401: Unauthorized @@ -420,7 +420,7 @@ PUT /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name} }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -444,9 +444,9 @@ PUT /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created 401: Unauthorized @@ -470,7 +470,7 @@ PUT /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name}/status }}">namespace -- **body**: }}">CronJob, required +- **body**: }}">CronJob, required @@ -494,9 +494,9 @@ PUT /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK -201 (}}">CronJob): Created +201 (}}">CronJob): Created 401: Unauthorized @@ -549,7 +549,7 @@ PATCH /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name} #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized @@ -602,7 +602,7 @@ PATCH /apis/batch/v2alpha1/namespaces/{namespace}/cronjobs/{name}/status #### Response -200 (}}">CronJob): OK +200 (}}">CronJob): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/daemon-set-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md similarity index 90% rename from content/en/docs/reference/kubernetes-api/workloads-resources/daemon-set-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md index 95ae869be1..169d361c2a 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/daemon-set-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/daemon-set-v1.md @@ -30,11 +30,11 @@ DaemonSet represents the configuration of a daemon set. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">DaemonSetSpec) +- **spec** (}}">DaemonSetSpec) The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">DaemonSetStatus) +- **status** (}}">DaemonSetStatus) The current status of this daemon set. This data may be out of date by some window of time. Populated by the system. Read-only. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -52,7 +52,7 @@ DaemonSetSpec is the specification of a daemon set. A label query over pods that are managed by the daemon set. Must match in order to be controlled. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors -- **template** (}}">PodTemplateSpec), required +- **template** (}}">PodTemplateSpec), required An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template @@ -187,7 +187,7 @@ DaemonSetList is a collection of daemon sets. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">DaemonSet), required +- **items** ([]}}">DaemonSet), required A list of daemon sets. @@ -234,7 +234,7 @@ GET /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK 401: Unauthorized @@ -267,7 +267,7 @@ GET /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK 401: Unauthorized @@ -340,7 +340,7 @@ GET /apis/apps/v1/namespaces/{namespace}/daemonsets #### Response -200 (}}">DaemonSetList): OK +200 (}}">DaemonSetList): OK 401: Unauthorized @@ -408,7 +408,7 @@ GET /apis/apps/v1/daemonsets #### Response -200 (}}">DaemonSetList): OK +200 (}}">DaemonSetList): OK 401: Unauthorized @@ -427,7 +427,7 @@ POST /apis/apps/v1/namespaces/{namespace}/daemonsets }}">namespace -- **body**: }}">DaemonSet, required +- **body**: }}">DaemonSet, required @@ -451,11 +451,11 @@ POST /apis/apps/v1/namespaces/{namespace}/daemonsets #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK -201 (}}">DaemonSet): Created +201 (}}">DaemonSet): Created -202 (}}">DaemonSet): Accepted +202 (}}">DaemonSet): Accepted 401: Unauthorized @@ -479,7 +479,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} }}">namespace -- **body**: }}">DaemonSet, required +- **body**: }}">DaemonSet, required @@ -503,9 +503,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK -201 (}}">DaemonSet): Created +201 (}}">DaemonSet): Created 401: Unauthorized @@ -529,7 +529,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status }}">namespace -- **body**: }}">DaemonSet, required +- **body**: }}">DaemonSet, required @@ -553,9 +553,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK -201 (}}">DaemonSet): Created +201 (}}">DaemonSet): Created 401: Unauthorized @@ -608,7 +608,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK 401: Unauthorized @@ -661,7 +661,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status #### Response -200 (}}">DaemonSet): OK +200 (}}">DaemonSet): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/deployment-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/deployment-v1.md similarity index 90% rename from 
content/en/docs/reference/kubernetes-api/workloads-resources/deployment-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/deployment-v1.md index 7e5b39c929..052a54bcc8 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/deployment-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/deployment-v1.md @@ -30,11 +30,11 @@ Deployment enables declarative updates for Pods and ReplicaSets. Standard object metadata. -- **spec** (}}">DeploymentSpec) +- **spec** (}}">DeploymentSpec) Specification of the desired behavior of the Deployment. -- **status** (}}">DeploymentStatus) +- **status** (}}">DeploymentStatus) Most recently observed status of the Deployment. @@ -52,7 +52,7 @@ DeploymentSpec is the specification of the desired behavior of the Deployment. Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. -- **template** (}}">PodTemplateSpec), required +- **template** (}}">PodTemplateSpec), required Template describes the pods that will be created. @@ -207,7 +207,7 @@ DeploymentList is a list of Deployments. Standard list metadata. -- **items** ([]}}">Deployment), required +- **items** ([]}}">Deployment), required Items is the list of Deployments. @@ -254,7 +254,7 @@ GET /apis/apps/v1/namespaces/{namespace}/deployments/{name} #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK 401: Unauthorized @@ -287,7 +287,7 @@ GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK 401: Unauthorized @@ -360,7 +360,7 @@ GET /apis/apps/v1/namespaces/{namespace}/deployments #### Response -200 (}}">DeploymentList): OK +200 (}}">DeploymentList): OK 401: Unauthorized @@ -428,7 +428,7 @@ GET /apis/apps/v1/deployments #### Response -200 (}}">DeploymentList): OK +200 (}}">DeploymentList): OK 401: Unauthorized @@ -447,7 +447,7 @@ POST /apis/apps/v1/namespaces/{namespace}/deployments }}">namespace -- **body**: }}">Deployment, required +- **body**: }}">Deployment, required @@ -471,11 +471,11 @@ POST /apis/apps/v1/namespaces/{namespace}/deployments #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK -201 (}}">Deployment): Created +201 (}}">Deployment): Created -202 (}}">Deployment): Accepted +202 (}}">Deployment): Accepted 401: Unauthorized @@ -499,7 +499,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name} }}">namespace -- **body**: }}">Deployment, required +- **body**: }}">Deployment, required @@ -523,9 +523,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name} #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK -201 (}}">Deployment): Created +201 (}}">Deployment): Created 401: Unauthorized @@ -549,7 +549,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status }}">namespace -- **body**: }}">Deployment, required +- **body**: }}">Deployment, required @@ -573,9 +573,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK -201 (}}">Deployment): Created +201 (}}">Deployment): Created 401: Unauthorized @@ -628,7 +628,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/deployments/{name} #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK 401: Unauthorized @@ -681,7 +681,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status #### Response -200 (}}">Deployment): OK +200 (}}">Deployment): OK 
401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/ephemeral-container.md b/content/en/docs/reference/kubernetes-api/workload-resources/ephemeral-container.md similarity index 100% rename from content/en/docs/reference/kubernetes-api/workloads-resources/ephemeral-container.md rename to content/en/docs/reference/kubernetes-api/workload-resources/ephemeral-container.md diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md similarity index 85% rename from content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md index 5bb2f6e175..8ed1581cbe 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v1.md @@ -30,11 +30,11 @@ configuration of a horizontal pod autoscaler. Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">HorizontalPodAutoscalerSpec) +- **spec** (}}">HorizontalPodAutoscalerSpec) behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. -- **status** (}}">HorizontalPodAutoscalerStatus) +- **status** (}}">HorizontalPodAutoscalerStatus) current information about the autoscaler. @@ -132,7 +132,7 @@ list of horizontal pod autoscaler objects. Standard list metadata. -- **items** ([]}}">HorizontalPodAutoscaler), required +- **items** ([]}}">HorizontalPodAutoscaler), required list of horizontal pod autoscaler objects. 
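For the autoscaling/v1 HorizontalPodAutoscaler described above, a minimal sketch of a manifest is shown below; the target Deployment name and the replica and CPU limits are placeholders chosen for illustration, not values from this diff:

```bash
# Hypothetical sketch: an autoscaling/v1 HorizontalPodAutoscaler targeting a Deployment.
# Target name, replica bounds, and CPU threshold are placeholders.
kubectl apply -f - <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
EOF
```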
@@ -179,7 +179,7 @@ GET /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name} #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -212,7 +212,7 @@ GET /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name}/ #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -285,7 +285,7 @@ GET /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscalerList): OK +200 (}}">HorizontalPodAutoscalerList): OK 401: Unauthorized @@ -353,7 +353,7 @@ GET /apis/autoscaling/v1/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscalerList): OK +200 (}}">HorizontalPodAutoscalerList): OK 401: Unauthorized @@ -372,7 +372,7 @@ POST /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -396,11 +396,11 @@ POST /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created -202 (}}">HorizontalPodAutoscaler): Accepted +202 (}}">HorizontalPodAutoscaler): Accepted 401: Unauthorized @@ -424,7 +424,7 @@ PUT /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name} }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -448,9 +448,9 @@ PUT /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name} #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created 401: Unauthorized @@ -474,7 +474,7 @@ PUT /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name}/ }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -498,9 +498,9 @@ PUT /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name}/ #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created 401: Unauthorized @@ -553,7 +553,7 @@ PATCH /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -606,7 +606,7 @@ PATCH /apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers/{name #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v2beta2.md b/content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md similarity index 94% rename from content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v2beta2.md rename to content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md index 56fd55d24b..2f8f15d505 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/horizontal-pod-autoscaler-v2beta2.md +++ 
b/content/en/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2beta2.md @@ -30,11 +30,11 @@ HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, wh metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">HorizontalPodAutoscalerSpec) +- **spec** (}}">HorizontalPodAutoscalerSpec) spec is the specification for the behaviour of the autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. -- **status** (}}">HorizontalPodAutoscalerStatus) +- **status** (}}">HorizontalPodAutoscalerStatus) status is the current information about the autoscaler. @@ -684,7 +684,7 @@ HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. metadata is the standard list metadata. -- **items** ([]}}">HorizontalPodAutoscaler), required +- **items** ([]}}">HorizontalPodAutoscaler), required items is the list of horizontal pod autoscaler objects. @@ -731,7 +731,7 @@ GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -764,7 +764,7 @@ GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -837,7 +837,7 @@ GET /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscalerList): OK +200 (}}">HorizontalPodAutoscalerList): OK 401: Unauthorized @@ -905,7 +905,7 @@ GET /apis/autoscaling/v2beta2/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscalerList): OK +200 (}}">HorizontalPodAutoscalerList): OK 401: Unauthorized @@ -924,7 +924,7 @@ POST /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -948,11 +948,11 @@ POST /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created -202 (}}">HorizontalPodAutoscaler): Accepted +202 (}}">HorizontalPodAutoscaler): Accepted 401: Unauthorized @@ -976,7 +976,7 @@ PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -1000,9 +1000,9 @@ PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created 401: Unauthorized @@ -1026,7 +1026,7 @@ PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n }}">namespace -- **body**: }}">HorizontalPodAutoscaler, required +- **body**: }}">HorizontalPodAutoscaler, required @@ -1050,9 +1050,9 @@ PUT /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/{n #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK -201 (}}">HorizontalPodAutoscaler): Created +201 (}}">HorizontalPodAutoscaler): Created 401: Unauthorized @@ -1105,7 
+1105,7 @@ PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/ #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized @@ -1158,7 +1158,7 @@ PATCH /apis/autoscaling/v2beta2/namespaces/{namespace}/horizontalpodautoscalers/ #### Response -200 (}}">HorizontalPodAutoscaler): OK +200 (}}">HorizontalPodAutoscaler): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/job-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md similarity index 91% rename from content/en/docs/reference/kubernetes-api/workloads-resources/job-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md index 3deab1c4be..f01c257e45 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/job-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/job-v1.md @@ -30,11 +30,11 @@ Job represents the configuration of a single job. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">JobSpec) +- **spec** (}}">JobSpec) Specification of the desired behavior of a job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">JobStatus) +- **status** (}}">JobStatus) Current status of a job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -53,7 +53,7 @@ JobSpec describes how the job execution will look like. ### Replicas -- **template** (}}">PodTemplateSpec), required +- **template** (}}">PodTemplateSpec), required Describes the pod that will be created when executing a job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ @@ -184,7 +184,7 @@ JobList is a collection of jobs. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">Job), required +- **items** ([]}}">Job), required items is the list of Jobs. 
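Since JobSpec above requires a pod `template`, a minimal sketch of a batch/v1 Job of the shape the operations that follow accept is given below; the name, image, command, and backoff limit are placeholders:

```bash
# Hypothetical sketch: a minimal batch/v1 Job with the required pod template.
# Name, image, command, and backoffLimit are placeholders.
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
  namespace: default
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo done"]
      restartPolicy: Never
EOF
```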
@@ -231,7 +231,7 @@ GET /apis/batch/v1/namespaces/{namespace}/jobs/{name} #### Response -200 (}}">Job): OK +200 (}}">Job): OK 401: Unauthorized @@ -264,7 +264,7 @@ GET /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status #### Response -200 (}}">Job): OK +200 (}}">Job): OK 401: Unauthorized @@ -337,7 +337,7 @@ GET /apis/batch/v1/namespaces/{namespace}/jobs #### Response -200 (}}">JobList): OK +200 (}}">JobList): OK 401: Unauthorized @@ -405,7 +405,7 @@ GET /apis/batch/v1/jobs #### Response -200 (}}">JobList): OK +200 (}}">JobList): OK 401: Unauthorized @@ -424,7 +424,7 @@ POST /apis/batch/v1/namespaces/{namespace}/jobs }}">namespace -- **body**: }}">Job, required +- **body**: }}">Job, required @@ -448,11 +448,11 @@ POST /apis/batch/v1/namespaces/{namespace}/jobs #### Response -200 (}}">Job): OK +200 (}}">Job): OK -201 (}}">Job): Created +201 (}}">Job): Created -202 (}}">Job): Accepted +202 (}}">Job): Accepted 401: Unauthorized @@ -476,7 +476,7 @@ PUT /apis/batch/v1/namespaces/{namespace}/jobs/{name} }}">namespace -- **body**: }}">Job, required +- **body**: }}">Job, required @@ -500,9 +500,9 @@ PUT /apis/batch/v1/namespaces/{namespace}/jobs/{name} #### Response -200 (}}">Job): OK +200 (}}">Job): OK -201 (}}">Job): Created +201 (}}">Job): Created 401: Unauthorized @@ -526,7 +526,7 @@ PUT /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status }}">namespace -- **body**: }}">Job, required +- **body**: }}">Job, required @@ -550,9 +550,9 @@ PUT /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status #### Response -200 (}}">Job): OK +200 (}}">Job): OK -201 (}}">Job): Created +201 (}}">Job): Created 401: Unauthorized @@ -605,7 +605,7 @@ PATCH /apis/batch/v1/namespaces/{namespace}/jobs/{name} #### Response -200 (}}">Job): OK +200 (}}">Job): OK 401: Unauthorized @@ -658,7 +658,7 @@ PATCH /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status #### Response -200 (}}">Job): OK +200 (}}">Job): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/pod-template-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/pod-template-v1.md similarity index 86% rename from content/en/docs/reference/kubernetes-api/workloads-resources/pod-template-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/pod-template-v1.md index fec34df19f..663ab6bea4 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/pod-template-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/pod-template-v1.md @@ -30,7 +30,7 @@ PodTemplate describes a template for creating copies of a predefined pod. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **template** (}}">PodTemplateSpec) +- **template** (}}">PodTemplateSpec) Template defines the pods that will be created from this pod template. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -48,7 +48,7 @@ PodTemplateSpec describes the data a pod should have when created from a templat Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">PodSpec) +- **spec** (}}">PodSpec) Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -72,7 +72,7 @@ PodTemplateList is a list of PodTemplates. Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">PodTemplate), required +- **items** ([]}}">PodTemplate), required List of pod templates @@ -119,7 +119,7 @@ GET /api/v1/namespaces/{namespace}/podtemplates/{name} #### Response -200 (}}">PodTemplate): OK +200 (}}">PodTemplate): OK 401: Unauthorized @@ -192,7 +192,7 @@ GET /api/v1/namespaces/{namespace}/podtemplates #### Response -200 (}}">PodTemplateList): OK +200 (}}">PodTemplateList): OK 401: Unauthorized @@ -260,7 +260,7 @@ GET /api/v1/podtemplates #### Response -200 (}}">PodTemplateList): OK +200 (}}">PodTemplateList): OK 401: Unauthorized @@ -279,7 +279,7 @@ POST /api/v1/namespaces/{namespace}/podtemplates }}">namespace -- **body**: }}">PodTemplate, required +- **body**: }}">PodTemplate, required @@ -303,11 +303,11 @@ POST /api/v1/namespaces/{namespace}/podtemplates #### Response -200 (}}">PodTemplate): OK +200 (}}">PodTemplate): OK -201 (}}">PodTemplate): Created +201 (}}">PodTemplate): Created -202 (}}">PodTemplate): Accepted +202 (}}">PodTemplate): Accepted 401: Unauthorized @@ -331,7 +331,7 @@ PUT /api/v1/namespaces/{namespace}/podtemplates/{name} }}">namespace -- **body**: }}">PodTemplate, required +- **body**: }}">PodTemplate, required @@ -355,9 +355,9 @@ PUT /api/v1/namespaces/{namespace}/podtemplates/{name} #### Response -200 (}}">PodTemplate): OK +200 (}}">PodTemplate): OK -201 (}}">PodTemplate): Created +201 (}}">PodTemplate): Created 401: Unauthorized @@ -410,7 +410,7 @@ PATCH /api/v1/namespaces/{namespace}/podtemplates/{name} #### Response -200 (}}">PodTemplate): OK +200 (}}">PodTemplate): OK 401: Unauthorized @@ -463,9 +463,9 @@ DELETE /api/v1/namespaces/{namespace}/podtemplates/{name} #### Response -200 (}}">PodTemplate): OK +200 (}}">PodTemplate): OK -202 (}}">PodTemplate): Accepted +202 (}}">PodTemplate): Accepted 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/pod-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md similarity index 94% rename from content/en/docs/reference/kubernetes-api/workloads-resources/pod-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md index 584c338172..4be3289e9f 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/pod-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/pod-v1.md @@ -30,11 +30,11 @@ Pod is a collection of containers that can run on a host. This resource is creat Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">PodSpec) +- **spec** (}}">PodSpec) Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">PodStatus) +- **status** (}}">PodStatus) Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -53,13 +53,13 @@ PodSpec is a description of a pod. ### Containers -- **containers** ([]}}">Container), required +- **containers** ([]}}">Container), required *Patch strategy: merge on key `name`* List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. 
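Because `containers` is the one required list in PodSpec, as noted above, a minimal sketch of a Pod with a single container follows; the Pod name, image, and port are placeholders rather than values from this change:

```bash
# Hypothetical sketch: a minimal Pod with one container (the required PodSpec field).
# Name, image, and port are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: default
spec:
  containers:
    - name: app
      image: nginx
      ports:
        - containerPort: 80
EOF
```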
-- **initContainers** ([]}}">Container) +- **initContainers** ([]}}">Container) *Patch strategy: merge on key `name`* @@ -430,7 +430,7 @@ PodSpec is a description of a pod. ### Alpha level -- **ephemeralContainers** ([]}}">EphemeralContainer) +- **ephemeralContainers** ([]}}">EphemeralContainer) *Patch strategy: merge on key `name`* @@ -547,15 +547,15 @@ PodStatus represents information about the status of a pod. Status may trail the The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md -- **initContainerStatuses** ([]}}">ContainerStatus) +- **initContainerStatuses** ([]}}">ContainerStatus) The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status -- **containerStatuses** ([]}}">ContainerStatus) +- **containerStatuses** ([]}}">ContainerStatus) The list has one entry per container in the manifest. Each entry is currently the output of `docker inspect`. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status -- **ephemeralContainerStatuses** ([]}}">ContainerStatus) +- **ephemeralContainerStatuses** ([]}}">ContainerStatus) Status for any ephemeral containers that have run in this pod. This field is alpha-level and is only populated by servers that enable the EphemeralContainers feature. @@ -579,7 +579,7 @@ PodList is a list of Pods. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">Pod), required +- **items** ([]}}">Pod), required List of pods. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md @@ -626,7 +626,7 @@ GET /api/v1/namespaces/{namespace}/pods/{name} #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK 401: Unauthorized @@ -732,7 +732,7 @@ GET /api/v1/namespaces/{namespace}/pods/{name}/status #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK 401: Unauthorized @@ -805,7 +805,7 @@ GET /api/v1/namespaces/{namespace}/pods #### Response -200 (}}">PodList): OK +200 (}}">PodList): OK 401: Unauthorized @@ -873,7 +873,7 @@ GET /api/v1/pods #### Response -200 (}}">PodList): OK +200 (}}">PodList): OK 401: Unauthorized @@ -892,7 +892,7 @@ POST /api/v1/namespaces/{namespace}/pods }}">namespace -- **body**: }}">Pod, required +- **body**: }}">Pod, required @@ -916,11 +916,11 @@ POST /api/v1/namespaces/{namespace}/pods #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK -201 (}}">Pod): Created +201 (}}">Pod): Created -202 (}}">Pod): Accepted +202 (}}">Pod): Accepted 401: Unauthorized @@ -944,7 +944,7 @@ PUT /api/v1/namespaces/{namespace}/pods/{name} }}">namespace -- **body**: }}">Pod, required +- **body**: }}">Pod, required @@ -968,9 +968,9 @@ PUT /api/v1/namespaces/{namespace}/pods/{name} #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK -201 (}}">Pod): Created +201 (}}">Pod): Created 401: Unauthorized @@ -994,7 +994,7 @@ PUT /api/v1/namespaces/{namespace}/pods/{name}/status }}">namespace -- **body**: }}">Pod, required +- **body**: }}">Pod, required @@ -1018,9 +1018,9 @@ PUT /api/v1/namespaces/{namespace}/pods/{name}/status #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK -201 (}}">Pod): Created +201 (}}">Pod): Created 401: Unauthorized @@ -1073,7 +1073,7 @@ PATCH /api/v1/namespaces/{namespace}/pods/{name} #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK 401: Unauthorized @@ -1126,7 +1126,7 @@ PATCH /api/v1/namespaces/{namespace}/pods/{name}/status #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK 401: Unauthorized @@ -1179,9 +1179,9 @@ DELETE /api/v1/namespaces/{namespace}/pods/{name} #### Response -200 (}}">Pod): OK +200 (}}">Pod): OK -202 (}}">Pod): Accepted +202 (}}">Pod): Accepted 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/priority-class-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md similarity index 88% rename from content/en/docs/reference/kubernetes-api/workloads-resources/priority-class-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md index 30b40e6840..28087807c1 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/priority-class-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/priority-class-v1.md @@ -66,7 +66,7 @@ PriorityClassList is a collection of priority classes. 
Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **items** ([]}}">PriorityClass), required +- **items** ([]}}">PriorityClass), required items is the list of PriorityClasses @@ -108,7 +108,7 @@ GET /apis/scheduling.k8s.io/v1/priorityclasses/{name} #### Response -200 (}}">PriorityClass): OK +200 (}}">PriorityClass): OK 401: Unauthorized @@ -176,7 +176,7 @@ GET /apis/scheduling.k8s.io/v1/priorityclasses #### Response -200 (}}">PriorityClassList): OK +200 (}}">PriorityClassList): OK 401: Unauthorized @@ -190,7 +190,7 @@ POST /apis/scheduling.k8s.io/v1/priorityclasses #### Parameters -- **body**: }}">PriorityClass, required +- **body**: }}">PriorityClass, required @@ -214,11 +214,11 @@ POST /apis/scheduling.k8s.io/v1/priorityclasses #### Response -200 (}}">PriorityClass): OK +200 (}}">PriorityClass): OK -201 (}}">PriorityClass): Created +201 (}}">PriorityClass): Created -202 (}}">PriorityClass): Accepted +202 (}}">PriorityClass): Accepted 401: Unauthorized @@ -237,7 +237,7 @@ PUT /apis/scheduling.k8s.io/v1/priorityclasses/{name} name of the PriorityClass -- **body**: }}">PriorityClass, required +- **body**: }}">PriorityClass, required @@ -261,9 +261,9 @@ PUT /apis/scheduling.k8s.io/v1/priorityclasses/{name} #### Response -200 (}}">PriorityClass): OK +200 (}}">PriorityClass): OK -201 (}}">PriorityClass): Created +201 (}}">PriorityClass): Created 401: Unauthorized @@ -311,7 +311,7 @@ PATCH /apis/scheduling.k8s.io/v1/priorityclasses/{name} #### Response -200 (}}">PriorityClass): OK +200 (}}">PriorityClass): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/replica-set-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md similarity index 88% rename from content/en/docs/reference/kubernetes-api/workloads-resources/replica-set-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md index 429a1bd5a2..708b0fd966 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/replica-set-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/replica-set-v1.md @@ -30,11 +30,11 @@ ReplicaSet ensures that a specified number of pod replicas are running at any gi If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">ReplicaSetSpec) +- **spec** (}}">ReplicaSetSpec) Spec defines the specification of the desired behavior of the ReplicaSet. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">ReplicaSetStatus) +- **status** (}}">ReplicaSetStatus) Status is the most recently observed status of the ReplicaSet. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -52,7 +52,7 @@ ReplicaSetSpec is the specification of a ReplicaSet. Selector is a label query over pods that should match the replica count. Label keys and values that must match in order to be controlled by this replica set. It must match the pod template's labels. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors -- **template** (}}">PodTemplateSpec) +- **template** (}}">PodTemplateSpec) Template is the object that describes the pod that will be created if insufficient replicas are detected. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template @@ -146,7 +146,7 @@ ReplicaSetList is a collection of ReplicaSets. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">ReplicaSet), required +- **items** ([]}}">ReplicaSet), required List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller @@ -193,7 +193,7 @@ GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name} #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK 401: Unauthorized @@ -226,7 +226,7 @@ GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK 401: Unauthorized @@ -299,7 +299,7 @@ GET /apis/apps/v1/namespaces/{namespace}/replicasets #### Response -200 (}}">ReplicaSetList): OK +200 (}}">ReplicaSetList): OK 401: Unauthorized @@ -367,7 +367,7 @@ GET /apis/apps/v1/replicasets #### Response -200 (}}">ReplicaSetList): OK +200 (}}">ReplicaSetList): OK 401: Unauthorized @@ -386,7 +386,7 @@ POST /apis/apps/v1/namespaces/{namespace}/replicasets }}">namespace -- **body**: }}">ReplicaSet, required +- **body**: }}">ReplicaSet, required @@ -410,11 +410,11 @@ POST /apis/apps/v1/namespaces/{namespace}/replicasets #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK -201 (}}">ReplicaSet): Created +201 (}}">ReplicaSet): Created -202 (}}">ReplicaSet): Accepted +202 (}}">ReplicaSet): Accepted 401: Unauthorized @@ -438,7 +438,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name} }}">namespace -- **body**: }}">ReplicaSet, required +- **body**: }}">ReplicaSet, required @@ -462,9 +462,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name} #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK -201 (}}">ReplicaSet): Created +201 (}}">ReplicaSet): Created 401: Unauthorized @@ -488,7 +488,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status }}">namespace -- **body**: }}">ReplicaSet, required +- **body**: }}">ReplicaSet, required @@ -512,9 +512,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK -201 (}}">ReplicaSet): Created +201 (}}">ReplicaSet): Created 401: Unauthorized @@ -567,7 +567,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/replicasets/{name} #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK 401: Unauthorized @@ -620,7 +620,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status #### Response -200 (}}">ReplicaSet): OK +200 (}}">ReplicaSet): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/replication-controller-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md similarity index 86% rename from content/en/docs/reference/kubernetes-api/workloads-resources/replication-controller-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md index 2970b6670c..1f7962ee96 100644 --- 
a/content/en/docs/reference/kubernetes-api/workloads-resources/replication-controller-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/replication-controller-v1.md @@ -30,11 +30,11 @@ ReplicationController represents the configuration of a replication controller. If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata -- **spec** (}}">ReplicationControllerSpec) +- **spec** (}}">ReplicationControllerSpec) Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status -- **status** (}}">ReplicationControllerStatus) +- **status** (}}">ReplicationControllerStatus) Status is the most recently observed status of the replication controller. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status @@ -52,7 +52,7 @@ ReplicationControllerSpec is the specification of a replication controller. Selector is a label query over pods that should match the Replicas count. If Selector is empty, it is defaulted to the labels present on the Pod template. Label keys and values that must match in order to be controlled by this replication controller, if empty defaulted to labels on Pod template. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors -- **template** (}}">PodTemplateSpec) +- **template** (}}">PodTemplateSpec) Template is the object that describes the pod that will be created if insufficient replicas are detected. This takes precedence over a TemplateRef. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template @@ -146,7 +146,7 @@ ReplicationControllerList is a collection of replication controllers. Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -- **items** ([]}}">ReplicationController), required +- **items** ([]}}">ReplicationController), required List of replication controllers. 
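As a hedged illustration of the ReplicationController spec described above (selector matching the pod template's labels, plus a template), a minimal manifest could look like this; the name, labels, replica count, and image are placeholders:

```bash
# Hypothetical sketch: a minimal ReplicationController whose selector matches its template labels.
# Name, labels, replica count, and image are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc
  namespace: default
spec:
  replicas: 2
  selector:
    app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx
EOF
```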
More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller @@ -193,7 +193,7 @@ GET /api/v1/namespaces/{namespace}/replicationcontrollers/{name} #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK 401: Unauthorized @@ -226,7 +226,7 @@ GET /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK 401: Unauthorized @@ -299,7 +299,7 @@ GET /api/v1/namespaces/{namespace}/replicationcontrollers #### Response -200 (}}">ReplicationControllerList): OK +200 (}}">ReplicationControllerList): OK 401: Unauthorized @@ -367,7 +367,7 @@ GET /api/v1/replicationcontrollers #### Response -200 (}}">ReplicationControllerList): OK +200 (}}">ReplicationControllerList): OK 401: Unauthorized @@ -386,7 +386,7 @@ POST /api/v1/namespaces/{namespace}/replicationcontrollers }}">namespace -- **body**: }}">ReplicationController, required +- **body**: }}">ReplicationController, required @@ -410,11 +410,11 @@ POST /api/v1/namespaces/{namespace}/replicationcontrollers #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK -201 (}}">ReplicationController): Created +201 (}}">ReplicationController): Created -202 (}}">ReplicationController): Accepted +202 (}}">ReplicationController): Accepted 401: Unauthorized @@ -438,7 +438,7 @@ PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name} }}">namespace -- **body**: }}">ReplicationController, required +- **body**: }}">ReplicationController, required @@ -462,9 +462,9 @@ PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name} #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK -201 (}}">ReplicationController): Created +201 (}}">ReplicationController): Created 401: Unauthorized @@ -488,7 +488,7 @@ PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status }}">namespace -- **body**: }}">ReplicationController, required +- **body**: }}">ReplicationController, required @@ -512,9 +512,9 @@ PUT /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK -201 (}}">ReplicationController): Created +201 (}}">ReplicationController): Created 401: Unauthorized @@ -567,7 +567,7 @@ PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name} #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK 401: Unauthorized @@ -620,7 +620,7 @@ PATCH /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status #### Response -200 (}}">ReplicationController): OK +200 (}}">ReplicationController): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/stateful-set-v1.md b/content/en/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md similarity index 89% rename from content/en/docs/reference/kubernetes-api/workloads-resources/stateful-set-v1.md rename to content/en/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md index 1a55fa4c1d..17ea7755f2 100644 --- a/content/en/docs/reference/kubernetes-api/workloads-resources/stateful-set-v1.md +++ b/content/en/docs/reference/kubernetes-api/workload-resources/stateful-set-v1.md @@ -32,11 +32,11 @@ The StatefulSet guarantees that a given network identity will always map to the - **metadata** (}}">ObjectMeta) -- **spec** (}}">StatefulSetSpec) +- **spec** (}}">StatefulSetSpec) Spec defines the desired 
identities of pods in this set. -- **status** (}}">StatefulSetStatus) +- **status** (}}">StatefulSetStatus) Status is the current status of Pods in this StatefulSet. This data may be out of date by some window of time. @@ -58,7 +58,7 @@ A StatefulSetSpec is the specification of a StatefulSet. selector is a label query over pods that should match the replica count. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors -- **template** (}}">PodTemplateSpec), required +- **template** (}}">PodTemplateSpec), required template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. @@ -193,7 +193,7 @@ StatefulSetList is a collection of StatefulSets. - **metadata** (}}">ListMeta) -- **items** ([]}}">StatefulSet), required +- **items** ([]}}">StatefulSet), required @@ -239,7 +239,7 @@ GET /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK 401: Unauthorized @@ -272,7 +272,7 @@ GET /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK 401: Unauthorized @@ -345,7 +345,7 @@ GET /apis/apps/v1/namespaces/{namespace}/statefulsets #### Response -200 (}}">StatefulSetList): OK +200 (}}">StatefulSetList): OK 401: Unauthorized @@ -413,7 +413,7 @@ GET /apis/apps/v1/statefulsets #### Response -200 (}}">StatefulSetList): OK +200 (}}">StatefulSetList): OK 401: Unauthorized @@ -432,7 +432,7 @@ POST /apis/apps/v1/namespaces/{namespace}/statefulsets }}">namespace -- **body**: }}">StatefulSet, required +- **body**: }}">StatefulSet, required @@ -456,11 +456,11 @@ POST /apis/apps/v1/namespaces/{namespace}/statefulsets #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK -201 (}}">StatefulSet): Created +201 (}}">StatefulSet): Created -202 (}}">StatefulSet): Accepted +202 (}}">StatefulSet): Accepted 401: Unauthorized @@ -484,7 +484,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} }}">namespace -- **body**: }}">StatefulSet, required +- **body**: }}">StatefulSet, required @@ -508,9 +508,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK -201 (}}">StatefulSet): Created +201 (}}">StatefulSet): Created 401: Unauthorized @@ -534,7 +534,7 @@ PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status }}">namespace -- **body**: }}">StatefulSet, required +- **body**: }}">StatefulSet, required @@ -558,9 +558,9 @@ PUT /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK -201 (}}">StatefulSet): Created +201 (}}">StatefulSet): Created 401: Unauthorized @@ -613,7 +613,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK 401: Unauthorized @@ -666,7 +666,7 @@ PATCH /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status #### Response -200 (}}">StatefulSet): OK +200 (}}">StatefulSet): OK 401: Unauthorized diff --git a/content/en/docs/reference/kubernetes-api/workloads-resources/_index.md b/content/en/docs/reference/kubernetes-api/workloads-resources/_index.md deleted file mode 100644 index 1b5aab3fba..0000000000 --- 
a/content/en/docs/reference/kubernetes-api/workloads-resources/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: "Workloads Resources" -weight: 1 ---- diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md index 78be058013..8f4327ceca 100644 --- a/content/en/docs/reference/labels-annotations-taints.md +++ b/content/en/docs/reference/labels-annotations-taints.md @@ -114,3 +114,143 @@ The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure tha If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all. +## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} + +Example: `node.kubernetes.io/windows-build=10.0.17763` + +Used on: Node + +When the kubelet is running on Microsoft Windows, it automatically labels its node to record the version of Windows Server in use. + +The label's value is in the format "MajorVersion.MinorVersion.BuildNumber". + +## service.kubernetes.io/headless {#servicekubernetesioheadless} + +Example: `service.kubernetes.io/headless=""` + +Used on: Service + +The control plane adds this label to an Endpoints object when the owning Service is headless. + +## kubernetes.io/service-name {#kubernetesioservice-name} + +Example: `kubernetes.io/service-name="nginx"` + +Used on: Service + +Kubernetes uses this label to differentiate multiple Services. Used currently for `ELB`(Elastic Load Balancer) only. + +## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} + +Example: `endpointslice.kubernetes.io/managed-by="controller"` + +Used on: EndpointSlices + +The label is used to indicate the controller or entity that manages an EndpointSlice. This label aims to enable different EndpointSlice objects to be managed by different controllers or entities within the same cluster. + +## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} + +Example: `endpointslice.kubernetes.io/skip-mirror="true"` + +Used on: Endpoints + +The label can be set to `"true"` on an Endpoints resource to indicate that the EndpointSliceMirroring controller should not mirror this resource with EndpointSlices. + +## service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} + +Example: `service.kubernetes.io/service-proxy-name="foo-bar"` + +Used on: Service + +The kube-proxy has this label for custom proxy, which delegates service control to custom proxy. + +## experimental.windows.kubernetes.io/isolation-type + +Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"` + +Used on: Pod + +The annotation is used to run Windows containers with Hyper-V isolation. To use Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with feature gates HyperVContainer=true and the Pod should include the annotation experimental.windows.kubernetes.io/isolation-type=hyperv. + +{{< note >}} +You can only set this annotation on Pods that have a single container. 
+{{< /note >}} + +## ingressclass.kubernetes.io/is-default-class + +Example: `ingressclass.kubernetes.io/is-default-class: "true"` + +Used on: IngressClass + +When a single IngressClass resource has this annotation set to `"true"`, new Ingress resources without a class specified will be assigned this default class. + +## kubernetes.io/ingress.class (deprecated) + +{{< note >}} Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`. {{< /note >}} + +## alpha.kubernetes.io/provided-node-ip + +Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` + +Used on: Node + +The kubelet can set this annotation on a Node to denote its configured IPv4 address. + +When the kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager. + +**The taints listed below are always used on Nodes** + +## node.kubernetes.io/not-ready + +Example: `node.kubernetes.io/not-ready:NoExecute` + +The node controller detects whether a node is ready by monitoring its health and adds or removes this taint accordingly. + +## node.kubernetes.io/unreachable + +Example: `node.kubernetes.io/unreachable:NoExecute` + +The node controller adds the taint to a node corresponding to the [NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`. + +## node.kubernetes.io/unschedulable + +Example: `node.kubernetes.io/unschedulable:NoSchedule` + +The taint will be added to a node when initializing the node, to avoid a race condition. + +## node.kubernetes.io/memory-pressure + +Example: `node.kubernetes.io/memory-pressure:NoSchedule` + +The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available` observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed. + +## node.kubernetes.io/disk-pressure + +Example: `node.kubernetes.io/disk-pressure:NoSchedule` + +The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree` (Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed. + +## node.kubernetes.io/network-unavailable + +Example: `node.kubernetes.io/network-unavailable:NoSchedule` + +This is initially set by the kubelet when the cloud provider used indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly will the taint be removed by the cloud provider. + +## node.kubernetes.io/pid-pressure + +Example: `node.kubernetes.io/pid-pressure:NoSchedule` + +The kubelet checks the difference between the size of `/proc/sys/kernel/pid_max` and the PIDs consumed by Kubernetes on a node to get the number of available PIDs, referred to as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine if the node condition and taint should be added/removed.
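The node taints above are applied and removed by Kubernetes itself, but it is often useful to check which of them are present when debugging scheduling or eviction behaviour. The commands below are an illustrative sketch; the node name `worker-1` is a placeholder, not something defined on this page.

```shell
# Hypothetical node name used for illustration
NODE=worker-1

# Show the taints currently set on the node (empty output means no taints)
kubectl get node "$NODE" -o jsonpath='{.spec.taints}'

# The same information appears in the node description
kubectl describe node "$NODE" | grep -i taints

# The pressure taints track node conditions; for example, check the
# MemoryPressure condition that drives node.kubernetes.io/memory-pressure
kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="MemoryPressure")].status}'
```

Pods can also tolerate these taints for a limited time by declaring a matching toleration with `tolerationSeconds`, which is how Kubernetes keeps Pods bound to a node during brief outages.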
+ +## node.cloudprovider.kubernetes.io/uninitialized + +Example: `node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` + +Sets this taint on a node to mark it as unusable, when kubelet is started with the "external" cloud provider, until a controller from the cloud-controller-manager initializes this node, and then removes the taint. + +## node.cloudprovider.kubernetes.io/shutdown + +Example: `node.cloudprovider.kubernetes.io/shutdown:NoSchedule` + +If a Node is in a cloud provider specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`. + diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md index 7754d7cb7d..8d5cc24208 100644 --- a/content/en/docs/reference/scheduling/config.md +++ b/content/en/docs/reference/scheduling/config.md @@ -181,8 +181,6 @@ that are not enabled by default: - `RequestedToCapacityRatio`: Favor nodes according to a configured function of the allocated resources. Extension points: `Score`. -- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits. - Extension points: `PreScore`, `Score`. - `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied for the node. Extension points: `Filter`. diff --git a/content/en/docs/reference/setup-tools/_index.md b/content/en/docs/reference/setup-tools/_index.md index 3988d6485e..c97758fe6e 100644 --- a/content/en/docs/reference/setup-tools/_index.md +++ b/content/en/docs/reference/setup-tools/_index.md @@ -1,4 +1,4 @@ --- -title: Setup tools reference +title: Setup tools weight: 50 --- diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 3d4b977102..cf578a0657 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -125,7 +125,7 @@ If your configuration is not using the latest version it is **recommended** that the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command. For more information on the fields and usage of the configuration you can navigate to our API reference -page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories). +page and pick a version from [the list](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories). ### Adding kube-proxy parameters {#kube-proxy} diff --git a/content/en/docs/reference/tools.md b/content/en/docs/reference/tools/_index.md similarity index 78% rename from content/en/docs/reference/tools.md rename to content/en/docs/reference/tools/_index.md index 41d8ef4d1c..7194ab83bd 100644 --- a/content/en/docs/reference/tools.md +++ b/content/en/docs/reference/tools/_index.md @@ -1,8 +1,10 @@ --- +title: Other Tools reviewers: - janetkuo -title: Tools content_type: concept +weight: 80 +no_list: true --- @@ -10,13 +12,6 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes -## Kubectl - -[`kubectl`](/docs/tasks/tools/install-kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager. - -## Kubeadm - -[`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). 
## Minikube diff --git a/content/en/docs/reference/using-api/_index.md b/content/en/docs/reference/using-api/_index.md index df9e00758e..2039e33e28 100644 --- a/content/en/docs/reference/using-api/_index.md +++ b/content/en/docs/reference/using-api/_index.md @@ -1,11 +1,12 @@ --- -title: Kubernetes API Overview +title: API Overview reviewers: - erictune - lavalamp - jbeda content_type: concept weight: 10 +no_list: true card: name: reference weight: 50 diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index e8b834c334..e517a13d52 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -258,7 +258,7 @@ Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json ## Alternate representations of resources -By default Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not acceptable` error if an invalid content type is provided. +By default, Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not acceptable` error if an invalid content type is provided. See the API documentation for a list of supported content types for each API. @@ -560,4 +560,4 @@ If you request a a resourceVersion outside the applicable limit then, depending ### Unavailable resource versions -Servers are not required to serve unrecognized resource versions. List and Get requests for unrecognized resource versions may wait briefly for the resource version to become available, should timeout with a `504 (Gateway Timeout)` if the provided resource versions does not become available in a resonable amount of time, and may respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request. Currently the kube-apiserver also identifies these responses with a "Too large resource version" message. Watch requests for a unrecognized resource version may wait indefinitely (until the request timeout) for the resource version to become available. +Servers are not required to serve unrecognized resource versions. 
List and Get requests for unrecognized resource versions may wait briefly for the resource version to become available, should timeout with a `504 (Gateway Timeout)` if the provided resource versions does not become available in a reasonable amount of time, and may respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request. Currently, the kube-apiserver also identifies these responses with a "Too large resource version" message. Watch requests for an unrecognized resource version may wait indefinitely (until the request timeout) for the resource version to become available. diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md index 96589c6a55..a484c8e74f 100644 --- a/content/en/docs/reference/using-api/client-libraries.md +++ b/content/en/docs/reference/using-api/client-libraries.md @@ -67,12 +67,13 @@ their authors, not the Kubernetes team. | Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | +| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | -| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | +| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) | | DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md new file mode 100755 index 0000000000..a8ad3494bd --- /dev/null +++ b/content/en/docs/reference/using-api/deprecation-guide.md @@ -0,0 +1,270 @@ +--- +reviewers: +- liggitt +- lavalamp +- thockin +- smarterclayton +title: "Deprecated API Migration Guide" +weight: 45 +content_type: reference +--- + + + +As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. +When APIs evolve, the old API is deprecated and eventually removed. +This page contains information you need to know when migrating from +deprecated API versions to newer and more stable API versions. + + + +## Removed APIs by release + + +### v1.25 + +The **v1.25** release will stop serving the following deprecated API versions: + +#### Event {#event-v125} + +The **events.k8s.io/v1beta1** API version of Event will no longer be served in v1.25. + +* Migrate manifests and API clients to use the **events.k8s.io/v1** API version, available since v1.19. 
+* All existing persisted objects are accessible via the new API +* Notable changes in **events.k8s.io/v1**: + * `type` is limited to `Normal` and `Warning` + * `involvedObject` is renamed to `regarding` + * `action`, `reason`, `reportingComponent`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events + * use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events) + * use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events) + * use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events) + * use `reportingComponent` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events) + * use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events) + +#### RuntimeClass {#runtimeclass-v125} + +RuntimeClass in the **node.k8s.io/v1beta1** API version will no longer be served in v1.25. + +* Migrate manifests and API clients to use the **node.k8s.io/v1** API version, available since v1.20. +* All existing persisted objects are accessible via the new API +* No notable changes + +### v1.22 + +The **v1.22** release will stop serving the following deprecated API versions: + +#### Webhook resources {#webhook-resources-v122} + +The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration and ValidatingWebhookConfiguration will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **admissionregistration.k8s.io/v1** API version, available since v1.16. +* All existing persisted objects are accessible via the new APIs +* Notable changes: + * `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1 + * `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1 + * `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1 + * `webhooks[*].sideEffects` default value is removed, and the field made required, and only `None` and `NoneOnDryRun` are permitted for v1 + * `webhooks[*].admissionReviewVersions` default value is removed and the field made required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`) + * `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1` + +#### CustomResourceDefinition {#customresourcedefinition-v122} + +The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **apiextensions.k8s.io/v1** API version, available since v1.16. 
+* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified + * `spec.version` is removed in v1; use `spec.versions` instead + * `spec.validation` is removed in v1; use `spec.versions[*].schema` instead + * `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead + * `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead + * `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1 + * `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1 + * `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects, and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema) + * `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true` + * In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531)) + +#### APIService {#apiservice-v122} + +The **apiregistration.k8s.io/v1beta1** API version of APIService will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **apiregistration.k8s.io/v1** API version, available since v1.10. +* All existing persisted objects are accessible via the new API +* No notable changes + +#### TokenReview {#tokenreview-v122} + +The **authentication.k8s.io/v1beta1** API version of TokenReview will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **authentication.k8s.io/v1** API version, available since v1.6. +* No notable changes + +#### SubjectAccessReview resources {#subjectaccessreview-resources-v122} + +The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, and SubjectAccessReview will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6. +* Notable changes: + * `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709)) + +#### CertificateSigningRequest {#certificatesigningrequest-v122} + +The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **certificates.k8s.io/v1** API version, available since v1.19. 
+* All existing persisted objects are accessible via the new API +* Notable changes in `certificates.k8s.io/v1`: + * For API clients requesting certificates: + * `spec.signerName` is now required (see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API + * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages + * For API clients approving or signing certificates: + * `status.conditions` may not contain duplicate types + * `status.conditions[*].status` is now required + * `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks + +#### Lease {#lease-v122} + +The **coordination.k8s.io/v1beta1** API version of Lease will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **coordination.k8s.io/v1** API version, available since v1.14. +* All existing persisted objects are accessible via the new API +* No notable changes + +#### Ingress {#ingress-v122} + +The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ingress will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19. +* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.backend` is renamed to `spec.defaultBackend` + * The backend `serviceName` field is renamed to `service.name` + * Numeric backend `servicePort` fields are renamed to `service.port.number` + * String backend `servicePort` fields are renamed to `service.port.name` + * `pathType` is now required for each specified path. Options are `Prefix`, `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`. + +#### IngressClass {#ingressclass-v122} + +The **networking.k8s.io/v1beta1** API version of IngressClass will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19. +* All existing persisted objects are accessible via the new API +* No notable changes + +#### RBAC resources {#rbac-resources-v122} + +The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8. +* All existing persisted objects are accessible via the new APIs +* No notable changes + +#### PriorityClass {#priorityclass-v122} + +The **scheduling.k8s.io/v1beta1** API version of PriorityClass will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **scheduling.k8s.io/v1** API version, available since v1.14. +* All existing persisted objects are accessible via the new API +* No notable changes + +#### Storage resources {#storage-resources-v122} + +The **storage.k8s.io/v1beta1** API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment will no longer be served in v1.22. + +* Migrate manifests and API clients to use the **storage.k8s.io/v1** API version + * CSIDriver is available in **storage.k8s.io/v1** since v1.19. 
+ * CSINode is available in **storage.k8s.io/v1** since v1.17 + * StorageClass is available in **storage.k8s.io/v1** since v1.6 + * VolumeAttachment is available in **storage.k8s.io/v1** v1.13 +* All existing persisted objects are accessible via the new APIs +* No notable changes + +### v1.16 + +The **v1.16** release stopped serving the following deprecated API versions: + +#### NetworkPolicy {#networkpolicy-v116} + +The **extensions/v1beta1** API version of NetworkPolicy is no longer served as of v1.16. + +* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.8. +* All existing persisted objects are accessible via the new API + +#### DaemonSet {#daemonset-v116} + +The **extensions/v1beta1** and **apps/v1beta2** API versions of DaemonSet are no longer served as of v1.16. + +* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9. +* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.templateGeneration` is removed + * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades + * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `extensions/v1beta1` was `OnDelete`) + +#### Deployment {#deployment-v116} + +The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of Deployment are no longer served as of v1.16. + +* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9. +* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.rollbackTo` is removed + * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades + * `spec.progressDeadlineSeconds` now defaults to `600` seconds (the default in `extensions/v1beta1` was no deadline) + * `spec.revisionHistoryLimit` now defaults to `10` (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all) + * `maxSurge` and `maxUnavailable` now default to `25%` (the default in `extensions/v1beta1` was `1`) + +#### StatefulSet {#statefulset-v116} + +The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no longer served as of v1.16. + +* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9. +* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades + * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `apps/v1beta1` was `OnDelete`) + +#### ReplicaSet {#replicaset-v116} + +The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of ReplicaSet are no longer served as of v1.16. + +* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9. +* All existing persisted objects are accessible via the new API +* Notable changes: + * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades + +## What to do + +### Test with deprecated APIs disabled + +You can test your clusters by starting an API server with specific API versions disabled +to simulate upcoming removals. 
Add the following flag to the API server startup arguments: + +`--runtime-config=/=false` + +For example: + +`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1,...` + +### Locate use of deprecated APIs + +Use [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings) +to locate use of deprecated APIs. + +### Migrate to non-deprecated APIs + +* Update custom integrations and controllers to call the non-deprecated APIs +* Change YAML files to reference the non-deprecated APIs + + You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20) + to automatically convert an existing object: + + `kubectl-convert -f --output-version /`. + + For example, to convert an older Deployment to `apps/v1`, you can run: + + `kubectl-convert -f ./my-deployment.yaml --output-version apps/v1` + + Note that this may use non-ideal default values. To learn more about a specific + resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/). diff --git a/content/en/docs/reference/using-api/deprecation-policy.md b/content/en/docs/reference/using-api/deprecation-policy.md index 17840f195b..4de09ee82a 100644 --- a/content/en/docs/reference/using-api/deprecation-policy.md +++ b/content/en/docs/reference/using-api/deprecation-policy.md @@ -327,7 +327,7 @@ supported in API v1 must exist and function until API v1 is removed. ### Component config structures -Component configs are versioned and managed just like REST resources. +Component configs are versioned and managed similar to REST resources. ### Future work diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index 302ae94d8b..81afa5b567 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16 ## Introduction -Server Side Apply helps users and controllers manage their resources via -declarative configurations. It allows them to create and/or modify their +Server Side Apply helps users and controllers manage their resources through +declarative configurations. Clients can create and modify their [objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) -declaratively, simply by sending their fully specified intent. +declaratively by sending their fully specified intent. A fully specified intent is a partial object that only includes the fields and values for which the user has an opinion. That intent either creates a new @@ -46,7 +46,7 @@ Server side apply is meant both as a replacement for the original `kubectl apply` and as a simpler mechanism for controllers to enact their changes. If you have Server Side Apply enabled, the control plane tracks managed fields -for all newlly created objects. +for all newly created objects. ## Field Management @@ -209,9 +209,8 @@ would have failed due to conflicting ownership. The merging strategy, implemented with Server Side Apply, provides a generally more stable object lifecycle. Server Side Apply tries to merge fields based on -the fact who manages them instead of overruling just based on values. This way -it is intended to make it easier and more stable for multiple actors updating -the same object by causing less unexpected interference. +the actor who manages them instead of overruling based on values. 
This way +multiple actors can update the same object without causing unexpected interference. When a user sends a "fully-specified intent" object to the Server Side Apply endpoint, the server merges it with the live object favoring the value in the @@ -319,7 +318,7 @@ kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replic ``` If the apply results in a conflict with the HPA controller, then do nothing. The -conflict just indicates the controller has claimed the field earlier in the +conflict indicates the controller has claimed the field earlier in the process than it sometimes does. At this point the user may remove the `replicas` field from their configuration. @@ -436,7 +435,7 @@ Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}] This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the -object. Note that just setting the managedFields to an empty list will not +object. Note that setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields never get stripped by clients not aware of the field. diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index a065462baf..1648cc4e9e 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -9,7 +9,7 @@ weight: 40 Kubernetes requires PKI certificates for authentication over TLS. -If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated. +If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated. You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server. This page explains the certificates that your cluster requires. @@ -74,7 +74,7 @@ Required certificates: | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | | front-proxy-client | kubernetes-front-proxy-ca | | client | | -[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) +[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/) the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`) @@ -100,7 +100,7 @@ For kubeadm users only: ### Certificate paths -Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)). +Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)). Paths should be specified using the given argument regardless of location. 
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument | diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index ccb31fe108..a75499a811 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -69,10 +69,9 @@ When creating a cluster, you can (using custom tooling): ## Addon resources Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/) -help to minimise the impact of memory leaks and other ways that pods and containers can -impact on other components. These resource limits can and should apply to -{{< glossary_tooltip text="addon" term_id="addons" >}} just as they apply to application -workloads. +help to minimize the impact of memory leaks and other ways that pods and containers can +impact on other components. These resource limits apply to +{{< glossary_tooltip text="addon" term_id="addons" >}} resources just as they apply to application workloads. For example, you can set CPU and memory limits for a logging component: diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md index 107ee2d0f7..8f51a3bd06 100644 --- a/content/en/docs/setup/best-practices/multiple-zones.md +++ b/content/en/docs/setup/best-practices/multiple-zones.md @@ -59,7 +59,7 @@ When nodes start up, the kubelet on each node automatically adds {{< glossary_tooltip text="labels" term_id="label" >}} to the Node object that represents that specific kubelet in the Kubernetes API. These labels can include -[zone information](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone). +[zone information](/docs/reference/labels-annotations-taints/#topologykubernetesiozone). If your cluster spans multiple zones or regions, you can use node labels in conjunction with diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 59725188d8..ba6827d833 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -63,7 +63,7 @@ configuration, or reinstall it using automation. ### containerd -This section contains the necessary steps to use `containerd` as CRI runtime. +This section contains the necessary steps to use containerd as CRI runtime. Use the following commands to install Containerd on your system: @@ -92,161 +92,62 @@ sudo sysctl --system Install containerd: {{< tabs name="tab-cri-containerd-installation" >}} -{{% tab name="Ubuntu 16.04" %}} +{{% tab name="Linux" %}} -```shell -# (Install containerd) -## Set up the repository -### Install packages to allow apt to use a repository over HTTPS -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` +1. Install the `containerd.io` package from the official Docker repositories. Instructions for setting up the Docker repository for your respective Linux distribution and installing the `containerd.io` package can be found at [Install Docker Engine](https://docs.docker.com/engine/install/#server). -```shell -## Add Docker's official GPG key -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` +2. 
Configure containerd: -```shell -## Add Docker apt repository. -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` + ```shell + sudo mkdir -p /etc/containerd + containerd config default | sudo tee /etc/containerd/config.toml + ``` -```shell -## Install containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` +3. Restart containerd: -```shell -# Configure containerd -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` + ```shell + sudo systemctl restart containerd + ``` -```shell -# Restart containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="Ubuntu 18.04/20.04" %}} - -```shell -# (Install containerd) -sudo apt-get update && sudo apt-get install -y containerd -``` - -```shell -# Configure containerd -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# Restart containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="Debian 9+" %}} - -```shell -# (Install containerd) -## Set up the repository -### Install packages to allow apt to use a repository over HTTPS -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` - -```shell -## Add Docker's official GPG key -curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` - -```shell -## Add Docker apt repository. -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/debian \ - $(lsb_release -cs) \ - stable" -``` - -```shell -## Install containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` - -```shell -# Set default containerd configuration -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# Restart containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="CentOS/RHEL 7.4+" %}} - -```shell -# (Install containerd) -## Set up the repository -### Install required packages -sudo yum install -y yum-utils device-mapper-persistent-data lvm2 -``` - -```shell -## Add docker repository -sudo yum-config-manager \ - --add-repo \ - https://download.docker.com/linux/centos/docker-ce.repo -``` - -```shell -## Install containerd -sudo yum update -y && sudo yum install -y containerd.io -``` - -```shell -## Configure containerd -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# Restart containerd -sudo systemctl restart containerd -``` {{% /tab %}} {{% tab name="Windows (PowerShell)" %}} -```powershell -# (Install containerd) -# download containerd -cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz -cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz -``` -```powershell -# extract and configure -Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force -cd $Env:ProgramFiles\containerd\ -.\containerd.exe config default | Out-File config.toml -Encoding ascii +Start a Powershell session, set `$Version` to the desired version (ex: `$Version=1.4.3`), and then run the following commands: -# review the configuration. depending on setup you may want to adjust: -# - the sandbox_image (kubernetes pause image) -# - cni bin_dir and conf_dir locations -Get-Content config.toml -``` +1. 
Download containerd: + + ```powershell + curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz + tar.exe xvf .\containerd-windows-amd64.tar.gz + ``` + +2. Extract and configure: + + ```powershell + Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force + cd $Env:ProgramFiles\containerd\ + .\containerd.exe config default | Out-File config.toml -Encoding ascii + + # Review the configuration. Depending on setup you may want to adjust: + # - the sandbox_image (Kubernetes pause image) + # - cni bin_dir and conf_dir locations + Get-Content config.toml + + # (Optional - but highly recommended) Exclude containerd from Windows Defender Scans + Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe" + ``` + +3. Start containerd: + + ```powershell + .\containerd.exe --register-service + Start-Service containerd + ``` -```powershell -# start containerd -.\containerd.exe --register-service -Start-Service containerd -``` {{% /tab %}} {{< /tabs >}} -#### systemd {#containerd-systemd} +#### Using the `systemd` cgroup driver {#containerd-systemd} To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set @@ -257,6 +158,12 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, SystemdCgroup = true ``` +If you apply this change make sure to restart containerd again: + +```shell +sudo systemctl restart containerd +``` + When using kubeadm, manually configure the [cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node). @@ -420,7 +327,7 @@ Start CRI-O: ```shell sudo systemctl daemon-reload -sudo systemctl start crio +sudo systemctl enable crio --now ``` Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md) @@ -446,138 +353,38 @@ in sync. ### Docker -On each of your nodes, install Docker CE. +1. On each of your nodes, install the Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server). You can find the latest validated version of Docker in this [dependencies](https://git.k8s.io/kubernetes/build/dependencies.yaml) file. -The Kubernetes release notes list which versions of Docker are compatible -with that version of Kubernetes. +2. Configure the Docker daemon, in particular to use systemd for the management of the container’s cgroups. -Use the following commands to install Docker on your system: + ```shell + sudo mkdir /etc/docker + cat <}} -{{% tab name="Ubuntu 16.04+" %}} + {{< note >}} + `overlay2` is the preferred storage driver for systems running Linux kernel version 4.0 or higher, or RHEL or CentOS using version 3.10.0-514 and above. + {{< /note >}} -```shell -# (Install Docker CE) -## Set up the repository: -### Install packages to allow apt to use a repository over HTTPS -sudo apt-get update && sudo apt-get install -y \ - apt-transport-https ca-certificates curl software-properties-common gnupg2 -``` +3. 
Restart Docker and enable on boot: -```shell -# Add Docker's official GPG key: -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` + ```shell + sudo systemctl enable docker + sudo systemctl daemon-reload + sudo systemctl restart docker + ``` -```shell -# Add the Docker apt repository: -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` - -```shell -# Install Docker CE -sudo apt-get update && sudo apt-get install -y \ - containerd.io=1.2.13-2 \ - docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) -``` - -```shell -## Create /etc/docker -sudo mkdir /etc/docker -``` - -```shell -# Set up the Docker daemon -cat <}} - -If you want the `docker` service to start on boot, run the following command: - -```shell -sudo systemctl enable docker -``` - -Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/) -for more information. +{{< note >}} +For more information refer to + - [Configure the Docker daemon](https://docs.docker.com/config/daemon/) + - [Control Docker with systemd](https://docs.docker.com/config/daemon/systemd/) +{{< /note >}} diff --git a/content/en/docs/setup/production-environment/tools/kops.md b/content/en/docs/setup/production-environment/tools/kops.md index 13a2474600..4afab697e4 100644 --- a/content/en/docs/setup/production-environment/tools/kops.md +++ b/content/en/docs/setup/production-environment/tools/kops.md @@ -23,7 +23,7 @@ kops is an automated provisioning system: ## {{% heading "prerequisites" %}} -* You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed. +* You must have [kubectl](/docs/tasks/tools/) installed. * You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture. diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index 1bcdad0092..1dd44e9b0b 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -78,7 +78,7 @@ kind: ClusterConfiguration kubernetesVersion: v1.16.0 scheduler: extraArgs: - address: 0.0.0.0 + bind-address: 0.0.0.0 config: /home/johndoe/schedconfig.yaml kubeconfig: /home/johndoe/kubeconfig.yaml ``` diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 4d932e3e05..a9b37e8167 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -137,7 +137,7 @@ is not supported by kubeadm. ### More information -For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/). +For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/). To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file). 
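As a rough sketch of that workflow, the commands below write a minimal configuration file and pass it to `kubeadm init`. The file name, the `kubernetesVersion` value, and the `kubeadm.k8s.io/v1beta2` API version are assumptions for illustration; run `kubeadm config print init-defaults` on your own installation to see the versions it expects.

```shell
# Hypothetical minimal kubeadm configuration; adjust versions and fields for your cluster.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
EOF

# Initialize the control plane from the configuration file instead of individual flags.
sudo kubeadm init --config kubeadm-config.yaml
```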
@@ -434,7 +434,7 @@ Now remove the node: kubectl delete node ``` -If you wish to start over simply run `kubeadm init` or `kubeadm join` with the +If you wish to start over, run `kubeadm init` or `kubeadm join` with the appropriate arguments. ### Clean up the control plane diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 394820324d..4f9d9c6ce7 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -18,14 +18,7 @@ For information how to create a cluster with kubeadm once you have performed thi ## {{% heading "prerequisites" %}} -* One or more machines running one of: - - Ubuntu 16.04+ - - Debian 9+ - - CentOS 7+ - - Red Hat Enterprise Linux (RHEL) 7+ - - Fedora 25+ - - HypriotOS v1.0.1+ - - Flatcar Container Linux (tested with 2512.3.0) +* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager. * 2 GB or more of RAM per machine (any less will leave little room for your apps). * 2 CPUs or more. * Full network connectivity between all machines in the cluster (public or private network is fine). @@ -122,7 +115,7 @@ The following table lists container runtimes and their associated socket paths: {{< table caption = "Container runtimes and their socket paths" >}} | Runtime | Path to Unix domain socket | |------------|-----------------------------------| -| Docker | `/var/run/docker.sock` | +| Docker | `/var/run/dockershim.sock` | | containerd | `/run/containerd/containerd.sock` | | CRI-O | `/var/run/crio/crio.sock` | {{< /table >}} @@ -167,7 +160,7 @@ kubelet and the control plane is supported, but the kubelet version may never ex server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server, but not vice versa. -For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/). +For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/). {{< warning >}} These instructions exclude all Kubernetes packages from any system upgrades. @@ -181,19 +174,37 @@ For more information on version skews, see: * Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy) {{< tabs name="k8s_install" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} -```bash -sudo apt-get update && sudo apt-get install -y apt-transport-https curl -curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - -cat < @@ -68,7 +68,7 @@ Kubespray provides the ability to customize many aspects of the deployment: * {{< glossary_tooltip term_id="cri-o" >}} * Certificate generation methods -Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. +Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. 
### (4/5) Deploy a Cluster diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 03ff264816..c98c6c7d00 100644 --- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -1,7 +1,9 @@ --- reviewers: -- michmike -- patricklang +- jayunit100 +- jsturtevant +- marosset +- perithompson title: Intro to Windows support in Kubernetes content_type: concept weight: 65 @@ -221,7 +223,7 @@ On Windows, you can use the following settings to configure Services and load ba #### IPv4/IPv6 dual-stack -You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details. +You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details. {{< note >}} On Windows, using IPv6 with Kubernetes require Windows Server, version 2004 (kernel version 10.0.19041.610) or later. @@ -237,7 +239,7 @@ Overlay (VXLAN) networks on Windows do not support dual-stack networking today. Windows is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes. -#### Compute {compute-limitations} +#### Compute {#compute-limitations} ##### Resource management and process isolation @@ -297,7 +299,7 @@ As a result, the following storage functionality is not supported on Windows nod * NFS based storage/volume support * Expanding the mounted volume (resizefs) -#### Networking {networking-limitations} +#### Networking {#networking-limitations} Windows Container Networking differs in some important ways from Linux networking. The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) contains additional details and background. @@ -308,6 +310,7 @@ The following networking functionality is not supported on Windows nodes * Host networking mode is not available for Windows pods * Local NodePort access from the node itself fails (works for other nodes or external clients) * Accessing service VIPs from nodes will be available with a future release of Windows Server +* A single service can only support up to 64 backend pods / unique destination IPs * Overlay networking support in kube-proxy is an alpha release. In addition, it requires [KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887) to be installed on Windows Server 2019 * Local Traffic Policy and DSR mode * Windows containers connected to l2bridge, l2tunnel, or overlay networks do not support communicating over the IPv6 stack. There is outstanding Windows platform work required to enable these network drivers to consume IPv6 addresses and subsequent Kubernetes work in kubelet, kube-proxy, and CNI plugins. 
@@ -332,7 +335,7 @@ These features were added in Kubernetes v1.15: ##### DNS {#dns-limitations} * ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution -* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with just that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. +* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with only that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. * On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended. ##### IPv6 @@ -362,9 +365,9 @@ There are no differences in how most of the Kubernetes APIs work for Windows. Th At a high level, these OS concepts are different: -* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are just an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. +* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. * File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID -* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries typically accept both and just make it work, but when you're setting a path or command line that's interpreted inside a container, `\` may be needed. +* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries accept both types of file path separators. However, when you're setting a path or command line that's interpreted inside a container, `\` may be needed. * Signals - Windows interactive apps handle termination differently, and can implement one or more of these: * A UI thread handles well-defined messages including WM_CLOSE * Console apps handle ctrl-c or ctrl-break using a Control Handler @@ -547,7 +550,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star 1. 
After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created" - There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows: + There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch start.ps1 or relaunch it manually as follows: ```powershell PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "") diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md index 6c9c05cc90..ce7aee8a89 100644 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -1,7 +1,9 @@ --- reviewers: -- michmike -- patricklang +- jayunit100 +- jsturtevant +- marosset +- perithompson title: Guide for scheduling Windows containers in Kubernetes content_type: concept weight: 75 @@ -23,7 +25,7 @@ Windows applications constitute a large portion of the services and applications ## Before you begin * Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes) -* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers. +* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers. ## Getting Started: Deploying a Windows container diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md index adbdb7c48e..54146007a0 100644 --- a/content/en/docs/setup/release/notes.md +++ b/content/en/docs/setup/release/notes.md @@ -1906,7 +1906,7 @@ filename | sha512 hash - Promote SupportNodePidsLimit to GA to provide node to pod pid isolation Promote SupportPodPidsLimit to GA to provide ability to limit pids per pod ([#94140](https://github.com/kubernetes/kubernetes/pull/94140), [@derekwaynecarr](https://github.com/derekwaynecarr)) [SIG Node and Testing] - Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling] -- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. 
Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing] +- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](/docs/reference/using-api/server-side-apply/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing] - Set CSIMigrationvSphere feature gates to beta. Users should enable CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workload from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to vSphere CSI Driver. diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 7f74320118..23d4f133a6 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -231,7 +231,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, @@ -280,10 +280,10 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l #### Manually constructing apiserver proxy URLs -As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL: +As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL: `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy` -If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. +If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. 
You can also use the port number in place of the *port_name* for both named and unnamed ports. By default, the API server proxies to your service using http. To use https, prefix the service name with `https:`: `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`https:service_name:[port_name]`*`/proxy` @@ -291,9 +291,9 @@ By default, the API server proxies to your service using http. To use https, pre The supported formats for the name segment of the URL are: * `` - proxies to the default or unnamed port using http -* `:` - proxies to the specified port using http +* `:` - proxies to the specified port name or port number using http * `https::` - proxies to the default or unnamed port using https (note the trailing colon) -* `https::` - proxies to the specified port using https +* `https::` - proxies to the specified port name or port number using https ##### Examples @@ -357,7 +357,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP and TCP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1. A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index 62ddbdcbbc..0a6d352d2c 100644 --- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -7,7 +7,7 @@ min-kubernetes-server-version: v1.10 -This page shows how to use `kubectl port-forward` to connect to a Redis +This page shows how to use `kubectl port-forward` to connect to a MongoDB server running in a Kubernetes cluster. This type of connection can be useful for database debugging. @@ -19,25 +19,25 @@ for database debugging. * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -* Install [redis-cli](http://redis.io/topics/rediscli). +* Install [MongoDB Shell](https://www.mongodb.com/try/download/shell). -## Creating Redis deployment and service +## Creating MongoDB deployment and service -1. Create a Deployment that runs Redis: +1. Create a Deployment that runs MongoDB: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml ``` The output of a successful command verifies that the deployment was created: ``` - deployment.apps/redis-master created + deployment.apps/mongo created ``` View the pod status to check that it is ready: @@ -49,8 +49,8 @@ for database debugging. The output displays the pod created: ``` - NAME READY STATUS RESTARTS AGE - redis-master-765d459796-258hz 1/1 Running 0 50s + NAME READY STATUS RESTARTS AGE + mongo-75f59d57f4-4nd6q 1/1 Running 0 2m4s ``` View the Deployment's status: @@ -62,8 +62,8 @@ for database debugging. The output displays that the Deployment was created: ``` - NAME READY UP-TO-DATE AVAILABLE AGE - redis-master 1/1 1 1 55s + NAME READY UP-TO-DATE AVAILABLE AGE + mongo 1/1 1 1 2m21s ``` The Deployment automatically manages a ReplicaSet. @@ -76,50 +76,50 @@ for database debugging. The output displays that the ReplicaSet was created: ``` - NAME DESIRED CURRENT READY AGE - redis-master-765d459796 1 1 1 1m + NAME DESIRED CURRENT READY AGE + mongo-75f59d57f4 1 1 1 3m12s ``` -2. 
Create a Service to expose Redis on the network: +2. Create a Service to expose MongoDB on the network: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml ``` The output of a successful command verifies that the Service was created: ``` - service/redis-master created + service/mongo created ``` Check the Service created: ```shell - kubectl get service redis-master + kubectl get service mongo ``` The output displays the service created: ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - redis-master ClusterIP 10.0.0.213 6379/TCP 27s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + mongo ClusterIP 10.96.41.183 27017/TCP 11s ``` -3. Verify that the Redis server is running in the Pod, and listening on port 6379: +3. Verify that the MongoDB server is running in the Pod, and listening on port 27017: ```shell - # Change redis-master-765d459796-258hz to the name of the Pod - kubectl get pod redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' ``` - The output displays the port for Redis in that Pod: + The output displays the port for MongoDB in that Pod: ``` - 6379 + 27017 ``` - (this is the TCP port allocated to Redis on the internet). + (this is the TCP port allocated to MongoDB on the internet). ## Forward a local port to a port on the Pod @@ -127,39 +127,39 @@ for database debugging. ```shell - # Change redis-master-765d459796-258hz to the name of the Pod - kubectl port-forward redis-master-765d459796-258hz 7000:6379 + # Change mongo-75f59d57f4-4nd6q to the name of the Pod + kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017 ``` which is the same as ```shell - kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 + kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017 ``` or ```shell - kubectl port-forward deployment/redis-master 7000:6379 + kubectl port-forward deployment/mongo 28015:27017 ``` or ```shell - kubectl port-forward replicaset/redis-master 7000:6379 + kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017 ``` or ```shell - kubectl port-forward service/redis-master 7000:redis + kubectl port-forward service/mongo 28015:27017 ``` Any of the above commands works. The output is similar to this: ``` - Forwarding from 127.0.0.1:7000 -> 6379 - Forwarding from [::1]:7000 -> 6379 + Forwarding from 127.0.0.1:28015 -> 27017 + Forwarding from [::1]:28015 -> 27017 ``` {{< note >}} @@ -168,22 +168,22 @@ for database debugging. {{< /note >}} -2. Start the Redis command line interface: +2. Start the MongoDB command line interface: ```shell - redis-cli -p 7000 + mongosh --port 28015 ``` -3. At the Redis command line prompt, enter the `ping` command: +3. 
At the MongoDB command line prompt, enter the `ping` command: ``` - ping + db.runCommand( { ping: 1 } ) ``` A successful ping request returns: ``` - PONG + { ok: 1 } ``` ### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port} @@ -193,15 +193,22 @@ the local port and thus relieve you from having to manage local port conflicts, the slightly simpler syntax: ```shell -kubectl port-forward deployment/redis-master :6379 +kubectl port-forward deployment/mongo :27017 +``` + +The output is similar to this: + +``` +Forwarding from 127.0.0.1:63753 -> 27017 +Forwarding from [::1]:63753 -> 27017 ``` The `kubectl` tool finds a local port number that is not in use (avoiding low ports numbers, because these might be used by other applications). The output is similar to: ``` -Forwarding from 127.0.0.1:62162 -> 6379 -Forwarding from [::1]:62162 -> 6379 +Forwarding from 127.0.0.1:63753 -> 27017 +Forwarding from [::1]:63753 -> 27017 ``` @@ -209,8 +216,8 @@ Forwarding from [::1]:62162 -> 6379 ## Discussion -Connections made to local port 7000 are forwarded to port 6379 of the Pod that -is running the Redis server. With this connection in place, you can use your +Connections made to local port 28015 are forwarded to port 27017 of the Pod that +is running the MongoDB server. With this connection in place, you can use your local workstation to debug the database that is running in the Pod. {{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index ffe200b118..0275cadabf 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -192,7 +192,7 @@ func main() { } ``` -If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](#accessing-the-api-from-within-a-pod). +If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod). #### Python client @@ -215,7 +215,7 @@ for i in ret.items: #### Java client -* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute : +To install the [Java Client](https://github.com/kubernetes-client/java), run: ```shell # Clone java library @@ -352,102 +352,6 @@ exampleWithKubeConfig = do >>= print ``` +## {{% heading "whatsnext" %}} -### Accessing the API from within a Pod - -When accessing the API from within a Pod, locating and authenticating -to the API server are slightly different to the external client case described above. - -The easiest way to use the Kubernetes API from a Pod is to use -one of the official [client libraries](/docs/reference/using-api/client-libraries/). These -libraries can automatically discover the API server and authenticate. - -#### Using Official Client Libraries - -From within a Pod, the recommended ways to connect to the Kubernetes API are: - - - For a Go client, use the official [Go client library](https://github.com/kubernetes/client-go/). - The `rest.InClusterConfig()` function handles API host discovery and authentication automatically. - See [an example here](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go). - - - For a Python client, use the official [Python client library](https://github.com/kubernetes-client/python/). 
- The `config.load_incluster_config()` function handles API host discovery and authentication automatically. - See [an example here](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py). - - - There are a number of other libraries available, please refer to the [Client Libraries](/docs/reference/using-api/client-libraries/) page. - -In each case, the service account credentials of the Pod are used to communicate -securely with the API server. - -#### Directly accessing the REST API - -While running in a Pod, the Kubernetes apiserver is accessible via a Service named -`kubernetes` in the `default` namespace. Therefore, Pods can use the -`kubernetes.default.svc` hostname to query the API server. Official client libraries -do this automatically. - -The recommended way to authenticate to the API server is with a -[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod -is associated with a service account, and a credential (token) for that -service account is placed into the filesystem tree of each container in that Pod, -at `/var/run/secrets/kubernetes.io/serviceaccount/token`. - -If available, a certificate bundle is placed into the filesystem tree of each -container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be -used to verify the serving certificate of the API server. - -Finally, the default namespace to be used for namespaced API operations is placed in a file -at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container. - -#### Using kubectl proxy - -If you would like to query the API without an official client library, you can run `kubectl proxy` -as the [command](/docs/tasks/inject-data-application/define-command-argument-container/) -of a new sidecar container in the Pod. This way, `kubectl proxy` will authenticate -to the API and expose it on the `localhost` interface of the Pod, so that other containers -in the Pod can use it directly. - -#### Without using a proxy - -It is possible to avoid using the kubectl proxy by passing the authentication token -directly to the API server. The internal certificate secures the connection. 
- -```shell -# Point to the internal API server hostname -APISERVER=https://kubernetes.default.svc - -# Path to ServiceAccount token -SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount - -# Read this Pod's namespace -NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) - -# Read the ServiceAccount bearer token -TOKEN=$(cat ${SERVICEACCOUNT}/token) - -# Reference the internal certificate authority (CA) -CACERT=${SERVICEACCOUNT}/ca.crt - -# Explore the API with TOKEN -curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api -``` - -The output will be similar to this: - -```json -{ - "kind": "APIVersions", - "versions": [ - "v1" - ], - "serverAddressByClientCIDRs": [ - { - "clientCIDR": "0.0.0.0/0", - "serverAddress": "10.0.1.149:443" - } - ] -} -``` - - - +* [Accessing the Kubernetes API from a Pod](/docs/tasks/run-application/access-api-from-pod/) diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md index c318a3df35..927e05b77a 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md @@ -31,7 +31,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, @@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac #### Manually constructing apiserver proxy URLs -As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL: +As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL: `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy` If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md new file mode 100644 index 0000000000..6361b20d16 --- /dev/null +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -0,0 +1,252 @@ +--- +title: Certificates +content_type: task +weight: 20 +--- + + + + +When using client certificate authentication, you can generate certificates +manually through `easyrsa`, `openssl` or `cfssl`. + + + + + + +### easyrsa + +**easyrsa** can manually generate certificates for your cluster. + +1. Download, unpack, and initialize the patched version of easyrsa3. 
+ + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki +1. Generate a new certificate authority (CA). `--batch` sets automatic mode; + `--req-cn` specifies the Common Name (CN) for the CA's new root certificate. + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass +1. Generate server certificate and key. + The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will + be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR + that is specified as the `--service-cluster-ip-range` argument for both the API server and + the controller manager component. The argument `--days` is used to set the number of days + after which the certificate expires. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass +1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory. +1. Fill in and add the following parameters into the API server start parameters: + + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + +**openssl** can manually generate certificates for your cluster. + +1. Generate a ca.key with 2048bit: + + openssl genrsa -out ca.key 2048 +1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time): + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt +1. Generate a server.key with 2048bit: + + openssl genrsa -out server.key 2048 +1. Create a config file for generating a Certificate Signing Request (CSR). + Be sure to substitute the values marked with angle brackets (e.g. ``) + with real values before saving this to a file (e.g. `csr.conf`). + Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the + API server as described in previous subsection. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = + ST = + L = + O = + OU = + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names +1. Generate the certificate signing request based on the config file: + + openssl req -new -key server.key -out server.csr -config csr.conf +1. Generate the server certificate using the ca.key, ca.crt and server.csr: + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf +1. 
View the certificate: + + openssl x509 -noout -text -in ./server.crt + +Finally, add the same parameters into the API server start parameters. + +### cfssl + +**cfssl** is another tool for certificate generation. + +1. Download, unpack and prepare the command line tools as shown below. + Note that you may need to adapt the sample commands based on the hardware + architecture and cfssl version you are using. + + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo +1. Create a directory to hold the artifacts and initialize cfssl: + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json +1. Create a JSON config file for generating the CA file, for example, `ca-config.json`: + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } +1. Create a JSON config file for CA certificate signing request (CSR), for example, + `ca-csr.json`. Be sure to replace the values marked with angle brackets with + real values you want to use. + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } +1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`): + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca +1. Create a JSON config file for generating keys and certificates for the API + server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with + real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster + IP for the API server as described in previous subsection. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. + + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } +1. Generate the key and certificate for the API server, which are by default + saved into file `server-key.pem` and `server.pem` respectively: + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## Distributing Self-Signed CA Certificate + +A client node may refuse to recognize a self-signed CA certificate as valid. +For a non-production deployment, or for a deployment that runs behind a company +firewall, you can distribute a self-signed CA certificate to all clients and +refresh the local list for valid certificates. + +On each client, perform the following operations: + +```bash +sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +sudo update-ca-certificates +``` + +``` +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. 
+``` + +## Certificates API + +You can use the `certificates.k8s.io` API to provision +x509 certificates to use for authentication as documented +[here](/docs/tasks/tls/managing-tls-in-a-cluster). + + diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md index 9c08a2a4ad..a365fd4ffc 100644 --- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md @@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca you can either change the default StorageClass or disable it completely to avoid dynamic provisioning of storage. -Simply deleting the default StorageClass may not work, as it may be re-created +Deleting the default StorageClass may not work, as it may be re-created automatically by the addon manager running in your cluster. Please consult the docs for your installation for details about addon manager and how to disable individual addons. @@ -70,7 +70,7 @@ for details about addon manager and how to disable individual addons. 1. Mark a StorageClass as default: - Similarly to the previous step, you need to add/set the annotation + Similar to the previous step, you need to add/set the annotation `storageclass.kubernetes.io/is-default-class=true`. ```bash diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md index 1e2cc422e4..6e9dc302c4 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md @@ -34,7 +34,7 @@ If your cluster was deployed using the `kubeadm` tool, refer to for detailed information on how to upgrade the cluster. Once you have upgraded the cluster, remember to -[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/). +[install the latest version of `kubectl`](/docs/tasks/tools/). ### Manual deployments @@ -52,7 +52,7 @@ You should manually update the control plane following this sequence: - cloud controller manager, if you use one At this point you should -[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/). +[install the latest version of `kubectl`](/docs/tasks/tools/). For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/) that node and then either replace it with a new node that uses the {{< skew latestVersion >}} diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 72f069d6ac..5dee2ae185 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -10,35 +10,40 @@ content_type: task {{< glossary_definition term_id="etcd" length="all" prepend="etcd is a ">}} - - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Prerequisites * Run etcd as a cluster of odd members. -* etcd is a leader-based distributed system. Ensure that the leader periodically send heartbeats on time to all followers to keep the cluster stable. +* etcd is a leader-based distributed system. Ensure that the leader + periodically send heartbeats on time to all followers to keep the cluster + stable. * Ensure that no resource starvation occurs. 
- Performance and stability of the cluster is sensitive to network and disk IO. Any resource starvation can lead to heartbeat timeout, causing instability of the cluster. An unstable etcd indicates that no leader is elected. Under such circumstances, a cluster cannot make any changes to its current state, which implies no new pods can be scheduled. + Performance and stability of the cluster is sensitive to network and disk + I/O. Any resource starvation can lead to heartbeat timeout, causing instability + of the cluster. An unstable etcd indicates that no leader is elected. Under + such circumstances, a cluster cannot make any changes to its current state, + which implies no new pods can be scheduled. -* Keeping stable etcd clusters is critical to the stability of Kubernetes clusters. Therefore, run etcd clusters on dedicated machines or isolated environments for [guaranteed resource requirements](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations). +* Keeping etcd clusters stable is critical to the stability of Kubernetes + clusters. Therefore, run etcd clusters on dedicated machines or isolated + environments for [guaranteed resource requirements](https://etcd.io/docs/current/op-guide/hardware/). * The minimum recommended version of etcd to run in production is `3.2.10+`. ## Resource requirements -Operating etcd with limited resources is suitable only for testing purposes. For deploying in production, advanced hardware configuration is required. Before deploying etcd in production, see [resource requirement reference documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations). +Operating etcd with limited resources is suitable only for testing purposes. +For deploying in production, advanced hardware configuration is required. +Before deploying etcd in production, see +[resource requirement reference](https://etcd.io/docs/current/op-guide/hardware/#example-hardware-configurations). ## Starting etcd clusters @@ -50,33 +55,43 @@ Use a single-node etcd cluster only for testing purpose. 1. Run the following: - ```sh - ./etcd --listen-client-urls=http://$PRIVATE_IP:2379 --advertise-client-urls=http://$PRIVATE_IP:2379 - ``` + ```sh + etcd --listen-client-urls=http://$PRIVATE_IP:2379 \ + --advertise-client-urls=http://$PRIVATE_IP:2379 + ``` -2. Start Kubernetes API server with the flag `--etcd-servers=$PRIVATE_IP:2379`. +2. Start the Kubernetes API server with the flag + `--etcd-servers=$PRIVATE_IP:2379`. - Replace `PRIVATE_IP` with your etcd client IP. + Make sure `PRIVATE_IP` is set to your etcd client IP. ### Multi-node etcd cluster -For durability and high availability, run etcd as a multi-node cluster in production and back it up periodically. A five-member cluster is recommended in production. For more information, see [FAQ Documentation](https://github.com/coreos/etcd/blob/master/Documentation/faq.md#what-is-failure-tolerance). +For durability and high availability, run etcd as a multi-node cluster in +production and back it up periodically. A five-member cluster is recommended +in production. For more information, see +[FAQ documentation](https://etcd.io/docs/current/faq/#what-is-failure-tolerance). -Configure an etcd cluster either by static member information or by dynamic discovery. For more information on clustering, see [etcd Clustering Documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md). 
+Configure an etcd cluster either by static member information or by dynamic +discovery. For more information on clustering, see +[etcd clustering documentation](https://etcd.io/docs/current/op-guide/clustering/). -For an example, consider a five-member etcd cluster running with the following client URLs: `http://$IP1:2379`, `http://$IP2:2379`, `http://$IP3:2379`, `http://$IP4:2379`, and `http://$IP5:2379`. To start a Kubernetes API server: +For an example, consider a five-member etcd cluster running with the following +client URLs: `http://$IP1:2379`, `http://$IP2:2379`, `http://$IP3:2379`, +`http://$IP4:2379`, and `http://$IP5:2379`. To start a Kubernetes API server: 1. Run the following: - ```sh - ./etcd --listen-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379 --advertise-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379 - ``` + ```shell + etcd --listen-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 --advertise-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 + ``` -2. Start Kubernetes API servers with the flag `--etcd-servers=$IP1:2379, $IP2:2379, $IP3:2379, $IP4:2379, $IP5:2379`. +2. Start the Kubernetes API servers with the flag + `--etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379`. - Replace `IP` with your client IP addresses. + Make sure the `IP` variables are set to your client IP addresses. -### Multi-node etcd cluster with load balancer +### Multi-node etcd cluster with load balancer To run a load balancing etcd cluster: @@ -87,92 +102,169 @@ To run a load balancing etcd cluster: ## Securing etcd clusters -Access to etcd is equivalent to root permission in the cluster so ideally only the API server should have access to it. Considering the sensitivity of the data, it is recommended to grant permission to only those nodes that require access to etcd clusters. +Access to etcd is equivalent to root permission in the cluster so ideally only +the API server should have access to it. Considering the sensitivity of the +data, it is recommended to grant permission to only those nodes that require +access to etcd clusters. -To secure etcd, either set up firewall rules or use the security features provided by etcd. etcd security features depend on x509 Public Key Infrastructure (PKI). To begin, establish secure communication channels by generating a key and certificate pair. For example, use key pairs `peer.key` and `peer.cert` for securing communication between etcd members, and `client.key` and `client.cert` for securing communication between etcd and its clients. See the [example scripts](https://github.com/coreos/etcd/tree/master/hack/tls-setup) provided by the etcd project to generate key pairs and CA files for client authentication. +To secure etcd, either set up firewall rules or use the security features +provided by etcd. etcd security features depend on x509 Public Key +Infrastructure (PKI). To begin, establish secure communication channels by +generating a key and certificate pair. For example, use key pairs `peer.key` +and `peer.cert` for securing communication between etcd members, and +`client.key` and `client.cert` for securing communication between etcd and its +clients. See the [example scripts](https://github.com/coreos/etcd/tree/master/hack/tls-setup) +provided by the etcd project to generate key pairs and CA files for client +authentication. 
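If you rely on firewall rules rather than (or in addition to) TLS, a rough, illustrative sketch is to allow the etcd client port only from the API server hosts; the IP address below is a placeholder:

```shell
# Illustrative only: accept etcd client traffic (port 2379) from an API server
# host at 10.0.0.10 and drop it from everywhere else; adapt to your environment.
sudo iptables -A INPUT -p tcp --dport 2379 -s 10.0.0.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 2379 -j DROP
```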
### Securing communication -To configure etcd with secure peer communication, specify flags `--peer-key-file=peer.key` and `--peer-cert-file=peer.cert`, and use https as URL schema. +To configure etcd with secure peer communication, specify flags +`--peer-key-file=peer.key` and `--peer-cert-file=peer.cert`, and use HTTPS as +the URL schema. -Similarly, to configure etcd with secure client communication, specify flags `--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use https as URL schema. +Similarly, to configure etcd with secure client communication, specify flags +`--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use HTTPS as +the URL schema. Here is an example on a client command that uses secure +communication: + +``` +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \ + --cert=/etc/kubernetes/pki/etcd/server.crt \ + --key=/etc/kubernetes/pki/etcd/server.key \ + --cacert=/etc/kubernetes/pki/etcd/ca.crt \ + member list +``` ### Limiting access of etcd clusters -After configuring secure communication, restrict the access of etcd cluster to only the Kubernetes API server. Use TLS authentication to do so. +After configuring secure communication, restrict the access of etcd cluster to +only the Kubernetes API servers. Use TLS authentication to do so. -For example, consider key pairs `k8sclient.key` and `k8sclient.cert` that are trusted by the CA `etcd.ca`. When etcd is configured with `--client-cert-auth` along with TLS, it verifies the certificates from clients by using system CAs or the CA passed in by `--trusted-ca-file` flag. Specifying flags `--client-cert-auth=true` and `--trusted-ca-file=etcd.ca` will restrict the access to clients with the certificate `k8sclient.cert`. +For example, consider key pairs `k8sclient.key` and `k8sclient.cert` that are +trusted by the CA `etcd.ca`. When etcd is configured with `--client-cert-auth` +along with TLS, it verifies the certificates from clients by using system CAs +or the CA passed in by `--trusted-ca-file` flag. Specifying flags +`--client-cert-auth=true` and `--trusted-ca-file=etcd.ca` will restrict the +access to clients with the certificate `k8sclient.cert`. -Once etcd is configured correctly, only clients with valid certificates can access it. To give Kubernetes API server the access, configure it with the flags `--etcd-certfile=k8sclient.cert`,`--etcd-keyfile=k8sclient.key` and `--etcd-cafile=ca.cert`. +Once etcd is configured correctly, only clients with valid certificates can +access it. To give Kubernetes API servers the access, configure them with the +flags `--etcd-certfile=k8sclient.cert`,`--etcd-keyfile=k8sclient.key` and +`--etcd-cafile=ca.cert`. {{< note >}} -etcd authentication is not currently supported by Kubernetes. For more information, see the related issue [Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398). +etcd authentication is not currently supported by Kubernetes. For more +information, see the related issue +[Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398). {{< /note >}} ## Replacing a failed etcd member -etcd cluster achieves high availability by tolerating minor member failures. However, to improve the overall health of the cluster, replace failed members immediately. When multiple members fail, replace them one by one. Replacing a failed member involves two steps: removing the failed member and adding a new member. +etcd cluster achieves high availability by tolerating minor member failures. 
+However, to improve the overall health of the cluster, replace failed members +immediately. When multiple members fail, replace them one by one. Replacing a +failed member involves two steps: removing the failed member and adding a new +member. -Though etcd keeps unique member IDs internally, it is recommended to use a unique name for each member to avoid human errors. For example, consider a three-member etcd cluster. Let the URLs be, member1=http://10.0.0.1, member2=http://10.0.0.2, and member3=http://10.0.0.3. When member1 fails, replace it with member4=http://10.0.0.4. +Though etcd keeps unique member IDs internally, it is recommended to use a +unique name for each member to avoid human errors. For example, consider a +three-member etcd cluster. Let the URLs be, `member1=http://10.0.0.1`, +`member2=http://10.0.0.2`, and `member3=http://10.0.0.3`. When `member1` fails, +replace it with `member4=http://10.0.0.4`. -1. Get the member ID of the failed member1: +1. Get the member ID of the failed `member1`: - `etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list` + ```shell + etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list + ``` - The following message is displayed: + The following message is displayed: - 8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379 - 91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379 - fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379 + ```console + 8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379 + 91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379 + fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379 + ``` 2. Remove the failed member: - `etcdctl member remove 8211f1d0f64f3269` + ```shell + etcdctl member remove 8211f1d0f64f3269 + ``` - The following message is displayed: + The following message is displayed: - Removed member 8211f1d0f64f3269 from cluster + ```console + Removed member 8211f1d0f64f3269 from cluster + ``` 3. Add the new member: - `./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380` + ```shell + etcdctl member add member4 --peer-urls=http://10.0.0.4:2380 + ``` - The following message is displayed: + The following message is displayed: - Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4 + ```console + Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4 + ``` 4. Start the newly added member on a machine with the IP `10.0.0.4`: - export ETCD_NAME="member4" - export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380" - export ETCD_INITIAL_CLUSTER_STATE=existing - etcd [flags] + ```shell + export ETCD_NAME="member4" + export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380" + export ETCD_INITIAL_CLUSTER_STATE=existing + etcd [flags] + ``` 5. Do either of the following: - 1. Update its `--etcd-servers` flag to make Kubernetes aware of the configuration changes, then restart the Kubernetes API server. - 2. Update the load balancer configuration if a load balancer is used in the deployment. + 1. Update the `--etcd-servers` flag for the Kubernetes API servers to make + Kubernetes aware of the configuration changes, then restart the + Kubernetes API servers. + 2. Update the load balancer configuration if a load balancer is used in the + deployment. 
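As an optional sanity check (a sketch that reuses the example member URLs above), you can confirm that the new member has joined and that every endpoint reports healthy:

```shell
# List the current members, including the newly added member4
ETCDCTL_API=3 etcdctl \
  --endpoints=http://10.0.0.2:2379,http://10.0.0.3:2379,http://10.0.0.4:2379 \
  member list

# Check the health of each endpoint
ETCDCTL_API=3 etcdctl \
  --endpoints=http://10.0.0.2:2379,http://10.0.0.3:2379,http://10.0.0.4:2379 \
  endpoint health
```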
-For more information on cluster reconfiguration, see [etcd Reconfiguration Documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member). +For more information on cluster reconfiguration, see +[etcd reconfiguration documentation](https://etcd.io/docs/current/op-guide/runtime-configuration/#remove-a-member). ## Backing up an etcd cluster -All Kubernetes objects are stored on etcd. Periodically backing up the etcd cluster data is important to recover Kubernetes clusters under disaster scenarios, such as losing all master nodes. The snapshot file contains all the Kubernetes states and critical information. In order to keep the sensitive Kubernetes data safe, encrypt the snapshot files. +All Kubernetes objects are stored on etcd. Periodically backing up the etcd +cluster data is important to recover Kubernetes clusters under disaster +scenarios, such as losing all control plane nodes. The snapshot file contains +all the Kubernetes states and critical information. In order to keep the +sensitive Kubernetes data safe, encrypt the snapshot files. -Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapshot and volume snapshot. +Backing up an etcd cluster can be accomplished in two ways: etcd built-in +snapshot and volume snapshot. ### Built-in snapshot -etcd supports built-in snapshot. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member. +etcd supports built-in snapshot. A snapshot may either be taken from a live +member with the `etcdctl snapshot save` command or by copying the +`member/snap/db` file from an etcd +[data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir) +that is not currently used by an etcd process. Taking the snapshot will +not affect the performance of the member. -Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`: +Below is an example for taking a snapshot of the keyspace served by +`$ENDPOINT` to the file `snapshotdb`: -```sh +```shell ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb -# exit 0 +``` -# verify the snapshot +Verify the snapshot: + +```shell ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb +``` + +```console +----------+----------+------------+------------+ | HASH | REVISION | TOTAL KEYS | TOTAL SIZE | +----------+----------+------------+------------+ @@ -182,74 +274,86 @@ ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb ### Volume snapshot -If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by taking a snapshot of the storage volume. +If etcd is running on a storage volume that supports backup, such as Amazon +Elastic Block Store, back up etcd data by taking a snapshot of the storage +volume. + +### Snapshot using etcdctl options + +We can also take the snapshot using various options given by etcdctl. For example + +```shell +ETCDCTL_API=3 etcdctl --h +``` + +will list various options available from etcdctl. 
For example, you can take a snapshot by specifying +the endpoint, certificates, and key, as shown below: + +```shell +ETCDCTL_API=3 etcdctl --endpoints=[127.0.0.1:2379] \ + --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \ + snapshot save <backup-file-location> +``` +where `trusted-ca-file`, `cert-file` and `key-file` can be obtained from the description of the etcd Pod. ## Scaling up etcd clusters -Scaling up etcd clusters increases availability by trading off performance. Scaling does not increase cluster performance nor capability. A general rule is not to scale up or down etcd clusters. Do not configure any auto scaling groups for etcd clusters. It is highly recommended to always run a static five-member etcd cluster for production Kubernetes clusters at any officially supported scale. +Scaling up etcd clusters increases availability by trading off performance. +Scaling does not increase cluster performance nor capability. A general rule +is not to scale up or down etcd clusters. Do not configure any auto scaling +groups for etcd clusters. It is highly recommended to always run a static +five-member etcd cluster for production Kubernetes clusters at any officially +supported scale. -A reasonable scaling is to upgrade a three-member cluster to a five-member one, when more reliability is desired. See [etcd Reconfiguration Documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member) for information on how to add members into an existing cluster. +A reasonable scaling is to upgrade a three-member cluster to a five-member +one when more reliability is desired. See +[etcd reconfiguration documentation](https://etcd.io/docs/current/op-guide/runtime-configuration/#remove-a-member) +for information on how to add members into an existing cluster. ## Restoring an etcd cluster -etcd supports restoring from snapshots that are taken from an etcd process of the [major.minor](http://semver.org/) version. Restoring a version from a different patch version of etcd also is supported. A restore operation is employed to recover the data of a failed cluster. +etcd supports restoring from snapshots that are taken from an etcd process of +the [major.minor](http://semver.org/) version. Restoring a version from a +different patch version of etcd is also supported. A restore operation is +employed to recover the data of a failed cluster. -Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). For more information and examples on restoring a cluster from a snapshot file, see [etcd disaster recovery documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster). +Before starting the restore operation, a snapshot file must be present. It can +either be a snapshot file from a previous backup operation, or from a remaining +[data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir). +Here is an example: -If the access URLs of the restored cluster is changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses.
If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead. +```shell +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb +``` -If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure Kubernetes API server to fix the issue. +For more information and examples on restoring a cluster from a snapshot file, see +[etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster). + +If the access URLs of the restored cluster is changed from the previous +cluster, the Kubernetes API server must be reconfigured accordingly. In this +case, restart Kubernetes API servers with the flag +`--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag +`--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and +`$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is +used in front of an etcd cluster, you might need to update the load balancer +instead. + +If the majority of etcd members have permanently failed, the etcd cluster is +considered failed. In this scenario, Kubernetes cannot make any changes to its +current state. Although the scheduled pods might continue to run, no new pods +can be scheduled. In such cases, recover the etcd cluster and potentially +reconfigure Kubernetes API servers to fix the issue. {{< note >}} -If any API servers are running in your cluster, you should not attempt to restore instances of etcd. -Instead, follow these steps to restore etcd: +If any API servers are running in your cluster, you should not attempt to +restore instances of etcd. Instead, follow these steps to restore etcd: -- stop *all* kube-apiserver instances +- stop *all* API server instances - restore state in all etcd instances -- restart all kube-apiserver instances +- restart all API server instances -We also recommend restarting any components (e.g. kube-scheduler, kube-controller-manager, kubelet) to ensure that they don't -rely on some stale data. Note that in practice, the restore takes a bit of time. -During the restoration, critical components will lose leader lock and restart themselves. +We also recommend restarting any components (e.g. `kube-scheduler`, +`kube-controller-manager`, `kubelet`) to ensure that they don't rely on some +stale data. Note that in practice, the restore takes a bit of time. During the +restoration, critical components will lose leader lock and restart themselves. {{< /note >}} - -## Upgrading and rolling back etcd clusters - -As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for -new or existing Kubernetes clusters. 
The timeline for Kubernetes support for -etcd2 and etcd3 is as follows: - -- Kubernetes v1.0: etcd2 only -- Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2 -- Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3, - and `kube-apiserver` defaults to etcd3 -- Kubernetes v1.9.0: deprecation of etcd2 storage backend announced -- Kubernetes v1.13.0: etcd2 storage backend removed, `kube-apiserver` will - refuse to start with `--storage-backend=etcd2`, with the - message `etcd2 is no longer a supported storage backend` - -Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to -v1.13.x, etcd v2 data must be migrated to the v3 storage backend and -kube-apiserver invocations must be changed to use `--storage-backend=etcd3`. - -The process for migrating from etcd2 to etcd3 is highly dependent on how the -etcd cluster was deployed and configured, as well as how the Kubernetes -cluster was deployed and configured. We recommend that you consult your cluster -provider's documentation to see if there is a predefined solution. - -If your cluster was created via `kube-up.sh` and is still using etcd2 as its -storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters) - -## Known issue: etcd client balancer with secure endpoints - -The etcd v3 client, released in etcd v3.3.13 or earlier, has a [critical bug](https://github.com/kubernetes/kubernetes/issues/72102) which affects the kube-apiserver and HA deployments. The etcd client balancer failover does not properly work against secure endpoints. As a result, etcd servers may fail or disconnect briefly from the kube-apiserver. This affects kube-apiserver HA deployments. - -The fix was made in [etcd v3.4](https://github.com/etcd-io/etcd/pull/10911) (and backported to v3.3.14 or later): the new client now creates its own credential bundle to correctly set authority target in dial function. - -Because the fix requires gRPC dependency upgrade (to v1.23.0), downstream Kubernetes [did not backport etcd upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978). Which means the [etcd fix in kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab) is only available from Kubernetes 1.16. - -To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom kube-apiserver. You can make local changes to [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135) with [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab). - -See ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available"](https://github.com/kubernetes/kubernetes/issues/72102). - - diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index 8680abad43..faed4e1cb1 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -25,6 +25,12 @@ kube-dns. {{< codenew file="admin/dns/dnsutils.yaml" >}} +{{< note >}} +This example creates a pod in the `default` namespace. 
DNS name resolution for +services depends on the namespace of the pod. For more information, review +[DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names). +{{< /note >}} + Use that manifest to create a Pod: ```shell @@ -247,6 +253,27 @@ linux/amd64, go1.10.3, 2e322f6 172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 106 0.000066649s ``` +### Are you in the right namespace for the service? + +DNS queries that don't specify a namespace are limited to the pod's +namespace. + +If the namespace of the pod and service differ, the DNS query must include +the namespace of the service. + +This query is limited to the pod's namespace: +```shell +kubectl exec -i -t dnsutils -- nslookup <service-name> +``` + +This query specifies the namespace: +```shell +kubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace> +``` + +To learn more about name resolution, see +[DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names). + ## Known issues Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index a95a325d5d..797993f116 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -54,7 +54,7 @@ Host: k8s-master:8080 ``` Note that Kubernetes does not need to know what a dongle is or what a dongle is for. -The preceding PATCH request just tells Kubernetes that your Node has four things that +The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles. Start a proxy, so that you can easily send requests to the Kubernetes API server: diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index 0d5b6d4ebe..a9aaaacd46 100644 --- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -9,24 +9,17 @@ content_type: concept -In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine -there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master). +Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, add-ons must run on a regular cluster node. Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI. A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade) and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason). Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable.
-For static pods, this means it can't be evicted, but for non-static pods, it just means they will always be rescheduled. - - - +A static pod marked as critical can't be evicted. However, non-static pods marked as critical are always rescheduled. - ### Marking pod as critical To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`. - - diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md index 2f23379400..aad5f13909 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md @@ -1,7 +1,9 @@ --- reviewers: -- michmike -- patricklang +- jayunit100 +- jsturtevant +- marosset +- perithompson title: Adding Windows nodes min-kubernetes-server-version: 1.17 content_type: tutorial diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index d9d8a5929e..dc7af4a329 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -35,7 +35,7 @@ and kubeadm will use this CA for signing the rest of the certificates. ## External CA mode {#external-ca-mode} -It is also possible to provide just the `ca.crt` file and not the +It is also possible to provide only the `ca.crt` file and not the `ca.key` file (this is only available for the root CA file, not other cert pairs). If all other certificates and kubeconfig files are in place, kubeadm recognizes this condition and activates the "External CA" mode. kubeadm will proceed without the @@ -170,36 +170,7 @@ controllerManager: ### Create certificate signing requests (CSR) -You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`. - -If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved. -Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. -The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur: - -```shell -sudo kubeadm certs renew apiserver --use-api & -``` -The output is similar to this: -``` -[1] 2890 -[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created -``` - -### Approve certificate signing requests (CSR) - -If you set up an external signer, certificate signing requests (CSRs) are automatically approved. - -Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. e.g. - -```shell -kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 -``` -The output is similar to this: -```shell -certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved -``` - -You can view a list of pending certificates with `kubectl get csr`. +See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 5af9d27b82..47ea403e9a 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -37,7 +37,7 @@ The upgrade workflow at high level is the following: ### Additional information -- [Draining nodes](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version +- [Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads. - All containers are restarted after upgrade, because the container spec hash value is changed. diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md index 766db38485..fe9fd8b0c4 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md @@ -50,7 +50,7 @@ and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" te uses the container runtime interface as an abstraction so that you can use any compatible container runtime. -In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker. +In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker. Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes. The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However, because Docker existed before the CRI specification was invented, the Kubernetes project created an @@ -75,7 +75,7 @@ or execute something inside container using `docker exec`. If you're running workloads via Kubernetes, the best way to stop a container is through the Kubernetes API rather than directly through the container runtime (this advice applies -for all container runtimes, not just Docker). +for all container runtimes, not only Docker). {{< /note >}} diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 1d1461ade7..5d99875527 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -232,7 +232,7 @@ Apply the manifest to create a Deployment ```shell kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml ``` -We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. +We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. 
```shell kubectl get deployment diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 08b2868806..2934e1c0f7 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -196,7 +196,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te ```shell kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname -n=development --replicas=2 ``` - We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. + We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. ```shell kubectl get deployment -n=development @@ -302,7 +302,7 @@ Use cases include: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `<service-name>.<namespace>.svc.cluster.local`, which means -that if a container just uses `<service-name>` it will resolve to the service which +that if a container uses `<service-name>` it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 9efdccfb6e..40733c4c96 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -20,7 +20,7 @@ Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-goog **Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts). -1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag. +1. To launch a GKE cluster with Calico, include the `--enable-network-policy` flag. **Syntax** ```shell diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index b6249f50ef..a5661263f2 100644 --- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -90,7 +90,7 @@ In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be specified to reserve the specified number of process IDs for kubernetes system daemons. -To optionally enforce `kube-reserved` on system daemons, specify the parent +To optionally enforce `kube-reserved` on kubernetes system daemons, specify the parent control group for kube daemons as the value for `--kube-reserved-cgroup` kubelet flag.
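As a sketch of how these flags fit together on the kubelet command line (the reservation values and the cgroup name below are illustrative assumptions, not recommendations):

```shell
# Illustrative only: reserve resources for kubernetes system daemons and
# name the parent control group they run under.
kubelet --kube-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi,pid=1000 \
  --kube-reserved-cgroup=/kube.slice
```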
diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index db31cceb8b..0fc0a97ffc 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -128,8 +128,8 @@ curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.ex The API can respond in one of three ways: -- If the eviction is granted, then the Pod is deleted just as if you had sent - a `DELETE` request to the Pod's URL and you get back `200 OK`. +- If the eviction is granted, then the Pod is deleted as if you sent + a `DELETE` request to the Pod's URL and received back `200 OK`. - If the current state of affairs wouldn't allow an eviction by the rules set forth in the budget, you get back `429 Too Many Requests`. This is typically used for generic rate limiting of *any* requests, but here we mean diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 23f85f109b..b405d57baf 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -184,7 +184,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret mysecret diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 1e6d88ede4..293915736e 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -115,8 +115,7 @@ accidentally to an onlooker, or from being stored in a terminal log. 
## Decoding the Secret {#decoding-secret} -To view the contents of the Secret we just created, you can run the following -command: +To view the contents of the Secret you created, run the following command: ```shell kubectl get secret db-user-pass -o jsonpath='{.data}' @@ -125,10 +124,10 @@ kubectl get secret db-user-pass -o jsonpath='{.data}' The output is similar to: ```json -{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="} ``` -Now you can decode the `password.txt` data: +Now you can decode the `password` data: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode @@ -142,7 +141,7 @@ The output is similar to: ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md index d7b1f48a4a..fb257a6026 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md @@ -92,7 +92,7 @@ kubectl describe secrets/db-user-pass-96mffmfh4k The output is similar to: ``` -Name: db-user-pass +Name: db-user-pass-96mffmfh4k Namespace: default Labels: Annotations: @@ -113,7 +113,7 @@ To check the actual content of the encoded data, please refer to ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass-96mffmfh4k diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 243072eff2..21b02cc000 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -112,7 +112,7 @@ kubectl top pod cpu-demo --namespace=cpu-example ``` This example output shows that the Pod is using 974 milliCPU, which is -just a bit less than the limit of 1 CPU specified in the Pod configuration. +slightly less than the limit of 1 CPU specified in the Pod configuration. ``` NAME CPU(cores) MEMORY(bytes) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 45d56531f2..918a5bf33e 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -204,7 +204,7 @@ seconds. In addition to the readiness probe, this configuration includes a liveness probe. The kubelet will run the first liveness probe 15 seconds after the container -starts. Just like the readiness probe, this will attempt to connect to the +starts. Similar to the readiness probe, this will attempt to connect to the `goproxy` container on port 8080. If the liveness probe fails, the container will be restarted. @@ -293,6 +293,10 @@ Services. Readiness probes runs on the container during its whole lifecycle. {{< /note >}} +{{< caution >}} +Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe. +{{< /caution >}} + Readiness probes are configured similarly to liveness probes. 
The only difference is that you use the `readinessProbe` field instead of the `livenessProbe` field. diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 2824cce642..40987152e8 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -201,6 +201,9 @@ allow.textmode=true how.nice.to.look=fairlyNice ``` +When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap. +If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' `. + Use the option `--from-env-file` to create a ConfigMap from an env-file, for example: ```shell @@ -687,4 +690,3 @@ data: * Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). - diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index ca3d0b2966..d96a5c8270 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, `default`). - - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Use the Default Service Account to access the API server. @@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions). -To use a non-default service account, simply set the `spec.serviceAccountName` +To use a non-default service account, set the `spec.serviceAccountName` field of a pod to the name of the service account you wish to use. The service account has to exist at the time the pod is created, or it will be rejected. diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index ce0b5b3656..697a4c6e0e 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -9,18 +9,13 @@ weight: 100 This page shows how to create a Pod that uses a Secret to pull an image from a private Docker registry or repository. - - ## {{% heading "prerequisites" %}} - * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * To do this exercise, you need a [Docker ID](https://docs.docker.com/docker-id/) and password. - - ## Log in to Docker @@ -106,7 +101,8 @@ kubectl create secret docker-registry regcred --docker-server=` is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub) +* `` is your Private Docker Registry FQDN. 
+ Use `https://index.docker.io/v2/` for DockerHub. * `` is your Docker username. * `` is your Docker password. * `` is your Docker email. @@ -122,7 +118,7 @@ those secrets might also be visible to other users on your PC during the time th ## Inspecting the Secret `regcred` -To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format: +To understand the contents of the `regcred` Secret you created, start by viewing the Secret in YAML format: ```shell kubectl get secret regcred --output=yaml @@ -192,7 +188,8 @@ your.private.registry.example.com/janedoe/jdoe-private:v1 ``` To pull the image from the private registry, Kubernetes needs credentials. -The `imagePullSecrets` field in the configuration file specifies that Kubernetes should get the credentials from a Secret named `regcred`. +The `imagePullSecrets` field in the configuration file specifies that +Kubernetes should get the credentials from a Secret named `regcred`. Create a Pod that uses your Secret, and verify that the Pod is running: @@ -201,11 +198,8 @@ kubectl apply -f my-private-reg-pod.yaml kubectl get pod private-reg ``` - - ## {{% heading "whatsnext" %}} - * Learn more about [Secrets](/docs/concepts/configuration/secret/). * Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry). * Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). @@ -213,5 +207,3 @@ kubectl get pod private-reg * See [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core). * See the `imagePullSecrets` field of [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). - - diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index cc4d5c9e3c..384b709720 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com More information can be found on the Kompose website at [http://kompose.io](http://kompose.io). - - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Install Kompose @@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases). - {{% /tab %}} {{% tab name="Build from source" %}} @@ -74,7 +67,7 @@ sudo yum -y install kompose {{% /tab %}} {{% tab name="Fedora package" %}} -Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package. +Kompose is in Fedora 24, 25 and 26 repositories. You can install it like any other package. ```bash sudo dnf -y install kompose @@ -87,105 +80,127 @@ On macOS you can install latest release via [Homebrew](https://brew.sh): ```bash brew install kompose - ``` + {{% /tab %}} {{< /tabs >}} ## Use Kompose -In just a few steps, we'll take you from Docker Compose to Kubernetes. All +In a few steps, we'll take you from Docker Compose to Kubernetes. All you need is an existing `docker-compose.yml` file. -1. Go to the directory containing your `docker-compose.yml` file. If you don't - have one, test using this one. +1. 
Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one. - ```yaml - version: "2" + ```yaml + version: "2" - services: + services: - redis-master: - image: k8s.gcr.io/redis:e2e - ports: - - "6379" + redis-master: + image: k8s.gcr.io/redis:e2e + ports: + - "6379" - redis-slave: - image: gcr.io/google_samples/gb-redisslave:v3 - ports: - - "6379" - environment: - - GET_HOSTS_FROM=dns + redis-slave: + image: gcr.io/google_samples/gb-redisslave:v3 + ports: + - "6379" + environment: + - GET_HOSTS_FROM=dns - frontend: - image: gcr.io/google-samples/gb-frontend:v4 - ports: - - "80:80" - environment: - - GET_HOSTS_FROM=dns - labels: - kompose.service.type: LoadBalancer - ``` + frontend: + image: gcr.io/google-samples/gb-frontend:v4 + ports: + - "80:80" + environment: + - GET_HOSTS_FROM=dns + labels: + kompose.service.type: LoadBalancer + ``` -2. To convert the `docker-compose.yml` file to files that you can use with - `kubectl`, run `kompose convert` and then `kubectl apply -f `. +2. To convert the `docker-compose.yml` file to files that you can use with + `kubectl`, run `kompose convert` and then `kubectl apply -f `. - ```bash - $ kompose convert - INFO Kubernetes file "frontend-service.yaml" created - INFO Kubernetes file "redis-master-service.yaml" created - INFO Kubernetes file "redis-slave-service.yaml" created - INFO Kubernetes file "frontend-deployment.yaml" created - INFO Kubernetes file "redis-master-deployment.yaml" created - INFO Kubernetes file "redis-slave-deployment.yaml" created - ``` + ```bash + kompose convert + ``` - ```bash - $ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml - service/frontend created - service/redis-master created - service/redis-slave created - deployment.apps/frontend created - deployment.apps/redis-master created - deployment.apps/redis-slave created - ``` + The output is similar to: - Your deployments are running in Kubernetes. + ```none + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + ``` -3. Access your application. 
+ ```bash + kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml, + ``` - If you're already using `minikube` for your development process: + The output is similar to: - ```bash - $ minikube service frontend - ``` + ```none + redis-master-deployment.yaml,redis-slave-deployment.yaml + service/frontend created + service/redis-master created + service/redis-slave created + deployment.apps/frontend created + deployment.apps/redis-master created + deployment.apps/redis-slave created + ``` - Otherwise, let's look up what IP your service is using! + Your deployments are running in Kubernetes. - ```sh - $ kubectl describe svc frontend - Name: frontend - Namespace: default - Labels: service=frontend - Selector: service=frontend - Type: LoadBalancer - IP: 10.0.0.183 - LoadBalancer Ingress: 192.0.2.89 - Port: 80 80/TCP - NodePort: 80 31144/TCP - Endpoints: 172.17.0.4:80 - Session Affinity: None - No events. +3. Access your application. - ``` + If you're already using `minikube` for your development process: - If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`. + ```bash + minikube service frontend + ``` - ```sh - $ curl http://192.0.2.89 - ``` + Otherwise, let's look up what IP your service is using! + ```sh + kubectl describe svc frontend + ``` + ```none + Name: frontend + Namespace: default + Labels: service=frontend + Selector: service=frontend + Type: LoadBalancer + IP: 10.0.0.183 + LoadBalancer Ingress: 192.0.2.89 + Port: 80 80/TCP + NodePort: 80 31144/TCP + Endpoints: 172.17.0.4:80 + Session Affinity: None + No events. + ``` + + If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`. + + ```sh + curl http://192.0.2.89 + ``` @@ -205,15 +220,17 @@ you need is an existing `docker-compose.yml` file. Kompose has support for two providers: OpenShift and Kubernetes. You can choose a targeted provider using global option `--provider`. If no provider is specified, Kubernetes is set by default. - ## `kompose convert` Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects. 
-### Kubernetes +### Kubernetes `kompose convert` example -```sh -$ kompose --file docker-voting.yml convert +```shell +kompose --file docker-voting.yml convert +``` + +```none WARN Unsupported key networks - ignoring WARN Unsupported key build - ignoring INFO Kubernetes file "worker-svc.yaml" created @@ -226,16 +243,24 @@ INFO Kubernetes file "result-deployment.yaml" created INFO Kubernetes file "vote-deployment.yaml" created INFO Kubernetes file "worker-deployment.yaml" created INFO Kubernetes file "db-deployment.yaml" created +``` -$ ls +```shell +ls +``` + +```none db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml ``` You can also provide multiple docker-compose files at the same time: -```sh -$ kompose -f docker-compose.yml -f docker-guestbook.yml convert +```shell +kompose -f docker-compose.yml -f docker-guestbook.yml convert +``` + +```none INFO Kubernetes file "frontend-service.yaml" created INFO Kubernetes file "mlbparks-service.yaml" created INFO Kubernetes file "mongodb-service.yaml" created @@ -247,8 +272,13 @@ INFO Kubernetes file "mongodb-deployment.yaml" created INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created INFO Kubernetes file "redis-master-deployment.yaml" created INFO Kubernetes file "redis-slave-deployment.yaml" created +``` -$ ls +```shell +ls +``` + +```none mlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.jsonmlbparks-service.yaml frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml @@ -257,10 +287,13 @@ redis-master-deployment.yaml When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be over ridden by subsequent file. -### OpenShift +### OpenShift `kompose convert` example ```sh -$ kompose --provider openshift --file docker-voting.yml convert +kompose --provider openshift --file docker-voting.yml convert +``` + +```none WARN [worker] Service cannot be created because of missing port. INFO OpenShift file "vote-service.yaml" created INFO OpenShift file "db-service.yaml" created @@ -281,7 +314,10 @@ INFO OpenShift file "result-imagestream.yaml" created It also supports creating buildconfig for build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using ``--build-repo`` and ``--build-branch`` options respectively. ```sh -$ kompose --provider openshift --file buildconfig/docker-compose.yml convert +kompose --provider openshift --file buildconfig/docker-compose.yml convert +``` + +```none WARN [foo] Service cannot be created because of missing port. INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source. INFO OpenShift file "foo-deploymentconfig.yaml" created @@ -297,23 +333,31 @@ If you are manually pushing the OpenShift artifacts using ``oc create -f``, you Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`. 
+### Kubernetes `kompose up` example -### Kubernetes -```sh -$ kompose --file ./examples/docker-guestbook.yml up +```shell +kompose --file ./examples/docker-guestbook.yml up +``` + +```none We are going to create Kubernetes deployments and services for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. -INFO Successfully created service: redis-master -INFO Successfully created service: redis-slave -INFO Successfully created service: frontend +INFO Successfully created service: redis-master +INFO Successfully created service: redis-slave +INFO Successfully created service: frontend INFO Successfully created deployment: redis-master INFO Successfully created deployment: redis-slave -INFO Successfully created deployment: frontend +INFO Successfully created deployment: frontend Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details. +``` -$ kubectl get deployment,svc,pods +```shell +kubectl get deployment,svc,pods +``` + +```none NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.extensions/frontend 1 1 1 1 4m deployment.extensions/redis-master 1 1 1 1 4m @@ -331,14 +375,19 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m ``` -**Note**: +{{< note >}} - You must have a running Kubernetes cluster with a pre-configured kubectl context. - Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl apply -f` commands instead. +{{< /note >}} -### OpenShift -```sh -$ kompose --file ./examples/docker-guestbook.yml --provider openshift up +### OpenShift `kompose up` example + +```shell +kompose --file ./examples/docker-guestbook.yml --provider openshift up +``` + +```none We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. @@ -353,8 +402,13 @@ INFO Successfully created deployment: redis-master INFO Successfully created ImageStream: redis-master Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details. +``` -$ oc get dc,svc,is +```shell +oc get dc,svc,is +``` + +```none NAME REVISION DESIRED CURRENT TRIGGERED BY dc/frontend 0 1 0 config,image(frontend:v4) dc/redis-master 0 1 0 config,image(redis-master:e2e) @@ -369,16 +423,16 @@ is/redis-master 172.30.12.200:5000/fff/redis-master is/redis-slave 172.30.12.200:5000/fff/redis-slave v1 ``` -**Note**: - -- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`) +{{< note >}} +You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`). +{{< /note >}} ## `kompose down` -Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command. +Once you have deployed "composed" application to Kubernetes, `kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command. 
-```sh -$ kompose --file docker-guestbook.yml down +```shell +kompose --file docker-guestbook.yml down INFO Successfully deleted service: redis-master INFO Successfully deleted deployment: redis-master INFO Successfully deleted service: redis-slave @@ -387,16 +441,16 @@ INFO Successfully deleted service: frontend INFO Successfully deleted deployment: frontend ``` -**Note**: - -- You must have a running Kubernetes cluster with a pre-configured kubectl context. +{{< note >}} +You must have a running Kubernetes cluster with a pre-configured `kubectl` context. +{{< /note >}} ## Build and Push Docker Images Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will: - - Automatically be built with Docker using the `image` key specified within your file - - Be pushed to the correct Docker repository using local credentials (located at `.docker/config`) +- Automatically be built with Docker using the `image` key specified within your file +- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`) Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml): @@ -412,7 +466,7 @@ services: Using `kompose up` with a `build` key: ```none -$ kompose up +kompose up INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar' INFO Building image 'docker.io/foo/bar' from directory 'build' INFO Image 'docker.io/foo/bar' from directory 'build' built successfully @@ -432,10 +486,10 @@ In order to disable the functionality, or choose to use BuildConfig generation ( ```sh # Disable building/pushing Docker images -$ kompose up --build none +kompose up --build none # Generate Build Config artifacts for OpenShift -$ kompose up --provider openshift --build build-config +kompose up --provider openshift --build build-config ``` ## Alternative Conversions @@ -443,45 +497,54 @@ $ kompose up --provider openshift --build build-config The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts. ```sh -$ kompose convert -j +kompose convert -j INFO Kubernetes file "redis-svc.json" created INFO Kubernetes file "web-svc.json" created INFO Kubernetes file "redis-deployment.json" created INFO Kubernetes file "web-deployment.json" created ``` + The `*-deployment.json` files contain the Deployment objects. ```sh -$ kompose convert --replication-controller +kompose convert --replication-controller INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-replicationcontroller.yaml" created INFO Kubernetes file "web-replicationcontroller.yaml" created ``` -The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3` +The `*-replicationcontroller.yaml` files contain the Replication Controller objects. 
If you want to specify replicas (default is 1), use `--replicas` flag: `kompose convert --replication-controller --replicas 3` -```sh -$ kompose convert --daemon-set +```shell +kompose convert --daemon-set INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-daemonset.yaml" created INFO Kubernetes file "web-daemonset.yaml" created ``` -The `*-daemonset.yaml` files contain the Daemon Set objects +The `*-daemonset.yaml` files contain the DaemonSet objects -If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do: +If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) run: -```sh -$ kompose convert -c +```shell +kompose convert -c +``` + +```none INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-deployment.yaml" created INFO Kubernetes file "redis-deployment.yaml" created chart created in "./docker-compose/" +``` -$ tree docker-compose/ +```shell +tree docker-compose/ +``` + +```none docker-compose ├── Chart.yaml ├── README.md @@ -562,7 +625,7 @@ If you want to create normal pods without controllers you can use `restart` cons | `no` | Pod | `Never` | {{< note >}} -The controller object could be `deployment` or `replicationcontroller`, etc. +The controller object could be `deployment` or `replicationcontroller`. {{< /note >}} For example, the `pival` service will become pod down here. This container calculated value of `pi`. @@ -577,7 +640,7 @@ services: restart: "on-failure" ``` -### Warning about Deployment Config's +### Warning about Deployment Configurations If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to avoid multiple instances of a service from accessing a volume at the same time. @@ -590,5 +653,3 @@ Please note that changing service name might break some `docker-compose` files. Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on versions 2.1 and 3.2 due to their experimental nature. A full list on compatibility between all three versions is listed in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md) including a list of all incompatible Docker Compose keys. - - diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md index 730b9fb00c..03ba9d2c02 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -177,7 +177,7 @@ kubectl describe pod nginx-deployment-1370807587-fz9sd Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes. -To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.) +To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.) 
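For instance (a sketch that reuses the Deployment name from this walk-through; the target replica count is only an example):

```shell
# Reduce the Deployment to a replica count that fits the cluster's capacity.
kubectl scale deployment/nginx-deployment --replicas=3
```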
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 8a972e1365..c99182b854 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -57,7 +57,7 @@ case you can try several things: will never be scheduled. You can check node capacities with the `kubectl get nodes -o ` - command. Here are some example command lines that extract just the necessary + command. Here are some example command lines that extract the necessary information: ```shell diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index 54e474429c..59a83e87c7 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -99,7 +99,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never ``` The examples in this section use the `pause` container image because it does not -contain userland debugging utilities, but this method works with all container +contain debugging utilities, but this method works with all container images. If you attempt to use `kubectl exec` to create a shell you will see an error diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index ba141135fa..3b3b1c6081 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -111,7 +111,7 @@ kubectl get pods -l app=hostnames \ 10.244.0.7 ``` -The example container used for this walk-through simply serves its own hostname +The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on. @@ -178,7 +178,7 @@ kubectl expose deployment hostnames --port=80 --target-port=9376 service/hostnames exposed ``` -And read it back, just to be sure: +And read it back: ```shell kubectl get svc hostnames @@ -421,14 +421,13 @@ Earlier you saw that the Pods were running. You can re-check that: kubectl get pods -l app=hostnames ``` ```none -NAME READY STATUS RESTARTS AGE +NAME READY STATUS RESTARTS AGE hostnames-632524106-bbpiw 1/1 Running 0 1h hostnames-632524106-ly40y 1/1 Running 0 1h hostnames-632524106-tlaok 1/1 Running 0 1h ``` -The `-l app=hostnames` argument is a label selector - just like our Service -has. +The `-l app=hostnames` argument is a label selector configured on the Service. The "AGE" column says that these Pods are about an hour old, which implies that they are running fine and not crashing. 
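As an extra cross-check (not part of the original walk-through, but a common next step), you can confirm that the Service has actually selected those Pods by listing its endpoints:

```shell
# The ENDPOINTS column should list the Pod IPs shown earlier.
kubectl get endpoints hostnames
```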
@@ -607,7 +606,7 @@ iptables-save | grep hostnames -A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577 ``` -There should be 2 rules for each port of your Service (just one in this +There should be 2 rules for each port of your Service (only one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". Almost nobody should be using the "userspace" mode any more, so you won't spend diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index 1703bbbe42..29ace662f6 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -294,9 +294,9 @@ a running cluster in the [Deploying section](#deploying). ### Changing `DaemonSet` parameters -When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the -`template` field in its spec, daemonset controller will update the pods for you. For example, -let's assume you've just installed the Stackdriver Logging as described above. Now you want to +When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the +`template` field in its spec. The DaemonSet controller manages the pods for you. +For example, assume you've installed the Stackdriver Logging as described above. Now you want to change the memory limit to give fluentd more memory to safely process more logs. Get the spec of `DaemonSet` running in your cluster: diff --git a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md index bec423043d..28fd615b45 100644 --- a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md +++ b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md @@ -12,20 +12,15 @@ content_type: task This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`. - - ## {{% heading "prerequisites" %}} - You need to have a working `kubectl` binary installed. - - ## Installing kubectl plugins -A plugin is nothing more than a standalone executable file, whose name begins with `kubectl-`. To install a plugin, simply move its executable file to anywhere on your `PATH`. +A plugin is a standalone executable file, whose name begins with `kubectl-`. To install a plugin, move its executable file to anywhere on your `PATH`. You can also discover and install kubectl plugins available in the open source using [Krew](https://krew.dev/). Krew is a plugin manager maintained by @@ -60,9 +55,9 @@ You can write a plugin in any programming language or script that allows you to There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the `kubectl` binary. -A plugin determines which command path it wishes to implement based on its name. For -example, a plugin wanting to provide a new command `kubectl foo`, would simply be named -`kubectl-foo`, and live somewhere in your `PATH`. 
+A plugin determines which command path it wishes to implement based on its name. +For example, a plugin named `kubectl-foo` provides a command `kubectl foo`. You must +install the plugin executable somewhere in your `PATH`. ### Example plugin @@ -88,32 +83,34 @@ echo "I am a plugin named kubectl-foo" ### Using a plugin -To use the above plugin, simply make it executable: +To use a plugin, make the plugin executable: -``` +```shell sudo chmod +x ./kubectl-foo ``` and place it anywhere in your `PATH`: -``` +```shell sudo mv ./kubectl-foo /usr/local/bin ``` You may now invoke your plugin as a `kubectl` command: -``` +```shell kubectl foo ``` + ``` I am a plugin named kubectl-foo ``` All args and flags are passed as-is to the executable: -``` +```shell kubectl foo version ``` + ``` 1.0.0 ``` @@ -124,6 +121,7 @@ All environment variables are also passed as-is to the executable: export KUBECONFIG=~/.kube/config kubectl foo config ``` + ``` /home//.kube/config ``` @@ -131,6 +129,7 @@ kubectl foo config ```shell KUBECONFIG=/etc/kube/config kubectl foo config ``` + ``` /etc/kube/config ``` @@ -376,16 +375,11 @@ set up a build environment (if it needs compiling), and deploy the plugin. If you also make compiled packages available, or use Krew, that will make installs easier. - - ## {{% heading "whatsnext" %}} - * Check the Sample CLI Plugin repository for a [detailed example](https://github.com/kubernetes/sample-cli-plugin) of a plugin written in Go. In case of any questions, feel free to reach out to the [SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli). * Read about [Krew](https://krew.dev/), a package manager for kubectl plugins. - - diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index b0e272afa0..7ad7072fd7 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -12,7 +12,7 @@ weight: 20 Kubernetes ships with a default scheduler that is described [here](/docs/reference/command-line-tools-reference/kube-scheduler/). If the default scheduler does not suit your needs you can implement your own scheduler. -Not just that, you can even run multiple schedulers simultaneously alongside the default +Moreover, you can even run multiple schedulers simultaneously alongside the default scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's learn how to run multiple schedulers in Kubernetes with an example. @@ -30,7 +30,7 @@ in the Kubernetes source directory for a canonical example. ## Package the scheduler Package your scheduler binary into a container image. For the purposes of this example, -let's just use the default scheduler (kube-scheduler) as our second scheduler as well. +you can use the default scheduler (kube-scheduler) as your second scheduler. Clone the [Kubernetes source code from GitHub](https://github.com/kubernetes/kubernetes) and build the source. @@ -61,9 +61,9 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 ## Define a Kubernetes Deployment for the scheduler -Now that we have our scheduler in a container image, we can just create a pod -config for it and run it in our Kubernetes cluster. 
But instead of creating a pod -directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/) +Now that you have your scheduler in a container image, create a pod +configuration for it and run it in your Kubernetes cluster. But instead of creating a pod +directly in the cluster, you can use a [Deployment](/docs/concepts/workloads/controllers/deployment/) for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a [Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods, thereby making the scheduler resilient to failures. Here is the deployment @@ -83,7 +83,7 @@ detailed description of other command line arguments. ## Run the second scheduler in the cluster -In order to run your scheduler in a Kubernetes cluster, just create the deployment +In order to run your scheduler in a Kubernetes cluster, create the deployment specified in the config above in a Kubernetes cluster: ```shell @@ -132,9 +132,9 @@ kubectl edit clusterrole system:kube-scheduler ## Specify schedulers for pods -Now that our second scheduler is running, let's create some pods, and direct them -to be scheduled by either the default scheduler or the one we just deployed. -In order to schedule a given pod using a specific scheduler, we specify the name of the +Now that your second scheduler is running, create some pods, and direct them +to be scheduled by either the default scheduler or the one you deployed. +In order to schedule a given pod using a specific scheduler, specify the name of the scheduler in that pod spec. Let's look at three examples. - Pod spec without any scheduler name @@ -196,10 +196,13 @@ while the other two pods get scheduled. Once we submit the scheduler deployment and our new scheduler starts running, the `annotation-second-scheduler` pod gets scheduled as well. -Alternatively, one could just look at the "Scheduled" entries in the event logs to +Alternatively, you can look at the "Scheduled" entries in the event logs to verify that the pods were scheduled by the desired schedulers. ```shell kubectl get events ``` +You can also use a [custom scheduler configuration](/docs/reference/scheduling/config/#multiple-profiles) +or a custom container image for the cluster's main scheduler by modifying its static pod manifest +on the relevant control plane nodes. diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index b48d44a078..671637c084 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -404,7 +404,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- A conversion webhook must not mutate anything inside of `metadata` of the converted object other than `labels` and `annotations`. Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request -which caused the conversion. All other changes are just ignored. +which caused the conversion. All other changes are ignored. 
### Deploy the conversion webhook service diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index de9ab8181c..3230b7b73a 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -520,7 +520,7 @@ CustomResourceDefinition and migrating your objects from one version to another. ### Finalizers *Finalizers* allow controllers to implement asynchronous pre-delete hooks. -Custom objects support finalizers just like built-in objects. +Custom objects support finalizers similar to built-in objects. You can add a finalizer to a custom object like this: @@ -1129,8 +1129,6 @@ resources that have the scale subresource enabled. ### Categories -{{< feature-state state="beta" for_k8s_version="v1.10" >}} - Categories is a list of grouped resources the custom resource belongs to (eg. `all`). You can use `kubectl get ` to list the resources belonging to the category. diff --git a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md index 626ddcab5c..64c41d9094 100644 --- a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md +++ b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md @@ -41,7 +41,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake. 1. Create a Kubernetes service account in your namespace. 1. Create a Kubernetes cluster role for the operations you want to allow on your resources. -1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you just created. +1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you created. 1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server. 1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap. 1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. diff --git a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md index 9cefdca03d..02677d6204 100644 --- a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -70,8 +70,9 @@ override any environment variables specified in the container image. 
{{< /note >}} {{< note >}} -The environment variables can reference each other, and cycles are possible, -pay attention to the order before using +Environment variables may reference each other, however ordering is important. +Variables making use of others defined in the same context must come later in +the list. Similarly, avoid circular references. {{< /note >}} ## Using environment variables inside of your config diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index ce7da0b453..62ddf56aab 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -19,7 +19,7 @@ Here is an overview of the steps in this example: 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another one. In practice you would set up a message queue service once and reuse it for many jobs. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -141,13 +141,12 @@ root@temp-loe07:/# ``` In the last command, the `amqp-consume` tool takes one message (`-c 1`) -from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing -out what it gets on the standard input, and the echo is just to add a carriage +from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` prints out the characters read from standard input, and the echo adds a carriage return so the example is readable. ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. In a practice, the content of the messages might be: diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index 7f3c30121e..268eed7f9b 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -21,7 +21,7 @@ Here is an overview of the steps in this example: detect when a finite-length work queue is empty. In practice you would set up a store such as Redis once and reuse it for the work queues of many jobs, and other things. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -55,7 +55,7 @@ You could also download the following files directly: ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". 
In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. Start a temporary interactive pod for running the Redis CLI. diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index e92fa9f5bb..8f5994929e 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -12,7 +12,7 @@ based on a common template. You can use this approach to process batches of work parallel. For this example there are only three items: _apple_, _banana_, and _cherry_. -The sample Jobs process each item simply by printing a string then pausing. +The sample Jobs process each item by printing a string then pausing. See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how this pattern fits more realistic use cases. diff --git a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md index 05e8060cc9..704b01cc9a 100644 --- a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md @@ -25,7 +25,7 @@ You should already know how to [perform a rolling update on a ### Step 1: Find the DaemonSet revision you want to roll back to -You can skip this step if you just want to roll back to the last revision. +You can skip this step if you only want to roll back to the last revision. List all revisions of a DaemonSet: diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index f9e35cb0f5..2f3001da0f 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -111,7 +111,7 @@ kubectl edit ds/fluentd-elasticsearch -n kube-system ##### Updating only the container image -If you just need to update the container image in the DaemonSet template, i.e. +If you only need to update the container image in the DaemonSet template, i.e. `.spec.template.spec.containers[*].image`, use `kubectl set image`: ```shell @@ -167,7 +167,7 @@ If the recent DaemonSet template update is broken, for example, the container is crash looping, or the container image doesn't exist (often due to a typo), DaemonSet rollout won't progress. -To fix this, just update the DaemonSet template again. New rollout won't be +To fix this, update the DaemonSet template again. New rollout won't be blocked by previous unhealthy rollouts. #### Clock skew diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 4f8fc434f9..997005e9ce 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -37,7 +37,7 @@ When the above conditions are true, Kubernetes will expose `amd.com/gpu` or `nvidia.com/gpu` as a schedulable resource. You can consume these GPUs from your containers by requesting -`.com/gpu` just like you request `cpu` or `memory`. +`.com/gpu` the same way you request `cpu` or `memory`. 
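+
+As a minimal sketch (assuming the NVIDIA device plugin is installed; the Pod name
+and image below are placeholders), a container can ask for one GPU in its resource
+limits:
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: gpu-example               # placeholder name
+spec:
+  restartPolicy: OnFailure
+  containers:
+  - name: gpu-container
+    image: example.com/gpu-workload:latest   # placeholder image
+    resources:
+      limits:
+        nvidia.com/gpu: 1         # requested the same way as cpu or memory limits
+EOF
+```
+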
However, there are some limitations in how you specify the resource requirements when using GPUs: diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md index 75c9d56a83..643b57cc3b 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -16,7 +16,7 @@ preview of what changes `apply` will make. ## {{% heading "prerequisites" %}} -Install [`kubectl`](/docs/tasks/tools/install-kubectl/). +Install [`kubectl`](/docs/tasks/tools/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md index a51b5664ba..8e0670a89f 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md @@ -12,7 +12,7 @@ explains how those commands are organized and how to use them to manage live obj ## {{% heading "prerequisites" %}} -Install [`kubectl`](/docs/tasks/tools/install-kubectl/). +Install [`kubectl`](/docs/tasks/tools/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md index 2b97ed271c..87cc423da7 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md @@ -13,7 +13,7 @@ This document explains how to define and manage objects using configuration file ## {{% heading "prerequisites" %}} -Install [`kubectl`](/docs/tasks/tools/install-kubectl/). +Install [`kubectl`](/docs/tasks/tools/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md index c1722f694a..3ea3c50e8d 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -8,7 +8,7 @@ weight: 20 [Kustomize](https://github.com/kubernetes-sigs/kustomize) is a standalone tool to customize Kubernetes objects -through a [kustomization file](https://kubernetes-sigs.github.io/kustomize/api-reference/glossary/#kustomization). +through a [kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization). Since 1.14, Kubectl also supports the management of Kubernetes objects using a kustomization file. @@ -29,7 +29,7 @@ kubectl apply -k ## {{% heading "prerequisites" %}} -Install [`kubectl`](/docs/tasks/tools/install-kubectl/). +Install [`kubectl`](/docs/tasks/tools/). {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} diff --git a/content/en/docs/tasks/run-application/access-api-from-pod.md b/content/en/docs/tasks/run-application/access-api-from-pod.md new file mode 100644 index 0000000000..9eb2521f7f --- /dev/null +++ b/content/en/docs/tasks/run-application/access-api-from-pod.md @@ -0,0 +1,111 @@ +--- +title: Accessing the Kubernetes API from a Pod +content_type: task +weight: 120 +--- + + + +This guide demonstrates how to access the Kubernetes API from within a pod. 
+ +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## Accessing the API from within a Pod + +When accessing the API from within a Pod, locating and authenticating +to the API server are slightly different to the external client case. + +The easiest way to use the Kubernetes API from a Pod is to use +one of the official [client libraries](/docs/reference/using-api/client-libraries/). These +libraries can automatically discover the API server and authenticate. + +### Using Official Client Libraries + +From within a Pod, the recommended ways to connect to the Kubernetes API are: + + - For a Go client, use the official [Go client library](https://github.com/kubernetes/client-go/). + The `rest.InClusterConfig()` function handles API host discovery and authentication automatically. + See [an example here](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go). + + - For a Python client, use the official [Python client library](https://github.com/kubernetes-client/python/). + The `config.load_incluster_config()` function handles API host discovery and authentication automatically. + See [an example here](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py). + + - There are a number of other libraries available, please refer to the [Client Libraries](/docs/reference/using-api/client-libraries/) page. + +In each case, the service account credentials of the Pod are used to communicate +securely with the API server. + +### Directly accessing the REST API + +While running in a Pod, the Kubernetes apiserver is accessible via a Service named +`kubernetes` in the `default` namespace. Therefore, Pods can use the +`kubernetes.default.svc` hostname to query the API server. Official client libraries +do this automatically. + +The recommended way to authenticate to the API server is with a +[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod +is associated with a service account, and a credential (token) for that +service account is placed into the filesystem tree of each container in that Pod, +at `/var/run/secrets/kubernetes.io/serviceaccount/token`. + +If available, a certificate bundle is placed into the filesystem tree of each +container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be +used to verify the serving certificate of the API server. + +Finally, the default namespace to be used for namespaced API operations is placed in a file +at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container. + +### Using kubectl proxy + +If you would like to query the API without an official client library, you can run `kubectl proxy` +as the [command](/docs/tasks/inject-data-application/define-command-argument-container/) +of a new sidecar container in the Pod. This way, `kubectl proxy` will authenticate +to the API and expose it on the `localhost` interface of the Pod, so that other containers +in the Pod can use it directly. + +### Without using a proxy + +It is possible to avoid using the kubectl proxy by passing the authentication token +directly to the API server. The internal certificate secures the connection. 
+ +```shell +# Point to the internal API server hostname +APISERVER=https://kubernetes.default.svc + +# Path to ServiceAccount token +SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount + +# Read this Pod's namespace +NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) + +# Read the ServiceAccount bearer token +TOKEN=$(cat ${SERVICEACCOUNT}/token) + +# Reference the internal certificate authority (CA) +CACERT=${SERVICEACCOUNT}/ca.crt + +# Explore the API with TOKEN +curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api +``` + +The output will be similar to this: + +```json +{ + "kind": "APIVersions", + "versions": [ + "v1" + ], + "serverAddressByClientCIDRs": [ + { + "clientCIDR": "0.0.0.0/0", + "serverAddress": "10.0.1.149:443" + } + ] +} +``` diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index 57e54e6797..94b3c583eb 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -43,8 +43,8 @@ You may need to delete the associated headless service separately after the Stat kubectl delete service ``` -Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it. -If you want to delete just the StatefulSet and not the pods, use `--cascade=false`. +When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`. +For example: ```shell kubectl delete -f --cascade=false @@ -66,7 +66,7 @@ Use caution when deleting a PVC, as it may lead to data loss. ### Complete deletion of a StatefulSet -To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following: +To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following: ```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md index cda469f217..0001f4c9f4 100644 --- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md +++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md @@ -44,9 +44,9 @@ for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) before the kubelet deletes the name from the apiserver. -Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. +A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a -[timeout](/docs/concepts/architecture/nodes/#node-condition). +[timeout](/docs/concepts/architecture/nodes/#condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. 
The only ways in which a Pod in such a state can be removed from the apiserver are as follows: diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 49009e1268..84ae1addd2 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -382,7 +382,7 @@ with *external metrics*. Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster -based on any metric available in your monitoring system. Just provide a `metric` block with a +based on any metric available in your monitoring system. Provide a `metric` block with a `name` and `selector`, as above, and use the `External` metric type instead of `Object`. If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 56a48c8b30..5e94027423 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -23,9 +23,7 @@ Pod Autoscaling does not apply to objects that can't be scaled, for example, Dae The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. -The controller periodically adjusts the number of replicas in a replication controller or deployment -to match the observed average CPU utilization to the target specified by user. - +The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed metrics such as average CPU utilisation, average memory utilisation or any other custom metric to the target specified by the user. @@ -162,7 +160,7 @@ can be fetched, scaling is skipped. This means that the HPA is still capable of scaling up if one or more metrics give a `desiredReplicas` greater than the current value. -Finally, just before HPA scales the target, the scale recommendation is recorded. The +Finally, right before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly @@ -356,7 +354,7 @@ and [the walkthrough for using external metrics](/docs/tasks/run-application/hor ## Support for configurable scaling behavior Starting from -[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) +[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md) the `v2beta2` API allows scaling behavior to be configured through the HPA `behavior` field. Behaviors are specified separately for scaling up and down in `scaleUp` or `scaleDown` section under the `behavior` field. 
A stabilization @@ -383,7 +381,12 @@ behavior: periodSeconds: 60 ``` -When the number of pods is more than 40 the second policy will be used for scaling down. +`periodSeconds` indicates the length of time in the past for which the policy must hold true. +The first policy _(Pods)_ allows at most 4 replicas to be scaled down in one minute. The second policy +_(Percent)_ allows at most 10% of the current replicas to be scaled down in one minute. + +Since by default the policy which allows the highest amount of change is selected, the second policy will +only be used when the number of pod replicas is more than 40. With 40 or less replicas, the first policy will be applied. For instance if there are 80 replicas and the target has to be scaled down to 10 replicas then during the first step 8 replicas will be reduced. In the next iteration when the number of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of @@ -391,10 +394,6 @@ the autoscaler controller the number of pods to be change is re-calculated based of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied and 4 replicas will be reduced at a time. -`periodSeconds` indicates the length of time in the past for which the policy must hold true. -The first policy allows at most 4 replicas to be scaled down in one minute. The second policy -allows at most 10% of the current replicas to be scaled down in one minute. - The policy selection can be changed by specifying the `selectPolicy` field for a scaling direction. By setting the value to `Min` which would select the policy which allows the smallest change in the replica count. Setting the value to `Disabled` completely disables @@ -441,7 +440,7 @@ behavior: periodSeconds: 15 selectPolicy: Max ``` -For scaling down the stabilization window is _300_ seconds(or the value of the +For scaling down the stabilization window is _300_ seconds (or the value of the `--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy for scaling down which allows a 100% of the currently running replicas to be removed which means the scaling target can be scaled down to the minimum allowed replicas. diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index f1738ff53e..22f929c06f 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -39,6 +39,7 @@ on general patterns for running stateful applications in Kubernetes. [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). * Some familiarity with MySQL helps, but this tutorial aims to present general patterns that should be useful for other systems. +* You are using the default namespace or another namespace that does not contain any conflicting objects. @@ -171,10 +172,10 @@ properties. The script in the `init-mysql` container also applies either `primary.cnf` or `replica.cnf` from the ConfigMap by copying the contents into `conf.d`. Because the example topology consists of a single primary MySQL server and any number of -replicas, the script simply assigns ordinal `0` to be the primary server, and everyone +replicas, the script assigns ordinal `0` to be the primary server, and everyone else to be replicas. 
Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/), +[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees), this ensures the primary MySQL server is Ready before creating replicas, so they can begin replicating. @@ -534,10 +535,9 @@ kubectl delete pvc data-mysql-4 * Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/). * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/). * Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). -* Look in the [Helm Charts repository](https://github.com/kubernetes/charts) +* Look in the [Helm Charts repository](https://artifacthub.io/) for other stateful application examples. - diff --git a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md index 4c43948a21..bdc3b0c524 100644 --- a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -65,6 +65,8 @@ for a secure solution. kubectl describe deployment mysql + The output is similar to this: + Name: mysql Namespace: default CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700 @@ -105,6 +107,8 @@ for a secure solution. kubectl get pods -l app=mysql + The output is similar to this: + NAME READY STATUS RESTARTS AGE mysql-63082529-2z3ki 1/1 Running 0 3m @@ -112,6 +116,8 @@ for a secure solution. kubectl describe pvc mysql-pv-claim + The output is similar to this: + Name: mysql-pv-claim Namespace: default StorageClass: diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md index df604facf8..62bd984ddc 100644 --- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md @@ -51,7 +51,6 @@ a Deployment that runs the nginx:1.14.2 Docker image: The output is similar to this: - user@computer:~/website$ kubectl describe deployment nginx-deployment Name: nginx-deployment Namespace: default CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700 diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md index d558a271ad..b01f380a1a 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -19,7 +19,7 @@ Up to date information on this process can be found at the * You must have a Kubernetes cluster with cluster DNS enabled. * If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled. * If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script. -* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster. +* [Install and setup kubectl](/docs/tasks/tools/) v1.7 or higher. 
Make sure it is configured to connect to the Kubernetes cluster. * Install [Helm](https://helm.sh/) v2.7.0 or newer. * Follow the [Helm install instructions](https://helm.sh/docs/intro/install/). * If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm. @@ -33,7 +33,7 @@ Up to date information on this process can be found at the Once Helm is installed, add the *service-catalog* Helm repository to your local machine by executing the following command: ```shell -helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com +helm repo add svc-cat https://kubernetes-sigs.github.io/service-catalog ``` Check to make sure that it installed successfully by executing the following command: diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md index 52a55457a2..a724d5b17b 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md @@ -12,10 +12,7 @@ You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPl tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to Google Cloud projects. -Service Catalog itself can work with any kind of managed service, not just Google Cloud. - - - +Service Catalog can work with any kind of managed service, not only Google Cloud. ## {{% heading "prerequisites" %}} @@ -23,7 +20,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl * Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`. * Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts. * Service Catalog requires Kubernetes version 1.7+. -* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) so that it is configured to connect to a Kubernetes v1.7+ cluster. +* [Install and setup kubectl](/docs/tasks/tools/) so that it is configured to connect to a Kubernetes v1.7+ cluster. * The kubectl user must be bound to the *cluster-admin* role for it to install Service Catalog. To ensure that this is true, run the following command: kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user= diff --git a/content/en/docs/tasks/tls/certificate-rotation.md b/content/en/docs/tasks/tls/certificate-rotation.md index ea3602fbb0..5dd9b85714 100644 --- a/content/en/docs/tasks/tls/certificate-rotation.md +++ b/content/en/docs/tasks/tls/certificate-rotation.md @@ -69,8 +69,9 @@ write that to disk, in the location specified by `--cert-dir`. Then the kubelet will use the new certificate to connect to the Kubernetes API. As the expiration of the signed certificate approaches, the kubelet will -automatically issue a new certificate signing request, using the Kubernetes -API. Again, the controller manager will automatically approve the certificate +automatically issue a new certificate signing request, using the Kubernetes API. +This can happen at any point between 30% and 10% of the time remaining on the +certificate. Again, the controller manager will automatically approve the certificate request and attach a signed certificate to the certificate signing request. The kubelet will retrieve the new signed certificate from the Kubernetes API and write that to disk. 
Then it will update the connections it has to the diff --git a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md index 3147ac3a18..e56322cbec 100644 --- a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md +++ b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md @@ -51,12 +51,12 @@ Configurations with a single API server will experience unavailability while the If any pods are started before new CA is used by API servers, they will get this update and trust both old and new CAs. ```shell - base64_encoded_ca="$(base64 )" + base64_encoded_ca="$(base64 -w0 )" for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do kubectl get $token --namespace "$namespace" -o yaml | \ - /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}" | \ + /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \ kubectl apply -f - done done @@ -105,8 +105,8 @@ Configurations with a single API server will experience unavailability while the * Make sure control plane components logs no TLS errors. {{< note >}} - To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/concepts/cluster-administration/certificates/#openssl). - You can also use [`cfssl`](/docs/concepts/cluster-administration/certificates/#cfssl). + To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/tasks/administer-cluster/certificates/#openssl). + You can also use [`cfssl`](/docs/tasks/administer-cluster/certificates/#cfssl). {{< /note >}} 1. Annotate any Daemonsets and Deployments to trigger pod replacement in a safer rolling fashion. @@ -132,10 +132,10 @@ Configurations with a single API server will experience unavailability while the 1. If your cluster is using bootstrap tokens to join nodes, update the ConfigMap `cluster-info` in the `kube-public` namespace with new CA. ```shell - base64_encoded_ca="$(base64 /etc/kubernetes/pki/ca.crt)" + base64_encoded_ca="$(base64 -w0 /etc/kubernetes/pki/ca.crt)" kubectl get cm/cluster-info --namespace kube-public -o yaml | \ - /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}" | \ + /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \ kubectl apply -f - ``` diff --git a/content/en/docs/tasks/tools/_index.md b/content/en/docs/tasks/tools/_index.md index 7bbb2161ec..5f1b517141 100755 --- a/content/en/docs/tasks/tools/_index.md +++ b/content/en/docs/tasks/tools/_index.md @@ -7,19 +7,20 @@ no_list: true ## kubectl -The Kubernetes command-line tool, `kubectl`, allows you to run commands against -Kubernetes clusters. You can use `kubectl` to deploy applications, inspect and -manage cluster resources, and view logs. - -See [Install and Set Up `kubectl`](/docs/tasks/tools/install-kubectl/) for -information about how to download and install `kubectl` and set it up for -accessing your cluster. - -View kubectl Install and Set Up Guide - -You can also read the + +The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows +you to run commands against Kubernetes clusters. +You can use kubectl to deploy applications, inspect and manage cluster resources, +and view logs. 
For more information including a complete list of kubectl operations, see the [`kubectl` reference documentation](/docs/reference/kubectl/). +kubectl is installable on a variety of Linux platforms, macOS and Windows. +Find your preferred operating system below. + +- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux) +- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos) +- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows) + ## kind [`kind`](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on diff --git a/content/en/docs/tasks/tools/included/_index.md b/content/en/docs/tasks/tools/included/_index.md new file mode 100644 index 0000000000..2da0437b82 --- /dev/null +++ b/content/en/docs/tasks/tools/included/_index.md @@ -0,0 +1,6 @@ +--- +title: "Tools Included" +description: "Snippets to be included in the main kubectl-installs-*.md pages." +headless: true +toc_hide: true +--- \ No newline at end of file diff --git a/content/en/docs/tasks/tools/included/install-kubectl-gcloud.md b/content/en/docs/tasks/tools/included/install-kubectl-gcloud.md new file mode 100644 index 0000000000..dcf8572618 --- /dev/null +++ b/content/en/docs/tasks/tools/included/install-kubectl-gcloud.md @@ -0,0 +1,21 @@ +--- +title: "gcloud kubectl install" +description: "How to install kubectl with gcloud snippet for inclusion in each OS-specific tab." +headless: true +--- + +You can install kubectl as part of the Google Cloud SDK. + +1. Install the [Google Cloud SDK](https://cloud.google.com/sdk/). + +1. Run the `kubectl` installation command: + + ```shell + gcloud components install kubectl + ``` + +1. Test to ensure the version you installed is up-to-date: + + ```shell + kubectl version --client + ``` \ No newline at end of file diff --git a/content/en/docs/tasks/tools/included/kubectl-whats-next.md b/content/en/docs/tasks/tools/included/kubectl-whats-next.md new file mode 100644 index 0000000000..4b0da49bbc --- /dev/null +++ b/content/en/docs/tasks/tools/included/kubectl-whats-next.md @@ -0,0 +1,12 @@ +--- +title: "What's next?" +description: "What's next after installing kubectl." +headless: true +--- + +* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/) +* See the [getting started guides](/docs/setup/) for more about creating clusters. +* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/) +* If you need access to a cluster you didn't create, see the + [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). +* Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/) diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md new file mode 100644 index 0000000000..949f1922c4 --- /dev/null +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md @@ -0,0 +1,54 @@ +--- +title: "bash auto-completion on Linux" +description: "Some optional configuration for bash auto-completion on Linux." +headless: true +--- + +### Introduction + +The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion. 
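+
+For example, to try completion in your current shell session only, you can source the
+generated script directly:
+
+```bash
+source <(kubectl completion bash)
+```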
+ +However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`). + +### Install bash-completion + +bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc. + +The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file. + +To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file: + +```bash +source /usr/share/bash-completion/bash_completion +``` + +Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`. + +### Enable kubectl autocompletion + +You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this: + +- Source the completion script in your `~/.bashrc` file: + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- Add the completion script to the `/etc/bash_completion.d` directory: + + ```bash + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +If you have an alias for kubectl, you can extend shell completion to work with that alias: + +```bash +echo 'alias k=kubectl' >>~/.bashrc +echo 'complete -F __start_kubectl k' >>~/.bashrc +``` + +{{< note >}} +bash-completion sources all completion scripts in `/etc/bash_completion.d`. +{{< /note >}} + +Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md new file mode 100644 index 0000000000..9854540649 --- /dev/null +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -0,0 +1,89 @@ +--- +title: "bash auto-completion on macOS" +description: "Some optional configuration for bash auto-completion on macOS." +headless: true +--- + +### Introduction + +The kubectl completion script for Bash can be generated with `kubectl completion bash`. Sourcing this script in your shell enables kubectl completion. + +However, the kubectl completion script depends on [**bash-completion**](https://github.com/scop/bash-completion) which you thus have to previously install. + +{{< warning>}} +There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer). +{{< /warning >}} + +### Upgrade Bash + +The instructions here assume you use Bash 4.1+. 
You can check your Bash's version by running: + +```bash +echo $BASH_VERSION +``` + +If it is too old, you can install/upgrade it using Homebrew: + +```bash +brew install bash +``` + +Reload your shell and verify that the desired version is being used: + +```bash +echo $BASH_VERSION $SHELL +``` + +Homebrew usually installs it at `/usr/local/bin/bash`. + +### Install bash-completion + +{{< note >}} +As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work). +{{< /note >}} + +You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew: + +```bash +brew install bash-completion@2 +``` + +As stated in the output of this command, add the following to your `~/.bash_profile` file: + +```bash +export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" +[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" +``` + +Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`. + +### Enable kubectl autocompletion + +You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this: + +- Source the completion script in your `~/.bash_profile` file: + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bash_profile + ``` + +- Add the completion script to the `/usr/local/etc/bash_completion.d` directory: + + ```bash + kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl + ``` + +- If you have an alias for kubectl, you can extend shell completion to work with that alias: + + ```bash + echo 'alias k=kubectl' >>~/.bash_profile + echo 'complete -F __start_kubectl k' >>~/.bash_profile + ``` + +- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything. + + {{< note >}} + The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, that's why the latter two methods work. + {{< /note >}} + +In any case, after reloading your shell, kubectl completion should be working. diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md new file mode 100644 index 0000000000..95c8c0aa71 --- /dev/null +++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md @@ -0,0 +1,29 @@ +--- +title: "zsh auto-completion" +description: "Some optional configuration for zsh auto-completion." +headless: true +--- + +The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion. + +To do so in all your shell sessions, add the following to your `~/.zshrc` file: + +```zsh +source <(kubectl completion zsh) +``` + +If you have an alias for kubectl, you can extend shell completion to work with that alias: + +```zsh +echo 'alias k=kubectl' >>~/.zshrc +echo 'complete -F __start_kubectl k' >>~/.zshrc +``` + +After reloading your shell, kubectl autocompletion should be working. 
+ +If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file: + +```zsh +autoload -Uz compinit +compinit +``` \ No newline at end of file diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md new file mode 100644 index 0000000000..fbd92e4cb6 --- /dev/null +++ b/content/en/docs/tasks/tools/included/verify-kubectl.md @@ -0,0 +1,34 @@ +--- +title: "verify kubectl install" +description: "How to verify kubectl." +headless: true +--- + +In order for kubectl to find and access a Kubernetes cluster, it needs a +[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/), +which is created automatically when you create a cluster using +[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) +or successfully deploy a Minikube cluster. +By default, kubectl configuration is located at `~/.kube/config`. + +Check that kubectl is properly configured by getting the cluster state: + +```shell +kubectl cluster-info +``` + +If you see a URL response, kubectl is correctly configured to access your cluster. + +If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster. + +``` +The connection to the server was refused - did you specify the right host or port? +``` + +For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above. + +If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use: + +```shell +kubectl cluster-info dump +``` \ No newline at end of file diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md new file mode 100644 index 0000000000..243dbf4e0d --- /dev/null +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -0,0 +1,195 @@ +--- +reviewers: +- mikedanese +title: Install and Set Up kubectl on Linux +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: Install kubectl on Linux +--- + +## {{% heading "prerequisites" %}} + +You must use a kubectl version that is within one minor version difference of your cluster. +For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. +Using the latest version of kubectl helps avoid unforeseen issues. + +## Install kubectl on Linux + +The following methods exist for installing kubectl on Linux: + +- [Install kubectl binary with curl on Linux](#install-kubectl-binary-with-curl-on-linux) +- [Install using native package management](#install-using-native-package-management) +- [Install using other package management](#install-using-other-package-management) +- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk) + +### Install kubectl binary with curl on Linux + +1. Download the latest release with the command: + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + ``` + + {{< note >}} +To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version. 
+ +For example, to download version {{< param "fullversion" >}} on Linux, type: + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl + ``` + {{< /note >}} + +1. Validate the binary (optional) + + Download the kubectl checksum file: + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + ``` + + Validate the kubectl binary against the checksum file: + + ```bash + echo "$(}} + Download the same version of the binary and checksum. + {{< /note >}} + +1. Install kubectl + + ```bash + sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl + ``` + + {{< note >}} + If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory: + + ```bash + mkdir -p ~/.local/bin/kubectl + mv ./kubectl ~/.local/bin/kubectl + # and then add ~/.local/bin/kubectl to $PATH + ``` + + {{< /note >}} + +1. Test to ensure the version you installed is up-to-date: + + ```bash + kubectl version --client + ``` + +### Install using native package management + +{{< tabs name="kubectl_install" >}} +{{% tab name="Debian-based distributions" %}} + +1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository: + + ```shell + sudo apt-get update + sudo apt-get install -y apt-transport-https ca-certificates curl + ``` + +2. Download the Google Cloud public signing key: + + ```shell + sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg + ``` + +3. Add the Kubernetes `apt` repository: + + ```shell + echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list + ``` + +4. Update `apt` package index with the new repository and install kubectl: + + ```shell + sudo apt-get update + sudo apt-get install -y kubectl + ``` + +{{% /tab %}} + +{{< tab name="Red Hat-based distributions" codelang="bash" >}} +cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF +yum install -y kubectl +{{< /tab >}} +{{< /tabs >}} + +### Install using other package management + +{{< tabs name="other_kubectl_install" >}} +{{% tab name="Snap" %}} +If you are on Ubuntu or another Linux distribution that support [snap](https://snapcraft.io/docs/core/install) package manager, kubectl is available as a [snap](https://snapcraft.io/) application. + +```shell +snap install kubectl --classic +kubectl version --client +``` + +{{% /tab %}} + +{{% tab name="Homebrew" %}} +If you are on Linux and using [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) package manager, kubectl is available for [installation](https://docs.brew.sh/Homebrew-on-Linux#install). + +```shell +brew install kubectl +kubectl version --client +``` + +{{% /tab %}} + +{{< /tabs >}} + +### Install on Linux as part of the Google Cloud SDK + +{{< include "included/install-kubectl-gcloud.md" >}} + +## Verify kubectl configuration + +{{< include "included/verify-kubectl.md" >}} + +## Optional kubectl configurations + +### Enable shell autocompletion + +kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing. 
+ +Below are the procedures to set up autocompletion for Bash and Zsh. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-linux.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} diff --git a/content/en/docs/tasks/tools/install-kubectl-macos.md b/content/en/docs/tasks/tools/install-kubectl-macos.md new file mode 100644 index 0000000000..b4fa864985 --- /dev/null +++ b/content/en/docs/tasks/tools/install-kubectl-macos.md @@ -0,0 +1,160 @@ +--- +reviewers: +- mikedanese +title: Install and Set Up kubectl on macOS +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: Install kubectl on macOS +--- + +## {{% heading "prerequisites" %}} + +You must use a kubectl version that is within one minor version difference of your cluster. +For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. +Using the latest version of kubectl helps avoid unforeseen issues. + +## Install kubectl on macOS + +The following methods exist for installing kubectl on macOS: + +- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos) +- [Install with Homebrew on macOS](#install-with-homebrew-on-macos) +- [Install with Macports on macOS](#install-with-macports-on-macos) +- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk) + +### Install kubectl binary with curl on macOS + +1. Download the latest release: + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" + ``` + + {{< note >}} + To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version. + + For example, to download version {{< param "fullversion" >}} on macOS, type: + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl + ``` + + {{< /note >}} + +1. Validate the binary (optional) + + Download the kubectl checksum file: + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" + ``` + + Validate the kubectl binary against the checksum file: + + ```bash + echo "$(}} + Download the same version of the binary and checksum. + {{< /note >}} + +1. Make the kubectl binary executable. + + ```bash + chmod +x ./kubectl + ``` + +1. Move the kubectl binary to a file location on your system `PATH`. + + ```bash + sudo mv ./kubectl /usr/local/bin/kubectl + sudo chown root: /usr/local/bin/kubectl + ``` + +1. Test to ensure the version you installed is up-to-date: + + ```bash + kubectl version --client + ``` + +### Install with Homebrew on macOS + +If you are on macOS and using [Homebrew](https://brew.sh/) package manager, you can install kubectl with Homebrew. + +1. Run the installation command: + + ```bash + brew install kubectl + ``` + + or + + ```bash + brew install kubernetes-cli + ``` + +1. Test to ensure the version you installed is up-to-date: + + ```bash + kubectl version --client + ``` + +### Install with Macports on macOS + +If you are on macOS and using [Macports](https://macports.org/) package manager, you can install kubectl with Macports. + +1. Run the installation command: + + ```bash + sudo port selfupdate + sudo port install kubectl + ``` + +1. 
Test to ensure the version you installed is up-to-date: + + ```bash + kubectl version --client + ``` + + +### Install on macOS as part of the Google Cloud SDK + +{{< include "included/install-kubectl-gcloud.md" >}} + +## Verify kubectl configuration + +{{< include "included/verify-kubectl.md" >}} + +## Optional kubectl configurations + +### Enable shell autocompletion + +kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing. + +Below are the procedures to set up autocompletion for Bash and Zsh. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-mac.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} diff --git a/content/en/docs/tasks/tools/install-kubectl-windows.md b/content/en/docs/tasks/tools/install-kubectl-windows.md new file mode 100644 index 0000000000..09e8217626 --- /dev/null +++ b/content/en/docs/tasks/tools/install-kubectl-windows.md @@ -0,0 +1,179 @@ +--- +reviewers: +- mikedanese +title: Install and Set Up kubectl on Windows +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: Install kubectl on Windows +--- + +## {{% heading "prerequisites" %}} + +You must use a kubectl version that is within one minor version difference of your cluster. +For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. +Using the latest version of kubectl helps avoid unforeseen issues. + +## Install kubectl on Windows + +The following methods exist for installing kubectl on Windows: + +- [Install kubectl binary with curl on Windows](#install-kubectl-binary-with-curl-on-windows) +- [Install with PowerShell from PSGallery](#install-with-powershell-from-psgallery) +- [Install on Windows using Chocolatey or Scoop](#install-on-windows-using-chocolatey-or-scoop) +- [Install on Windows as part of the Google Cloud SDK](#install-on-windows-as-part-of-the-google-cloud-sdk) + + +### Install kubectl binary with curl on Windows + +1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe). + + Or if you have `curl` installed, use this command: + + ```powershell + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe + ``` + + {{< note >}} + To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt). + {{< /note >}} + +1. Validate the binary (optional) + + Download the kubectl checksum file: + + ```powershell + curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 + ``` + + Validate the kubectl binary against the checksum file: + + - Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded: + + ```cmd + CertUtil -hashfile kubectl.exe SHA256 + type kubectl.exe.sha256 + ``` + + - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result: + + ```powershell + $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + ``` + +1. Add the binary in to your `PATH`. + +1. 
Test to ensure the version of `kubectl` is the same as downloaded: + + ```cmd + kubectl version --client + ``` + +{{< note >}} +[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to `PATH`. +If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer or remove the Docker Desktop's `kubectl`. +{{< /note >}} + +### Install with PowerShell from PSGallery + +If you are on Windows and using the [PowerShell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with PowerShell. + +1. Run the installation commands (making sure to specify a `DownloadLocation`): + + ```powershell + Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force + install-kubectl.ps1 [-DownloadLocation ] + ``` + + {{< note >}} + If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's `temp` Directory. + {{< /note >}} + + The installer creates `$HOME/.kube` and instructs it to create a config file. + +1. Test to ensure the version you installed is up-to-date: + + ```powershell + kubectl version --client + ``` + +{{< note >}} +Updating the installation is performed by rerunning the two commands listed in step 1. +{{< /note >}} + +### Install on Windows using Chocolatey or Scoop + +1. To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [Scoop](https://scoop.sh) command-line installer. + + {{< tabs name="kubectl_win_install" >}} + {{% tab name="choco" %}} + ```powershell + choco install kubernetes-cli + ``` + {{% /tab %}} + {{% tab name="scoop" %}} + ```powershell + scoop install kubectl + ``` + {{% /tab %}} + {{< /tabs >}} + + +1. Test to ensure the version you installed is up-to-date: + + ```powershell + kubectl version --client + ``` + +1. Navigate to your home directory: + + ```powershell + # If you're using cmd.exe, run: cd %USERPROFILE% + cd ~ + ``` + +1. Create the `.kube` directory: + + ```powershell + mkdir .kube + ``` + +1. Change to the `.kube` directory you just created: + + ```powershell + cd .kube + ``` + +1. Configure kubectl to use a remote Kubernetes cluster: + + ```powershell + New-Item config -type file + ``` + +{{< note >}} +Edit the config file with a text editor of your choice, such as Notepad. +{{< /note >}} + +### Install on Windows as part of the Google Cloud SDK + +{{< include "included/install-kubectl-gcloud.md" >}} + +## Verify kubectl configuration + +{{< include "included/verify-kubectl.md" >}} + +## Optional kubectl configurations + +### Enable shell autocompletion + +kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing. + +Below are the procedures to set up autocompletion for Zsh, if you are running that on Windows. 
+ +{{< include "included/optional-kubectl-configs-zsh.md" >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md deleted file mode 100644 index 54f1e7e123..0000000000 --- a/content/en/docs/tasks/tools/install-kubectl.md +++ /dev/null @@ -1,634 +0,0 @@ ---- -reviewers: -- mikedanese -title: Install and Set Up kubectl -content_type: task -weight: 10 -card: - name: tasks - weight: 20 - title: Install kubectl ---- - - -The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows -you to run commands against Kubernetes clusters. -You can use kubectl to deploy applications, inspect and manage cluster resources, -and view logs. For a complete list of kubectl operations, see -[Overview of kubectl](/docs/reference/kubectl/overview/). - - -## {{% heading "prerequisites" %}} - -You must use a kubectl version that is within one minor version difference of your cluster. -For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. -Using the latest version of kubectl helps avoid unforeseen issues. - - - -## Install kubectl on Linux - -### Install kubectl binary with curl on Linux - -1. Download the latest release with the command: - - ```bash - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" - ``` - - {{< note >}} -To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version. - -For example, to download version {{< param "fullversion" >}} on Linux, type: - - ```bash - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl - ``` - {{< /note >}} - -1. Validate the binary (optional) - - Download the kubectl checksum file: - - ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" - ``` - - Validate the kubectl binary against the checksum file: - - ```bash - echo "$(}} - Download the same version of the binary and checksum. - {{< /note >}} - -1. Install kubectl - - ```bash - sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl - ``` - - {{< note >}} - If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory: - - ```bash - mkdir -p ~/.local/bin/kubectl - mv ./kubectl ~/.local/bin/kubectl - # and then add ~/.local/bin/kubectl to $PATH - ``` - - {{< /note >}} - -1. 
Test to ensure the version you installed is up-to-date: - - ```bash - kubectl version --client - ``` - -### Install using native package management - -{{< tabs name="kubectl_install" >}} -{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}} -sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl -curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - -echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list -sudo apt-get update -sudo apt-get install -y kubectl -{{< /tab >}} - -{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -yum install -y kubectl -{{< /tab >}} -{{< /tabs >}} - -### Install using other package management - -{{< tabs name="other_kubectl_install" >}} -{{% tab name="Snap" %}} -If you are on Ubuntu or another Linux distribution that support [snap](https://snapcraft.io/docs/core/install) package manager, kubectl is available as a [snap](https://snapcraft.io/) application. - -```shell -snap install kubectl --classic - -kubectl version --client -``` - -{{% /tab %}} - -{{% tab name="Homebrew" %}} -If you are on Linux and using [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) package manager, kubectl is available for [installation](https://docs.brew.sh/Homebrew-on-Linux#install). - -```shell -brew install kubectl - -kubectl version --client -``` - -{{% /tab %}} - -{{< /tabs >}} - - -## Install kubectl on macOS - -### Install kubectl binary with curl on macOS - -1. Download the latest release: - - ```bash - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" - ``` - - {{< note >}} - To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version. - - For example, to download version {{< param "fullversion" >}} on macOS, type: - - ```bash - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl - ``` - - {{< /note >}} - -1. Validate the binary (optional) - - Download the kubectl checksum file: - - ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" - ``` - - Validate the kubectl binary against the checksum file: - - ```bash - echo "$(}} - Download the same version of the binary and checksum. - {{< /note >}} - -1. Make the kubectl binary executable. - - ```bash - chmod +x ./kubectl - ``` - -1. Move the kubectl binary to a file location on your system `PATH`. - - ```bash - sudo mv ./kubectl /usr/local/bin/kubectl && \ - sudo chown root: /usr/local/bin/kubectl - ``` - -1. Test to ensure the version you installed is up-to-date: - - ```bash - kubectl version --client - ``` - -### Install with Homebrew on macOS - -If you are on macOS and using [Homebrew](https://brew.sh/) package manager, you can install kubectl with Homebrew. - -1. Run the installation command: - - ```bash - brew install kubectl - ``` - - or - - ```bash - brew install kubernetes-cli - ``` - -1. 
Test to ensure the version you installed is up-to-date: - - ```bash - kubectl version --client - ``` - -### Install with Macports on macOS - -If you are on macOS and using [Macports](https://macports.org/) package manager, you can install kubectl with Macports. - -1. Run the installation command: - - ```bash - sudo port selfupdate - sudo port install kubectl - ``` - -1. Test to ensure the version you installed is up-to-date: - - ```bash - kubectl version --client - ``` - -## Install kubectl on Windows - -### Install kubectl binary with curl on Windows - -1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe). - - Or if you have `curl` installed, use this command: - - ```powershell - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe - ``` - - {{< note >}} - To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt). - {{< /note >}} - -1. Validate the binary (optional) - - Download the kubectl checksum file: - - ```powershell - curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 - ``` - - Validate the kubectl binary against the checksum file: - - - Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded: - - ```cmd - CertUtil -hashfile kubectl.exe SHA256 - type kubectl.exe.sha256 - ``` - - - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result: - - ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) - ``` - -1. Add the binary in to your `PATH`. - -1. Test to ensure the version of `kubectl` is the same as downloaded: - - ```cmd - kubectl version --client - ``` - -{{< note >}} -[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to `PATH`. -If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer or remove the Docker Desktop's `kubectl`. -{{< /note >}} - -### Install with PowerShell from PSGallery - -If you are on Windows and using the [PowerShell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with PowerShell. - -1. Run the installation commands (making sure to specify a `DownloadLocation`): - - ```powershell - Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force - install-kubectl.ps1 [-DownloadLocation ] - ``` - - {{< note >}} - If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's `temp` Directory. - {{< /note >}} - - The installer creates `$HOME/.kube` and instructs it to create a config file. - -1. Test to ensure the version you installed is up-to-date: - - ```powershell - kubectl version --client - ``` - -{{< note >}} -Updating the installation is performed by rerunning the two commands listed in step 1. -{{< /note >}} - -### Install on Windows using Chocolatey or Scoop - -1. To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [Scoop](https://scoop.sh) command-line installer. 
- - {{< tabs name="kubectl_win_install" >}} - {{% tab name="choco" %}} - ```powershell - choco install kubernetes-cli - ``` - {{% /tab %}} - {{% tab name="scoop" %}} - ```powershell - scoop install kubectl - ``` - {{% /tab %}} - {{< /tabs >}} - - -1. Test to ensure the version you installed is up-to-date: - - ```powershell - kubectl version --client - ``` - -1. Navigate to your home directory: - - ```powershell - # If you're using cmd.exe, run: cd %USERPROFILE% - cd ~ - ``` - -1. Create the `.kube` directory: - - ```powershell - mkdir .kube - ``` - -1. Change to the `.kube` directory you just created: - - ```powershell - cd .kube - ``` - -1. Configure kubectl to use a remote Kubernetes cluster: - - ```powershell - New-Item config -type file - ``` - -{{< note >}} -Edit the config file with a text editor of your choice, such as Notepad. -{{< /note >}} - -## Download as part of the Google Cloud SDK - -You can install kubectl as part of the Google Cloud SDK. - -1. Install the [Google Cloud SDK](https://cloud.google.com/sdk/). - -1. Run the `kubectl` installation command: - - ```shell - gcloud components install kubectl - ``` - -1. Test to ensure the version you installed is up-to-date: - - ```shell - kubectl version --client - ``` - -## Verifying kubectl configuration - -In order for kubectl to find and access a Kubernetes cluster, it needs a -[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/), -which is created automatically when you create a cluster using -[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) -or successfully deploy a Minikube cluster. -By default, kubectl configuration is located at `~/.kube/config`. - -Check that kubectl is properly configured by getting the cluster state: - -```shell -kubectl cluster-info -``` - -If you see a URL response, kubectl is correctly configured to access your cluster. - -If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster. - -``` -The connection to the server was refused - did you specify the right host or port? -``` - -For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above. - -If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use: - -```shell -kubectl cluster-info dump -``` - -## Optional kubectl configurations - -### Enabling shell autocompletion - -kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing. - -Below are the procedures to set up autocompletion for Bash (including the difference between Linux and macOS) and Zsh. - -{{< tabs name="kubectl_autocompletion" >}} - -{{% tab name="Bash on Linux" %}} - -### Introduction - -The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion. - -However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`). - -### Install bash-completion - -bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). 
You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc. - -The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file. - -To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file: - -```bash -source /usr/share/bash-completion/bash_completion -``` - -Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`. - -### Enable kubectl autocompletion - -You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this: - -- Source the completion script in your `~/.bashrc` file: - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bashrc - ``` - -- Add the completion script to the `/etc/bash_completion.d` directory: - - ```bash - kubectl completion bash >/etc/bash_completion.d/kubectl - ``` - -If you have an alias for kubectl, you can extend shell completion to work with that alias: - -```bash -echo 'alias k=kubectl' >>~/.bashrc -echo 'complete -F __start_kubectl k' >>~/.bashrc -``` - -{{< note >}} -bash-completion sources all completion scripts in `/etc/bash_completion.d`. -{{< /note >}} - -Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. - -{{% /tab %}} - - -{{% tab name="Bash on macOS" %}} - - -### Introduction - -The kubectl completion script for Bash can be generated with `kubectl completion bash`. Sourcing this script in your shell enables kubectl completion. - -However, the kubectl completion script depends on [**bash-completion**](https://github.com/scop/bash-completion) which you thus have to previously install. - -{{< warning>}} -There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer). -{{< /warning >}} - -### Upgrade Bash - -The instructions here assume you use Bash 4.1+. You can check your Bash's version by running: - -```bash -echo $BASH_VERSION -``` - -If it is too old, you can install/upgrade it using Homebrew: - -```bash -brew install bash -``` - -Reload your shell and verify that the desired version is being used: - -```bash -echo $BASH_VERSION $SHELL -``` - -Homebrew usually installs it at `/usr/local/bin/bash`. - -### Install bash-completion - -{{< note >}} -As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work). -{{< /note >}} - -You can test if you have bash-completion v2 already installed with `type _init_completion`. 
If not, you can install it with Homebrew: - -```bash -brew install bash-completion@2 -``` - -As stated in the output of this command, add the following to your `~/.bash_profile` file: - -```bash -export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" -[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" -``` - -Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`. - -### Enable kubectl autocompletion - -You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this: - -- Source the completion script in your `~/.bash_profile` file: - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bash_profile - ``` - -- Add the completion script to the `/usr/local/etc/bash_completion.d` directory: - - ```bash - kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl - ``` - -- If you have an alias for kubectl, you can extend shell completion to work with that alias: - - ```bash - echo 'alias k=kubectl' >>~/.bash_profile - echo 'complete -F __start_kubectl k' >>~/.bash_profile - ``` - -- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything. - - {{< note >}} - The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, that's why the latter two methods work. - {{< /note >}} - -In any case, after reloading your shell, kubectl completion should be working. -{{% /tab %}} - -{{% tab name="Zsh" %}} - -The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion. - -To do so in all your shell sessions, add the following to your `~/.zshrc` file: - -```zsh -source <(kubectl completion zsh) -``` - -If you have an alias for kubectl, you can extend shell completion to work with that alias: - -```zsh -echo 'alias k=kubectl' >>~/.zshrc -echo 'complete -F __start_kubectl k' >>~/.zshrc -``` - -After reloading your shell, kubectl autocompletion should be working. - -If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file: - -```zsh -autoload -Uz compinit -compinit -``` -{{% /tab %}} -{{< /tabs >}} - -## {{% heading "whatsnext" %}} - -* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/) -* See the [getting started guides](/docs/setup/) for more about creating clusters. -* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/) -* If you need access to a cluster you didn't create, see the - [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). -* Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/) - diff --git a/content/en/docs/test.md b/content/en/docs/test.md index aadfc9a9e3..ae5bb447f1 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -113,7 +113,7 @@ mind: two consecutive lists. **The HTML comment needs to be at the left margin.** 2. Numbered lists can have paragraphs or block elements within them. 
- Just indent the content to be the same as the first line of the bullet + Indent the content to be the same as the first line of the bullet point. **This paragraph and the code block line up with the `N` in `Numbered` above.** diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 8ca9f30bad..b220647e62 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -168,8 +168,7 @@ k8s-apparmor-example-deny-write (enforce) *This example assumes you have already set up a cluster with AppArmor support.* -First, we need to load the profile we want to use onto our nodes. The profile we'll use simply -denies all file writes: +First, we need to load the profile we want to use onto our nodes. This profile denies all file writes: ```shell #include @@ -185,7 +184,7 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { ``` Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our -nodes. For this example we'll just use SSH to install the profiles, but other approaches are +nodes. For this example we'll use SSH to install the profiles, but other approaches are discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). ```shell diff --git a/content/en/docs/tutorials/clusters/seccomp.md b/content/en/docs/tutorials/clusters/seccomp.md index adb3d9c500..971618cf55 100644 --- a/content/en/docs/tutorials/clusters/seccomp.md +++ b/content/en/docs/tutorials/clusters/seccomp.md @@ -37,7 +37,7 @@ profiles that give only the necessary privileges to your container processes. In order to complete all steps in this tutorial, you must install [kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and -[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples +[kubectl](/docs/tasks/tools/). This tutorial will show examples with both alpha (pre-v1.19) and generally available seccomp functionality, so make sure that your cluster is [configured correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version) @@ -67,8 +67,8 @@ into the cluster. For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker, -so each node of the cluster is actually just a container. This allows for files -to be mounted in the filesystem of each container just as one might load files +so each node of the cluster is a container. This allows for files +to be mounted in the filesystem of each container similar to loading files onto a node. 
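For orientation, a kind configuration of that shape might look roughly like the sketch below (the actual file is included by the shortcode that follows; the `./profiles` host path is an assumption):

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  # mount local seccomp profiles into the node container so the kubelet
  # can resolve them under its seccomp profile root
  - hostPath: "./profiles"
    containerPath: "/var/lib/kubelet/seccomp/profiles"
```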
{{< codenew file="pods/security/seccomp/kind.yaml" >}} diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md index 7555a58201..b29b352aca 100644 --- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -15,10 +15,8 @@ This page provides a real world example of how to configure Redis using a Config ## {{% heading "objectives" %}} -* Create a `kustomization.yaml` file containing: - * a ConfigMap generator - * a Pod resource config using the ConfigMap -* Apply the directory by running `kubectl apply -k ./` +* Create a ConfigMap with Redis configuration values +* Create a Redis Pod that mounts and uses the created ConfigMap * Verify that the configuration was correctly applied. @@ -38,82 +36,218 @@ This page provides a real world example of how to configure Redis using a Config ## Real World Example: Configuring Redis using a ConfigMap -You can follow the steps below to configure a Redis cache using data stored in a ConfigMap. +Follow the steps below to configure a Redis cache using data stored in a ConfigMap. -First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file: - -{{< codenew file="pods/config/redis-config" >}} +First create a ConfigMap with an empty configuration block: ```shell -curl -OL https://k8s.io/examples/pods/config/redis-config - -cat <./kustomization.yaml -configMapGenerator: -- name: example-redis-config - files: - - redis-config +cat <./example-redis-config.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: example-redis-config +data: + redis-config: "" EOF ``` -Add the pod resource config to the `kustomization.yaml`: +Apply the ConfigMap created above, along with a Redis pod manifest: + +```shell +kubectl apply -f example-redis-config.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml +``` + +Examine the contents of the Redis pod manifest and note the following: + +* A volume named `config` is created by `spec.volumes[1]` +* The `key` and `path` under `spec.volumes[1].items[0]` exposes the `redis-config` key from the + `example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume. +* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`. + +This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config` +ConfigMap above as `/redis-master/redis.conf` inside the Pod. {{< codenew file="pods/config/redis-pod.yaml" >}} -```shell -curl -OL https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml +Examine the created objects: -cat <>./kustomization.yaml -resources: -- redis-pod.yaml -EOF +```shell +kubectl get pod/redis configmap/example-redis-config ``` -Apply the kustomization directory to create both the ConfigMap and Pod objects: +You should see the following output: ```shell -kubectl apply -k . -``` - -Examine the created objects by -```shell -> kubectl get -k . -NAME DATA AGE -configmap/example-redis-config-dgh9dg555m 1 52s - NAME READY STATUS RESTARTS AGE -pod/redis 1/1 Running 0 52s +pod/redis 1/1 Running 0 8s + +NAME DATA AGE +configmap/example-redis-config 1 14s ``` -In the example, the config volume is mounted at `/redis-master`. 
-It uses `path` to add the `redis-config` key to a file named `redis.conf`. -The file path for the redis config, therefore, is `/redis-master/redis.conf`. -This is where the image will look for the config file for the redis master. +Recall that we left `redis-config` key in the `example-redis-config` ConfigMap blank: -Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that -the configuration was correctly applied: +```shell +kubectl describe configmap/example-redis-config +``` + +You should see an empty `redis-config` key: + +```shell +Name: example-redis-config +Namespace: default +Labels: +Annotations: + +Data +==== +redis-config: +``` + +Use `kubectl exec` to enter the pod and run the `redis-cli` tool to check the current configuration: ```shell kubectl exec -it redis -- redis-cli +``` + +Check `maxmemory`: + +```shell 127.0.0.1:6379> CONFIG GET maxmemory +``` + +It should show the default value of 0: + +```shell +1) "maxmemory" +2) "0" +``` + +Similarly, check `maxmemory-policy`: + +```shell +127.0.0.1:6379> CONFIG GET maxmemory-policy +``` + +Which should also yield its default value of `noeviction`: + +```shell +1) "maxmemory-policy" +2) "noeviction" +``` + +Now let's add some configuration values to the `example-redis-config` ConfigMap: + +{{< codenew file="pods/config/example-redis-config.yaml" >}} + +Apply the updated ConfigMap: + +```shell +kubectl apply -f example-redis-config.yaml +``` + +Confirm that the ConfigMap was updated: + +```shell +kubectl describe configmap/example-redis-config +``` + +You should see the configuration values we just added: + +```shell +Name: example-redis-config +Namespace: default +Labels: +Annotations: + +Data +==== +redis-config: +---- +maxmemory 2mb +maxmemory-policy allkeys-lru +``` + +Check the Redis Pod again using `redis-cli` via `kubectl exec` to see if the configuration was applied: + +```shell +kubectl exec -it redis -- redis-cli +``` + +Check `maxmemory`: + +```shell +127.0.0.1:6379> CONFIG GET maxmemory +``` + +It remains at the default value of 0: + +```shell +1) "maxmemory" +2) "0" +``` + +Similarly, `maxmemory-policy` remains at the `noeviction` default setting: + +```shell +127.0.0.1:6379> CONFIG GET maxmemory-policy +``` + +Returns: + +```shell +1) "maxmemory-policy" +2) "noeviction" +``` + +The configuration values have not changed because the Pod needs to be restarted to grab updated +values from associated ConfigMaps. Let's delete and recreate the Pod: + +```shell +kubectl delete pod redis +kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml +``` + +Now re-check the configuration values one last time: + +```shell +kubectl exec -it redis -- redis-cli +``` + +Check `maxmemory`: + +```shell +127.0.0.1:6379> CONFIG GET maxmemory +``` + +It should now return the updated value of 2097152: + +```shell 1) "maxmemory" 2) "2097152" +``` + +Similarly, `maxmemory-policy` has also been updated: + +```shell 127.0.0.1:6379> CONFIG GET maxmemory-policy +``` + +It now reflects the desired value of `allkeys-lru`: + +```shell 1) "maxmemory-policy" 2) "allkeys-lru" ``` -Delete the created pod: +Clean up your work by deleting the created resources: + ```shell -kubectl delete pod redis +kubectl delete pod/redis configmap/example-redis-config ``` - - ## {{% heading "whatsnext" %}} * Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/). 
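For reference, the updated `example-redis-config.yaml` applied in the steps above presumably looks roughly like this, matching the values shown in the `kubectl describe` output (a sketch, not the literal file from the examples tree):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  # exposed inside the Pod as /redis-master/redis.conf
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
```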
- - - - diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index ba115b601c..d8ad753958 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -46,7 +46,7 @@ This tutorial provides a container image that uses NGINX to echo back all the re {{< kat-button >}} {{< note >}} -If you installed minikube locally, run `minikube start`. +If you installed minikube locally, run `minikube start`. Before you run `minikube dashboard`, you should open a new terminal, start `minikube dashboard` there, and then switch back to the main terminal. {{< /note >}} 2. Open the Kubernetes dashboard in a browser: @@ -59,6 +59,22 @@ If you installed minikube locally, run `minikube start`. 4. Katacoda environment only: Type `30000`, and then click **Display Port**. +{{< note >}} +The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service. + +If you are running in an environment as root, see [Open Dashboard with URL](#open-dashboard-with-url). + +To stop the proxy, run `Ctrl+C` to exit the process. The dashboard remains running. +{{< /note >}} + +## Open Dashboard with URL + +If you don't want to open a web browser, run the dashboard command with the url flag to emit a URL: + +```shell +minikube dashboard --url +``` + ## Create a Deployment A Kubernetes [*Pod*](/docs/concepts/workloads/pods/) is a group of one or more Containers, @@ -136,7 +152,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used `kubectl expose` to expose a different port, clients could not connect to that other port. -2. View the Service you just created: +2. View the Service you created: ```shell kubectl get services @@ -211,7 +227,7 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons" metrics-server was successfully enabled ``` -3. View the Pod and Service you just created: +3. View the Pod and Service you created: ```shell kubectl get pod,svc -n kube-system diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index 5ac682d7af..47a2629feb 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -33,7 +33,7 @@ weight: 10

A Kubernetes cluster consists of two types of resources:

-  • The Master coordinates the cluster
+  • The Control Plane coordinates the cluster
  • Nodes are the workers that run applications

@@ -71,22 +71,22 @@ weight: 10
-The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.

-A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

+The Control Plane is responsible for managing the cluster. The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.

+A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

-Masters manage the cluster and the nodes that are used to host the running applications.

+Control Planes manage the cluster and the nodes that are used to host the running applications.

-When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

+When you deploy applications on Kubernetes, you tell the control plane to start the application containers. The control plane schedules the containers to run on the cluster's nodes. The nodes communicate with the control plane using the Kubernetes API, which the control plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.
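In practice, telling the control plane to run an application is usually done through kubectl; for example (a sketch — the deployment name and image are illustrative, borrowed from the hello-minikube tutorial elsewhere on this site):

```shell
# ask the control plane to create a Deployment that runs one container
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
```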

-A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.

+A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.
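Each of the bootstrapping operations named above maps to a single command — a quick sketch:

```shell
minikube start    # create and start a local single-node cluster
minikube status   # check the state of the cluster and its components
minikube stop     # stop the cluster without deleting it
minikube delete   # remove the cluster entirely
```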

Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!

diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 2ee67382fd..15b6d00a6c 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,7 @@ weight: 10 Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes - master schedules the application instances included in that Deployment to run on individual Nodes in the + control plane schedules the application instances included in that Deployment to run on individual Nodes in the cluster.

diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 1d8a069984..d7687bc7b1 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -37,7 +37,7 @@ weight: 10
  • ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
-  • ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
+  • ExternalName - Maps the Service to the contents of the externalName field (e.g. `foo.bar.example.com`), by returning a CNAME record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of kube-dns, or CoreDNS version 0.0.8 or higher.
  • More information about the different types of Services can be found in the Using Source IP tutorial. Also see Connecting Applications with Services.

    Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another reason a Service may have no selector is that you are strictly using type: ExternalName.
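As a concrete illustration of the ExternalName case mentioned above, a minimal Service manifest might look like this (the metadata name is a placeholder; the target DNS name reuses the example from the list above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  # no selector: DNS answers for this Service with a CNAME to externalName
  type: ExternalName
  externalName: foo.bar.example.com
```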

    diff --git a/content/en/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg b/content/en/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg index e1f92dace0..b183377467 100644 --- a/content/en/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg +++ b/content/en/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg @@ -1,6 +1,32 @@ - - - diff --git a/content/en/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg b/content/en/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg index e0ae8fa504..cf8c922916 100644 --- a/content/en/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg +++ b/content/en/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg @@ -1,5 +1,32 @@ - - tag atau gunakan drop down untuk melakukan filter. Pilih header pada tabel untuk mengurutkan. - -

    -Filter berdasarkan Konsep:
    -Filter berdasarkan Obyek:
    -Filter berdasarkan Perintah: -

    - -
    diff --git a/content/id/docs/tasks/tools/_index.md b/content/id/docs/tasks/tools/_index.md index 9bbd67d8fb..8d9056c50f 100755 --- a/content/id/docs/tasks/tools/_index.md +++ b/content/id/docs/tasks/tools/_index.md @@ -1,5 +1,67 @@ --- title: "Menginstal Peralatan" +description: Peralatan untuk melakukan instalasi Kubernetes dalam komputer kamu. weight: 10 +no_list: true --- +## kubectl + + + +Perangkat baris perintah Kubernetes, [kubectl](/id/docs/reference/kubectl/kubectl/), +memungkinkan kamu untuk menjalankan perintah pada klaster Kubernetes. +Kamu dapat menggunakan kubectl untuk menerapkan aplikasi, memeriksa dan mengelola sumber daya klaster, +dan melihat *log* (catatan). Untuk informasi lebih lanjut termasuk daftar lengkap operasi kubectl, lihat +[referensi dokumentasi `kubectl`](/id/docs/reference/kubectl/). + +kubectl dapat diinstal pada berbagai platform Linux, macOS dan Windows. +Pilihlah sistem operasi pilihan kamu di bawah ini. + +- [Instalasi kubectl pada Linux](/en/docs/tasks/tools/install-kubectl-linux) +- [Instalasi kubectl pada macOS](/en/docs/tasks/tools/install-kubectl-macos) +- [Instalasi kubectl pada Windows](/en/docs/tasks/tools/install-kubectl-windows) + +## kind + +[`kind`](https://kind.sigs.k8s.io/docs/) memberikan kamu kemampuan untuk +menjalankan Kubernetes pada komputer lokal kamu. Perangkat ini membutuhkan +[Docker](https://docs.docker.com/get-docker/) yang sudah diinstal dan +terkonfigurasi. + +Halaman [Memulai Cepat](https://kind.sigs.k8s.io/docs/user/quick-start/) `kind` +memperlihatkan kepada kamu tentang apa yang perlu kamu lakukan untuk `kind` +berjalan dan bekerja. + +Melihat Memulai Cepat Kind + +## minikube + +Seperti halnya dengan `kind`, [`minikube`](https://minikube.sigs.k8s.io/) +merupakan perangkat yang memungkinkan kamu untuk menjalankan Kubernetes +secara lokal. `minikube` menjalankan sebuah klaster Kubernetes dengan +satu node saja dalam komputer pribadi (termasuk Windows, macOS dan Linux) +sehingga kamu dapat mencoba Kubernetes atau untuk pekerjaan pengembangan +sehari-hari. + +Kamu bisa mengikuti petunjuk resmi +[Memulai!](https://minikube.sigs.k8s.io/docs/start/) +`minikube` jika kamu ingin fokus agar perangkat ini terinstal. + +Lihat Panduan Memulai! Minikube + +Setelah kamu memiliki `minikube` yang bekerja, kamu bisa menggunakannya +untuk [menjalankan aplikasi contoh](/id/docs/tutorials/hello-minikube/). + +## kubeadm + +Kamu dapat menggunakan {{< glossary_tooltip term_id="kubeadm" text="kubeadm" >}} +untuk membuat dan mengatur klaster Kubernetes. +`kubeadm` menjalankan langkah-langkah yang diperlukan untuk mendapatkan klaster +dengan kelaikan dan keamanan minimum, aktif dan berjalan dengan cara yang mudah +bagi pengguna. + +[Instalasi kubeadm](/id/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) memperlihatkan tentang bagaimana melakukan instalasi kubeadm. +Setelah terinstal, kamu dapat menggunakannya untuk [membuat klaster](/id/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). 
+ +Lihat panduan instalasi kubeadm diff --git a/content/id/docs/tasks/tools/install-minikube.md b/content/id/docs/tasks/tools/install-minikube.md deleted file mode 100644 index b674c52b6d..0000000000 --- a/content/id/docs/tasks/tools/install-minikube.md +++ /dev/null @@ -1,254 +0,0 @@ ---- -title: Menginstal Minikube -content_type: task -weight: 20 -card: - name: tasks - weight: 10 ---- - - - -Halaman ini menunjukkan cara instalasi [Minikube](/id/docs/tutorials/hello-minikube), sebuah alat untuk menjalankan sebuah klaster Kubernetes dengan satu Node pada mesin virtual yang ada di komputer kamu. - - - -## {{% heading "prerequisites" %}} - - -{{< tabs name="minikube_before_you_begin" >}} -{{% tab name="Linux" %}} -Untuk mengecek jika virtualisasi didukung pada Linux, jalankan perintah berikut dan pastikan keluarannya tidak kosong: -``` -grep -E --color 'vmx|svm' /proc/cpuinfo -``` -{{% /tab %}} - -{{% tab name="macOS" %}} -Untuk mengecek jika virtualisasi didukung di macOS, jalankan perintah berikut di terminal kamu. -``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' -``` -Jika kamu melihat `VMX` pada hasil keluaran (seharusnya berwarna), artinya fitur VT-x sudah diaktifkan di mesin kamu. -{{% /tab %}} - -{{% tab name="Windows" %}} -Untuk mengecek jika virtualisasi didukung di Windows 8 ke atas, jalankan perintah berikut di terminal Windows atau _command prompt_ kamu. - -``` -systeminfo -``` -Jika kamu melihat keluaran berikut, maka virtualisasi didukung di Windows kamu. -``` -Hyper-V Requirements: VM Monitor Mode Extensions: Yes - Virtualization Enabled In Firmware: Yes - Second Level Address Translation: Yes - Data Execution Prevention Available: Yes -``` -Jika kamu melihat keluaran berikut, sistem kamu sudah memiliki sebuah Hypervisor yang terinstal dan kamu bisa melewati langkah berikutnya. -``` -Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed. -``` - - -{{% /tab %}} -{{< /tabs >}} - - - - - -## Menginstal minikube - -{{< tabs name="tab_with_md" >}} -{{% tab name="Linux" %}} - -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-linux). - -### Menginstal sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [KVM](https://www.linux-kvm.org/), yang juga menggunakan QEMU - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -Minikube juga mendukung sebuah opsi `--driver=none` untuk menjalankan komponen-komponen Kubernetes pada _host_, bukan di dalam VM. Untuk menggunakan _driver_ ini maka diperlukan [Docker](https://www.docker.com/products/docker-desktop) dan sebuah lingkungan Linux, bukan sebuah hypervisor. - -Jika kamu menggunakan _driver_ `none` pada Debian atau turunannya, gunakan paket (_package_) `.deb` untuk Docker daripada menggunakan paket _snap_-nya, karena paket _snap_ tidak berfungsi dengan Minikube. -Kamu bisa mengunduh paket `.deb` dari [Docker](https://www.docker.com/products/docker-desktop). - -{{< caution >}} -*Driver* VM `none` dapat menyebabkan masalah pada keamanan dan kehilangan data. Sebelum menggunakan opsi `--driver=none`, periksa [dokumentasi ini](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) untuk informasi lebih lanjut. -{{< /caution >}} - -Minikube juga mendukung opsi `vm-driver=podman` yang mirip dengan _driver_ Docker. 
Podman yang berjalan dengan hak istimewa _superuser_ (pengguna _root_) adalah cara terbaik untuk memastikan kontainer-kontainer kamu memiliki akses penuh ke semua fitur yang ada pada sistem kamu. - -{{< caution >}} -_Driver_ `podman` memerlukan kontainer yang berjalan dengan akses _root_ karena akun pengguna biasa tidak memiliki akses penuh ke semua fitur sistem operasi yang mungkin diperlukan oleh kontainer. -{{< /caution >}} - -### Menginstal Minikube menggunakan sebuah paket - -Tersedia paket uji coba untuk Minikube, kamu bisa menemukan paket untuk Linux (AMD64) di laman [rilisnya](https://github.com/kubernetes/minikube/releases) Minikube di GitHub. - -Gunakan alat instalasi paket pada distribusi Linux kamu untuk menginstal paket yang sesuai. - -### Menginstal Minikube melalui pengunduhan langsung - -Jika kamu tidak menginstal melalui sebuah paket, kamu bisa mengunduh sebuah _stand-alone binary_ dan menggunakannya. - - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ - && chmod +x minikube -``` - -Berikut adalah cara mudah untuk menambahkan program Minikube ke _path_ kamu. - -```shell -sudo mkdir -p /usr/local/bin/ -sudo install minikube /usr/local/bin/ -``` - -### Menginstal Minikube menggunakan Homebrew - -Sebagai alternatif, kamu bisa menginstal Minikube menggunakan Linux [Homebrew](https://docs.brew.sh/Homebrew-on-Linux): - -```shell -brew install minikube -``` - -{{% /tab %}} -{{% tab name="macOS" %}} -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-macos). - -### Instalasi sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [HyperKit](https://github.com/moby/hyperkit) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -• [VMware Fusion](https://www.vmware.com/products/fusion) - -### Instalasi Minikube -Cara paling mudah untuk menginstal Minikube pada macOS adalah menggunakan [Homebrew](https://brew.sh): - -```shell -brew install minikube -``` - -Kamu juga bisa menginstalnya dengan mengunduh _stand-alone binary_-nya: - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \ - && chmod +x minikube -``` - -Berikut adalah cara mudah untuk menambahkan program Minikube ke _path_ kamu. - -```shell -sudo mv minikube /usr/local/bin -``` - -{{% /tab %}} -{{% tab name="Windows" %}} -### Instalasi kubectl - -Pastikan kamu sudah menginstal kubectl. Kamu bisa menginstal kubectl dengan mengikuti instruksi pada halaman [Menginstal dan Menyiapkan kubectl](/id/docs/tasks/tools/install-kubectl/#menginstal-kubectl-pada-windows). - -### Menginstal sebuah Hypervisor - -Jika kamu belum menginstal sebuah Hypervisor, silakan instal salah satu dari: - -• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -{{< note >}} -Hyper-V hanya dapat berjalan pada tiga versi dari Windows 10: Windows 10 Enterprise, Windows 10 Professional, dan Windows 10 Education. 
-{{< /note >}} - -### Menginstal Minikube menggunakan Chocolatey - -Cara paling mudah untuk menginstal Minikube pada Windows adalah menggunakan [Chocolatey](https://chocolatey.org/) (jalankan sebagai administrator): - -```shell -choco install minikube -``` - -Setelah Minikube telah selesai diinstal, tutup sesi CLI dan hidupkan ulang CLI-nya. Minikube akan ditambahkan ke _path_ kamu secara otomatis. - -### Menginstal Minikube menggunakan sebuah program penginstal - -Untuk menginstal Minikube secara manual pada Windows menggunakan [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), unduh [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe) dan jalankan program penginstal tersebut. - -### Menginstal Minikube melalui pengunduhan langsung - -Untuk menginstal Minikube secara manual pada Windows, unduh [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), ubah nama menjadi `minikube.exe`, dan tambahkan ke _path_ kamu. - -{{% /tab %}} -{{< /tabs >}} - - -## Memastikan instalasi - -Untuk memastikan keberhasilan kedua instalasi hypervisor dan Minikube, kamu bisa menjalankan perintah berikut untuk memulai sebuah klaster Kubernetes lokal: -{{< note >}} - -Untuk pengaturan `--driver` dengan `minikube start`, masukkan nama hypervisor `` yang kamu instal dengan huruf kecil seperti yang ditunjukan dibawah. Daftar lengkap nilai `--driver` tersedia di [dokumentasi menentukan *driver* VM](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). - -{{< /note >}} - -```shell -minikube start --driver= -``` - -Setelah `minikube start` selesai, jalankan perintah di bawah untuk mengecek status klaster: - -```shell -minikube status -``` - -Jika klasternya berjalan, keluaran dari `minikube status` akan mirip seperti ini: - -``` -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -Setelah kamu memastikan bahwa Minikube berjalan sesuai dengan hypervisor yang telah kamu pilih, kamu dapat melanjutkan untuk menggunakan Minikube atau menghentikan klaster kamu. Untuk menghentikan klaster, jalankan: - -```shell -minikube stop -``` - -## Membersihkan *state* lokal {#cleanup-local-state} - -Jika sebelumnya kamu pernah menginstal Minikube, dan menjalankan: -```shell -minikube start -``` - -dan `minikube start` memberikan pesan kesalahan: -``` -machine does not exist -``` - -maka kamu perlu membersihkan _state_ lokal Minikube: -```shell -minikube delete -``` - -## {{% heading "whatsnext" %}} - - -* [Menjalanakan Kubernetes secara lokal dengan Minikube](/docs/setup/learning-environment/minikube/) diff --git a/content/id/docs/templates/feature-state-alpha.txt b/content/id/docs/templates/feature-state-alpha.txt deleted file mode 100644 index 35689778fa..0000000000 --- a/content/id/docs/templates/feature-state-alpha.txt +++ /dev/null @@ -1,7 +0,0 @@ -Fitur ini berada di dalam tingkatan *Alpha*, yang artinya: - -* Nama dari versi ini mengandung string `alpha` (misalnya, `v1alpha1`). -* Bisa jadi terdapat *bug*. Secara *default* fitur ini tidak diekspos. -* Ketersediaan untuk fitur yang ada bisa saja dihilangkan pada suatu waktu tanpa pemberitahuan sebelumnya. -* API yang ada mungkin saja berubah tanpa memperhatikan kompatibilitas dengan versi perangkat lunak sebelumnya. -* Hanya direkomendasikan untuk klaster yang digunakan untuk tujuan *testing*. 
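The deleted alpha-stage template above notes that alpha features are disabled by default and are only recommended for test clusters. As a hedged illustration only (not part of this patch), the sketch below shows roughly how such a gate can be switched on in a throwaway local cluster; the gate name `EphemeralContainers` is just an example of an alpha feature and the available gates vary by Kubernetes version.

```shell
# Illustrative sketch: start a disposable minikube cluster with one alpha
# feature gate enabled. Replace the gate name with the feature you are testing.
minikube start --feature-gates=EphemeralContainers=true

# Confirm the cluster is up before experimenting with the alpha feature.
minikube status
```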
diff --git a/content/id/docs/templates/feature-state-beta.txt b/content/id/docs/templates/feature-state-beta.txt deleted file mode 100644 index a70034e056..0000000000 --- a/content/id/docs/templates/feature-state-beta.txt +++ /dev/null @@ -1,10 +0,0 @@ -Fitur ini berada dalam tingkatan beta, yang artinya: - -* Nama dari versi ini mengandung string `beta` (misalnya `v2beta3`). -* Kode yang ada sudah melalui mekanisme *testing* yang cukup baik. Menggunakan fitur ini dianggap cukup aman. Fitur ini diekspos secara *default*. -* Ketersediaan untuk fitur secara menyeluruh tidak akan dihapus, meskipun begitu detail untuk suatu fitur bisa saja berubah. -* Skema dan/atau semantik dari suatu obyek mungkin saja berubah tanpa memerhatikan kompatibilitas pada rilis *beta* selanjutnya. - Jika hal ini terjadi, kami akan menyediakan suatu instruksi untuk melakukan migrasi di versi rilis selanjutnya. Hal ini bisa saja terdiri dari penghapusan, pengubahan, ataupun pembuatan - obyek API. Proses pengubahan mungkin saja membutuhkan pemikiran yang matang. Dampak proses ini bisa saja menyebabkan *downtime* aplikasi yang bergantung pada fitur ini. -* **Kami mohon untuk mencoba versi *beta* yang kami sediakan dan berikan masukan terhadap fitur yang kamu pakai! Apabila fitur tersebut sudah tidak lagi berada di dalam tingkatan *beta* perubahan yang kami buat terhadap fitur tersebut bisa jadi tidak lagi dapat digunakan** - diff --git a/content/id/docs/templates/feature-state-deprecated.txt b/content/id/docs/templates/feature-state-deprecated.txt deleted file mode 100644 index 599fe098cd..0000000000 --- a/content/id/docs/templates/feature-state-deprecated.txt +++ /dev/null @@ -1,2 +0,0 @@ - -Fitur ini *deprecated*. Untuk informasi lebih lanjut mengenai tingkatan ini, silahkan merujuk pada [Kubernetes Deprecation Policy](/docs/reference/deprecation-policy/) diff --git a/content/id/docs/templates/feaure-state-stable.txt b/content/id/docs/templates/feaure-state-stable.txt deleted file mode 100644 index ee4e17373f..0000000000 --- a/content/id/docs/templates/feaure-state-stable.txt +++ /dev/null @@ -1,4 +0,0 @@ -Fitur ini berada di dalam tingkatan stabil, yang artinya: - -* Versi ini mengandung string `vX` dimana `X` merupakan bilangan bulat. -* Fitur yang ada pada tingkatan ini akan selalu muncul di rilis berikutnya. 
diff --git a/content/id/docs/templates/index.md b/content/id/docs/templates/index.md deleted file mode 100644 index 9d7bccd143..0000000000 --- a/content/id/docs/templates/index.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -headless: true - -resources: -- src: "*alpha*" - title: "alpha" -- src: "*beta*" - title: "beta" -- src: "*deprecated*" - title: "deprecated" -- src: "*stable*" - title: "stable" ---- diff --git a/content/id/examples/application/job/cronjob.yaml b/content/id/examples/application/job/cronjob.yaml index c9d3893027..2ce31233c3 100644 --- a/content/id/examples/application/job/cronjob.yaml +++ b/content/id/examples/application/job/cronjob.yaml @@ -11,7 +11,7 @@ spec: containers: - name: hello image: busybox - args: + command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster diff --git a/content/id/docs/search.md b/content/id/search.md similarity index 100% rename from content/id/docs/search.md rename to content/id/search.md diff --git a/content/it/docs/concepts/containers/container-lifecycle-hooks.md b/content/it/docs/concepts/containers/container-lifecycle-hooks.md new file mode 100644 index 0000000000..59140ec333 --- /dev/null +++ b/content/it/docs/concepts/containers/container-lifecycle-hooks.md @@ -0,0 +1,128 @@ +--- +title: Container Lifecycle Hooks +content_type: concept +weight: 30 +--- + + +Questa pagina descrive come i Container gestiti con kubelet possono utilizzare il lifecycle +hook framework dei Container per l'esecuzione di codice eseguito in corrispondenza di alcuni +eventi durante il loro ciclo di vita. + + + +## Overview + +Analogamente a molti framework di linguaggi di programmazione che hanno degli hooks legati al ciclo di +vita dei componenti, come ad esempio Angular, Kubernetes fornisce ai Container degli hook legati al loro ciclo di +vita dei Container. +Gli hook consentono ai Container di essere consapevoli degli eventi durante il loro ciclo di +gestione ed eseguire del codice implementato in un handler quando il corrispondente hook viene +eseguito. + +## Container hooks + +Esistono due tipi di hook che vengono esposti ai Container: + +`PostStart` + +Questo hook viene eseguito successivamente alla creazione del container. +Tuttavia, non vi è garanzia che questo hook venga eseguito prima dell'ENTRYPOINT del container. +Non vengono passati parametri all'handler. + +`PreStop` + +Questo hook viene eseguito prima della terminazione di un container a causa di una richiesta API o +di un evento di gestione, come ad esempio un fallimento delle sonde di liveness/startup, preemption, +risorse contese e altro. Una chiamata all'hook di `PreStop` fallisce se il container è in stato +terminated o completed e l'hook deve finire prima che possa essere inviato il segnale di TERM per +fermare il container. Il conto alla rovescia per la terminazione del Pod (grace period) inizia prima dell'esecuzione +dell'hook `PreStop`, quindi indipendentemente dall'esito dell'handler, il container terminerà entro +il grace period impostato. Non vengono passati parametri all'handler. + +Una descrizione più dettagliata riguardante al processo di terminazione dei Pod può essere trovata in +[Terminazione dei Pod](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). + +### Implementazione degli hook handler + +I Container possono accedere a un hook implementando e registrando un handler per tale hook. 
+Ci sono due tipi di handler che possono essere implementati per i Container: + +* Exec - Esegue un comando specifico, tipo `pre-stop.sh`, all'interno dei cgroup e namespace del Container. +Le risorse consumate dal comando vengono contate sul Container. +* HTTP - Esegue una richiesta HTTP verso un endpoint specifico del Container. + +### Esecuzione dell'hook handler + +Quando viene richiamato l'hook legato al lifecycle del Container, il sistema di gestione di Kubernetes +esegue l'handler secondo l'azione dell'hook, `httpGet` e `tcpSocket` vengono eseguiti dal processo kubelet, +mentre `exec` è eseguito nel Container. + +Le chiamate agli handler degli hook sono sincrone rispetto al contesto del Pod che contiene il Container. +Questo significa che per un hook `PostStart`, l'ENTRYPOINT e l'hook si attivano in modo asincrono. +Tuttavia, se l'hook impiega troppo tempo per essere eseguito o si blocca, il container non può raggiungere lo +stato di `running`. + +Gli hook di `PreStop` non vengono eseguiti in modo asincrono dall'evento di stop del container; l'hook +deve completare la sua esecuzione prima che l'evento TERM possa essere inviato. Se un hook di `PreStop` +si blocca durante la sua esecuzione, la fase del Pod rimarrà `Terminating` finchè il Pod non sarà rimosso forzatamente +dopo la scadenza del suo `terminationGracePeriodSeconds`. Questo grace period si applica al tempo totale +necessario per effettuare sia l'esecuzione dell'hook di `PreStop` che per l'arresto normale del container. +Se, per esempio, il `terminationGracePeriodSeconds` è di 60, e l'hook impiega 55 secondi per essere completato, +e il container impiega 10 secondi per fermarsi normalmente dopo aver ricevuto il segnale, allora il container +verrà terminato prima di poter completare il suo arresto, poiché `terminationGracePeriodSeconds` è inferiore al tempo +totale (55+10) necessario perché queste due cose accadano. + +Se un hook `PostStart` o `PreStop` fallisce, allora il container viene terminato. + +Gli utenti dovrebbero mantenere i loro handler degli hook i più leggeri possibili. +Ci sono casi, tuttavia, in cui i comandi di lunga durata hanno senso, +come il salvataggio dello stato del container prima della sua fine. + +### Garanzia della chiamata dell'hook + +La chiamata degli hook avviene *almeno una volta*, il che significa +che un hook può essere chiamato più volte da un dato evento, come per `PostStart` +o `PreStop`. +Sta all'implementazione dell'hook gestire correttamente questo aspetto. + +Generalmente, vengono effettuate singole chiamate agli hook. +Se, per esempio, la destinazione di hook HTTP non è momentaneamente in grado di ricevere traffico, +non c'è alcun tentativo di re invio. +In alcuni rari casi, tuttavia, può verificarsi una doppia chiamata. +Per esempio, se un kubelet si riavvia nel mentre dell'invio di un hook, questo potrebbe essere +chiamato per una seconda volta dopo che il kubelet è tornato in funzione. + +### Debugging Hook handlers + +I log di un handler di hook non sono esposti negli eventi del Pod. +Se un handler fallisce per qualche ragione, trasmette un evento. +Per il `PostStart`, questo è l'evento di `FailedPostStartHook`, +e per il `PreStop`, questo è l'evento di `FailedPreStopHook`. +Puoi vedere questi eventi eseguendo `kubectl describe pod `. 
+Ecco alcuni esempi di output di eventi dall'esecuzione di questo comando: + +``` +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined] + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567 + 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1 + 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1 + 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1" + 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook +``` + + + +## {{% heading "whatsnext" %}} + + +* Approfondisci [Container environment](/docs/concepts/containers/container-environment/). +* Esegui un tutorial su come + [definire degli handlers per i Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). + diff --git a/content/ja/docs/_index.md b/content/ja/docs/_index.md index a1651d8457..ca2621b2c2 100644 --- a/content/ja/docs/_index.md +++ b/content/ja/docs/_index.md @@ -1,3 +1,4 @@ --- -title: ドキュメント +linktitle: Kubernetesドキュメント +title: ドキュメント --- diff --git a/content/ja/docs/concepts/architecture/nodes.md b/content/ja/docs/concepts/architecture/nodes.md index 05793ebde8..9c47531a37 100644 --- a/content/ja/docs/concepts/architecture/nodes.md +++ b/content/ja/docs/concepts/architecture/nodes.md @@ -6,11 +6,12 @@ weight: 10 -Kubernetesはコンテナを_Node_上で実行されるPodに配置することで、ワークロードを実行します。 +Kubernetesはコンテナを _Node_ 上で実行されるPodに配置することで、ワークロードを実行します。 ノードはクラスターによりますが、1つのVMまたは物理的なマシンです。 各ノードは{{< glossary_tooltip text="Pod" term_id="pod" >}}やそれを制御する{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}を実行するのに必要なサービスを含んでいます。 通常、1つのクラスターで複数のノードを持ちます。学習用途やリソースの制限がある環境では、1ノードかもしれません。 + 1つのノード上の[コンポーネント](/ja/docs/concepts/overview/components/#node-components)には、{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}、{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}、{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}が含まれます。 @@ -22,7 +23,7 @@ Kubernetesはコンテナを_Node_上で実行されるPodに配置すること 1. ノード上のkubeletが、コントロールプレーンに自己登録する。 2. 
あなた、もしくは他のユーザーが手動でNodeオブジェクトを追加する。 -Nodeオブジェクトの作成、もしくはノード上のkubeketによる自己登録の後、コントロールプレーンはNodeオブジェクトが有効かチェックします。例えば、下記のjsonマニフェストでノードを作成してみましょう。 +Nodeオブジェクトの作成、もしくはノード上のkubeketによる自己登録の後、コントロールプレーンはNodeオブジェクトが有効かチェックします。例えば、下記のjsonマニフェストでノードを作成してみましょう: ```json { @@ -72,9 +73,9 @@ kubeletのフラグ `--register-node`がtrue(デフォルト)のとき、kub 管理者が手動でNodeオブジェクトを作成したい場合は、kubeletフラグ `--register-node = false`を設定してください。 管理者は`--register-node`の設定に関係なくNodeオブジェクトを変更することができます。 -変更には、ノードにラベルを設定し、それをunschedulableとしてマークすることが含まれます。 +例えば、ノードにラベルを設定し、それをunschedulableとしてマークすることが含まれます。 -ノード上のラベルは、スケジューリングを制御するためにPod上のノードセレクタと組み合わせて使用できます。 +ノード上のラベルは、スケジューリングを制御するためにPod上のノードセレクターと組み合わせて使用できます。 例えば、Podをノードのサブセットでのみ実行する資格があるように制限します。 ノードをunschedulableとしてマークすると、新しいPodがそのノードにスケジュールされるのを防ぎますが、ノード上の既存のPodには影響しません。 @@ -124,7 +125,7 @@ kubectl describe node <ノード名をここに挿入> {{< table caption = "ノードのConditionと、各condition適用時の概要" >}} | ノードのCondition | 概要 | |----------------------|-------------| -| `Ready` | ノードの状態がHealthyでPodを配置可能な場合に`True`になります。ノードの状態に問題があり、Podが配置できない場合に`False`になります。ノードコントローラーが、`node-monitor-grace-period`で設定された時間内(デフォルトでは40秒)に該当ノードと疎通できない場合、`Unknown`になります。 | +| `Ready` | ノードの状態が有効でPodを配置可能な場合に`True`になります。ノードの状態に問題があり、Podが配置できない場合に`False`になります。ノードコントローラーが、`node-monitor-grace-period`で設定された時間内(デフォルトでは40秒)に該当ノードと疎通できない場合、`Unknown`になります。 | | `DiskPressure` | ノードのディスク容量が圧迫されているときに`True`になります。圧迫とは、ディスクの空き容量が少ないことを指します。それ以外のときは`False`です。 | | `MemoryPressure` | ノードのメモリが圧迫されているときに`True`になります。圧迫とは、メモリの空き容量が少ないことを指します。それ以外のときは`False`です。 | | `PIDPressure` | プロセスが圧迫されているときに`True`になります。圧迫とは、プロセス数が多すぎることを指します。それ以外のときは`False`です。 | @@ -241,7 +242,7 @@ kubeletが`NodeStatus`とLeaseオブジェクトの作成および更新を担 このような場合、ノードコントローラーはマスター接続に問題があると見なし、接続が回復するまですべての退役を停止します。 ノードコントローラーは、Podがtaintを許容しない場合、 `NoExecute`のtaintを持つノード上で実行されているPodを排除する責務もあります。 -さらに、デフォルトで無効になっているアルファ機能として、ノードコントローラーはノードに到達できない、または準備ができていないなどのノードの問題に対応する{{< glossary_tooltip text="taint" term_id="taint" >}}を追加する責務があります。これはスケジューラーが、問題のあるノードにPodを配置しない事を意味しています。 +さらに、ノードコントローラーはノードに到達できない、または準備ができていないなどのノードの問題に対応する{{< glossary_tooltip text="taint" term_id="taint" >}}を追加する責務があります。これはスケジューラーが、問題のあるノードにPodを配置しない事を意味しています。 {{< caution >}} `kubectl cordon`はノードに'unschedulable'としてマークします。それはロードバランサーのターゲットリストからノードを削除するという @@ -254,8 +255,7 @@ Nodeオブジェクトはノードのリソースキャパシティ(CPUの数 [自己登録](#self-registration-of-nodes)したノードは、Nodeオブジェクトを作成するときにキャパシティを報告します。 [手動によるノード管理](#manual-node-administration)を実行している場合は、ノードを追加するときにキャパシティを設定する必要があります。 -Kubernetes{{< glossary_tooltip text="スケジューラー" term_id="kube-scheduler" >}}は、ノード上のすべてのPodに十分なリソースがあることを確認します。 -ノード上のコンテナが要求するリソースの合計がノードキャパシティ以下であることを確認します。 +Kubernetes{{< glossary_tooltip text="スケジューラー" term_id="kube-scheduler" >}}は、ノード上のすべてのPodに十分なリソースがあることを確認します。スケジューラーは、ノード上のコンテナが要求するリソースの合計がノードキャパシティ以下であることを確認します。 これは、kubeletによって管理されたすべてのコンテナを含みますが、コンテナランタイムによって直接開始されたコンテナやkubeletの制御外で実行されているプロセスは含みません。 {{< note >}} diff --git a/content/ja/docs/concepts/cluster-administration/manage-deployment.md b/content/ja/docs/concepts/cluster-administration/manage-deployment.md index 90f96547d5..cb9c7c0fc3 100644 --- a/content/ja/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/ja/docs/concepts/cluster-administration/manage-deployment.md @@ -237,7 +237,7 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m image: gb-frontend:v3 ``` -そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。 +そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。 ```yaml name: frontend-canary diff --git 
a/content/ja/docs/concepts/cluster-administration/networking.md b/content/ja/docs/concepts/cluster-administration/networking.md index 05af3690d1..b81c51f437 100644 --- a/content/ja/docs/concepts/cluster-administration/networking.md +++ b/content/ja/docs/concepts/cluster-administration/networking.md @@ -45,7 +45,7 @@ KubernetesのIPアドレスは`Pod`スコープに存在します。`Pod`内の `Pod`に転送する`ノード`自体のポート(ホストポートと呼ばれる)を要求することは可能ですが、これは非常にニッチな操作です。このポート転送の実装方法も、コンテナランタイムの詳細部分です。`Pod`自体は、ホストポートの有無を認識しません。 -## Kubernetesネットワークモデルの実装方法 +## Kubernetesネットワークモデルの実装方法 {#how-to-implement-the-kubernetes-networking-model} このネットワークモデルを実装する方法はいくつかあります。このドキュメントは、こうした方法を網羅的にはカバーしませんが、いくつかの技術の紹介として、また出発点として役立つことを願っています。 diff --git a/content/ja/docs/concepts/configuration/manage-resources-containers.md b/content/ja/docs/concepts/configuration/manage-resources-containers.md index f94513b8e7..761999d937 100644 --- a/content/ja/docs/concepts/configuration/manage-resources-containers.md +++ b/content/ja/docs/concepts/configuration/manage-resources-containers.md @@ -84,7 +84,7 @@ CPUは常に相対量としてではなく、絶対量として要求されま ### メモリーの意味 `メモリー`の制限と要求はバイト単位で測定されます。 -E、P、T、G、M、Kのいずれかのサフィックスを使用して、メモリーを整数または固定小数点整数として表すことができます。 +E、P、T、G、M、Kのいずれかのサフィックスを使用して、メモリーを整数または固定小数点数として表すことができます。 また、Ei、Pi、Ti、Gi、Mi、Kiのような2の累乗の値を使用することもできます。 たとえば、以下はほぼ同じ値を表しています。 @@ -104,11 +104,9 @@ metadata: name: frontend spec: containers: - - name: db - image: mysql + - name: app + image: images.my-company.example/app:v4 env: - - name: MYSQL_ROOT_PASSWORD - value: "password" resources: requests: memory: "64Mi" @@ -116,8 +114,8 @@ spec: limits: memory: "128Mi" cpu: "500m" - - name: wp - image: wordpress + - name: log-aggregator + image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" @@ -185,7 +183,7 @@ kubeletは、ローカルのエフェメラルストレージを使用して、P また、kubeletはこの種類のストレージを使用して、[Nodeレベルのコンテナログ](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)、コンテナイメージ、実行中のコンテナの書き込み可能なレイヤーを保持します。 {{< caution >}} -Nodeに障害が発生すると、そのエフェメラルストレージ内のデータが失われる可能性があります。 +Nodeに障害が発生すると、そのエフェメラルストレージ内のデータが失われる可能性があります。 アプリケーションは、ローカルのエフェメラルストレージにパフォーマンスのサービス品質保証(ディスクのIOPSなど)を期待することはできません。 {{< /caution >}} @@ -242,7 +240,7 @@ Podの各コンテナは、次の1つ以上を指定できます。 * `spec.containers[].resources.requests.ephemeral-storage` `ephemeral-storage`の制限と要求はバイト単位で記します。 -ストレージは、次のいずれかの接尾辞を使用して、通常の整数または固定小数点整数として表すことができます。 +ストレージは、次のいずれかの接尾辞を使用して、通常の整数または固定小数点数として表すことができます。 E、P、T、G、M、K。Ei、Pi、Ti、Gi、Mi、Kiの2のべき乗を使用することもできます。 たとえば、以下はほぼ同じ値を表しています。 @@ -262,18 +260,15 @@ metadata: name: frontend spec: containers: - - name: db - image: mysql - env: - - name: MYSQL_ROOT_PASSWORD - value: "password" + - name: app + image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: "2Gi" limits: ephemeral-storage: "4Gi" - - name: wp - image: wordpress + - name: log-aggregator + image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" @@ -300,6 +295,7 @@ kubeletがローカルのエフェメラルストレージをリソースとし Podが許可するよりも多くのエフェメラルストレージを使用している場合、kubeletはPodの排出をトリガーするシグナルを設定します。 コンテナレベルの分離の場合、コンテナの書き込み可能なレイヤーとログ使用量がストレージの制限を超えると、kubeletはPodに排出のマークを付けます。 + Podレベルの分離の場合、kubeletはPod内のコンテナの制限を合計し、Podの全体的なストレージ制限を計算します。 このケースでは、すべてのコンテナからのローカルのエフェメラルストレージの使用量とPodの`emptyDir`ボリュームの合計がPod全体のストレージ制限を超過する場合、 kubeletはPodをまた排出対象としてマークします。 @@ -345,7 +341,7 @@ Kubernetesでは、`1048576`から始まるプロジェクトIDを使用しま Kubernetesが使用しないようにする必要があります。 クォータはディレクトリスキャンよりも高速で正確です。 -ディレクトリがプロジェクトに割り当てられると、ディレクトリ配下に作成されたファイルはすべてそのプロジェクト内に作成され、カーネルはそのプロジェクト内のファイルによって使用されているブロックの数を追跡するだけです。 
+ディレクトリがプロジェクトに割り当てられると、ディレクトリ配下に作成されたファイルはすべてそのプロジェクト内に作成され、カーネルはそのプロジェクト内のファイルによって使用されているブロックの数を追跡するだけです。 ファイルが作成されて削除されても、開いているファイルディスクリプタがあれば、スペースを消費し続けます。 クォータトラッキングはそのスペースを正確に記録しますが、ディレクトリスキャンは削除されたファイルが使用するストレージを見落としてしまいます。 @@ -354,7 +350,7 @@ Kubernetesが使用しないようにする必要があります。 * kubelet設定で、`LocalocalStorpactionCapactionIsolationFSQuotaMonitoring=true`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gate/)を有効にします。 * ルートファイルシステム(またはオプションのランタイムファイルシステム))がプロジェクトクォータを有効にしていることを確認してください。 - すべてのXFSファイルシステムはプロジェクトクォータをサポートしています。 + すべてのXFSファイルシステムはプロジェクトクォータをサポートしています。 ext4ファイルシステムでは、ファイルシステムがマウントされていない間は、プロジェクトクォータ追跡機能を有効にする必要があります。 ```bash # ext4の場合、/dev/block-deviceがマウントされていません diff --git a/content/ja/docs/concepts/overview/what-is-kubernetes.md b/content/ja/docs/concepts/overview/what-is-kubernetes.md index d1b792c0da..dab17c9b1b 100644 --- a/content/ja/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ja/docs/concepts/overview/what-is-kubernetes.md @@ -17,7 +17,7 @@ card: Kubernetesは、宣言的な構成管理と自動化を促進し、コンテナ化されたワークロードやサービスを管理するための、ポータブルで拡張性のあるオープンソースのプラットフォームです。Kubernetesは巨大で急速に成長しているエコシステムを備えており、それらのサービス、サポート、ツールは幅広い形で利用可能です。 -Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。 +Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。 ## 過去を振り返ってみると @@ -57,7 +57,7 @@ Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロッ Kubernetesは以下を提供します。 * **サービスディスカバリーと負荷分散** -Kubernetesは、DNS名または独自のIPアドレスを使ってコンテナを公開することができます。コンテナへのトラフィックが多い場合は、Kubernetesは負荷分散し、ネットワークトラフィックを振り分けることができるたため、デプロイが安定します。 +Kubernetesは、DNS名または独自のIPアドレスを使ってコンテナを公開することができます。コンテナへのトラフィックが多い場合は、Kubernetesは負荷分散し、ネットワークトラフィックを振り分けることができるため、デプロイが安定します。 * **ストレージ オーケストレーション** Kubernetesは、ローカルストレージやパブリッククラウドプロバイダーなど、選択したストレージシステムを自動でマウントすることができます。 * **自動化されたロールアウトとロールバック** diff --git a/content/ja/docs/concepts/policy/resource-quotas.md b/content/ja/docs/concepts/policy/resource-quotas.md index e7368a8f94..7b00056fcf 100644 --- a/content/ja/docs/concepts/policy/resource-quotas.md +++ b/content/ja/docs/concepts/policy/resource-quotas.md @@ -22,7 +22,7 @@ weight: 10 - 異なる名前空間で異なるチームが存在するとき。現時点ではこれは自主的なものですが、将来的にはACLsを介してリソースクォータの設定を強制するように計画されています。 - 管理者は各名前空間で1つの`ResourceQuota`を作成します。 - ユーザーが名前空間内でリソース(Pod、Serviceなど)を作成し、クォータシステムが`ResourceQuota`によって定義されたハードリソースリミットを超えないことを保証するために、リソースの使用量をトラッキングします。 -- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。 +- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。 - `cpu`や`memory`といったコンピューターリソースに対するクォータが名前空間内で有効になっているとき、ユーザーはそれらの値に対する`requests`や`limits`を設定する必要があります。設定しないとクォータシステムがPodの作成を拒否します。 ヒント: コンピュートリソースの要求を設定しないPodに対してデフォルト値を強制するために、`LimitRanger`アドミッションコントローラーを使用してください。この問題を解決する例は[walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)で参照できます。 `ResourceQuota`のオブジェクト名は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります. 
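The resource-quota passage above describes how a per-namespace `ResourceQuota` makes requests that exceed the hard limits fail with `403 FORBIDDEN`, and how compute quotas require Pods to declare requests and limits. A minimal sketch of trying that out, using a scratch namespace and arbitrary example names; it is illustrative only and not part of this patch.

```shell
# Create a throwaway namespace and attach a quota with hard limits to it.
kubectl create namespace quota-demo
kubectl create quota demo-quota --namespace=quota-demo \
  --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi,pods=4

# Show current usage against each hard limit. With cpu/memory quotas active,
# Pods created in this namespace without requests/limits are rejected.
kubectl describe quota demo-quota --namespace=quota-demo
```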
diff --git a/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md index 0733690f0b..18b767cbb4 100644 --- a/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -140,9 +140,9 @@ Nodeアフィニティでは、`In`、`NotIn`、`Exists`、`DoesNotExist`、`Gt` `nodeSelector`と`nodeAffinity`の両方を指定した場合、Podは**両方の**条件を満たすNodeにスケジュールされます。 -`nodeAffinity`内で複数の`nodeSelectorTerms`を指定した場合、Podは**全ての**`nodeSelectorTerms`を満たしたNodeへスケジュールされます。 +`nodeAffinity`内で複数の`nodeSelectorTerms`を指定した場合、Podは**いずれかの**`nodeSelectorTerms`を満たしたNodeへスケジュールされます。 -`nodeSelectorTerms`内で複数の`matchExpressions`を指定した場合にはPodは**いずれかの**`matchExpressions`を満たしたNodeへスケジュールされます。 +`nodeSelectorTerms`内で複数の`matchExpressions`を指定した場合にはPodは**全ての**`matchExpressions`を満たしたNodeへスケジュールされます。 PodがスケジュールされたNodeのラベルを削除したり変更しても、Podは削除されません。 言い換えると、アフィニティはPodをスケジュールする際にのみ考慮されます。 diff --git a/content/ja/docs/concepts/security/overview.md b/content/ja/docs/concepts/security/overview.md index b50a4ea1a5..0157b28f78 100644 --- a/content/ja/docs/concepts/security/overview.md +++ b/content/ja/docs/concepts/security/overview.md @@ -77,7 +77,7 @@ Kubernetesを保護する為には2つの懸念事項があります。 ### クラスター内のコンポーネント(アプリケーション) {#cluster-applications} -アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。 +アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。 ワークロードセキュリティに関する懸念事項 | 推奨事項 | diff --git a/content/ja/docs/concepts/services-networking/endpoint-slices.md b/content/ja/docs/concepts/services-networking/endpoint-slices.md index 5a678baec7..24a588c29e 100644 --- a/content/ja/docs/concepts/services-networking/endpoint-slices.md +++ b/content/ja/docs/concepts/services-networking/endpoint-slices.md @@ -20,7 +20,7 @@ Serviceのすべてのネットワークエンドポイントが単一のEndpoin ## EndpointSliceリソース {#endpointslice-resource} -Kubernetes内ではEndpointSliceにはネットワークエンドポイントの集合へのリファレンスが含まれます。EndpointSliceコントローラーは、{{< glossary_tooltip text="セレクター" term_id="selector" >}}が指定されると、Kubernetes Serviceに対するEndpointSliceを自動的に作成します。これらのEndpointSliceにはServiceセレクターに一致する任意のPodへのリファレクンスが含まれます。EndpointSliceはネットワークエンドポイントをユニークなServiceとPortの組み合わせでグループ化します。EndpointSliceオブジェクトの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。 +Kubernetes内ではEndpointSliceにはネットワークエンドポイントの集合へのリファレンスが含まれます。EndpointSliceコントローラーは、{{< glossary_tooltip text="セレクター" term_id="selector" >}}が指定されると、Kubernetes Serviceに対するEndpointSliceを自動的に作成します。これらのEndpointSliceにはServiceセレクターに一致する任意のPodへのリファレンスが含まれます。EndpointSliceはネットワークエンドポイントをユニークなServiceとPortの組み合わせでグループ化します。EndpointSliceオブジェクトの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。 一例として、以下に`example`というKubernetes Serviceに対するサンプルのEndpointSliceリソースを示します。 diff --git a/content/ja/docs/concepts/services-networking/ingress-controllers.md b/content/ja/docs/concepts/services-networking/ingress-controllers.md index f1af46c3a2..cbd652d17e 100644 --- a/content/ja/docs/concepts/services-networking/ingress-controllers.md +++ b/content/ja/docs/concepts/services-networking/ingress-controllers.md @@ -37,7 +37,7 @@ Ingressリソースが動作するためには、クラスターでIngressコン ## 
複数のIngressコントローラーの使用 {#using-multiple-ingress-controllers} -[Ingressコントローラーは、好きな数だけ](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers))クラスターにデプロイすることができます。Ingressを作成する際には、クラスター内に複数のIngressコントローラーが存在する場合にどのIngressコントローラーを使用するかを示すために適切な[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)のアノテーションを指定します。 +[Ingressコントローラーは、好きな数だけ](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)クラスターにデプロイすることができます。Ingressを作成する際には、クラスター内に複数のIngressコントローラーが存在する場合にどのIngressコントローラーを使用するかを示すために適切な[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)のアノテーションを指定します。 クラスを定義しない場合、クラウドプロバイダーはデフォルトのIngressコントローラーを使用する場合があります。 diff --git a/content/ja/docs/concepts/services-networking/service.md b/content/ja/docs/concepts/services-networking/service.md index 2b3d6b26e0..d6894e959e 100644 --- a/content/ja/docs/concepts/services-networking/service.md +++ b/content/ja/docs/concepts/services-networking/service.md @@ -712,7 +712,7 @@ NLBの背後にあるインスタンスに対してクライアントのトラ |------|----------|---------|------------|---------------------| | ヘルスチェック | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\ | | クライアントのトラフィック | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | -| MTCによるサービスディスカバリー | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | +| MTUによるサービスディスカバリー | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | どのクライアントIPがNLBにアクセス可能かを制限するためには、`loadBalancerSourceRanges`を指定してください。 diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md index 94e04e56ee..e2d720323d 100644 --- a/content/ja/docs/concepts/workloads/controllers/deployment.md +++ b/content/ja/docs/concepts/workloads/controllers/deployment.md @@ -31,8 +31,9 @@ Deploymentによって作成されたReplicaSetを管理しないでください * ReplicaSetをロールアウトするために[Deploymentの作成](#creating-a-deployment)を行う: ReplicaSetはバックグラウンドでPodを作成します。Podの作成が完了したかどうかは、ロールアウトのステータスを確認してください。 * DeploymentのPodTemplateSpecを更新することにより[Podの新しい状態を宣言する](#updating-a-deployment): 新しいReplicaSetが作成され、Deploymentは指定された頻度で古いReplicaSetから新しいReplicaSetへのPodの移行を管理します。新しいReplicaSetはDeploymentのリビジョンを更新します。 * Deploymentの現在の状態が不安定な場合、[Deploymentのロールバック](#rolling-back-a-deployment)をする: ロールバックによる各更新作業は、Deploymentのリビジョンを更新します。 -* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する +* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する。 * PodTemplateSpecに対する複数の修正を適用するために[Deploymentを停止(Pause)し](#pausing-and-resuming-a-deployment)、それを再開して新しいロールアウトを開始します。 +* [Deploymentのステータス](#deployment-status) をロールアウトが失敗したサインとして利用する。 * 今後必要としない[古いReplicaSetのクリーンアップ](#clean-up-policy) ## Deploymentの作成 {#creating-a-deployment} @@ -82,7 +83,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください ``` クラスターにてDeploymentを調査するとき、以下のフィールドが出力されます。 * `NAME`は、クラスター内にあるDeploymentの名前一覧です。 - * `READY`は、ユーザーが使用できるアプリケーションのレプリカの数です。 + * `READY`は、ユーザーが使用できるアプリケーションのレプリカの数です。使用可能な数/理想的な数の形式で表示されます。 * `UP-TO-DATE`は、理想的な状態を満たすためにアップデートが完了したレプリカの数です。 * `AVAILABLE`は、ユーザーが利用可能なレプリカの数です。 * `AGE`は、アプリケーションが稼働してからの時間です。 @@ -133,7 +134,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください {{< note >}} 
Deploymentに対して適切なセレクターとPodテンプレートのラベルを設定する必要があります(このケースでは`app: nginx`)。 -ラベルやセレクターを他のコントローラーと重複させないでください(他のDeploymentやStatefulSetを含む)。Kubernetesはユーザーがラベルを重複させることを止めないため、複数のコントローラーでセレクターの重複が発生すると、コントローラー間で衝突し予期せぬふるまいをすることになります。 +ラベルやセレクターを他のコントローラーと重複させないでください(他のDeploymentやStatefulSetを含む)。Kubernetesはユーザーがラベルを重複させることを阻止しないため、複数のコントローラーでセレクターの重複が発生すると、コントローラー間で衝突し予期せぬふるまいをすることになります。 {{< /note >}} ### pod-template-hashラベル @@ -146,7 +147,7 @@ Deploymentに対して適切なセレクターとPodテンプレートのラベ このラベルはDeploymentが管理するReplicaSetが重複しないことを保証します。このラベルはReplicaSetの`PodTemplate`をハッシュ化することにより生成され、生成されたハッシュ値はラベル値としてReplicaSetセレクター、Podテンプレートラベル、ReplicaSetが作成した全てのPodに対して追加されます。 -## Deploymentの更新 +## Deploymentの更新 {#updating-a-deployment} {{< note >}} Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナーイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 @@ -589,13 +590,11 @@ Deploymentのローリングアップデートは、同時に複数のバージ ``` * クラスター内で、解決できない新しいイメージに更新します。 -* You update to a new image which happens to be unresolvable from inside the cluster. ```shell kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag ``` 実行結果は以下のとおりです。 - The output is similar to this: ``` deployment.apps/nginx-deployment image updated ``` @@ -604,7 +603,8 @@ Deploymentのローリングアップデートは、同時に複数のバージ ```shell kubectl get rs ``` - 実行結果は以下のとおりです。 + + 実行結果は以下のとおりです。 ``` NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 5 5 0 9s @@ -615,24 +615,26 @@ Deploymentのローリングアップデートは、同時に複数のバージ 上記の例では、3つのレプリカが古いReplicaSetに追加され、2つのレプリカが新しいReplicaSetに追加されました。ロールアウトの処理では、新しいレプリカ数のPodが正常になったと仮定すると、最終的に新しいReplicaSetに全てのレプリカを移動させます。これを確認するためには以下のコマンドを実行して下さい。 - ```shell - kubectl get deploy - ``` - 実行結果は以下のとおりです。 - ``` - NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE - nginx-deployment 15 18 7 8 7m - ``` -  ロールアウトのステータスでレプリカがどのように各ReplicaSetに追加されるか確認できます。 - ```shell - kubectl get rs - ``` - 実行結果は以下のとおりです。 - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1989198191 7 7 0 7m - nginx-deployment-618515232 11 11 11 7m - ``` +```shell +kubectl get deploy +``` + +実行結果は以下のとおりです。 +``` +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +nginx-deployment 15 18 7 8 7m +``` +ロールアウトのステータスでレプリカがどのように各ReplicaSetに追加されるか確認できます。 +```shell +kubectl get rs +``` + +実行結果は以下のとおりです。 +``` +NAME DESIRED CURRENT READY AGE +nginx-deployment-1989198191 7 7 0 7m +nginx-deployment-618515232 11 11 11 7m +``` ## Deployment更新の一時停止と再開 {#pausing-and-resuming-a-deployment} @@ -752,7 +754,7 @@ Deploymentのローリングアップデートは、同時に複数のバージ nginx-3926361531 3 3 3 28s ``` {{< note >}} -一時停止したDeploymentの稼働を再開させない限り、Deploymentをロールバックすることはできません。 +Deploymentの稼働を再開させない限り、一時停止したDeploymentをロールバックすることはできません。 {{< /note >}} ## Deploymentのステータス {#deployment-status} @@ -937,13 +939,13 @@ Deploymentが管理する古いReplicaSetをいくつ保持するかを指定す ## カナリアパターンによるデプロイ -Deploymentを使って一部のユーザーやサーバーに対してリリースのロールアウトをしたい場合、[リソースの管理](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments)に記載されているカナリアパターンに従って、リリース毎に1つずつ、複数のDeploymentを作成できます。 +Deploymentを使って一部のユーザーやサーバーに対してリリースのロールアウトをしたい場合、[リソースの管理](/ja/docs/concepts/cluster-administration/manage-deployment/#canary-deployments-カナリアデプロイ)に記載されているカナリアパターンに従って、リリース毎に1つずつ、複数のDeploymentを作成できます。 ## Deployment Specの記述 他の全てのKubernetesの設定と同様に、Deploymentは`.apiVersion`、`.kind`や`.metadata`フィールドを必要とします。 -設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 
-Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。 +設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 +Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。 Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要とします。 ### Podテンプレート @@ -992,25 +994,25 @@ Deploymentのセレクターに一致するラベルを持つPodを直接作成 `.spec.strategy.type==RollingUpdate`と指定されているとき、DeploymentはローリングアップデートによりPodを更新します。ローリングアップデートの処理をコントロールするために`maxUnavailable`と`maxSurge`を指定できます。 -##### maxUnavailable +##### Max Unavailable {#max-unavailable} `.spec.strategy.rollingUpdate.maxUnavailable`はオプションのフィールドで、更新処理において利用不可となる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り捨てされて計算されます。`.spec.strategy.rollingUpdate.maxSurge`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。 例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると古いReplicaSetはすぐに理想状態の70%にスケールダウンされます。一度新しいPodが稼働できる状態になると、古いReplicaSetはさらにスケールダウンされ、続いて新しいReplicaSetがスケールアップされます。この間、利用可能なPodの総数は理想状態のPodの少なくとも70%以上になるように保証されます。 -##### maxSurge +##### Max Surge {#max-surge} `.spec.strategy.rollingUpdate.maxSurge`はオプションのフィールドで、理想状態のPod数を超えて作成できる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り上げで計算されます。`MaxUnavailable`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。 例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると新しいReplicaSetはすぐに更新されます。このとき古いPodと新しいPodの総数は理想状態の130%を超えないように更新されます。一度古いPodが削除されると、新しいReplicaSetはさらにスケールアップされます。この間、利用可能なPodの総数は理想状態のPodに対して最大130%になるように保証されます。 -### progressDeadlineSeconds +### Progress Deadline Seconds `.spec.progressDeadlineSeconds`はオプションのフィールドで、システムがDeploymentの[更新に失敗](#failed-deployment)したと判断するまでに待つ秒数を指定します。更新に失敗したと判断されたとき、リソースのステータスは`Type=Progressing`、`Status=False`かつ`Reason=ProgressDeadlineExceeded`となるのを確認できます。DeploymentコントローラーはDeploymentの更新のリトライし続けます。デフォルト値は600です。今後、自動的なロールバックが実装されたとき、更新失敗状態になるとすぐにDeploymentコントローラーがロールバックを行うようになります。 この値が指定されているとき、`.spec.minReadySeconds`より大きい値を指定する必要があります。 -### minReadySeconds {#min-ready-seconds} +### Min Ready Seconds {#min-ready-seconds} `.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナーがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 @@ -1020,7 +1022,7 @@ Deploymentのリビジョン履歴は、Deploymentが管理するReplicaSetに `.spec.revisionHistoryLimit`はオプションのフィールドで、ロールバック可能な古いReplicaSetの数を指定します。この古いReplicaSetは`etcd`内のリソースを消費し、`kubectl get rs`の出力結果を見にくくします。Deploymentの各リビジョンの設定はReplicaSetに保持されます。このため一度古いReplicaSetが削除されると、そのリビジョンのDeploymentにロールバックすることができなくなります。デフォルトでは10もの古いReplicaSetが保持されます。しかし、この値の最適値は新しいDeploymentの更新頻度と安定性に依存します。 -さらに詳しく言うと、この値を0にすると、0のレプリカを持つ古い全てのReplicaSetが削除されます。このケースでは、リビジョン履歴が完全に削除されているため新しいDeploymentのロールアウトを完了することができません。 +さらに詳しく言うと、この値を0にすると、0のレプリカを持つ古い全てのReplicaSetが削除されます。このケースでは、リビジョン履歴が完全に削除されているため新しいDeploymentのロールアウトを元に戻すことができません。 ### paused diff --git a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md index 0fa45de94b..db69d83fcd 100644 --- a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md +++ 
b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md @@ -42,7 +42,7 @@ weight: 80 エフェメラルコンテナを利用する場合には、他のコンテナ内のプロセスにアクセスできるように、[プロセス名前空間の共有](/ja/docs/tasks/configure-pod-container/share-process-namespace/)を有効にすると便利です。 -エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)を参照してください。 +エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)を参照してください。 ## Ephemeral containers API diff --git a/content/ja/docs/contribute/review/reviewing-prs.md b/content/ja/docs/contribute/review/reviewing-prs.md new file mode 100644 index 0000000000..6659d46354 --- /dev/null +++ b/content/ja/docs/contribute/review/reviewing-prs.md @@ -0,0 +1,86 @@ +--- +title: プルリクエストのレビュー +content_type: concept +main_menu: true +weight: 10 +--- + + + +ドキュメントのプルリクエストは誰でもレビューすることができます。Kubernetesのwebsiteリポジトリで[pull requests](https://github.com/kubernetes/website/pulls)のセクションに移動し、open状態のプルリクエストを確認してください。 + +ドキュメントのプルリクエストのレビューは、Kubernetesコミュニティに自分を知ってもらうためのよい方法の1つです。コードベースについて学んだり、他のコントリビューターとの信頼関係を築く助けともなるはずです。 + +レビューを行う前には、以下のことを理解しておくとよいでしょう。 + +- [コンテンツガイド](/docs/contribute/style/content-guide/)と[スタイルガイド](/docs/contribute/style/style-guide/)を読んで、有益なコメントを残せるようにする。 +- Kubernetesのドキュメントコミュニティにおける[役割と責任](/docs/contribute/participate/roles-and-responsibilities/)の違いを理解する。 + + + +## はじめる前に + +レビューを始める前に、以下のことを心に留めてください。 + +- [CNCFの行動規範](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)を読み、いかなる時にも行動規範にしたがって行動するようにする。 +- 礼儀正しく、思いやりを持ち、助け合う気持ちを持つ。 +- 変更点だけでなく、PRのポジティブな側面についてもコメントする。 +- 相手の気持ちに共感して、自分のレビューが相手にどのように受け取られるのかをよく意識する。 +- 相手の善意を前提として、疑問点を明確にする質問をする。 +- 経験を積んだコントリビューターの場合、コンテンツに大幅な変更が必要な新規のコントリビューターとペアを組んで作業に取り組むことを考える。 + +## レビューのプロセス + +一般に、コンテンツや文体に対するプルリクエストは、英語でレビューを行います。 + +1. [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls)に移動します。Kubernetesのウェブサイトとドキュメントに対するopen状態のプルリクエスト一覧が表示されます。 + +2. open状態のPRに、以下に示すラベルを1つ以上使って絞り込みます。 + + - `cncf-cla: yes` (推奨): CLAにサインしていないコントリビューターが提出したPRはマージできません。詳しい情報は、[CLAの署名](/docs/contribute/new-content/overview/#sign-the-cla)を読んでください。 + - `language/en` (推奨): 英語のPRだけに絞り込みます。 + - `size/`: 特定の大きさのPRだけに絞り込みます。レビューを始めたばかりの人は、小さなPRから始めてください。 + + さらに、PRがwork in progressとしてマークされていないことも確認してください。`work in progress`ラベルの付いたPRは、まだレビューの準備ができていない状態です。 + +3. レビューするPRを選んだら、以下のことを行い、変更点について理解します。 + - PRの説明を読み、行われた変更について理解し、関連するissueがあればそれも読みます。 + - 他のレビュアのコメントがあれば読みます。 + - **Files changed**タブをクリックし、変更されたファイルと行を確認します。 + - **Conversation**タブの下にあるPRのbuild checkセクションまでスクロールし、**deploy/netlify**の行の**Details**リンクをクリックして、Netlifyのプレビュービルドで変更点をプレビューします。 + +4. **Files changed**タブに移動してレビューを始めます。 + 1. コメントしたい場合は行の横の`+`マークをクリックします。 + 2. その行に関するコメントを書き、**Add single comment**(1つのコメントだけを残したい場合)または**Start a review**(複数のコメントを行いたい場合)のいずれかをクリックします。 + 3. コメントをすべて書いたら、ページ上部の**Review changes**をクリックします。ここでは、レビューの要約を追加できます(コントリビューターにポジティブなコメントも書きましょう!)。必要に応じて、PRを承認したり、コメントしたり、変更をリクエストします。新しいコントリビューターの場合は**Comment**だけが行えます。 + +## レビューのチェックリスト + +レビューするときは、最初に以下の点を確認してみてください。 + +### 言語と文法 + +- 言語や文法に明らかな間違いはないですか? もっとよい言い方はないですか? +- もっと簡単な単語に置き換えられる複雑な単語や古い単語はありませんか? +- 使われている単語や専門用語や言い回しで差別的ではない別の言葉に置き換えられるものはありませんか? +- 言葉選びや大文字の使い方は[style guide](/docs/contribute/style/style-guide/)に従っていますか? +- もっと短くしたり単純な文に書き換えられる長い文はありませんか? +- 箇条書きやテーブルでもっとわかりやすく表現できる長いパラグラフはありませんか? + +### コンテンツ + +- 同様のコンテンツがKubernetesのサイト上のどこかに存在しませんか? 
+- コンテンツが外部サイト、特定のベンダー、オープンソースではないドキュメントなどに過剰にリンクを張っていませんか? + +### ウェブサイト + +- PRはページ名、slug/alias、アンカーリンクの変更や削除をしていますか? その場合、このPRの変更の結果、リンク切れは発生しませんか? ページ名を変更してslugはそのままにするなど、他の選択肢はありませんか? +- PRは新しいページを作成するものですか? その場合、次の点に注意してください。 + - ページは正しい[page content type](/docs/contribute/style/page-content-types/)と関係するHugoのshortcodeを使用していますか? + - セクションの横のナビゲーション(または全体)にページは正しく表示されますか? + - ページは[Docs Home](/docs/home/)に一覧されますか? +- Netlifyのプレビューで変更は確認できますか? 特にリスト、コードブロック、テーブル、備考、画像などに注意してください。 + +### その他 + +PRに関して誤字や空白などの小さな問題を指摘する場合は、コメントの前に`nit:`と書いてください。こうすることで、PRの作者は問題が深刻なものではないことが分かります。 diff --git a/content/ja/docs/home/supported-doc-versions.md b/content/ja/docs/home/supported-doc-versions.md index 0ca6fcee64..7cfce77754 100644 --- a/content/ja/docs/home/supported-doc-versions.md +++ b/content/ja/docs/home/supported-doc-versions.md @@ -1,28 +1,11 @@ --- -title: Kubernetesドキュメントがサポートしているバージョン -content_type: concept +title: 利用可能なドキュメントバージョン +content_type: custom +layout: supported-versions card: name: about weight: 10 - title: ドキュメントがサポートしているバージョン + title: 利用可能なドキュメントバージョン --- - - 本ウェブサイトには、現行版とその直前4バージョンのKubernetesドキュメントがあります。 - - - - - -## 現行版 - -現在のバージョンは[{{< param "version" >}}](/)です。 - -## 以前のバージョン - -{{< versions-other >}} - - - - diff --git a/content/ja/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md b/content/ja/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md new file mode 100644 index 0000000000..fe7a1626fb --- /dev/null +++ b/content/ja/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md @@ -0,0 +1,83 @@ +--- +title: Kubelet 認証/認可 +--- + + +## 概要 + +kubeletのHTTPSエンドポイントは、さまざまな感度のデータへのアクセスを提供するAPIを公開し、 +ノードとコンテナ内のさまざまなレベルの権限でタスクを実行できるようにします。 + +このドキュメントでは、kubeletのHTTPSエンドポイントへのアクセスを認証および承認する方法について説明します。 + +## Kubelet 認証 + +デフォルトでは、他の構成済み認証方法によって拒否されないkubeletのHTTPSエンドポイントへのリクエストは +匿名リクエストとして扱われ、ユーザー名は`system:anonymous`、 +グループは`system:unauthenticated`になります。 + +匿名アクセスを無効にし、認証されていないリクエストに対して`401 Unauthorized`応答を送信するには: + +* `--anonymous-auth=false`フラグでkubeletを開始します。 + +kubeletのHTTPSエンドポイントに対するX509クライアント証明書認証を有効にするには: + +* `--client-ca-file`フラグでkubeletを起動し、クライアント証明書を確認するためのCAバンドルを提供します。 +* `--kubelet-client-certificate`および`--kubelet-client-key`フラグを使用してapiserverを起動します。 +* 詳細については、[apiserver認証ドキュメント](/ja/docs/reference/access-authn-authz/authentication/#x509-client-certs)を参照してください。 + +APIベアラートークン(サービスアカウントトークンを含む)を使用して、kubeletのHTTPSエンドポイントへの認証を行うには: + +* APIサーバーで`authentication.k8s.io/v1beta1`グループが有効になっていることを確認します。 +* `--authentication-token-webhook`および`--kubeconfig`フラグを使用してkubeletを開始します。 +* kubeletは、構成済みのAPIサーバーで `TokenReview` APIを呼び出して、ベアラートークンからユーザー情報を判別します。 + +## Kubelet 承認 + +認証に成功した要求(匿名要求を含む)はすべて許可されます。デフォルトの認可モードは、すべての要求を許可する`AlwaysAllow`です。 + +kubelet APIへのアクセスを細分化するのは、次のような多くの理由が考えられます: + +* 匿名認証は有効になっていますが、匿名ユーザーがkubeletのAPIを呼び出す機能は制限する必要があります。 +* ベアラートークン認証は有効になっていますが、kubeletのAPIを呼び出す任意のAPIユーザー(サービスアカウントなど)の機能を制限する必要があります。 +* クライアント証明書の認証は有効になっていますが、構成されたCAによって署名されたクライアント証明書の一部のみがkubeletのAPIの使用を許可されている必要があります。 + +kubeletのAPIへのアクセスを細分化するには、APIサーバーに承認を委任します: + +* APIサーバーで`authorization.k8s.io/v1beta1` APIグループが有効になっていることを確認します。 +* `--authorization-mode=Webhook`と`--kubeconfig`フラグでkubeletを開始します。 +* kubeletは、構成されたAPIサーバーで`SubjectAccessReview` APIを呼び出して、各リクエストが承認されているかどうかを判断します。 + +kubeletは、apiserverと同じ[リクエスト属性](/docs/reference/access-authn-authz/authorization/#review-your-request-attributes)アプローチを使用してAPIリクエストを承認します。 + +動詞は、受けとったリクエストのHTTP動詞から決定されます: + +HTTP動詞 | 要求 動詞 
+----------|--------------- +POST | create +GET, HEAD | get +PUT | update +PATCH | patch +DELETE | delete + +リソースとサブリソースは、受けとったリクエストのパスから決定されます: + +Kubelet API | リソース | サブリソース +-------------|----------|------------ +/stats/\* | nodes | stats +/metrics/\* | nodes | metrics +/logs/\* | nodes | log +/spec/\* | nodes | spec +*all others* | nodes | proxy + +名前空間とAPIグループの属性は常に空の文字列であり、 +リソース名は常にkubeletの`Node` APIオブジェクトの名前です。 + +このモードで実行する場合は、apiserverに渡される`--kubelet-client-certificate`フラグと`--kubelet-client-key` +フラグで識別されるユーザーが次の属性に対して許可されていることを確認します: + +* verb=\*, resource=nodes, subresource=proxy +* verb=\*, resource=nodes, subresource=stats +* verb=\*, resource=nodes, subresource=log +* verb=\*, resource=nodes, subresource=spec +* verb=\*, resource=nodes, subresource=metrics diff --git a/content/ja/docs/reference/glossary/kube-apiserver.md b/content/ja/docs/reference/glossary/kube-apiserver.md index 29885884fe..c7a7cfec19 100755 --- a/content/ja/docs/reference/glossary/kube-apiserver.md +++ b/content/ja/docs/reference/glossary/kube-apiserver.md @@ -2,7 +2,7 @@ title: APIサーバー id: kube-apiserver date: 2018-04-12 -full_link: /docs/reference/generated/kube-apiserver/ +full_link: /docs/concepts/overview/components/#kube-apiserver short_description: > Kubernetes APIを提供するコントロールプレーンのコンポーネントです。 diff --git a/content/ja/docs/reference/kubectl/cheatsheet.md b/content/ja/docs/reference/kubectl/cheatsheet.md index caf2fc783c..d93d02551b 100644 --- a/content/ja/docs/reference/kubectl/cheatsheet.md +++ b/content/ja/docs/reference/kubectl/cheatsheet.md @@ -8,16 +8,10 @@ card: -[Kubectl概要](/ja/docs/reference/kubectl/overview/)と[JsonPathガイド](/docs/reference/kubectl/jsonpath)も合わせてご覧ください。 - -このページは`kubectl`コマンドの概要です。 - - +このページには、一般的によく使われる`kubectl`コマンドとフラグのリストが含まれています。 -# kubectl - チートシート - ## Kubectlコマンドの補完 ### BASH @@ -76,7 +70,7 @@ kubectl config set-context gce --user=cluster-admin --namespace=foo \ kubectl config unset users.foo # ユーザーfooを削除します ``` -## Apply +## Kubectl Apply `apply`はKubernetesリソースを定義するファイルを通じてアプリケーションを管理します。`kubectl apply`を実行して、クラスター内のリソースを作成および更新します。これは、本番環境でKubernetesアプリケーションを管理する推奨方法です。 詳しくは[Kubectl Book](https://kubectl.docs.kubernetes.io)をご覧ください。 @@ -372,6 +366,7 @@ kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr. 
kubectl get pods -A -o=custom-columns='DATA:metadata.*' ``` +kubectlに関するより多くのサンプルは[カスタムカラムのリファレンス](/ja/docs/reference/kubectl/overview/#custom-columns)を参照してください。 ### Kubectlのログレベルとデバッグ kubectlのログレベルは、レベルを表す整数が後に続く`-v`または`--v`フラグで制御されます。一般的なKubernetesのログ記録規則と関連するログレベルについて、[こちら](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)で説明します。 @@ -392,11 +387,10 @@ kubectlのログレベルは、レベルを表す整数が後に続く`-v`また ## {{% heading "whatsnext" %}} - -* kubectlについてより深く学びたい方は[kubectl概要](/ja/docs/reference/kubectl/overview/)をご覧ください。 +* kubectlについてより深く学びたい方は[kubectl概要](/ja/docs/reference/kubectl/overview/)や[JsonPath](/docs/reference/kubectl/jsonpath)をご覧ください。 * オプションについては[kubectl](/docs/reference/kubectl/kubectl/) optionsをご覧ください。 - + * また[kubectlの利用パターン](/docs/reference/kubectl/conventions/)では再利用可能なスクリプトでkubectlを利用する方法を学べます。 * コミュニティ版[kubectlチートシート](https://github.com/dennyzhang/cheatsheet-kubernetes-A4)もご覧ください。 diff --git a/content/ja/docs/reference/tools.md b/content/ja/docs/reference/tools.md new file mode 100644 index 0000000000..0fedb1cf9d --- /dev/null +++ b/content/ja/docs/reference/tools.md @@ -0,0 +1,46 @@ +--- +title: ツール +content_type: concept +--- + + +Kubernetesには、Kubernetesシステムの操作に役立ついくつかの組み込みツールが含まれています。 + + +## Kubectl +[`kubectl`](/docs/tasks/tools/install-kubectl/)は、Kubernetesのためのコマンドラインツールです。このコマンドはKubernetes cluster managerを操作します。 + +## Kubeadm +[`kubeadm`](docs/setup/production-environment/tools/kubeadm/install-kubeadm/)は、物理サーバやクラウドサーバ、仮想マシン上にKubenetesクラスタを容易にプロビジョニングするためのコマンドラインツールです(現在はアルファ版です)。 + +## Minikube +[`minikube`](https://minikube.sigs.k8s.io/docs/)は、開発やテストのためにワークステーション上でシングルノードのKubernetesクラスタをローカルで実行するツールです。 + +## Dashboard +[`Dashboard`](/docs/tasks/access-application-cluster/web-ui-dashboard/)は、KubernetesのWebベースのユーザインタフェースで、コンテナ化されたアプリケーションをKubernetesクラスタにデプロイしたり、トラブルシューティングしたり、クラスタとそのリソース自体を管理したりすることが出来ます。 + +## Helm +[`Kubernetes Helm`](https://github.com/helm/helm)は、事前に設定されたKubernetesリソースのパッケージ、別名Kubernetes chartsを管理するためのツールです。 + +Helmを用いて以下のことを行います。 + +* Kubernetes chartsとしてパッケージ化された人気のあるソフトウェアの検索と利用 + +* Kubernetes chartsとして所有するアプリケーションを共有すること + +* Kubernetesアプリケーションの再現性のあるビルドの作成 + +* Kubernetesマニフェストファイルを知的な方法で管理 + +* Helmパッケージのリリース管理 + +## Kompose +[`Kompose`](https://github.com/kubernetes/kompose)は、Docker ComposeユーザがKubernetesに移行する手助けをするツールです。 + +Komposeを用いて以下のことを行います。 + +* Docker ComposeファイルのKubernetesオブジェクトへの変換 + +* ローカルのDocker開発からKubernetesを経由したアプリケーション管理への移行 + +* v1またはv2のDocker Compose用 `yaml` ファイルならびに[分散されたアプリケーションバンドル](https://docs.docker.com/compose/bundles/)の変換 diff --git a/content/ja/docs/setup/learning-environment/minikube.md b/content/ja/docs/setup/learning-environment/minikube.md index 54a0411132..c197a03081 100644 --- a/content/ja/docs/setup/learning-environment/minikube.md +++ b/content/ja/docs/setup/learning-environment/minikube.md @@ -26,7 +26,7 @@ MinikubeのサポートするKubernetesの機能: ## インストール -[Minikubeのインストール](/ja/docs/tasks/tools/install-minikube/)を参照してください。 +ツールのインストールについて知りたい場合は、公式の[Get Started!](https://minikube.sigs.k8s.io/docs/start/)のガイドに従ってください。 ## クイックスタート diff --git a/content/ja/docs/setup/production-environment/container-runtimes.md b/content/ja/docs/setup/production-environment/container-runtimes.md index 8629e2f102..0f48e26f64 100644 --- a/content/ja/docs/setup/production-environment/container-runtimes.md +++ b/content/ja/docs/setup/production-environment/container-runtimes.md @@ -130,7 +130,7 @@ yum install -y yum-utils device-mapper-persistent-data lvm2 ``` ```shell -### Dockerリポジトリの追加 +## 
Dockerリポジトリの追加 yum-config-manager --add-repo \ https://download.docker.com/linux/centos/docker-ce.repo ``` @@ -215,73 +215,107 @@ sysctl --system {{< tabs name="tab-cri-cri-o-installation" >}} {{% tab name="Debian" %}} + CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。 + +| Operating system | $OS | +| ---------------- | ----------------- | +| Debian Unstable | `Debian_Unstable` | +| Debian Testing | `Debian_Testing` | + +
    +そして、`$VERSION`にKubernetesのバージョンに合わせたCRI-Oのバージョンを設定します。例えば、CRI-O 1.18をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3をインストールするには、`VERSION=1.18:1.18.3` を設定します。 +
    + +以下を実行します。 ```shell -# Debian Unstable/Sid -echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list -wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add - +echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list + +curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add - +curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add - + +apt-get update +apt-get install cri-o cri-o-runc ``` -```shell -# Debian Testing -echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list -wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add - -``` -```shell -# Debian 10 -echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list -wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add - -``` +{{% /tab %}} -```shell -# Raspbian 10 -echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list -wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add - -``` +{{% tab name="Ubuntu" %}} -それでは、CRI-Oをインストールします: + CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。 + +| Operating system | $OS | +| ---------------- | ----------------- | +| Ubuntu 20.04 | `xUbuntu_20.04` | +| Ubuntu 19.10 | `xUbuntu_19.10` | +| Ubuntu 19.04 | `xUbuntu_19.04` | +| Ubuntu 18.04 | `xUbuntu_18.04` | + +
    +次に、`$VERSION`をKubernetesのバージョンと一致するCRI-Oのバージョンに設定します。例えば、CRI-O 1.18をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3 をインストールするには、`VERSION=1.18:1.18.3` を設定します。 +
    + +以下を実行します。 ```shell -sudo apt-get install cri-o-1.17 +echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list + +curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add - +curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add - + +apt-get update +apt-get install cri-o cri-o-runc ``` {{% /tab %}} -{{% tab name="Ubuntu 18.04, 19.04 and 19.10" %}} +{{% tab name="CentOS" %}} + CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。 + +| Operating system | $OS | +| ---------------- | ----------------- | +| Centos 8 | `CentOS_8` | +| Centos 8 Stream | `CentOS_8_Stream` | +| Centos 7 | `CentOS_7` | + +
    +次に、`$VERSION`をKubernetesのバージョンと一致するCRI-Oのバージョンに設定します。例えば、CRI-O 1.18 をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3 をインストールするには、`VERSION=1.18:1.18.3` を設定します。 +
    + +以下を実行します。 ```shell -# パッケージレポジトリを設定する -. /etc/os-release -sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" -wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add - -sudo apt-get update +curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo +curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo +yum install cri-o ``` -```shell -# CRI-Oのインストール -sudo apt-get install cri-o-1.17 -``` -{{% /tab %}} - -{{% tab name="CentOS/RHEL 7.4+" %}} - -```shell -# 必要なパッケージのインストール -curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo -curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}/CentOS_7/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo -``` - -```shell -# CRI-Oのインストール -yum install -y cri-o -``` {{% /tab %}} {{% tab name="openSUSE Tumbleweed" %}} ```shell -sudo zypper install cri-o + sudo zypper install cri-o ``` +{{% /tab %}} +{{% tab name="Fedora" %}} + +$VERSIONには、Kubernetesのバージョンと一致するCRI-Oのバージョンを設定します。例えば、CRI-O 1.18をインストールしたい場合は、$VERSION=1.18を設定します。 +以下のコマンドで、利用可能なバージョンを見つけることができます。 +```shell +dnf module list cri-o +``` +CRI-OはFedoraの特定のリリースにピン留めすることをサポートしていません。 + +以下を実行します。 +```shell +dnf module enable cri-o:$VERSION +dnf install cri-o +``` + {{% /tab %}} {{< /tabs >}} + ### CRI-Oの起動 ```shell @@ -321,7 +355,7 @@ sysctl --system ### containerdのインストール {{< tabs name="tab-cri-containerd-installation" >}} -{{< tab name="Ubuntu 16.04" codelang="bash" >}} +{{% tab name="Ubuntu 16.04" %}} ```shell # (containerdのインストール) @@ -335,7 +369,7 @@ apt-get update && apt-get install -y apt-transport-https ca-certificates curl so curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - ``` -``` +```shell ## Dockerのaptリポジトリの追加 add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ diff --git a/content/ja/docs/setup/production-environment/on-premises-vm/cloudstack.md b/content/ja/docs/setup/production-environment/on-premises-vm/cloudstack.md index 1177bcdd94..296e105683 100644 --- a/content/ja/docs/setup/production-environment/on-premises-vm/cloudstack.md +++ b/content/ja/docs/setup/production-environment/on-premises-vm/cloudstack.md @@ -7,7 +7,7 @@ content_type: concept [CloudStack](https://cloudstack.apache.org/) is a software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes. 
-[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. +[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init. diff --git a/content/ja/docs/setup/production-environment/tools/kops.md b/content/ja/docs/setup/production-environment/tools/kops.md index 92899a300a..dfd2eec406 100644 --- a/content/ja/docs/setup/production-environment/tools/kops.md +++ b/content/ja/docs/setup/production-environment/tools/kops.md @@ -27,7 +27,7 @@ kops is an automated provisioning system: * You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture. -* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. +* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user). @@ -140,7 +140,7 @@ you choose for organization reasons (e.g. you are allowed to create records unde but not under `example.com`). Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using -the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or +the [normal process](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`. You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here, @@ -231,7 +231,7 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl ## {{% heading "whatsnext" %}} -* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/). +* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/). * Learn more about `kops` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options. 
* Follow `kops` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors) * Contribute to `kops` by addressing or raising an issue [GitHub Issues](https://github.com/kubernetes/kops/issues) diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 9c096a6698..9a47644bf6 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -1,12 +1,12 @@ --- -title: kubeadmを使用したシングルコントロールプレーンクラスターの作成 +title: kubeadmを使用したクラスターの作成 content_type: task weight: 30 --- -`kubeadm`ツールは、ベストプラクティスに準拠した実用最小限のKubernetesクラスターをブートストラップする手助けをします。実際、`kubeadm`を使用すれば、[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification)に通るクラスターをセットアップすることができます。`kubeadm`は、[ブートストラップトークン](/docs/reference/access-authn-authz/bootstrap-tokens/)やクラスターのアップグレードなどのその他のクラスターのライフサイクルの機能もサポートします。 +ベストプラクティスに準拠した実用最小限のKubernetesクラスターを作成します。実際、`kubeadm`を使用すれば、[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification)に通るクラスターをセットアップすることができます。`kubeadm`は、[ブートストラップトークン](/docs/reference/access-authn-authz/bootstrap-tokens/)やクラスターのアップグレードなどのその他のクラスターのライフサイクルの機能もサポートします。 `kubeadm`ツールは、次のようなときに適しています。 @@ -41,7 +41,7 @@ kubeadmツールの全体の機能の状態は、一般利用可能(GA)です。 ## 目的 -* シングルコントロールプレーンのKubernetesクラスターまたは[高可用性クラスター](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/)をインストールする +* シングルコントロールプレーンのKubernetesクラスターをインストールする * クラスター上にPodネットワークをインストールして、Podがお互いに通信できるようにする ## 手順 @@ -76,7 +76,7 @@ kubeadm init `--apiserver-advertise-address`は、この特定のコントロールプレーンノードのAPIサーバーへのadvertise addressを設定するために使えますが、`--control-plane-endpoint`は、すべてのコントロールプレーンノード共有のエンドポイントを設定するために使えます。 -`--control-plane-endpoint`はIPアドレスを受け付けますが、IPアドレスへマッピングされるDNSネームも使用できます。利用可能なソリューションをそうしたマッピングの観点から評価するには、ネットワーク管理者に相談してください。 +`--control-plane-endpoint`はIPアドレスと、IPアドレスへマッピングできるDNS名を使用できます。利用可能なソリューションをそうしたマッピングの観点から評価するには、ネットワーク管理者に相談してください。 以下にマッピングの例を示します。 @@ -203,9 +203,14 @@ export KUBECONFIG=/etc/kubernetes/admin.conf {{< /caution >}} -CNIを使用するKubernetes Podネットワークを提供する外部のプロジェクトがいくつかあります。一部のプロジェクトでは、[ネットワークポリシー](/docs/concepts/services-networking/networkpolicies/)もサポートしています。 +{{< note >}} +現在、Calicoはkubeadmプロジェクトがe2eテストを実施している唯一のCNIプラグインです。 +もしCNIプラグインに関する問題を見つけた場合、kubeadmやkubernetesではなく、そのCNIプラグインの課題管理システムへ問題を報告してください。 +{{< /note >}} -利用できる[ネットワークアドオンとネットワークポリシーアドオン](/docs/concepts/cluster-administration/addons/#networking-and-network-policy)のリストを確認してください。 +CNIを使用するKubernetes Podネットワークを提供する外部のプロジェクトがいくつかあります。一部のプロジェクトでは、[ネットワークポリシー](/ja/docs/concepts/services-networking/network-policies/)もサポートしています。 + +[Kubernetesのネットワークモデル](/ja/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)を実装したアドオンの一覧も確認してください。 Podネットワークアドオンをインストールするには、コントロールプレーンノード上またはkubeconfigクレデンシャルを持っているノード上で、次のコマンドを実行します。 @@ -213,91 +218,7 @@ Podネットワークアドオンをインストールするには、コント kubectl apply -f ``` -インストールできるPodネットワークは、クラスターごとに1つだけです。以下の手順で、いくつかのよく使われるPodネットワークプラグインをインストールできます。 - -{{< tabs name="tabs-pod-install" >}} - -{{% tab name="Calico" %}} 
-[Calico](https://docs.projectcalico.org/latest/introduction/)は、ネットワークとネットワークポリシーのプロバイダーです。Calicoは柔軟なさまざまなネットワークオプションをサポートするため、自分の状況に適した最も効果的なオプションを選択できます。たとえば、ネットワークのオーバーレイの有無や、BGPの有無が選べます。Calicoは、ホスト、Pod、(もしIstioとEnvoyを使っている場合)サービスメッシュレイヤー上のアプリケーションに対してネットワークポリシーを強制するために、同一のエンジンを使用しています。Calicoは、`amd64`、`arm64`、`ppc64le`を含む複数のアーキテクチャで動作します。 - -デフォルトでは、Calicoは`192.168.0.0/16`をPodネットワークのCIDRとして使いますが、このCIDRはcalico.yamlファイルで設定できます。Calicoを正しく動作させるためには、これと同じCIDRを`--pod-network-cidr=192.168.0.0/16`フラグまたはkubeadmの設定を使って、`kubeadm init`コマンドに渡す必要があります。 - -```shell -kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml -``` - -{{% /tab %}} - -{{% tab name="Cilium" %}} -Ciliumを正しく動作させるためには、`kubeadm init`に `--pod-network-cidr=10.217.0.0/16`を渡さなければなりません。 - -Ciliumのデプロイは、次のコマンドを実行するだけでできます。 - -```shell -kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.6/install/kubernetes/quick-install.yaml -``` - -すべてのCilium Podが`READY`とマークされたら、クラスターを使い始められます。 - -```shell -kubectl get pods -n kube-system --selector=k8s-app=cilium -``` - -出力は次のようになります。 - -``` -NAME READY STATUS RESTARTS AGE -cilium-drxkl 1/1 Running 0 18m -``` - -Ciliumはkube-proxyの代わりに利用することもできます。詳しくは[Kubernetes without kube-proxy](https://docs.cilium.io/en/stable/gettingstarted/kubeproxy-free)を読んでください。 - -KubernetesでのCiliumの使い方に関するより詳しい情報は、[Kubernetes Install guide for Cilium](https://docs.cilium.io/en/stable/kubernetes/)を参照してください。 -{{% /tab %}} - -{{% tab name="Contiv-VPP" %}} -[Contiv-VPP](https://contivpp.io/)は、[FD.io VPP](https://fd.io/)をベースとするプログラマブルなCNF vSwitchを採用し、機能豊富で高性能なクラウドネイティブなネットワーキングとサービスを提供します。 - -Contiv-VPPは、k8sサービスとネットワークポリシーを(VPP上の)ユーザースペースで実装しています。 - -こちらのインストールガイドを参照してください: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md) -{{% /tab %}} - -{{% tab name="Flannel" %}} -`flannel`を正しく動作させるためには、`--pod-network-cidr=10.244.0.0/16`を`kubeadm init`に渡す必要があります。 - -オーバーレイネットワークに参加しているすべてのホスト上で、ファイアウォールのルールが、UDPポート8285と8472のトラフィックを許可するように設定されていることを確認してください。この設定に関するより詳しい情報は、Flannelのトラブルシューティングガイドの[Firewall](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls)のセクションを参照してください。 - -Flannelは、Linux下の`amd64`、`arm`、`arm64`、`ppc64le`、`s390x`アーキテクチャ上で動作します。Windows(`amd64`)はv0.11.0でサポートされたとされていますが、使用方法はドキュメントに書かれていません。 - -```shell -kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml -``` - -`flannel`に関するより詳しい情報は、[GitHub上のCoreOSのflannelリポジトリ](https://github.com/coreos/flannel)を参照してください。 -{{% /tab %}} - -{{% tab name="Kube-router" %}} -Kube-routerは、ノードへのPod CIDRの割り当てをkube-controller-managerに依存しています。そのため、`kubeadm init`時に`--pod-network-cidr`フラグを使用する必要があります。 - -Kube-routerは、Podネットワーク、ネットワークポリシー、および高性能なIP Virtual Server(IPVS)/Linux Virtual Server(LVS)ベースのサービスプロキシーを提供します。 - -Kube-routerを有効にしたKubernetesクラスターをセットアップするために`kubeadm`ツールを使用する方法については、公式の[セットアップガイド](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md)を参照してください。 -{{% /tab %}} - -{{% tab name="Weave Net" %}} -Weave Netを使用してKubernetesクラスターをセットアップするより詳しい情報は、[アドオンを使用してKubernetesを統合する](https://www.weave.works/docs/net/latest/kube-addon/)を読んでください。 - -Weave Netは、`amd64`、`arm`、`arm64`、`ppc64le`プラットフォームで追加の操作なしで動作します。Weave Netはデフォルトでharipinモードをセットします。このモードでは、Pod同士はPodIPを知らなくても、Service IPアドレス経由でアクセスできます。 - -```shell -kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" -``` - -{{% /tab %}} - -{{< /tabs >}} - +インストールできるPodネットワークは、クラスターごとに1つだけです。 
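As a purely illustrative sketch (reusing the Calico example that this change removes above — the manifest URL, version, and Pod CIDR depend on the add-on you choose and its own documentation), installing the single Pod network add-on typically looks like this:

```shell
# Initialize the control plane with a Pod CIDR that matches the add-on's default.
# 192.168.0.0/16 is the default the removed Calico instructions assumed;
# other add-ons may expect a different range, or none at all.
kubeadm init --pod-network-cidr=192.168.0.0/16

# Apply the manifest published by the add-on provider
# (URL and version here are illustrative only).
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
```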
Podネットワークがインストールされたら、`kubectl get pods --all-namespaces`の出力結果でCoreDNS Podが`Running`状態であることをチェックすることで、ネットワークが動作していることを確認できます。そして、一度CoreDNS Podが動作すれば、続けてノードを追加できます。 @@ -375,7 +296,7 @@ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outfor ``` {{< note >}} -IPv6タプルを`:`に指定するためには、IPv6アドレスをブラケットで囲みます。たとえば、`[fd00::101]:2073`のように書きます。 +IPv6タプルを`:`と指定するためには、IPv6アドレスを角括弧で囲みます。たとえば、`[fd00::101]:2073`のように書きます。 {{< /note >}} 出力は次のようになります。 @@ -407,7 +328,7 @@ kubectl --kubeconfig ./admin.conf get nodes {{< note >}} 上の例では、rootユーザーに対するSSH接続が有効であることを仮定しています。もしそうでない場合は、`admin.conf`ファイルを誰か他のユーザーからアクセスできるようにコピーした上で、代わりにそのユーザーを使って`scp`してください。 -`admin.conf`ファイルはユーザーにクラスターに対する _特権ユーザー_ の権限を与えます。そのため、このファイルを使うのは控えめにしなければなりません。通常のユーザーには、一部の権限をホワイトリストに加えたユニークなクレデンシャルを生成することを推奨します。これには、`kubeadm alpha kubeconfig user --client-name `コマンドが使えます。このコマンドを実行すると、KubeConfigファイルがSTDOUTに出力されるので、ファイルに保存してユーザーに配布します。その後、`kubectl create (cluster)rolebinding`コマンドを使って権限をホワイトリストに加えます。 +`admin.conf`ファイルはユーザーにクラスターに対する _特権ユーザー_ の権限を与えます。そのため、このファイルを使うのは控えめにしなければなりません。通常のユーザーには、明示的に許可した権限を持つユニークなクレデンシャルを生成することを推奨します。これには、`kubeadm alpha kubeconfig user --client-name `コマンドが使えます。このコマンドを実行すると、KubeConfigファイルがSTDOUTに出力されるので、ファイルに保存してユーザーに配布します。その後、`kubectl create (cluster)rolebinding`コマンドを使って権限を付与します。 {{< /note >}} ### (オプション)APIサーバーをlocalhostへプロキシする @@ -433,10 +354,9 @@ kubectl --kubeconfig ./admin.conf proxy ```bash kubectl drain --delete-local-data --force --ignore-daemonsets -kubectl delete node ``` -その後、ノードが削除されたら、`kubeadm`のインストール状態をすべてリセットします。 +ノードが削除される前に、`kubeadm`によってインストールされた状態をリセットします。 ```bash kubeadm reset @@ -454,6 +374,11 @@ IPVS tablesをリセットしたい場合は、次のコマンドを実行する ipvsadm -C ``` +ノードを削除します。 +```bash +kubectl delete node +``` + クラスターのセットアップを最初から始めたいときは、`kubeadm init`や`kubeadm join`を適切な引数を付けて実行すればいいだけです。 ### コントロールプレーンのクリーンアップ @@ -469,7 +394,7 @@ ipvsadm -C * [Sonobuoy](https://github.com/heptio/sonobuoy)を使用してクラスターが適切に動作しているか検証する。 * `kubeadm`を使用したクラスターをアップグレードする方法について、[kubeadmクラスターをアップグレードする](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)を読む。 * `kubeadm`の高度な利用方法について[kubeadmリファレンスドキュメント](/docs/reference/setup-tools/kubeadm/kubeadm)で学ぶ。 -* Kubernetesの[コンセプト](/ja/docs/concepts/)や[`kubectl`](/docs/user-guide/kubectl-overview/)についてもっと学ぶ。 +* Kubernetesの[コンセプト](/ja/docs/concepts/)や[`kubectl`](/ja/docs/reference/kubectl/overview/)についてもっと学ぶ。 * Podネットワークアドオンのより完全なリストを[クラスターのネットワーク](/docs/concepts/cluster-administration/networking/)で確認する。 * ロギング、モニタリング、ネットワークポリシー、仮想化、Kubernetesクラスターの制御のためのツールなど、その他のアドオンについて、[アドオンのリスト](/docs/concepts/cluster-administration/addons/)で確認する。 * クラスターイベントやPod内で実行中のアプリケーションから送られるログをクラスターがハンドリングする方法を設定する。関係する要素の概要を理解するために、[ロギングのアーキテクチャ](/docs/concepts/cluster-administration/logging/)を読んでください。 @@ -486,9 +411,9 @@ ipvsadm -C ## バージョン互換ポリシー {#version-skew-policy} -バージョンvX.Yの`kubeadm`ツールは、バージョンvX.YまたはvX.(Y-1)のコントロールプレーンを持つクラスターをデプロイできます。また、`kubeadm` vX.Yは、kubeadmで構築された既存のvX.(Y-1)のクラスターをアップグレートできます。 +バージョンv{{< skew latestVersion >}}の`kubeadm`ツールは、バージョンv{{< skew latestVersion >}}またはv{{< skew prevMinorVersion >}}のコントロールプレーンを持つクラスターをデプロイできます。また、バージョンv{{< skew latestVersion >}}の`kubeadm`は、バージョンv{{< skew prevMinorVersion >}}のkubeadmで構築されたクラスターをアップグレートできます。 -未来を見ることはできないため、kubeadm CLI vX.YはvX.(Y+1)をデプロイすることはできません。 +未来を見ることはできないため、kubeadm CLI v{{< skew latestVersion >}}はv{{< skew nextMinorVersion >}}をデプロイできないかもしれません。 例: `kubeadm` v1.8は、v1.7とv1.8のクラスターをデプロイでき、v1.7のkubeadmで構築されたクラスターをv1.8にアップグレートできます。 @@ -507,7 +432,7 @@ kubeletとコントロールプレーンの間や、他のKubernetesコンポー * 
定期的に[etcdをバックアップ](https://coreos.com/etcd/docs/latest/admin_guide.html)する。kubeadmが設定するetcdのデータディレクトリは、コントロールプレーンノードの`/var/lib/etcd`にあります。 -* 複数のコントロールプレーンノードを使用する。[高可用性トポロジーのオプション](/docs/setup/production-environment/tools/kubeadm/ha-topology/)では、より高い可用性を提供するクラスターのトポロジーの選択について説明してます。 +* 複数のコントロールプレーンノードを使用する。[高可用性トポロジーのオプション](/ja/docs/setup/production-environment/tools/kubeadm/ha-topology/)では、[より高い可用性](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/)を提供するクラスターのトポロジーの選択について説明してます。 ### プラットフォームの互換性 {#multi-platform} @@ -520,4 +445,3 @@ kubeadmのdeb/rpmパッケージおよびバイナリは、[multi-platform propo ## トラブルシューティング {#troubleshooting} kubeadmに関する問題が起きたときは、[トラブルシューティングドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)を確認してください。 - diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/ha-topology.md b/content/ja/docs/setup/production-environment/tools/kubeadm/ha-topology.md index fa3468e1d1..f06a8142a3 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/ha-topology.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/ha-topology.md @@ -15,7 +15,10 @@ HAクラスターは次の方法で設定できます。 HAクラスターをセットアップする前に、各トポロジーの利点と欠点について注意深く考慮する必要があります。 - +{{< note >}} +kubeadmは、etcdクラスターを静的にブートストラップします。 +詳細については、etcd[クラスタリングガイド](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)をご覧ください。 +{{< /note >}} diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/ja/docs/setup/production-environment/tools/kubeadm/high-availability.md index 6b7cb8b610..1d5e3f9aaf 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -57,10 +57,10 @@ weight: 60 - ロードバランサーは、apiserverポートで、全てのコントロールプレーンノードと通信できなければなりません。また、リスニングポートに対する流入トラフィックも許可されていなければなりません。 - - [HAProxy](http://www.haproxy.org/)をロードバランサーとして使用することができます。 - - ロードバランサーのアドレスは、常にkubeadmの`ControlPlaneEndpoint`のアドレスと一致することを確認してください。 + - 詳細は[Options for Software Load Balancing](https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing)をご覧ください。 + 1. 
ロードバランサーに、最初のコントロールプレーンノードを追加し、接続をテストする: ```sh @@ -87,7 +87,7 @@ weight: 60 {{< note >}}`kubeadm init`の`--config`フラグと`--certificate-key`フラグは混在させることはできないため、[kubeadm configuration](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2)を使用する場合は`certificateKey`フィールドを適切な場所に追加する必要があります(`InitConfiguration`と`JoinConfiguration: controlPlane`の配下)。{{< /note >}} - {{< note >}}CalicoなどのいくつかのCNIネットワークプラグインは`192.168.0.0/16`のようなCIDRを必要としますが、Weaveなどは必要としません。[CNIネットワークドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)を参照してください。PodにCIDRを設定するには、`ClusterConfiguration`の`networking`オブジェクトに`podSubnet: 192.168.0.0/16`フィールドを設定してください。{{< /note >}} + {{< note >}}いくつかのCNIネットワークプラグインはPodのIPのCIDRの指定など追加の設定を必要としますが、必要としないプラグインもあります。[CNIネットワークドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)を参照してください。PodにCIDRを設定するには、`ClusterConfiguration`の`networking`オブジェクトに`podSubnet: 192.168.0.0/16`フィールドを設定してください。{{< /note >}} - このような出力がされます: diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/ja/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index e061315381..27dd20b1bf 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -64,7 +64,7 @@ ComponentConfigの詳細については、[このセクション](#configure-kub ### `kubeadm init`実行時の流れ -`kubeadm init`を実行した場合、kubeletの設定は`/var/lib/kubelet/config.yaml`に格納され、クラスターのConfigMapにもアップロードされます。ConfigMapは`kubelet-config-1.X`という名前で、`.X`は初期化するKubernetesのマイナーバージョンを表します。またこの設定ファイルは、クラスタ内の全てのkubeletのために、クラスター全体設定の基準と共に`/etc/kubernetes/kubelet.conf`にも書き込まれます。この設定ファイルは、kubeletがAPIサーバと通信するためのクライアント証明書を指し示します。これは、[各kubeletにクラスターレベルの設定を配布](#propagating-cluster-level-configuration-to-each-kubelet)することの必要性を示しています。 +`kubeadm init`を実行した場合、kubeletの設定は`/var/lib/kubelet/config.yaml`に格納され、クラスターのConfigMapにもアップロードされます。ConfigMapは`kubelet-config-1.X`という名前で、`X`は初期化するKubernetesのマイナーバージョンを表します。またこの設定ファイルは、クラスタ内の全てのkubeletのために、クラスター全体設定の基準と共に`/etc/kubernetes/kubelet.conf`にも書き込まれます。この設定ファイルは、kubeletがAPIサーバと通信するためのクライアント証明書を指し示します。これは、[各kubeletにクラスターレベルの設定を配布](#propagating-cluster-level-configuration-to-each-kubelet)することの必要性を示しています。 二つ目のパターンである、[インスタンス固有の設定内容を適用](#providing-instance-specific-configuration-details)するために、kubeadmは環境ファイルを`/var/lib/kubelet/kubeadm-flags.env`へ書き出します。このファイルは以下のように、kubelet起動時に渡されるフラグのリストを含んでいます。 @@ -99,7 +99,7 @@ kubeletが新たな設定を読み込むと、kubeadmは、KubeConfigファイ `kubeadm`には、systemdがどのようにkubeletを実行するかを指定した設定ファイルが同梱されています。 kubeadm CLIコマンドは決してこのsystemdファイルには触れないことに注意してください。 -kubeadmの[DEBパッケージ](https://github.com/kubernetes/kubernetes/blob/master/build/debs/10-kubeadm.conf)または[RPMパッケージ](https://github.com/kubernetes/kubernetes/blob/master/build/rpms/10-kubeadm.conf)によってインストールされたこの設定ファイルは、`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`に書き込まれ、systemdで使用されます。基本的な`kubelet.service`([RPM用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service)または、 [DEB用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service))を拡張します。 
+kubeadmの[DEBパッケージ](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf)または[RPMパッケージ](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubeadm/10-kubeadm.conf)によってインストールされたこの設定ファイルは、`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`に書き込まれ、systemdで使用されます。基本的な`kubelet.service`([RPM用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service)または、 [DEB用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service))を拡張します。 ```none [Service] @@ -134,6 +134,5 @@ Kubernetesに同梱されるDEB、RPMのパッケージは以下の通りです | `kubeadm` | `/usr/bin/kubeadm`CLIツールと、[kubelet用のsystemdファイル](#the-kubelet-drop-in-file-for-systemd)をインストールします。 | | `kubelet` | kubeletバイナリを`/usr/bin`に、CNIバイナリを`/opt/cni/bin`にインストールします。 | | `kubectl` | `/usr/bin/kubectl`バイナリをインストールします。 | -| `kubernetes-cni` | 公式のCNIバイナリを`/opt/cni/bin`ディレクトリにインストールします。 | | `cri-tools` | `/usr/bin/crictl`バイナリを[cri-tools gitリポジトリ](https://github.com/kubernetes-incubator/cri-tools)からインストールします。 | diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md b/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md index a7bee37727..9a6ceccb25 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md @@ -1,68 +1,48 @@ --- -title: Configuring your kubernetes cluster to self-host the control plane +title: コントロールプレーンをセルフホストするようにkubernetesクラスターを構成する content_type: concept weight: 100 --- -### Self-hosting the Kubernetes control plane {#self-hosting} +### コントロールプレーンのセルフホスティング {#self-hosting} -kubeadm allows you to experimentally create a _self-hosted_ Kubernetes control -plane. This means that key components such as the API server, controller -manager, and scheduler run as [DaemonSet pods](/ja/docs/concepts/workloads/controllers/daemonset/) -configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/) -configured in the kubelet via static files. +kubeadmを使用すると、セルフホスト型のKubernetesコントロールプレーンを実験的に作成できます。これはAPIサーバー、コントローラーマネージャー、スケジューラーなどの主要コンポーネントは、静的ファイルを介してkubeletで構成された[static pods](/docs/tasks/configure-pod-container/static-pod/)ではなく、Kubernetes APIを介して構成された[DaemonSet pods](/ja/docs/concepts/workloads/controllers/daemonset/)として実行されることを意味します。 -To create a self-hosted cluster see the -[kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting) command. +セルフホスト型クラスターを作成する場合は[kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting)を参照してください。 -#### Caveats +#### 警告 {{< caution >}} -This feature pivots your cluster into an unsupported state, rendering kubeadm unable -to manage you cluster any longer. This includes `kubeadm upgrade`. +この機能により、クラスターがサポートされていない状態になり、kubeadmがクラスターを管理できなくなります。これには`kubeadm upgrade`が含まれます。 {{< /caution >}} -1. Self-hosting in 1.8 and later has some important limitations. In particular, a - self-hosted cluster _cannot recover from a reboot of the control-plane node_ - without manual intervention. +1. 1.8以降のセルフホスティングには、いくつかの重要な制限があります。特に、セルフホスト型クラスターは、手動の介入なしにコントロールプレーンのNode再起動から回復することはできません。 -1. By default, self-hosted control plane Pods rely on credentials loaded from - [`hostPath`](/docs/concepts/storage/volumes/#hostpath) - volumes. 
Except for initial creation, these credentials are not managed by - kubeadm. +1. デフォルトでは、セルフホスト型のコントロールプレーンのPodは、[`hostPath`](/docs/concepts/storage/volumes/#hostpath)ボリュームからロードされた資格情報に依存しています。最初の作成を除いて、これらの資格情報はkubeadmによって管理されません。 -1. The self-hosted portion of the control plane does not include etcd, - which still runs as a static Pod. +1. コントロールプレーンのセルフホストされた部分にはetcdが含まれていませんが、etcdは静的Podとして実行されます。 -#### Process +#### プロセス -The self-hosting bootstrap process is documented in the [kubeadm design -document](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-self-hosting). +セルフホスティングのブートストラッププロセスは、[kubeadm design +document](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-self-hosting)に記載されています。 -In summary, `kubeadm alpha selfhosting` works as follows: +要約すると、`kubeadm alpha selfhosting`は次のように機能します。 - 1. Waits for this bootstrap static control plane to be running and - healthy. This is identical to the `kubeadm init` process without self-hosting. + 1. 静的コントロールプレーンのブートストラップが起動し、正常になるのを待ちます。これは`kubeadm init`のセルフホスティングを使用しないプロセスと同じです。 - 1. Uses the static control plane Pod manifests to construct a set of - DaemonSet manifests that will run the self-hosted control plane. - It also modifies these manifests where necessary, for example adding new volumes - for secrets. + 1. 静的コントロールプレーンのPodのマニフェストを使用して、セルフホスト型コントロールプレーンを実行する一連のDaemonSetのマニフェストを構築します。また、必要に応じてこれらのマニフェストを変更します。たとえば、シークレット用の新しいボリュームを追加します。 - 1. Creates DaemonSets in the `kube-system` namespace and waits for the - resulting Pods to be running. + 1. `kube-system`のネームスペースにDaemonSetを作成し、Podの結果が起動されるのを待ちます。 - 1. Once self-hosted Pods are operational, their associated static Pods are deleted - and kubeadm moves on to install the next component. This triggers kubelet to - stop those static Pods. + 1. セルフホスト型のPodが操作可能になると、関連する静的Podが削除され、kubeadmは次のコンポーネントのインストールに進みます。これによりkubeletがトリガーされて静的Podが停止します。 - 1. When the original static control plane stops, the new self-hosted control - plane is able to bind to listening ports and become active. + 1. 元の静的なコントロールプレーンが停止すると、新しいセルフホスト型コントロールプレーンはリスニングポートにバインドしてアクティブになります。 diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md index 0d73dc2df6..356101574d 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md @@ -29,7 +29,8 @@ when using kubeadm to set up a kubernetes cluster. * Three hosts that can talk to each other over ports 2379 and 2380. This document assumes these default ports. However, they are configurable through the kubeadm config file. -* Each host must [have docker, kubelet, and kubeadm installed][toolbox]. +* Each host must [have docker, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). +* Each host should have access to the Kubernetes container image registry (`k8s.gcr.io`) or list/pull the required etcd image using `kubeadm config images list/pull`. This guide will setup etcd instances as [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet. * Some infrastructure to copy files between hosts. For example `ssh` and `scp` can satisfy this requirement. 
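To illustrate the image pre-pull mentioned in the prerequisites above (a sketch only — the exact image names and tags vary by kubeadm release), the required etcd image can be listed and pulled ahead of time:

```shell
# Show the images this kubeadm release expects, including the etcd image.
kubeadm config images list

# Pre-pull them on each host that cannot reach k8s.gcr.io during cluster creation.
kubeadm config images pull
```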
diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index 8e9067a4eb..b6911ba298 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -6,68 +6,100 @@ weight: 20 -As with any program, you might run into an error installing or running kubeadm. -This page lists some common failure scenarios and have provided steps that can help you understand and fix the problem. +どのプログラムでもそうですが、kubeadmのインストールや実行でエラーが発生することがあります。このページでは、一般的な失敗例をいくつか挙げ、問題を理解して解決するための手順を示しています。 -If your problem is not listed below, please follow the following steps: - -- If you think your problem is a bug with kubeadm: - - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues. - - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template. - -- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include - relevant tags like `#kubernetes` and `#kubeadm` so folks can help you. +本ページに問題が記載されていない場合は、以下の手順を行ってください: +- 問題がkubeadmのバグによるものと思った場合: + - [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues)にアクセスして、既存のIssueを探してください。 + - Issueがない場合は、テンプレートにしたがって[新しくIssueを立ててください](https://github.com/kubernetes/kubeadm/issues/new)。 +- kubeadmがどのように動作するかわからない場合は、[Slack](http://slack.k8s.io/)の#kubeadmチャンネルで質問するか、[StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)で質問をあげてください。その際は、他の方が助けを出しやすいように`#kubernetes`や`#kubeadm`といったタグをつけてください。 +## RBACがないため、v1.18ノードをv1.17クラスタに結合できない +v1.18では、同名のノードが既に存在する場合にクラスタ内のノードに参加しないようにする機能を追加しました。これには、ブートストラップトークンユーザがNodeオブジェクトをGETできるようにRBACを追加する必要がありました。 + +しかし、これによりv1.18の`kubeadm join`がkubeadm v1.17で作成したクラスタに参加できないという問題が発生します。 + +この問題を回避するには、次の2つの方法があります。 +- kubeadm v1.18を用いて、コントロールプレーンノード上で`kubeadm init phase bootstrap-token`を実行します。 +これには、ブートストラップトークンの残りのパーミッションも同様に有効にすることに注意してください。 + +- `kubectl apply -f ...`を使って以下のRBACを手動で適用します。 + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: kubeadm:get-nodes +rules: +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: kubeadm:get-nodes +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: kubeadm:get-nodes +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:bootstrappers:kubeadm:default-node-token +``` + ## インストール中に`ebtables`もしくは他の似たような実行プログラムが見つからない -If you see the following warnings while running `kubeadm init` +`kubeadm init`の実行中に以下のような警告が表示された場合は、以降に記載するやり方を行ってください。 ```sh [preflight] WARNING: ebtables not found in system path [preflight] WARNING: ethtool not found in system path ``` -Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands: +このような場合、ノード上に`ebtables`, `ethtool`などの実行ファイルがない可能性があります。これらをインストールするには、以下のコマンドを実行します。 -- For Ubuntu/Debian users, run `apt install ebtables ethtool`. -- For CentOS/Fedora users, run `yum install ebtables ethtool`. 
+- Ubuntu/Debianユーザーは、`apt install ebtables ethtool`を実行してください。 +- CentOS/Fedoraユーザーは、`yum install ebtables ethtool`を実行してください。 ## インストール中にkubeadmがコントロールプレーンを待ち続けて止まる -If you notice that `kubeadm init` hangs after printing out the following line: +以下のを出力した後に`kubeadm init`が止まる場合は、`kubeadm init`を実行してください: ```sh [apiclient] Created API client, waiting for the control plane to become ready ``` -This may be caused by a number of problems. The most common are: +これはいくつかの問題が原因となっている可能性があります。最も一般的なのは: -- network connection problems. Check that your machine has full network connectivity before continuing. -- the default cgroup driver configuration for the kubelet differs from that used by Docker. - Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following: +- ネットワーク接続の問題が挙げられます。続行する前に、お使いのマシンがネットワークに完全に接続されていることを確認してください。 +- kubeletのデフォルトのcgroupドライバの設定がDockerで使用されているものとは異なっている場合も考えられます。 + システムログファイル(例: `/var/log/message`)をチェックするか、`journalctl -u kubelet`の出力を調べてください: ```shell error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs" ``` - There are two common ways to fix the cgroup driver problem: + 以上のようなエラーが現れていた場合、cgroupドライバの問題を解決するには、以下の2つの方法があります: - 1. Install Docker again following instructions - [here](/ja/docs/setup/independent/install-kubeadm/#installing-docker). + 1. [ここ](/ja/docs/setup/independent/install-kubeadm/#installing-docker)の指示に従ってDockerを再度インストールします。 - 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to - [Configure cgroup driver used by kubelet on Master Node](/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node) + 1. Dockerのcgroupドライバに合わせてkubeletの設定を手動で変更します。その際は、[マスターノード上でkubeletが使用するcgroupドライバを設定する](/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)を参照してください。 -- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`. +- control plane Dockerコンテナがクラッシュループしたり、ハングしたりしています。これは`docker ps`を実行し、`docker logs`を実行して各コンテナを調査することで確認できます。 ## 管理コンテナを削除する時にkubeadmが止まる -The following could happen if Docker halts and does not remove any Kubernetes-managed containers: +Dockerが停止して、Kubernetesで管理されているコンテナを削除しないと、以下のようなことが起こる可能性があります: ```bash sudo kubeadm reset @@ -78,95 +110,70 @@ sudo kubeadm reset (block) ``` -A possible solution is to restart the Docker service and then re-run `kubeadm reset`: +考えられる解決策は、Dockerサービスを再起動してから`kubeadm reset`を再実行することです: ```bash sudo systemctl restart docker.service sudo kubeadm reset ``` -Inspecting the logs for docker may also be useful: +dockerのログを調べるのも有効な場合があります: ```sh -journalctl -ul docker +journalctl -u docker ``` -## Podの状態が`RunContainerError`、`CrashLoopBackOff`、または`Error` +## Podの状態が`RunContainerError`、`CrashLoopBackOff`、または`Error`となる -Right after `kubeadm init` there should not be any pods in these states. +`kubeadm init`の直後には、これらの状態ではPodは存在しないはずです。 -- If there are pods in one of these states _right after_ `kubeadm init`, please open an - issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state - until you have deployed the network solution. 
-- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state - after deploying the network solution and nothing happens to `coredns` (or `kube-dns`), - it's very likely that the Pod Network solution that you installed is somehow broken. - You might have to grant it more RBAC privileges or use a newer version. Please file - an issue in the Pod Network providers' issue tracker and get the issue triaged there. -- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option - when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`. - MountFlags can interfere with volumes mounted by Kubernetes, and put the Pods in `CrashLoopBackOff` state. - The error happens when Kubernetes does not find `var/run/secrets/kubernetes.io/serviceaccount` files. +- `kubeadm init`の _直後_ にこれらの状態のいずれかにPodがある場合は、kubeadmのリポジトリにIssueを立ててください。ネットワークソリューションをデプロイするまでは`coredns`(または`kube-dns`)は`Pending`状態でなければなりません。 +- ネットワークソリューションをデプロイしても`coredns`(または`kube-dns`)に何も起こらない場合にRunContainerError`、`CrashLoopBackOff`、`Error`の状態でPodが表示された場合は、インストールしたPodネットワークソリューションが壊れている可能性が高いです。より多くのRBACの特権を付与するか、新しいバージョンを使用する必要があるかもしれません。PodネットワークプロバイダのイシュートラッカーにIssueを出して、そこで問題をトリアージしてください。 +- 1.12.1よりも古いバージョンのDockerをインストールした場合は、`systemd`で`dockerd`を起動する際に`MountFlags=slave`オプションを削除して`docker`を再起動してください。マウントフラグは`/usr/lib/systemd/system/docker.service`で確認できます。MountFlagsはKubernetesがマウントしたボリュームに干渉し、Podsを`CrashLoopBackOff`状態にすることがあります。このエラーは、Kubernetesが`var/run/secrets/kubernetes.io/serviceaccount`ファイルを見つけられない場合に発生します。 ## `coredns`(もしくは`kube-dns`)が`Pending`状態でスタックする -This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin -should [install the pod network solution](/docs/concepts/cluster-administration/addons/) -of choice. You have to install a Pod Network -before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up. +kubeadmはネットワークプロバイダに依存しないため、管理者は選択した[Podネットワークソリューションをインストール](/docs/concepts/cluster-administration/addons/)をする必要があります。CoreDNSを完全にデプロイする前にPodネットワークをインストールする必要があります。したがって、ネットワークがセットアップされる前の `Pending`状態になります。 ## `HostPort`サービスが動かない -The `HostPort` and `HostIP` functionality is available depending on your Pod Network -provider. Please contact the author of the Pod Network solution to find out whether -`HostPort` and `HostIP` functionality are available. +`HostPort`と`HostIP`の機能は、ご使用のPodネットワークプロバイダによって利用可能です。Podネットワークソリューションの作者に連絡して、`HostPort`と`HostIP`機能が利用可能かどうかを確認してください。 -Calico, Canal, and Flannel CNI providers are verified to support HostPort. +Calico、Canal、FlannelのCNIプロバイダは、HostPortをサポートしていることが確認されています。 -For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). +詳細については、[CNI portmap documentation] (https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md) を参照してください。 -If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of -services](/ja/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`. 
+ネットワークプロバイダが portmap CNI プラグインをサポートしていない場合は、[NodePortサービス](/ja/docs/concepts/services-networking/service/#nodeport)を使用するか、`HostNetwork=true`を使用してください。 ## サービスIP経由でPodにアクセスすることができない -- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip) - which allows pods to access themselves via their Service IP. This is an issue related to - [CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network - add-on provider to get the latest status of their support for hairpin mode. +- 多くのネットワークアドオンは、PodがサービスIPを介して自分自身にアクセスできるようにする[ヘアピンモード](/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)を有効にしていません。これは[CNI](https://github.com/containernetworking/cni/issues/476)に関連する問題です。ヘアピンモードのサポート状況については、ネットワークアドオンプロバイダにお問い合わせください。 -- If you are using VirtualBox (directly or via Vagrant), you will need to - ensure that `hostname -i` returns a routable IP address. By default the first - interface is connected to a non-routable host-only network. A work around - is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) - for an example. +- VirtualBoxを使用している場合(直接またはVagrant経由)は、`hostname -i`がルーティング可能なIPアドレスを返すことを確認する必要があります。デフォルトでは、最初のインターフェースはルーティング可能でないホスト専用のネットワークに接続されています。これを回避するには`/etc/hosts`を修正する必要があります。例としてはこの[Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)を参照してください。 ## TLS証明書のエラー -The following error indicates a possible certificate mismatch. +以下のエラーは、証明書の不一致の可能性を示しています。 ```none # kubectl get pods Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") ``` -- Verify that the `$HOME/.kube/config` file contains a valid certificate, and - regenerate a certificate if necessary. The certificates in a kubeconfig file - are base64 encoded. The `base64 --decode` command can be used to decode the certificate - and `openssl x509 -text -noout` can be used for viewing the certificate information. -- Unset the `KUBECONFIG` environment variable using: +- `HOME/.kube/config`ファイルに有効な証明書が含まれていることを確認し、必要に応じて証明書を再生成します。kubeconfigファイル内の証明書はbase64でエンコードされています。証明書をデコードするには`base64 --decode`コマンドを、証明書情報を表示するには`openssl x509 -text -noout`コマンドを用いてください。 +- 環境変数`KUBECONFIG`の設定を解除するには以下のコマンドを実行するか: ```sh unset KUBECONFIG ``` - Or set it to the default `KUBECONFIG` location: + 設定をデフォルトの`KUBECONFIG`の場所に設定します: ```sh export KUBECONFIG=/etc/kubernetes/admin.conf ``` -- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user: +- もう一つの回避策は、既存の`kubeconfig`を"admin"ユーザに上書きすることです: ```sh mv $HOME/.kube $HOME/.kube.bak @@ -177,38 +184,38 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( ## Vagrant内でPodネットワークとしてflannelを使用する時のデフォルトNIC -The following error might indicate that something was wrong in the pod network: +以下のエラーは、Podネットワークに何か問題があったことを示している可能性を示しています: ```sh Error from server (NotFound): the server could not find the requested resource ``` -- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel. +- Vagrant内のPodネットワークとしてflannelを使用している場合は、flannelのデフォルトのインターフェース名を指定する必要があります。 - Vagrant typically assigns two interfaces to all VMs. 
The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. + Vagrantは通常、2つのインターフェースを全てのVMに割り当てます。1つ目は全てのホストにIPアドレス`10.0.2.15`が割り当てられており、NATされる外部トラフィックのためのものです。 - This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen. + これは、ホストの最初のインターフェイスをデフォルトにしているflannelの問題につながるかもしれません。これは、すべてのホストが同じパブリックIPアドレスを持っていると考えます。これを防ぐには、2番目のインターフェイスが選択されるように `--iface eth1`フラグをflannelに渡してください。 ## 公開されていないIPがコンテナに使われている -In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster: +状況によっては、`kubectl logs`や`kubectl run`コマンドが以下のようなエラーを返すことがあります: ```sh Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host ``` -- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider. -- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one. +- これには、おそらくマシンプロバイダのポリシーによって、一見同じサブネット上の他のIPと通信できないIPをKubernetesが使用している可能性があります。 +- DigitalOceanはパブリックIPとプライベートIPを`eth0`に割り当てていますが、`kubelet`はパブリックIPではなく、ノードの`InternalIP`として後者を選択します。 - Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to DigitalOcean allows to query for the anchor IP from the droplet: + `ifconfig`ではエイリアスIPアドレスが表示されないため、`ifconfig`の代わりに`ip addr show`を使用してこのシナリオをチェックしてください。あるいは、DigitalOcean専用のAPIエンドポイントを使用して、ドロップレットからアンカーIPを取得することもできます: ```sh curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address ``` - The workaround is to tell `kubelet` which IP to use using `--node-ip`. When using DigitalOcean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) can be used for this. + 回避策としては、`--node-ip`を使ってどのIPを使うかを`kubelet`に伝えることです。DigitalOceanを使用する場合、オプションのプライベートネットワークを使用したい場合は、パブリックIP(`eth0`に割り当てられている)かプライベートIP(`eth1`に割り当てられている)のどちらかを指定します。これにはkubeadm `NodeRegistrationOptions`構造体の [`KubeletExtraArgs`セクション](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) が利用できます。 - Then restart `kubelet`: + `kubelet`を再起動してください: ```sh systemctl daemon-reload @@ -217,13 +224,12 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6 ## `coredns`のPodが`CrashLoopBackOff`もしくは`Error`状態になる -If you have nodes that are running SELinux with an older version of Docker you might experience a scenario -where the `coredns` pods are not starting. To solve that you can try one of the following options: +SELinuxを実行しているノードで古いバージョンのDockerを使用している場合、`coredns` Podが起動しないということが起きるかもしれません。この問題を解決するには、以下のオプションのいずれかを試してみてください: -- Upgrade to a [newer version of Docker](/ja/docs/setup/independent/install-kubeadm/#installing-docker). 
+- [新しいDockerのバージョン](/ja/docs/setup/independent/install-kubeadm/#installing-docker)にアップグレードする。 -- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux). -- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`: +- [SELinuxを無効化する](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux)。 +- `coredns`を変更して、`allowPrivilegeEscalation`を`true`に設定: ```bash kubectl -n kube-system get deployment coredns -o yaml | \ @@ -231,108 +237,84 @@ kubectl -n kube-system get deployment coredns -o yaml | \ kubectl apply -f - ``` -Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. [A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters) -are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits. +CoreDNSに`CrashLoopBackOff`が発生する別の原因は、KubernetesにデプロイされたCoreDNS Podがループを検出したときに発生します。CoreDNSがループを検出して終了するたびに、KubernetesがCoreDNS Podを再起動しようとするのを避けるために、[いくつかの回避策](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)が用意されています。 {{< warning >}} -Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise -the security of your cluster. +SELinuxを無効にするか`allowPrivilegeEscalation`を`true`に設定すると、クラスタのセキュリティが損なわれる可能性があります。 {{< /warning >}} ## etcdのpodが継続的に再起動する -If you encounter the following error: +以下のエラーが発生した場合は: ``` rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\"" ``` -this issue appears if you run CentOS 7 with Docker 1.13.1.84. -This version of Docker can prevent the kubelet from executing into the etcd container. +この問題は、CentOS 7をDocker 1.13.1.84で実行した場合に表示されます。このバージョンのDockerでは、kubeletがetcdコンテナに実行されないようにすることができます。 -To work around the issue, choose one of these options: +この問題を回避するには、以下のいずれかのオプションを選択します: -- Roll back to an earlier version of Docker, such as 1.13.1-75 +- 1.13.1-75のような以前のバージョンのDockerにロールバックする ``` yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64 ``` -- Install one of the more recent recommended versions, such as 18.06: +- 18.06のような最新の推奨バージョンをインストールする: ```bash sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo yum install docker-ce-18.06.1.ce-3.el7.x86_64 ``` -## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag +## コンマで区切られた値のリストを`--component-extra-args`フラグ内の引数に渡すことができない -`kubeadm init` flags such as `--component-extra-args` allow you to pass custom arguments to a control-plane -component like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing -the values (`mapStringString`). 
+`--component-extra-args`のような`kubeadm init`フラグを使うと、kube-apiserverのようなコントロールプレーンコンポーネントにカスタム引数を渡すことができます。しかし、このメカニズムは値の解析に使われる基本的な型 (`mapStringString`) のために制限されています。 -If you decide to pass an argument that supports multiple, comma-separated values such as -`--apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists"` this flag will fail with -`flag: malformed pair, expect string=string`. This happens because the list of arguments for -`--apiserver-extra-args` expects `key=value` pairs and in this case `NamespacesExists` is considered -as a key that is missing a value. +もし、`--apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists"`のようにカンマで区切られた複数の値をサポートする引数を渡した場合、このフラグは`flag: malformed pair, expect string=string`で失敗します。これは`--apiserver-extra-args`の引数リストが`key=value`のペアを期待しており、この場合`NamespacesExists`は値を欠いたキーとみなされるためです。 -Alternatively, you can try separating the `key=value` pairs like so: -`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"` -but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`. +別の方法として、`key=value`のペアを以下のように分離してみることもできます: +`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"`。しかし、この場合は、キー`enable-admission-plugins`は`NamespaceExists`の値しか持ちません。既知の回避策としては、kubeadmの[設定ファイル](/ja/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags)を使用することが挙げられます。 -A known workaround is to use the kubeadm [configuration file](/ja/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags). +## cloud-controller-managerによってノードが初期化される前にkube-proxyがスケジューリングされる -## kube-proxy scheduled before node is initialized by cloud-controller-manager +クラウドプロバイダのシナリオでは、クラウドコントローラマネージャがノードアドレスを初期化する前に、kube-proxyが新しいワーカーノードでスケジューリングされてしまうことがあります。これにより、kube-proxyがノードのIPアドレスを正しく拾えず、ロードバランサを管理するプロキシ機能に悪影響を及ぼします。 -In cloud provider scenarios, kube-proxy can end up being scheduled on new worker nodes before -the cloud-controller-manager has initialized the node addresses. This causes kube-proxy to fail -to pick up the node's IP address properly and has knock-on effects to the proxy function managing -load balancers. - -The following error can be seen in kube-proxy Pods: +kube-proxy Podでは以下のようなエラーが発生します: ``` server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: [] proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP ``` -A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane -nodes regardless of their conditions, keeping it off of other nodes until their initial guarding -conditions abate: +既知の解決策は、初期のガード条件が緩和されるまで他のノードから離しておき、条件に関係なくコントロールプレーンノード上でスケジューリングできるように、kube-proxyのDaemonSetにパッチを当てることです: + ``` kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ] } } } }' ``` -The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027). +この問題のトラッキングIssueは[こちら](https://github.com/kubernetes/kubeadm/issues/1027)です。 -## The NodeRegistration.Taints field is omitted when marshalling kubeadm configuration +## kubeadmの設定をマーシャリングする際、NodeRegistration.Taintsフィールドが省略される -*Note: This [issue](https://github.com/kubernetes/kubeadm/issues/1358) only applies to tools that marshal kubeadm types (e.g. 
to a YAML configuration file). It will be fixed in kubeadm API v1beta2.* +*注意: この[Issue](https://github.com/kubernetes/kubeadm/issues/1358)は、kubeadmタイプをマーシャルするツール(YAML設定ファイルなど)にのみ適用されます。これはkubeadm API v1beta2で修正される予定です。* -By default, kubeadm applies the `node-role.kubernetes.io/master:NoSchedule` taint to control-plane nodes. -If you prefer kubeadm to not taint the control-plane node, and set `InitConfiguration.NodeRegistration.Taints` to an empty slice, -the field will be omitted when marshalling. When the field is omitted, kubeadm applies the default taint. +デフォルトでは、kubeadmはコントロールプレーンノードに`node-role.kubernetes.io/master:NoSchedule`のテイントを適用します。kubeadmがコントロールプレーンノードにテイントを適用しないように、`InitConfiguration.NodeRegistration.Taints`を空のスライスに設定すると、マーシャリング時にこのフィールドは省略されます。フィールドが省略された場合、kubeadmはデフォルトのテイントを適用します。 -There are at least two workarounds: +少なくとも2つの回避策があります: -1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/configuration/taint-and-toleration/), unless other nodes have capacity. +1. 空のスライスの代わりに`node-role.kubernetes.io/master:PreferNoSchedule`テイントを使用します。他のノードに容量がない限り、[Podsはマスター上でスケジュールされます](/docs/concepts/scheduling-eviction/taint-and-toleration/)。 -2. Remove the taint after kubeadm init exits: +2. `kubeadm init`の終了後にテイントを除去します: ```bash kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule- ``` -## `/usr` is mounted read-only on nodes {#usr-mounted-read-only} +## ノードに`/usr`が読み取り専用でマウントされる {#usr-mounted-read-only} -On Linux distributions such as Fedora CoreOS, the directory `/usr` is mounted as a read-only filesystem. -For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md), -Kubernetes components like the kubelet and kube-controller-manager use the default path of -`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_ -for the feature to work. +Fedora CoreOSなどのLinuxディストリビューションでは、ディレクトリ`/usr`が読み取り専用のファイルシステムとしてマウントされます。 [flex-volumeサポート](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md)では、kubeletやkube-controller-managerのようなKubernetesコンポーネントはデフォルトで`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`のパスを使用していますが、この機能を動作させるためにはflex-volumeディレクトリは _書き込み可能_ な状態でなければなりません。 -To workaround this issue you can configure the flex-volume directory using the kubeadm -[configuration file](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2). +この問題を回避するには、kubeadmの[設定ファイル](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2)を使用してflex-volumeディレクトリを設定します。 -On the primary control-plane Node (created using `kubeadm init`) pass the following -file using `--config`: +プライマリコントロールプレーンノード(`kubeadm init`で作成されたもの)上で、`--config`で以下のファイルを渡します: ```yaml apiVersion: kubeadm.k8s.io/v1beta2 @@ -348,7 +330,7 @@ controllerManager: flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/" ``` -On joining Nodes: +ジョインするノードでは: ```yaml apiVersion: kubeadm.k8s.io/v1beta2 @@ -358,5 +340,9 @@ nodeRegistration: volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/" ``` -Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please
+あるいは、`/usr`マウントを書き込み可能にするために `/etc/fstab`を変更することもできますが、これはLinuxディストリビューションの設計原理を変更していることに注意してください。 + +## `kubeadm upgrade plan`が`context deadline exceeded`エラーメッセージを表示する +このエラーメッセージは、外部etcdを実行している場合に`kubeadm`でKubernetesクラスタをアップグレードする際に表示されます。これは致命的なバグではなく、古いバージョンのkubeadmが外部etcdクラスタのバージョンチェックを行うために発生します。`kubeadm upgrade apply ...`で進めることができます。 + +この問題はバージョン1.19で修正されます。 \ No newline at end of file diff --git a/content/ja/docs/setup/production-environment/tools/kubespray.md b/content/ja/docs/setup/production-environment/tools/kubespray.md index 6c02ca5374..e8c49078fd 100644 --- a/content/ja/docs/setup/production-environment/tools/kubespray.md +++ b/content/ja/docs/setup/production-environment/tools/kubespray.md @@ -8,7 +8,7 @@ weight: 30 This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). -Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: +Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: * a highly available cluster * composable attributes @@ -21,7 +21,8 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in * openSUSE Leap 15 * continuous integration tests -To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/). +To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to +[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/). @@ -50,7 +51,7 @@ Kubespray provides the following utilities to help provision your environment: ### (2/5) インベントリファイルの用意 -After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". +After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". ### (3/5) クラスタ作成の計画 @@ -68,7 +69,7 @@ Kubespray provides the ability to customize many aspects of the deployment: * {{< glossary_tooltip term_id="cri-o" >}} * Certificate generation methods -Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). 
If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. +Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. ### (4/5) クラスタのデプロイ @@ -110,7 +111,7 @@ When running the reset playbook, be sure not to accidentally target your product ## フィードバック -* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](http://slack.k8s.io/)) +* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)) * [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) diff --git a/content/ja/docs/setup/production-environment/turnkey/aws.md b/content/ja/docs/setup/production-environment/turnkey/aws.md index 1fc53a1f28..03246c1b06 100644 --- a/content/ja/docs/setup/production-environment/turnkey/aws.md +++ b/content/ja/docs/setup/production-environment/turnkey/aws.md @@ -20,9 +20,7 @@ AWS上でKubernetesクラスターを作成するには、AWSからアクセス * [Kubernetes Operations](https://github.com/kubernetes/kops) - プロダクショングレードなKubernetesのインストール、アップグレード、管理が可能です。AWS上のDebian、Ubuntu、CentOS、RHELをサポートしています。 -* [CoreOS Tectonic](https://coreos.com/tectonic/)はAWS上のContainer Linuxノードを含むKubernetesクラスターを作成できる、オープンソースの[Tectonic Installer](https://github.com/coreos/tectonic-installer)を含みます。 - -* CoreOSから生まれ、Kubernetes IncubatorがメンテナンスしているCLIツール[kube-aws](https://github.com/kubernetes-incubator/kube-aws)は、[Container Linux](https://coreos.com/why/)ノードを使用したAWSツール(EC2、CloudFormation、Auto Scaling)によるKubernetesクラスターを作成および管理できます。 +* [kube-aws](https://github.com/kubernetes-incubator/kube-aws) EC2、CloudFormation、Auto Scalingを使用して、[Flatcar Linux](https://www.flatcar-linux.org/)ノードでKubernetesクラスターを作成および管理します。 * [KubeOne](https://github.com/kubermatic/kubeone)は可用性の高いKubernetesクラスターを作成、アップグレード、管理するための、オープンソースのライフサイクル管理ツールです。 @@ -46,10 +44,10 @@ export PATH=/platforms/darwin/amd64:$PATH export PATH=/platforms/linux/amd64:$PATH ``` -ツールに関する最新のドキュメントページはこちらです: [kubectl manual](/docs/user-guide/kubectl/) +ツールに関する最新のドキュメントページはこちらです: [kubectl manual](/docs/reference/kubectl/kubectl/) デフォルトでは、`kubectl`はクラスターの起動中に生成された`kubeconfig`ファイルをAPIに対する認証に使用します。 -詳細な情報は、[kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)を参照してください。 +詳細な情報は、[kubeconfig files](/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)を参照してください。 ### 例 @@ -61,7 +59,7 @@ export PATH=/platforms/linux/amd64:$PATH ## クラスターのスケーリング -`kubectl`を使用したノードの追加および削除はサポートしていません。インストール中に作成された[Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html)内の'Desired'および'Max'プロパティを手動で調整することで、ノード数をスケールさせることができます。 +`kubectl`を使用したノードの追加および削除はサポートしていません。インストール中に作成された[Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html)内の'Desired'および'Max'プロパティを手動で調整することで、ノード数をスケールさせることができます。 ## クラスターの解体 @@ -77,12 +75,8 @@ cluster/kube-down.sh IaaS プロバイダー | 構成管理 | OS | ネットワーク | ドキュメント | 適合 | サポートレベル -------------------- | ------------ | ------------- | ------------ | --------------------------------------------- | ---------| ---------------------------- AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb)) -AWS | CoreOS | 
CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community -AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community +AWS | CoreOS | CoreOS | flannel | - | | Community +AWS | Juju | Ubuntu | flannel, calico, canal | - | 100% | Commercial, Community AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community -## 参考文献 - -Kubernetesクラスターの利用と管理に関する詳細は、[Kubernetesドキュメント](/ja/docs/)を参照してください。 - diff --git a/content/ja/docs/setup/production-environment/turnkey/clc.md b/content/ja/docs/setup/production-environment/turnkey/clc.md deleted file mode 100644 index b700456b87..0000000000 --- a/content/ja/docs/setup/production-environment/turnkey/clc.md +++ /dev/null @@ -1,340 +0,0 @@ ---- -title: CenturyLink Cloud上でKubernetesを動かす ---- - - -These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud. - -You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md). - -## ヘルプの検索 - -If you run into any problems or want help with anything, we are here to help. Reach out to use via any of the following ways: - -- Submit a github issue -- Send an email to Kubernetes AT ctl DOT io -- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes) - -## 仮想マシンもしくは物理サーバーのクラスター、その選択 - -- We support Kubernetes clusters on both Virtual Machines or Physical Servers. If you want to use physical servers for the worker nodes (minions), simple use the --minion_type=bareMetal flag. -- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/) -- Physical serves are only available in the VA1 and GB3 data centers. -- VMs are available in all 13 of our public cloud locations - -## 必要条件 - -The requirements to run this script are: - -- A linux administrative host (tested on ubuntu and macOS) -- python 2 (tested on 2.7.11) - - pip (installed with python as of 2.7.9) -- git -- A CenturyLink Cloud account with rights to create new hosts -- An active VPN connection to the CenturyLink Cloud from your linux host - -## スクリプトのインストール - -After you have all the requirements met, please follow these instructions to install this script. - -1) Clone this repository and cd into it. - -```shell -git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc -``` - -2) Install all requirements, including - - * Ansible - * CenturyLink Cloud SDK - * Ansible Modules - -```shell -sudo pip install -r ansible/requirements.txt -``` - -3) Create the credentials file from the template and use it to set your ENV variables - -```shell -cp ansible/credentials.sh.template ansible/credentials.sh -vi ansible/credentials.sh -source ansible/credentials.sh - -``` - -4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [ configuring a VPN connection to the CenturyLink Cloud network.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/) - - -#### スクリプトのインストールの例: Ububtu 14の手順 - -If you use an ubuntu 14, for your convenience we have provided a step by step -guide to install the requirements and install the script. 
- -```shell -# system -apt-get update -apt-get install -y git python python-crypto -curl -O https://bootstrap.pypa.io/get-pip.py -python get-pip.py - -# installing this repository -mkdir -p ~home/k8s-on-clc -cd ~home/k8s-on-clc -git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git -cd adm-kubernetes-on-clc/ -pip install -r requirements.txt - -# getting started -cd ansible -cp credentials.sh.template credentials.sh; vi credentials.sh -source credentials.sh -``` - - - -## クラスターの作成 - -To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete -list of script options and some examples are listed below. - -```shell -CLC_CLUSTER_NAME=[name of kubernetes cluster] -cd ./adm-kubernetes-on-clc -bash kube-up.sh -c="$CLC_CLUSTER_NAME" -``` - -It takes about 15 minutes to create the cluster. Once the script completes, it -will output some commands that will help you setup kubectl on your machine to -point to the new cluster. - -When the cluster creation is complete, the configuration files for it are stored -locally on your administrative host, in the following directory - -```shell -> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/ -``` - - -#### クラスターの作成: スクリプトのオプション - -```shell -Usage: kube-up.sh [OPTIONS] -Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster -Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in -order to access the CenturyLinkCloud API - -All options (both short and long form) require arguments, and must include "=" -between option name and option value. - - -h (--help) display this help and exit - -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names - -t= (--minion_type=) standard -> VM (default), bareMetal -> physical] - -d= (--datacenter=) VA1 (default) - -m= (--minion_count=) number of kubernetes minion nodes - -mem= (--vm_memory=) number of GB ram for each minion - -cpu= (--vm_cpu=) number of virtual cps for each minion node - -phyid= (--server_conf_id=) physical server configuration id, one of - physical_server_20_core_conf_id - physical_server_12_core_conf_id - physical_server_4_core_conf_id (default) - -etcd_separate_cluster=yes create a separate cluster of three etcd nodes, - otherwise run etcd on the master node -``` - -## クラスターの拡張 - -To expand an existing Kubernetes cluster, run the ```add-kube-node.sh``` -script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options). -This script must be run from the same host that created the cluster (or a host -that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```). - -```shell -cd ./adm-kubernetes-on-clc -bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2 -``` - -#### クラスターの拡張: スクリプトのオプション - -```shell -Usage: add-kube-node.sh [OPTIONS] -Create servers in the CenturyLinkCloud environment and add to an -existing CLC kubernetes cluster - -Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in -order to access the CenturyLinkCloud API - - -h (--help) display this help and exit - -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names - -m= (--minion_count=) number of kubernetes minion nodes to add -``` - -## クラスターの削除 - -There are two ways to delete an existing cluster: - -1) Use our python script: - -```shell -python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1 -``` - -2) Use the CenturyLink Cloud UI. 
To delete a cluster, log into the CenturyLink -Cloud control portal and delete the parent server group that contains the -Kubernetes Cluster. We hope to add a scripted option to do this soon. - -## 例 - -Create a cluster with name of k8s_1, 1 master node and 3 worker minions (on physical machines), in VA1 - -```shell -bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1 -``` - -Create a cluster with name of k8s_2, an ha etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1 - -```shell -bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes -``` - -Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VMs) with higher mem/cpu, in UC1: - -```shell -bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=VA1 -mem=6 -cpu=4 -``` - - - -## クラスターの機能とアーキテクチャ - -We configure the Kubernetes cluster with the following features: - -* KubeDNS: DNS resolution and service discovery -* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling. -* Grafana: Kubernetes/Docker metric dashboard -* KubeUI: Simple web interface to view Kubernetes state -* Kube Dashboard: New web interface to interact with your cluster - -We use the following to create the Kubernetes cluster: - -* Kubernetes 1.1.7 -* Ubuntu 14.04 -* Flannel 0.5.4 -* Docker 1.9.1-0~trusty -* Etcd 2.2.2 - -## 任意のアドオン - -* Logging: We offer an integrated centralized logging ELK platform so that all - Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack - and configure Kubernetes to send logs to it, follow [the log - aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as - the footprint isn't trivial. - -## クラスターの管理 - -The most widely used tool for managing a Kubernetes cluster is the command-line -utility ```kubectl```. If you do not already have a copy of this binary on your -administrative machine, you may run the script ```install_kubectl.sh``` which will -download it and install it in ```/usr/bin/local```. - -The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined. ```install_kubectl.sh``` also writes a configuration file which will embed the necessary -authentication certificates for the particular cluster. The configuration file is -written to the ```${CLC_CLUSTER_HOME}/kube``` directory - - -```shell -export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config -kubectl version -kubectl cluster-info -``` - -### プログラムでクラスターへアクセス - -It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice. - -To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate to the master apiserver via https: - -```shell -curl \ - --cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \ - --key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \ - --cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443 -``` - -But please note, this *does not* work out of the box with the ```curl``` binary -distributed with macOS. - -### ブラウザーを使ったクラスターへのアクセス - -We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). 
When you -create a cluster, the script should output URLs for these interfaces like this: - -kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```. - -Note on Authentication to the UIs: - -The cluster is set up to use basic authentication for the user _admin_. -Hitting the url at ```https://${MASTER_IP}:6443``` will -require accepting the self-signed certificate -from the apiserver, and then presenting the admin -password written to file at: ```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_``` - - -### 設定ファイル - -Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under ```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files -to access the cluster from machines other than where you created the cluster from. - -* ```config/```: Ansible variable files containing parameters describing the master and minion hosts -* ```hosts/```: hosts files listing access information for the Ansible playbooks -* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API -* ```pki/```: public key infrastructure files enabling TLS communication in the cluster -* ```ssh/```: SSH keys for root access to the hosts - - -## ```kubectl``` usage examples - -There are a great many features of _kubectl_. Here are a few examples - -List existing nodes, pods, services and more, in all namespaces, or in just one: - -```shell -kubectl get nodes -kubectl get --all-namespaces pods -kubectl get --all-namespaces services -kubectl get --namespace=kube-system replicationcontrollers -``` - -The Kubernetes API server exposes services on web URLs, which are protected by requiring -client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide -the necessary certificates and serve locally over http. - -```shell -kubectl proxy -p 8001 -``` - -Then, you can access urls like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser. - - -## どのKubernetesの機能がCenturyLink Cloud上で動かないのか - -These are the known items that don't work on CenturyLink cloud but do work on other cloud providers: - -- At this time, there is no support services of the type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016. - -- At this time, there is no support for persistent storage volumes provided by - CenturyLink Cloud. However, customers can bring their own persistent storage - offering. We ourselves use Gluster. - - -## Ansibleのファイル - -If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md) - -## 参考文献 - -Please see the [Kubernetes docs](/ja/docs/) for more details on administering -and using a Kubernetes cluster. 
- - - diff --git a/content/ja/docs/setup/production-environment/turnkey/gce.md b/content/ja/docs/setup/production-environment/turnkey/gce.md index b00d34ade6..dcd269446a 100644 --- a/content/ja/docs/setup/production-environment/turnkey/gce.md +++ b/content/ja/docs/setup/production-environment/turnkey/gce.md @@ -67,7 +67,7 @@ cluster/kube-up.sh ``` If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster. If you run into trouble, please see the section on [troubleshooting](/ja/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the -[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack). +[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on `#gke` Slack channel. The next few steps will show you: @@ -80,7 +80,7 @@ The next few steps will show you: The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation. -The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster +The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. @@ -93,7 +93,7 @@ gcloud components install kubectl {{< note >}} The kubectl version bundled with `gcloud` may be older than the one -downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/kubectl/install/) +downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/tools/install-kubectl/) document to see how you can set up the latest `kubectl` on your workstation. {{< /note >}} @@ -107,7 +107,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster. E.g., ru kubectl get --all-namespaces services ``` -should show a set of [services](/docs/user-guide/services) that look something like this: +should show a set of [services](/docs/concepts/services-networking/service/) that look something like this: ```shell NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE @@ -117,7 +117,7 @@ kube-system kube-ui ClusterIP 10.0.0.3 ... ``` -Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup. +Similarly, you can take a look at the set of [pods](/ja/docs/concepts/workloads/pods/) that were created during cluster startup. You can do this via the ```shell @@ -144,7 +144,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh ### いくつかの例の実行 -Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. +Then, see [a simple nginx example](/ja/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster. For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough. @@ -215,10 +215,3 @@ IaaS Provider | Config. 
Mgmt | OS | Networking | Docs -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- GCE | Saltstack | Debian | GCE | [docs](/ja/docs/setup/production-environment/turnkey/gce/) | | Project - -## 参考文献 - -Please see the [Kubernetes docs](/ja/docs/) for more details on administering -and using a Kubernetes cluster. - - diff --git a/content/ja/docs/setup/production-environment/turnkey/icp.md b/content/ja/docs/setup/production-environment/turnkey/icp.md index 9d1a0a17b3..1313f37ff0 100644 --- a/content/ja/docs/setup/production-environment/turnkey/icp.md +++ b/content/ja/docs/setup/production-environment/turnkey/icp.md @@ -25,13 +25,9 @@ The following modules are available where you can deploy IBM Cloud Private by us ## AWS上でのIBM Cloud Private -You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform. +You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) using Terraform. -IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide. - -This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/). - -IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md). +IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws). ## Azure上でのIBM Cloud Private @@ -64,4 +60,4 @@ You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. F The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud. -For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/services/vmwaresolutions/vmonic?topic=vmware-solutions-prod_overview#ibm-cloud-private-hosted). +For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview). 
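The Terraform-based IBM Cloud Private installs referenced above follow the standard Terraform CLI workflow. A minimal sketch is below; the repository URL comes from the text, while the `terraform.tfvars` file name and the idea of editing it before applying are illustrative assumptions, not steps taken from that repository:

```bash
# Clone the Terraform templates linked in the section above (assumed entry point).
git clone https://github.com/ibm-cloud-architecture/terraform-icp-aws.git
cd terraform-icp-aws

# Supply AWS credentials and any ICP-specific settings the templates expect.
# The actual variable names are documented in the repository's README; a
# terraform.tfvars file is only one conventional place to put them.
#   vi terraform.tfvars

# Standard Terraform workflow: initialize providers, review the plan, apply it.
terraform init
terraform plan
terraform apply
```

Consult the linked repository's documentation for the required variables and any additional post-install steps.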
diff --git a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 676f7f8a48..fec66df50a 100644 --- a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -14,7 +14,7 @@ Windowsアプリケーションは、多くの組織で実行されているサ ## KubernetesのWindowsコンテナ -KubernetesでWindowsコンテナのオーケストレーションを有効にする方法は、既存のLinuxクラスターにWindowsノードを含めるだけです。Kubernetesの[Pod](/ja/docs/concepts/workloads/pods/pod-overview/)でWindowsコンテナをスケジュールすることは、Linuxベースのコンテナをスケジュールするのと同じくらいシンプルで簡単です。 +KubernetesでWindowsコンテナのオーケストレーションを有効にする方法は、既存のLinuxクラスターにWindowsノードを含めるだけです。Kubernetesの{{< glossary_tooltip text="Pod" term_id="pod" >}}でWindowsコンテナをスケジュールすることは、Linuxベースのコンテナをスケジュールするのと同じくらいシンプルで簡単です。 Windowsコンテナを実行するには、Kubernetesクラスターに複数のオペレーティングシステムを含める必要があります。コントロールプレーンノードはLinux、ワーカーノードはワークロードのニーズに応じてWindowsまたはLinuxで実行します。Windows Server 2019は、サポートされている唯一のWindowsオペレーティングシステムであり、Windows (kubelet、[コンテナランタイム](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd)、kube-proxyを含む)で[Kubernetesノード](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)を有効にします。Windowsディストリビューションチャンネルの詳細については、[Microsoftのドキュメント](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19)を参照してください。 @@ -52,7 +52,7 @@ Windows Serverホストオペレーティングシステムには、[Windows Ser Kubernetesの主要な要素は、WindowsでもLinuxと同じように機能します。このセクションでは、主要なワークロードイネーブラーのいくつかと、それらがWindowsにどのようにマップされるかについて説明します。 -* [Pods](/ja/docs/concepts/workloads/pods/pod-overview/) +* [Pods](/ja/docs/concepts/workloads/pods/) Podは、Kubernetesにおける最も基本的な構成要素です。人間が作成またはデプロイするKubernetesオブジェクトモデルの中で最小かつ最もシンプルな単位です。WindowsとLinuxのコンテナを同じPodにデプロイすることはできません。Pod内のすべてのコンテナは、各ノードが特定のプラットフォームとアーキテクチャを表す単一のノードにスケジュールされます。次のPod機能、プロパティ、およびイベントがWindowsコンテナでサポートされています。: @@ -96,7 +96,27 @@ Pod、Controller、Serviceは、KubernetesでWindowsワークロードを管理 #### コンテナランタイム -KubernetesのWindows Server 2019/1809ノードでは、Docker EE-basic 18.09が必要です。これは、kubeletに含まれているdockershimコードで動作します。CRI-ContainerDなどの追加のランタイムは、Kubernetesの以降のバージョンでサポートされる可能性があります。 +##### Docker EE + +{{< feature-state for_k8s_version="v1.14" state="stable" >}} + +Docker EE-basic 18.09+は、Kubernetesを実行しているWindows Server 2019 / 1809ノードに推奨されるコンテナランタイムです。kubeletに含まれるdockershimコードで動作します。 + +##### CRI-ContainerD + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +ContainerDはLinux上のKubernetesで動作するOCI準拠のランタイムです。Kubernetes v1.18では、Windows上での{{< glossary_tooltip term_id="containerd" text="ContainerD" >}}のサポートが追加されています。Windows上でのContainerDの進捗状況は[enhancements#1001](https://github.com/kubernetes/enhancements/issues/1001)で確認できます。 + +{{< caution >}} + +Kubernetes v1.18におけるWindows上でのContainerDは以下の既知の欠点があります: + +* ContainerDは公式リリースではWindowsをサポートしていません。すなわち、Kubernetesでのすべての開発はアクティブなContainerD開発ブランチに対して行われています。本番環境へのデプロイは常に、完全にテストされセキュリティ修正をサポートした公式リリースを利用するべきです。 +* ContainerDを利用した場合、Group Managed Service Accountsは実装されていません。詳細は[containerd/cri#1276](https://github.com/containerd/cri/issues/1276)を参照してください。 + +{{< /caution >}} + #### 永続ストレージ @@ -404,7 +424,6 @@ Kubernetesクラスターのトラブルシューティングの主なヘルプ # kubelet.exeを登録 # マイクロソフトは、mcr.microsoft.com/k8s/core/pause:1.2.0としてポーズインフラストラクチャコンテナをリリース - # 詳細については、「KubernetesにWindowsノードを追加するためのガイド」で「pause」を検索してください nssm install kubelet C:\k\kubelet.exe nssm set kubelet AppParameters --hostname-override= --v=6 
--pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.2.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns= --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir= --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config nssm set kubelet AppDirectory C:\k @@ -516,7 +535,7 @@ Kubernetesクラスターのトラブルシューティングの主なヘルプ PauseイメージがOSバージョンと互換性があることを確認してください。[説明](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources)では、OSとコンテナの両方がバージョン1803であると想定しています。それ以降のバージョンのWindowsを使用している場合は、Insiderビルドなどでは、それに応じてイメージを調整する必要があります。イメージについては、Microsoftの[Dockerレジストリ](https://hub.docker.com/u/microsoft/)を参照してください。いずれにしても、PauseイメージのDockerfileとサンプルサービスの両方で、イメージに:latestのタグが付けられていると想定しています。 - Kubernetes v1.14以降、MicrosoftはPauseインフラストラクチャコンテナを`mcr.microsoft.com/k8s/core/pause:1.2.0`でリリースしています。詳細については、[KubernetesにWindowsノードを追加するためのガイド](../user-guide-windows-nodes)で「Pause」を検索してください。 + Kubernetes v1.14以降、MicrosoftはPauseインフラストラクチャコンテナを`mcr.microsoft.com/k8s/core/pause:1.2.0`でリリースしています。 1. DNS名前解決が正しく機能していない @@ -568,18 +587,16 @@ Kubernetesクラスターのトラブルシューティングの主なヘルプ ロードマップには多くの機能があります。高レベルの簡略リストを以下に示しますが、[ロードマッププロジェクト](https://github.com/orgs/kubernetes/projects/8)を見て、[貢献すること](https://github.com/kubernetes/community/blob/master/sig-windows/)によってWindowsサポートを改善することをお勧めします。 -### CRI-ContainerD -{{< glossary_tooltip term_id="containerd" >}}は、最近{{< glossary_tooltip text="CNCF" term_id="cncf" >}}プロジェクトとして卒業した、もう1つのOCI準拠ランタイムです。現在Linuxでテストされていますが、1.3はWindowsとHyper-Vをサポートします。[[リファレンス](https://blog.docker.com/2019/02/containerd-graduates-within-the-cncf/)] +### Hyper-V分離 -CRI-ContainerDインターフェイスは、Hyper-Vに基づいてサンドボックスを管理できるようになります。これにより、RuntimeClassを次のような新しいユースケースに実装できる基盤が提供されます: +Hyper-V分離はKubernetesで以下のWindowsコンテナのユースケースを実現するために必要です。 * Pod間のハイパーバイザーベースの分離により、セキュリティを強化 * 下位互換性により、コンテナの再構築を必要とせずにノードで新しいWindows Serverバージョンを実行 * Podの特定のCPU/NUMA設定 * メモリの分離と予約 -### Hyper-V分離 既存のHyper-V分離サポートは、v1.10の試験的な機能であり、上記のCRI-ContainerD機能とRuntimeClass機能を優先して将来廃止される予定です。現在の機能を使用してHyper-V分離コンテナを作成するには、kubeletのフィーチャーゲートを`HyperVContainer=true`で開始し、Podにアノテーション`experimental.windows.kubernetes.io/isolation-type=hyperv`を含める必要があります。実験的リリースでは、この機能はPodごとに1つのコンテナに制限されています。 @@ -609,7 +626,7 @@ spec: ### kubeadmとクラスターAPIを使用したデプロイ -Kubeadmは、ユーザーがKubernetesクラスターをデプロイするための事実上の標準になりつつあります。kubeadmのWindowsノードのサポートは、将来のリリースで提供予定です。Windowsノードが適切にプロビジョニングされるように、クラスターAPIにも投資しています。 +Kubeadmは、ユーザーがKubernetesクラスターをデプロイするための事実上の標準になりつつあります。kubeadmのWindowsノードのサポートは進行中ですが、ガイドはすでに[ここ](/ja/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)で利用可能です。Windowsノードが適切にプロビジョニングされるように、クラスターAPIにも投資しています。 ### その他の主な機能 * グループ管理サービスアカウントのベータサポート diff --git a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-install.gif b/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-install.gif deleted file mode 100644 index e3d94b9b54..0000000000 Binary files a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-install.gif and /dev/null differ diff --git a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-join.gif b/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-join.gif deleted file mode 100644 index 828417d685..0000000000 Binary files a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-join.gif and /dev/null differ diff --git 
a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif b/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif deleted file mode 100644 index e71d40d6df..0000000000 Binary files a/content/ja/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif and /dev/null differ diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md index 9e218fae53..6f1ed4558e 100644 --- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -19,7 +19,7 @@ Windowsアプリケーションは、多くの組織で実行されるサービ ## 始める前に -* [Windows Serverを実行するマスターノードとワーカーノード](/ja/docs/setup/production-environment/windows/user-guide-windows-nodes/)を含むKubernetesクラスターを作成します +* [Windows Serverを実行するマスターノードとワーカーノード](/ja/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)を含むKubernetesクラスターを作成します * Kubernetes上にServiceとワークロードを作成してデプロイすることは、LinuxコンテナとWindowsコンテナ共に、ほぼ同じように動作することに注意してください。クラスターとのインタフェースとなる[Kubectlコマンド](/docs/reference/kubectl/overview/)も同じです。Windowsコンテナをすぐに体験できる例を以下セクションに用意しています。 ## はじめに:Windowsコンテナのデプロイ @@ -96,7 +96,7 @@ spec: * ネットワークを介したノードとPod間通信、LinuxマスターからのPod IPのポート80に向けて`curl`して、ウェブサーバーの応答をチェックします * docker execまたはkubectl execを使用したPod間通信、Pod間(および複数のWindowsノードがある場合はホスト間)へのpingします * ServiceからPodへの通信、Linuxマスターおよび個々のPodからの仮想Service IP(`kubectl get services`で表示される)に`curl`します - * サービスディスカバリ、Kuberntesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します + * サービスディスカバリ、Kubernetesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します * Inbound connectivity, `curl` the NodePort from the Linux master or machines outside of the cluster * インバウンド接続、Linuxマスターまたはクラスター外のマシンからNodePortに`curl`します * アウトバウンド接続、kubectl execを使用したPod内からの外部IPに`curl`します diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md deleted file mode 100644 index 9f54861a94..0000000000 --- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ /dev/null @@ -1,306 +0,0 @@ ---- -title: Guide for adding Windows Nodes in Kubernetes -content_type: concept -weight: 70 ---- - - - -The Kubernetes platform can now be used to run both Linux and Windows containers. One or more Windows nodes can be registered to a cluster. This guide shows how to: - -* Register a Windows node to the cluster -* Configure networking so pods on Linux and Windows can communicate - - - - - -## Before you begin - -* Obtain a [Windows Server license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) in order to configure the Windows node that hosts Windows containers. You can use your organization's licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A [time-limited trial](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial) is also available. 
- -* Build a Linux-based Kubernetes cluster in which you have access to the control plane (some examples include [Creating a single control-plane cluster with kubeadm](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/ja/docs/setup/production-environment/turnkey/azure/), [GCE](/ja/docs/setup/production-environment/turnkey/gce/), [AWS](/ja/docs/setup/production-environment/turnkey/aws/). - -## Getting Started: Adding a Windows Node to Your Cluster - -### Plan IP Addressing - -Kubernetes cluster management requires careful planning of your IP addresses so that you do not inadvertently cause network collision. This guide assumes that you are familiar with the [Kubernetes networking concepts](/docs/concepts/cluster-administration/networking/). - -In order to deploy your cluster you need the following address spaces: - -| Subnet / address range | Description | Default value | -| --- | --- | --- | -| Service Subnet | A non-routable, purely virtual subnet that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by `kube-proxy` running on the nodes. | 10.96.0.0/12 | -| Cluster Subnet | This is a global subnet that is used by all pods in the cluster. Each node is assigned a smaller /24 subnet from this for their pods to use. It must be large enough to accommodate all pods used in your cluster. To calculate *minimumsubnet* size: `(number of nodes) + (number of nodes * maximum pods per node that you configure)`. Example: for a 5 node cluster for 100 pods per node: `(5) + (5 * 100) = 505.` | 10.244.0.0/16 | -| Kubernetes DNS Service IP | IP address of `kube-dns` service that is used for DNS resolution & cluster service discovery. | 10.96.0.10 | - -Review the networking options supported in 'Intro to Windows containers in Kubernetes: Supported Functionality: Networking' to determine how you need to allocate IP addresses for your cluster. - -### Components that run on Windows - -While the Kubernetes control plane runs on your Linux node(s), the following components are configured and run on your Windows node(s). - -1. kubelet -2. kube-proxy -3. kubectl (optional) -4. Container runtime - -Get the latest binaries from [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases), starting with v1.14 or later. The Windows-amd64 binaries for kubeadm, kubectl, kubelet, and kube-proxy can be found under the CHANGELOG link. - -### Networking Configuration - -Once you have a Linux-based Kubernetes master node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity. - -#### Configuring Flannel in VXLAN mode on the Linux controller - -1. Prepare Kubernetes master for Flannel - - Some minor preparation is recommended on the Kubernetes master in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. This can be done using the following command: - - ```bash - sudo sysctl net.bridge.bridge-nf-call-iptables=1 - ``` - -1. 
Download & configure Flannel - - Download the most recent Flannel manifest: - - ```bash - wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml - ``` - - There are two sections you should modify to enable the vxlan networking backend: - - After applying the steps below, the `net-conf.json` section of `kube-flannel.yml` should look as follows: - - ```json - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan", - "VNI" : 4096, - "Port": 4789 - } - } - ``` - - {{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) - for an explanation of these fields.{{< /note >}} - -1. In the `net-conf.json` section of your `kube-flannel.yml`, double-check: - 1. The cluster subnet (e.g. "10.244.0.0/16") is set as per your IP plan. - * VNI 4096 is set in the backend - * Port 4789 is set in the backend - 1. In the `cni-conf.json` section of your `kube-flannel.yml`, change the network name to `vxlan0`. - - - Your `cni-conf.json` should look as follows: - - ```json - cni-conf.json: | - { - "name": "vxlan0", - "plugins": [ - { - "type": "flannel", - "delegate": { - "hairpinMode": true, - "isDefaultGateway": true - } - }, - { - "type": "portmap", - "capabilities": { - "portMappings": true - } - } - ] - } - ``` - -1. Apply the Flannel yaml and Validate - - Let's apply the Flannel configuration: - - ```bash - kubectl apply -f kube-flannel.yml - ``` - - After a few minutes, you should see all the pods as running if the Flannel pod network was deployed. - - ```bash - kubectl get pods --all-namespaces - ``` - - The output looks like as follows: - - ``` - NAMESPACE NAME READY STATUS RESTARTS AGE - kube-system etcd-flannel-master 1/1 Running 0 1m - kube-system kube-apiserver-flannel-master 1/1 Running 0 1m - kube-system kube-controller-manager-flannel-master 1/1 Running 0 1m - kube-system kube-dns-86f4d74b45-hcx8x 3/3 Running 0 12m - kube-system kube-flannel-ds-54954 1/1 Running 0 1m - kube-system kube-proxy-Zjlxz 1/1 Running 0 1m - kube-system kube-scheduler-flannel-master 1/1 Running 0 1m - ``` - - Verify that the Flannel DaemonSet has the NodeSelector applied. - - ```bash - kubectl get ds -n kube-system - ``` - - The output looks like as follows. The NodeSelector `beta.kubernetes.io/os=linux` is applied. - - ``` - NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE - kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux 21d - kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 26d - ``` - -#### Join Windows Worker - -In this section we'll cover configuring a Windows node from scratch to join a cluster on-prem. If your cluster is on a cloud you'll likely want to follow the cloud specific guides in the next section. - -#### Preparing a Windows Node - -{{< note >}} -All code snippets in Windows sections are to be run in a PowerShell environment with elevated permissions (Admin). -{{< /note >}} - -1. Install Docker (requires a system reboot) - - Kubernetes uses [Docker](https://www.docker.com/) as its container engine, so we need to install it. 
You can follow the [official Docs instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon#install-docker), the [Docker instructions](https://store.docker.com/editions/enterprise/docker-ee-server-windows), or try the following *recommended* steps: - - ```PowerShell - Enable-WindowsOptionalFeature -FeatureName Containers - Restart-Computer -Force - Install-Module -Name DockerMsftProvider -Repository PSGallery -Force - Install-Package -Name Docker -ProviderName DockerMsftProvider - ``` - - If you are behind a proxy, the following PowerShell environment variables must be defined: - - ```PowerShell - [Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine) - [Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine) - ``` - - After reboot, you can verify that the docker service is ready with the command below. - - ```PowerShell - docker version - ``` - - If you see error message like the following, you need to start the docker service manually. - - ``` - Client: - Version: 17.06.2-ee-11 - API version: 1.30 - Go version: go1.8.7 - Git commit: 06fc007 - Built: Thu May 17 06:14:39 2018 - OS/Arch: windows / amd64 - error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.30/version: open //./pipe/docker_engine: The system c - annot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to - connect. This error may also indicate that the docker daemon is not running. - ``` - - You can start the docker service manually like below. - - ```PowerShell - Start-Service docker - ``` - - {{< note >}} - The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using "docker pull mcr.microsoft.com/k8s/core/pause:1.2.0". The DOCKERFILE is available at https://github.com/kubernetes-sigs/windows-testing/blob/master/images/pause/Dockerfile. - {{< /note >}} - -1. Prepare a Windows directory for Kubernetes - - Create a "Kubernetes for Windows" directory to store Kubernetes binaries as well as any deployment scripts and config files. - - ```PowerShell - mkdir c:\k - ``` - -1. Copy Kubernetes certificate - - Copy the Kubernetes certificate file `$HOME/.kube/config` [from the Linux controller](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/creating-a-linux-master#collect-cluster-information) to this new `C:\k` directory on your Windows node. - - Tip: You can use tools such as [xcopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/xcopy), [WinSCP](https://winscp.net/eng/download.php), or this [PowerShell wrapper for WinSCP](https://www.powershellgallery.com/packages/WinSCP/5.13.2.0) to transfer the config file between nodes. - -1. Download Kubernetes binaries - - To be able to run Kubernetes, you first need to download the `kubelet` and `kube-proxy` binaries. You download these from the Node Binaries links in the CHANGELOG.md file of the [latest releases](https://github.com/kubernetes/kubernetes/releases/). For example 'kubernetes-node-windows-amd64.tar.gz'. You may also optionally download `kubectl` to run on Windows which you can find under Client Binaries. 
- - Use the [Expand-Archive](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.archive/expand-archive?view=powershell-6) PowerShell command to extract the archive and place the binaries into `C:\k`. - -#### Join the Windows node to the Flannel cluster - -The Flannel overlay deployment scripts and documentation are available in [this repository](https://github.com/Microsoft/SDN/tree/master/Kubernetes/flannel/overlay). The following steps are a simple walkthrough of the more comprehensive instructions available there. - -Download the [Flannel start.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1) script, the contents of which should be extracted to `C:\k`: - -```PowerShell -cd c:\k -[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 -wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -o c:\k\start.ps1 -``` - -{{< note >}} -[start.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1) references [install.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/install.ps1), which downloads additional files such as the `flanneld` executable and the [Dockerfile for infrastructure pod](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/Dockerfile) and install those for you. For overlay networking mode, the [firewall](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/helper.psm1#L111) is opened for local UDP port 4789. There may be multiple powershell windows being opened/closed as well as a few seconds of network outage while the new external vSwitch for the pod network is being created the first time. Run the script using the arguments as specified below: -{{< /note >}} - -```PowerShell -cd c:\k -.\start.ps1 -ManagementIP ` - -NetworkMode overlay ` - -ClusterCIDR ` - -ServiceCIDR ` - -KubeDnsServiceIP ` - -LogDir -``` - -| Parameter | Default Value | Notes | -| --- | --- | --- | -| -ManagementIP | N/A (required) | The IP address assigned to the Windows node. You can use `ipconfig` to find this. | -| -NetworkMode | l2bridge | We're using `overlay` here | -| -ClusterCIDR | 10.244.0.0/16 | Refer to your cluster IP plan | -| -ServiceCIDR | 10.96.0.0/12 | Refer to your cluster IP plan | -| -KubeDnsServiceIP | 10.96.0.10 | | -| -InterfaceName | Ethernet | The name of the network interface of the Windows host. You can use ipconfig to find this. | -| -LogDir | C:\k | The directory where kubelet and kube-proxy logs are redirected into their respective output files. | - -Now you can view the Windows nodes in your cluster by running the following: - -```bash -kubectl get nodes -``` - -{{< note >}} -You may want to configure your Windows node components like kubelet and kube-proxy to run as services. View the services and background processes section under [troubleshooting](#troubleshooting) for additional instructions. Once you are running the node components as services, collecting logs becomes an important part of troubleshooting. View the [gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs) section of the contributing guide for further instructions. -{{< /note >}} - -### Public Cloud Providers - -#### Azure - -AKS-Engine can deploy a complete, customizable Kubernetes cluster with both Linux & Windows nodes. There is a step-by-step walkthrough available in the [docs on GitHub](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md). 
- -#### GCP - -Users can easily deploy a complete Kubernetes cluster on GCE following this step-by-step walkthrough on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md) - -#### Deployment with kubeadm and cluster API - -Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm will come in a future release. We are also making investments in cluster API to ensure Windows nodes are properly provisioned. - -### Next Steps - -Now that you've configured a Windows worker in your cluster to run Windows containers you may want to add one or more Linux nodes as well to run Linux containers. You are now ready to schedule Windows containers on your cluster. - diff --git a/content/ja/docs/setup/release/version-skew-policy.md b/content/ja/docs/setup/release/version-skew-policy.md index 5c1a18b8ee..eb0764bcc3 100644 --- a/content/ja/docs/setup/release/version-skew-policy.md +++ b/content/ja/docs/setup/release/version-skew-policy.md @@ -12,14 +12,16 @@ weight: 30 ## サポートされるバージョン {#supported-versions} -Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](http://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)を参照してください。 +Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](https://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)を参照してください。 -Kubernetesプロジェクトでは、最新の3つのマイナーリリースについてリリースブランチを管理しています。 +Kubernetesプロジェクトでは、最新の3つのマイナーリリースについてリリースブランチを管理しています ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}})。 + +セキュリティフィックスを含む適用可能な修正は、重大度や実行可能性によってはこれら3つのリリースブランチにバックポートされることもあります。パッチリリースは、これらのブランチから [定期的に](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence) 切り出され、必要に応じて追加の緊急リリースも行われます。 + + [リリースマネージャー](https://git.k8s.io/sig-release/release-managers.md)グループがこれを決定しています。 -セキュリティフィックスを含む適用可能な修正は、重大度や実行可能性によってはこれら3つのリリースブランチにバックポートされることもあります。パッチリリースは、定期的または必要に応じてこれらのブランチから分岐されます。[パッチリリースチーム](https://github.com/kubernetes/sig-release/blob/master/release-engineering/role-handbooks/patch-release-team.md#release-timing)がこれを決定しています。パッチリリースチームは[リリースマネージャー](https://github.com/kubernetes/sig-release/blob/master/release-managers.md)の一部です。 詳細は、[Kubernetesパッチリリース](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md)ページを参照してください。 -マイナーリリースは約3ヶ月ごとに行われるため、マイナーリリースのブランチはそれぞれ約9ヶ月保守されます。 ## サポートされるバージョンの差異 @@ -29,8 +31,8 @@ Kubernetesプロジェクトでは、最新の3つのマイナーリリースに 例: -* 最新の`kube-apiserver`が**1.13**であるとします -* ほかの`kube-apiserver`インスタンスは**1.13**および**1.12**がサポートされます +* 最新の`kube-apiserver`が**{{< skew latestVersion >}}**であるとします +* ほかの`kube-apiserver`インスタンスは**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**がサポートされます ### kubelet @@ -38,8 +40,8 @@ Kubernetesプロジェクトでは、最新の3つのマイナーリリースに 例: -* `kube-apiserver`が**1.13**であるとします -* `kubelet`は**1.13**、**1.12**および**1.11**がサポートされます +* `kube-apiserver`が**{{< skew latestVersion >}}**であるとします +* `kubelet`は**{{< skew latestVersion >}}**、**{{< skew prevMinorVersion >}}**および**{{< skew oldestMinorVersion >}}**がサポートされます {{< note >}} HAクラスター内の`kube-apiserver`間にバージョンの差異がある場合、有効な`kubelet`のバージョンは少なくなります。 @@ -47,8 +49,8 @@ 
HAクラスター内の`kube-apiserver`間にバージョンの差異がある 例: -* `kube-apiserver`インスタンスが**1.13**および**1.12**であるとします -* `kubelet`は**1.12**および**1.11**がサポートされます(**1.13**はバージョン**1.12**の`kube-apiserver`よりも新しくなるためサポートされません) +* `kube-apiserver`インスタンスが**{{< skew latestVersion >}}**および**1.12**であるとします +* `kubelet`は**{{< skew prevMinorVersion >}}**および**{{< skew oldestMinorVersion >}}**がサポートされます(**{{< skew latestVersion >}}**はバージョン**{{< skew prevMinorVersion >}}**の`kube-apiserver`よりも新しくなるためサポートされません) ### kube-controller-manager、kube-scheduler、およびcloud-controller-manager @@ -56,8 +58,8 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異がある 例: -* `kube-apiserver`が**1.13**であるとします -* `kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`は**1.13**および**1.12**がサポートされます +* `kube-apiserver`が**{{< skew latestVersion >}}**であるとします +* `kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`は**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**がサポートされます {{< note >}} HAクラスター内の`kube-apiserver`間にバージョンの差異があり、これらのコンポーネントがクラスター内のいずれかの`kube-apiserver`と通信する場合(たとえばロードバランサーを経由して)、コンポーネントの有効なバージョンは少なくなります。 @@ -65,8 +67,8 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異があり 例: -* `kube-apiserver`インスタンスが**1.13**および**1.12**であるとします -* いずれかの`kube-apiserver`インスタンスへ配信するロードバランサーと通信する`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`は**1.12**がサポートされます(**1.13**はバージョン**1.12**の`kube-apiserver`よりも新しくなるためサポートされません) +* `kube-apiserver`インスタンスが**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**であるとします +* いずれかの`kube-apiserver`インスタンスへ配信するロードバランサーと通信する`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`は**{{< skew prevMinorVersion >}}**がサポートされます(**{{< skew latestVersion >}}**はバージョン**{{< skew prevMinorVersion >}}**の`kube-apiserver`よりも新しくなるためサポートされません) ### kubectl @@ -74,8 +76,8 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異があり 例: -* `kube-apiserver`が**1.13**であるとします -* `kubectl`は**1.14**、**1.13**および**1.12**がサポートされます +* `kube-apiserver`が**{{< skew latestVersion >}}**であるとします +* `kubectl`は**{{< skew nextMinorVersion >}}**、**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**がサポートされます {{< note >}} HAクラスター内の`kube-apiserver`間にバージョンの差異がある場合、有効な`kubectl`バージョンは少なくなります。 @@ -83,26 +85,26 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異がある 例: -* `kube-apiserver`インスタンスが**1.13**および**1.12**であるとします -* `kubectl`は**1.13**および**1.12**がサポートされます(ほかのバージョンでは、ある`kube-apiserver`コンポーネントからマイナーバージョンが2つ以上離れる可能性があります) +* `kube-apiserver`インスタンスが**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**であるとします +* `kubectl`は**{{< skew latestVersion >}}**および**{{< skew prevMinorVersion >}}**がサポートされます(ほかのバージョンでは、ある`kube-apiserver`コンポーネントからマイナーバージョンが2つ以上離れる可能性があります) ## サポートされるコンポーネントのアップグレード順序 -コンポーネント間でサポートされるバージョンの差異は、コンポーネントをアップグレードする順序に影響されます。このセクションでは、既存のクラスターをバージョン**1.n**から**1.(n+1)** へ移行するために、コンポーネントをアップグレードする順序を説明します。 +コンポーネント間でサポートされるバージョンの差異は、コンポーネントをアップグレードする順序に影響されます。このセクションでは、既存のクラスターをバージョン**{{< skew prevMinorVersion >}}**から**{{< skew latestVersion >}}** へ移行するために、コンポーネントをアップグレードする順序を説明します。 ### kube-apiserver 前提条件: -* シングルインスタンスのクラスターにおいて、既存の`kube-apiserver`インスタンスは**1.n**とします -* HAクラスターにおいて、既存の`kube-apiserver`は**1.n**または**1.(n+1)** とします(最新と最古の間で、最大で1つのマイナーバージョンの差異となります) -* サーバーと通信する`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`はバージョン**1.n**とします(必ず既存のAPIサーバーのバージョンよりも新しいものでなく、かつ新しいAPIサーバーのバージョンの1つ以内のマイナーバージョンとなります) -* すべてのノードの`kubelet`インスタンスはバージョン**1.n**または**1.(n-1)** とします(必ず既存のAPIサーバーよりも新しいバージョンでなく、かつ新しいAPIサーバーのバージョンの2つ以内のマイナーバージョンとなります) +* シングルインスタンスのクラスターにおいて、既存の`kube-apiserver`インスタンスは**{{< 
skew prevMinorVersion >}}**とします +* HAクラスターにおいて、既存の`kube-apiserver`は**{{< skew prevMinorVersion >}}**または**{{< skew latestVersion >}}** とします(最新と最古の間で、最大で1つのマイナーバージョンの差異となります) +* サーバーと通信する`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`はバージョン**{{< skew prevMinorVersion >}}**とします(必ず既存のAPIサーバーのバージョンよりも新しいものでなく、かつ新しいAPIサーバーのバージョンの1つ以内のマイナーバージョンとなります) +* すべてのノードの`kubelet`インスタンスはバージョン**{{< skew prevMinorVersion >}}**または**{{< skew oldestMinorVersion >}}** とします(必ず既存のAPIサーバーよりも新しいバージョンでなく、かつ新しいAPIサーバーのバージョンの2つ以内のマイナーバージョンとなります) * 登録されたAdmission webhookは、新しい`kube-apiserver`インスタンスが送信するこれらのデータを扱うことができます: - * `ValidatingWebhookConfiguration`および`MutatingWebhookConfiguration`オブジェクトは、**1.(n+1)** で追加されたRESTリソースの新しいバージョンを含んで更新されます(または、v1.15から利用可能な[`matchPolicy: Equivalent`オプション](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy)を使用してください) - * Webhookは送信されたRESTリソースの新しいバージョン、および**1.(n+1)** のバージョンで追加された新しいフィールドを扱うことができます + * `ValidatingWebhookConfiguration`および`MutatingWebhookConfiguration`オブジェクトは、**{{< skew latestVersion >}}** で追加されたRESTリソースの新しいバージョンを含んで更新されます(または、v1.15から利用可能な[`matchPolicy: Equivalent`オプション](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy)を使用してください) + * Webhookは送信されたRESTリソースの新しいバージョン、および**{{< skew latestVersion >}}** のバージョンで追加された新しいフィールドを扱うことができます -`kube-apiserver`を**1.(n+1)** にアップグレードしてください。 +`kube-apiserver`を**{{< skew latestVersion >}}** にアップグレードしてください。 {{< note >}} [非推奨API](/docs/reference/using-api/deprecation-policy/)および[APIの変更ガイドライン](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md)のプロジェクトポリシーにおいては、シングルインスタンスの場合でも`kube-apiserver`のアップグレードの際にマイナーバージョンをスキップしてはなりません。 @@ -112,17 +114,17 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異がある 前提条件: -* これらのコンポーネントと通信する`kube-apiserver`インスタンスが**1.(n+1)** であること(これらのコントロールプレーンコンポーネントが、クラスター内の`kube-apiserver`インスタンスと通信できるHAクラスターでは、これらのコンポーネントをアップグレードする前にすべての`kube-apiserver`インスタンスをアップグレードしなければなりません) +* これらのコンポーネントと通信する`kube-apiserver`インスタンスが**{{< skew latestVersion >}}** であること(これらのコントロールプレーンコンポーネントが、クラスター内の`kube-apiserver`インスタンスと通信できるHAクラスターでは、これらのコンポーネントをアップグレードする前にすべての`kube-apiserver`インスタンスをアップグレードしなければなりません) -`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`を**1.(n+1)** にアップグレードしてください。 +`kube-controller-manager`、`kube-scheduler`および`cloud-controller-manager`を**{{< skew latestVersion >}}** にアップグレードしてください。 ### kubelet 前提条件: -* `kubelet`と通信する`kube-apiserver`が**1.(n+1)** であること +* `kubelet`と通信する`kube-apiserver`が**{{< skew latestVersion >}}** であること -必要に応じて、`kubelet`インスタンスを**1.(n+1)** にアップグレードしてください(**1.n**や**1.(n-1)** のままにすることもできます)。 +必要に応じて、`kubelet`インスタンスを**{{< skew latestVersion >}}** にアップグレードしてください(**{{< skew prevMinorVersion >}}**や**{{< skew oldestMinorVersion >}}** のままにすることもできます)。 {{< warning >}} `kube-apiserver`と2つのマイナーバージョンの`kubelet`インスタンスを使用してクラスターを実行させることは推奨されません: @@ -130,3 +132,18 @@ HAクラスター内の`kube-apiserver`間にバージョンの差異がある * コントロールプレーンをアップグレードする前に、インスタンスを`kube-apiserver`の1つのマイナーバージョン内にアップグレードさせる必要があります * メンテナンスされている3つのマイナーリリースよりも古いバージョンの`kubelet`を実行する可能性が高まります {{}} + + + +### kube-proxy + +* `kube-proxy`のマイナーバージョンはノード上の`kubelet`と同じマイナーバージョンでなければなりません +* `kube-proxy`は`kube-apiserver`よりも新しいものであってはなりません +* `kube-proxy`のマイナーバージョンは`kube-apiserver`のマイナーバージョンよりも2つ以上古いものでなければなりません + +例: + +`kube-proxy`のバージョンが**{{< skew oldestMinorVersion >}}**の場合: + +* `kubelet`のバージョンは**{{< skew oldestMinorVersion >}}**でなければなりません +* `kube-apiserver`のバージョンは**{{< skew oldestMinorVersion 
>}}**と**{{< skew latestVersion >}}**の間でなければなりません diff --git a/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index b2ebc16d18..d5f6b72296 100644 --- a/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -31,9 +31,9 @@ card: ## クラスター、ユーザー、コンテキストを設定する -例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。 +例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。 -`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください: +`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください: ```shell apiVersion: v1 @@ -61,7 +61,7 @@ contexts: 設定ファイルには、クラスター、ユーザー、コンテキストの情報が含まれています。上記の`config-demo`設定ファイルには、二つのクラスター、二人のユーザー、三つのコンテキストの情報が含まれています。 -`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください: +`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください: ```shell kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file @@ -89,7 +89,7 @@ kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=develo kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter ``` -追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。 +追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。 ```shell kubectl config --kubeconfig=config-demo view diff --git a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md index 563ce2478e..be26708099 100644 --- a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md @@ -134,28 +134,12 @@ weight: 100 1. 以下の内容で`example-ingress.yaml`を作成します。 - ```yaml - apiVersion: networking.k8s.io/v1beta1 - kind: Ingress - metadata: - name: example-ingress - annotations: - nginx.ingress.kubernetes.io/rewrite-target: /$1 - spec: - rules: - - host: hello-world.info - http: - paths: - - path: / - backend: - serviceName: web - servicePort: 8080 - ``` + {{< codenew file="service/networking/example-ingress.yaml" >}} 1. 次のコマンドを実行して、Ingressリソースを作成します。 ```shell - kubectl apply -f example-ingress.yaml + kubectl apply -f https://kubernetes.io/examples/service/networking/example-ingress.yaml ``` 出力は次のようになります。 @@ -175,8 +159,8 @@ weight: 100 {{< /note >}} ```shell - NAME HOSTS ADDRESS PORTS AGE - example-ingress hello-world.info 172.17.0.15 80 38s + NAME CLASS HOSTS ADDRESS PORTS AGE + example-ingress hello-world.info 172.17.0.15 80 38s ``` 1. 
次の行を`/etc/hosts`ファイルの最後に書きます。 @@ -241,9 +225,12 @@ weight: 100 ```yaml - path: /v2 + pathType: Prefix backend: - serviceName: web2 - servicePort: 8080 + service: + name: web2 + port: + number: 8080 ``` 1. 次のコマンドで変更を適用します。 @@ -300,6 +287,3 @@ weight: 100 * [Ingress](/ja/docs/concepts/services-networking/ingress/)についてさらに学ぶ。 * [Ingressコントローラー](/ja/docs/concepts/services-networking/ingress-controllers/)についてさらに学ぶ。 * [Service](/ja/docs/concepts/services-networking/service/)についてさらに学ぶ。 - - - diff --git a/content/ja/docs/tasks/configmap-secret/_index.md b/content/ja/docs/tasks/configmap-secret/_index.md new file mode 100755 index 0000000000..18a8018ce5 --- /dev/null +++ b/content/ja/docs/tasks/configmap-secret/_index.md @@ -0,0 +1,6 @@ +--- +title: "Secretの管理" +weight: 28 +description: Secretを使用した機密設定データの管理 +--- + diff --git a/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md new file mode 100644 index 0000000000..fb8c89c1e3 --- /dev/null +++ b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -0,0 +1,146 @@ +--- +title: kubectlを使用してSecretを管理する +content_type: task +weight: 10 +description: kubectlコマンドラインを使用してSecretを作成する +--- + + + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## Secretを作成する + +`Secret`はデータベースにアクセスするためにPodが必要とするユーザー資格情報を含めることができます。 +たとえば、データベース接続文字列はユーザー名とパスワードで構成されます。 +ユーザー名はローカルマシンの`./username.txt`に、パスワードは`./password.txt`に保存します。 + +```shell +echo -n 'admin' > ./username.txt +echo -n '1f2d1e2e67df' > ./password.txt +``` + +上記の2つのコマンドの`-n`フラグは、生成されたファイルにテキスト末尾の余分な改行文字が含まれないようにします。 +`kubectl`がファイルを読み取り、内容をbase64文字列にエンコードすると、余分な改行文字もエンコードされるため、これは重要です。 + +`kubectl create secret`コマンドはこれらのファイルをSecretにパッケージ化し、APIサーバー上にオブジェクトを作成します。 + +```shell +kubectl create secret generic db-user-pass \ + --from-file=./username.txt \ + --from-file=./password.txt +``` + +出力は次のようになります: + +``` +secret/db-user-pass created +``` + +ファイル名がデフォルトのキー名になります。オプションで`--from-file=[key=]source`を使用してキー名を設定できます。たとえば: + +```shell +kubectl create secret generic db-user-pass \ + --from-file=username=./username.txt \ + --from-file=password=./password.txt +``` + +`--from-file`に指定したファイルに含まれるパスワードの特殊文字をエスケープする必要はありません。 + +また、`--from-literal==`タグを使用してSecretデータを提供することもできます。 +このタグは、複数のキーと値のペアを提供するために複数回指定することができます。 +`$`、`\`、`*`、`=`、`!`などの特殊文字は[シェル](https://en.wikipedia.org/wiki/Shell_(computing))によって解釈されるため、エスケープを必要とすることに注意してください。 +ほとんどのシェルでは、パスワードをエスケープする最も簡単な方法は、シングルクォート(`'`)で囲むことです。 +たとえば、実際のパスワードが`S!B\*d$zDsb=`の場合、次のようにコマンドを実行します: + +```shell +kubectl create secret generic dev-db-secret \ + --from-literal=username=devuser \ + --from-literal=password='S!B\*d$zDsb=' +``` + +## Secretを検証する + +Secretが作成されたことを確認できます: + +```shell +kubectl get secrets +``` + +出力は次のようになります: + +``` +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s +``` + +`Secret`の説明を参照できます: + +```shell +kubectl describe secrets/db-user-pass +``` + +出力は次のようになります: + +``` +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password: 12 bytes +username: 5 bytes +``` + +`kubectl get`と`kubectl describe`コマンドはデフォルトでは`Secret`の内容を表示しません。 +これは、`Secret`が不用意に他人にさらされたり、ターミナルログに保存されたりしないようにするためです。 + +## Secretをデコードする {#decoding-secret} + +先ほど作成したSecretの内容を見るには、以下のコマンドを実行します: + +```shell +kubectl get secret db-user-pass -o jsonpath='{.data}' +``` + +出力は次のようになります: + +```json +{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +``` + +`password.txt`のデータをデコードします: + +```shell 
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode +``` + +出力は次のようになります: + +``` +1f2d1e2e67df +``` + +## クリーンアップ + +作成したSecretを削除するには次のコマンドを実行します: + +```shell +kubectl delete secret db-user-pass +``` + + + +## {{% heading "whatsnext" %}} + +- [Secretのコンセプト](/ja/docs/concepts/configuration/secret/)を読む +- [設定ファイルを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を知る +- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を知る diff --git a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md index 83f7b52c85..ac20713a8d 100644 --- a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md +++ b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md @@ -33,7 +33,7 @@ kubectl delete pods 上記がグレースフルターミネーションにつながるためには、`pod.Spec.TerminationGracePeriodSeconds`に0を指定しては**いけません**。`pod.Spec.TerminationGracePeriodSeconds`を0秒に設定することは安全ではなく、StatefulSet Podには強くお勧めできません。グレースフル削除は安全で、kubeletがapiserverから名前を削除する前にPodが[適切にシャットダウンする](/ja/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)ことを保証します。 -Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/docs/concepts/architecture/nodes/#node-condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです: +Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/ja/docs/concepts/architecture/nodes/#condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです: * (ユーザーまたは[Node Controller](/ja/docs/concepts/architecture/nodes/)によって)Nodeオブジェクトが削除されます。 * 応答していないNodeのkubeletが応答を開始し、Podを終了してapiserverからエントリーを削除します。 @@ -76,4 +76,3 @@ StatefulSet Podの強制削除は、常に慎重に、関連するリスクを [StatefulSetのデバッグ](/docs/tasks/debug-application-cluster/debug-stateful-set/)の詳細 - diff --git a/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md new file mode 100644 index 0000000000..445742e1d6 --- /dev/null +++ b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -0,0 +1,403 @@ +--- +title: Horizontal Pod Autoscalerウォークスルー +content_type: task +weight: 100 +--- + + + +Horizontal Pod Autoscalerは、Deployment、ReplicaSetまたはStatefulSetといったレプリケーションコントローラ内のPodの数を、観測されたCPU使用率(もしくはベータサポートの、アプリケーションによって提供されるその他のメトリクス)に基づいて自動的にスケールさせます。 + +このドキュメントはphp-apacheサーバーに対しHorizontal Pod Autoscalerを有効化するという例に沿ってウォークスルーで説明していきます。Horizontal Pod Autoscalerの動作についてのより詳細な情報を知りたい場合は、[Horizontal Pod Autoscalerユーザーガイド](/docs/tasks/run-application/horizontal-pod-autoscale/)をご覧ください。 + +## {{% heading "前提条件" %}} + +この例ではバージョン1.2以上の動作するKubernetesクラスターおよびkubectlが必要です。 +[Metrics API](https://github.com/kubernetes/metrics)を介してメトリクスを提供するために、[Metrics server](https://github.com/kubernetes-sigs/metrics-server)によるモニタリングがクラスター内にデプロイされている必要があります。 +Horizontal Pod Autoscalerはメトリクスを収集するためにこのAPIを利用します。metrics-serverをデプロイする方法を知りたい場合は[metrics-server ドキュメント](https://github.com/kubernetes-sigs/metrics-server#deployment)をご覧ください。 + +Horizontal Pod Autoscalerで複数のリソースメトリクスを利用するためには、バージョン1.6以上のKubernetesクラスターおよびkubectlが必要です。カスタムメトリクスを使えるようにするためには、あなたのクラスターがカスタムメトリクスAPIを提供するAPIサーバーと通信できる必要があります。 
+最後に、Kubernetesオブジェクトと関係のないメトリクスを使うにはバージョン1.10以上のKubernetesクラスターおよびkubectlが必要で、さらにあなたのクラスターが外部メトリクスAPIを提供するAPIサーバーと通信できる必要があります。 +詳細については[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics)をご覧ください。 + + + +## php-apacheの起動と公開 + +Horizontal Pod Autoscalerのデモンストレーションのために、php-apacheイメージをもとにしたカスタムのDockerイメージを使います。 +このDockerfileは下記のようになっています。 + +```dockerfile +FROM php:5-apache +COPY index.php /var/www/html/index.php +RUN chmod a+rx index.php +``` + +これはCPU負荷の高い演算を行うindex.phpを定義しています。 + +```php + +``` + +まず最初に、イメージを動かすDeploymentを起動し、Serviceとして公開しましょう。 +下記の設定を使います。 + +{{< codenew file="application/php-apache.yaml" >}} + +以下のコマンドを実行してください。 + +```shell +kubectl apply -f https://k8s.io/examples/application/php-apache.yaml +``` + +``` +deployment.apps/php-apache created +service/php-apache created +``` + +## Horizontal Pod Autoscalerを作成する + +サーバーが起動したら、[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale)を使ってautoscalerを作成しましょう。以下のコマンドで、最初のステップで作成したphp-apache deploymentによって制御されるPodレプリカ数を1から10の間に維持するHorizontal Pod Autoscalerを作成します。 +簡単に言うと、HPAは(Deploymentを通じて)レプリカ数を増減させ、すべてのPodにおける平均CPU使用率を50%(それぞれのPodは`kubectl run`で200 milli-coresを要求しているため、平均CPU使用率100 milli-coresを意味します)に保とうとします。 +このアルゴリズムについての詳細は[こちら](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)をご覧ください。 + +```shell +kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 +``` + +``` +horizontalpodautoscaler.autoscaling/php-apache autoscaled +``` + +以下を実行して現在のAutoscalerの状況を確認できます。 + +```shell +kubectl get hpa +``` + +``` +NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE +php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s +``` + +現在はサーバーにリクエストを送っていないため、CPU使用率が0%になっていることに注意してください(`TARGET`カラムは対応するDeploymentによって制御される全てのPodの平均値を示しています。)。 + +## 負荷の増加 + +Autoscalerがどのように負荷の増加に反応するか見てみましょう。 +コンテナを作成し、クエリの無限ループをphp-apacheサーバーに送ってみます(これは別のターミナルで実行してください)。 + +```shell +kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done" +``` + +数分以内に、下記を実行することでCPU負荷が高まっていることを確認できます。 + +```shell +kubectl get hpa +``` + +``` +NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE +php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m +``` + +ここでは、CPU使用率はrequestの305%にまで高まっています。 +結果として、Deploymentはレプリカ数7にリサイズされました。 + +```shell +kubectl get deployment php-apache +``` + +``` +NAME READY UP-TO-DATE AVAILABLE AGE +php-apache 7/7 7 7 19m +``` + +{{< note >}} +レプリカ数が安定するまでは数分かかることがあります。負荷量は何らかの方法で制御されているわけではないので、最終的なレプリカ数はこの例とは異なる場合があります。 +{{< /note >}} + +## 負荷の停止 + +ユーザー負荷を止めてこの例を終わらせましょう。 + +私たちが`busybox`イメージを使って作成したコンテナ内のターミナルで、` + C`を入力して負荷生成を終了させます。 + +そして結果の状態を確認します(数分後)。 + +```shell +kubectl get hpa +``` + +``` +NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE +php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m +``` + +```shell +kubectl get deployment php-apache +``` + +``` +NAME READY UP-TO-DATE AVAILABLE AGE +php-apache 1/1 1 1 27m +``` + +ここでCPU使用率は0に下がり、HPAによってオートスケールされたレプリカ数は1に戻ります。 + +{{< note >}} +レプリカのオートスケールには数分かかることがあります。 +{{< /note >}} + + + +## 複数のメトリクスやカスタムメトリクスを基にオートスケーリングする + +`autoscaling/v2beta2` APIバージョンと使うと、`php-apache` Deploymentをオートスケーリングする際に使う追加のメトリクスを導入することが出来ます。 + +まず、`autoscaling/v2beta2`内のHorizontalPodAutoscalerのYAMLファイルを入手します。 + +```shell +kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml +``` + +`/tmp/hpa-v2.yaml`ファイルをエディタで開くと、以下のようなYAMLファイルが見えるはずです。 + +```yaml +apiVersion: 
autoscaling/v2beta2 +kind: HorizontalPodAutoscaler +metadata: + name: php-apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 +status: + observedGeneration: 1 + lastScaleTime: + currentReplicas: 1 + desiredReplicas: 1 + currentMetrics: + - type: Resource + resource: + name: cpu + current: + averageUtilization: 0 + averageValue: 0 +``` + +`targetCPUUtilizationPercentage`フィールドは`metrics`と呼ばれる配列に置換されています。 +CPU使用率メトリクスは、Podコンテナで定められたリソースの割合として表されるため、*リソースメトリクス*です。CPU以外のリソースメトリクスを指定することもできます。デフォルトでは、他にメモリだけがリソースメトリクスとしてサポートされています。これらのリソースはクラスター間で名前が変わることはなく、そして`metrics.k8s.io` APIが利用可能である限り常に利用可能です。 + +さらに`target.type`において`Utilization`の代わりに`AverageValue`を使い、`target.averageUtilization`フィールドの代わりに対応する`target.averageValue`フィールドを設定することで、リソースメトリクスをrequest値に対する割合に代わり、直接的な値に設定することも可能です。 + +PodメトリクスとObjectメトリクスという2つの異なる種類のメトリクスが存在し、どちらも*カスタムメトリクス*とみなされます。これらのメトリクスはクラスター特有の名前を持ち、利用するにはより発展的なクラスター監視設定が必要となります。 + +これらの代替メトリクスタイプのうち、最初のものが*Podメトリクス*です。これらのメトリクスはPodを説明し、Podを渡って平均され、レプリカ数を決定するためにターゲット値と比較されます。 +これらはほとんどリソースメトリクス同様に機能しますが、`target`の種類としては`AverageValue`*のみ*をサポートしている点が異なります。 + +Podメトリクスはmetricブロックを使って以下のように指定されます。 + +```yaml +type: Pods +pods: + metric: + name: packets-per-second + target: + type: AverageValue + averageValue: 1k +``` + +2つ目のメトリクスタイプは*Objectメトリクス*です。これらのメトリクスはPodを説明するかわりに、同一Namespace内の異なったオブジェクトを説明します。このメトリクスはオブジェクトから取得される必要はありません。単に説明するだけです。Objectメトリクスは`target`の種類として`Value`と`AverageValue`をサポートします。`Value`では、ターゲットはAPIから返ってきたメトリクスと直接比較されます。`AverageValue`では、カスタムメトリクスAPIから返ってきた値はターゲットと比較される前にPodの数で除算されます。以下の例は`requests-per-second`メトリクスのYAML表現です。 + +```yaml +type: Object +object: + metric: + name: requests-per-second + describedObject: + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + name: main-route + target: + type: Value + value: 2k +``` + +もしこのようなmetricブロックを複数提供した場合、HorizontalPodAutoscalerはこれらのメトリクスを順番に処理します。 +HorizontalPodAutoscalerはそれぞれのメトリクスについて推奨レプリカ数を算出し、その中で最も多いレプリカ数を採用します。 + +例えば、もしあなたがネットワークトラフィックについてのメトリクスを収集する監視システムを持っているなら、`kubectl edit`を使って指定を次のように更新することができます。 + +```yaml +apiVersion: autoscaling/v2beta2 +kind: HorizontalPodAutoscaler +metadata: + name: php-apache +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: php-apache + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 + - type: Pods + pods: + metric: + name: packets-per-second + target: + type: AverageValue + averageValue: 1k + - type: Object + object: + metric: + name: requests-per-second + describedObject: + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + name: main-route + target: + type: Value + value: 10k +status: + observedGeneration: 1 + lastScaleTime: + currentReplicas: 1 + desiredReplicas: 1 + currentMetrics: + - type: Resource + resource: + name: cpu + current: + averageUtilization: 0 + averageValue: 0 + - type: Object + object: + metric: + name: requests-per-second + describedObject: + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + name: main-route + current: + value: 10k +``` + +この時、HorizontalPodAutoscalerはそれぞれのPodがCPU requestの50%を使い、1秒当たり1000パケットを送信し、そしてmain-route +Ingressの裏にあるすべてのPodが合計で1秒当たり10000パケットを送信する状態を保持しようとします。 + +### より詳細なメトリクスをもとにオートスケーリングする + +多くのメトリクスパイプラインは、名前もしくは _labels_ 
と呼ばれる追加の記述子の組み合わせによって説明することができます。全てのリソースメトリクス以外のメトリクスタイプ(Pod、Object、そして下で説明されている外部メトリクス)において、メトリクスパイプラインに渡す追加のラベルセレクターを指定することができます。例えば、もしあなたが`http_requests`メトリクスを`verb`ラベルとともに収集しているなら、下記のmetricブロックを指定してGETリクエストにのみ基づいてスケールさせることができます。 + +```yaml +type: Object +object: + metric: + name: http_requests + selector: {matchLabels: {verb: GET}} +``` + +このセレクターは完全なKubernetesラベルセレクターと同じ文法を利用します。もし名前とセレクターが複数の系列に一致した場合、この監視パイプラインはどのようにして複数の系列を一つの値にまとめるかを決定します。このセレクターは付加的なもので、ターゲットオブジェクト(`Pods`タイプの場合は対象Pod、`Object`タイプの場合は説明されるオブジェクト)では**ない**オブジェクトを説明するメトリクスを選択することは出来ません。 + +### Kubernetesオブジェクトと関係ないメトリクスに基づいたオートスケーリング + +Kubernetes上で動いているアプリケーションを、Kubernetes Namespaceと直接的な関係がないサービスを説明するメトリクスのような、Kubernetesクラスター内のオブジェクトと明確な関係が無いメトリクスを基にオートスケールする必要があるかもしれません。Kubernetes 1.10以降では、このようなユースケースを*外部メトリクス*によって解決できます。 + +外部メトリクスを使うにはあなたの監視システムについての知識が必要となります。この設定はカスタムメトリクスを使うときのものに似ています。外部メトリクスを使うとあなたの監視システムのあらゆる利用可能なメトリクスに基づいてクラスターをオートスケールできるようになります。上記のように`metric`ブロックで`name`と`selector`を設定し、`Object`のかわりに`External`メトリクスタイプを使います。 +もし複数の時系列が`metricSelector`により一致した場合は、それらの値の合計がHorizontalPodAutoscalerに使われます。 +外部メトリクスは`Value`と`AverageValue`の両方のターゲットタイプをサポートしています。これらの機能は`Object`タイプを利用するときとまったく同じです。 + +例えばもしあなたのアプリケーションがホストされたキューサービスからのタスクを処理している場合、あなたは下記のセクションをHorizontalPodAutoscalerマニフェストに追記し、未処理のタスク30個あたり1つのワーカーを必要とすることを指定します。 + +```yaml +- type: External + external: + metric: + name: queue_messages_ready + selector: "queue=worker_tasks" + target: + type: AverageValue + averageValue: 30 +``` + +可能なら、クラスター管理者がカスタムメトリクスAPIを保護することを簡単にするため、外部メトリクスのかわりにカスタムメトリクスを用いることが望ましいです。外部メトリクスAPIは潜在的に全てのメトリクスへのアクセスを許可するため、クラスター管理者はこれを公開する際には注意が必要です。 + +## 付録: Horizontal Pod Autoscaler status conditions + +`autoscaling/v2beta2`形式のHorizontalPodAutoscalerを使っている場合は、KubernetesによるHorizontalPodAutoscaler上の*status conditions*セットを見ることができます。status conditionsはHorizontalPodAutoscalerがスケール可能かどうか、そして現時点でそれが何らかの方法で制限されているかどうかを示しています。 + +このconditionsは`status.conditions`フィールドに現れます。HorizontalPodAutoscalerに影響しているconditionsを確認するために、`kubectl describe hpa`を利用できます。 + +```shell +kubectl describe hpa cm-test +``` + +``` +Name: cm-test +Namespace: prom +Labels: +Annotations: +CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 +Reference: ReplicationController/cm-test +Metrics: ( current / target ) + "http_requests" on pods: 66m / 500m +Min replicas: 1 +Max replicas: 4 +ReplicationController pods: 1 current / 1 desired +Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests + ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range +Events: +``` + +このHorizontalPodAutoscalerにおいて、いくつかの正常な状態のconditionsを見ることができます。まず最初に、`AbleToScale`は、HPAがスケール状況を取得し、更新させることが出来るかどうかだけでなく、何らかのbackoffに関連した状況がスケーリングを妨げていないかを示しています。2番目に、`ScalingActive`は、HPAが有効化されているかどうか(例えば、レプリカ数のターゲットがゼロでないこと)や、望ましいスケールを算出できるかどうかを示します。もしこれが`False`の場合、大体はメトリクスの取得において問題があることを示しています。最後に、一番最後の状況である`ScalingLimited`は、HorizontalPodAutoscalerの最大値や最小値によって望ましいスケールがキャップされていることを示しています。この指標を見てHorizontalPodAutoscaler上の最大・最小レプリカ数制限を増やす、もしくは減らす検討ができます。 + +## 付録: 数量 + +全てのHorizontalPodAutoscalerおよびメトリクスAPIにおけるメトリクスは{{< glossary_tooltip term_id="quantity" 
text="quantity">}}として知られる特殊な整数表記によって指定されます。例えば、`10500m`という数量は10進数表記で`10.5`と書くことができます。メトリクスAPIは可能であれば接尾辞を用いない整数を返し、そうでない場合は基本的にミリ単位での数量を返します。これはメトリクス値が`1`と`1500m`の間で、もしくは10進法表記で書かれた場合は`1`と`1.5`の間で変動するということを意味します。 + +## 付録: その他の起きうるシナリオ + +### Autoscalerを宣言的に作成する + +`kubectl autoscale`コマンドを使って命令的にHorizontalPodAutoscalerを作るかわりに、下記のファイルを使って宣言的に作成することができます。 + +{{< codenew file="application/hpa/php-apache.yaml" >}} + +下記のコマンドを実行してAutoscalerを作成します。 + +```shell +kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml +``` + +``` +horizontalpodautoscaler.autoscaling/php-apache created +``` diff --git a/content/ja/docs/tasks/tools/_index.md b/content/ja/docs/tasks/tools/_index.md index ac97313bb8..72a8c1361f 100755 --- a/content/ja/docs/tasks/tools/_index.md +++ b/content/ja/docs/tasks/tools/_index.md @@ -17,7 +17,7 @@ Kubernetesのコマンドラインツール`kubectl`を使用すると、Kuberne [Minikube](https://minikube.sigs.k8s.io/)は、Kubernetesをローカルで実行するツールです。MinikubeはシングルノードのKubernetesクラスターをパーソナルコンピューター上(Windows、macOS、Linux PCを含む)で実行することで、Kubernetesを試したり、日常的な開発作業のために利用できます。 -ツールのインストールについて知りたい場合は、公式の[Get Started!](https://minikube.sigs.k8s.io/docs/start/)のガイドに従うか、[Minikubeのインストール](/ja/docs/tasks/tools/install-minikube/)を読んでください。 +ツールのインストールについて知りたい場合は、公式の[Get Started!](https://minikube.sigs.k8s.io/docs/start/)のガイドに従ってください。 Minikubeが起動したら、[サンプルアプリケーションの実行](/ja/docs/tutorials/hello-minikube/)を試すことができます。 diff --git a/content/ja/docs/tasks/tools/install-kubectl.md b/content/ja/docs/tasks/tools/install-kubectl.md index 2ab3ca50f7..a33b475671 100644 --- a/content/ja/docs/tasks/tools/install-kubectl.md +++ b/content/ja/docs/tasks/tools/install-kubectl.md @@ -502,7 +502,7 @@ compinit ## {{% heading "whatsnext" %}} -* [Minikubeをインストールする](/ja/docs/tasks/tools/install-minikube/) +* [Minikubeをインストールする](https://minikube.sigs.k8s.io/docs/start/) * クラスターの作成に関する詳細を[スタートガイド](/ja/docs/setup/)で確認する * [アプリケーションを起動して公開する方法を学ぶ](/ja/docs/tasks/access-application-cluster/service-access-application-cluster/) * あなたが作成していないクラスターにアクセスする必要がある場合は、[クラスターアクセスドキュメントの共有](/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)を参照してください diff --git a/content/ja/docs/tasks/tools/install-minikube.md b/content/ja/docs/tasks/tools/install-minikube.md deleted file mode 100644 index 730145740c..0000000000 --- a/content/ja/docs/tasks/tools/install-minikube.md +++ /dev/null @@ -1,267 +0,0 @@ ---- -title: Minikubeのインストール -content_type: task -weight: 20 -card: - name: tasks - weight: 10 ---- - - - -このページでは[Minikube](/ja/docs/tutorials/hello-minikube)のインストール方法を説明し、コンピューターの仮想マシン上で単一ノードのKubernetesクラスターを実行します。 - - - -## {{% heading "prerequisites" %}} - - -{{< tabs name="minikube_before_you_begin" >}} -{{% tab name="Linux" %}} -Linuxで仮想化がサポートされているかどうかを確認するには、次のコマンドを実行して、出力が空でないことを確認します: -``` -grep -E --color 'vmx|svm' /proc/cpuinfo -``` -{{% /tab %}} - -{{% tab name="macOS" %}} -仮想化がmacOSでサポートされているかどうかを確認するには、ターミナルで次のコマンドを実行します。 -``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' -``` -出力に`VMX`が表示されている場合(色付けされているはずです)、VT-x機能がマシンで有効になっています。 -{{% /tab %}} - -{{% tab name="Windows" %}} -Windows 8以降で仮想化がサポートされているかどうかを確認するには、Windowsターミナルまたはコマンドプロンプトで次のコマンドを実行します。 -``` -systeminfo -``` -次の出力が表示される場合、仮想化はWindowsでサポートされています。 -``` -Hyper-V Requirements: VM Monitor Mode Extensions: Yes - Virtualization Enabled In Firmware: Yes - Second Level Address Translation: Yes - Data Execution Prevention Available: Yes -``` - -次の出力が表示される場合、システムにはすでにHypervisorがインストールされており、次の手順をスキップできます。 -``` -Hyper-V Requirements: A hypervisor has been detected. 
Features required for Hyper-V will not be displayed. -``` - - -{{% /tab %}} -{{< /tabs >}} - - - - - -## minikubeのインストール - -{{< tabs name="tab_with_md" >}} -{{% tab name="Linux" %}} - -### kubectlのインストール - -kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux)の指示に従ってkubectlをインストールできます。 - -### ハイパーバイザーのインストール - -ハイパーバイザーがまだインストールされていない場合は、これらのいずれかをインストールしてください: - -• [KVM](https://www.linux-kvm.org/)、ただしQEMUも使っているもの - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -Minikubeは、VMではなくホストでKubernetesコンポーネントを実行する`--driver=none`オプションもサポートしています。 -このドライバーを使用するには、[Docker](https://www.docker.com/products/docker-desktop)とLinux環境が必要ですが、ハイパーバイザーは不要です。 - -Debian系のLinuxで`none`ドライバーを使用する場合は、snapパッケージではなく`.deb`パッケージを使用してDockerをインストールしてください。snapパッケージはMinikubeでは機能しません。 -[Docker](https://www.docker.com/products/docker-desktop)から`.deb`パッケージをダウンロードできます。 - -{{< caution >}} -`none` VMドライバーは、セキュリティとデータ損失の問題を引き起こす可能性があります。 -`--driver=none`を使用する前に、詳細について[このドキュメント](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) を参照してください。 -{{< /caution >}} - -MinikubeはDockerドライバーと似たような`vm-driver=podman`もサポートしています。Podmanを特権ユーザー権限(root user)で実行することは、コンテナがシステム上の利用可能な機能へ完全にアクセスするための最もよい方法です。 - -{{< caution >}} -`podman` ドライバーは、rootでコンテナを実行する必要があります。これは、通常のユーザーアカウントが、コンテナの実行に必要とされるすべてのOS機能への完全なアクセスを持っていないためです。 -{{< /caution >}} - -### パッケージを利用したMinikubeのインストール - -Minikubeの*Experimental*パッケージが利用可能です。 -GitHubのMinikubeの[リリース](https://github.com/kubernetes/minikube/releases)ページからLinux(AMD64)パッケージを見つけることができます。 - -Linuxのディストリビューションのパッケージツールを使用して、適切なパッケージをインストールしてください。 - -### 直接ダウンロードによるMinikubeのインストール - -パッケージ経由でインストールしない場合は、スタンドアロンバイナリをダウンロードして使用できます。 - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ - && chmod +x minikube -``` - -Minikube実行可能バイナリをパスに追加する簡単な方法を次に示します: - -```shell -sudo mkdir -p /usr/local/bin/ -sudo install minikube /usr/local/bin/ -``` - -### Homebrewを利用したMinikubeのインストール - -別の選択肢として、Linux [Homebrew](https://docs.brew.sh/Homebrew-on-Linux)を利用してインストールできます。 - -```shell -brew install minikube -``` - -{{% /tab %}} -{{% tab name="macOS" %}} -### kubectlのインストール - -kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos)の指示に従ってkubectlをインストールできます。 - -### ハイパーバイザーのインストール - -ハイパーバイザーがまだインストールされていない場合は、これらのいずれかをインストールしてください: - -• [HyperKit](https://github.com/moby/hyperkit) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -• [VMware Fusion](https://www.vmware.com/products/fusion) - -### Minikubeのインストール -[Homebrew](https://brew.sh)を使うことでmacOSにMinikubeを簡単にインストールできます: - -```shell -brew install minikube -``` - -スタンドアロンのバイナリをダウンロードして、macOSにインストールすることもできます: - -```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \ - && chmod +x minikube -``` - -Minikube実行可能バイナリをパスに追加する簡単な方法を次に示します: - -```shell -sudo mv minikube /usr/local/bin -``` - -{{% /tab %}} -{{% tab name="Windows" %}} -### kubectlのインストール - -kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows)の指示に従ってkubectlをインストールできます。 - -### ハイパーバイザーのインストール - -ハイパーバイザーがまだインストールされていない場合は、これらのいずれかをインストールしてください: - -• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) - -• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - -{{< note >}} -Hyper-Vは、Windows 10 Enterprise、Windows 10 Professional、Windows 
10 Educationの3つのバージョンのWindows 10で実行できます。 -{{< /note >}} - -### Chocolateyを使用したMinikubeのインストール - -[Chocolatey](https://chocolatey.org/)を使うことでWindowsにMinikubeを簡単にインストールできます(管理者権限で実行する必要があります)。 - -```shell -choco install minikube -``` - -Minikubeのインストールが終わったら、現在のCLIのセッションを終了して再起動します。Minikubeは自動的にパスに追加されます。 - -### インストーラーを使用したMinikubeのインストール - -[Windowsインストーラー](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal)を使用してWindowsにMinikubeを手動でインストールするには、[`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe)をダウンロードしてインストーラーを実行します。 - -### 直接ダウンロードによるMinikubeのインストール - -WindowsにMinikubeを手動でインストールするには、[`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest)をダウンロードし、名前を`minikube.exe`に変更して、パスに追加します。 - -{{% /tab %}} -{{< /tabs >}} - - - -## インストールの確認 - -ハイパーバイザーとMinikube両方のインストール成功を確認するため、以下のコマンドをローカルKubernetesクラスターを起動するために実行してください: - -{{< note >}} - -`minikube start`で`--driver`の設定をするため、次の``の部分では、インストールしたハイパーバイザーの名前を小文字で入力してください。`--driver`値のすべてのリストは、[specifying the VM driver documentation](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)で確認できます。 - -{{< /note >}} - -{{< caution >}} -KVMを使用する場合、Debianおよび他の一部のシステムでのlibvirtのデフォルトのQEMU URIは`qemu:///session`であるのに対し、MinikubeのデフォルトのQEMU URIは`qemu:///system`であることに注意してください。これがあなたのシステムに当てはまる場合、`--kvm-qemu-uri qemu:///session`を`minikube start`に渡す必要があります。 -{{< /caution >}} - -```shell -minikube start --driver= -``` - -`minikube start`が完了した場合、次のコマンドを実行してクラスターの状態を確認します。 - -```shell -minikube status -``` - -クラスターが起動していると、`minikube status`の出力はこのようになります。 - -``` -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -選択したハイパーバイザーでMinikubeが動作しているか確認した後は、そのままMinikubeを使い続けることもできます。また、クラスターを停止することもできます。クラスターを停止するためには、次を実行してください。 - -```shell -minikube stop -``` - -## ローカル状態のクリーンアップ {#cleanup-local-state} - -もし以前に Minikubeをインストールしていたら、以下のコマンドを実行します。 -```shell -minikube start -``` - -`minikube start`はエラーを返します。 -```shell -machine does not exist -``` - -minikubeのローカル状態をクリアする必要があります: -```shell -minikube delete -``` - - -## {{% heading "whatsnext" %}} - - -* [Minikubeを使ってローカルでKubernetesを実行する](/ja/docs/setup/learning-environment/minikube/) - diff --git a/content/ja/docs/tutorials/hello-minikube.md b/content/ja/docs/tutorials/hello-minikube.md index 530e53ffd5..35192a8ca9 100644 --- a/content/ja/docs/tutorials/hello-minikube.md +++ b/content/ja/docs/tutorials/hello-minikube.md @@ -18,7 +18,7 @@ card: このチュートリアルでは、[Minikube](/ja/docs/setup/learning-environment/minikube)とKatacodaを使用して、Kubernetes上でサンプルアプリケーションを動かす方法を紹介します。Katacodaはブラウザで無償のKubernetes環境を提供します。 {{< note >}} -[Minikubeをローカルにインストール](/ja/docs/tasks/tools/install-minikube/)している場合もこのチュートリアルを進めることが可能です。 +[Minikubeをローカルにインストール](https://minikube.sigs.k8s.io/docs/start/)している場合もこのチュートリアルを進めることが可能です。 {{< /note >}} diff --git a/content/ja/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/ja/docs/tutorials/kubernetes-basics/explore/explore-intro.html index 63b3d503b2..47f9954a15 100644 --- a/content/ja/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/ja/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -29,7 +29,7 @@ weight: 10

    Kubernetes Pod

-   モジュール2でDeploymentを作成したときに、KubernetesはアプリケーションインスタンスをホストするためのPodを作成しました。Podは、1つ以上のアプリケーションコンテナ(Dockerやrktなど)のグループとそれらのコンテナの共有リソースを表すKubernetesの抽象概念です。 Podには以下のものが含まれます:
+   モジュール2でDeploymentを作成したときに、KubernetesはアプリケーションインスタンスをホストするためのPodを作成しました。Podは、1つ以上のアプリケーションコンテナ(Dockerなど)のグループとそれらのコンテナの共有リソースを表すKubernetesの抽象概念です。 Podには以下のものが含まれます:

    • 共有ストレージ(ボリューム)
    • ネットワーキング(クラスターに固有のIPアドレス)

@@ -49,7 +49,7 @@ weight: 10

-   Podは1つ以上のアプリケーションコンテナ(Dockerやrktなど)のグループであり、共有ストレージ(ボリューム)、IPアドレス、それらの実行方法に関する情報が含まれています。
+   Podは1つ以上のアプリケーションコンテナ(Dockerなど)のグループであり、共有ストレージ(ボリューム)、IPアドレス、それらの実行方法に関する情報が含まれています。

@@ -77,7 +77,7 @@ weight: 10

    すべてのKubernetesノードでは少なくとも以下のものが動作します。

    • Kubelet: Kubernetesマスターとノード間の通信を担当するプロセス。マシン上で実行されているPodとコンテナを管理します。
-   • レジストリからコンテナイメージを取得し、コンテナを解凍し、アプリケーションを実行することを担当する、Docker、rktのようなコンテナランタイム。
+   • レジストリからコンテナイメージを取得し、コンテナを解凍し、アプリケーションを実行することを担当する、Dockerのようなコンテナランタイム。
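To make the Pod and node concepts above concrete, here is a small, hedged command-line illustration; it assumes a running cluster with the Deployment created in Module 2, and the node name is only a placeholder:

```bash
# Show each Pod together with its IP and the node it was scheduled onto
kubectl get pods -o wide

# Show the node-level view: kubelet version, container runtime, and the Pods hosted on that node
kubectl describe node <node-name>
```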
    diff --git a/content/ja/docs/tutorials/services/source-ip.md b/content/ja/docs/tutorials/services/source-ip.md index 05a54152a3..17317bbeae 100644 --- a/content/ja/docs/tutorials/services/source-ip.md +++ b/content/ja/docs/tutorials/services/source-ip.md @@ -272,7 +272,7 @@ graph TD; ## `Type=LoadBalancer`を使用したServiceでの送信元IP -[`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer)を使用したServiceに送られたパケットは、デフォルトでは送信元のNATは行われません。`Ready`状態にあるすべてのスケジュール可能なKubernetesのNodeは、ロードバランサーからのトラフィックを受付可能であるためです。そのため、エンドポイントが存在しないノードにパケットが到達した場合、システムはエンドポイントが*存在する*ノードにパケットをプロシキーします。このとき、(前のセクションで説明したように)パケットの送信元IPがノードのIPに置換されます。 +[`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer)を使用したServiceに送られたパケットは、デフォルトで送信元のNATが行われます。`Ready`状態にあるすべてのスケジュール可能なKubernetesのNodeは、ロードバランサーからのトラフィックを受付可能であるためです。そのため、エンドポイントが存在しないノードにパケットが到達した場合、システムはエンドポイントが*存在する*ノードにパケットをプロシキーします。このとき、(前のセクションで説明したように)パケットの送信元IPがノードのIPに置換されます。 ロードバランサー経由でsource-ip-appを公開することで、これをテストできます。 diff --git a/content/ja/examples/application/job/cronjob.yaml b/content/ja/examples/application/job/cronjob.yaml index c9d3893027..2ce31233c3 100644 --- a/content/ja/examples/application/job/cronjob.yaml +++ b/content/ja/examples/application/job/cronjob.yaml @@ -11,7 +11,7 @@ spec: containers: - name: hello image: busybox - args: + command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster diff --git a/content/ja/examples/application/php-apache.yaml b/content/ja/examples/application/php-apache.yaml new file mode 100644 index 0000000000..e8e1b5aeb4 --- /dev/null +++ b/content/ja/examples/application/php-apache.yaml @@ -0,0 +1,36 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: php-apache +spec: + selector: + matchLabels: + run: php-apache + replicas: 1 + template: + metadata: + labels: + run: php-apache + spec: + containers: + - name: php-apache + image: k8s.gcr.io/hpa-example + ports: + - containerPort: 80 + resources: + limits: + cpu: 500m + requests: + cpu: 200m +--- +apiVersion: v1 +kind: Service +metadata: + name: php-apache + labels: + run: php-apache +spec: + ports: + - port: 80 + selector: + run: php-apache diff --git a/content/ja/examples/service/networking/example-ingress.yaml b/content/ja/examples/service/networking/example-ingress.yaml new file mode 100644 index 0000000000..b309d13275 --- /dev/null +++ b/content/ja/examples/service/networking/example-ingress.yaml @@ -0,0 +1,18 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: example-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: /$1 +spec: + rules: + - host: hello-world.info + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: web + port: + number: 8080 \ No newline at end of file diff --git a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md index b7550a3d83..d540565187 100644 --- a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md +++ b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md @@ -5,7 +5,9 @@ date: 2020-12-02 slug: dont-panic-kubernetes-and-docker --- -**작성자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas +**저자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas + +**번역:** 
박재화(삼성SDS), 손석호(한국전자통신연구원) 쿠버네티스는 v1.20 이후 컨테이너 런타임으로서 [도커를 diff --git a/content/ko/community/_index.html b/content/ko/community/_index.html index 39d17396ff..40bea7b523 100644 --- a/content/ko/community/_index.html +++ b/content/ko/community/_index.html @@ -19,6 +19,7 @@ cid: community
+커뮤니티 가치
    행동 강령
    비디오
    토론
@@ -41,10 +42,28 @@ cid: community
    쿠버네티스 컨퍼런스 갤러리
    쿠버네티스 컨퍼런스 갤러리

    커뮤니티 가치
+쿠버네티스 커뮤니티가 추구하는 가치는 프로젝트의 지속적인 성공의 핵심입니다.
+이러한 원칙은 쿠버네티스 프로젝트의 모든 측면을 이끌어 갑니다.
+
+더 읽어 보기
    diff --git a/content/ko/docs/concepts/architecture/controller.md b/content/ko/docs/concepts/architecture/controller.md index 784ad2ac58..e516dd9cc5 100644 --- a/content/ko/docs/concepts/architecture/controller.md +++ b/content/ko/docs/concepts/architecture/controller.md @@ -102,7 +102,7 @@ weight: 30 온도 조절기 예에서 방이 매우 추우면 다른 컨트롤러가 서리 방지 히터를 켤 수도 있다. 쿠버네티스 클러스터에서는 [쿠버네티스 확장](/ko/docs/concepts/extend-kubernetes/)을 통해 -IP 주소 관리 도구, 스토리지 서비스, 클라우드 제공자 APIS 및 +IP 주소 관리 도구, 스토리지 서비스, 클라우드 제공자의 API 및 기타 서비스 등과 간접적으로 연동하여 이를 구현한다. ## 의도한 상태와 현재 상태 {#desired-vs-current} diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md index c951d60ca8..4ad2a7aaaa 100644 --- a/content/ko/docs/concepts/architecture/nodes.md +++ b/content/ko/docs/concepts/architecture/nodes.md @@ -10,7 +10,7 @@ weight: 10 노드는 클러스터에 따라 가상 또는 물리적 머신일 수 있다. 각 노드는 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}에 의해 관리되며 {{< glossary_tooltip text="파드" term_id="pod" >}}를 -실행하는데 필요한 서비스를 포함한다. +실행하는 데 필요한 서비스를 포함한다. 일반적으로 클러스터에는 여러 개의 노드가 있으며, 학습 또는 리소스가 제한되는 환경에서는 하나만 있을 수도 있다. @@ -57,7 +57,7 @@ kubelet이 노드의 `metadata.name` 필드와 일치하는 API 서버에 등록 정상적인지 확인한다. 상태 확인을 중지하려면 사용자 또는 {{< glossary_tooltip term_id="controller" text="컨트롤러">}}에서 -노드 오브젝트를 명시적으로 삭제해야한다. +노드 오브젝트를 명시적으로 삭제해야 한다. {{< /note >}} 노드 오브젝트의 이름은 유효한 diff --git a/content/ko/docs/concepts/cluster-administration/_index.md b/content/ko/docs/concepts/cluster-administration/_index.md index 1442a5da01..9870704596 100755 --- a/content/ko/docs/concepts/cluster-administration/_index.md +++ b/content/ko/docs/concepts/cluster-administration/_index.md @@ -1,5 +1,8 @@ --- title: 클러스터 관리 + + + weight: 100 content_type: concept description: > @@ -11,6 +14,7 @@ no_list: true 클러스터 관리 개요는 쿠버네티스 클러스터를 생성하거나 관리하는 모든 사람들을 위한 것이다. 핵심 쿠버네티스 [개념](/ko/docs/concepts/)에 어느 정도 익숙하다고 가정한다. + ## 클러스터 계획 @@ -22,12 +26,12 @@ no_list: true 가이드를 선택하기 전에 고려해야 할 사항은 다음과 같다. - - 컴퓨터에서 쿠버네티스를 그냥 한번 사용해보고 싶은가? 아니면, 고가용 멀티 노드 클러스터를 만들고 싶은가? 사용자의 필요에 따라 가장 적합한 배포판을 선택한다. + - 컴퓨터에서 쿠버네티스를 한번 사용해보고 싶은가? 아니면, 고가용 멀티 노드 클러스터를 만들고 싶은가? 사용자의 필요에 따라 가장 적합한 배포판을 선택한다. - [구글 쿠버네티스 엔진(Google Kubernetes Engine)](https://cloud.google.com/kubernetes-engine/)과 같은 클라우드 제공자의 **쿠버네티스 클러스터 호스팅** 을 사용할 것인가? 아니면, **자체 클러스터를 호스팅** 할 것인가? - 클러스터가 **온-프레미스 환경** 에 있나? 아니면, **클라우드(IaaS)** 에 있나? 쿠버네티스는 하이브리드 클러스터를 직접 지원하지는 않는다. 대신 여러 클러스터를 설정할 수 있다. - **온-프레미스 환경에 쿠버네티스** 를 구성하는 경우, 어떤 [네트워킹 모델](/ko/docs/concepts/cluster-administration/networking/)이 가장 적합한 지 고려한다. - 쿠버네티스를 **"베어 메탈" 하드웨어** 에서 실행할 것인가? 아니면, **가상 머신(VM)** 에서 실행할 것인가? - - **단지 클러스터만 실행할 것인가?** 아니면, **쿠버네티스 프로젝트 코드를 적극적으로 개발** 하는 것을 기대하는가? 만약 + - **클러스터만 실행할 것인가?** 아니면, **쿠버네티스 프로젝트 코드를 적극적으로 개발** 하는 것을 기대하는가? 만약 후자라면, 활발하게 개발이 진행되고 있는 배포판을 선택한다. 일부 배포판은 바이너리 릴리스만 사용하지만, 더 다양한 선택을 제공한다. - 클러스터를 실행하는 데 필요한 [컴포넌트](/ko/docs/concepts/overview/components/)에 익숙해지자. @@ -41,7 +45,7 @@ no_list: true ## 클러스터 보안 -* [인증서](/ko/docs/concepts/cluster-administration/certificates/)는 다른 툴 체인을 사용하여 인증서를 생성하는 단계를 설명한다. +* [인증서 생성](/ko/docs/tasks/administer-cluster/certificates/)는 다른 툴 체인을 사용하여 인증서를 생성하는 단계를 설명한다. * [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment/)은 쿠버네티스 노드에서 Kubelet으로 관리하는 컨테이너에 대한 환경을 설명한다. 
diff --git a/content/ko/docs/concepts/cluster-administration/certificates.md b/content/ko/docs/concepts/cluster-administration/certificates.md index 7b71b9c344..5acb75ea80 100644 --- a/content/ko/docs/concepts/cluster-administration/certificates.md +++ b/content/ko/docs/concepts/cluster-administration/certificates.md @@ -4,247 +4,6 @@ content_type: concept weight: 20 --- - -클라이언트 인증서로 인증을 사용하는 경우 `easyrsa`, `openssl` 또는 `cfssl` -을 통해 인증서를 수동으로 생성할 수 있다. - - - - - - -### easyrsa - -**easyrsa** 는 클러스터 인증서를 수동으로 생성할 수 있다. - -1. easyrsa3의 패치 버전을 다운로드하여 압축을 풀고, 초기화한다. - - curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz - tar xzf easy-rsa.tar.gz - cd easy-rsa-master/easyrsa3 - ./easyrsa init-pki -1. 새로운 인증 기관(CA)을 생성한다. `--batch` 는 자동 모드를 설정한다. - `--req-cn` 는 CA의 새 루트 인증서에 대한 일반 이름(Common Name (CN))을 지정한다. - - ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass -1. 서버 인증서와 키를 생성한다. - `--subject-alt-name` 인수는 API 서버에 접근이 가능한 IP와 DNS - 이름을 설정한다. `MASTER_CLUSTER_IP` 는 일반적으로 API 서버와 - 컨트롤러 관리자 컴포넌트에 대해 `--service-cluster-ip-range` 인수로 - 지정된 서비스 CIDR의 첫 번째 IP이다. `--days` 인수는 인증서가 만료되는 - 일 수를 설정하는데 사용된다. - 또한, 아래 샘플은 기본 DNS 이름으로 `cluster.local` 을 - 사용한다고 가정한다. - - ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ - "IP:${MASTER_CLUSTER_IP},"\ - "DNS:kubernetes,"\ - "DNS:kubernetes.default,"\ - "DNS:kubernetes.default.svc,"\ - "DNS:kubernetes.default.svc.cluster,"\ - "DNS:kubernetes.default.svc.cluster.local" \ - --days=10000 \ - build-server-full server nopass -1. `pki/ca.crt`, `pki/issued/server.crt` 그리고 `pki/private/server.key` 를 디렉터리에 복사한다. -1. API 서버 시작 파라미터에 다음 파라미터를 채우고 추가한다. - - --client-ca-file=/yourdirectory/ca.crt - --tls-cert-file=/yourdirectory/server.crt - --tls-private-key-file=/yourdirectory/server.key - -### openssl - -**openssl** 은 클러스터 인증서를 수동으로 생성할 수 있다. - -1. ca.key를 2048bit로 생성한다. - - openssl genrsa -out ca.key 2048 -1. ca.key에 따라 ca.crt를 생성한다(인증서 유효 기간을 사용하려면 -days를 사용한다). - - openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt -1. server.key를 2048bit로 생성한다. - - openssl genrsa -out server.key 2048 -1. 인증서 서명 요청(Certificate Signing Request (CSR))을 생성하기 위한 설정 파일을 생성한다. - 파일에 저장하기 전에 꺾쇠 괄호(예: ``)로 - 표시된 값을 실제 값으로 대체한다(예: `csr.conf`). - `MASTER_CLUSTER_IP` 의 값은 이전 하위 섹션에서 - 설명한 대로 API 서버의 서비스 클러스터 IP이다. - 또한, 아래 샘플에서는 `cluster.local` 을 기본 DNS 도메인 - 이름으로 사용하고 있다고 가정한다. - - [ req ] - default_bits = 2048 - prompt = no - default_md = sha256 - req_extensions = req_ext - distinguished_name = dn - - [ dn ] - C = <국가(country)> - ST = <도(state)> - L = <시(city)> - O = <조직(organization)> - OU = <조직 단위(organization unit)> - CN = - - [ req_ext ] - subjectAltName = @alt_names - - [ alt_names ] - DNS.1 = kubernetes - DNS.2 = kubernetes.default - DNS.3 = kubernetes.default.svc - DNS.4 = kubernetes.default.svc.cluster - DNS.5 = kubernetes.default.svc.cluster.local - IP.1 = - IP.2 = - - [ v3_ext ] - authorityKeyIdentifier=keyid,issuer:always - basicConstraints=CA:FALSE - keyUsage=keyEncipherment,dataEncipherment - extendedKeyUsage=serverAuth,clientAuth - subjectAltName=@alt_names -1. 설정 파일을 기반으로 인증서 서명 요청을 생성한다. - - openssl req -new -key server.key -out server.csr -config csr.conf -1. ca.key, ca.crt 그리고 server.csr을 사용해서 서버 인증서를 생성한다. - - openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ - -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf -1. 인증서를 본다. - - openssl x509 -noout -text -in ./server.crt - -마지막으로, API 서버 시작 파라미터에 동일한 파라미터를 추가한다. 
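As an optional sanity check that is not part of the steps above, the openssl-generated server certificate can be verified against the CA before the files are passed to the API server. A minimal sketch, assuming `ca.crt` and `server.crt` from the previous steps are in the current directory:

```bash
# Confirm the server certificate chains back to the generated CA
openssl verify -CAfile ca.crt server.crt

# Double-check the Subject Alternative Names embedded from csr.conf
openssl x509 -noout -text -in server.crt | grep -A1 "Subject Alternative Name"
```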
- -### cfssl - -**cfssl** 은 인증서 생성을 위한 또 다른 도구이다. - -1. 아래에 표시된 대로 커맨드 라인 도구를 다운로드하여 압축을 풀고 준비한다. - 사용 중인 하드웨어 아키텍처 및 cfssl 버전에 따라 샘플 - 명령을 조정해야 할 수도 있다. - - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl - chmod +x cfssl - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson - chmod +x cfssljson - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo - chmod +x cfssl-certinfo -1. 아티팩트(artifact)를 보유할 디렉터리를 생성하고 cfssl을 초기화한다. - - mkdir cert - cd cert - ../cfssl print-defaults config > config.json - ../cfssl print-defaults csr > csr.json -1. CA 파일을 생성하기 위한 JSON 설정 파일을 `ca-config.json` 예시와 같이 생성한다. - - { - "signing": { - "default": { - "expiry": "8760h" - }, - "profiles": { - "kubernetes": { - "usages": [ - "signing", - "key encipherment", - "server auth", - "client auth" - ], - "expiry": "8760h" - } - } - } - } -1. CA 인증서 서명 요청(CSR)을 위한 JSON 설정 파일을 - `ca-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호로 표시된 - 값을 사용하려는 실제 값으로 변경한다. - - { - "CN": "kubernetes", - "key": { - "algo": "rsa", - "size": 2048 - }, - "names":[{ - "C": "<국가(country)>", - "ST": "<도(state)>", - "L": "<시(city)>", - "O": "<조직(organization)>", - "OU": "<조직 단위(organization unit)>" - }] - } -1. CA 키(`ca-key.pem`)와 인증서(`ca.pem`)을 생성한다. - - ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca -1. API 서버의 키와 인증서를 생성하기 위한 JSON 구성파일을 - `server-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호 안의 값을 - 사용하려는 실제 값으로 변경한다. `MASTER_CLUSTER_IP` 는 - 이전 하위 섹션에서 설명한 API 서버의 클러스터 IP이다. - 아래 샘플은 기본 DNS 도메인 이름으로 `cluster.local` 을 - 사용한다고 가정한다. - - { - "CN": "kubernetes", - "hosts": [ - "127.0.0.1", - "", - "", - "kubernetes", - "kubernetes.default", - "kubernetes.default.svc", - "kubernetes.default.svc.cluster", - "kubernetes.default.svc.cluster.local" - ], - "key": { - "algo": "rsa", - "size": 2048 - }, - "names": [{ - "C": "<국가(country)>", - "ST": "<도(state)>", - "L": "<시(city)>", - "O": "<조직(organization)>", - "OU": "<조직 단위(organization unit)>" - }] - } -1. API 서버 키와 인증서를 생성하면, 기본적으로 - `server-key.pem` 과 `server.pem` 파일에 각각 저장된다. - - ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ - --config=ca-config.json -profile=kubernetes \ - server-csr.json | ../cfssljson -bare server - - -## 자체 서명된 CA 인증서의 배포 - -클라이언트 노드는 자체 서명된 CA 인증서를 유효한 것으로 인식하지 않을 수 있다. -비-프로덕션 디플로이먼트 또는 회사 방화벽 뒤에서 실행되는 -디플로이먼트의 경우, 자체 서명된 CA 인증서를 모든 클라이언트에 -배포하고 유효한 인증서의 로컬 목록을 새로 고칠 수 있다. - -각 클라이언트에서, 다음 작업을 수행한다. - -```bash -sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt -sudo update-ca-certificates -``` - -``` -Updating certificates in /etc/ssl/certs... -1 added, 0 removed; done. -Running hooks in /etc/ca-certificates/update.d.... -done. -``` - -## 인증서 API - -`certificates.k8s.io` API를 사용해서 -[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 -설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. +클러스터를 위한 인증서를 생성하기 위해서는, [인증서](/ko/docs/tasks/administer-cluster/certificates/)를 참고한다. diff --git a/content/ko/docs/concepts/cluster-administration/logging.md b/content/ko/docs/concepts/cluster-administration/logging.md index 48044b0460..695c747a5d 100644 --- a/content/ko/docs/concepts/cluster-administration/logging.md +++ b/content/ko/docs/concepts/cluster-administration/logging.md @@ -1,4 +1,7 @@ --- + + + title: 로깅 아키텍처 content_type: concept weight: 60 @@ -6,23 +9,22 @@ weight: 60 -애플리케이션 로그는 애플리케이션 내부에서 발생하는 상황을 이해하는 데 도움이 된다. 로그는 문제를 디버깅하고 클러스터 활동을 모니터링하는 데 특히 유용하다. 대부분의 최신 애플리케이션에는 일종의 로깅 메커니즘이 있다. 
따라서, 대부분의 컨테이너 엔진은 일종의 로깅을 지원하도록 설계되었다. 컨테이너화된 애플리케이션에 가장 쉽고 가장 널리 사용되는 로깅 방법은 표준 출력과 표준 에러 스트림에 작성하는 것이다. +애플리케이션 로그는 애플리케이션 내부에서 발생하는 상황을 이해하는 데 도움이 된다. 로그는 문제를 디버깅하고 클러스터 활동을 모니터링하는 데 특히 유용하다. 대부분의 최신 애플리케이션에는 일종의 로깅 메커니즘이 있다. 마찬가지로, 컨테이너 엔진들도 로깅을 지원하도록 설계되었다. 컨테이너화된 애플리케이션에 가장 쉽고 가장 널리 사용되는 로깅 방법은 표준 출력과 표준 에러 스트림에 작성하는 것이다. -그러나, 일반적으로 컨테이너 엔진이나 런타임에서 제공하는 기본 기능은 완전한 로깅 솔루션으로 충분하지 않다. 예를 들어, 컨테이너가 크래시되거나, 파드가 축출되거나, 노드가 종료된 경우에도 여전히 애플리케이션의 로그에 접근하려고 한다. 따라서, 로그는 노드, 파드 또는 컨테이너와는 독립적으로 별도의 스토리지와 라이프사이클을 가져야 한다. 이 개념을 _클러스터-레벨-로깅_ 이라고 한다. 클러스터-레벨 로깅은 로그를 저장하고, 분석하고, 쿼리하기 위해 별도의 백엔드가 필요하다. 쿠버네티스는 로그 데이터를 위한 네이티브 스토리지 솔루션을 제공하지 않지만, 기존의 많은 로깅 솔루션을 쿠버네티스 클러스터에 통합할 수 있다. +그러나, 일반적으로 컨테이너 엔진이나 런타임에서 제공하는 기본 기능은 완전한 로깅 솔루션으로 충분하지 않다. +예를 들어, 컨테이너가 크래시되거나, 파드가 축출되거나, 노드가 종료된 경우에도 애플리케이션의 로그에 접근하고 싶을 것이다. +클러스터에서 로그는 노드, 파드 또는 컨테이너와는 독립적으로 별도의 스토리지와 라이프사이클을 가져야 한다. 이 개념을 _클러스터-레벨-로깅_ 이라고 한다. -클러스터-레벨 로깅 아키텍처는 로깅 백엔드가 -클러스터 내부 또는 외부에 존재한다고 가정하여 설명한다. 클러스터-레벨 -로깅에 관심이 없는 경우에도, 노드에서 로그를 저장하고 -처리하는 방법에 대한 설명이 여전히 유용할 수 있다. +클러스터-레벨 로깅은 로그를 저장하고, 분석하고, 쿼리하기 위해 별도의 백엔드가 필요하다. 쿠버네티스는 +로그 데이터를 위한 네이티브 스토리지 솔루션을 제공하지 않지만, +쿠버네티스에 통합될 수 있는 기존의 로깅 솔루션이 많이 있다. ## 쿠버네티스의 기본 로깅 -이 섹션에서는, 쿠버네티스에서 표준 출력 스트림으로 데이터를 -출력하는 기본 로깅의 예시를 볼 수 있다. 이 데모에서는 -일부 텍스트를 초당 한 번씩 표준 출력에 쓰는 컨테이너와 함께 -파드 명세를 사용한다. +이 예시는 텍스트를 초당 한 번씩 표준 출력에 쓰는 +컨테이너에 대한 `Pod` 명세를 사용한다. {{< codenew file="debug/counter-pod.yaml" >}} @@ -31,8 +33,10 @@ weight: 60 ```shell kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml ``` + 출력은 다음과 같다. -``` + +```console pod/counter created ``` @@ -41,69 +45,69 @@ pod/counter created ```shell kubectl logs counter ``` + 출력은 다음과 같다. -``` + +```console 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 ... ``` -컨테이너가 크래시된 경우, `kubectl logs` 의 `--previous` 플래그를 사용해서 컨테이너의 이전 인스턴스에 대한 로그를 검색할 수 있다. 파드에 여러 컨테이너가 있는 경우, 명령에 컨테이너 이름을 추가하여 접근하려는 컨테이너 로그를 지정해야 한다. 자세한 내용은 [`kubectl logs` 문서](/docs/reference/generated/kubectl/kubectl-commands#logs)를 참조한다. +`kubectl logs --previous` 를 사용해서 컨테이너의 이전 인스턴스에 대한 로그를 검색할 수 있다. 파드에 여러 컨테이너가 있는 경우, 명령에 컨테이너 이름을 추가하여 접근하려는 컨테이너 로그를 지정해야 한다. 자세한 내용은 [`kubectl logs` 문서](/docs/reference/generated/kubectl/kubectl-commands#logs)를 참조한다. ## 노드 레벨에서의 로깅 ![노드 레벨 로깅](/images/docs/user-guide/logging/logging-node-level.png) -컨테이너화된 애플리케이션이 `stdout(표준 출력)` 및 `stderr(표준 에러)` 에 쓰는 모든 것은 컨테이너 엔진에 의해 어딘가에서 처리와 리디렉션 된다. 예를 들어, 도커 컨테이너 엔진은 이 두 스트림을 [로깅 드라이버](https://docs.docker.com/engine/admin/logging/overview)로 리디렉션 한다. 이 드라이버는 쿠버네티스에서 json 형식의 파일에 작성하도록 구성된다. +컨테이너화된 애플리케이션의 `stdout(표준 출력)` 및 `stderr(표준 에러)` 스트림에 의해 생성된 모든 출력은 컨테이너 엔진이 처리 및 리디렉션 한다. +예를 들어, 도커 컨테이너 엔진은 이 두 스트림을 [로깅 드라이버](https://docs.docker.com/engine/admin/logging/overview)로 리디렉션 한다. 이 드라이버는 쿠버네티스에서 JSON 형식의 파일에 작성하도록 구성된다. {{< note >}} -도커 json 로깅 드라이버는 각 라인을 별도의 메시지로 취급한다. 도커 로깅 드라이버를 사용하는 경우, 멀티-라인 메시지를 직접 지원하지 않는다. 로깅 에이전트 레벨 이상에서 멀티-라인 메시지를 처리해야 한다. +도커 JSON 로깅 드라이버는 각 라인을 별도의 메시지로 취급한다. 도커 로깅 드라이버를 사용하는 경우, 멀티-라인 메시지를 직접 지원하지 않는다. 로깅 에이전트 레벨 이상에서 멀티-라인 메시지를 처리해야 한다. {{< /note >}} 기본적으로, 컨테이너가 다시 시작되면, kubelet은 종료된 컨테이너 하나를 로그와 함께 유지한다. 파드가 노드에서 축출되면, 해당하는 모든 컨테이너도 로그와 함께 축출된다. 노드-레벨 로깅에서 중요한 고려 사항은 로그 로테이션을 구현하여, 로그가 노드에서 사용 가능한 모든 스토리지를 사용하지 않도록 하는 것이다. 쿠버네티스는 -현재 로그 로테이션에 대한 의무는 없지만, 디플로이먼트 도구로 +로그 로테이션에 대한 의무는 없지만, 디플로이먼트 도구로 이를 해결하기 위한 솔루션을 설정해야 한다. 예를 들어, `kube-up.sh` 스크립트에 의해 배포된 쿠버네티스 클러스터에는, 매시간 실행되도록 구성된 [`logrotate`](https://linux.die.net/man/8/logrotate) -도구가 있다. 예를 들어, 도커의 `log-opt` 를 사용하여 애플리케이션의 로그를 -자동으로 로테이션을 하도록 컨테이너 런타임을 설정할 수도 있다. 
-`kube-up.sh` 스크립트에서, 후자의 접근 방식은 GCP의 COS 이미지에 사용되며, -전자의 접근 방식은 다른 환경에서 사용된다. 두 경우 모두, -기본적으로 로그 파일이 10MB를 초과하면 로테이션이 되도록 구성된다. +도구가 있다. 애플리케이션의 로그를 자동으로 +로테이션하도록 컨테이너 런타임을 설정할 수도 있다. -예를 들어, `kube-up.sh` 가 해당 -[스크립트](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)에서 -GCP의 COS 이미지 로깅을 설정하는 방법에 대한 자세한 정보를 찾을 수 있다. +예를 들어, `kube-up.sh` 가 GCP의 COS 이미지 로깅을 설정하는 방법은 +[`configure-helper` 스크립트](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)를 통해 +자세히 알 수 있다. 기본 로깅 예제에서와 같이 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs)를 실행하면, 노드의 kubelet이 요청을 처리하고 -로그 파일에서 직접 읽은 다음, 응답의 내용을 반환한다. +로그 파일에서 직접 읽는다. kubelet은 로그 파일의 내용을 반환한다. {{< note >}} -현재, 일부 외부 시스템에서 로테이션을 수행한 경우, +만약, 일부 외부 시스템이 로테이션을 수행한 경우, `kubectl logs` 를 통해 최신 로그 파일의 내용만 사용할 수 있다. 예를 들어, 10MB 파일이 있으면, `logrotate` 가 -로테이션을 수행하고 두 개의 파일이 생긴다(크기가 10MB인 파일 하나와 비어있는 파일). -그 후 `kubectl logs` 는 빈 응답을 반환한다. +로테이션을 수행하고 두 개의 파일이 생긴다. (크기가 10MB인 파일 하나와 비어있는 파일) +`kubectl logs` 는 이 예시에서는 빈 응답에 해당하는 최신 로그 파일을 반환한다. {{< /note >}} -[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh ### 시스템 컴포넌트 로그 시스템 컴포넌트에는 컨테이너에서 실행되는 것과 컨테이너에서 실행되지 않는 두 가지 유형이 있다. 예를 들면 다음과 같다. * 쿠버네티스 스케줄러와 kube-proxy는 컨테이너에서 실행된다. -* Kubelet과 컨테이너 런타임(예: 도커)은 컨테이너에서 실행되지 않는다. +* Kubelet과 컨테이너 런타임은 컨테이너에서 실행되지 않는다. -systemd를 사용하는 시스템에서, kubelet과 컨테이너 런타임은 journald에 작성한다. -systemd를 사용하지 않으면, `/var/log` 디렉터리의 `.log` 파일에 작성한다. -컨테이너 내부의 시스템 컴포넌트는 기본 로깅 메커니즘을 무시하고, -항상 `/var/log` 디렉터리에 기록한다. 그것은 [klog](https://github.com/kubernetes/klog) +systemd를 사용하는 시스템에서는, kubelet과 컨테이너 런타임은 journald에 작성한다. +systemd를 사용하지 않으면, kubelet과 컨테이너 런타임은 `/var/log` 디렉터리의 +`.log` 파일에 작성한다. 컨테이너 내부의 시스템 컴포넌트는 기본 로깅 메커니즘을 무시하고, +항상 `/var/log` 디렉터리에 기록한다. +시스템 컴포넌트는 [klog](https://github.com/kubernetes/klog) 로깅 라이브러리를 사용한다. [로깅에 대한 개발 문서](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)에서 해당 컴포넌트의 로깅 심각도(severity)에 대한 규칙을 찾을 수 있다. @@ -126,13 +130,14 @@ systemd를 사용하지 않으면, `/var/log` 디렉터리의 `.log` 파일에 각 노드에 _노드-레벨 로깅 에이전트_ 를 포함시켜 클러스터-레벨 로깅을 구현할 수 있다. 로깅 에이전트는 로그를 노출하거나 로그를 백엔드로 푸시하는 전용 도구이다. 일반적으로, 로깅 에이전트는 해당 노드의 모든 애플리케이션 컨테이너에서 로그 파일이 있는 디렉터리에 접근할 수 있는 컨테이너이다. -로깅 에이전트는 모든 노드에서 실행해야 하므로, 이를 데몬셋 레플리카, 매니페스트 파드 또는 노드의 전용 네이티브 프로세스로 구현하는 것이 일반적이다. 그러나 후자의 두 가지 접근법은 더 이상 사용되지 않으며 절대 권장하지 않는다. +로깅 에이전트는 모든 노드에서 실행해야 하므로, 에이전트는 +`DaemonSet` 으로 동작시키는 것을 추천한다. -쿠버네티스 클러스터는 노드-레벨 로깅 에이전트를 사용하는 것이 가장 일반적이며 권장되는 방법으로, 이는 노드별 하나의 에이전트만 생성하며, 노드에서 실행되는 애플리케이션을 변경할 필요가 없기 때문이다. 그러나, 노드-레벨 로깅은 _애플리케이션의 표준 출력과 표준 에러에 대해서만 작동한다_ . +노드-레벨 로깅은 노드별 하나의 에이전트만 생성하며, 노드에서 실행되는 애플리케이션에 대한 변경은 필요로 하지 않는다. -쿠버네티스는 로깅 에이전트를 지정하지 않지만, 쿠버네티스 릴리스에는 두 가지 선택적인 로깅 에이전트(Google 클라우드 플랫폼과 함께 사용하기 위한 [스택드라이버(Stackdriver) 로깅](/docs/tasks/debug-application-cluster/logging-stackdriver/)과 [엘라스틱서치(Elasticsearch)](/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/))가 패키지로 함께 제공된다. 전용 문서에서 자세한 정보와 지침을 찾을 수 있다. 두 가지 다 사용자 정의 구성이 된 [fluentd](http://www.fluentd.org/)를 에이전트로써 노드에서 사용한다. +컨테이너는 stdout과 stderr를 동의되지 않은 포맷으로 작성한다. 노드-레벨 에이전트는 이러한 로그를 수집하고 취합을 위해 전달한다. -### 로깅 에이전트와 함께 사이드카 컨테이너 사용 +### 로깅 에이전트와 함께 사이드카 컨테이너 사용 {#sidecar-container-with-logging-agent} 다음 중 한 가지 방법으로 사이드카 컨테이너를 사용할 수 있다. 
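참고로, 앞에서 설명한 노드-레벨 로깅 에이전트를 데몬셋으로 실행하면 대략 다음과 같은 형태가 된다. 데몬셋 이름, 레이블, 이미지(`fluent/fluentd:v1.11-1`)는 설명을 위한 가정이며, 실제로는 사용하는 로깅 에이전트와 백엔드에 맞는 이미지와 설정이 별도로 필요하다.

```shell
# 각 노드의 /var/log 를 읽는 노드-레벨 로깅 에이전트 데몬셋의 최소한의 스케치이다.
# (이미지와 이름은 예시이며, 에이전트 자체의 출력/백엔드 설정은 포함하지 않았다)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-logging-agent
  namespace: kube-system
  labels:
    app: node-logging-agent
spec:
  selector:
    matchLabels:
      app: node-logging-agent
  template:
    metadata:
      labels:
        app: node-logging-agent
    spec:
      containers:
      - name: logging-agent
        image: fluent/fluentd:v1.11-1
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF
```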
@@ -143,28 +148,27 @@ systemd를 사용하지 않으면, `/var/log` 디렉터리의 `.log` 파일에 ![스트리밍 컨테이너가 있는 사이드카 컨테이너](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png) -사이드카 컨테이너를 자체 `stdout` 및 `stderr` 스트림으로 -스트리밍하면, 각 노드에서 이미 실행 중인 kubelet과 로깅 에이전트를 -활용할 수 있다. 사이드카 컨테이너는 파일, 소켓 -또는 journald에서 로그를 읽는다. 각 개별 사이드카 컨테이너는 자체 `stdout` -또는 `stderr` 스트림에 로그를 출력한다. +사이드카 컨테이너가 자체 `stdout` 및 `stderr` 스트림으로 +쓰도록 하면, 각 노드에서 이미 실행 중인 kubelet과 로깅 에이전트를 +활용할 수 있다. 사이드카 컨테이너는 파일, 소켓 또는 journald에서 로그를 읽는다. +각 사이드카 컨테이너는 자체 `stdout` 또는 `stderr` 스트림에 로그를 출력한다. 이 방법을 사용하면 애플리케이션의 다른 부분에서 여러 로그 스트림을 분리할 수 ​​있고, 이 중 일부는 `stdout` 또는 `stderr` 에 작성하기 위한 지원이 부족할 수 있다. 로그를 리디렉션하는 로직은 -미미하기 때문에, 큰 오버헤드가 거의 없다. 또한, +최소화되어 있기 때문에, 심각한 오버헤드가 아니다. 또한, `stdout` 및 `stderr` 가 kubelet에서 처리되므로, `kubectl logs` 와 같은 빌트인 도구를 사용할 수 있다. -다음의 예를 고려해보자. 파드는 단일 컨테이너를 실행하고, 컨테이너는 -서로 다른 두 가지 형식을 사용하여, 서로 다른 두 개의 로그 파일에 기록한다. 파드에 대한 +예를 들어, 파드는 단일 컨테이너를 실행하고, 컨테이너는 +서로 다른 두 가지 형식을 사용하여 서로 다른 두 개의 로그 파일에 기록한다. 파드에 대한 구성 파일은 다음과 같다. {{< codenew file="admin/logging/two-files-counter-pod.yaml" >}} 두 컴포넌트를 컨테이너의 `stdout` 스트림으로 리디렉션한 경우에도, 동일한 로그 -스트림에 서로 다른 형식의 로그 항목을 갖는 것은 -알아보기 힘들다. 대신, 두 개의 사이드카 컨테이너를 도입할 수 있다. 각 사이드카 +스트림에 서로 다른 형식의 로그 항목을 작성하는 것은 +추천하지 않는다. 대신, 두 개의 사이드카 컨테이너를 생성할 수 있다. 각 사이드카 컨테이너는 공유 볼륨에서 특정 로그 파일을 테일(tail)한 다음 로그를 자체 `stdout` 스트림으로 리디렉션할 수 있다. @@ -178,7 +182,10 @@ systemd를 사용하지 않으면, `/var/log` 디렉터리의 `.log` 파일에 ```shell kubectl logs counter count-log-1 ``` -``` + +출력은 다음과 같다. + +```console 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -188,7 +195,10 @@ kubectl logs counter count-log-1 ```shell kubectl logs counter count-log-2 ``` -``` + +출력은 다음과 같다. + +```console Mon Jan 1 00:00:00 UTC 2001 INFO 0 Mon Jan 1 00:00:01 UTC 2001 INFO 1 Mon Jan 1 00:00:02 UTC 2001 INFO 2 @@ -204,11 +214,10 @@ Mon Jan 1 00:00:02 UTC 2001 INFO 2 `stdout` 으로 스트리밍하면 디스크 사용량은 두 배가 될 수 있다. 단일 파일에 쓰는 애플리케이션이 있는 경우, 일반적으로 스트리밍 사이드카 컨테이너 방식을 구현하는 대신 `/dev/stdout` 을 대상으로 -설정하는 것이 더 낫다. +설정하는 것을 추천한다. 사이드카 컨테이너를 사용하여 애플리케이션 자체에서 로테이션할 수 없는 -로그 파일을 로테이션할 수도 있다. 이 방법의 예로는 -정기적으로 logrotate를 실행하는 작은 컨테이너를 두는 것이다. +로그 파일을 로테이션할 수도 있다. 이 방법의 예시는 정기적으로 `logrotate` 를 실행하는 작은 컨테이너를 두는 것이다. 그러나, `stdout` 및 `stderr` 을 직접 사용하고 로테이션과 유지 정책을 kubelet에 두는 것이 권장된다. @@ -223,21 +232,17 @@ Mon Jan 1 00:00:02 UTC 2001 INFO 2 {{< note >}} 사이드카 컨테이너에서 로깅 에이전트를 사용하면 상당한 리소스 소비로 이어질 수 있다. 게다가, kubelet에 의해 -제어되지 않기 때문에, `kubectl logs` 명령을 사용하여 해당 로그에 +제어되지 않기 때문에, `kubectl logs` 를 사용하여 해당 로그에 접근할 수 없다. {{< /note >}} -예를 들어, 로깅 에이전트로 fluentd를 사용하는 [스택드라이버](/docs/tasks/debug-application-cluster/logging-stackdriver/)를 -사용할 수 있다. 여기에 이 방법을 구현하는 데 사용할 수 있는 -두 가지 구성 파일이 있다. 첫 번째 파일에는 -fluentd를 구성하기 위한 [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)이 포함되어 있다. +여기에 로깅 에이전트가 포함된 사이드카 컨테이너를 구현하는 데 사용할 수 있는 두 가지 구성 파일이 있다. 첫 번째 파일에는 +fluentd를 구성하기 위한 [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)이 포함되어 있다. {{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}} {{< note >}} -fluentd의 구성은 이 문서의 범위를 벗어난다. -fluentd를 구성하는 것에 대한 자세한 내용은, -[공식 fluentd 문서](https://docs.fluentd.org/)를 참고한다. +fluentd를 구성하는 것에 대한 자세한 내용은, [fluentd 문서](https://docs.fluentd.org/)를 참고한다. {{< /note >}} 두 번째 파일은 fluentd가 실행되는 사이드카 컨테이너가 있는 파드를 설명한다. @@ -245,16 +250,10 @@ fluentd를 구성하는 것에 대한 자세한 내용은, {{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}} -얼마 후 스택드라이버 인터페이스에서 로그 메시지를 찾을 수 있다. - -이것은 단지 예시일 뿐이며 실제로 애플리케이션 컨테이너 내의 -모든 소스에서 읽은 fluentd를 로깅 에이전트로 대체할 수 있다는 것을 -기억한다. 
+이 예시 구성에서, 사용자는 애플리케이션 컨테이너 내의 모든 소스을 읽는 fluentd를 다른 로깅 에이전트로 대체할 수 있다. ### 애플리케이션에서 직접 로그 노출 ![애플리케이션에서 직접 로그 노출](/images/docs/user-guide/logging/logging-from-application.png) -모든 애플리케이션에서 직접 로그를 노출하거나 푸시하여 클러스터-레벨 로깅을 -구현할 수 있다. 그러나, 이러한 로깅 메커니즘의 구현은 -쿠버네티스의 범위를 벗어난다. +모든 애플리케이션에서 직접 로그를 노출하거나 푸시하는 클러스터-로깅은 쿠버네티스의 범위를 벗어난다. diff --git a/content/ko/docs/concepts/cluster-administration/manage-deployment.md b/content/ko/docs/concepts/cluster-administration/manage-deployment.md index be5befdbd3..9893c76548 100644 --- a/content/ko/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/ko/docs/concepts/cluster-administration/manage-deployment.md @@ -1,4 +1,6 @@ --- + + title: 리소스 관리 content_type: concept weight: 40 @@ -43,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/ `kubectl` 은 접미사가 `.yaml`, `.yml` 또는 `.json` 인 파일을 읽는다. -동일한 마이크로서비스 또는 애플리케이션 티어(tier)와 관련된 리소스를 동일한 파일에 배치하고, 애플리케이션과 연관된 모든 파일을 동일한 디렉터리에 그룹화하는 것이 좋다. 애플리케이션의 티어가 DNS를 사용하여 서로 바인딩되면, 스택의 모든 컴포넌트를 일괄로 배포할 수 있다. +동일한 마이크로서비스 또는 애플리케이션 티어(tier)와 관련된 리소스를 동일한 파일에 배치하고, 애플리케이션과 연관된 모든 파일을 동일한 디렉터리에 그룹화하는 것이 좋다. 애플리케이션의 티어가 DNS를 사용하여 서로 바인딩되면, 스택의 모든 컴포넌트를 함께 배포할 수 있다. URL을 구성 소스로 지정할 수도 있다. 이는 github에 체크인된 구성 파일에서 직접 배포하는 데 편리하다. @@ -68,7 +70,7 @@ deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` -두 개의 리소스만 있는 경우, 리소스/이름 구문을 사용하여 커맨드 라인에서 둘다 모두 쉽게 지정할 수도 있다. +두 개의 리소스가 있는 경우, 리소스/이름 구문을 사용하여 커맨드 라인에서 둘다 모두 지정할 수도 있다. ```shell kubectl delete deployments/my-nginx services/my-nginx-svc @@ -85,10 +87,11 @@ deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` -`kubectl` 은 입력을 받아들이는 것과 동일한 구문으로 리소스 이름을 출력하므로, `$()` 또는 `xargs` 를 사용하여 작업을 쉽게 연결할 수 있다. +`kubectl` 은 입력을 받아들이는 것과 동일한 구문으로 리소스 이름을 출력하므로, `$()` 또는 `xargs` 를 사용하여 작업을 연결할 수 있다. ```shell kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) +kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {} ``` ```shell @@ -262,7 +265,7 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m ## 레이블 업데이트 새로운 리소스를 만들기 전에 기존 파드 및 기타 리소스의 레이블을 다시 지정해야 하는 경우가 있다. 이것은 `kubectl label` 로 수행할 수 있다. -예를 들어, 모든 nginx 파드에 프론트엔드 티어로 레이블을 지정하려면, 간단히 다음과 같이 실행한다. +예를 들어, 모든 nginx 파드에 프론트엔드 티어로 레이블을 지정하려면, 다음과 같이 실행한다. ```shell kubectl label pods -l app=nginx tier=fe @@ -299,6 +302,7 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' kubectl get pods my-nginx-v4-9gw19 -o yaml ``` + ```shell apiVersion: v1 kind: pod @@ -312,11 +316,12 @@ metadata: ## 애플리케이션 스케일링 -애플리케이션의 로드가 증가하거나 축소되면, `kubectl` 을 사용하여 쉽게 스케일링할 수 있다. 예를 들어, nginx 레플리카 수를 3에서 1로 줄이려면, 다음을 수행한다. +애플리케이션의 로드가 증가하거나 축소되면, `kubectl` 을 사용하여 애플리케이션을 스케일링한다. 예를 들어, nginx 레플리카 수를 3에서 1로 줄이려면, 다음을 수행한다. ```shell kubectl scale deployment/my-nginx --replicas=1 ``` + ```shell deployment.apps/my-nginx scaled ``` @@ -326,6 +331,7 @@ deployment.apps/my-nginx scaled ```shell kubectl get pods -l app=nginx ``` + ```shell NAME READY STATUS RESTARTS AGE my-nginx-2035384211-j5fhi 1/1 Running 0 30m @@ -336,6 +342,7 @@ my-nginx-2035384211-j5fhi 1/1 Running 0 30m ```shell kubectl autoscale deployment/my-nginx --min=1 --max=3 ``` + ```shell horizontalpodautoscaler.autoscaling/my-nginx autoscaled ``` @@ -404,11 +411,12 @@ JSON 병합 패치 그리고 전략적 병합 패치를 지원한다. ## 파괴적(disruptive) 업데이트 -경우에 따라, 한 번 초기화하면 업데이트할 수 없는 리소스 필드를 업데이트해야 하거나, 디플로이먼트에서 생성된 손상된 파드를 고치는 등의 재귀적 변경을 즉시 원할 수도 있다. 
이러한 필드를 변경하려면, `replace --force` 를 사용하여 리소스를 삭제하고 다시 만든다. 이 경우, 원래 구성 파일을 간단히 수정할 수 있다. +경우에 따라, 한 번 초기화하면 업데이트할 수 없는 리소스 필드를 업데이트해야 하거나, 디플로이먼트에서 생성된 손상된 파드를 고치는 등의 재귀적 변경을 즉시 원할 수도 있다. 이러한 필드를 변경하려면, `replace --force` 를 사용하여 리소스를 삭제하고 다시 만든다. 이 경우, 원래 구성 파일을 수정할 수 있다. ```shell kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force ``` + ```shell deployment.apps/my-nginx deleted deployment.apps/my-nginx replaced @@ -425,19 +433,22 @@ nginx 1.14.2 버전을 실행한다고 가정해 보겠다. ```shell kubectl create deployment my-nginx --image=nginx:1.14.2 ``` + ```shell deployment.apps/my-nginx created ``` 3개의 레플리카를 포함한다(이전과 새 개정판이 공존할 수 있음). + ```shell kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 ``` + ``` deployment.apps/my-nginx scaled ``` -1.16.1 버전으로 업데이트하려면, 위에서 배운 kubectl 명령을 사용하여 `.spec.template.spec.containers[0].image` 를 `nginx:1.14.2` 에서 `nginx:1.16.1` 로 간단히 변경한다. +1.16.1 버전으로 업데이트하려면, 위에서 배운 kubectl 명령을 사용하여 `.spec.template.spec.containers[0].image` 를 `nginx:1.14.2` 에서 `nginx:1.16.1` 로 변경한다. ```shell kubectl edit deployment/my-nginx @@ -452,5 +463,3 @@ kubectl edit deployment/my-nginx - [애플리케이션 검사 및 디버깅에 `kubectl` 을 사용하는 방법](/docs/tasks/debug-application-cluster/debug-application-introspection/)에 대해 알아본다. - [구성 모범 사례 및 팁](/ko/docs/concepts/configuration/overview/)을 참고한다. - - diff --git a/content/ko/docs/concepts/cluster-administration/system-metrics.md b/content/ko/docs/concepts/cluster-administration/system-metrics.md index 03eb904ee3..08b7b79d0d 100644 --- a/content/ko/docs/concepts/cluster-administration/system-metrics.md +++ b/content/ko/docs/concepts/cluster-administration/system-metrics.md @@ -1,9 +1,5 @@ --- -title: 쿠버네티스 컨트롤 플레인에 대한 메트릭 - - - - +title: 쿠버네티스 시스템 컴포넌트에 대한 메트릭 content_type: concept weight: 60 --- @@ -12,7 +8,7 @@ weight: 60 시스템 컴포넌트 메트릭으로 내부에서 발생하는 상황을 더 잘 파악할 수 있다. 메트릭은 대시보드와 경고를 만드는 데 특히 유용하다. -쿠버네티스 컨트롤 플레인의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다. +쿠버네티스 컴포넌트의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다. 이 형식은 구조화된 평문으로 디자인되어 있으므로 사람과 기계 모두가 쉽게 읽을 수 있다. @@ -36,7 +32,7 @@ weight: 60 클러스터가 {{< glossary_tooltip term_id="rbac" text="RBAC" >}}을 사용하는 경우, 메트릭을 읽으려면 `/metrics` 에 접근을 허용하는 클러스터롤(ClusterRole)을 가지는 사용자, 그룹 또는 서비스어카운트(ServiceAccount)를 통한 권한이 필요하다. 예를 들면, 다음과 같다. -``` +```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: @@ -156,5 +152,4 @@ kube-scheduler는 각 파드에 대해 구성된 리소스 [요청과 제한](/k ## {{% heading "whatsnext" %}} * 메트릭에 대한 [프로메테우스 텍스트 형식](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)에 대해 읽어본다 -* [안정적인 쿠버네티스 메트릭](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml) 목록을 참고한다 * [쿠버네티스 사용 중단 정책](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)에 대해 읽어본다 diff --git a/content/ko/docs/concepts/configuration/overview.md b/content/ko/docs/concepts/configuration/overview.md index 6bcec1d6c0..def34dd231 100644 --- a/content/ko/docs/concepts/configuration/overview.md +++ b/content/ko/docs/concepts/configuration/overview.md @@ -59,13 +59,13 @@ DNS 서버는 새로운 `서비스`를 위한 쿠버네티스 API를 Watch하며 - `hostPort`와 같은 이유로, `hostNetwork`를 사용하는 것을 피한다. -- `kube-proxy` 로드 밸런싱이 필요하지 않을 때, 쉬운 서비스 발견을 위해 [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)(`ClusterIP`의 값을 `None`으로 가지는)를 사용한다. 
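아래는 이러한 헤드리스 서비스가 어떤 모습인지 보여주는 최소한의 스케치이다. 서비스 이름(`my-app-headless`), 셀렉터 레이블(`app: my-app`), 포트 번호는 설명을 위한 가정이다.

```shell
# clusterIP 를 None 으로 지정하면 헤드리스 서비스가 생성된다.
# DNS 조회 시 단일 가상 IP 대신 선택된 각 파드의 IP가 반환된다.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```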
+- `kube-proxy` 로드 밸런싱이 필요하지 않을 때, 서비스 발견을 위해 [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)(`ClusterIP`의 값을 `None`으로 가지는)를 사용한다. ## 레이블 사용하기 - `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`처럼 애플리케이션이나 디플로이먼트의 __속성에 대한 의미__ 를 식별하는 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)을 정의해 사용한다. 다른 리소스를 위해 적절한 파드를 선택하는 용도로 이러한 레이블을 이용할 수 있다. 예를 들어, 모든 `tier: frontend` 파드를 선택하거나, `app: myapp`의 모든 `phase: test` 컴포넌트를 선택하는 서비스를 생각해 볼 수 있다. 이 접근 방법의 예시는 [방명록](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 앱을 참고한다. -릴리스에 특정되는 레이블을 서비스의 셀렉터에서 생략함으로써 여러 개의 디플로이먼트에 걸치는 서비스를 생성할 수 있다. [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/)는 생성되어 있는 서비스를 다운타임 없이 수정하기 쉽도록 만든다. +릴리스에 특정되는 레이블을 서비스의 셀렉터에서 생략함으로써 여러 개의 디플로이먼트에 걸치는 서비스를 생성할 수 있다. 동작 중인 서비스를 다운타임 없이 갱신하려면, [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/)를 사용한다. 오브젝트의 의도한 상태는 디플로이먼트에 의해 기술되며, 만약 그 스펙에 대한 변화가 _적용될_ 경우, 디플로이먼트 컨트롤러는 일정한 비율로 실제 상태를 의도한 상태로 변화시킨다. diff --git a/content/ko/docs/concepts/configuration/secret.md b/content/ko/docs/concepts/configuration/secret.md index e5466b8dec..4637c1a63d 100644 --- a/content/ko/docs/concepts/configuration/secret.md +++ b/content/ko/docs/concepts/configuration/secret.md @@ -1,4 +1,6 @@ --- + + title: 시크릿(Secret) content_type: concept feature: @@ -22,6 +24,16 @@ weight: 30 명세나 이미지에 포함될 수 있다. 사용자는 시크릿을 만들 수 있고 시스템도 일부 시크릿을 만들 수 있다. +{{< caution >}} +쿠버네티스 시크릿은 기본적으로 암호화되지 않은 base64 인코딩 문자열로 저장된다. +기본적으로 API 액세스 권한이 있는 모든 사용자 또는 쿠버네티스의 기본 데이터 저장소 etcd에 +액세스할 수 있는 모든 사용자가 일반 텍스트로 검색할 수 있다. +시크릿을 안전하게 사용하려면 (최소한) 다음과 같이 하는 것이 좋다. + +1. 시크릿에 대한 [암호화 활성화](/docs/tasks/administer-cluster/encrypt-data/). +2. 시크릿 읽기 및 쓰기를 제한하는 [RBAC 규칙 활성화 또는 구성](/docs/reference/access-authn-authz/authorization/). 파드를 만들 권한이 있는 모든 사용자는 시크릿을 암묵적으로 얻을 수 있다. +{{< /caution >}} + ## 시크릿 개요 @@ -269,6 +281,13 @@ SSH 인증 시크릿 타입은 사용자 편의만을 위해서 제공된다. API 서버는 요구되는 키가 시크릿 구성에서 제공되고 있는지 검증도 한다. +{{< caution >}} +SSH 개인 키는 자체적으로 SSH 클라이언트와 호스트 서버 간에 신뢰할 수 있는 통신을 +설정하지 않는다. 컨피그맵(ConfigMap)에 추가된 `known_hosts` 파일과 같은 +"중간자(man in the middle)" 공격을 완화하려면 신뢰를 설정하는 +2차 수단이 필요하다. +{{< /caution >}} + ### TLS 시크릿 쿠버네티스는 보통 TLS를 위해 사용되는 인증서와 관련된 키를 저장하기 위해서 @@ -650,7 +669,7 @@ kubelet은 마운트된 시크릿이 모든 주기적인 동기화에서 최신 그러나, kubelet은 시크릿의 현재 값을 가져 오기 위해 로컬 캐시를 사용한다. 캐시의 유형은 [KubeletConfiguration 구조체](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)의 `ConfigMapAndSecretChangeDetectionStrategy` 필드를 사용하여 구성할 수 있다. -시크릿은 watch(기본값), ttl 기반 또는 단순히 API 서버로 모든 요청을 직접 +시크릿은 watch(기본값), ttl 기반 또는 API 서버로 모든 요청을 직접 리디렉션하여 전파할 수 있다. 결과적으로, 시크릿이 업데이트된 순간부터 새로운 키가 파드에 투영되는 순간까지의 총 지연 시간은 kubelet 동기화 시간 + 캐시 @@ -782,12 +801,6 @@ immutable: true 해당 프로세스에 대한 자세한 설명은 [서비스 어카운트에 ImagePullSecrets 추가하기](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)를 참고한다. -### 수동으로 생성된 시크릿의 자동 마운트 - -수동으로 생성된 시크릿(예: GitHub 계정에 접근하기 위한 토큰이 포함된 시크릿)은 -시크릿의 서비스 어카운트를 기반한 파드에 자동으로 연결될 수 있다. -해당 프로세스에 대한 자세한 설명은 [파드프리셋(PodPreset)을 사용하여 파드에 정보 주입하기](/docs/tasks/inject-data-application/podpreset/)를 참고한다. 
- ## 상세 내용 ### 제약 사항 diff --git a/content/ko/docs/concepts/containers/container-environment.md b/content/ko/docs/concepts/containers/container-environment.md index c6cb09965a..58c106fdce 100644 --- a/content/ko/docs/concepts/containers/container-environment.md +++ b/content/ko/docs/concepts/containers/container-environment.md @@ -1,4 +1,7 @@ --- + + + title: 컨테이너 환경 변수 content_type: concept weight: 20 @@ -24,11 +27,11 @@ weight: 20 ### 컨테이너 정보 컨테이너의 *호스트네임* 은 컨테이너가 동작 중인 파드의 이름과 같다. -그것은 `hostname` 커맨드 또는 libc의 -[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) +그것은 `hostname` 커맨드 또는 libc의 +[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) 함수 호출을 통해서 구할 수 있다. -파드 이름과 네임스페이스는 +파드 이름과 네임스페이스는 [다운워드(Downward) API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)를 통해 환경 변수로 구할 수 있다. Docker 이미지에 정적으로 명시된 환경 변수와 마찬가지로, @@ -36,11 +39,12 @@ Docker 이미지에 정적으로 명시된 환경 변수와 마찬가지로, ### 클러스터 정보 -컨테이너가 생성될 때 실행 중이던 모든 서비스의 목록은 환경 변수로 해당 컨테이너에서 사용할 수 +컨테이너가 생성될 때 실행 중이던 모든 서비스의 목록은 환경 변수로 해당 컨테이너에서 사용할 수 있다. +이 목록은 새로운 컨테이너의 파드 및 쿠버네티스 컨트롤 플레인 서비스와 동일한 네임스페이스 내에 있는 서비스로 한정된다. 이러한 환경 변수는 Docker 링크 구문과 일치한다. -*bar* 라는 이름의 컨테이너에 매핑되는 *foo* 라는 이름의 서비스에 대해서는, +*bar* 라는 이름의 컨테이너에 매핑되는 *foo* 라는 이름의 서비스에 대해서는, 다음의 형태로 변수가 정의된다. ```shell @@ -58,5 +62,3 @@ FOO_SERVICE_PORT=<서비스가 동작 중인 포트> * [컨테이너 라이프사이클 훅(hooks)](/ko/docs/concepts/containers/container-lifecycle-hooks/)에 대해 더 배워 보기. * [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) 실제 경험 얻기. - - diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md index 662ac71522..d9a1137024 100644 --- a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md @@ -1,4 +1,7 @@ --- + + + title: 컨테이너 라이프사이클 훅(Hook) content_type: concept weight: 30 @@ -33,10 +36,13 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로 `PreStop` -이 훅은 API 요청이나 활성 프로브(liveness probe) 실패, 선점, 자원 경합 등의 관리 이벤트로 인해 컨테이너가 종료되기 직전에 호출된다. 컨테이너가 이미 terminated 또는 completed 상태인 경우에는 preStop 훅 요청이 실패한다. -그것은 동기적인 동작을 의미하는, 차단(blocking)을 수행하고 있으므로, -컨테이너를 중지하기 위한 신호가 전송되기 전에 완료되어야 한다. -파라미터는 핸들러에 전달되지 않는다. +이 훅은 API 요청이나 활성 프로브(liveness probe) 실패, 선점, 자원 경합 +등의 관리 이벤트로 인해 컨테이너가 종료되기 직전에 호출된다. 컨테이너가 이미 +terminated 또는 completed 상태인 경우에는 `PreStop` 훅 요청이 실패하며, +훅은 컨테이너를 중지하기 위한 TERM 신호가 보내지기 이전에 완료되어야 한다. 파드의 그레이스 종료 +기간(termination grace period)의 초읽기는 `PreStop` 훅이 실행되기 전에 시작되어, +핸들러의 결과에 상관없이 컨테이너가 파드의 그레이스 종료 기간 내에 결국 종료되도록 한다. +어떠한 파라미터도 핸들러에게 전달되지 않는다. 종료 동작에 더 자세한 대한 설명은 [파드의 종료](/ko/docs/concepts/workloads/pods/pod-lifecycle/#파드의-종료)에서 찾을 수 있다. @@ -54,7 +60,7 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로 컨테이너 라이프사이클 관리 훅이 호출되면, 쿠버네티스 관리 시스템은 훅 동작에 따라 핸들러를 실행하고, -`exec` 와 `tcpSocket` 은 컨테이너에서 실행되고, `httpGet` 은 kubelet 프로세스에 의해 실행된다. +`httpGet` 와 `tcpSocket` 은 kubelet 프로세스에 의해 실행되고, `exec` 은 컨테이너에서 실행된다. 훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 컨텍스트와 동기적으로 동작한다. 이것은 `PostStart` 훅에 대해서, @@ -62,17 +68,13 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로 그러나, 만약 해당 훅이 너무 오래 동작하거나 어딘가에 걸려 있다면, 컨테이너는 `running` 상태에 이르지 못한다. -`PreStop` 훅은 컨테이너 중지 신호에서 비동기적으로 -실행되지 않는다. 훅은 신호를 보내기 전에 실행을 -완료해야 한다. -실행 중에 `PreStop` 훅이 중단되면, +`PreStop` 훅은 컨테이너 중지 신호에서 비동기적으로 실행되지 않는다. 훅은 +TERM 신호를 보내기 전에 실행을 완료해야 한다. 실행 중에 `PreStop` 훅이 중단되면, 파드의 단계는 `Terminating` 이며 `terminationGracePeriodSeconds` 가 -만료된 후 파드가 종료될 때까지 남아 있다. -이 유예 기간은 `PreStop` 훅이 실행되고 컨테이너가 -정상적으로 중지되는 데 걸리는 총 시간에 적용된다. 
-예를 들어, `terminationGracePeriodSeconds` 가 60이고, 훅이 -완료되는 데 55초가 걸리고, 컨테이너가 신호를 수신한 후 -정상적으로 중지하는 데 10초가 걸리면, `terminationGracePeriodSeconds` 이후 +만료된 후 파드가 종료될 때까지 남아 있다. 이 유예 기간은 `PreStop` 훅이 +실행되고 컨테이너가 정상적으로 중지되는 데 걸리는 총 시간에 적용된다. 예를 들어, +`terminationGracePeriodSeconds` 가 60이고, 훅이 완료되는 데 55초가 걸리고, +컨테이너가 신호를 수신한 후 정상적으로 중지하는 데 10초가 걸리면, `terminationGracePeriodSeconds` 이후 컨테이너가 정상적으로 중지되기 전에 종료된다. 이 두 가지 일이 발생하는 데 걸리는 총 시간(55+10)보다 적다. diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md index b31cabe888..3d7c89b65c 100644 --- a/content/ko/docs/concepts/containers/runtime-class.md +++ b/content/ko/docs/concepts/containers/runtime-class.md @@ -1,7 +1,4 @@ --- - - - title: 런타임클래스(RuntimeClass) content_type: concept weight: 20 @@ -35,10 +32,6 @@ weight: 20 ## 셋업 -런타임클래스 기능 게이트가 활성화(기본값)된 것을 확인한다. -기능 게이트 활성화에 대한 설명은 [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를 -참고한다. `RuntimeClass` 기능 게이트는 API 서버 _및_ kubelets에서 활성화되어야 한다. - 1. CRI 구현(implementation)을 노드에 설정(런타임에 따라서). 2. 상응하는 런타임클래스 리소스 생성. @@ -144,11 +137,9 @@ https://github.com/containerd/cri/blob/master/docs/config.md {{< feature-state for_k8s_version="v1.16" state="beta" >}} -쿠버네티스 v1.16 부터, 런타임 클래스는 `scheduling` 필드를 통해 이종의 클러스터 -지원을 포함한다. 이 필드를 사용하면, 이 런타임 클래스를 갖는 파드가 이를 지원하는 -노드로 스케줄된다는 것을 보장할 수 있다. 이 스케줄링 기능을 사용하려면, -[런타임 클래스 어드미션(admission) 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#runtimeclass)를 -활성화(1.16 부터 기본값)해야 한다. +RuntimeClass에 `scheduling` 필드를 지정하면, 이 RuntimeClass로 실행되는 파드가 +이를 지원하는 노드로 예약되도록 제약 조건을 설정할 수 있다. +`scheduling`이 설정되지 않은 경우 이 RuntimeClass는 모든 노드에서 지원되는 것으로 간주된다. 파드가 지정된 런타임클래스를 지원하는 노드에 안착한다는 것을 보장하려면, 해당 노드들은 `runtimeClass.scheduling.nodeSelector` 필드에서 선택되는 공통 레이블을 가져야한다. diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 7e8a10e8b1..a2326c71dd 100644 --- a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -1,5 +1,9 @@ --- title: 애그리게이션 레이어(aggregation layer)로 쿠버네티스 API 확장하기 + + + + content_type: concept weight: 10 --- @@ -25,8 +29,6 @@ Extension-apiserver는 kube-apiserver로 오가는 연결의 레이턴시가 낮 kube-apiserver로 부터의 디스커버리 요청은 왕복 레이턴시가 5초 이내여야 한다. extention API server가 레이턴시 요구 사항을 달성할 수 없는 경우 이를 충족할 수 있도록 변경하는 것을 고려한다. -`EnableAggregatedDiscoveryTimeout=false` [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를 설정해서 타임아웃 -제한을 비활성화 할 수 있다. 이 사용 중단(deprecated)된 기능 게이트는 향후 릴리스에서 제거될 예정이다. ## {{% heading "whatsnext" %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index 68e529549b..d549060108 100644 --- a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -1,5 +1,8 @@ --- title: 커스텀 리소스 + + + content_type: concept weight: 10 --- @@ -117,7 +120,7 @@ _선언_ 하거나 지정할 수 있게 해주며 쿠버네티스 오브젝트 쿠버네티스는 다양한 사용자의 요구를 충족시키기 위해 이 두 가지 옵션을 제공하므로 사용의 용이성이나 유연성이 저하되지 않는다. -애그리게이트 API는 기본 API 서버 뒤에 있는 하위 API 서버이며 프록시 역할을 한다. 이 배치를 [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)(AA)이라고 한다. 사용자에게는 쿠버네티스 API가 확장된 것과 같다. +애그리게이트 API는 기본 API 서버 뒤에 있는 하위 API 서버이며 프록시 역할을 한다. 
이 배치를 [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)(AA)이라고 한다. 사용자에게는 쿠버네티스 API가 확장된 것으로 나타난다. CRD를 사용하면 다른 API 서버를 추가하지 않고도 새로운 타입의 리소스를 생성할 수 있다. CRD를 사용하기 위해 API 애그리게이션을 이해할 필요는 없다. diff --git a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 7542ac0add..359284b357 100644 --- a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -1,4 +1,8 @@ --- + + + + title: 네트워크 플러그인 content_type: concept weight: 10 @@ -20,7 +24,7 @@ weight: 10 kubelet에는 단일 기본 네트워크 플러그인과 전체 클러스터에 공통된 기본 네트워크가 있다. 플러그인은 시작할 때 플러그인을 검색하고, 찾은 것을 기억하며, 파드 라이프사이클에서 적절한 시간에 선택한 플러그인을 실행한다(CRI는 자체 CNI 플러그인을 관리하므로 도커에만 해당됨). 플러그인 사용 시 명심해야 할 두 가지 Kubelet 커맨드라인 파라미터가 있다. * `cni-bin-dir`: Kubelet은 시작할 때 플러그인에 대해 이 디렉터리를 검사한다. -* `network-plugin`: `cni-bin-dir` 에서 사용할 네트워크 플러그인. 플러그인 디렉터리에서 검색한 플러그인이 보고된 이름과 일치해야 한다. CNI 플러그인의 경우, 이는 단순히 "cni"이다. +* `network-plugin`: `cni-bin-dir` 에서 사용할 네트워크 플러그인. 플러그인 디렉터리에서 검색한 플러그인이 보고된 이름과 일치해야 한다. CNI 플러그인의 경우, 이는 "cni"이다. ## 네트워크 플러그인 요구 사항 diff --git a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md index d1eecd6fdc..ee9763a769 100644 --- a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md @@ -69,7 +69,7 @@ weight: 10 웹훅 모델에서 쿠버네티스는 원격 서비스에 네트워크 요청을 한다. *바이너리 플러그인* 모델에서 쿠버네티스는 바이너리(프로그램)를 실행한다. 바이너리 플러그인은 kubelet(예: -[Flex Volume 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과 +[Flex 볼륨 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과 [네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))과 kubectl에서 사용한다. @@ -157,7 +157,7 @@ API를 추가해도 기존 API(예: 파드)의 동작에 직접 영향을 미치 ### 스토리지 플러그인 -[Flex Volumes](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면 +[Flex 볼륨](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면 Kubelet이 바이너리 플러그인을 호출하여 볼륨을 마운트하도록 함으로써 빌트인 지원 없이 볼륨 유형을 마운트 할 수 있다. 
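앞의 네트워크 플러그인 문서에서 언급한 kubelet 파라미터를 실제로 지정하면 대략 다음과 같은 형태가 된다. 플래그 값으로 사용한 경로(`/opt/cni/bin`, `/etc/cni/net.d`)는 흔히 쓰이는 기본 경로를 가정한 예시이며, 배포 환경에 따라 다를 수 있다.

```shell
# CNI 네트워크 플러그인을 사용하도록 kubelet 을 실행하는 예시이다.
# (경로 값은 가정이며, 실제 클러스터에서는 kubelet 구동 방식에 맞게 지정한다)
kubelet --network-plugin=cni \
  --cni-bin-dir=/opt/cni/bin \
  --cni-conf-dir=/etc/cni/net.d
```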
diff --git a/content/ko/docs/concepts/extend-kubernetes/operator.md b/content/ko/docs/concepts/extend-kubernetes/operator.md index f6c80d8067..57a4f3d9d2 100644 --- a/content/ko/docs/concepts/extend-kubernetes/operator.md +++ b/content/ko/docs/concepts/extend-kubernetes/operator.md @@ -124,5 +124,5 @@ kubectl edit SampleDB/example-database # 일부 설정을 수동으로 변경하 사용하여 직접 구현하기 * [오퍼레이터 프레임워크](https://operatorframework.io) 사용하기 * 다른 사람들이 사용할 수 있도록 자신의 오퍼레이터를 [게시](https://operatorhub.io/)하기 -* 오퍼레이터 패턴을 소개한 [CoreOS 원본 기사](https://coreos.com/blog/introducing-operators.html) 읽기 +* 오퍼레이터 패턴을 소개한 [CoreOS 원본 글](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) 읽기 (이 링크는 원본 글에 대한 보관 버전임) * 오퍼레이터 구축을 위한 모범 사례에 대한 구글 클라우드(Google Cloud)의 [기사](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) 읽기 diff --git a/content/ko/docs/concepts/extend-kubernetes/service-catalog.md b/content/ko/docs/concepts/extend-kubernetes/service-catalog.md index 3ac6a2df7d..cc387c2f02 100644 --- a/content/ko/docs/concepts/extend-kubernetes/service-catalog.md +++ b/content/ko/docs/concepts/extend-kubernetes/service-catalog.md @@ -1,5 +1,7 @@ --- title: 서비스 카탈로그 + + content_type: concept weight: 40 --- @@ -24,7 +26,7 @@ weight: 40 클러스터 운영자는 서비스 카탈로그를 설정하고 이를 이용하여 클라우드 공급자의 서비스 브로커와 통신하여 메시지 큐 서비스의 인스턴스를 프로비저닝하고 쿠버네티스 클러스터 내의 애플리케이션에서 사용할 수 있게 한다. 따라서 애플리케이션 개발자는 메시지 큐의 세부 구현 또는 관리에 신경 쓸 필요가 없다. -애플리케이션은 그것을 서비스로 간단하게 사용할 수 있다. +애플리케이션은 메시지 큐에 서비스로 접속할 수 있다. ## 아키텍처 @@ -229,8 +231,3 @@ spec: * [샘플 서비스 브로커](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers) 살펴보기 * [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) 프로젝트 탐색 * [svc-cat.io](https://svc-cat.io/docs/) 살펴보기 - - - - - diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md index 449cae4393..344c266d1e 100644 --- a/content/ko/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md @@ -1,4 +1,7 @@ --- + + + title: 쿠버네티스란 무엇인가? description: > 쿠버네티스는 컨테이너화된 워크로드와 서비스를 관리하기 위한 이식할 수 있고, 확장 가능한 오픈소스 플랫폼으로, 선언적 구성과 자동화를 모두 지원한다. 쿠버네티스는 크고 빠르게 성장하는 생태계를 가지고 있다. 쿠버네티스 서비스, 지원 그리고 도구들은 광범위하게 제공된다. @@ -40,7 +43,7 @@ sitemap: 컨테이너는 다음과 같은 추가적인 혜택을 제공하기 때문에 인기가 있다. * 기민한 애플리케이션 생성과 배포: VM 이미지를 사용하는 것에 비해 컨테이너 이미지 생성이 보다 쉽고 효율적임. -* 지속적인 개발, 통합 및 배포: 안정적이고 주기적으로 컨테이너 이미지를 빌드해서 배포할 수 있고 (이미지의 불변성 덕에) 빠르고 쉽게 롤백할 수 있다. +* 지속적인 개발, 통합 및 배포: 안정적이고 주기적으로 컨테이너 이미지를 빌드해서 배포할 수 있고 (이미지의 불변성 덕에) 빠르고 효율적으로 롤백할 수 있다. * 개발과 운영의 관심사 분리: 배포 시점이 아닌 빌드/릴리스 시점에 애플리케이션 컨테이너 이미지를 만들기 때문에, 애플리케이션이 인프라스트럭처에서 분리된다. * 가시성은 OS 수준의 정보와 메트릭에 머무르지 않고, 애플리케이션의 헬스와 그 밖의 시그널을 볼 수 있다. * 개발, 테스팅 및 운영 환경에 걸친 일관성: 랩탑에서도 클라우드에서와 동일하게 구동된다. @@ -52,7 +55,7 @@ sitemap: ## 쿠버네티스가 왜 필요하고 무엇을 할 수 있나 {#why-you-need-kubernetes-and-what-can-it-do} -컨테이너는 애플리케이션을 포장하고 실행하는 좋은 방법이다. 프로덕션 환경에서는 애플리케이션을 실행하는 컨테이너를 관리하고 가동 중지 시간이 없는지 확인해야한다. 예를 들어 컨테이너가 다운되면 다른 컨테이너를 다시 시작해야한다. 이 문제를 시스템에 의해 처리한다면 더 쉽지 않을까? +컨테이너는 애플리케이션을 포장하고 실행하는 좋은 방법이다. 프로덕션 환경에서는 애플리케이션을 실행하는 컨테이너를 관리하고 가동 중지 시간이 없는지 확인해야 한다. 예를 들어 컨테이너가 다운되면 다른 컨테이너를 다시 시작해야 한다. 이 문제를 시스템에 의해 처리한다면 더 쉽지 않을까? 그것이 쿠버네티스가 필요한 이유이다! 쿠버네티스는 분산 시스템을 탄력적으로 실행하기 위한 프레임 워크를 제공한다. 애플리케이션의 확장과 장애 조치를 처리하고, 배포 패턴 등을 제공한다. 예를 들어, 쿠버네티스는 시스템의 카나리아 배포를 쉽게 관리 할 수 있다. 
diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md index 1e0d86a97b..2044cf5135 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md @@ -1,4 +1,6 @@ --- + + title: 레이블과 셀렉터 content_type: concept weight: 40 @@ -40,7 +42,7 @@ _레이블_ 은 파드와 같은 오브젝트에 첨부된 키와 값의 쌍이 * `"partition" : "customerA"`, `"partition" : "customerB"` * `"track" : "daily"`, `"track" : "weekly"` -레이블 예시는 일반적으로 사용하는 상황에 해당한다. 당신의 규약에 따라 자유롭게 개발할 수 있다. 오브젝트에 붙여진 레이블 키는 고유해야 한다는 것을 기억해야 한다. +이 예시는 일반적으로 사용하는 레이블이며, 사용자는 자신만의 규칙(convention)에 따라 자유롭게 개발할 수 있다. 오브젝트에 붙여진 레이블 키는 고유해야 한다는 것을 기억해야 한다. ## 구문과 캐릭터 셋 @@ -50,6 +52,11 @@ _레이블_ 은 키와 값의 쌍이다. 유효한 레이블 키에는 슬래시 `kubernetes.io/`와 `k8s.io/` 접두사는 쿠버네티스의 핵심 컴포넌트로 예약되어있다. +유효한 레이블 값은 다음과 같다. +* 63 자 이하 여야 하고(공백이면 안 됨), +* 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, +* 알파벳과 숫자, 대시(`-`), 밑줄(`_`), 점(`.`)를 중간에 포함할 수 있다. + 유효한 레이블 값은 63자 미만 또는 공백이며 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. 다음의 예시는 파드에 `environment: production` 과 `app: nginx` 2개의 레이블이 있는 구성 파일이다. @@ -90,14 +97,13 @@ API는 현재 _일치성 기준_ 과 _집합성 기준_ 이라는 두 종류의 {{< /note >}} {{< caution >}} -일치성 기준과 집합성 기준 조건 모두에 대해 논리적인 _OR_ (`||`) 연산자가 없다. -필터 구문이 적절히 구성되어있는지 확인해야 한다. +일치성 기준과 집합성 기준 조건 모두에 대해 논리적인 _OR_ (`||`) 연산자가 없다. 필터 구문이 적절히 구성되어있는지 확인해야 한다. {{< /caution >}} ### _일치성 기준_ 요건 _일치성 기준_ 또는 _불일치 기준_ 의 요구사항으로 레이블의 키와 값의 필터링을 허용한다. 일치하는 오브젝트는 추가 레이블을 가질 수 있지만, 레이블의 명시된 제약 조건을 모두 만족해야 한다. -`=`,`==`,`!=` 이 3가지 연산자만 허용한다. 처음 두 개의 연산자의 _일치성_(그리고 단순히 동의어일 뿐임), 나머지는 _불일치_ 를 의미한다. 예를 들면, +`=`,`==`,`!=` 이 세 가지 연산자만 허용한다. 처음 두 개의 연산자의 _일치성_(그리고 동의어), 나머지는 _불일치_ 를 의미한다. 예를 들면, ``` environment = production @@ -108,8 +114,9 @@ tier != frontend 후자는 `tier`를 키로 가지고, 값을 `frontend`를 가지는 리소스를 제외한 모든 리소스를 선택하고, `tier`를 키로 가지며, 값을 공백으로 가지는 모든 리소스를 선택한다. `environment=production,tier!=frontend` 처럼 쉼표를 통해 한 문장으로 `frontend`를 제외한 `production`을 필터링할 수 있다. -균등-기반 레이블의 요건에 대한 하나의 이용 시나리오는 파드가 노드를 선택하는 기준을 지정하는 것이다. -예를 들어, 아래 샘플 파드는 "`accelerator=nvidia-tesla-p100`" 레이블을 가진 노드를 선택한다. +일치성 기준 레이블 요건에 대한 하나의 이용 시나리오는 파드가 노드를 선택하는 기준을 지정하는 것이다. +예를 들어, 아래 샘플 파드는 "`accelerator=nvidia-tesla-p100`" +레이블을 가진 노드를 선택한다. ```yaml apiVersion: v1 @@ -148,16 +155,17 @@ _집합성 기준_ 레이블 셀렉터는 일반적으로 `environment=productio _집합성 기준_ 요건은 _일치성 기준_ 요건과 조합해서 사용할 수 있다. 예를 들어 `partition in (customerA, customerB),environment!=qa` + ## API ### LIST와 WATCH 필터링 -LIST와 WATCH 작업은 쿼리 파라미터를 사용해서 반환되는 오브젝트 집합을 필터링하기 위해 레이블 셀렉터를 지정할 수 있다. 다음의 2가지 요건 모두 허용된다(URL 쿼리 문자열을 그대로 표기함). +LIST와 WATCH 작업은 쿼리 파라미터를 사용해서 반환되는 오브젝트 집합을 필터링하기 위해 레이블 셀렉터를 지정할 수 있다. 다음의 두 가지 요건 모두 허용된다(URL 쿼리 문자열을 그대로 표기함). - * _불일치 기준_ 요건: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` + * _일치성 기준_ 요건: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` * _집합성 기준_ 요건: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` -두 가지 레이블 셀렉터 스타일은 모두 REST 클라이언트를 통해 선택된 리소스를 확인하거나 목록을 볼 수 있다. 예를 들어, `kubectl`로 `apiserver`를 대상으로 _불일치 기준_ 으로 하는 셀렉터를 다음과 같이 이용할 수 있다. +두 가지 레이블 셀렉터 스타일은 모두 REST 클라이언트를 통해 선택된 리소스를 확인하거나 목록을 볼 수 있다. 예를 들어, `kubectl`로 `apiserver`를 대상으로 _일치성 기준_ 으로 하는 셀렉터를 다음과 같이 이용할 수 있다. ```shell kubectl get pods -l environment=production,tier=frontend @@ -192,7 +200,7 @@ kubectl get pods -l 'environment,environment notin (frontend)' `services`에서 지정하는 파드 집합은 레이블 셀렉터로 정의한다. 마찬가지로 `replicationcontrollers`가 관리하는 파드의 오브젝트 그룹도 레이블 셀렉터로 정의한다. 
-서비스와 레플리케이션 컨트롤러의 레이블 셀렉터는 `json` 또는 `yaml` 파일에 매핑된 _균등-기반_ 요구사항의 셀렉터만 지원한다. +서비스와 레플리케이션 컨트롤러의 레이블 셀렉터는 `json` 또는 `yaml` 파일에 매핑된 _일치성 기준_ 요구사항의 셀렉터만 지원한다. ```json "selector": { @@ -208,7 +216,6 @@ selector: `json` 또는 `yaml` 서식에서 셀렉터는 `component=redis` 또는 `component in (redis)` 모두 같은 것이다. - #### 세트-기반 요건을 지원하는 리소스 [`Job`](/ko/docs/concepts/workloads/controllers/job/), @@ -232,4 +239,3 @@ selector: 레이블을 통해 선택하는 사용 사례 중 하나는 파드를 스케줄 할 수 있는 노드 셋을 제한하는 것이다. 자세한 내용은 [노드 선택](/ko/docs/concepts/scheduling-eviction/assign-pod-node/) 문서를 참조한다. - diff --git a/content/ko/docs/concepts/overview/working-with-objects/object-management.md b/content/ko/docs/concepts/overview/working-with-objects/object-management.md index 4c1570c458..575def2256 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/ko/docs/concepts/overview/working-with-objects/object-management.md @@ -32,7 +32,7 @@ weight: 15 지정한다. 이것은 클러스터에서 일회성 작업을 개시시키거나 동작시키기 위한 -가장 단순한 방법이다. 이 기법은 활성 오브젝트를 대상으로 직접적인 +추천 방법이다. 이 기법은 활성 오브젝트를 대상으로 직접적인 영향을 미치기 때문에, 이전 구성에 대한 이력을 제공해 주지 않는다. ### 예시 @@ -47,7 +47,7 @@ kubectl create deployment nginx --image nginx 오브젝트 구성에 비해 장점은 다음과 같다. -- 커맨드는 간단해서 배우기 쉽고, 기억하기 쉽다. +- 커맨드는 하나의 동작을 나타내는 단어로 표현된다. - 커맨드는 클러스터를 수정하기 위해 단 하나의 단계만을 필요로 한다. 오브젝트 구성에 비해 단점은 다음과 같다. @@ -125,7 +125,7 @@ kubectl replace -f nginx.yaml 선언형 오브젝트 구성을 사용할 경우, 사용자는 로컬에 보관된 오브젝트 구성 파일을 대상으로 작동시키지만, 사용자는 파일에서 수행 할 작업을 정의하지 않는다. 생성, 업데이트, 그리고 삭제 작업은 -`kubectl`에 의해 오브젝트 마다 자동으로 감지된다. 이를 통해 다른 오브젝트에 대해 +`kubectl`에 의해 오브젝트마다 자동으로 감지된다. 이를 통해 다른 오브젝트에 대해 다른 조작이 필요할 수 있는 디렉터리에서 작업할 수 있다. {{< note >}} diff --git a/content/ko/docs/concepts/policy/resource-quotas.md b/content/ko/docs/concepts/policy/resource-quotas.md index a6e6208d76..df23da5d57 100644 --- a/content/ko/docs/concepts/policy/resource-quotas.md +++ b/content/ko/docs/concepts/policy/resource-quotas.md @@ -1,4 +1,6 @@ --- + + title: 리소스 쿼터 content_type: concept weight: 20 @@ -56,7 +58,7 @@ weight: 20 ## 리소스 쿼터 활성화 많은 쿠버네티스 배포판에 기본적으로 리소스 쿼터 지원이 활성화되어 있다. -API 서버 `--enable-admission-plugins=` 플래그의 인수 중 하나로 +{{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} `--enable-admission-plugins=` 플래그의 인수 중 하나로 `ResourceQuota`가 있는 경우 활성화된다. 해당 네임스페이스에 리소스쿼터가 있는 경우 특정 네임스페이스에 @@ -608,17 +610,28 @@ plugins: values: ["cluster-services"] ``` -이제 "cluster-services" 파드는 `scopeSelector`와 일치하는 쿼터 오브젝트가 있는 네임스페이스에서만 허용된다. -예를 들면 다음과 같다. +그리고, `kube-system` 네임스페이스에 리소스 쿼터 오브젝트를 생성한다. -```yaml - scopeSelector: - matchExpressions: - - scopeName: PriorityClass - operator: In - values: ["cluster-services"] +{{< codenew file="policy/priority-class-resourcequota.yaml" >}} + +```shell +$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system ``` +``` +resourcequota/pods-cluster-services created +``` + +이 경우, 파드 생성은 다음의 조건을 만족해야 허용될 것이다. + +1. 파드의 `priorityClassName` 가 명시되지 않음. +1. 파드의 `priorityClassName` 가 `cluster-services` 이외의 다른 값으로 명시됨. +1. 파드의 `priorityClassName` 가 `cluster-services` 로 설정되고, 파드가 `kube-system` + 네임스페이스에 생성되었으며 리소스 쿼터 검증을 통과함. + +파드 생성 요청은 `priorityClassName` 가 `cluster-services` 로 명시되고 +`kube-system` 이외의 다른 네임스페이스에 생성되는 경우, 거절된다. + ## {{% heading "whatsnext" %}} - 자세한 내용은 [리소스쿼터 디자인 문서](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)를 참고한다. 
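위의 리소스 쿼터 예제에서 참조하는 `priority-class-resourcequota.yaml` 과 유사한 오브젝트는 대략 다음과 같은 형태로 정의할 수 있다. 아래 매니페스트는 설명을 위한 스케치이며, 이름과 값(`pods-cluster-services`, `cluster-services`)은 위 본문의 예시를 그대로 가정한 것이다.

```shell
# 우선순위 클래스 범위(scope)로 파드 생성을 검증하는 리소스쿼터 스케치이다.
# (실제 예시 파일의 내용과 다를 수 있다)
kubectl apply -n kube-system -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-cluster-services
spec:
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["cluster-services"]
EOF

# 생성된 쿼터와 사용량을 확인한다.
kubectl describe quota pods-cluster-services -n kube-system
```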
diff --git a/content/ko/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/ko/docs/concepts/scheduling-eviction/assign-pod-node.md index ebbc00f91e..f9e739d7a1 100644 --- a/content/ko/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/ko/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -261,7 +261,7 @@ PodSpec에 지정된 NodeAffinity도 적용된다. `topologyKey` 의 빈 값을 허용하지 않는다. 2. 파드 안티-어피니티에서도 `requiredDuringSchedulingIgnoredDuringExecution` 와 `preferredDuringSchedulingIgnoredDuringExecution` 는 `topologyKey` 의 빈 값을 허용하지 않는다. -3. `requiredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티에서 `topologyKey` 를 `kubernetes.io/hostname` 로 제한하기 위해 어드미션 컨트롤러 `LimitPodHardAntiAffinityTopology` 가 도입되었다. 사용자 지정 토폴로지를 사용할 수 있도록 하려면, 어드미션 컨트롤러를 수정하거나 아니면 간단히 이를 비활성화해야 한다. +3. `requiredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티에서 `topologyKey` 를 `kubernetes.io/hostname` 로 제한하기 위해 어드미션 컨트롤러 `LimitPodHardAntiAffinityTopology` 가 도입되었다. 사용자 지정 토폴로지를 사용할 수 있도록 하려면, 어드미션 컨트롤러를 수정하거나 아니면 이를 비활성화해야 한다. 4. 위의 경우를 제외하고, `topologyKey` 는 적법한 어느 레이블-키도 가능하다. `labelSelector` 와 `topologyKey` 외에도 `labelSelector` 와 일치해야 하는 네임스페이스 목록 `namespaces` 를 diff --git a/content/ko/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/ko/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index 4ed5c58a63..03eec40fae 100644 --- a/content/ko/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/ko/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -1,4 +1,6 @@ --- + + title: 스케줄러 성능 튜닝 content_type: concept weight: 80 diff --git a/content/ko/docs/concepts/security/controlling-access.md b/content/ko/docs/concepts/security/controlling-access.md index 0b4bb6e2cc..3b45be648c 100644 --- a/content/ko/docs/concepts/security/controlling-access.md +++ b/content/ko/docs/concepts/security/controlling-access.md @@ -132,7 +132,7 @@ Bob이 `projectCaribou` 네임스페이스에 있는 오브젝트에 쓰기(`cre 이전의 논의는 (일반적인 경우) API 서버의 보안 포트로 전송되는 요청에 적용된다. API 서버는 실제로 다음과 같이 2개의 포트에서 서비스할 수 있다. -기본적으로 쿠버네티스 API 서버는 2개의 포트에서 HTTP 서비스를 한다. +기본적으로, 쿠버네티스 API 서버는 2개의 포트에서 HTTP 서비스를 한다. 1. `로컬호스트 포트`: diff --git a/content/ko/docs/concepts/security/overview.md b/content/ko/docs/concepts/security/overview.md index 86240a4116..9cd48a172c 100644 --- a/content/ko/docs/concepts/security/overview.md +++ b/content/ko/docs/concepts/security/overview.md @@ -119,6 +119,7 @@ RBAC 인증(쿠버네티스 API에 대한 접근) | https://kubernetes.io/docs/r 컨테이너 취약점 스캔 및 OS에 종속적인 보안 | 이미지 빌드 단계의 일부로 컨테이너에 알려진 취약점이 있는지 검사해야 한다. 이미지 서명 및 시행 | 컨테이너 이미지에 서명하여 컨테이너의 내용에 대한 신뢰 시스템을 유지한다. 권한있는 사용자의 비허용 | 컨테이너를 구성할 때 컨테이너의 목적을 수행하는데 필요한 최소 권한을 가진 사용자를 컨테이너 내에 만드는 방법에 대해서는 설명서를 참조한다. +더 강력한 격리로 컨테이너 런타임 사용 | 더 강력한 격리를 제공하는 [컨테이너 런타임 클래스](/ko/docs/concepts/containers/runtime-class/)를 선택한다. ## 코드 @@ -151,3 +152,4 @@ TLS를 통한 접근 | 코드가 TCP를 통해 통신해야 한다면, 미리 * 컨트롤 플레인을 위한 [전송 데이터 암호화](/docs/tasks/tls/managing-tls-in-a-cluster/) * [Rest에서 데이터 암호화](/docs/tasks/administer-cluster/encrypt-data/) * [쿠버네티스 시크릿](/ko/docs/concepts/configuration/secret/) +* [런타임 클래스](/ko/docs/concepts/containers/runtime-class) diff --git a/content/ko/docs/concepts/services-networking/dns-pod-service.md b/content/ko/docs/concepts/services-networking/dns-pod-service.md index fc1074a86c..006ffba99c 100644 --- a/content/ko/docs/concepts/services-networking/dns-pod-service.md +++ b/content/ko/docs/concepts/services-networking/dns-pod-service.md @@ -1,11 +1,14 @@ --- + + + title: 서비스 및 파드용 DNS content_type: concept weight: 20 --- -이 페이지는 쿠버네티스의 DNS 지원에 대한 개요를 설명한다. 
- +쿠버네티스는 파드와 서비스를 위한 DNS 레코드를 생성한다. 사용자는 IP 주소 대신에 +일관된 DNS 네임을 통해서 서비스에 접속할 수 있다. @@ -15,23 +18,51 @@ weight: 20 개별 컨테이너들이 DNS 네임을 해석할 때 DNS 서비스의 IP를 사용하도록 kubelets를 구성한다. -### DNS 네임이 할당되는 것들 - 클러스터 내의 모든 서비스(DNS 서버 자신도 포함하여)에는 DNS 네임이 할당된다. 기본적으로 클라이언트 파드의 DNS 검색 리스트는 파드 자체의 네임스페이스와 클러스터의 기본 도메인을 포함한다. -이 예시는 다음과 같다. -쿠버네티스 네임스페이스 `bar`에 `foo`라는 서비스가 있다. 네임스페이스 `bar`에서 running 상태인 파드는 -단순하게 `foo`를 조회하는 DNS 쿼리를 통해서 서비스 `foo`를 찾을 수 있다. -네임스페이스 `quux`에서 실행 중인 파드는 -`foo.bar`를 조회하는 DNS 쿼리를 통해서 이 서비스를 찾을 수 있다. +### 서비스의 네임스페이스 -다음 절에서는 쿠버네티스 DNS에서 지원하는 레코드 유형과 레이아웃을 자세히 설명한다. -이 외에 동작하는 레이아웃, 네임 또는 쿼리는 구현 세부 정보로 간주하며 -경고 없이 변경될 수 있다. -최신 업데이트에 대한 자세한 설명은 다음 링크를 통해 참조할 수 있다. -[쿠버네티스 DNS 기반 서비스 디스커버리](https://github.com/kubernetes/dns/blob/master/docs/specification.md). +DNS 쿼리는 그것을 생성하는 파드의 네임스페이스에 따라 다른 결과를 반환할 수 +있다. 네임스페이스를 지정하지 않은 DNS 쿼리는 파드의 네임스페이스에 +국한된다. DNS 쿼리에 네임스페이스를 명시하여 다른 네임스페이스에 있는 서비스에 접속한다. + +예를 들어, `test` 네임스페이스에 있는 파드를 생각해보자. `data` 서비스는 +`prod` 네임스페이스에 있다. + +이 경우, `data` 에 대한 쿼리는 파드의 `test` 네임스페이스를 사용하기 때문에 결과를 반환하지 않을 것이다. + +`data.prod` 로 쿼리하면 의도한 결과를 반환할 것이다. 왜냐하면 +네임스페이스가 명시되어 있기 때문이다. + +DNS 쿼리는 파드의 `/etc/resolv.conf` 를 사용하여 확장될 수 있을 것이다. Kubelet은 +각 파드에 대해서 파일을 설정한다. 예를 들어, `data` 만을 위한 쿼리는 +`data.test.cluster.local` 로 확장된다. `search` 옵션의 값은 +쿼리를 확장하기 위해서 사용된다. DNS 쿼리에 대해 더 자세히 알고 싶은 경우, +[`resolv.conf` 설명 페이지.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)를 참고한다. + +``` +nameserver 10.32.0.10 +search .svc.cluster.local svc.cluster.local cluster.local +options ndots:5 +``` + +요약하면, _test_ 네임스페이스에 있는 파드는 `data.prod` 또는 +`data.prod.cluster.local` 중 하나를 통해 성공적으로 해석될 수 있다. + +### DNS 레코드 + +어떤 오브젝트가 DNS 레코드를 가지는가? + +1. 서비스 +2. 파드 + +다음 섹션은 지원되는 DNS 레코드의 종류 및 레이아웃에 대한 상세 +내용이다. 혹시 동작시킬 필요가 있는 다른 레이아웃, 네임, 또는 쿼리는 +구현 세부 사항으로 간주되며 경고 없이 변경될 수 있다. +최신 명세 확인을 위해서는, +[쿠버네티스 DNS-기반 서비스 디스커버리](https://github.com/kubernetes/dns/blob/master/docs/specification.md)를 본다. ## 서비스 diff --git a/content/ko/docs/concepts/services-networking/dual-stack.md b/content/ko/docs/concepts/services-networking/dual-stack.md index cae986bb5d..dcfb818650 100644 --- a/content/ko/docs/concepts/services-networking/dual-stack.md +++ b/content/ko/docs/concepts/services-networking/dual-stack.md @@ -35,7 +35,7 @@ IPv4/IPv6 이중 스택 쿠버네티스 클러스터를 활용하려면 다음 * 쿠버네티스 1.20 이상 이전 버전과 함께 이중 스택 서비스를 사용하는 방법에 대한 정보 - 쿠버네티스 버전, 쿠버네티스 해당 버전에 대한 + 쿠버네티스 버전, 쿠버네티스 해당 버전에 대한 문서 참조 * 이중 스택 네트워킹을 위한 공급자의 지원(클라우드 공급자 또는 다른 방식으로 쿠버네티스 노드에 라우팅 가능한 IPv4/IPv6 네트워크 인터페이스를 제공할 수 있어야 한다.) * 이중 스택(예: Kubenet 또는 Calico)을 지원하는 네트워크 플러그인 @@ -69,9 +69,9 @@ IPv6 CIDR의 예: `fdXY:IJKL:MNOP:15::/64` (이 형식으로 표시되지만, 클러스터에 이중 스택이 활성화된 경우 IPv4, IPv6 또는 둘 다를 사용할 수 있는 {{< glossary_tooltip text="서비스" term_id="service" >}}를 만들 수 있다. -서비스의 주소 계열은 기본적으로 첫 번째 서비스 클러스터 IP 범위의 주소 계열로 설정된다. (`--service-cluster-ip-range` 플래그를 통해 kube-controller-manager에 구성) +서비스의 주소 계열은 기본적으로 첫 번째 서비스 클러스터 IP 범위의 주소 계열로 설정된다. (`--service-cluster-ip-range` 플래그를 통해 kube-apiserver에 구성) -서비스를 정의할 때 선택적으로 이중 스택으로 구성할 수 있다. 원하는 동작을 지정하려면 `.spec.ipFamilyPolicy` 필드를 +서비스를 정의할 때 선택적으로 이중 스택으로 구성할 수 있다. 원하는 동작을 지정하려면 `.spec.ipFamilyPolicy` 필드를 다음 값 중 하나로 설정한다. * `SingleStack`: 단일 스택 서비스. 컨트롤 플레인은 첫 번째로 구성된 서비스 클러스터 IP 범위를 사용하여 서비스에 대한 클러스터 IP를 할당한다. @@ -158,7 +158,7 @@ status: loadBalancer: {} ``` -1. 
클러스터에서 이중 스택이 활성화된 경우, 셀렉터가 있는 기존 [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)는 `.spec.ClusterIP`가 `None`이라도 컨트롤 플레인이 `.spec.ipFamilyPolicy`을 `SingleStack`으로 지정하고 `.spec.ipFamilies`는 첫 번째 서비스 클러스터 IP 범위(kube-controller-manager에 대한 `--service-cluster-ip-range` 플래그를 통해 구성)의 주소 계열으로 지정한다. +1. 클러스터에서 이중 스택이 활성화된 경우, 셀렉터가 있는 기존 [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)는 `.spec.ClusterIP`가 `None`이라도 컨트롤 플레인이 `.spec.ipFamilyPolicy`을 `SingleStack`으로 지정하고 `.spec.ipFamilies`는 첫 번째 서비스 클러스터 IP 범위(kube-apiserver에 대한 `--service-cluster-ip-range` 플래그를 통해 구성)의 주소 계열으로 지정한다. {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} diff --git a/content/ko/docs/concepts/services-networking/ingress-controllers.md b/content/ko/docs/concepts/services-networking/ingress-controllers.md index 3af939488e..41524039f0 100644 --- a/content/ko/docs/concepts/services-networking/ingress-controllers.md +++ b/content/ko/docs/concepts/services-networking/ingress-controllers.md @@ -9,11 +9,11 @@ weight: 40 인그레스 리소스가 작동하려면, 클러스터는 실행 중인 인그레스 컨트롤러가 반드시 필요하다. -kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는 +`kube-controller-manager` 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는 클러스터와 함께 자동으로 실행되지 않는다. 클러스터에 가장 적합한 인그레스 컨트롤러 구현을 선택하는데 이 페이지를 사용한다. -프로젝트로써 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와 +프로젝트로서 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와 [nginx](https://git.k8s.io/ingress-nginx/README.md#readme) 인그레스 컨트롤러를 지원하고 유지한다. @@ -23,32 +23,35 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의 {{% thirdparty-content %}} -* [AKS 애플리케이션 게이트웨이 인그레스 컨트롤러] (https://azure.github.io/application-gateway-kubernetes-ingress/)는 [Azure 애플리케이션 게이트웨이](https://docs.microsoft.com)를 구성하는 인그레스 컨트롤러다. +* [AKS 애플리케이션 게이트웨이 인그레스 컨트롤러](https://azure.github.io/application-gateway-kubernetes-ingress/)는 [Azure 애플리케이션 게이트웨이](https://docs.microsoft.com)를 구성하는 인그레스 컨트롤러다. * [Ambassador](https://www.getambassador.io/) API 게이트웨이는 [Envoy](https://www.envoyproxy.io) 기반 인그레스 컨트롤러다. +* [Apache APISIX 인그레스 컨트롤러](https://github.com/apache/apisix-ingress-controller)는 [Apache APISIX](https://github.com/apache/apisix) 기반의 인그레스 컨트롤러이다. * [Avi 쿠버네티스 오퍼레이터](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes)는 [VMware NSX Advanced Load Balancer](https://avinetworks.com/)을 사용하는 L4-L7 로드 밸런싱을 제공한다. * [Citrix 인그레스 컨트롤러](https://github.com/citrix/citrix-k8s-ingress-controller#readme)는 Citrix 애플리케이션 딜리버리 컨트롤러에서 작동한다. * [Contour](https://projectcontour.io/)는 [Envoy](https://www.envoyproxy.io/) 기반 인그레스 컨트롤러다. +* [EnRoute](https://getenroute.io/)는 인그레스 컨트롤러로 실행할 수 있는 [Envoy](https://www.envoyproxy.io) 기반 API 게이트웨이다. * F5 BIG-IP [쿠버네티스 용 컨테이너 인그레스 서비스](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)를 이용하면 인그레스를 사용하여 F5 BIG-IP 가상 서버를 구성할 수 있다. * [Gloo](https://gloo.solo.io)는 API 게이트웨이 기능을 제공하는 [Envoy](https://www.envoyproxy.io) 기반의 오픈소스 인그레스 컨트롤러다. -* [HAProxy 인그레스](https://haproxy-ingress.github.io/)는 [HAProxy](http://www.haproxy.org/#desc)의 +* [HAProxy 인그레스](https://haproxy-ingress.github.io/)는 [HAProxy](https://www.haproxy.org/#desc)의 인그레스 컨트롤러다. 
-* [쿠버네티스 용 HAProxy 인그레스 컨트롤러](https://github.com/haproxytech/kubernetes-ingress#readme)는 [HAProxy](http://www.haproxy.org/#desc) 용 +* [쿠버네티스 용 HAProxy 인그레스 컨트롤러](https://github.com/haproxytech/kubernetes-ingress#readme)는 [HAProxy](https://www.haproxy.org/#desc) 용 인그레스 컨트롤러이기도 하다. * [Istio 인그레스](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)는 [Istio](https://istio.io/) 기반 인그레스 컨트롤러다. * [쿠버네티스 용 Kong 인그레스 컨트롤러](https://github.com/Kong/kubernetes-ingress-controller#readme)는 [Kong 게이트웨이](https://konghq.com/kong/)를 구동하는 인그레스 컨트롤러다. -* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx/kubernetes-ingress-controller)는 [NGINX](https://www.nginx.com/resources/glossary) +* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx-ingress-controller/)는 [NGINX](https://www.nginx.com/resources/glossary/nginx/) 웹서버(프록시로 사용)와 함께 작동한다. * [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/)는 사용자의 커스텀 프록시를 구축하기 위한 라이브러리로 설계된 쿠버네티스 인그레스와 같은 유스케이스를 포함한 서비스 구성을 위한 HTTP 라우터 및 역방향 프록시다. * [Traefik 쿠버네티스 인그레스 제공자](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)는 [Traefik](https://traefik.io/traefik/) 프록시 용 인그레스 컨트롤러다. +* [Tyk 오퍼레이터](https://github.com/TykTechnologies/tyk-operator)는 사용자 지정 리소스로 인그레스를 확장하여 API 관리 기능을 인그레스로 가져온다. Tyk 오퍼레이터는 오픈 소스 Tyk 게이트웨이 및 Tyk 클라우드 컨트롤 플레인과 함께 작동한다. * [Voyager](https://appscode.com/products/voyager)는 - [HAProxy](http://www.haproxy.org/#desc)의 인그레스 컨트롤러다. + [HAProxy](https://www.haproxy.org/#desc)의 인그레스 컨트롤러다. ## 여러 인그레스 컨트롤러 사용 @@ -63,7 +66,7 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의 다양한 인그레스 컨트롤러는 약간 다르게 작동한다. {{< note >}} -인그레스 컨트롤러의 설명서를 검토하여 선택 시 주의 사항을 이해해야한다. +인그레스 컨트롤러의 설명서를 검토하여 선택 시 주의 사항을 이해해야 한다. {{< /note >}} diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md index 5b91356437..ec705e6a7c 100644 --- a/content/ko/docs/concepts/services-networking/ingress.md +++ b/content/ko/docs/concepts/services-networking/ingress.md @@ -167,7 +167,7 @@ Events: ### 예제 -| 종류 | 경로 | 요청 경로 | 일치 여부 | +| 종류 | 경로 | 요청 경로 | 일치 여부 | |--------|---------------------------------|-------------------------------|------------------------------------| | Prefix | `/` | (모든 경로) | 예 | | Exact | `/foo` | `/foo` | 예 | @@ -376,7 +376,7 @@ graph LR; 트래픽을 일치 시킬 수 있다. 예를 들어, 다음 인그레스는 `first.bar.com`에 요청된 트래픽을 -`service1`로, `second.foo.com`는 `service2`로, 호스트 이름이 정의되지 +`service1`로, `second.bar.com`는 `service2`로, 호스트 이름이 정의되지 않은(즉, 요청 헤더가 표시 되지 않는) IP 주소로의 모든 트래픽은 `service3`로 라우팅 한다. diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index da9d353d6f..b01a971cff 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -134,7 +134,7 @@ spec: * 한 서비스에서 다른 {{< glossary_tooltip term_id="namespace" text="네임스페이스">}} 또는 다른 클러스터의 서비스를 지정하려고 한다. * 워크로드를 쿠버네티스로 마이그레이션하고 있다. 해당 방식을 평가하는 동안, - 쿠버네티스에서는 일정 비율의 백엔드만 실행한다. + 쿠버네티스에서는 백엔드의 일부만 실행한다. 이러한 시나리오 중에서 파드 셀렉터 _없이_ 서비스를 정의 할 수 있다. 예를 들면 @@ -311,7 +311,7 @@ IPVS는 트래픽을 백엔드 파드로 밸런싱하기 위한 추가 옵션을 {{< note >}} IPVS 모드에서 kube-proxy를 실행하려면, kube-proxy를 시작하기 전에 노드에서 IPVS를 -사용 가능하도록 해야한다. +사용 가능하도록 해야 한다. kube-proxy가 IPVS 프록시 모드에서 시작될 때, IPVS 커널 모듈을 사용할 수 있는지 확인한다. 
IPVS 커널 모듈이 감지되지 않으면, kube-proxy는 @@ -430,8 +430,8 @@ CoreDNS와 같은, 클러스터-인식 DNS 서버는 새로운 서비스를 위 예를 들면, 쿠버네티스 네임스페이스 `my-ns`에 `my-service`라는 서비스가 있는 경우, 컨트롤 플레인과 DNS 서비스가 함께 작동하여 `my-service.my-ns`에 대한 DNS 레코드를 만든다. `my-ns` 네임 스페이스의 파드들은 -간단히 `my-service`에 대한 이름 조회를 수행하여 찾을 수 있어야 한다 -(`my-service.my-ns` 역시 동작함). +`my-service`(`my-service.my-ns` 역시 동작함)에 대한 이름 조회를 +수행하여 서비스를 찾을 수 있어야 한다. 다른 네임스페이스의 파드들은 이름을 `my-service.my-ns`으로 사용해야 한다. 이 이름은 서비스에 할당된 클러스터 IP로 변환된다. @@ -463,7 +463,7 @@ DNS SRV 쿼리를 수행할 수 있다. 셀렉터를 정의하는 헤드리스 서비스의 경우, 엔드포인트 컨트롤러는 API에서 `엔드포인트` 레코드를 생성하고, DNS 구성을 수정하여 -`서비스` 를 지원하는 `파드` 를 직접 가리키는 레코드 (주소)를 반환한다. +`서비스` 를 지원하는 `파드` 를 직접 가리키는 A 레코드(IP 주소)를 반환한다. ### 셀렉터가 없는 경우 @@ -1120,7 +1120,7 @@ VIP용 유저스페이스 프록시를 사용하면 중소 규모의 스케일 않아도 된다. 그것은 격리 실패이다. 서비스에 대한 포트 번호를 선택할 수 있도록 하기 위해, 두 개의 -서비스가 충돌하지 않도록 해야한다. 쿠버네티스는 각 서비스에 고유한 IP 주소를 +서비스가 충돌하지 않도록 해야 한다. 쿠버네티스는 각 서비스에 고유한 IP 주소를 할당하여 이를 수행한다. 각 서비스가 고유한 IP를 받도록 하기 위해, 내부 할당기는 @@ -1164,7 +1164,7 @@ IP 주소(예 : 10.0.0.1)를 할당한다. 서비스 포트를 1234라고 가정 이는 서비스 소유자가 충돌 위험 없이 원하는 어떤 포트든 선택할 수 있음을 의미한다. 클라이언트는 실제로 접근하는 파드를 몰라도, IP와 포트에 -간단히 연결할 수 있다. +연결할 수 있다. #### iptables diff --git a/content/ko/docs/concepts/storage/persistent-volumes.md b/content/ko/docs/concepts/storage/persistent-volumes.md index 9ce1eba6cf..05997eb0f3 100644 --- a/content/ko/docs/concepts/storage/persistent-volumes.md +++ b/content/ko/docs/concepts/storage/persistent-volumes.md @@ -487,7 +487,7 @@ PV는 `storageClassName` 속성을 * VsphereVolume * iSCSI -마운트 옵션의 유효성이 검사되지 않으므로 마운트 옵션이 유효하지 않으면 마운트가 실패한다. +마운트 옵션의 유효성이 검사되지 않는다. 마운트 옵션이 유효하지 않으면, 마운트가 실패한다. 이전에는 `mountOptions` 속성 대신 `volume.beta.kubernetes.io/mount-options` 어노테이션이 사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, @@ -629,6 +629,11 @@ spec: 퍼시스턴트볼륨 바인딩은 배타적이며, 퍼시스턴트볼륨클레임은 네임스페이스 오브젝트이므로 "다중" 모드(`ROX`, `RWX`)를 사용한 클레임은 하나의 네임스페이스 내에서만 가능하다. +### `hostPath` 유형의 퍼시스턴트볼륨 + +`hostPath` 퍼시스턴트볼륨은 노드의 파일이나 디렉터리를 사용하여 네트워크 연결 스토리지를 에뮬레이션한다. +[`hostPath` 유형 볼륨의 예](/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#퍼시스턴트볼륨-생성하기)를 참고한다. + ## 원시 블록 볼륨 지원 {{< feature-state for_k8s_version="v1.18" state="stable" >}} diff --git a/content/ko/docs/concepts/storage/storage-classes.md b/content/ko/docs/concepts/storage/storage-classes.md index 8bc6f7b1bf..94577ca182 100644 --- a/content/ko/docs/concepts/storage/storage-classes.md +++ b/content/ko/docs/concepts/storage/storage-classes.md @@ -143,8 +143,8 @@ CSI | 1.14 (alpha), 1.16 (beta) 클래스의 `mountOptions` 필드에 지정된 마운트 옵션을 가진다. 만약 볼륨 플러그인이 마운트 옵션을 지원하지 않는데, 마운트 -옵션을 지정하면 프로비저닝은 실패한다. 마운트 옵션은 클래스 또는 PV 에서 -검증되지 않으므로 PV 마운트가 유효하지 않으면 마운트가 실패하게 된다. +옵션을 지정하면 프로비저닝은 실패한다. 마운트 옵션은 클래스 또는 PV에서 +검증되지 않는다. PV 마운트가 유효하지 않으면, 마운트가 실패하게 된다. ### 볼륨 바인딩 모드 diff --git a/content/ko/docs/concepts/storage/volume-pvc-datasource.md b/content/ko/docs/concepts/storage/volume-pvc-datasource.md index e6ff2caa38..e9857885d7 100644 --- a/content/ko/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/ko/docs/concepts/storage/volume-pvc-datasource.md @@ -19,7 +19,7 @@ weight: 30 복제는 표준 볼륨처럼 소비할 수 있는 쿠버네티스 볼륨의 복제본으로 정의된다. 유일한 차이점은 프로비저닝할 때 "새" 빈 볼륨을 생성하는 대신에 백엔드 장치가 지정된 볼륨의 정확한 복제본을 생성한다는 것이다. -쿠버네티스 API의 관점에서 복제를 구현하면 새로운 PVC 생성 중에 기존 PVC를 데이터 소스로 지정할 수 있는 기능이 추가된다. 소스 PVC는 바인딩되어있고, 사용가능해야 한다(사용 중이 아니어야함). +쿠버네티스 API의 관점에서 복제를 구현하면 새로운 PVC 생성 중에 기존 PVC를 데이터 소스로 지정할 수 있는 기능이 추가된다. 소스 PVC는 바인딩되어 있고, 사용 가능해야 한다(사용 중이 아니어야 함). 사용자는 이 기능을 사용할 때 다음 사항을 알고 있어야 한다. @@ -64,5 +64,3 @@ spec: ## 사용 새 PVC를 사용할 수 있게 되면, 복제된 PVC는 다른 PVC와 동일하게 소비된다. 또한, 이 시점에서 새롭게 생성된 PVC는 독립된 오브젝트이다. 
원본 dataSource PVC와는 무관하게 독립적으로 소비하고, 복제하고, 스냅샷의 생성 또는 삭제를 할 수 있다. 이는 소스가 새롭게 생성된 복제본에 어떤 방식으로든 연결되어 있지 않으며, 새롭게 생성된 복제본에 영향 없이 수정하거나, 삭제할 수도 있는 것을 의미한다. - - diff --git a/content/ko/docs/concepts/storage/volume-snapshot-classes.md b/content/ko/docs/concepts/storage/volume-snapshot-classes.md index e5b6002e6e..862c900fee 100644 --- a/content/ko/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/ko/docs/concepts/storage/volume-snapshot-classes.md @@ -68,7 +68,7 @@ parameters: ### 드라이버 볼륨 스냅샷 클래스에는 볼륨스냅샷의 프로비저닝에 사용되는 CSI 볼륨 플러그인을 -결정하는 드라이버를 가지고 있다. 이 필드는 반드시 지정해야한다. +결정하는 드라이버를 가지고 있다. 이 필드는 반드시 지정해야 한다. ### 삭제정책(DeletionPolicy) diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index 2e37dc0a67..c9b3ac80d9 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -103,6 +103,8 @@ spec: fsType: ext4 ``` +EBS 볼륨이 파티션된 경우, 선택적 필드인 `partition: ""` 를 제공하여 마운트할 파티션을 지정할 수 있다. + #### AWS EBS CSI 마이그레이션 {{< feature-state for_k8s_version="v1.17" state="beta" >}} @@ -207,8 +209,8 @@ spec: Cinder의 `CSIMigration` 기능이 활성화된 경우, 기존 트리 내 플러그인에서 `cinder.csi.openstack.org` 컨테이너 스토리지 인터페이스(CSI) 드라이버로 모든 플러그인 작업을 수행한다. 이 기능을 사용하려면, 클러스터에 [오픈스택 Cinder CSI -드라이버](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) -를 설치하고 `CSIMigration` 과 `CSIMigrationOpenStack` +드라이버](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)를 +설치하고 `CSIMigration` 과 `CSIMigrationOpenStack` 베타 기능을 활성화해야 한다. ### 컨피그맵(configMap) {#configmap} @@ -534,7 +536,7 @@ glusterfs 볼륨에 데이터를 미리 채울 수 있으며, 파드 간에 데 | 값 | 행동 | |:------|:---------| -| | 빈 문자열 (기본값)은 이전 버전과의 호환성을 위한 것으로, hostPash 볼륨은 마운트 하기 전에 아무런 검사도 수행되지 않는다. | +| | 빈 문자열 (기본값)은 이전 버전과의 호환성을 위한 것으로, hostPath 볼륨은 마운트 하기 전에 아무런 검사도 수행되지 않는다. | | `DirectoryOrCreate` | 만약 주어진 경로에 아무것도 없다면, 필요에 따라 Kubelet이 가지고 있는 동일한 그룹과 소유권, 권한을 0755로 설정한 빈 디렉터리를 생성한다. | | `Directory` | 주어진 경로에 디렉터리가 있어야 함 | | `FileOrCreate` | 만약 주어진 경로에 아무것도 없다면, 필요에 따라 Kubelet이 가지고 있는 동일한 그룹과 소유권, 권한을 0644로 설정한 빈 디렉터리를 생성한다. | @@ -922,7 +924,7 @@ CSI 는 쿠버네티스 내에서 Quobyte 볼륨을 사용하기 위해 권장 ### rbd `rbd` 볼륨을 사용하면 -[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/)(RBD) 볼륨을 파드에 마운트할 수 +[Rados Block Device](https://docs.ceph.com/en/latest/rbd/)(RBD) 볼륨을 파드에 마운트할 수 있다. 파드를 제거할 때 지워지는 `emptyDir` 와는 다르게 `rbd` 볼륨의 내용은 유지되고, 볼륨은 마운트 해제만 된다. 이 의미는 RBD 볼륨에 데이터를 미리 채울 수 있으며, 데이터를 @@ -1330,7 +1332,7 @@ CSI 호환 볼륨 드라이버가 쿠버네티스 클러스터에 배포되면 * `controllerPublishSecretRef`: CSI의 `ControllerPublishVolume` 그리고 `ControllerUnpublishVolume` 호출을 완료하기 위해 CSI 드라이버에 전달하려는 민감한 정보가 포함된 시크릿 오브젝트에 대한 참조이다. 이 필드는 - 선택사항이며, 시크릿이 필요하지 않은 경우 비어있을 수 있다. 만약 시크릿에 + 선택 사항이며, 시크릿이 필요하지 않은 경우 비어있을 수 있다. 만약 시크릿에 둘 이상의 시크릿이 포함된 경우에도 모든 시크릿이 전달된다. * `nodeStageSecretRef`: CSI의 `NodeStageVolume` 호출을 완료하기위해 CSI 드라이버에 전달하려는 민감한 정보가 포함 된 시크릿 diff --git a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md index ed29659a7e..7756c93cb0 100644 --- a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md @@ -89,6 +89,11 @@ kube-controller-manager 컨테이너에 설정된 시간대는 `concurrencyPolicy` 가 `Allow` 로 설정될 경우, 잡은 항상 적어도 한 번은 실행될 것이다. +{{< caution >}} +`startingDeadlineSeconds` 가 10초 미만의 값으로 설정되면, 크론잡이 스케줄되지 않을 수 있다. 이는 크론잡 컨트롤러가 10초마다 항목을 확인하기 때문이다. 
+{{< /caution >}} + + 모든 크론잡에 대해 크론잡 {{< glossary_tooltip term_id="controller" text="컨트롤러" >}} 는 마지막 일정부터 지금까지 얼마나 많은 일정이 누락되었는지 확인한다. 만약 100회 이상의 일정이 누락되었다면, 잡을 실행하지 않고 아래와 같은 에러 로그를 남긴다. ```` diff --git a/content/ko/docs/concepts/workloads/controllers/daemonset.md b/content/ko/docs/concepts/workloads/controllers/daemonset.md index 589fe7c1dd..d7d583d142 100644 --- a/content/ko/docs/concepts/workloads/controllers/daemonset.md +++ b/content/ko/docs/concepts/workloads/controllers/daemonset.md @@ -141,8 +141,8 @@ nodeAffinity: | ---------------------------------------- | ---------- | ------- | ------------------------------------------------------------ | | `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | 네트워크 파티션과 같은 노드 문제가 발생해도 데몬셋 파드는 축출되지 않는다. | | `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | 네트워크 파티션과 같은 노드 문제가 발생해도 데몬셋 파드는 축출되지 않는다. | -| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | | -| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | | +| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | 데몬셋 파드는 기본 스케줄러에서 디스크-압박(disk-pressure) 속성을 허용한다. | +| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | 데몬셋 파드는 기본 스케줄러에서 메모리-압박(memory-pressure) 속성을 허용한다. | | `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | 데몬셋 파드는 기본 스케줄러의 스케줄할 수 없는(unschedulable) 속성을 극복한다. | | `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | 호스트 네트워크를 사용하는 데몬셋 파드는 기본 스케줄러에 의해 이용할 수 없는 네트워크(network-unavailable) 속성을 극복한다. | diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md index 779fcfbe34..f6b9979d47 100644 --- a/content/ko/docs/concepts/workloads/controllers/deployment.md +++ b/content/ko/docs/concepts/workloads/controllers/deployment.md @@ -45,7 +45,7 @@ _디플로이먼트(Deployment)_ 는 {{< glossary_tooltip text="파드" term_id= * `.metadata.name` 필드에 따라 `nginx-deployment` 이름으로 디플로이먼트가 생성된다. * `.spec.replicas` 필드에 따라 디플로이먼트는 3개의 레플리카 파드를 생성한다. * `.spec.selector` 필드는 디플로이먼트가 관리할 파드를 찾는 방법을 정의한다. - 이 사례에서는 간단하게 파드 템플릿에 정의된 레이블(`app: nginx`)을 선택한다. + 이 사례에서는 파드 템플릿에 정의된 레이블(`app: nginx`)을 선택한다. 그러나 파드 템플릿 자체의 규칙이 만족되는 한, 보다 정교한 선택 규칙의 적용이 가능하다. @@ -169,13 +169,15 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml ```shell kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` - 또는 간단하게 다음의 명령어를 사용한다. + + 또는 다음의 명령어를 사용한다. ```shell kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record ``` - 이와 유사하게 출력된다. + 다음과 유사하게 출력된다. + ``` deployment.apps/nginx-deployment image updated ``` @@ -186,7 +188,8 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml kubectl edit deployment.v1.apps/nginx-deployment ``` - 이와 유사하게 출력된다. + 다음과 유사하게 출력된다. + ``` deployment.apps/nginx-deployment edited ``` @@ -198,10 +201,13 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml ``` 이와 유사하게 출력된다. + ``` Waiting for rollout to finish: 2 out of 3 new replicas have been updated... ``` + 또는 + ``` deployment "nginx-deployment" successfully rolled out ``` @@ -210,10 +216,11 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml * 롤아웃이 성공하면 `kubectl get deployments` 를 실행해서 디플로이먼트를 볼 수 있다. 이와 유사하게 출력된다. 
- ``` - NAME READY UP-TO-DATE AVAILABLE AGE - nginx-deployment 3/3 3 3 36s - ``` + + ```ini + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 36s + ``` * `kubectl get rs` 를 실행해서 디플로이먼트가 새 레플리카셋을 생성해서 파드를 업데이트 했는지 볼 수 있고, 새 레플리카셋을 최대 3개의 레플리카로 스케일 업, 이전 레플리카셋을 0개의 레플리카로 스케일 다운한다. @@ -334,7 +341,7 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 이후에는 변경할 수 없다. {{< /note >}} -* 셀렉터 추가 시 디플로이먼트의 사양에 있는 파드 템플릿 레이블도 새 레이블로 업데이트 해야한다. +* 셀렉터 추가 시 디플로이먼트의 사양에 있는 파드 템플릿 레이블도 새 레이블로 업데이트해야 한다. 그렇지 않으면 유효성 검사 오류가 반환된다. 이 변경은 겹치지 않는 변경으로 새 셀렉터가 이전 셀렉터로 만든 레플리카셋과 파드를 선택하지 않게 되고, 그 결과로 모든 기존 레플리카셋은 고아가 되며, 새로운 레플리카셋을 생성하게 된다. @@ -1053,7 +1060,7 @@ echo $? 이것은 {{< glossary_tooltip text="파드" term_id="pod" >}}와 정확하게 동일한 스키마를 가지고 있고, 중첩된 것을 제외하면 `apiVersion` 과 `kind` 를 가지고 있지 않는다. 파드에 필요한 필드 외에 디플로이먼트 파드 템플릿은 적절한 레이블과 적절한 재시작 정책을 명시해야 한다. -레이블의 경우 다른 컨트롤러와 겹치지 않도록 해야한다. 자세한 것은 [셀렉터](#셀렉터)를 참조한다. +레이블의 경우 다른 컨트롤러와 겹치지 않도록 해야 한다. 자세한 것은 [셀렉터](#셀렉터)를 참조한다. [`.spec.template.spec.restartPolicy`](/ko/docs/concepts/workloads/pods/pod-lifecycle/#재시작-정책) 에는 오직 `Always` 만 허용되고, 명시되지 않으면 기본값이 된다. diff --git a/content/ko/docs/concepts/workloads/controllers/job.md b/content/ko/docs/concepts/workloads/controllers/job.md index 0f04051ff1..64b5d3879d 100644 --- a/content/ko/docs/concepts/workloads/controllers/job.md +++ b/content/ko/docs/concepts/workloads/controllers/job.md @@ -13,7 +13,7 @@ weight: 50 -잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료되도록 한다. +잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료될 때까지 계속해서 파드의 실행을 재시도한다. 파드가 성공적으로 완료되면, 성공적으로 완료된 잡을 추적한다. 지정된 수의 성공 완료에 도달하면, 작업(즉, 잡)이 완료된다. 잡을 삭제하면 잡이 생성한 파드가 정리된다. diff --git a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md index 9c3450851a..06ce543012 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md @@ -49,7 +49,7 @@ kubectl 명령에서 숏컷으로 사용된다. {{< codenew file="controllers/replication.yaml" >}} -예제 파일을 다운로드 한 후 다음 명령을 실행하여 예제 작업을 실행하라. +예제 파일을 다운로드한 후 다음 명령을 실행하여 예제 작업을 실행하라. ```shell kubectl apply -f https://k8s.io/examples/controllers/replication.yaml @@ -180,7 +180,7 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 를 사용 Kubectl은 레플리케이션 컨트롤러를 0으로 스케일하고 레플리케이션 컨트롤러 자체를 삭제하기 전에 각 파드를 삭제하기를 기다린다. 이 kubectl 명령이 인터럽트되면 다시 시작할 수 있다. -REST API나 go 클라이언트 라이브러리를 사용하는 경우 명시적으로 단계를 수행해야 한다 (레플리카를 0으로 스케일하고 파드의 삭제를 기다린 이후, +REST API나 Go 클라이언트 라이브러리를 사용하는 경우 명시적으로 단계를 수행해야 한다(레플리카를 0으로 스케일하고 파드의 삭제를 기다린 이후, 레플리케이션 컨트롤러를 삭제). ### 레플리케이션 컨트롤러만 삭제 @@ -189,7 +189,7 @@ REST API나 go 클라이언트 라이브러리를 사용하는 경우 명시적 kubectl을 사용하여, [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)에 옵션으로 `--cascade=false`를 지정하라. -REST API나 go 클라이언트 라이브러리를 사용하는 경우 간단히 레플리케이션 컨트롤러 오브젝트를 삭제하라. +REST API나 Go 클라이언트 라이브러리를 사용하는 경우 레플리케이션 컨트롤러 오브젝트를 삭제하라. 원본이 삭제되면 대체할 새로운 레플리케이션 컨트롤러를 생성하여 교체할 수 있다. 오래된 파드와 새로운 파드의 `.spec.selector` 가 동일하다면, 새로운 레플리케이션 컨트롤러는 오래된 파드를 채택할 것이다. 그러나 기존 파드를 @@ -208,7 +208,8 @@ REST API나 go 클라이언트 라이브러리를 사용하는 경우 간단히 ### 스케일링 -레플리케이션 컨트롤러는 `replicas` 필드를 업데이트함으로써 수동으로 또는 오토 스케일링 제어 에이전트로 레플리카의 수를 쉽게 스케일 업하거나 스케일 다운할 수 있다. +레플리케이션컨트롤러는 `replicas` 필드를 설정하여 레플리카의 수를 늘리거나 줄인다. +레플리카를 수동으로 또는 오토 스케일링 제어 에이전트로 관리하도록 레플리케이션컨트롤러를 구성할 수 있다. 
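+
+다음은 가상의 레플리케이션컨트롤러 이름 `nginx` 를 가정한 간단한 예시로, `kubectl scale` 명령을 사용하여 레플리카 수를 수동으로 변경하는 방법을 보여준다.
+
+```shell
+# 레플리케이션컨트롤러 "nginx" 의 레플리카 수를 3으로 조정한다
+kubectl scale rc nginx --replicas=3
+```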
### 롤링 업데이트 @@ -239,7 +240,7 @@ REST API나 go 클라이언트 라이브러리를 사용하는 경우 간단히 ## 레플리케이션 컨트롤러의 책임 -레플리케이션 컨트롤러는 의도한 수의 파드가 해당 레이블 선택기와 일치하고 동작하는지를 단순히 확인한다. 현재, 종료된 파드만 해당 파드의 수에서 제외된다. 향후 시스템에서 사용할 수 있는 [readiness](https://issue.k8s.io/620) 및 기타 정보가 고려될 수 있으며 교체 정책에 대한 통제를 더 추가 할 수 있고 외부 클라이언트가 임의로 정교한 교체 또는 스케일 다운 정책을 구현하기 위해 사용할 수 있는 이벤트를 내보낼 계획이다. +레플리케이션 컨트롤러는 의도한 수의 파드가 해당 레이블 셀렉터와 일치하고 동작하는지를 확인한다. 현재, 종료된 파드만 해당 파드의 수에서 제외된다. 향후 시스템에서 사용할 수 있는 [readiness](https://issue.k8s.io/620) 및 기타 정보가 고려될 수 있으며 교체 정책에 대한 통제를 더 추가 할 수 있고 외부 클라이언트가 임의로 정교한 교체 또는 스케일 다운 정책을 구현하기 위해 사용할 수 있는 이벤트를 내보낼 계획이다. 레플리케이션 컨트롤러는 이 좁은 책임에 영원히 제약을 받는다. 그 자체로는 준비성 또는 활성 프로브를 실행하지 않을 것이다. 오토 스케일링을 수행하는 대신, 외부 오토 스케일러 ([#492](https://issue.k8s.io/492)에서 논의된)가 레플리케이션 컨트롤러의 `replicas` 필드를 변경함으로써 제어되도록 의도되었다. 레플리케이션 컨트롤러에 스케줄링 정책 (예를 들어 [spreading](https://issue.k8s.io/367#issuecomment-48428019))을 추가하지 않을 것이다. 오토사이징 및 기타 자동화 된 프로세스를 방해할 수 있으므로 제어된 파드가 현재 지정된 템플릿과 일치하는지 확인해야 한다. 마찬가지로 기한 완료, 순서 종속성, 구성 확장 및 기타 기능은 다른 곳에 속한다. 대량의 파드 생성 메커니즘 ([#170](https://issue.k8s.io/170))까지도 고려해야 한다. diff --git a/content/ko/docs/concepts/workloads/controllers/statefulset.md b/content/ko/docs/concepts/workloads/controllers/statefulset.md index 6b8299a0c3..3a1f784259 100644 --- a/content/ko/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ko/docs/concepts/workloads/controllers/statefulset.md @@ -107,7 +107,7 @@ spec: ## 파드 셀렉터 -스테이트풀셋의 `.spec.selector` 필드는 `.spec.template.metadata.labels` 레이블과 일치하도록 설정 해야 한다. 쿠버네티스 1.8 이전에서는 생략시에 `.spec.selector` 필드가 기본 설정 되었다. 1.8 과 이후 버전에서는 파드 셀렉터를 명시하지 않으면 스테이트풀셋 생성시 유효성 검증 오류가 발생하는 결과가 나오게 된다. +스테이트풀셋의 `.spec.selector` 필드는 `.spec.template.metadata.labels` 레이블과 일치하도록 설정해야 한다. 쿠버네티스 1.8 이전에서는 생략시에 `.spec.selector` 필드가 기본 설정 되었다. 1.8 과 이후 버전에서는 파드 셀렉터를 명시하지 않으면 스테이트풀셋 생성시 유효성 검증 오류가 발생하는 결과가 나오게 된다. ## 파드 신원 @@ -173,7 +173,7 @@ N개의 레플리카가 있는 스테이트풀셋은 스테이트풀셋에 있 파드의 `volumeMounts` 는 퍼시스턴트 볼륨 클레임과 관련된 퍼시스턴트 볼륨이 마운트 된다. 참고로, 파드 퍼시스턴트 볼륨 클레임과 관련된 퍼시스턴트 볼륨은 파드 또는 스테이트풀셋이 삭제되더라도 삭제되지 않는다. -이것은 반드시 수동으로 해야한다. +이것은 반드시 수동으로 해야 한다. ### 파드 이름 레이블 diff --git a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md index a941266230..5ed869fb57 100644 --- a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -76,4 +76,4 @@ TTL 컨트롤러는 쿠버네티스 리소스에 * [자동으로 잡 정리](/ko/docs/concepts/workloads/controllers/job/#완료된-잡을-자동으로-정리) -* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md) +* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md) diff --git a/content/ko/docs/concepts/workloads/pods/disruptions.md b/content/ko/docs/concepts/workloads/pods/disruptions.md index 02647adb70..bcfde559cb 100644 --- a/content/ko/docs/concepts/workloads/pods/disruptions.md +++ b/content/ko/docs/concepts/workloads/pods/disruptions.md @@ -103,7 +103,7 @@ PDB는 자발적 중단으로 일정 비율 이하로 떨어지지 않도록 보장할 수 있다. 클러스터 관리자와 호스팅 공급자는 직접적으로 파드나 디플로이먼트를 제거하는 대신 -[Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)로 +[Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#eviction-api)로 불리는 PodDisruptionBudget을 준수하는 도구를 이용해야 한다. 
예를 들어, `kubectl drain` 하위 명령을 사용하면 노드를 서비스 중단으로 표시할 수 diff --git a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md index 27e48b192f..9aa9e9bf51 100644 --- a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md @@ -76,7 +76,7 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 임시 컨테이너를 사용해서 문제를 해결하는 예시는 [임시 디버깅 컨테이너로 디버깅하기] -(/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)를 참조한다. +(/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)를 참조한다. ## 임시 컨테이너 API @@ -100,7 +100,7 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 "apiVersion": "v1", "kind": "EphemeralContainers", "metadata": { - "name": "example-pod" + "name": "example-pod" }, "ephemeralContainers": [{ "command": [ diff --git a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md index 1066e3eb83..aa154a4b42 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md @@ -38,8 +38,7 @@ ID([UID](/ko/docs/concepts/overview/working-with-objects/names/#uids))가 타임아웃 기간 후에 [삭제되도록 스케줄된다](#pod-garbage-collection). 파드는 자체적으로 자가 치유되지 않는다. 파드가 -{{< glossary_tooltip text="노드" term_id="node" >}}에 스케줄된 후에 실패하거나, -스케줄 작업 자체가 실패하면, 파드는 삭제된다. 마찬가지로, 파드는 +{{< glossary_tooltip text="노드" term_id="node" >}}에 스케줄된 후에 해당 노드가 실패하면, 파드는 삭제된다. 마찬가지로, 파드는 리소스 부족 또는 노드 유지 관리 작업으로 인해 축출되지 않는다. 쿠버네티스는 {{< glossary_tooltip term_id="controller" text="컨트롤러" >}}라 부르는 하이-레벨 추상화를 사용하여 diff --git a/content/ko/docs/contribute/participate/roles-and-responsibilities.md b/content/ko/docs/contribute/participate/roles-and-responsibilities.md index 354e9d669f..448502c0c3 100644 --- a/content/ko/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/ko/docs/contribute/participate/roles-and-responsibilities.md @@ -51,7 +51,7 @@ GitHub 계정을 가진 누구나 쿠버네티스에 기여할 수 있다. SIG D - 풀 리퀘스트에 `/lgtm` 코멘트를 사용하여 LGTM(looks good to me) 레이블을 추가한다. {{< note >}} - `/lgtm` 사용은 자동화를 트리거한다. 만약 구속력 없는 승인을 제공하려면, 단순히 "LGTM" 코멘트를 남기는 것도 좋다! + `/lgtm` 사용은 자동화를 트리거한다. 만약 구속력 없는 승인을 제공하려면, "LGTM" 코멘트를 남기는 것도 좋다! {{< /note >}} - `/hold` 코멘트를 사용하여 풀 리퀘스트에 대한 병합을 차단한다. diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md index 14fee6ee9c..c294f2efb5 100644 --- a/content/ko/docs/reference/_index.md +++ b/content/ko/docs/reference/_index.md @@ -18,7 +18,8 @@ content_type: concept ## API 레퍼런스 -* [쿠버네티스 API 레퍼런스 {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) +* [쿠버네티스 API 레퍼런스](/docs/reference/kubernetes-api/) +* [쿠버네티스 {{< param "version" >}}용 원페이지(One-page) API 레퍼런스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) * [쿠버네티스 API 사용](/ko/docs/reference/using-api/) - 쿠버네티스 API에 대한 개요 ## API 클라이언트 라이브러리 diff --git a/content/ko/docs/reference/access-authn-authz/authorization.md b/content/ko/docs/reference/access-authn-authz/authorization.md index b34f68e392..a9370ea74b 100644 --- a/content/ko/docs/reference/access-authn-authz/authorization.md +++ b/content/ko/docs/reference/access-authn-authz/authorization.md @@ -99,6 +99,9 @@ DELETE | delete(개별 리소스), deletecollection(리소스 모음) ```bash kubectl auth can-i create deployments --namespace dev ``` + +다음과 유사하게 출력된다. 
+ ``` yes ``` @@ -106,6 +109,9 @@ yes ```shell kubectl auth can-i create deployments --namespace prod ``` + +다음과 유사하게 출력된다. + ``` no ``` @@ -116,6 +122,9 @@ no ```bash kubectl auth can-i list secrets --namespace dev --as dave ``` + +다음과 유사하게 출력된다. + ``` no ``` @@ -145,7 +154,7 @@ EOF ``` 생성된 `SelfSubjectAccessReview` 는 다음과 같다. -``` +```yaml apiVersion: authorization.k8s.io/v1 kind: SelfSubjectAccessReview metadata: diff --git a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md index 6fa2c58a56..93683ae9b1 100644 --- a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md @@ -48,13 +48,15 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | 기능 | 디폴트 | 단계 | 도입 | 종료 | |---------|---------|-------|-------|-------| -| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | | | `APIListChunking` | `false` | 알파 | 1.8 | 1.8 | | `APIListChunking` | `true` | 베타 | 1.9 | | | `APIPriorityAndFairness` | `false` | 알파 | 1.17 | 1.19 | | `APIPriorityAndFairness` | `true` | 베타 | 1.20 | | -| `APIResponseCompression` | `false` | 알파 | 1.7 | | +| `APIResponseCompression` | `false` | 알파 | 1.7 | 1.15 | +| `APIResponseCompression` | `false` | 베타 | 1.16 | | | `APIServerIdentity` | `false` | 알파 | 1.20 | | +| `AllowInsecureBackendProxy` | `true` | 베타 | 1.17 | | +| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | | | `AppArmor` | `true` | 베타 | 1.4 | | | `BalanceAttachedNodeVolumes` | `false` | 알파 | 1.11 | | | `BoundServiceAccountTokenVolume` | `false` | 알파 | 1.13 | | @@ -77,7 +79,8 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `CSIMigrationGCE` | `false` | 알파 | 1.14 | 1.16 | | `CSIMigrationGCE` | `false` | 베타 | 1.17 | | | `CSIMigrationGCEComplete` | `false` | 알파 | 1.17 | | -| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | | +| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | 1.17 | +| `CSIMigrationOpenStack` | `true` | 베타 | 1.18 | | | `CSIMigrationOpenStackComplete` | `false` | 알파 | 1.17 | | | `CSIMigrationvSphere` | `false` | 베타 | 1.19 | | | `CSIMigrationvSphereComplete` | `false` | 베타 | 1.19 | | @@ -89,26 +92,23 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `ConfigurableFSGroupPolicy` | `true` | 베타 | 1.20 | | | `CronJobControllerV2` | `false` | 알파 | 1.20 | | | `CustomCPUCFSQuotaPeriod` | `false` | 알파 | 1.12 | | -| `CustomResourceDefaulting` | `false` | 알파| 1.15 | 1.15 | -| `CustomResourceDefaulting` | `true` | 베타 | 1.16 | | | `DefaultPodTopologySpread` | `false` | 알파 | 1.19 | 1.19 | | `DefaultPodTopologySpread` | `true` | 베타 | 1.20 | | | `DevicePlugins` | `false` | 알파 | 1.8 | 1.9 | | `DevicePlugins` | `true` | 베타 | 1.10 | | | `DisableAcceleratorUsageMetrics` | `false` | 알파 | 1.19 | 1.19 | -| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | 1.22 | +| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | | | `DownwardAPIHugePages` | `false` | 알파 | 1.20 | | -| `DryRun` | `false` | 알파 | 1.12 | 1.12 | -| `DryRun` | `true` | 베타 | 1.13 | | | `DynamicKubeletConfig` | `false` | 알파 | 1.4 | 1.10 | | `DynamicKubeletConfig` | `true` | 베타 | 1.11 | | +| `EfficientWatchResumption` | `false` | 알파 | 1.20 | | | `EndpointSlice` | `false` | 알파 | 1.16 | 1.16 | | `EndpointSlice` | `false` | 베타 | 1.17 | | | `EndpointSlice` | `true` | 베타 | 1.18 | | | `EndpointSliceNodeName` | `false` | 알파 | 1.20 | | | `EndpointSliceProxying` | `false` | 알파 | 1.18 | 1.18 | | `EndpointSliceProxying` | `true` | 베타 | 1.19 | | -| `EndpointSliceTerminating` | `false` | 알파 | 1.20 | | +| 
`EndpointSliceTerminatingCondition` | `false` | 알파 | 1.20 | | | `EphemeralContainers` | `false` | 알파 | 1.16 | | | `ExpandCSIVolumes` | `false` | 알파 | 1.14 | 1.15 | | `ExpandCSIVolumes` | `true` | 베타 | 1.16 | | @@ -119,19 +119,22 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `ExperimentalHostUserNamespaceDefaulting` | `false` | 베타 | 1.5 | | | `GenericEphemeralVolume` | `false` | 알파 | 1.19 | | | `GracefulNodeShutdown` | `false` | 알파 | 1.20 | | +| `HPAContainerMetrics` | `false` | 알파 | 1.20 | | | `HPAScaleToZero` | `false` | 알파 | 1.16 | | | `HugePageStorageMediumSize` | `false` | 알파 | 1.18 | 1.18 | | `HugePageStorageMediumSize` | `true` | 베타 | 1.19 | | -| `HyperVContainer` | `false` | 알파 | 1.10 | | +| `IPv6DualStack` | `false` | 알파 | 1.15 | | | `ImmutableEphemeralVolumes` | `false` | 알파 | 1.18 | 1.18 | | `ImmutableEphemeralVolumes` | `true` | 베타 | 1.19 | | -| `IPv6DualStack` | `false` | 알파 | 1.16 | | -| `LegacyNodeRoleBehavior` | `true` | 알파 | 1.16 | | +| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | | +| `KubeletPodResources` | `true` | 알파 | 1.13 | 1.14 | +| `KubeletPodResources` | `true` | 베타 | 1.15 | | +| `LegacyNodeRoleBehavior` | `false` | 알파 | 1.16 | 1.18 | +| `LegacyNodeRoleBehavior` | `true` | True | 1.19 | | | `LocalStorageCapacityIsolation` | `false` | 알파 | 1.7 | 1.9 | | `LocalStorageCapacityIsolation` | `true` | 베타 | 1.10 | | | `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | 알파 | 1.15 | | | `MixedProtocolLBService` | `false` | 알파 | 1.20 | | -| `MountContainers` | `false` | 알파 | 1.9 | | | `NodeDisruptionExclusion` | `false` | 알파 | 1.16 | 1.18 | | `NodeDisruptionExclusion` | `true` | 베타 | 1.19 | | | `NonPreemptingPriority` | `false` | 알파 | 1.15 | 1.18 | @@ -143,25 +146,27 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `ProcMountType` | `false` | 알파 | 1.12 | | | `QOSReserved` | `false` | 알파 | 1.11 | | | `RemainingItemCount` | `false` | 알파 | 1.15 | | +| `RemoveSelfLink` | `false` | 알파 | 1.16 | 1.19 | +| `RemoveSelfLink` | `true` | 베타 | 1.20 | | | `RootCAConfigMap` | `false` | 알파 | 1.13 | 1.19 | | `RootCAConfigMap` | `true` | 베타 | 1.20 | | | `RotateKubeletServerCertificate` | `false` | 알파 | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | 베타 | 1.12 | | | `RunAsGroup` | `true` | 베타 | 1.14 | | -| `RuntimeClass` | `false` | 알파 | 1.12 | 1.13 | -| `RuntimeClass` | `true` | 베타 | 1.14 | | | `SCTPSupport` | `false` | 알파 | 1.12 | 1.18 | | `SCTPSupport` | `true` | 베타 | 1.19 | | | `ServerSideApply` | `false` | 알파 | 1.14 | 1.15 | | `ServerSideApply` | `true` | 베타 | 1.16 | | -| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | | -| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | 1.20 | +| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | 1.19 | +| `ServiceAccountIssuerDiscovery` | `true` | 베타 | 1.20 | | +| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | | | `ServiceNodeExclusion` | `false` | 알파 | 1.8 | 1.18 | | `ServiceNodeExclusion` | `true` | 베타 | 1.19 | | | `ServiceTopology` | `false` | 알파 | 1.17 | | -| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | | | `SetHostnameAsFQDN` | `false` | 알파 | 1.19 | 1.19 | | `SetHostnameAsFQDN` | `true` | 베타 | 1.20 | | +| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | | +| `StorageVersionAPI` | `false` | 알파 | 1.20 | | | `StorageVersionHash` | `false` | 알파 | 1.14 | 1.14 | | `StorageVersionHash` | `true` | 베타 | 1.15 | | | `Sysctls` | `true` | 베타 | 1.11 | | @@ -170,11 +175,11 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `TopologyManager` | `true` | 베타 | 1.18 | | | `ValidateProxyRedirects` | `false` | 알파 | 1.12 | 1.13 | | 
`ValidateProxyRedirects` | `true` | 베타 | 1.14 | | -| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | | -| `WindowsGMSA` | `false` | 알파 | 1.14 | | -| `WindowsGMSA` | `true` | 베타 | 1.16 | | +| `WarningHeaders` | `true` | 베타 | 1.19 | | | `WinDSR` | `false` | 알파 | 1.14 | | -| `WinOverlay` | `false` | 알파 | 1.14 | | +| `WinOverlay` | `false` | 알파 | 1.14 | 1.19 | +| `WinOverlay` | `true` | 베타 | 1.20 | | +| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | | {{< /table >}} ### GA 또는 사용 중단된 기능을 위한 기능 게이트 @@ -228,6 +233,9 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `CustomResourceWebhookConversion` | `false` | 알파 | 1.13 | 1.14 | | `CustomResourceWebhookConversion` | `true` | 베타 | 1.15 | 1.15 | | `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - | +| `DryRun` | `false` | 알파 | 1.12 | 1.12 | +| `DryRun` | `true` | 베타 | 1.13 | 1.18 | +| `DryRun` | `true` | GA | 1.19 | - | | `DynamicAuditing` | `false` | 알파 | 1.13 | 1.18 | | `DynamicAuditing` | - | 사용중단 | 1.19 | - | | `DynamicProvisioningScheduling` | `false` | 알파 | 1.11 | 1.11 | @@ -247,23 +255,28 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `HugePages` | `false` | 알파 | 1.8 | 1.9 | | `HugePages` | `true` | 베타| 1.10 | 1.13 | | `HugePages` | `true` | GA | 1.14 | - | +| `HyperVContainer` | `false` | 알파 | 1.10 | 1.19 | +| `HyperVContainer` | `false` | 사용중단 | 1.20 | - | | `Initializers` | `false` | 알파 | 1.7 | 1.13 | | `Initializers` | - | 사용중단 | 1.14 | - | | `KubeletConfigFile` | `false` | 알파 | 1.8 | 1.9 | | `KubeletConfigFile` | - | 사용중단 | 1.10 | - | -| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | 1.20 | | `KubeletPluginsWatcher` | `false` | 알파 | 1.11 | 1.11 | | `KubeletPluginsWatcher` | `true` | 베타 | 1.12 | 1.12 | | `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | | `KubeletPodResources` | `false` | 알파 | 1.13 | 1.14 | | `KubeletPodResources` | `true` | 베타 | 1.15 | | | `KubeletPodResources` | `true` | GA | 1.20 | | +| `MountContainers` | `false` | 알파 | 1.9 | 1.16 | +| `MountContainers` | `false` | 사용중단 | 1.17 | - | | `MountPropagation` | `false` | 알파 | 1.8 | 1.9 | | `MountPropagation` | `true` | 베타 | 1.10 | 1.11 | | `MountPropagation` | `true` | GA | 1.12 | - | | `NodeLease` | `false` | 알파 | 1.12 | 1.13 | | `NodeLease` | `true` | 베타 | 1.14 | 1.16 | | `NodeLease` | `true` | GA | 1.17 | - | +| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 | +| `PVCProtection` | - | 사용중단 | 1.10 | - | | `PersistentLocalVolumes` | `false` | 알파 | 1.7 | 1.9 | | `PersistentLocalVolumes` | `true` | 베타 | 1.10 | 1.13 | | `PersistentLocalVolumes` | `true` | GA | 1.14 | - | @@ -276,8 +289,6 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `PodShareProcessNamespace` | `false` | 알파 | 1.10 | 1.11 | | `PodShareProcessNamespace` | `true` | 베타 | 1.12 | 1.16 | | `PodShareProcessNamespace` | `true` | GA | 1.17 | - | -| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 | -| `PVCProtection` | - | 사용중단 | 1.10 | - | | `RequestManagement` | `false` | 알파 | 1.15 | 1.16 | | `ResourceLimitsPriorityFunction` | `false` | 알파 | 1.9 | 1.18 | | `ResourceLimitsPriorityFunction` | - | 사용중단 | 1.19 | - | @@ -340,7 +351,7 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 | `VolumeScheduling` | `false` | 알파 | 1.9 | 1.9 | | `VolumeScheduling` | `true` | 베타 | 1.10 | 1.12 | | `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeSubpath` | `true` | GA | 1.13 | - | +| `VolumeSubpath` | `true` | GA | 1.10 | - | | `VolumeSubpathEnvExpansion` | `false` | 알파 | 1.14 | 1.14 | | `VolumeSubpathEnvExpansion` | `true` | 베타 | 1.15 | 1.16 | | `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | @@ -398,62 +409,131 @@ kubelet과 같은 
컴포넌트의 기능 게이트를 설정하려면, 기능 각 기능 게이트는 특정 기능을 활성화/비활성화하도록 설계되었다. +- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`) + 리소스를 청크(chunks)로 검색할 수 있도록 한다. +- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을 + 관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨) +- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다. +- `APIServerIdentity`: 클러스터의 각 API 서버에 ID를 할당한다. - `Accelerators`: 도커 사용 시 Nvidia GPU 지원 활성화한다. - `AdvancedAuditing`: [고급 감사](/docs/tasks/debug-application-cluster/audit/#advanced-audit) 기능을 활성화한다. -- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) 설정을 활성화한다. +- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) + 설정을 활성화한다. - `AllowExtTrafficLocalEndpoints`: 서비스가 외부 요청을 노드의 로컬 엔드포인트로 라우팅할 수 있도록 한다. +- `AllowInsecureBackendProxy`: 사용자가 파드 로그 요청에서 kubelet의 + TLS 확인을 건너뛸 수 있도록 한다. - `AnyVolumeDataSource`: {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}의 `DataSource` 로 모든 사용자 정의 리소스 사용을 활성화한다. -- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`) 리소스를 청크(chunks)로 검색할 수 있도록 한다. -- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을 관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨) -- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다. -- `APIServerIdentity`: 클러스터의 각 kube-apiserver에 ID를 할당한다. - `AppArmor`: 도커를 사용할 때 리눅스 노드에서 AppArmor 기반의 필수 접근 제어를 활성화한다. - 자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다. + 자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다. - `AttachVolumeLimit`: 볼륨 플러그인이 노드에 연결될 수 있는 볼륨 수에 대한 제한을 보고하도록 한다. - 자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다. + 자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다. - `BalanceAttachedNodeVolumes`: 스케줄링 시 균형 잡힌 리소스 할당을 위해 고려할 노드의 볼륨 수를 포함한다. 스케줄러가 결정을 내리는 동안 CPU, 메모리 사용률 및 볼륨 수가 더 가까운 노드가 선호된다. - `BlockVolume`: 파드에서 원시 블록 장치의 정의와 사용을 활성화한다. - 자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을 - 참고한다. + 자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을 + 참고한다. - `BoundServiceAccountTokenVolume`: ServiceAccountTokenVolumeProjection으로 구성된 프로젝션 볼륨을 사용하도록 서비스어카운트 볼륨을 - 마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여 - 확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로 - `kube-apiserver`를 시작하여 확장 토큰 기능을 끈다. - 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 - 확인한다. -- `ConfigurableFSGroupPolicy`: 파드에 볼륨을 마운트할 때 fsGroups에 대한 볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은 [파드에 대한 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을 참고한다. --`CronJobControllerV2` : {{< glossary_tooltip text="크론잡" term_id="cronjob" >}} 컨트롤러의 대체 구현을 사용한다. 그렇지 않으면 동일한 컨트롤러의 버전 1이 선택된다. 버전 2 컨트롤러는 실험적인 성능 향상을 제공한다. -- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다. [CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다. + 마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여 + 확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로 + `kube-apiserver`를 시작하여 확장 토큰 기능을 끈다. 
+ 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 + 확인한다. +- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다. + [CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다. - `CRIContainerLogRotation`: cri 컨테이너 런타임에 컨테이너 로그 로테이션을 활성화한다. -- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다. 자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원) 문서를 참고한다. -- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된 모든 로직을 활성화한다. +- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다. + 자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원) + 문서를 참고한다. +- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된 + 모든 로직을 활성화한다. - `CSIInlineVolume`: 파드에 대한 CSI 인라인 볼륨 지원을 활성화한다. -- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서 사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다. -- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에 EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로 폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고 EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다. -- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능 플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어 있어야 한다. -- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능 플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어 있어야 한다. -- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에 PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. CSIMigration과 CSIMigrationGCE 기능 플래그가 필요하다. -- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에 Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서 Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고 Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다. -- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅하는 shim 및 변환 로직을 사용한다. 노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. -- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. 
CSIMigration 및 CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다. +- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서 + 사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다. +- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을 + AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에 + EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로 + 폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다. +- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리 + 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS + 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. + 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고 + EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다. +- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을 + Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. + 노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 + AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 + 필요하다. +- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리 + 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 + Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 + 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능 + 플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어 + 있어야 한다. +- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을 + Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. + 노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 + AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 + 필요하다. +- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리 + 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 + Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로 + 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능 + 플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어 + 있어야 한다. +- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을 + GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에 + PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을 + 지원한다. CSIMigration 기능 플래그가 필요하다. +- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD + 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD + 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. + CSIMigration과 CSIMigrationGCE 기능 플래그가 활성화되고 PD CSI + 플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다. +- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을 + Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에 + Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 + Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. +- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서 + Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리 + 플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. + 클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고 + Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다. +- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 + 라우팅하는 shim 및 변환 로직을 사용한다. + 노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 + 인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다. +- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리 + 플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서 + vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. CSIMigration 및 + CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이 + 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다. - `CSINodeInfo`: csi.storage.k8s.io에서 CSINodeInfo API 오브젝트와 관련된 모든 로직을 활성화한다. - `CSIPersistentVolume`: [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) 호환 볼륨 플러그인을 통해 프로비저닝된 볼륨을 감지하고 마운트할 수 있다. -- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록 CSI 드라이버를 활성화한다. [토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다. -- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고 쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다. 
[스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다. +- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록 + CSI 드라이버를 활성화한다. + [토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다. +- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고 + 쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다. + [스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다. 자세한 내용은 [`csi` 볼륨 유형](/ko/docs/concepts/storage/volumes/#csi) 문서를 확인한다. -- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다. 이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과 권한 수정을 지원하는지 여부를 제어한다. -- `CustomCPUCFSQuotaPeriod`: 노드가 CPUCFSQuotaPeriod를 변경하도록 한다. +- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다. + 이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과 + 권한 수정을 지원하는지 여부를 제어한다. +- `ConfigurableFSGroupPolicy`: 사용자가 파드에 볼륨을 마운트할 때 fsGroups에 대한 + 볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은 + [파드의 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을 + 참고한다. +- `CronJobControllerV2`: {{< glossary_tooltip text="크론잡(CronJob)" term_id="cronjob" >}} + 컨트롤러의 대체 구현을 사용한다. 그렇지 않으면, + 동일한 컨트롤러의 버전 1이 선택된다. + 버전 2 컨트롤러는 실험적인 성능 향상을 제공한다. +- `CustomCPUCFSQuotaPeriod`: [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/)에서 + `cpuCFSQuotaPeriod` 를 노드가 변경할 수 있도록 한다. - `CustomPodDNS`: `dnsConfig` 속성을 사용하여 파드의 DNS 설정을 사용자 정의할 수 있다. 자세한 내용은 [파드의 DNS 설정](/ko/docs/concepts/services-networking/dns-pod-service/#pod-dns-config)을 확인한다. @@ -466,147 +546,248 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 - `CustomResourceWebhookConversion`: [커스텀리소스데피니션](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에서 생성된 리소스에 대해 웹 훅 기반의 변환을 활성화한다. 실행 중인 파드 문제를 해결한다. -- `DisableAcceleratorUsageMetrics`: [kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화). -- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) - 기반 리소스 프로비저닝을 활성화한다. - `DefaultPodTopologySpread`: `PodTopologySpread` 스케줄링 플러그인을 사용하여 [기본 분배](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/#내부-기본-제약)를 수행한다. -- `DownwardAPIHugePages`: 다운워드 API에서 hugepages 사용을 활성화한다. +- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) + 기반 리소스 프로비저닝을 활성화한다. +- `DisableAcceleratorUsageMetrics`: + [kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화). +- `DownwardAPIHugePages`: [다운워드 API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information)에서 + hugepages 사용을 활성화한다. - `DryRun`: 서버 측의 [dry run](/docs/reference/using-api/api-concepts/#dry-run) 요청을 요청을 활성화하여 커밋하지 않고 유효성 검사, 병합 및 변화를 테스트할 수 있다. - `DynamicAuditing`(*사용 중단됨*): v1.19 이전의 버전에서 동적 감사를 활성화하는 데 사용된다. -- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다. [kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다. -- `DynamicProvisioningScheduling`: 볼륨 스케줄을 인식하고 PV 프로비저닝을 처리하도록 기본 스케줄러를 확장한다. +- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다. + [kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다. +- `DynamicProvisioningScheduling`: 볼륨 토폴로지를 인식하고 PV 프로비저닝을 처리하도록 + 기본 스케줄러를 확장한다. 이 기능은 v1.12의 `VolumeScheduling` 기능으로 대체되었다. -- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의 [동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다. -- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초 시간 초과를 활성화한다. 
-- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의 동등성을 캐시할 수 있게 한다. -- `EphemeralContainers`: 파드를 실행하기 위한 {{< glossary_tooltip text="임시 컨테이너" - term_id="ephemeral-container" >}}를 추가할 수 있다. -- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다. [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다. --`ExecProbeTimeout` : kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다. 이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한 현재 수정된 결함에 의존하는 경우 존재한다. [준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다. -- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다. [사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다. -- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다. [퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다. -- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로 어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다. +- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의 + [동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다. +- `EfficientWatchResumption`: 스토리지에서 생성된 북마크(진행 + 알림) 이벤트를 사용자에게 전달할 수 있다. 이것은 감시 작업에만 + 적용된다. +- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초 + 시간 초과를 활성화한다. +- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의 + 동등성을 캐시할 수 있게 한다. +- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한 + 엔드포인트슬라이스(EndpointSlices)를 활성화한다. [엔드포인트슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. +- `EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다. +- `EndpointSliceProxying`: 활성화되면, 리눅스에서 실행되는 + kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를 + 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다. + [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. +- `EndpointSliceTerminatingCondition`: 엔드포인트슬라이스 `terminating` 및 `serving` + 조건 필드를 활성화한다. +- `EphemeralContainers`: 파드를 실행하기 위한 + {{< glossary_tooltip text="임시 컨테이너" term_id="ephemeral-container" >}}를 + 추가할 수 있다. +- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다. + [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다. +- `ExecProbeTimeout` : kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다. + 이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한 + 현재 수정된 결함에 의존하는 경우 존재한다. + [준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다. +- `ExpandCSIVolumes`: CSI 볼륨 확장을 활성화한다. +- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다. + [사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다. +- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다. + [퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다. +- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로 + 어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다. 이 기능은 v1.13부터 파드 우선 순위 및 선점으로 인해 사용 중단되었다. - `ExperimentalHostUserNamespaceDefaultingGate`: 사용자 네임스페이스를 호스트로 기본 활성화한다. 이것은 다른 호스트 네임스페이스, 호스트 마운트, 권한이 있는 컨테이너 또는 특정 비-네임스페이스(non-namespaced) 기능(예: `MKNODE`, `SYS_MODULE` 등)을 사용하는 컨테이너를 위한 것이다. 도커 데몬에서 사용자 네임스페이스 재 매핑이 활성화된 경우에만 활성화해야 한다. -- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한 - 엔드포인트 슬라이스를 활성화한다. [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. --`EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다. 
--`EndpointSliceTerminating` : 엔드포인트슬라이스 `terminating` 및 `serving` 조건 필드를 - 활성화한다. -- `EndpointSliceProxying`: 이 기능 게이트가 활성화되면, 리눅스에서 실행되는 - kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를 - 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다. - [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. -- `WindowsEndpointSliceProxying`: 이 기능 게이트가 활성화되면, 윈도우에서 실행되는 - kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를 - 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다. - [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. - `GCERegionalPersistentDisk`: GCE에서 지역 PD 기능을 활성화한다. -- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인 볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원 등에서 제공할 수 있음). [임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다. --`GracefulNodeShutdown` : kubelet에서 정상 종료를 지원한다. 시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행중인 파드를 정상적으로 종료하려고 시도한다. 자세한 내용은 [Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을 참조한다. -- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 할당 및 사용을 활성화한다. -- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 여러 크기를 지원한다. -- `HyperVContainer`: 윈도우 컨테이너를 위한 [Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container) 기능을 활성화한다. -- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해 `minReplicas` 를 0으로 설정한다. -- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을 변경할 수 없는(immutable) 것으로 표시할 수 있다. -- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서 kubelet 구성을 로드할 수 있다. - 자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을 참고한다. +- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인 + 볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원 + 등에서 제공할 수 있음). + [임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다. +- `GracefulNodeShutdown` : kubelet에서 정상 종료를 지원한다. + 시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행 중인 + 파드를 정상적으로 종료하려고 시도한다. 자세한 내용은 + [Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을 + 참조한다. +- `HPAContainerMetrics`: `HorizontalPodAutoscaler`를 활성화하여 대상 파드의 + 개별 컨테이너 메트릭을 기반으로 확장한다. +- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해 + `minReplicas` 를 0으로 설정한다. +- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 + 할당 및 사용을 활성화한다. +- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 + 여러 크기를 지원한다. +- `HyperVContainer`: 윈도우 컨테이너를 위한 + [Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container) + 기능을 활성화한다. +- `IPv6DualStack`: IPv6에 대한 [듀얼 스택](/ko/docs/concepts/services-networking/dual-stack/) + 지원을 활성화한다. +- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을 + 변경할 수 없는(immutable) 것으로 표시할 수 있다. +- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서 + kubelet 구성을 로드할 수 있다. + 자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을 + 참고한다. - `KubeletCredentialProviders`: 이미지 풀 자격 증명에 대해 kubelet exec 자격 증명 공급자를 활성화한다. - `KubeletPluginsWatcher`: kubelet이 [CSI 볼륨 드라이버](/ko/docs/concepts/storage/volumes/#csi)와 같은 플러그인을 검색할 수 있도록 프로브 기반 플러그인 감시자(watcher) 유틸리티를 사용한다. -- `KubeletPodResources`: kubelet의 파드 리소스 grpc 엔드포인트를 활성화한다. - 자세한 내용은 [장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)을 참고한다. 
-- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은 `NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여 `node-role.kubernetes.io/master` 레이블을 무시한다. -- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 `sizeLimit` 속성을 사용할 수 있게 한다. -- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에 `LocalStorageCapacityIsolation` 이 활성화되고 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는 프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir) 스토리지 사용을 모니터링하여 성능과 정확성을 향상시킨다. -- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜 사용을 활성화한다. -- `MountContainers`: 호스트의 유틸리티 컨테이너를 볼륨 마운터로 사용할 수 있다. +- `KubeletPodResources`: kubelet의 파드 리소스 gPRC 엔드포인트를 활성화한다. 자세한 내용은 + [장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)을 + 참고한다. +- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은 + `NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여 + `node-role.kubernetes.io/master` 레이블을 무시한다. +- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와 + [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 + `sizeLimit` 속성을 사용할 수 있게 한다. +- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에 + `LocalStorageCapacityIsolation` 이 활성화되고 + [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 + 백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는 + 프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir) + 스토리지 사용을 모니터링하여 성능과 정확성을 + 향상시킨다. +- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜 + 사용을 활성화한다. +- `MountContainers` (*사용 중단됨*): 호스트의 유틸리티 컨테이너를 볼륨 마운터로 + 사용할 수 있다. - `MountPropagation`: 한 컨테이너에서 다른 컨테이너 또는 파드로 마운트된 볼륨을 공유할 수 있다. 자세한 내용은 [마운트 전파(propagation)](/ko/docs/concepts/storage/volumes/#마운트-전파-propagation)을 참고한다. -- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption` 사용을 활성화한다. +- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption` + 사용을 활성화한다. - `NodeLease`: 새로운 리스(Lease) API가 노드 상태 신호로 사용될 수 있는 노드 하트비트(heartbeats)를 보고할 수 있게 한다. -- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 NonPreempting 옵션을 활성화한다. +- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 `preemptionPolicy` 필드를 활성화한다. +- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이 + 삭제되지 않도록 한다. - `PersistentLocalVolumes`: 파드에서 `local` 볼륨 유형의 사용을 활성화한다. `local` 볼륨을 요청하는 경우 파드 어피니티를 지정해야 한다. - `PodDisruptionBudget`: [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) 기능을 활성화한다. -- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/) 기능을 활성화한다. -- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를 기반으로 파드의 스케줄링 취소와 선점을 활성화한다. +- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/) + 기능을 활성화한다. +- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를 + 기반으로 파드의 스케줄링 취소와 선점을 활성화한다. - `PodReadinessGates`: 파드 준비성 평가를 확장하기 위해 `PodReadinessGate` 필드 설정을 활성화한다. 
자세한 내용은 [파드의 준비성 게이트](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)를 참고한다. - `PodShareProcessNamespace`: 파드에서 실행되는 컨테이너 간에 단일 프로세스 네임스페이스를 공유하기 위해 파드에서 `shareProcessNamespace` 설정을 활성화한다. 자세한 내용은 [파드의 컨테이너 간 프로세스 네임스페이스 공유](/docs/tasks/configure-pod-container/share-process-namespace/)에서 확인할 수 있다. -- `ProcMountType`: 컨테이너의 ProcMountType 제어를 활성화한다. -- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이 - 삭제되지 않도록 한다. -- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가 더 높은 QoS 수준에서 - 요청된 리소스로 파열되는 것을 방지한다(현재 메모리만 해당). +- `ProcMountType`: SecurityContext의 `procMount` 필드를 설정하여 + 컨테이너의 proc 타입의 마운트를 제어할 수 있다. +- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가 + 더 높은 QoS 수준에서 요청된 리소스로 파열되는 것을 방지한다 + (현재 메모리만 해당). +- `RemainingItemCount`: API 서버가 + [청크(chunking) 목록 요청](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks)에 대한 + 응답에서 남은 항목 수를 표시하도록 허용한다. +- `RemoveSelfLink`: ObjectMeta 및 ListMeta에서 `selfLink` 를 사용하지 않고 + 제거한다. - `ResourceLimitsPriorityFunction` (*사용 중단됨*): 입력 파드의 CPU 및 메모리 한도 중 하나 이상을 만족하는 노드에 가능한 최저 점수 1을 할당하는 스케줄러 우선 순위 기능을 활성화한다. 의도는 동일한 점수를 가진 노드 사이의 관계를 끊는 것이다. - `ResourceQuotaScopeSelectors`: 리소스 쿼터 범위 셀렉터를 활성화한다. -- `RootCAConfigMap`: 모든 네임 스페이스에 `kube-root-ca.crt`라는 {{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록 kube-controller-manager를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데 사용되는 CA 번들이 포함되어 있다. - 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 참조한다. +- `RootCAConfigMap`: 모든 네임스페이스에 `kube-root-ca.crt`라는 + {{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록 + `kube-controller-manager` 를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데 + 사용되는 CA 번들이 포함되어 있다. 자세한 내용은 + [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 + 참조한다. - `RotateKubeletClientCertificate`: kubelet에서 클라이언트 TLS 인증서의 로테이션을 활성화한다. 자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다. - `RotateKubeletServerCertificate`: kubelet에서 서버 TLS 인증서의 로테이션을 활성화한다. - 자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다. -- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를 활성화한다. -- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/) 기능을 활성화한다. -- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를 스케줄링할 수 있다. -- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서 _SCTP_ `protocol` 값을 활성화한다. -- `ServerSideApply`: API 서버에서 [SSA(Sever Side Apply)](/docs/reference/using-api/server-side-apply/) 경로를 활성화한다. -- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및 JWKS URL)를 활성화한다. 자세한 내용은 [파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을 참고한다. +- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를 + 활성화한다. +- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/) + 기능을 활성화한다. +- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를 + 스케줄링할 수 있다. +- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서 + _SCTP_ `protocol` 값을 활성화한다. +- `ServerSideApply`: API 서버에서 [SSA(Sever Side Apply)](/docs/reference/using-api/server-side-apply/) + 경로를 활성화한다. 
+- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및 + JWKS URL)를 활성화한다. 자세한 내용은 + [파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을 + 참고한다. - `ServiceAppProtocol`: 서비스와 엔드포인트에서 `AppProtocol` 필드를 활성화한다. -- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를 활성화한다. +- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를 + 활성화한다. - `ServiceLoadBalancerFinalizer`: 서비스 로드 밸런서에 대한 Finalizer 보호를 활성화한다. -- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를 제외할 수 있다. - "`alpha.service-controller.kubernetes.io/exclude-balancer`" 키 또는 `node.kubernetes.io/exclude-from-external-load-balancers` 로 레이블이 지정된 경우 노드를 제외할 수 있다. -- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수 있도록 한다. 자세한 내용은 [서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를 참고한다. -- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다. 자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다. -- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로 설정하는 기능을 활성화한다. [파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다. -- `StartupProbe`: kubelet에서 [스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가) 프로브를 활성화한다. +- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를 + 제외할 수 있다. "`node.kubernetes.io/exclude-from-external-load-balancers`"로 + 레이블이 지정된 경우 노드를 제외할 수 있다. +- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수 + 있도록 한다. 자세한 내용은 + [서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를 + 참고한다. +- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다. + 자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다. +- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로 + 설정하는 기능을 활성화한다. + [파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다. +- `StartupProbe`: kubelet에서 + [스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가) + 프로브를 활성화한다. - `StorageObjectInUseProtection`: 퍼시스턴트볼륨 또는 퍼시스턴트볼륨클레임 오브젝트가 여전히 사용 중인 경우 삭제를 연기한다. -- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록 허용한다. +- `StorageVersionAPI`: [스토리지 버전 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io)를 + 활성화한다. +- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록 + 허용한다. - `StreamingProxyRedirects`: 스트리밍 요청을 위해 백엔드(kubelet)에서 리디렉션을 가로채서 따르도록 API 서버에 지시한다. 스트리밍 요청의 예로는 `exec`, `attach` 및 `port-forward` 요청이 있다. - `SupportIPVSProxyMode`: IPVS를 사용하여 클러스터 내 서비스 로드 밸런싱을 제공한다. 자세한 내용은 [서비스 프록시](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시)를 참고한다. - `SupportPodPidsLimit`: 파드의 PID 제한을 지원한다. -- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다. `--system-reserved` 및 `--kube-reserved` 옵션의 `pid=` 매개 변수를 지정하여 지정된 수의 프로세스 ID가 시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록 할 수 있다. -- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널 파라미터(sysctl)를 지원한다. - 자세한 내용은 [sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다. -- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로 노드에서 파드를 축출할 수 있다. - 자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을 참고한다. -- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을 기반으로 자동 테인트 노드를 활성화한다. +- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다. 
+ `--system-reserved` 및 `--kube-reserved` 옵션의 `pid=` + 파라미터를 지정하여 지정된 수의 프로세스 ID가 + 시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록 + 할 수 있다. +- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널 + 파라미터(sysctl)를 지원한다. 자세한 내용은 + [sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다. +- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가 + 실행이 끝난 후 리소스를 정리하도록 + 허용한다. +- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로 + 노드에서 파드를 축출할 수 있다. + 자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을 + 참고한다. +- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을 + 기반으로 자동 테인트 노드를 활성화한다. - `TokenRequest`: 서비스 어카운트 리소스에서 `TokenRequest` 엔드포인트를 활성화한다. -- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해 서비스 어카운트 - 토큰을 파드에 주입할 수 있다. -- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스 할당을 조정하는 메커니즘을 활성화한다. [노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다. -- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가 실행이 끝난 후 리소스를 정리하도록 허용한다. +- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해 + 서비스 어카운트 토큰을 파드에 주입할 수 있다. +- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스 + 할당을 조정하는 메커니즘을 활성화한다. + [노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다. - `VolumePVCDataSource`: 기존 PVC를 데이터 소스로 지정하는 기능을 지원한다. - `VolumeScheduling`: 볼륨 토폴로지 인식 스케줄링을 활성화하고 퍼시스턴트볼륨클레임(PVC) 바인딩이 스케줄링 결정을 인식하도록 한다. 또한 `PersistentLocalVolumes` 기능 게이트와 함께 사용될 때 [`local`](/ko/docs/concepts/storage/volumes/#local) 볼륨 유형을 사용할 수 있다. - `VolumeSnapshotDataSource`: 볼륨 스냅샷 데이터 소스 지원을 활성화한다. -- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해 `subPathExpr` 필드를 활성화한다. +- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해 + `subPathExpr` 필드를 활성화한다. +- `WarningHeaders`: API 응답에서 경고 헤더를 보낼 수 있다. - `WatchBookmark`: 감시자 북마크(watch bookmark) 이벤트 지원을 활성화한다. -- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다. -- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서 애플리케이션을 실행할 수 있도록 지원한다. - 자세한 내용은 [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을 참고한다. - `WinDSR`: kube-proxy가 윈도우용 DSR 로드 밸런서를 생성할 수 있다. - `WinOverlay`: kube-proxy가 윈도우용 오버레이 모드에서 실행될 수 있도록 한다. +- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다. +- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서 + 애플리케이션을 실행할 수 있도록 지원한다. 자세한 내용은 + [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을 + 참고한다. +- `WindowsEndpointSliceProxying`: 활성화되면, 윈도우에서 실행되는 kube-proxy는 + 엔드포인트 대신 엔드포인트슬라이스를 기본 데이터 소스로 사용하여 + 확장성과 성능을 향상시킨다. + [엔드포인트 슬라이스 활성화하기](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. ## {{% heading "whatsnext" %}} diff --git a/content/ko/docs/reference/glossary/api-group.md b/content/ko/docs/reference/glossary/api-group.md index 0c27d3181e..96f32bd9ce 100644 --- a/content/ko/docs/reference/glossary/api-group.md +++ b/content/ko/docs/reference/glossary/api-group.md @@ -2,7 +2,7 @@ title: API 그룹(API Group) id: api-group date: 2019-09-02 -full_link: /ko/docs/concepts/overview/kubernetes-api/#api-groups +full_link: /ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙 short_description: > 쿠버네티스 API의 연관된 경로들의 집합. @@ -11,9 +11,9 @@ tags: - fundamental - architecture --- -쿠버네티스 API의 연관된 경로들의 집합. +쿠버네티스 API의 연관된 경로들의 집합. API 서버의 구성을 변경하여 각 API 그룹을 활성화하거나 비활성화할 수 있다. 특정 리소스에 대한 경로를 비활성화하거나 활성화할 수도 있다. 
API 그룹을 사용하면 쿠버네티스 API를 더 쉽게 확장할 수 있다. API 그룹은 REST 경로 및 직렬화된 오브젝트의 `apiVersion` 필드에 지정된다. -* 자세한 내용은 [API 그룹(/ko/docs/concepts/overview/kubernetes-api/#api-groups)을 참조한다. +* 자세한 내용은 [API 그룹(/ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙)을 참조한다. diff --git a/content/ko/docs/reference/glossary/cloud-controller-manager.md b/content/ko/docs/reference/glossary/cloud-controller-manager.md index 20121a9371..ebfa3d926c 100644 --- a/content/ko/docs/reference/glossary/cloud-controller-manager.md +++ b/content/ko/docs/reference/glossary/cloud-controller-manager.md @@ -5,7 +5,7 @@ date: 2018-04-12 full_link: /ko/docs/concepts/architecture/cloud-controller/ short_description: > 쿠버네티스를 타사 클라우드 공급자와 통합하는 컨트롤 플레인 컴포넌트. -aka: +aka: tags: - core-object - architecture @@ -13,7 +13,7 @@ tags: --- 클라우드별 컨트롤 로직을 포함하는 쿠버네티스 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}} 컴포넌트이다. -클라우트 컨트롤러 매니저를 통해 클러스터를 클라우드 공급자의 API에 연결하고, +클라우드 컨트롤러 매니저를 통해 클러스터를 클라우드 공급자의 API에 연결하고, 해당 클라우드 플랫폼과 상호 작용하는 컴포넌트와 클러스터와 상호 작용하는 컴포넌트를 분리할 수 있다. diff --git a/content/ko/docs/reference/glossary/persistent-volume-claim.md b/content/ko/docs/reference/glossary/persistent-volume-claim.md new file mode 100644 index 0000000000..122b754d23 --- /dev/null +++ b/content/ko/docs/reference/glossary/persistent-volume-claim.md @@ -0,0 +1,18 @@ +--- +title: 퍼시스턴트 볼륨 클레임(Persistent Volume Claim) +id: persistent-volume-claim +date: 2018-04-12 +full_link: /ko/docs/concepts/storage/persistent-volumes/ +short_description: > + 컨테이너의 볼륨으로 마운트될 수 있도록 퍼시스턴트볼륨(PersistentVolume)에 정의된 스토리지 리소스를 요청한다. + +aka: +tags: +- core-object +- storage +--- + {{< glossary_tooltip text="컨테이너" term_id="container" >}}의 볼륨으로 마운트될 수 있도록 {{< glossary_tooltip text="퍼시스턴트볼륨(PersistentVolume)" term_id="persistent-volume" >}}에 정의된 스토리지 리소스를 요청한다. + + + +스토리지의 양, 스토리지에 엑세스하는 방법(읽기 전용, 읽기 그리고/또는 쓰기) 및 재확보(보존, 재활용 혹은 삭제) 방법을 지정한다. 스토리지 자체에 관한 내용은 퍼시스턴트볼륨 오브젝트에 설명되어 있다. diff --git a/content/ko/docs/reference/glossary/quantity.md b/content/ko/docs/reference/glossary/quantity.md new file mode 100644 index 0000000000..450307841a --- /dev/null +++ b/content/ko/docs/reference/glossary/quantity.md @@ -0,0 +1,33 @@ +--- +title: 수량(Quantity) +id: quantity +date: 2018-08-07 +full_link: +short_description: > + SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현. + +aka: +tags: +- core-object +--- + SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현. + + + +수량은 SI 접미사가 포함된 간결한 정수 표기법을 통해서 작거나 큰 숫자를 표현한 것이다. +분수는 밀리(milli) 단위로 표시되는 반면, +큰 숫자는 킬로(kilo), 메가(mega), 또는 기가(giga) +단위로 표시할 수 있다. + + +예를 들어, 숫자 `1.5`는 `1500m`으로, 숫자 `1000`은 `1k`로, `1000000`은 +`1M`으로 표시할 수 있다. 또한, 이진 표기법 접미사도 명시 가능하므로, +숫자 2048은 `2Ki`로 표기될 수 있다. + +허용되는 10진수(10의 거듭 제곱) 단위는 `m` (밀리), `k` (킬로, 의도적인 소문자), +`M` (메가), `G` (기가), `T` (테라), `P` (페타), +`E` (엑사)가 있다. + +허용되는 2진수(2의 거듭 제곱) 단위는 `Ki` (키비), `Mi` (메비), `Gi` (기비), +`Ti` (테비), `Pi` (페비), `Ei` (엑비)가 있다. + diff --git a/content/ko/docs/reference/glossary/secret.md b/content/ko/docs/reference/glossary/secret.md new file mode 100644 index 0000000000..63637adc1a --- /dev/null +++ b/content/ko/docs/reference/glossary/secret.md @@ -0,0 +1,18 @@ +--- +title: 시크릿(Secret) +id: secret +date: 2018-04-12 +full_link: /ko/docs/concepts/configuration/secret/ +short_description: > + 비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다. + +aka: +tags: +- core-object +- security +--- + 비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다. + + + +민감한 정보를 사용하는 방식에 대해 더 세밀하게 제어할 수 있으며, 유휴 상태의 [암호화](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)를 포함하여 우발적인 노출 위험을 줄인다. 
{{< glossary_tooltip text="파드(Pod)" term_id="pod" >}}는 시크릿을 마운트된 볼륨의 파일로 참조하거나, 파드의 이미지를 풀링하는 kubelet이 시크릿을 참조한다. 시크릿은 기밀 데이터에 적합하고 [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)은 기밀이 아닌 데이터에 적합하다. diff --git a/content/ko/docs/reference/glossary/storage-class.md b/content/ko/docs/reference/glossary/storage-class.md new file mode 100644 index 0000000000..63bd655b68 --- /dev/null +++ b/content/ko/docs/reference/glossary/storage-class.md @@ -0,0 +1,20 @@ +--- +title: 스토리지 클래스(Storage Class) +id: storageclass +date: 2018-04-12 +full_link: /ko/docs/concepts/storage/storage-classes +short_description: > + 스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다. + +aka: +tags: +- core-object +- storage +--- + 스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다. + + + +스토리지 클래스는 서비스 품질 수준, 백업 정책 혹은 클러스터 관리자가 결정한 임의의 정책에 매핑할 수 있다. 각 스토리지클래스에는 클래스에 속한 {{< glossary_tooltip text="퍼시스턴트 볼륨(Persistent Volume)" term_id="persistent-volume" >}}을 동적으로 프로비저닝해야 할 때 사용되는 `provisioner`, `parameters` 및 `reclaimPolicy` 필드가 있다. 사용자는 스토리지클래스 객체의 이름을 사용하여 특정 클래스를 요청할 수 있다. + + diff --git a/content/ko/docs/reference/kubectl/cheatsheet.md b/content/ko/docs/reference/kubectl/cheatsheet.md index 2ac5f6076c..d5870bba30 100644 --- a/content/ko/docs/reference/kubectl/cheatsheet.md +++ b/content/ko/docs/reference/kubectl/cheatsheet.md @@ -191,7 +191,7 @@ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.ty && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" # 외부 도구 없이 디코딩된 시크릿 출력 -kubectl get secret ${secret_name} -o go-template='{{range $k,$v := .data}}{{$k}}={{$v|base64decode}}{{"\n"}}{{end}}' +kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}' # 파드에 의해 현재 사용되고 있는 모든 시크릿 목록 조회 kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq @@ -293,12 +293,12 @@ kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{pr ## 실행 중인 파드와 상호 작용 ```bash -kubectl logs my-pod # 파드 로그(stdout) 덤프 +kubectl logs my-pod # 파드 로그 덤프 (stdout) kubectl logs -l name=myLabel # name이 myLabel인 파드 로그 덤프 (stdout) -kubectl logs my-pod --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그(stdout) 덤프 -kubectl logs my-pod -c my-container # 파드 로그(stdout, 멀티-컨테이너 경우) 덤프 +kubectl logs my-pod --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그 덤프 (stdout) +kubectl logs my-pod -c my-container # 파드 로그 덤프 (stdout, 멀티-컨테이너 경우) kubectl logs -l name=myLabel -c my-container # name이 myLabel인 파드 로그 덤프 (stdout) -kubectl logs my-pod -c my-container --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그(stdout, 멀티-컨테이너 경우) 덤프 +kubectl logs my-pod -c my-container --previous # 컨테이너의 이전 인스턴스 생성에 대한 파드 로그 덤프 (stdout, 멀티-컨테이너 경우) kubectl logs -f my-pod # 실시간 스트림 파드 로그(stdout) kubectl logs -f my-pod -c my-container # 실시간 스트림 파드 로그(stdout, 멀티-컨테이너 경우) kubectl logs -f -l name=myLabel --all-containers # name이 myLabel인 모든 파드의 로그 스트리밍 (stdout) @@ -317,6 +317,18 @@ kubectl top pod POD_NAME --containers # 특정 파드와 해당 kubectl top pod POD_NAME --sort-by=cpu # 지정한 파드에 대한 메트릭을 표시하고 'cpu' 또는 'memory'별로 정렬 ``` +## 디플로이먼트, 서비스와 상호 작용 +```bash +kubectl logs deploy/my-deployment # 디플로이먼트에 대한 파드 로그 덤프 (단일-컨테이너 경우) +kubectl logs deploy/my-deployment -c my-container # 디플로이먼트에 대한 파드 로그 덤프 (멀티-컨테이너 경우) + +kubectl port-forward svc/my-service 5000 # 로컬 머신의 5000번 포트를 리스닝하고, my-service의 동일한(5000번) 포트로 전달 +kubectl port-forward svc/my-service 5000:my-service-port # 로컬 머신의 5000번 포트를 리스닝하고, my-service의 라는 이름을 가진 포트로 전달 + 
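# 아래 명령 역시 위와 동일한 my-deployment / my-service 이름을 가정한다
kubectl logs -f deploy/my-deployment                         # 디플로이먼트에 속한 파드 로그를 실시간으로 스트리밍 (단일-컨테이너 경우)
kubectl describe svc my-service                              # my-service의 엔드포인트와 포트 등 상세 정보 확인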
+kubectl port-forward deploy/my-deployment 5000:6000 # 로컬 머신의 5000번 포트를 리스닝하고, 에 의해 생성된 파드의 6000번 포트로 전달 +kubectl exec deploy/my-deployment -- ls # 에 의해 생성된 첫번째 파드의 첫번째 컨테이너에 명령어 실행 (단일- 또는 다중-컨테이너 경우) +``` + ## 노드, 클러스터와 상호 작용 ```bash @@ -334,7 +346,7 @@ kubectl taint nodes foo dedicated=special-user:NoSchedule ### 리소스 타입 -단축명, [API 그룹](/ko/docs/concepts/overview/kubernetes-api/#api-그룹)과 함께 지원되는 모든 리소스 유형들, 그것들의 [네임스페이스](/ko/docs/concepts/overview/working-with-objects/namespaces)와 [종류(Kind)](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects)를 나열: +단축명, [API 그룹](/ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙)과 함께 지원되는 모든 리소스 유형들, 그것들의 [네임스페이스](/ko/docs/concepts/overview/working-with-objects/namespaces)와 [종류(Kind)](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects)를 나열: ```bash kubectl api-resources diff --git a/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md index 1679059871..12b41b1d98 100644 --- a/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/ko/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -7,7 +7,7 @@ content_type: concept --- -당신은 쿠버네티스 커맨드 라인 도구인 kubectl을 사용하여 API 서버와 상호 작용할 수 있다. 만약 도커 커맨드 라인 도구에 익숙하다면 kubectl을 사용하는 것은 간단하다. 다음 섹션에서는 도커의 하위 명령을 보여주고 kubectl과 같은 명령어를 설명한다. +당신은 쿠버네티스 커맨드 라인 도구인 `kubectl`을 사용하여 API 서버와 상호 작용할 수 있다. 만약 도커 커맨드 라인 도구에 익숙하다면 `kubectl`을 사용하는 것은 간단하다. 다음 섹션에서는 도커의 하위 명령을 보여주고 `kubectl`과 같은 명령어를 설명한다. diff --git a/content/ko/docs/reference/tools.md b/content/ko/docs/reference/tools.md index ac0a5fb6c5..97206f3be7 100644 --- a/content/ko/docs/reference/tools.md +++ b/content/ko/docs/reference/tools.md @@ -20,9 +20,9 @@ content_type: concept ## Minikube -[`minikube`](https://minikube.sigs.k8s.io/docs/)는 개발과 테스팅 목적으로 하는 +[`minikube`](https://minikube.sigs.k8s.io/docs/)는 개발과 테스팅 목적으로 단일 노드 쿠버네티스 클러스터를 로컬 워크스테이션에서 -쉽게 구동시키는 도구이다. +실행하는 도구이다. ## 대시보드 diff --git a/content/ko/docs/reference/using-api/client-libraries.md b/content/ko/docs/reference/using-api/client-libraries.md index ae0404239d..f8c1cb91c8 100644 --- a/content/ko/docs/reference/using-api/client-libraries.md +++ b/content/ko/docs/reference/using-api/client-libraries.md @@ -65,12 +65,13 @@ API 호출 또는 요청/응답 타입을 직접 구현할 필요는 없다. 
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | +| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | -| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | +| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | | Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) | diff --git a/content/ko/docs/setup/best-practices/certificates.md b/content/ko/docs/setup/best-practices/certificates.md index 5595e0ac3d..71e16b7675 100644 --- a/content/ko/docs/setup/best-practices/certificates.md +++ b/content/ko/docs/setup/best-practices/certificates.md @@ -7,7 +7,7 @@ weight: 40 쿠버네티스는 TLS 위에 인증을 위해 PKI 인증서가 필요하다. -만약 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)으로 쿠버네티스를 설치했다면, 클러스터에 필요한 인증서는 자동으로 생성된다. +만약 [kubeadm](/docs/reference/setup-tools/kubeadm/)으로 쿠버네티스를 설치했다면, 클러스터에 필요한 인증서는 자동으로 생성된다. 또한 더 안전하게 자신이 소유한 인증서를 생성할 수 있다. 이를 테면, 개인키를 API 서버에 저장하지 않으므로 더 안전하게 보관할 수 있다. 이 페이지는 클러스터에 필요한 인증서를 설명한다. @@ -72,7 +72,7 @@ etcd 역시 클라이언트와 피어 간에 상호 TLS 인증을 구현한다. | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | | front-proxy-client | kubernetes-front-proxy-ca | | client | | -[1]: 클러스터에 접속한 다른 IP 또는 DNS 이름([kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) 이 사용하는 로드 밸런서 안정 IP 또는 DNS 이름, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, +[1]: 클러스터에 접속한 다른 IP 또는 DNS 이름([kubeadm](/docs/reference/setup-tools/kubeadm/) 이 사용하는 로드 밸런서 안정 IP 또는 DNS 이름, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`) `kind`는 하나 이상의 [x509 키 사용](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) 종류를 가진다. @@ -97,7 +97,7 @@ kubeadm 사용자만 해당: ### 인증서 파일 경로 -인증서는 권고하는 파일 경로에 존재해야 한다([kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)에서 사용되는 것처럼). 경로는 위치에 관계없이 주어진 파라미터를 사용하여 지정되야 한다. +인증서는 권고하는 파일 경로에 존재해야 한다([kubeadm](/docs/reference/setup-tools/kubeadm/)에서 사용되는 것처럼). 경로는 위치에 관계없이 주어진 파라미터를 사용하여 지정되야 한다. | 기본 CN | 권고되는 키 파일 경로 | 권고하는 인증서 파일 경로 | 명령어 | 키 파라미터 | 인증서 파라미터 | |------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------| @@ -155,5 +155,5 @@ KUBECONFIG= kubectl config use-context default-system |-------------------------|-------------------------|-----------------------------------------------------------------------| | admin.conf | kubectl | 클러스터 관리자를 설정한다. | | kubelet.conf | kubelet | 클러스터 각 노드를 위해 필요하다. | -| controller-manager.conf | kube-controller-manager | 반드시 매니페스트를 `manifests/kube-controller-manager.yaml`에 추가해야한다. 
| -| scheduler.conf | kube-scheduler | 반드시 매니페스트를 `manifests/kube-scheduler.yaml`에 추가해야한다. | +| controller-manager.conf | kube-controller-manager | 반드시 매니페스트를 `manifests/kube-controller-manager.yaml`에 추가해야 한다. | +| scheduler.conf | kube-scheduler | 반드시 매니페스트를 `manifests/kube-scheduler.yaml`에 추가해야 한다. | diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md index 4b749ab7e5..827b407b84 100644 --- a/content/ko/docs/setup/production-environment/container-runtimes.md +++ b/content/ko/docs/setup/production-environment/container-runtimes.md @@ -122,7 +122,7 @@ sudo apt-get update && sudo apt-get install -y containerd.io ```shell # containerd 구성 sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml +containerd config default | sudo tee /etc/containerd/config.toml ``` ```shell @@ -140,7 +140,7 @@ sudo apt-get update && sudo apt-get install -y containerd ```shell # containerd 구성 sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml +containerd config default | sudo tee /etc/containerd/config.toml ``` ```shell @@ -210,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io ```shell ## containerd 구성 sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml +containerd config default | sudo tee /etc/containerd/config.toml ``` ```shell @@ -219,11 +219,16 @@ sudo systemctl restart containerd ``` {{% /tab %}} {{% tab name="Windows (PowerShell)" %}} + +
+PowerShell 세션을 띄우고, `$Version` 환경 변수를 원하는 버전으로 설정(예: `$Version=1.4.3`)한 뒤, 다음 명령어를 실행한다. +
    + ```powershell # (containerd 설치) # containerd 다운로드 -cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz -cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz +curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz +tar.exe xvf .\containerd-windows-amd64.tar.gz ``` ```powershell @@ -236,7 +241,9 @@ cd $Env:ProgramFiles\containerd\ # - sandbox_image (쿠버네티스 pause 이미지) # - cni bin_dir 및 conf_dir locations Get-Content config.toml -``` + +# (선택 사항이지만, 강력히 권장됨) containerd를 Windows Defender 검사 예외에 추가 +Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe" ``` ```powershell # containerd 시작 @@ -420,7 +427,7 @@ CRI-O를 시작한다. ```shell sudo systemctl daemon-reload -sudo systemctl start crio +sudo systemctl enable crio --now ``` 자세한 사항은 [CRI-O 설치 가이드](https://github.com/cri-o/cri-o/blob/master/install.md)를 diff --git a/content/ko/docs/setup/production-environment/tools/kops.md b/content/ko/docs/setup/production-environment/tools/kops.md index 4ec5386d2f..dbea03b735 100644 --- a/content/ko/docs/setup/production-environment/tools/kops.md +++ b/content/ko/docs/setup/production-environment/tools/kops.md @@ -39,7 +39,7 @@ kops는 자동화된 프로비저닝 시스템인데, #### 설치 -[releases page](https://github.com/kubernetes/kops/releases)에서 kops를 다운로드 한다(소스코드로부터 빌드하는것도 역시 어렵지 않다). +[releases page](https://github.com/kubernetes/kops/releases)에서 kops를 다운로드한다(소스 코드로부터 빌드하는 것도 역시 편리하다). {{< tabs name="kops_installation" >}} {{% tab name="macOS" %}} @@ -51,7 +51,7 @@ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https:// | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64 ``` -특정 버전을 다운로드 받는다면 명령의 다음부분을 특정 kops 버전으로 변경한다. +특정 버전을 다운로드 받는다면 명령의 다음 부분을 특정 kops 버전으로 변경한다. ```shell $(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4) @@ -147,8 +147,8 @@ Route53 hosted zone은 서브도메인도 지원한다. 여러분의 hosted zone `dev` NS 레코드를 `example.com`에 생성한다. 만약 이것이 루트 도메인 네임이라면 이 NS 레코드들은 도메인 등록기관을 통해서 생성해야 한다(예를 들어, `example.com`는 `example.com`를 구매한 곳에서 설정 할 수 있다). -이 단계에서 문제가 되기 쉽다.(문제를 만드는 가장 큰 이유이다!) dig 툴을 실행해서 -클러스터 설정이 정확한지 한번 더 확인 한다. +route53 도메인 설정을 확인한다(문제를 만드는 가장 큰 이유이다!). dig 툴을 실행해서 +클러스터 설정이 정확한지 한번 더 확인한다. `dig NS dev.example.com` diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index 2e6252bf80..358274d143 100644 --- a/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -76,9 +76,7 @@ kind: ClusterConfiguration kubernetesVersion: v1.16.0 scheduler: extraArgs: - address: 0.0.0.0 + bind-address: 0.0.0.0 config: /home/johndoe/schedconfig.yaml kubeconfig: /home/johndoe/kubeconfig.yaml ``` - - diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md new file mode 100644 index 0000000000..39f31dd8af --- /dev/null +++ b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -0,0 +1,314 @@ +--- +title: kubeadm 설치하기 +content_type: task +weight: 10 +card: + name: setup + weight: 20 + title: kubeadm 설정 도구 설치 +--- + + + +이 페이지에서는 `kubeadm` 툴박스를 설치하는 방법을 보여준다. 
+이 설치 프로세스를 수행한 후 kubeadm으로 클러스터를 만드는 방법에 대한 자세한 내용은 [kubeadm을 사용하여 클러스터 생성하기](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) 페이지를 참고한다. + + + +## {{% heading "prerequisites" %}} + + +* 호환되는 리눅스 머신. 쿠버네티스 프로젝트는 데비안 기반 배포판, 레드햇 기반 배포판, 그리고 패키지 매니저를 사용하지 않는 경우에 대한 일반적인 가이드를 제공한다. +* 2 GB 이상의 램을 장착한 머신. (이 보다 작으면 사용자의 앱을 위한 공간이 거의 남지 않음) +* 2 이상의 CPU. +* 클러스터의 모든 머신에 걸친 전체 네트워크 연결. (공용 또는 사설 네트워크면 괜찮음) +* 모든 노드에 대해 고유한 호스트 이름, MAC 주소 및 product_uuid. 자세한 내용은 [여기](#verify-mac-address)를 참고한다. +* 컴퓨터의 특정 포트들 개방. 자세한 내용은 [여기](#check-required-ports)를 참고한다. +* 스왑의 비활성화. kubelet이 제대로 작동하게 하려면 **반드시** 스왑을 사용하지 않도록 설정한다. + + + + + +## MAC 주소 및 product_uuid가 모든 노드에 대해 고유한지 확인 {#verify-mac-address} +* 사용자는 `ip link` 또는 `ifconfig -a` 명령을 사용하여 네트워크 인터페이스의 MAC 주소를 확인할 수 있다. +* product_uuid는 `sudo cat /sys/class/dmi/id/product_uuid` 명령을 사용하여 확인할 수 있다. + +일부 가상 머신은 동일한 값을 가질 수 있지만 하드웨어 장치는 고유한 주소를 가질 +가능성이 높다. 쿠버네티스는 이러한 값을 사용하여 클러스터의 노드를 고유하게 식별한다. +이러한 값이 각 노드에 고유하지 않으면 설치 프로세스가 +[실패](https://github.com/kubernetes/kubeadm/issues/31)할 수 있다. + +## 네트워크 어댑터 확인 + +네트워크 어댑터가 두 개 이상이고, 쿠버네티스 컴포넌트가 디폴트 라우트(default route)에서 도달할 수 없는 +경우, 쿠버네티스 클러스터 주소가 적절한 어댑터를 통해 이동하도록 IP 경로를 추가하는 것이 좋다. + +## iptables가 브리지된 트래픽을 보게 하기 + +`br_netfilter` 모듈이 로드되었는지 확인한다. `lsmod | grep br_netfilter` 를 실행하면 된다. 명시적으로 로드하려면 `sudo modprobe br_netfilter` 를 실행한다. + +리눅스 노드의 iptables가 브리지된 트래픽을 올바르게 보기 위한 요구 사항으로, `sysctl` 구성에서 `net.bridge.bridge-nf-call-iptables` 가 1로 설정되어 있는지 확인해야 한다. 다음은 예시이다. + +```bash +cat <}}을 사용한다. + +{{< tabs name="container_runtime" >}} +{{% tab name="리눅스 노드" %}} + +기본적으로, 쿠버네티스는 +{{< glossary_tooltip term_id="cri" text="컨테이너 런타임 인터페이스">}}(CRI)를 +사용하여 사용자가 선택한 컨테이너 런타임과 인터페이스한다. + +런타임을 지정하지 않으면, kubeadm은 잘 알려진 유닉스 도메인 소켓 목록을 검색하여 +설치된 컨테이너 런타임을 자동으로 감지하려고 한다. +다음 표에는 컨테이너 런타임 및 관련 소켓 경로가 나열되어 있다. + +{{< table caption = "컨테이너 런타임과 소켓 경로" >}} +| 런타임 | 유닉스 도메인 소켓 경로 | +|------------|-----------------------------------| +| 도커 | `/var/run/dockershim.sock` | +| containerd | `/run/containerd/containerd.sock` | +| CRI-O | `/var/run/crio/crio.sock` | +{{< /table >}} + +
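노드에 셸로 접근할 수 있다는 가정 하에, 위 표의 소켓 경로가 실제로 존재하는지 다음과 같이 직접 확인해 볼 수 있다.

```bash
# 위 표에 나열된 유닉스 도메인 소켓이 노드에 존재하는지 확인한다
for s in /var/run/dockershim.sock /run/containerd/containerd.sock /var/run/crio/crio.sock; do
  if [ -S "$s" ]; then
    echo "런타임 소켓 감지됨: $s"
  fi
done
```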
    +도커와 containerd가 모두 감지되면 도커가 우선시된다. 이것이 필요한 이유는 도커 18.09에서 +도커만 설치한 경우에도 containerd와 함께 제공되므로 둘 다 감지될 수 있기 +때문이다. +다른 두 개 이상의 런타임이 감지되면, kubeadm은 오류와 함께 종료된다. + +kubelet은 빌트인 `dockershim` CRI 구현을 통해 도커와 통합된다. + +자세한 내용은 [컨테이너 런타임](/ko/docs/setup/production-environment/container-runtimes/)을 +참고한다. +{{% /tab %}} +{{% tab name="다른 운영 체제" %}} +기본적으로, kubeadm은 컨테이너 런타임으로 {{< glossary_tooltip term_id="docker" >}}를 사용한다. +kubelet은 빌트인 `dockershim` CRI 구현을 통해 도커와 통합된다. + +자세한 내용은 [컨테이너 런타임](/ko/docs/setup/production-environment/container-runtimes/)을 +참고한다. +{{% /tab %}} +{{< /tabs >}} + + +## kubeadm, kubelet 및 kubectl 설치 + +모든 머신에 다음 패키지들을 설치한다. + +* `kubeadm`: 클러스터를 부트스트랩하는 명령이다. + +* `kubelet`: 클러스터의 모든 머신에서 실행되는 파드와 컨테이너 시작과 + 같은 작업을 수행하는 컴포넌트이다. + +* `kubectl`: 클러스터와 통신하기 위한 커맨드 라인 유틸리티이다. + +kubeadm은 `kubelet` 또는 `kubectl` 을 설치하거나 관리하지 **않으므로**, kubeadm이 +설치하려는 쿠버네티스 컨트롤 플레인의 버전과 일치하는지 +확인해야 한다. 그렇지 않으면, 예상치 못한 버그 동작으로 이어질 수 있는 +버전 차이(skew)가 발생할 위험이 있다. 그러나, kubelet과 컨트롤 플레인 사이에 _하나의_ +마이너 버전 차이가 지원되지만, kubelet 버전은 API 서버 버전 보다 +높을 수 없다. 예를 들어, 1.7.0 버전의 kubelet은 1.8.0 API 서버와 완전히 호환되어야 하지만, +그 반대의 경우는 아니다. + +`kubectl` 설치에 대한 정보는 [kubectl 설치 및 설정](/ko/docs/tasks/tools/install-kubectl/)을 참고한다. + +{{< warning >}} +이 지침은 모든 시스템 업그레이드에서 모든 쿠버네티스 패키지를 제외한다. +이는 kubeadm 및 쿠버네티스를 +[업그레이드 하는 데 특별한 주의](/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)가 필요하기 때문이다. +{{}} + +버전 차이에 대한 자세한 내용은 다음을 참고한다. + +* 쿠버네티스 [버전 및 버전-차이 정책](/docs/setup/release/version-skew-policy/) +* Kubeadm 관련 [버전 차이 정책](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy) + +{{< tabs name="k8s_install" >}} +{{% tab name="데비안 기반 배포판" %}} +```bash +sudo apt-get update && sudo apt-get install -y apt-transport-https curl +curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - +cat <}} +`DOWNLOAD_DIR` 변수는 쓰기 가능한 디렉터리로 설정되어야 한다. +Flatcar Container Linux를 실행 중인 경우, `DOWNLOAD_DIR=/opt/bin` 을 설정한다. +{{< /note >}} + +```bash +DOWNLOAD_DIR=/usr/local/bin +sudo mkdir -p $DOWNLOAD_DIR +``` + +crictl 설치(kubeadm / Kubelet 컨테이너 런타임 인터페이스(CRI)에 필요) + +```bash +CRICTL_VERSION="v1.17.0" +curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz +``` + +`kubeadm`, `kubelet`, `kubectl` 설치 및 `kubelet` systemd 서비스 추가 + +```bash +RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)" +cd $DOWNLOAD_DIR +sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl} +sudo chmod +x {kubeadm,kubelet,kubectl} + +RELEASE_VERSION="v0.4.0" +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service +sudo mkdir -p /etc/systemd/system/kubelet.service.d +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +``` + +`kubelet` 활성화 및 시작 + +```bash +systemctl enable --now kubelet +``` + +{{< note >}} +Flatcar Container Linux 배포판은 `/usr` 디렉터리를 읽기 전용 파일시스템으로 마운트한다. +클러스터를 부트스트랩하기 전에, 쓰기 가능한 디렉터리를 구성하기 위한 추가 단계를 수행해야 한다. 
+쓰기 가능한 디렉터리를 설정하는 방법을 알아 보려면 [Kubeadm 문제 해결 가이드](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/)를 참고한다. +{{< /note >}} +{{% /tab %}} +{{< /tabs >}} + + +kubelet은 이제 kubeadm이 수행할 작업을 알려 줄 때까지 크래시루프(crashloop) 상태로 +기다려야 하므로 몇 초마다 다시 시작된다. + +## 컨트롤 플레인 노드에서 kubelet이 사용하는 cgroup 드라이버 구성 + +도커를 사용할 때, kubeadm은 kubelet 용 cgroup 드라이버를 자동으로 감지하여 +런타임 중에 `/var/lib/kubelet/config.yaml` 파일에 설정한다. + +다른 CRI를 사용하는 경우, 다음과 같이 `cgroupDriver` 값을 `kubeadm init` 에 전달해야 한다. + +```yaml +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +cgroupDriver: +``` + +자세한 내용은 [구성 파일과 함께 kubeadm init 사용](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file)을 참고한다. + +`cgroupfs` 가 이미 kubelet의 기본값이기 때문에, 사용자의 +CRI cgroup 드라이버가 `cgroupfs` 가 아닌 **경우에만** 위와 같이 설정해야 한다. + +{{< note >}} +`--cgroup-driver` 플래그가 kubelet에 의해 사용 중단되었으므로, `/var/lib/kubelet/kubeadm-flags.env` +또는 `/etc/default/kubelet`(RPM에 대해서는 `/etc/sysconfig/kubelet`)에 있는 경우, 그것을 제거하고 대신 KubeletConfiguration을 +사용한다(기본적으로 `/var/lib/kubelet/config.yaml` 에 저장됨). +{{< /note >}} + +CRI-O 및 containerd와 같은 다른 컨테이너 런타임에 대한 cgroup 드라이버의 +자동 감지에 대한 작업이 진행 중이다. + + +## 문제 해결 + +kubeadm에 문제가 있는 경우, [문제 해결 문서](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)를 참고한다. + +## {{% heading "whatsnext" %}} + + +* [kubeadm을 사용하여 클러스터 생성](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) diff --git a/content/ko/docs/setup/production-environment/tools/kubespray.md b/content/ko/docs/setup/production-environment/tools/kubespray.md index f068061a7d..d838845c61 100644 --- a/content/ko/docs/setup/production-environment/tools/kubespray.md +++ b/content/ko/docs/setup/production-environment/tools/kubespray.md @@ -22,7 +22,7 @@ Kubespray는 [Ansible](https://docs.ansible.com/) 플레이북, [인벤토리](h * Flatcar Container Linux by Kinvolk * 지속적인 통합 (CI) 테스트 -클러스터를 설치해 줄 도구로 유스케이스와 가장 잘 맞는 것을 고르고 싶다면, kubespray를 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [kops](/ko/docs/setup/production-environment/tools/kops/)와 [비교한 글](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)을 읽어보자. +클러스터를 설치해 줄 도구로 유스케이스와 가장 잘 맞는 것을 고르고 싶다면, kubespray를 [kubeadm](/docs/reference/setup-tools/kubeadm/), [kops](/ko/docs/setup/production-environment/tools/kops/)와 [비교한 글](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)을 읽어보자. diff --git a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 0ea3dc0ce9..bd4a3503e8 100644 --- a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -12,7 +12,7 @@ weight: 65 ## 쿠버네티스의 윈도우 컨테이너 -쿠버네티스에서 윈도우 컨테이너 오케스트레이션을 활성화하려면, 기존 리눅스 클러스터에 윈도우 노드를 포함하기만 하면 된다. 쿠버네티스의 {{< glossary_tooltip text="파드" term_id="pod" >}}에서 윈도우 컨테이너를 스케줄링하는 것은 리눅스 기반 컨테이너를 스케줄링하는 것만큼 간단하고 쉽다. +쿠버네티스에서 윈도우 컨테이너 오케스트레이션을 활성화하려면, 기존 리눅스 클러스터에 윈도우 노드를 포함한다. 쿠버네티스의 {{< glossary_tooltip text="파드" term_id="pod" >}}에서 윈도우 컨테이너를 스케줄링하는 것은 리눅스 기반 컨테이너를 스케줄링하는 것과 유사하다. 윈도우 컨테이너를 실행하려면, 쿠버네티스 클러스터에 리눅스를 실행하는 컨트롤 플레인 노드와 사용자의 워크로드 요구에 따라 윈도우 또는 리눅스를 실행하는 워커가 있는 여러 운영 체제가 포함되어 있어야 한다. 
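예를 들어 윈도우 노드가 이미 조인되어 있는 클러스터라면, 다음과 같이 `kubernetes.io/os: windows` 노드 셀렉터를 지정하여 파드가 윈도우 노드에만 스케줄링되도록 할 수 있다. 아래의 파드 이름과 컨테이너 이미지는 예시이다.

```bash
# 윈도우 노드에만 스케줄링되도록 노드 셀렉터를 지정한 예시 파드를 생성한다
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: windows-sample
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
EOF
```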
윈도우 서버 2019는 윈도우에서 [쿠버네티스 노드](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)를 활성화하는 유일한 윈도우 운영 체제이다(kubelet, [컨테이너 런타임](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/deploy-containers/containerd) 및 kube-proxy 포함). 윈도우 배포 채널에 대한 자세한 설명은 [Microsoft 문서](https://docs.microsoft.com/ko-kr/windows-server/get-started-19/servicing-channels-19)를 참고한다. @@ -303,8 +303,9 @@ CSI 노드 플러그인(특히 블록 디바이스 또는 공유 파일시스템 다음 네트워킹 기능은 윈도우 노드에서 지원되지 않는다. * 윈도우 파드에서는 호스트 네트워킹 모드를 사용할 수 없다. -* 노드 자체에서 로컬 NodePort 접근은 실패한다. (다른 노드 또는 외부 클라이언트에서 작동) +* 노드 자체에서 로컬 NodePort 접근은 실패한다. (다른 노드 또는 외부 클라이언트에서는 가능) * 노드에서 서비스 VIP에 접근하는 것은 향후 윈도우 서버 릴리스에서 사용할 수 있다. +* 한 서비스는 최대 64개의 백엔드 파드 또는 고유한 목적지 IP를 지원할 수 있다. * kube-proxy의 오버레이 네트워킹 지원은 알파 릴리스이다. 또한 윈도우 서버 2019에 [KB4482887](https://support.microsoft.com/ko-kr/help/4482887/windows-10-update-kb4482887)을 설치해야 한다. * 로컬 트래픽 정책 및 DSR 모드 * l2bridge, l2tunnel 또는 오버레이 네트워크에 연결된 윈도우 컨테이너는 IPv6 스택을 통한 통신을 지원하지 않는다. 이러한 네트워크 드라이버가 IPv6 주소를 사용하고 kubelet, kube-proxy 및 CNI 플러그인에서 후속 쿠버네티스 작업을 사용할 수 있도록 하는데 필요한 뛰어난 윈도우 플랫폼 작업이 있다. @@ -544,7 +545,7 @@ PodSecurityContext 필드는 윈도우에서 작동하지 않는다. 참조를 1. `start.ps1`을 시작한 후, flanneld가 "Waiting for the Network to be created"에서 멈춘다. - 이 [조사 중인 이슈](https://github.com/coreos/flannel/issues/1066)에 대한 수많은 보고가 있다. 플란넬 네트워크의 관리 IP가 설정될 때 타이밍 이슈일 가능성이 높다. 해결 방법은 간단히 start.ps1을 다시 시작하거나 다음과 같이 수동으로 다시 시작하는 것이다. + 이 [이슈](https://github.com/coreos/flannel/issues/1066)에 대한 수많은 보고가 있다. 플란넬 네트워크의 관리 IP가 설정될 때의 타이밍 이슈일 가능성이 높다. 해결 방법은 start.ps1을 다시 시작하거나 다음과 같이 수동으로 다시 시작하는 것이다. ```powershell PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "") diff --git a/content/ko/docs/setup/release/notes.md b/content/ko/docs/setup/release/notes.md index f833d27094..7bcbb39489 100644 --- a/content/ko/docs/setup/release/notes.md +++ b/content/ko/docs/setup/release/notes.md @@ -92,7 +92,7 @@ We expect this implementation to progress from alpha to beta and GA in coming re ### go1.15.5 -go1.15.5 has been integrated to Kubernets project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776). +go1.15.5 has been integrated to Kubernetes project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776). ### CSI 볼륨 스냅샷(CSI Volume Snapshot)이 안정 기능으로 전환 @@ -190,7 +190,7 @@ Currently, cadvisor_stats_provider provides AcceleratorStats but cri_stats_provi PodSubnet validates against the corresponding cluster "--node-cidr-mask-size" of the kube-controller-manager, it fail if the values are not compatible. kubeadm no longer sets the node-mask automatically on IPv6 deployments, you must check that your IPv6 service subnet mask is compatible with the default node mask /64 or set it accordenly. Previously, for IPv6, if the podSubnet had a mask lower than /112, kubeadm calculated a node-mask to be multiple of eight and splitting the available bits to maximise the number used for nodes. ([#95723](https://github.com/kubernetes/kubernetes/pull/95723), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle] -- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration infromation available in --help description for --exprimental-patches. 
([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123)) +- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration information available in --help description for --experimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123)) - Windows hyper-v container featuregate is deprecated in 1.20 and will be removed in 1.21 ([#95505](https://github.com/kubernetes/kubernetes/pull/95505), [@wawa0210](https://github.com/wawa0210)) [SIG Node and Windows] - The kube-apiserver ability to serve on an insecure port, deprecated since v1.10, has been removed. The insecure address flags `--address` and `--insecure-bind-address` have no effect in kube-apiserver and will be removed in v1.24. The insecure port flags `--port` and `--insecure-port` may only be set to 0 and will be removed in v1.24. ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42), [SIG API Machinery, Node, Testing]) - Add dual-stack Services (alpha). This is a BREAKING CHANGE to an alpha API. diff --git a/content/ko/docs/setup/release/version-skew-policy.md b/content/ko/docs/setup/release/version-skew-policy.md index feb675f8ba..76ff7504fd 100644 --- a/content/ko/docs/setup/release/version-skew-policy.md +++ b/content/ko/docs/setup/release/version-skew-policy.md @@ -1,11 +1,18 @@ --- + + + + + + + title: 쿠버네티스 버전 및 버전 차이(skew) 지원 정책 content_type: concept weight: 30 --- -이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다. +이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다. 특정 클러스터 배포 도구는 버전 차이에 대한 추가적인 제한을 설정할 수 있다. @@ -19,14 +26,14 @@ weight: 30 쿠버네티스 프로젝트는 최근 세 개의 마이너 릴리스 ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}) 에 대한 릴리스 분기를 유지한다. 쿠버네티스 1.19 이상은 약 1년간의 패치 지원을 받는다. 쿠버네티스 1.18 이상은 약 9개월의 패치 지원을 받는다. -보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다. +보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다. 패치 릴리스는 각 브랜치별로 [정기적인 주기](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence)로 제공하며, 필요한 경우 추가 긴급 릴리스도 추가한다. [릴리스 관리자](https://git.k8s.io/sig-release/release-managers.md) 그룹이 이러한 결정 권한을 가진다. 자세한 내용은 쿠버네티스 [패치 릴리스](https://git.k8s.io/sig-release/releases/patch-releases.md) 페이지를 참조한다. -## 지원되는 버전 차이 +## 지원되는 버전 차이 ### kube-apiserver @@ -133,6 +140,11 @@ HA 클러스터의 `kube-apiserver` 인스턴스 간에 버전 차이가 있으 필요에 따라서 `kubelet` 인스턴스를 **{{< skew latestVersion >}}** 으로 업그레이드할 수 있다(또는 **{{< skew prevMinorVersion >}}** 아니면 **{{< skew oldestMinorVersion >}}** 으로 유지할 수 있음). +{{< note >}} +`kubelet` 마이너 버전 업그레이드를 수행하기 전에, 해당 노드의 파드를 [드레인(drain)](/docs/tasks/administer-cluster/safely-drain-node/)해야 한다. +인플레이스(In-place) 마이너 버전 `kubelet` 업그레이드는 지원되지 않는다. +{{}} + {{< warning >}} 클러스터 안의 `kubelet` 인스턴스를 `kube-apiserver`의 버전보다 2단계 낮은 버전으로 실행하는 것을 권장하지 않는다: diff --git a/content/ko/docs/tasks/access-application-cluster/access-cluster.md b/content/ko/docs/tasks/access-application-cluster/access-cluster.md index e78046f0ee..43a6a82343 100644 --- a/content/ko/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/ko/docs/tasks/access-application-cluster/access-cluster.md @@ -280,7 +280,7 @@ heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/servi #### 수작업으로 apiserver proxy URL을 구축 -위에서 언급한 것처럼 서비스의 proxy URL을 검색하는데 `kubectl cluster-info` 커맨드를 사용할 수 있다. 
서비스 endpoint, 접미사, 매개변수를 포함하는 proxy URL을 생성하려면 단순하게 해당 서비스에 +위에서 언급한 것처럼 서비스의 proxy URL을 검색하는데 `kubectl cluster-info` 커맨드를 사용할 수 있다. 서비스 endpoint, 접미사, 매개변수를 포함하는 proxy URL을 생성하려면 해당 서비스에 `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy` 형식의 proxy URL을 덧붙인다. 당신이 port에 이름을 지정하지 않았다면 URL에 *port_name* 을 지정할 필요는 없다. diff --git a/content/ko/docs/tasks/administer-cluster/access-cluster-api.md b/content/ko/docs/tasks/administer-cluster/access-cluster-api.md index 123c09d2d2..b2ea227718 100644 --- a/content/ko/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/ko/docs/tasks/administer-cluster/access-cluster-api.md @@ -216,7 +216,7 @@ for i in ret.items: #### Java 클라이언트 {#java-client} -* [Java 클라이언트](https://github.com/kubernetes-client/java)를 설치하려면, 다음을 실행한다. +[Java 클라이언트](https://github.com/kubernetes-client/java)를 설치하려면, 다음을 실행한다. ```shell # java 라이브러리를 클론한다 @@ -353,99 +353,6 @@ exampleWithKubeConfig = do >>= print ``` +## {{% heading "whatsnext" %}} -### 파드 내에서 API에 접근 {#accessing-the-api-from-within-a-pod} - -파드 내에서 API에 접근할 때, API 서버를 찾아 인증하는 것은 -위에서 설명한 외부 클라이언트 사례와 약간 다르다. - -파드에서 쿠버네티스 API를 사용하는 가장 쉬운 방법은 -공식 [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이다. 이러한 -라이브러리는 API 서버를 자동으로 감지하고 인증할 수 있다. - -#### 공식 클라이언트 라이브러리 사용 - -파드 내에서, 쿠버네티스 API에 연결하는 권장 방법은 다음과 같다. - - - Go 클라이언트의 경우, 공식 [Go 클라이언트 라이브러리](https://github.com/kubernetes/client-go/)를 사용한다. - `rest.InClusterConfig()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. - [여기 예제](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)를 참고한다. - - - Python 클라이언트의 경우, 공식 [Python 클라이언트 라이브러리](https://github.com/kubernetes-client/python/)를 사용한다. - `config.load_incluster_config()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. - [여기 예제](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)를 참고한다. - - - 사용할 수 있는 다른 라이브러리가 많이 있다. [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 페이지를 참고한다. - -각각의 경우, 파드의 서비스 어카운트 자격 증명은 API 서버와 -안전하게 통신하는 데 사용된다. - -#### REST API에 직접 접근 - -파드에서 실행되는 동안, 쿠버네티스 apiserver는 `default` 네임스페이스에서 `kubernetes`라는 -서비스를 통해 접근할 수 있다. 따라서, 파드는 `kubernetes.default.svc` -호스트 이름을 사용하여 API 서버를 쿼리할 수 있다. 공식 클라이언트 라이브러리는 -이를 자동으로 수행한다. - -API 서버를 인증하는 권장 방법은 [서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/) -자격 증명을 사용하는 것이다. 기본적으로, 파드는 -서비스 어카운트와 연결되어 있으며, 해당 서비스 어카운트에 대한 자격 증명(토큰)은 -해당 파드에 있는 각 컨테이너의 파일시스템 트리의 -`/var/run/secrets/kubernetes.io/serviceaccount/token` 에 있다. - -사용 가능한 경우, 인증서 번들은 각 컨테이너의 -파일시스템 트리의 `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 에 배치되며, -API 서버의 제공 인증서를 확인하는 데 사용해야 한다. - -마지막으로, 네임스페이스가 지정된 API 작업에 사용되는 기본 네임스페이스는 각 컨테이너의 -`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 에 있는 파일에 배치된다. - -#### kubectl 프록시 사용 - -공식 클라이언트 라이브러리 없이 API를 쿼리하려면, 파드에서 -새 사이드카 컨테이너의 [명령](/ko/docs/tasks/inject-data-application/define-command-argument-container/)으로 -`kubectl proxy` 를 실행할 수 있다. 이런 식으로, `kubectl proxy` 는 -API를 인증하고 이를 파드의 `localhost` 인터페이스에 노출시켜서, 파드의 -다른 컨테이너가 직접 사용할 수 있도록 한다. - -#### 프록시를 사용하지 않고 접근 - -인증 토큰을 API 서버에 직접 전달하여 kubectl 프록시 사용을 -피할 수 있다. 내부 인증서는 연결을 보호한다. 
- -```shell -# 내부 API 서버 호스트 이름을 가리킨다 -APISERVER=https://kubernetes.default.svc - -# ServiceAccount 토큰 경로 -SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount - -# 이 파드의 네임스페이스를 읽는다 -NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) - -# ServiceAccount 베어러 토큰을 읽는다 -TOKEN=$(cat ${SERVICEACCOUNT}/token) - -# 내부 인증 기관(CA)을 참조한다 -CACERT=${SERVICEACCOUNT}/ca.crt - -# TOKEN으로 API를 탐색한다 -curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api -``` - -출력은 다음과 비슷하다. - -```json -{ - "kind": "APIVersions", - "versions": [ - "v1" - ], - "serverAddressByClientCIDRs": [ - { - "clientCIDR": "0.0.0.0/0", - "serverAddress": "10.0.1.149:443" - } - ] -} -``` +* [파드 내에서 쿠버네티스 API에 접근](/ko/docs/tasks/run-application/access-api-from-pod/) diff --git a/content/ko/docs/tasks/administer-cluster/access-cluster-services.md b/content/ko/docs/tasks/administer-cluster/access-cluster-services.md index f8f707eb33..5dd7a627d0 100644 --- a/content/ko/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/ko/docs/tasks/administer-cluster/access-cluster-services.md @@ -83,7 +83,7 @@ heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/servi #### apiserver 프록시 URL 수동 구성 -위에서 언급한 것처럼, `kubectl cluster-info` 명령을 사용하여 서비스의 프록시 URL을 검색한다. 서비스 엔드포인트, 접미사 및 매개 변수를 포함하는 프록시 URL을 작성하려면, 단순히 서비스의 프록시 URL에 추가하면 된다. +위에서 언급한 것처럼, `kubectl cluster-info` 명령을 사용하여 서비스의 프록시 URL을 검색한다. 서비스 엔드포인트, 접미사 및 매개 변수를 포함하는 프록시 URL을 작성하려면, 서비스의 프록시 URL에 추가하면 된다. `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy` 포트에 대한 이름을 지정하지 않은 경우, URL에 *port_name* 을 지정할 필요가 없다. diff --git a/content/ko/docs/tasks/administer-cluster/certificates.md b/content/ko/docs/tasks/administer-cluster/certificates.md new file mode 100644 index 0000000000..8c8f6a148b --- /dev/null +++ b/content/ko/docs/tasks/administer-cluster/certificates.md @@ -0,0 +1,250 @@ +--- +title: 인증서 +content_type: task +weight: 20 +--- + + + + +클라이언트 인증서로 인증을 사용하는 경우 `easyrsa`, `openssl` 또는 `cfssl` +을 통해 인증서를 수동으로 생성할 수 있다. + + + + + + +### easyrsa + +**easyrsa** 는 클러스터 인증서를 수동으로 생성할 수 있다. + +1. easyrsa3의 패치 버전을 다운로드하여 압축을 풀고, 초기화한다. + + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki +1. 새로운 인증 기관(CA)을 생성한다. `--batch` 는 자동 모드를 설정한다. + `--req-cn` 는 CA의 새 루트 인증서에 대한 일반 이름(Common Name (CN))을 지정한다. + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass +1. 서버 인증서와 키를 생성한다. + `--subject-alt-name` 인수는 API 서버에 접근이 가능한 IP와 DNS + 이름을 설정한다. `MASTER_CLUSTER_IP` 는 일반적으로 API 서버와 + 컨트롤러 관리자 컴포넌트에 대해 `--service-cluster-ip-range` 인수로 + 지정된 서비스 CIDR의 첫 번째 IP이다. `--days` 인수는 인증서가 만료되는 + 일 수를 설정하는데 사용된다. + 또한, 아래 샘플은 기본 DNS 이름으로 `cluster.local` 을 + 사용한다고 가정한다. + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass +1. `pki/ca.crt`, `pki/issued/server.crt` 그리고 `pki/private/server.key` 를 디렉터리에 복사한다. +1. API 서버 시작 파라미터에 다음 파라미터를 채우고 추가한다. + + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + +**openssl** 은 클러스터 인증서를 수동으로 생성할 수 있다. + +1. ca.key를 2048bit로 생성한다. 
+ + openssl genrsa -out ca.key 2048 +1. ca.key에 따라 ca.crt를 생성한다(인증서 유효 기간을 사용하려면 -days를 사용한다). + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt +1. server.key를 2048bit로 생성한다. + + openssl genrsa -out server.key 2048 +1. 인증서 서명 요청(Certificate Signing Request (CSR))을 생성하기 위한 설정 파일을 생성한다. + 파일에 저장하기 전에 꺾쇠 괄호(예: ``)로 + 표시된 값을 실제 값으로 대체한다(예: `csr.conf`). + `MASTER_CLUSTER_IP` 의 값은 이전 하위 섹션에서 + 설명한 대로 API 서버의 서비스 클러스터 IP이다. + 또한, 아래 샘플에서는 `cluster.local` 을 기본 DNS 도메인 + 이름으로 사용하고 있다고 가정한다. + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = <국가(country)> + ST = <도(state)> + L = <시(city)> + O = <조직(organization)> + OU = <조직 단위(organization unit)> + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names +1. 설정 파일을 기반으로 인증서 서명 요청을 생성한다. + + openssl req -new -key server.key -out server.csr -config csr.conf +1. ca.key, ca.crt 그리고 server.csr을 사용해서 서버 인증서를 생성한다. + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf +1. 인증서를 본다. + + openssl x509 -noout -text -in ./server.crt + +마지막으로, API 서버 시작 파라미터에 동일한 파라미터를 추가한다. + +### cfssl + +**cfssl** 은 인증서 생성을 위한 또 다른 도구이다. + +1. 아래에 표시된 대로 커맨드 라인 도구를 다운로드하여 압축을 풀고 준비한다. + 사용 중인 하드웨어 아키텍처 및 cfssl 버전에 따라 샘플 + 명령을 조정해야 할 수도 있다. + + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo +1. 아티팩트(artifact)를 보유할 디렉터리를 생성하고 cfssl을 초기화한다. + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json +1. CA 파일을 생성하기 위한 JSON 설정 파일을 `ca-config.json` 예시와 같이 생성한다. + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } +1. CA 인증서 서명 요청(CSR)을 위한 JSON 설정 파일을 + `ca-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호로 표시된 + 값을 사용하려는 실제 값으로 변경한다. + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. CA 키(`ca-key.pem`)와 인증서(`ca.pem`)을 생성한다. + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca +1. API 서버의 키와 인증서를 생성하기 위한 JSON 구성파일을 + `server-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호 안의 값을 + 사용하려는 실제 값으로 변경한다. `MASTER_CLUSTER_IP` 는 + 이전 하위 섹션에서 설명한 API 서버의 클러스터 IP이다. + 아래 샘플은 기본 DNS 도메인 이름으로 `cluster.local` 을 + 사용한다고 가정한다. 
+ + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. API 서버 키와 인증서를 생성하면, 기본적으로 + `server-key.pem` 과 `server.pem` 파일에 각각 저장된다. + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## 자체 서명된 CA 인증서의 배포 + +클라이언트 노드는 자체 서명된 CA 인증서를 유효한 것으로 인식하지 않을 수 있다. +비-프로덕션 디플로이먼트 또는 회사 방화벽 뒤에서 실행되는 +디플로이먼트의 경우, 자체 서명된 CA 인증서를 모든 클라이언트에 +배포하고 유효한 인증서의 로컬 목록을 새로 고칠 수 있다. + +각 클라이언트에서, 다음 작업을 수행한다. + +```bash +sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +sudo update-ca-certificates +``` + +``` +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. +``` + +## 인증서 API + +`certificates.k8s.io` API를 사용해서 +[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 +설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. diff --git a/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md b/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md index 8fd7445fb7..ff6379ee1f 100644 --- a/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/ko/docs/tasks/administer-cluster/change-default-storage-class.md @@ -32,7 +32,7 @@ content_type: task 수도 있다. 이런 경우에, 기본 스토리지 클래스를 변경하거나 완전히 비활성화 하여 스토리지의 동적 프로비저닝을 방지할 수 있다. -단순하게 기본 스토리지클래스를 삭제하는 경우, 사용자의 클러스터에서 구동중인 +기본 스토리지클래스를 삭제하는 경우, 사용자의 클러스터에서 구동 중인 애드온 매니저에 의해 자동으로 다시 생성될 수 있으므로 정상적으로 삭제가 되지 않을 수도 있다. 애드온 관리자 및 개별 애드온을 비활성화 하는 방법에 대한 자세한 내용은 설치 문서를 참조하자. @@ -70,7 +70,7 @@ content_type: task 1. 스토리지클래스를 기본값으로 표시한다. - 이전 과정과 유사하게, 어노테이션을 추가/설정 해야 한다. + 이전 과정과 유사하게, 어노테이션을 추가/설정해야 한다. `storageclass.kubernetes.io/is-default-class=true`. ```bash diff --git a/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md index 34cb102f83..5a8295aff2 100644 --- a/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -9,18 +9,13 @@ weight: 100 이 페이지는 프라이빗 도커 레지스트리나 리포지터리로부터 이미지를 받아오기 위해 시크릿(Secret)을 사용하는 파드를 생성하는 방법을 보여준다. - - ## {{% heading "prerequisites" %}} - * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * 이 실습을 수행하기 위해, [도커 ID](https://docs.docker.com/docker-id/)와 비밀번호가 필요하다. - - ## 도커 로그인 @@ -106,7 +101,8 @@ kubectl create secret docker-registry regcred --docker-server=` 은 프라이빗 도커 저장소의 FQDN 주소이다. (도커허브(DockerHub)의 경우, https://index.docker.io/v1/) +* `` 은 프라이빗 도커 저장소의 FQDN 주소이다. + 도커허브(DockerHub)는 `https://index.docker.io/v2/` 를 사용한다. * `` 은 도커 사용자의 계정이다. * `` 은 도커 사용자의 비밀번호이다. * `` 은 도커 사용자의 이메일 주소이다. @@ -192,7 +188,8 @@ your.private.registry.example.com/janedoe/jdoe-private:v1 ``` 프라이빗 저장소에서 이미지를 받아오기 위하여, 쿠버네티스에서 자격 증명이 필요하다. -구성 파일의 `imagePullSecrets` 필드를 통해 쿠버네티스가 `regcred` 라는 시크릿으로부터 자격 증명을 가져올 수 있다. +구성 파일의 `imagePullSecrets` 필드를 통해 쿠버네티스가 +`regcred` 라는 시크릿으로부터 자격 증명을 가져올 수 있다. 시크릿을 사용해서 파드를 생성하고, 파드가 실행되는지 확인하자. 
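파드를 생성하기 전에, 앞에서 만든 `regcred` 시크릿이 의도한 레지스트리 자격 증명을 담고 있는지 먼저 확인해 볼 수도 있다. 아래는 시크릿 이름이 `regcred` 인 경우를 가정한 예시이다.

```bash
# regcred 시크릿의 .dockerconfigjson 내용을 디코딩하여 확인한다
kubectl get secret regcred --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```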
@@ -201,16 +198,11 @@ kubectl apply -f my-private-reg-pod.yaml kubectl get pod private-reg ``` - - ## {{% heading "whatsnext" %}} - * [시크릿](/ko/docs/concepts/configuration/secret/)에 대해 더 배워 보기. * [프라이빗 레지스트리 사용](/ko/docs/concepts/containers/images/#프라이빗-레지스트리-사용)에 대해 더 배워 보기. * [서비스 어카운트에 풀 시크릿(pull secret) 추가하기](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)에 대해 더 배워 보기. * [kubectl create secret docker-registry](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-)에 대해 읽어보기. * [시크릿](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)에 대해 읽어보기. * [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)의 `imagePullSecrets` 필드에 대해 읽어보기. - - diff --git a/content/ko/docs/tasks/configure-pod-container/static-pod.md b/content/ko/docs/tasks/configure-pod-container/static-pod.md index 41e1f6f7b0..4daf739c2f 100644 --- a/content/ko/docs/tasks/configure-pod-container/static-pod.md +++ b/content/ko/docs/tasks/configure-pod-container/static-pod.md @@ -22,6 +22,7 @@ Kubelet 은 각각의 스태틱 파드에 대하여 쿠버네티스 API 서버 생성하려고 자동으로 시도한다. 즉, 노드에서 구동되는 파드는 API 서버에 의해서 볼 수 있지만, API 서버에서 제어될 수는 없다. +파드 이름에는 노드 호스트 이름 앞에 하이픈을 붙여 접미사로 추가된다. {{< note >}} 만약 클러스터로 구성된 쿠버네티스를 구동하고 있고, 스태틱 파드를 사용하여 diff --git a/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md deleted file mode 100644 index 54a2ae61b0..0000000000 --- a/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -content_type: concept -title: 엘라스틱서치(Elasticsearch) 및 키바나(Kibana)를 사용한 로깅 ---- - - - -Google 컴퓨트 엔진(Compute Engine, GCE) 플랫폼에서, 기본 로깅 지원은 -[스택드라이버(Stackdriver) 로깅](https://cloud.google.com/logging/)을 대상으로 한다. 이는 -[스택드라이버 로깅으로 로깅하기](/docs/tasks/debug-application-cluster/logging-stackdriver)에 자세히 설명되어 있다. - -이 문서에서는 GCE에서 운영할 때 스택드라이버 로깅의 대안으로, -[엘라스틱서치](https://www.elastic.co/products/elasticsearch)에 로그를 수집하고 -[키바나](https://www.elastic.co/products/kibana)를 사용하여 볼 수 있도록 -클러스터를 설정하는 방법에 대해 설명한다. - -{{< note >}} -Google 쿠버네티스 엔진(Kubernetes Engine)에서 호스팅되는 쿠버네티스 클러스터에는 엘라스틱서치 및 키바나를 자동으로 배포할 수 없다. 수동으로 배포해야 한다. -{{< /note >}} - - - - - -클러스터 로깅에 엘라스틱서치, 키바나를 사용하려면 kube-up.sh를 사용하여 -클러스터를 생성할 때 아래와 같이 다음의 환경 변수를 -설정해야 한다. - -```shell -KUBE_LOGGING_DESTINATION=elasticsearch -``` - -또한 `KUBE_ENABLE_NODE_LOGGING=true`(GCE 플랫폼의 기본값)인지 확인해야 한다. - -이제, 클러스터를 만들 때, 각 노드에서 실행되는 Fluentd 로그 수집 데몬이 -엘라스틱서치를 대상으로 한다는 메시지가 나타난다. - -```shell -cluster/kube-up.sh -``` -``` -... -Project: kubernetes-satnam -Zone: us-central1-b -... calling kube-up -Project: kubernetes-satnam -Zone: us-central1-b -+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel -+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d) -+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0) -Looking for already existing resources -Starting master and configuring firewalls -Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd]. -NAME ZONE SIZE_GB TYPE STATUS -kubernetes-master-pd us-central1-b 20 pd-ssd READY -Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip]. 
-+++ Logging using Fluentd to elasticsearch -``` - -노드별 Fluentd 파드, 엘라스틱서치 파드 및 키바나 파드는 -클러스터가 활성화된 직후 kube-system 네임스페이스에서 모두 실행되어야 -한다. - -```shell -kubectl get pods --namespace=kube-system -``` -``` -NAME READY STATUS RESTARTS AGE -elasticsearch-logging-v1-78nog 1/1 Running 0 2h -elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h -kibana-logging-v1-bhpo8 1/1 Running 0 2h -kube-dns-v3-7r1l9 3/3 Running 0 2h -monitoring-heapster-v4-yl332 1/1 Running 1 2h -monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h -``` - -`fluentd-elasticsearch` 파드는 각 노드에서 로그를 수집하여 -`elasticsearch-logging` 파드로 전송한다. 이 로그는 `elasticsearch-logging` 이라는 -[서비스](/ko/docs/concepts/services-networking/service/)의 일부이다. 이 -엘라스틱서치 파드는 로그를 저장하고 REST API를 통해 노출한다. -`kibana-logging` 파드는 엘라스틱서치에 저장된 로그를 읽기 위한 웹 UI를 -제공하며, `kibana-logging` 이라는 서비스의 일부이다. - -엘라스틱서치 및 키바나 서비스는 모두 `kube-system` 네임스페이스에 -있으며 공개적으로 접근 가능한 IP 주소를 통해 직접 노출되지 않는다. 이를 위해, -[클러스터에서 실행 중인 서비스 접근](/ko/docs/tasks/access-application-cluster/access-cluster/#클러스터에서-실행되는-서비스로-액세스)에 -대한 지침을 참고한다. - -브라우저에서 `elasticsearch-logging` 서비스에 접근하려고 하면, -다음과 같은 상태 페이지가 표시된다. - -![엘라스틱서치 상태](/images/docs/es-browser.png) - -원할 경우, 이제 엘라스틱서치 쿼리를 브라우저에 직접 입력할 수 -있다. 수행 방법에 대한 자세한 내용은 [엘라스틱서치의 문서](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)를 -참조한다. - -또는, 키바나를 사용하여 클러스터의 로그를 볼 수도 있다(다시 -[클러스터에서 실행되는 서비스에 접근하기 위한 지침](/ko/docs/tasks/access-application-cluster/access-cluster/#클러스터에서-실행되는-서비스로-액세스)을 참고). -키바나 URL을 처음 방문하면 수집된 로그 보기를 -구성하도록 요청하는 페이지가 표시된다. 시계열 값에 -대한 옵션을 선택하고 `@timestamp` 를 선택한다. 다음 페이지에서 -`Discover` 탭을 선택하면 수집된 로그를 볼 수 있다. -로그를 정기적으로 새로 고치려면 새로 고침 간격을 5초로 -설정할 수 있다. - -키바나 뷰어에서 수집된 로그의 일반적인 보기는 다음과 같다. - -![키바나 로그](/images/docs/kibana-logs.png) - - - -## {{% heading "whatsnext" %}} - - -키바나는 로그를 탐색하기 위한 모든 종류의 강력한 옵션을 제공한다! 이를 파헤치는 방법에 대한 -아이디어는 [키바나의 문서](https://www.elastic.co/guide/en/kibana/current/discover.html)를 확인한다. diff --git a/content/ko/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/ko/docs/tasks/extend-kubectl/kubectl-plugins.md index f46df1e511..77081a82f3 100644 --- a/content/ko/docs/tasks/extend-kubectl/kubectl-plugins.md +++ b/content/ko/docs/tasks/extend-kubectl/kubectl-plugins.md @@ -22,7 +22,7 @@ content_type: task ## kubectl 플러그인 설치 -플러그인은 이름이 `kubectl-` 로 시작되는 독립형 실행 파일이다. 플러그인을 설치하려면, 간단히 실행 파일을 `PATH` 에 지정된 디렉터리로 옮기면 된다. +플러그인은 이름이 `kubectl-` 로 시작되는 독립형 실행 파일이다. 플러그인을 설치하려면, 실행 파일을 `PATH` 에 지정된 디렉터리로 옮기면 된다. [Krew](https://krew.dev/)를 사용하여 오픈소스에서 사용 가능한 kubectl 플러그인을 검색하고 설치할 수도 있다. Krew는 쿠버네티스 SIG CLI 커뮤니티에서 관리하는 @@ -57,9 +57,9 @@ Krew [플러그인 인덱스](https://krew.sigs.k8s.io/plugins/)를 통해 사 플러그인 설치 또는 사전 로딩이 필요하지 않다. 플러그인 실행 파일은 `kubectl` 바이너리에서 상속된 환경을 받는다. -플러그인은 이름을 기반으로 구현할 명령 경로를 결정한다. 예를 -들어, 새로운 명령인 `kubectl foo` 를 제공하려는 플러그인은 단순히 이름이 -`kubectl-foo` 이고, `PATH` 의 어딘가에 있다. +플러그인은 이름을 기반으로 구현할 명령 경로를 결정한다. +예를 들어, `kubectl-foo` 라는 플러그인은 `kubectl foo` 명령을 제공한다. +`PATH` 어딘가에 플러그인 실행 파일을 설치해야 한다. ### 플러그인 예제 @@ -85,30 +85,31 @@ echo "I am a plugin named kubectl-foo" ### 플러그인 사용 -위의 플러그인을 사용하려면, 간단히 실행 가능하게 만든다. +플러그인을 사용하려면, 실행 가능하게 만든다. -``` +```shell sudo chmod +x ./kubectl-foo ``` 그리고 `PATH` 의 어느 곳에나 옮겨 놓는다. -``` +```shell sudo mv ./kubectl-foo /usr/local/bin ``` 이제 플러그인을 `kubectl` 명령으로 호출할 수 있다. 
-``` +```shell kubectl foo ``` + ``` I am a plugin named kubectl-foo ``` 모든 인수와 플래그는 그대로 실행 파일로 전달된다. -``` +```shell kubectl foo version ``` ``` @@ -120,6 +121,7 @@ kubectl foo version ```bash export KUBECONFIG=~/.kube/config kubectl foo config + ``` ``` /home//.kube/config @@ -128,6 +130,7 @@ kubectl foo config ```shell KUBECONFIG=/etc/kube/config kubectl foo config ``` + ``` /etc/kube/config ``` @@ -373,11 +376,8 @@ kubectl 플러그인의 배포 패키지를 컴파일된 패키지를 사용 가능하게 하거나, Krew를 사용하면 설치가 더 쉬워진다. - - ## {{% heading "whatsnext" %}} - * Go로 작성된 플러그인의 [자세한 예제](https://github.com/kubernetes/sample-cli-plugin)에 대해서는 샘플 CLI 플러그인 리포지터리를 확인한다. diff --git a/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md index 22372813e9..5e23ba831e 100644 --- a/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/ko/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -70,8 +70,9 @@ weight: 20 {{< /note >}} {{< note >}} -환경 변수는 서로를 참조할 수 있으며 사이클이 가능하다. -사용하기 전에 순서에 주의한다. +환경 변수는 서로를 참조할 수 있는데, 이 때 순서에 주의해야 한다. +동일한 컨텍스트에서 정의된 다른 변수를 참조하는 변수는 목록의 뒤쪽에 나와야 한다. +또한, 순환 참조는 피해야 한다. {{< /note >}} ## 설정 안에서 환경 변수 사용하기 diff --git a/content/ko/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/ko/docs/tasks/job/coarse-parallel-processing-work-queue.md index aeaac4803d..fa765a7005 100644 --- a/content/ko/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/ko/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -35,7 +35,7 @@ weight: 30 ## 메시지 대기열 서비스 시작 -이 문서의 예시에서는 RabbitMQ를 사용하지만, 다른 AMQP 타입의 메시지 서비스에 적용하는데 문제가 없을 것이다. +이 예시에서는 RabbitMQ를 사용하지만, 다른 AMQP 유형의 메시지 서비스를 사용하도록 예시를 조정할 수 있다. 실제로 사용할 때는, 클러스터에 메시지 대기열 서비스를 한 번 구축하고서, 여러 많은 잡이나 오래 동작하는 서비스에 재사용할 수 있다. diff --git a/content/ko/docs/tasks/job/parallel-processing-expansion.md b/content/ko/docs/tasks/job/parallel-processing-expansion.md index 341739ba62..ef02dac61b 100644 --- a/content/ko/docs/tasks/job/parallel-processing-expansion.md +++ b/content/ko/docs/tasks/job/parallel-processing-expansion.md @@ -12,7 +12,7 @@ weight: 20 있다. 이 예에는 _apple_, _banana_ 그리고 _cherry_ 세 항목만 있다. -샘플 잡들은 단순히 문자열을 출력한 다음 일시 정지하는 각 항목을 처리한다. +샘플 잡들은 문자열을 출력한 다음 일시 정지하는 각 항목을 처리한다. 이 패턴이 보다 실질적인 유스케이스에 어떻게 부합하는지 알아 보려면 [실제 워크로드에서 잡 사용하기](#실제-워크로드에서-잡-사용하기)를 참고한다. diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md index c9ed347c56..16b2cd05f0 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -7,7 +7,7 @@ weight: 20 [Kustomize](https://github.com/kubernetes-sigs/kustomize)는 -[kustomization 파일](https://kubernetes-sigs.github.io/kustomize/api-reference/glossary/#kustomization)을 +[kustomization 파일](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization)을 통해 쿠버네티스 오브젝트를 사용자가 원하는 대로 변경하는(customize) 독립형 도구이다. 1.14 이후로, kubectl도 diff --git a/content/ko/docs/tasks/run-application/access-api-from-pod.md b/content/ko/docs/tasks/run-application/access-api-from-pod.md new file mode 100644 index 0000000000..d12f3b2f00 --- /dev/null +++ b/content/ko/docs/tasks/run-application/access-api-from-pod.md @@ -0,0 +1,111 @@ +--- +title: 파드 내에서 쿠버네티스 API에 접근 +content_type: task +weight: 120 +--- + + + +이 페이지는 파드 내에서 쿠버네티스 API에 접근하는 방법을 보여준다. 
+ +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## 파드 내에서 API에 접근 {#accessing-the-api-from-within-a-pod} + +파드 내에서 API에 접근할 때, API 서버를 찾아 인증하는 것은 +위에서 설명한 외부 클라이언트 사례와 약간 다르다. + +파드에서 쿠버네티스 API를 사용하는 가장 쉬운 방법은 +공식 [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이다. 이러한 +라이브러리는 API 서버를 자동으로 감지하고 인증할 수 있다. + +### 공식 클라이언트 라이브러리 사용 + +파드 내에서, 쿠버네티스 API에 연결하는 권장 방법은 다음과 같다. + + - Go 클라이언트의 경우, 공식 [Go 클라이언트 라이브러리](https://github.com/kubernetes/client-go/)를 사용한다. + `rest.InClusterConfig()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. + [여기 예제](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)를 참고한다. + + - Python 클라이언트의 경우, 공식 [Python 클라이언트 라이브러리](https://github.com/kubernetes-client/python/)를 사용한다. + `config.load_incluster_config()` 기능은 API 호스트 검색과 인증을 자동으로 처리한다. + [여기 예제](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)를 참고한다. + + - 사용할 수 있는 다른 라이브러리가 많이 있다. [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 페이지를 참고한다. + +각각의 경우, 파드의 서비스 어카운트 자격 증명은 API 서버와 +안전하게 통신하는 데 사용된다. + +### REST API에 직접 접근 + +파드에서 실행되는 동안, 쿠버네티스 apiserver는 `default` 네임스페이스에서 `kubernetes`라는 +서비스를 통해 접근할 수 있다. 따라서, 파드는 `kubernetes.default.svc` +호스트 이름을 사용하여 API 서버를 쿼리할 수 있다. 공식 클라이언트 라이브러리는 +이를 자동으로 수행한다. + +API 서버를 인증하는 권장 방법은 [서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/) +자격 증명을 사용하는 것이다. 기본적으로, 파드는 +서비스 어카운트와 연결되어 있으며, 해당 서비스 어카운트에 대한 자격 증명(토큰)은 +해당 파드에 있는 각 컨테이너의 파일시스템 트리의 +`/var/run/secrets/kubernetes.io/serviceaccount/token` 에 있다. + +사용 가능한 경우, 인증서 번들은 각 컨테이너의 +파일시스템 트리의 `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 에 배치되며, +API 서버의 제공 인증서를 확인하는 데 사용해야 한다. + +마지막으로, 네임스페이스가 지정된 API 작업에 사용되는 기본 네임스페이스는 각 컨테이너의 +`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 에 있는 파일에 배치된다. + +### kubectl 프록시 사용 + +공식 클라이언트 라이브러리 없이 API를 쿼리하려면, 파드에서 +새 사이드카 컨테이너의 [명령](/ko/docs/tasks/inject-data-application/define-command-argument-container/)으로 +`kubectl proxy` 를 실행할 수 있다. 이런 식으로, `kubectl proxy` 는 +API를 인증하고 이를 파드의 `localhost` 인터페이스에 노출시켜서, 파드의 +다른 컨테이너가 직접 사용할 수 있도록 한다. + +### 프록시를 사용하지 않고 접근 + +인증 토큰을 API 서버에 직접 전달하여 kubectl 프록시 사용을 +피할 수 있다. 내부 인증서는 연결을 보호한다. + +```shell +# 내부 API 서버 호스트 이름을 가리킨다 +APISERVER=https://kubernetes.default.svc + +# 서비스어카운트(ServiceAccount) 토큰 경로 +SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount + +# 이 파드의 네임스페이스를 읽는다 +NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) + +# 서비스어카운트 베어러 토큰을 읽는다 +TOKEN=$(cat ${SERVICEACCOUNT}/token) + +# 내부 인증 기관(CA)을 참조한다 +CACERT=${SERVICEACCOUNT}/ca.crt + +# TOKEN으로 API를 탐색한다 +curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api +``` + +출력은 다음과 비슷하다. + +```json +{ + "kind": "APIVersions", + "versions": [ + "v1" + ], + "serverAddressByClientCIDRs": [ + { + "clientCIDR": "0.0.0.0/0", + "serverAddress": "10.0.1.149:443" + } + ] +} +``` diff --git a/content/ko/docs/tasks/run-application/delete-stateful-set.md b/content/ko/docs/tasks/run-application/delete-stateful-set.md index 8bb7ab89f2..1ef9220d65 100644 --- a/content/ko/docs/tasks/run-application/delete-stateful-set.md +++ b/content/ko/docs/tasks/run-application/delete-stateful-set.md @@ -60,7 +60,7 @@ PVC를 삭제할 때 데이터 손실될 수 있음에 주의하자. ### 스테이트풀셋의 완벽한 삭제 -연결된 파드를 포함해서 스테이트풀셋의 모든 것을 간단히 삭제하기 위해 다음과 같이 일련의 명령을 실행 한다. +연결된 파드를 포함해서 스테이트풀셋의 모든 것을 삭제하기 위해 다음과 같이 일련의 명령을 실행한다. 
```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md index 42172a463e..f762357603 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -190,7 +190,7 @@ Horizontal Pod Autoscaler는 모든 API 리소스와 마찬가지로 `kubectl` `kubectl get hpa`로 오토스케일러 목록을 조회할 수 있고, `kubectl describe hpa`로 세부 사항을 확인할 수 있다. 마지막으로 `kubectl delete hpa`를 사용하여 오토스케일러를 삭제할 수 있다. -또한 Horizontal Pod Autoscaler를 쉽게 생성 할 수 있는 `kubectl autoscale`이라는 특별한 명령이 있다. +또한 Horizontal Pod Autoscaler를 생성할 수 있는 `kubectl autoscale`이라는 특별한 명령이 있다. 예를 들어 `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`을 실행하면 레플리케이션 셋 *foo* 에 대한 오토스케일러가 생성되고, 목표 CPU 사용률은 `80 %`, 그리고 2와 5 사이의 레플리카 개수로 설정된다. @@ -220,9 +220,10 @@ v1.6 부터 클러스터 운영자는 `kube-controller-manager` 컴포넌트의 v1.12부터는 새로운 알고리즘 업데이트가 업스케일 지연에 대한 필요성을 제거하였다. -- `--horizontal-pod-autoscaler-downscale-delay` : 이 옵션 값은 - 오토스케일러가 현재의 작업이 완료된 후에 다른 다운스케일 작업을 - 수행하기까지 기다려야 하는 시간을 지정하는 지속 시간이다. +- `--horizontal-pod-autoscaler-downscale-delay` : 다운스케일이 + 안정화되기까지의 시간 간격을 지정한다. + Horizontal Pod Autoscaler는 이전의 권장하는 크기를 기억하고, + 이 시간 간격에서의 가장 큰 크기에서만 작동한다. 기본값은 5분(`5m0s`)이다. {{< note >}} @@ -382,7 +383,12 @@ behavior: periodSeconds: 60 ``` -파드 수가 40개를 초과하면 두 번째 폴리시가 스케일링 다운에 사용된다. +`periodSeconds` 는 폴리시가 참(true)으로 유지되어야 하는 기간을 나타낸다. +첫 번째 정책은 _(파드들)_ 이 1분 내에 최대 4개의 레플리카를 스케일 다운할 수 있도록 허용한다. +두 번째 정책은 _비율_ 로 현재 레플리카의 최대 10%를 1분 내에 스케일 다운할 수 있도록 허용한다. + +기본적으로 가장 많은 변경을 허용하는 정책이 선택되기에 두 번째 정책은 +파드의 레플리카 수가 40개를 초과하는 경우에만 사용된다. 레플리카가 40개 이하인 경우 첫 번째 정책이 적용된다. 예를 들어 80개의 레플리카가 있고 대상을 10개의 레플리카로 축소해야 하는 경우 첫 번째 단계에서 8개의 레플리카가 스케일 다운 된다. 레플리카의 수가 72개일 때 다음 반복에서 파드의 10%는 7.2 이지만, 숫자는 8로 올림된다. 오토스케일러 컨트롤러의 @@ -390,10 +396,6 @@ behavior: 미만으로 떨어지면 첫 번째 폴리시 _(파드들)_ 가 적용되고 한번에 4개의 레플리카가 줄어든다. -`periodSeconds` 는 폴리시가 참(true)으로 유지되어야 하는 기간을 나타낸다. -첫 번째 정책은 1분 내에 최대 4개의 레플리카를 스케일 다운할 수 있도록 허용한다. -두 번째 정책은 현재 레플리카의 최대 10%를 1분 내에 스케일 다운할 수 있도록 허용한다. - 확장 방향에 대해 `selectPolicy` 필드를 확인하여 폴리시 선택을 변경할 수 있다. 레플리카의 수를 최소로 변경할 수 있는 폴리시를 선택하는 `최소(Min)`로 값을 설정한다. 값을 `Disabled` 로 설정하면 해당 방향으로 스케일링이 완전히 @@ -440,7 +442,7 @@ behavior: periodSeconds: 15 selectPolicy: Max ``` -안정화 윈도우의 스케일링 다운의 경우 _300_ 초(또는 제공된 +안정화 윈도우의 스케일링 다운의 경우 _300_ 초 (또는 제공된 경우`--horizontal-pod-autoscaler-downscale-stabilization` 플래그의 값)이다. 스케일링 다운에서는 현재 실행 중인 레플리카의 100%를 제거할 수 있는 단일 정책만 있으며, 이는 스케일링 대상을 최소 허용 레플리카로 축소할 수 있음을 의미한다. diff --git a/content/ko/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/ko/docs/tasks/run-application/run-single-instance-stateful-application.md index f3debe8781..cf6c3188b7 100644 --- a/content/ko/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/ko/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -65,6 +65,8 @@ MySQL을 실행하고 퍼시스턴트볼륨클레임을 참조하는 디플로 kubectl describe deployment mysql + 출력은 다음과 유사하다. + Name: mysql Namespace: default CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700 @@ -105,6 +107,8 @@ MySQL을 실행하고 퍼시스턴트볼륨클레임을 참조하는 디플로 kubectl get pods -l app=mysql + 출력은 다음과 유사하다. + NAME READY STATUS RESTARTS AGE mysql-63082529-2z3ki 1/1 Running 0 3m @@ -112,6 +116,8 @@ MySQL을 실행하고 퍼시스턴트볼륨클레임을 참조하는 디플로 kubectl describe pvc mysql-pv-claim + 출력은 다음과 유사하다. 
+ Name: mysql-pv-claim Namespace: default StorageClass: diff --git a/content/ko/docs/tasks/tls/certificate-rotation.md b/content/ko/docs/tasks/tls/certificate-rotation.md index b23bf2a600..037f99d87a 100644 --- a/content/ko/docs/tasks/tls/certificate-rotation.md +++ b/content/ko/docs/tasks/tls/certificate-rotation.md @@ -70,6 +70,7 @@ kubelet은 쿠버네티스 API로 서명된 인증서를 가져와서 서명된 인증서의 만료가 다가오면 kubelet은 쿠버네티스 API를 사용하여 새로운 인증서 서명 요청을 자동으로 발행한다. +이는 인증서 유효 기간이 30%-10% 남은 시점에 언제든지 실행될 수 있다. 또한, 컨트롤러 관리자는 인증서 요청을 자동으로 승인하고 서명된 인증서를 인증서 서명 요청에 첨부한다. kubelet은 쿠버네티스 API로 서명된 새로운 인증서를 가져와서 디스크에 쓴다. diff --git a/content/ko/docs/tasks/tools/_index.md b/content/ko/docs/tasks/tools/_index.md index 74abf8d981..47c90ec7fd 100755 --- a/content/ko/docs/tasks/tools/_index.md +++ b/content/ko/docs/tasks/tools/_index.md @@ -7,18 +7,19 @@ no_list: true ## kubectl -쿠버네티스 커맨드 라인 도구인 `kubectl` 사용하면 쿠버네티스 클러스터에 대해 명령을 -실행할 수 있다. `kubectl` 을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 -관리하고, 로그를 볼 수 있다. + +쿠버네티스 커맨드 라인 도구인 [`kubectl`](/ko/docs/reference/kubectl/kubectl/)을 사용하면 +쿠버네티스 클러스터에 대해 명령을 실행할 수 있다. +`kubectl` 을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 관리하고, +로그를 볼 수 있다. kubectl 전체 명령어를 포함한 추가 정보는 +[`kubectl` 레퍼런스 문서](/ko/docs/reference/kubectl/)에서 확인할 수 있다. -클러스터에 접근하기 위해 `kubectl` 을 다운로드 및 설치하고 설정하는 방법에 대한 정보는 -[`kubectl` 설치 및 설정](/ko/docs/tasks/tools/install-kubectl/)을 -참고한다. +`kubectl` 은 다양한 리눅스 플랫폼, macOS, 그리고 윈도우에 설치할 수 있다. +각각에 대한 설치 가이드는 다음과 같다. -kubectl 설치 및 설정 가이드 보기 - -[`kubectl` 레퍼런스 문서](/ko/docs/reference/kubectl/)를 -읽어볼 수도 있다. +- [리눅스에 `kubectl` 설치하기](install-kubectl-linux) +- [macOS에 `kubectl` 설치하기](install-kubectl-macos) +- [윈도우에 `kubectl` 설치하기](install-kubectl-windows) ## kind diff --git a/content/ko/docs/tasks/tools/included/_index.md b/content/ko/docs/tasks/tools/included/_index.md new file mode 100644 index 0000000000..4ba9445002 --- /dev/null +++ b/content/ko/docs/tasks/tools/included/_index.md @@ -0,0 +1,6 @@ +--- +title: "포함된 도구들" +description: "메인 kubectl-installs-*.md 페이지에 포함될 스니펫." +headless: true +toc_hide: true +--- \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md b/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md new file mode 100644 index 0000000000..f3deae981c --- /dev/null +++ b/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md @@ -0,0 +1,21 @@ +--- +title: "gcloud kubectl install" +description: "gcloud를 이용하여 kubectl을 설치하는 방법을 각 OS별 탭에 포함하기 위한 스니펫." +headless: true +--- + +Google Cloud SDK를 사용하여 kubectl을 설치할 수 있다. + +1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다. + +1. `kubectl` 설치 명령을 실행한다. + + ```shell + gcloud components install kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```shell + kubectl version --client + ``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/kubectl-whats-next.md b/content/ko/docs/tasks/tools/included/kubectl-whats-next.md new file mode 100644 index 0000000000..70532cd2eb --- /dev/null +++ b/content/ko/docs/tasks/tools/included/kubectl-whats-next.md @@ -0,0 +1,12 @@ +--- +title: "다음 단계는 무엇인가?" +description: "kubectl을 설치한 다음 해야 하는 것에 대해 설명한다." +headless: true +--- + +* [Minikube 설치](https://minikube.sigs.k8s.io/docs/start/) +* 클러스터 생성에 대한 자세한 내용은 [시작하기](/ko/docs/setup/)를 참고한다. +* [애플리케이션을 시작하고 노출하는 방법에 대해 배운다.](/ko/docs/tasks/access-application-cluster/service-access-application-cluster/) +* 직접 생성하지 않은 클러스터에 접근해야 하는 경우, + [클러스터 접근 공유 문서](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)를 참고한다. 
+* [kubectl 레퍼런스 문서](/ko/docs/reference/kubectl/kubectl/) 읽기 diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md new file mode 100644 index 0000000000..b9597857bb --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux.md @@ -0,0 +1,54 @@ +--- +title: "리눅스에서 bash 자동 완성 사용하기" +description: "리눅스에서 bash 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +### 소개 + +Bash의 kubectl 자동 완성 스크립트는 `kubectl completion bash` 명령으로 생성할 수 있다. 셸에서 자동 완성 스크립트를 소싱(sourcing)하면 kubectl 자동 완성 기능이 활성화된다. + +그러나, 자동 완성 스크립트는 [**bash-completion**](https://github.com/scop/bash-completion)에 의존하고 있으며, 이 소프트웨어를 먼저 설치해야 한다(`type _init_completion` 을 실행하여 bash-completion이 이미 설치되어 있는지 확인할 수 있음). + +### bash-completion 설치 + +bash-completion은 많은 패키지 관리자에 의해 제공된다([여기](https://github.com/scop/bash-completion#installation) 참고). `apt-get install bash-completion` 또는 `yum install bash-completion` 등으로 설치할 수 있다. + +위의 명령은 bash-completion의 기본 스크립트인 `/usr/share/bash-completion/bash_completion` 을 생성한다. 패키지 관리자에 따라, `~/.bashrc` 파일에서 이 파일을 수동으로 소스(source)해야 한다. + +확인하려면, 셸을 다시 로드하고 `type _init_completion` 을 실행한다. 명령이 성공하면, 이미 설정된 상태이고, 그렇지 않으면 `~/.bashrc` 파일에 다음을 추가한다. + +```bash +source /usr/share/bash-completion/bash_completion +``` + +셸을 다시 로드하고 `type _init_completion` 을 입력하여 bash-completion이 올바르게 설치되었는지 확인한다. + +### kubectl 자동 완성 활성화 + +이제 kubectl 자동 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행할 수 있는 두 가지 방법이 있다. + +- `~/.bashrc` 파일에서 자동 완성 스크립트를 소싱한다. + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- 자동 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다. + + ```bash + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 자동 완성을 확장할 수 있다. + +```bash +echo 'alias k=kubectl' >>~/.bashrc +echo 'complete -F __start_kubectl k' >>~/.bashrc +``` + +{{< note >}} +bash-completion은 `/etc/bash_completion.d` 에 있는 모든 자동 완성 스크립트를 소싱한다. +{{< /note >}} + +두 방법 모두 동일하다. 셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md new file mode 100644 index 0000000000..7acb5d3621 --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -0,0 +1,89 @@ +--- +title: "macOS에서 bash 자동 완성 사용하기" +description: "macOS에서 bash 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +### 소개 + +Bash의 kubectl 자동 완성 스크립트는 `kubectl completion bash` 로 생성할 수 있다. 이 스크립트를 셸에 소싱하면 kubectl 자동 완성이 가능하다. + +그러나 kubectl 자동 완성 스크립트는 미리 [**bash-completion**](https://github.com/scop/bash-completion)을 설치해야 동작한다. + +{{< warning>}} +bash-completion에는 v1과 v2 두 가지 버전이 있다. v1은 Bash 3.2(macOS의 기본 설치 버전) 버전용이고, v2는 Bash 4.1 이상 버전용이다. kubectl 자동 완성 스크립트는 bash-completion v1과 Bash 3.2 버전에서는 **작동하지 않는다**. **bash-completion v2** 와 **Bash 4.1 이상 버전** 이 필요하다. 따라서, macOS에서 kubectl 자동 완성 기능을 올바르게 사용하려면, Bash 4.1 이상을 설치하고 사용해야 한다([*지침*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). 다음의 내용에서는 Bash 4.1 이상(즉, 모든 Bash 버전 4.1 이상)을 사용한다고 가정한다. +{{< /warning >}} + +### Bash 업그레이드 + +여기의 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 다음을 실행하여 Bash 버전을 확인할 수 있다. + +```bash +echo $BASH_VERSION +``` + +너무 오래된 버전인 경우, Homebrew를 사용하여 설치/업그레이드할 수 있다. + +```bash +brew install bash +``` + +셸을 다시 로드하고 원하는 버전을 사용 중인지 확인한다. + +```bash +echo $BASH_VERSION $SHELL +``` + +Homebrew는 보통 `/usr/local/bin/bash` 에 설치한다. 
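+
+새로 설치한 Bash를 기본 셸로 사용하려면, 다음과 같이 설정할 수도 있다. 아래는 Homebrew가 Bash를 `/usr/local/bin/bash` 에 설치했다고 가정한 간단한 예시이다.
+
+```bash
+# 새 Bash를 허용된 로그인 셸 목록에 추가한다 (설치 경로는 환경에 따라 다를 수 있다)
+echo /usr/local/bin/bash | sudo tee -a /etc/shells
+
+# 현재 사용자의 기본 셸을 새 Bash로 변경한다
+chsh -s /usr/local/bin/bash
+```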
+ +### bash-completion 설치 + +{{< note >}} +언급한 바와 같이, 이 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 이는 bash-completion v2를 설치한다는 것을 의미한다(Bash 3.2 및 bash-completion v1의 경우, kubectl 자동 완성이 작동하지 않음). +{{< /note >}} + +bash-completion v2가 이미 설치되어 있는지 `type_init_completion` 으로 확인할 수 있다. 그렇지 않은 경우, Homebrew로 설치할 수 있다. + +```bash +brew install bash-completion@2 +``` + +이 명령의 출력에 명시된 바와 같이, `~/.bash_profile` 파일에 다음을 추가한다. + +```bash +export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" +[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" +``` + +셸을 다시 로드하고 bash-completion v2가 올바르게 설치되었는지 `type _init_completion` 으로 확인한다. + +### kubectl 자동 완성 활성화 + +이제 kubectl 자동 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행하는 방법에는 여러 가지가 있다. + +- 자동 완성 스크립트를 `~/.bash_profile` 파일에서 소싱한다. + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bash_profile + ``` + +- 자동 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다. + + ```bash + kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl + ``` + +- kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하기 위해 셸 자동 완성을 확장할 수 있다. + + ```bash + echo 'alias k=kubectl' >>~/.bash_profile + echo 'complete -F __start_kubectl k' >>~/.bash_profile + ``` + +- Homebrew로 kubectl을 설치한 경우([여기](/ko/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)의 설명을 참고), kubectl 자동 완성 스크립트가 이미 `/usr/local/etc/bash_completion.d/kubectl` 에 있을 것이다. 이 경우, 아무 것도 할 필요가 없다. + + {{< note >}} + bash-completion v2의 Homebrew 설치는 `BASH_COMPLETION_COMPAT_DIR` 디렉터리의 모든 파일을 소싱하므로, 후자의 두 가지 방법이 적용된다. + {{< /note >}} + +어떤 경우든, 셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md new file mode 100644 index 0000000000..e81403300b --- /dev/null +++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md @@ -0,0 +1,29 @@ +--- +title: "zsh 자동 완성" +description: "zsh 자동 완성을 위한 몇 가지 선택적 구성에 대해 설명한다." +headless: true +--- + +Zsh용 kubectl 자동 완성 스크립트는 `kubectl completion zsh` 명령으로 생성할 수 있다. 셸에서 자동 완성 스크립트를 소싱하면 kubectl 자동 완성 기능이 활성화된다. + +모든 셸 세션에서 사용하려면, `~/.zshrc` 파일에 다음을 추가한다. + +```zsh +source <(kubectl completion zsh) +``` + +kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하도록 셸 자동 완성을 확장할 수 있다. + +```zsh +echo 'alias k=kubectl' >>~/.zshrc +echo 'complete -F __start_kubectl k' >>~/.zshrc +``` + +셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다. + +`complete:13: command not found: compdef` 와 같은 오류가 발생하면, `~/.zshrc` 파일의 시작 부분에 다음을 추가한다. + +```zsh +autoload -Uz compinit +compinit +``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/included/verify-kubectl.md b/content/ko/docs/tasks/tools/included/verify-kubectl.md new file mode 100644 index 0000000000..b935582b7a --- /dev/null +++ b/content/ko/docs/tasks/tools/included/verify-kubectl.md @@ -0,0 +1,34 @@ +--- +title: "kubectl 설치 검증하기" +description: "kubectl을 검증하는 방법에 대해 설명한다." +headless: true +--- + +kubectl이 쿠버네티스 클러스터를 찾아 접근하려면, +[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)를 +사용하여 클러스터를 생성하거나 Minikube 클러스터를 성공적으로 배포할 때 자동으로 생성되는 +[kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)이 +필요하다. +기본적으로, kubectl 구성은 `~/.kube/config` 에 있다. + +클러스터 상태를 가져와서 kubectl이 올바르게 구성되어 있는지 확인한다. + +```shell +kubectl cluster-info +``` + +URL 응답이 표시되면, kubectl이 클러스터에 접근하도록 올바르게 구성된 것이다. + +다음과 비슷한 메시지가 표시되면, kubectl이 올바르게 구성되지 않았거나 쿠버네티스 클러스터에 연결할 수 없다. 
+ +``` +The connection to the server was refused - did you specify the right host or port? +``` + +예를 들어, 랩톱에서 로컬로 쿠버네티스 클러스터를 실행하려면, Minikube와 같은 도구를 먼저 설치한 다음 위에서 언급한 명령을 다시 실행해야 한다. + +kubectl cluster-info가 URL 응답을 반환하지만 클러스터에 접근할 수 없는 경우, 올바르게 구성되었는지 확인하려면 다음을 사용한다. + +```shell +kubectl cluster-info dump +``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl-linux.md b/content/ko/docs/tasks/tools/install-kubectl-linux.md new file mode 100644 index 0000000000..0e8a6ac6ee --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-linux.md @@ -0,0 +1,174 @@ +--- + + +title: 리눅스에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: 리눅스에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## 리눅스에 kubectl 설치 + +다음과 같은 방법으로 리눅스에 kubectl을 설치할 수 있다. + +- [리눅스에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-linux) +- [기본 패키지 관리 도구를 사용하여 설치](#install-using-native-package-management) +- [다른 패키지 관리 도구를 사용하여 설치](#install-using-other-package-management) +- [Google Cloud SDK를 사용하여 설치](#install-on-linux-as-part-of-the-google-cloud-sdk) + +### 리눅스에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-linux} + +1. 다음 명령으로 최신 릴리스를 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + ``` + + {{< note >}} +특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. + +예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl + ``` + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬(checksum) 파일을 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + ```bash + echo "$(}} + 동일한 버전의 바이너리와 체크섬을 다운로드한다. + {{< /note >}} + +1. kubectl 설치 + + ```bash + sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl + ``` + + {{< note >}} + 대상 시스템에 root 접근 권한을 가지고 있지 않더라도, `~/.local/bin` 디렉터리에 kubectl을 설치할 수 있다. + + ```bash + mkdir -p ~/.local/bin/kubectl + mv ./kubectl ~/.local/bin/kubectl + # 그리고 ~/.local/bin/kubectl을 $PATH에 추가 + ``` + + {{< /note >}} + +1. 설치한 버전이 최신인지 확인한다. 
+ + ```bash + kubectl version --client + ``` + +### 기본 패키지 관리 도구를 사용하여 설치 {#install-using-native-package-management} + +{{< tabs name="kubectl_install" >}} +{{< tab name="Ubuntu, Debian 또는 HypriotOS" codelang="bash" >}} +sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl +curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - +echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list +sudo apt-get update +sudo apt-get install -y kubectl +{{< /tab >}} + +{{< tab name="CentOS, RHEL 또는 Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF +yum install -y kubectl +{{< /tab >}} +{{< /tabs >}} + +### 다른 패키지 관리 도구를 사용하여 설치 {#install-using-other-package-management} + +{{< tabs name="other_kubectl_install" >}} +{{% tab name="Snap" %}} +[snap](https://snapcraft.io/docs/core/install) 패키지 관리자를 지원하는 Ubuntu 또는 다른 리눅스 배포판을 사용하는 경우, kubectl을 [snap](https://snapcraft.io/) 애플리케이션으로 설치할 수 있다. + +```shell +snap install kubectl --classic + +kubectl version --client +``` + +{{% /tab %}} + +{{% tab name="Homebrew" %}} +리눅스 상에서 [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) 패키지 관리자를 사용한다면, [설치](https://docs.brew.sh/Homebrew-on-Linux#install)를 통해 kubectl을 사용할 수 있다. + +```shell +brew install kubectl + +kubectl version --client +``` + +{{% /tab %}} + +{{< /tabs >}} + +### Google Cloud SDK를 사용하여 설치 {#install-on-linux-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Bash 및 Zsh에 대한 자동 완성을 설정하는 절차이다. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-linux.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} diff --git a/content/ko/docs/tasks/tools/install-kubectl-macos.md b/content/ko/docs/tasks/tools/install-kubectl-macos.md new file mode 100644 index 0000000000..b0747f8a1c --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-macos.md @@ -0,0 +1,160 @@ +--- + + +title: macOS에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: macOS에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## macOS에 kubectl 설치 + +다음과 같은 방법으로 macOS에 kubectl을 설치할 수 있다. + +- [macOS에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-macos) +- [macOS에서 Homebrew를 사용하여 설치](#install-with-homebrew-on-macos) +- [macOS에서 Macports를 사용하여 설치](#install-with-macports-on-macos) +- [Google Cloud SDK를 사용하여 설치](#install-on-macos-as-part-of-the-google-cloud-sdk) + +### macOS에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-macos} + +1. 최신 릴리스를 다운로드한다. 
+ + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" + ``` + + {{< note >}} + 특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. + + 예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl + ``` + + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬 파일을 다운로드한다. + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + ```bash + echo "$(}} + 동일한 버전의 바이너리와 체크섬을 다운로드한다. + {{< /note >}} + +1. kubectl 바이너리를 실행 가능하게 한다. + + ```bash + chmod +x ./kubectl + ``` + +1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다. + + ```bash + sudo mv ./kubectl /usr/local/bin/kubectl && \ + sudo chown root: /usr/local/bin/kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```bash + kubectl version --client + ``` + +### macOS에서 Homebrew를 사용하여 설치 {#install-with-homebrew-on-macos} + +macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경우, Homebrew로 kubectl을 설치할 수 있다. + +1. 설치 명령을 실행한다. + + ```bash + brew install kubectl + ``` + + 또는 + + ```bash + brew install kubernetes-cli + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```bash + kubectl version --client + ``` + +### macOS에서 Macports를 사용하여 설치 {#install-with-macports-on-macos} + +macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하는 경우, Macports로 kubectl을 설치할 수 있다. + +1. 설치 명령을 실행한다. + + ```bash + sudo port selfupdate + sudo port install kubectl + ``` + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```bash + kubectl version --client + ``` + + +### Google Cloud SDK를 사용하여 설치 {#install-on-macos-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Bash 및 Zsh에 대한 자동 완성을 설정하는 절차이다. + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-mac.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl-windows.md b/content/ko/docs/tasks/tools/install-kubectl-windows.md new file mode 100644 index 0000000000..e1c67af9ce --- /dev/null +++ b/content/ko/docs/tasks/tools/install-kubectl-windows.md @@ -0,0 +1,179 @@ +--- + + +title: 윈도우에 kubectl 설치 및 설정 +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: 윈도우에 kubectl 설치하기 +--- + +## {{% heading "prerequisites" %}} + +클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. +예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. +최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. + +## 윈도우에 kubectl 설치 + +다음과 같은 방법으로 윈도우에 kubectl을 설치할 수 있다. + +- [윈도우에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-windows) +- [PSGallery에서 PowerShell로 설치](#install-with-powershell-from-psgallery) +- [Chocolatey 또는 Scoop을 사용하여 윈도우에 설치](#install-on-windows-using-chocolatey-or-scoop) +- [Google Cloud SDK를 사용하여 설치](#install-on-windows-as-part-of-the-google-cloud-sdk) + + +### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-windows} + +1. 
[최신 릴리스 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)를 다운로드한다. + + 또는 `curl` 을 설치한 경우, 다음 명령을 사용한다. + + ```powershell + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe + ``` + + {{< note >}} + 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)를 참고한다. + {{< /note >}} + +1. 바이너리를 검증한다. (선택 사항) + + kubectl 체크섬 파일을 다운로드한다. + + ```powershell + curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 + ``` + + kubectl 바이너리를 체크섬 파일을 통해 검증한다. + + - 수동으로 `CertUtil` 의 출력과 다운로드한 체크섬 파일을 비교하기 위해서 커맨드 프롬프트를 사용한다. + + ```cmd + CertUtil -hashfile kubectl.exe SHA256 + type kubectl.exe.sha256 + ``` + + - `-eq` 연산자를 통해 `True` 또는 `False` 결과를 얻는 자동 검증을 위해서 PowerShell을 사용한다. + + ```powershell + $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + ``` + +1. 바이너리를 `PATH` 가 설정된 디렉터리에 추가한다. + +1. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다. + + ```cmd + kubectl version --client + ``` + +{{< note >}} +[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 `PATH` 에 추가한다. +도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 `PATH` 항목 앞에 `PATH` 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다. +{{< /note >}} + +### PSGallery에서 PowerShell로 설치 {#install-with-powershell-from-psgallery} + +윈도우에서 [Powershell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, Powershell로 kubectl을 설치하고 업데이트할 수 있다. + +1. 설치 명령을 실행한다(`DownloadLocation` 을 지정해야 한다). + + ```powershell + Install-Script -Name install-kubectl -Scope CurrentUser -Force + install-kubectl.ps1 [-DownloadLocation ] + ``` + + {{< note >}} + `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 `temp` 디렉터리에 설치된다. + {{< /note >}} + + 설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다. + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```powershell + kubectl version --client + ``` + +{{< note >}} +설치 업데이트는 1 단계에서 나열한 두 명령을 다시 실행하여 수행한다. +{{< /note >}} + +### Chocolatey 또는 Scoop을 사용하여 윈도우에 설치 {#install-on-windows-using-chocolatey-or-scoop} + +1. 윈도우에 kubectl을 설치하기 위해서 [Chocolatey](https://chocolatey.org) 패키지 관리자나 [Scoop](https://scoop.sh) 커맨드 라인 설치 프로그램을 사용할 수 있다. + + {{< tabs name="kubectl_win_install" >}} + {{% tab name="choco" %}} + ```powershell + choco install kubernetes-cli + ``` + {{% /tab %}} + {{% tab name="scoop" %}} + ```powershell + scoop install kubectl + ``` + {{% /tab %}} + {{< /tabs >}} + + +1. 설치한 버전이 최신 버전인지 확인한다. + + ```powershell + kubectl version --client + ``` + +1. 홈 디렉터리로 이동한다. + + ```powershell + # cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE% + cd ~ + ``` + +1. `.kube` 디렉터리를 생성한다. + + ```powershell + mkdir .kube + ``` + +1. 금방 생성한 `.kube` 디렉터리로 이동한다. + + ```powershell + cd .kube + ``` + +1. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다. + + ```powershell + New-Item config -type file + ``` + +{{< note >}} +메모장과 같은 텍스트 편집기를 선택하여 구성 파일을 편집한다. +{{< /note >}} + +### Google Cloud SDK를 사용하여 설치 {#install-on-windows-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + +## kubectl 구성 확인 + +{{< include "included/verify-kubectl.md" >}} + +## 선택적 kubectl 구성 + +### 셸 자동 완성 활성화 + +kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. + +다음은 Zsh에 대한 자동 완성을 설정하는 절차이다. 
+ +{{< include "included/optional-kubectl-configs-zsh.md" >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl.md b/content/ko/docs/tasks/tools/install-kubectl.md deleted file mode 100644 index 9d80451e1b..0000000000 --- a/content/ko/docs/tasks/tools/install-kubectl.md +++ /dev/null @@ -1,529 +0,0 @@ ---- -title: kubectl 설치 및 설정 -content_type: task -weight: 10 -card: - name: tasks - weight: 20 - title: kubectl 설치 ---- - - -쿠버네티스 커맨드 라인 도구인 [kubectl](/ko/docs/reference/kubectl/kubectl/)을 사용하면, -쿠버네티스 클러스터에 대해 명령을 실행할 수 있다. -kubectl을 사용하여 애플리케이션을 배포하고, 클러스터 리소스를 검사 및 관리하며 -로그를 볼 수 있다. kubectl 작업의 전체 목록에 대해서는, -[kubectl 개요](/ko/docs/reference/kubectl/overview/)를 참고한다. - - -## {{% heading "prerequisites" %}} - -클러스터의 마이너(minor) 버전 차이 내에 있는 kubectl 버전을 사용해야 한다. -예를 들어, v1.2 클라이언트는 v1.1, v1.2 및 v1.3의 마스터와 함께 작동해야 한다. -최신 버전의 kubectl을 사용하면 예기치 않은 문제를 피할 수 있다. - - - -## 리눅스에 kubectl 설치 - -### 리눅스에서 curl을 사용하여 kubectl 바이너리 설치 - -1. 다음 명령으로 최신 릴리스를 다운로드한다. - - ``` - curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" - ``` - - 특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. - - 예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. - ``` - curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl - ``` - -2. kubectl 바이너리를 실행 가능하게 만든다. - - ``` - chmod +x ./kubectl - ``` - -3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다. - - ``` - sudo mv ./kubectl /usr/local/bin/kubectl - ``` -4. 설치한 버전이 최신 버전인지 확인한다. - - ``` - kubectl version --client - ``` - -### 기본 패키지 관리 도구를 사용하여 설치 - -{{< tabs name="kubectl_install" >}} -{{< tab name="Ubuntu, Debian 또는 HypriotOS" codelang="bash" >}} -sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl -curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - -echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list -sudo apt-get update -sudo apt-get install -y kubectl -{{< /tab >}} - -{{< tab name="CentOS, RHEL 또는 Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -yum install -y kubectl -{{< /tab >}} -{{< /tabs >}} - -### 다른 패키지 관리 도구를 사용하여 설치 - -{{< tabs name="other_kubectl_install" >}} -{{% tab name="Snap" %}} -[snap](https://snapcraft.io/docs/core/install) 패키지 관리자를 지원하는 Ubuntu 또는 다른 리눅스 배포판을 사용하는 경우, kubectl을 [snap](https://snapcraft.io/) 애플리케이션으로 설치할 수 있다. - -```shell -snap install kubectl --classic - -kubectl version --client -``` - -{{% /tab %}} - -{{% tab name="Homebrew" %}} -리눅스 상에서 [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) 패키지 관리자를 사용한다면, [설치](https://docs.brew.sh/Homebrew-on-Linux#install)를 통해 kubectl을 사용할 수 있다. - -```shell -brew install kubectl - -kubectl version --client -``` - -{{% /tab %}} - -{{< /tabs >}} - - -## macOS에 kubectl 설치 - -### macOS에서 curl을 사용하여 kubectl 바이너리 설치 - -1. 최신 릴리스를 다운로드한다. 
- - ```bash - curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl" - ``` - - 특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다. - - 예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다. - ```bash - curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl - ``` - - kubectl 바이너리를 실행 가능하게 만든다. - - ```bash - chmod +x ./kubectl - ``` - -3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다. - - ```bash - sudo mv ./kubectl /usr/local/bin/kubectl - ``` - -4. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -### macOS에서 Homebrew를 사용하여 설치 - -macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경우, Homebrew로 kubectl을 설치할 수 있다. - -1. 설치 명령을 실행한다. - - ```bash - brew install kubectl - ``` - - 또는 - - ```bash - brew install kubernetes-cli - ``` - -2. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -### macOS에서 Macports를 사용하여 설치 - -macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하는 경우, Macports로 kubectl을 설치할 수 있다. - -1. 설치 명령을 실행한다. - - ```bash - sudo port selfupdate - sudo port install kubectl - ``` - -2. 설치한 버전이 최신 버전인지 확인한다. - - ```bash - kubectl version --client - ``` - -## 윈도우에 kubectl 설치 - -### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치 - -1. [이 링크](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)에서 최신 릴리스 {{< param "fullversion" >}}을 다운로드한다. - - 또는 `curl` 을 설치한 경우, 다음 명령을 사용한다. - - ```bash - curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe - ``` - - 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt)를 참고한다. - -2. 바이너리를 PATH가 설정된 디렉터리에 추가한다. - -3. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다. - - ```bash - kubectl version --client - ``` - -{{< note >}} -[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 PATH에 추가한다. -도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 PATH 항목 앞에 PATH 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다. -{{< /note >}} - -### PSGallery에서 Powershell로 설치 - -윈도우에서 [Powershell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, Powershell로 kubectl을 설치하고 업데이트할 수 있다. - -1. 설치 명령을 실행한다(`DownloadLocation` 을 지정해야 한다). - - ```powershell - Install-Script -Name install-kubectl -Scope CurrentUser -Force - install-kubectl.ps1 [-DownloadLocation ] - ``` - - {{< note >}} - `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 임시 디렉터리에 설치된다. - {{< /note >}} - - 설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다. - -2. 설치한 버전이 최신 버전인지 확인한다. - - ```powershell - kubectl version --client - ``` - -{{< note >}} -설치 업데이트는 1 단계에서 나열한 두 명령을 다시 실행하여 수행한다. -{{< /note >}} - -### Chocolatey 또는 Scoop을 사용하여 윈도우에 설치 - -1. 윈도우에 kubectl을 설치하기 위해서 [Chocolatey](https://chocolatey.org) 패키지 관리자나 [Scoop](https://scoop.sh) 커맨드 라인 설치 프로그램을 사용할 수 있다. - - {{< tabs name="kubectl_win_install" >}} - {{% tab name="choco" %}} - ```powershell - choco install kubernetes-cli - ``` - {{% /tab %}} - {{% tab name="scoop" %}} - ```powershell - scoop install kubectl - ``` - {{% /tab %}} - {{< /tabs >}} - - -2. 설치한 버전이 최신 버전인지 확인한다. - - ```powershell - kubectl version --client - ``` - -3. 홈 디렉터리로 이동한다. - - ```powershell - # cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE% - cd ~ - ``` - -4. 
`.kube` 디렉터리를 생성한다. - - ```powershell - mkdir .kube - ``` - -5. 금방 생성한 `.kube` 디렉터리로 이동한다. - - ```powershell - cd .kube - ``` - -6. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다. - - ```powershell - New-Item config -type file - ``` - -{{< note >}} -메모장과 같은 텍스트 편집기를 선택하여 구성 파일을 편집한다. -{{< /note >}} - -## Google Cloud SDK의 일부로 다운로드 - -kubectl을 Google Cloud SDK의 일부로 설치할 수 있다. - -1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다. - -2. `kubectl` 설치 명령을 실행한다. - - ```shell - gcloud components install kubectl - ``` - -3. 설치한 버전이 최신 버전인지 확인한다. - - ```shell - kubectl version --client - ``` - -## kubectl 구성 확인 - -kubectl이 쿠버네티스 클러스터를 찾아 접근하려면, -[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)를 -사용하여 클러스터를 생성하거나 Minikube 클러스터를 성공적으로 배포할 때 자동으로 생성되는 -[kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)이 -필요하다. -기본적으로, kubectl 구성은 `~/.kube/config` 에 있다. - -클러스터 상태를 가져와서 kubectl이 올바르게 구성되어 있는지 확인한다. - -```shell -kubectl cluster-info -``` - -URL 응답이 표시되면, kubectl이 클러스터에 접근하도록 올바르게 구성된 것이다. - -다음과 비슷한 메시지가 표시되면, kubectl이 올바르게 구성되지 않았거나 쿠버네티스 클러스터에 연결할 수 없다. - -``` -The connection to the server was refused - did you specify the right host or port? -``` - -예를 들어, 랩톱에서 로컬로 쿠버네티스 클러스터를 실행하려면, Minikube와 같은 도구를 먼저 설치한 다음 위에서 언급한 명령을 다시 실행해야 한다. - -kubectl cluster-info가 URL 응답을 반환하지만 클러스터에 접근할 수 없는 경우, 올바르게 구성되었는지 확인하려면 다음을 사용한다. - -```shell -kubectl cluster-info dump -``` - -## 선택적 kubectl 구성 - -### 셸 자동 완성 활성화 - -kubectl은 Bash 및 Zsh에 대한 자동 완성 지원을 제공하므로 입력을 위한 타이핑을 많이 절약할 수 있다. - -다음은 Bash(리눅스와 macOS의 다른 점 포함) 및 Zsh에 대한 자동 완성을 설정하는 절차이다. - -{{< tabs name="kubectl_autocompletion" >}} - -{{% tab name="리눅스에서의 Bash" %}} - -### 소개 - -Bash의 kubectl 완성 스크립트는 `kubectl completion bash` 명령으로 생성할 수 있다. 셸에서 완성 스크립트를 소싱(sourcing)하면 kubectl 자동 완성 기능이 활성화된다. - -그러나, 완성 스크립트는 [**bash-completion**](https://github.com/scop/bash-completion)에 의존하고 있으며, 이 소프트웨어를 먼저 설치해야 한다(`type _init_completion` 을 실행하여 bash-completion이 이미 설치되어 있는지 확인할 수 있음). - -### bash-completion 설치 - -bash-completion은 많은 패키지 관리자에 의해 제공된다([여기](https://github.com/scop/bash-completion#installation) 참고). `apt-get install bash-completion` 또는 `yum install bash-completion` 등으로 설치할 수 있다. - -위의 명령은 bash-completion의 기본 스크립트인 `/usr/share/bash-completion/bash_completion` 을 생성한다. 패키지 관리자에 따라, `~/.bashrc` 파일에서 이 파일을 수동으로 소스(source)해야 한다. - -확인하려면, 셸을 다시 로드하고 `type _init_completion` 을 실행한다. 명령이 성공하면, 이미 설정된 상태이고, 그렇지 않으면 `~/.bashrc` 파일에 다음을 추가한다. - -```bash -source /usr/share/bash-completion/bash_completion -``` - -셸을 다시 로드하고 `type _init_completion` 을 입력하여 bash-completion이 올바르게 설치되었는지 확인한다. - -### kubectl 자동 완성 활성화 - -이제 kubectl 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행할 수 있는 두 가지 방법이 있다. - -- `~/.bashrc` 파일에서 완성 스크립트를 소싱한다. - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bashrc - ``` -- 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다. - - ```bash - kubectl completion bash >/etc/bash_completion.d/kubectl - ``` -kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다. - -```bash -echo 'alias k=kubectl' >>~/.bashrc -echo 'complete -F __start_kubectl k' >>~/.bashrc -``` - -{{< note >}} -bash-completion은 `/etc/bash_completion.d` 에 있는 모든 완성 스크립트를 소싱한다. -{{< /note >}} - -두 방법 모두 동일하다. 셸을 다시 로드한 후, kubectl 자동 완성 기능이 작동해야 한다. - -{{% /tab %}} - - -{{% tab name="macOS에서의 Bash" %}} - - -### 소개 - -Bash의 kubectl 완성 스크립트는 `kubectl completion bash` 로 생성할 수 있다. 이 스크립트를 셸에 소싱하면 kubectl 완성이 가능하다. - -그러나 kubectl 완성 스크립트는 미리 [**bash-completion**](https://github.com/scop/bash-completion)을 설치해야 동작한다. 
- -{{< warning>}} -bash-completion에는 v1과 v2 두 가지 버전이 있다. v1은 Bash 3.2(macOS의 기본 설치 버전) 버전용이고, v2는 Bash 4.1 이상 버전용이다. kubectl 완성 스크립트는 bash-completion v1과 Bash 3.2 버전에서는 **작동하지 않는다**. **bash-completion v2** 와 **Bash 4.1 이상 버전** 이 필요하다. 따라서, macOS에서 kubectl 완성 기능을 올바르게 사용하려면, Bash 4.1 이상을 설치하고 사용해야한다([*지침*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). 다음의 내용에서는 Bash 4.1 이상(즉, 모든 Bash 버전 4.1 이상)을 사용한다고 가정한다. -{{< /warning >}} - -### Bash 업그레이드 - -여기의 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 다음을 실행하여 Bash 버전을 확인할 수 있다. - -```bash -echo $BASH_VERSION -``` - -너무 오래된 버전인 경우, Homebrew를 사용하여 설치/업그레이드할 수 있다. - -```bash -brew install bash -``` - -셸을 다시 로드하고 원하는 버전을 사용 중인지 확인한다. - -```bash -echo $BASH_VERSION $SHELL -``` - -Homebrew는 보통 `/usr/local/bin/bash` 에 설치한다. - -### bash-completion 설치 - -{{< note >}} -언급한 바와 같이, 이 지침에서는 Bash 4.1 이상을 사용한다고 가정한다. 이는 bash-completion v2를 설치한다는 것을 의미한다(Bash 3.2 및 bash-completion v1의 경우, kubectl 완성이 작동하지 않음). -{{< /note >}} - -bash-completion v2가 이미 설치되어 있는지 `type_init_completion` 으로 확인할 수 있다. 그렇지 않은 경우, Homebrew로 설치할 수 있다. - -```bash -brew install bash-completion@2 -``` - -이 명령의 출력에 명시된 바와 같이, `~/.bash_profile` 파일에 다음을 추가한다. - -```bash -export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" -[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" -``` - -셸을 다시 로드하고 bash-completion v2가 올바르게 설치되었는지 `type _init_completion` 으로 확인한다. - -### kubectl 자동 완성 활성화 - -이제 kubectl 완성 스크립트가 모든 셸 세션에서 제공되도록 해야 한다. 이를 수행하는 방법에는 여러 가지가 있다. - -- 완성 스크립트를 `~/.bash_profile` 파일에서 소싱한다. - - ```bash - echo 'source <(kubectl completion bash)' >>~/.bash_profile - - ``` - -- 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다. - - ```bash - kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl - ``` - -- kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하기 위해 셸 완성을 확장할 수 있다. - - ```bash - echo 'alias k=kubectl' >>~/.bash_profile - echo 'complete -F __start_kubectl k' >>~/.bash_profile - ``` - -- Homebrew로 kubectl을 설치한 경우([위](#macos에서-homebrew를-사용하여-설치)의 설명을 참고), kubectl 완성 스크립트는 이미 `/usr/local/etc/bash_completion.d/kubectl` 에 있어야 한다. 이 경우, 아무 것도 할 필요가 없다. - - {{< note >}} - bash-completion v2의 Homebrew 설치는 `BASH_COMPLETION_COMPAT_DIR` 디렉터리의 모든 파일을 소싱하므로, 후자의 두 가지 방법이 적용된다. - {{< /note >}} - -어쨌든, 셸을 다시 로드 한 후에, kubectl 완성이 작동해야 한다. -{{% /tab %}} - -{{% tab name="Zsh" %}} - -Zsh용 kubectl 완성 스크립트는 `kubectl completion zsh` 명령으로 생성할 수 있다. 셸에서 완성 스크립트를 소싱하면 kubectl 자동 완성 기능이 활성화된다. - -모든 셸 세션에서 사용하려면, `~/.zshrc` 파일에 다음을 추가한다. - -```zsh -source <(kubectl completion zsh) -``` - -kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다. - -```zsh -echo 'alias k=kubectl' >>~/.zshrc -echo 'complete -F __start_kubectl k' >>~/.zshrc -``` - -셸을 다시 로드 한 후, kubectl 자동 완성 기능이 작동해야 한다. - -`complete:13: command not found: compdef` 와 같은 오류가 발생하면, `~/.zshrc` 파일의 시작 부분에 다음을 추가한다. - -```zsh -autoload -Uz compinit -compinit -``` -{{% /tab %}} -{{< /tabs >}} - -## {{% heading "whatsnext" %}} - -* [Minikube 설치](https://minikube.sigs.k8s.io/docs/start/) -* 클러스터 생성에 대한 자세한 내용은 [시작하기](/ko/docs/setup/)를 참고한다. -* [애플리케이션을 시작하고 노출하는 방법에 대해 배운다.](/ko/docs/tasks/access-application-cluster/service-access-application-cluster/) -* 직접 생성하지 않은 클러스터에 접근해야하는 경우, - [클러스터 접근 공유 문서](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)를 참고한다. 
-* [kubectl 레퍼런스 문서](/ko/docs/reference/kubectl/kubectl/) 읽기 diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md index a0af5ff5b5..7c1216c5fd 100644 --- a/content/ko/docs/tutorials/_index.md +++ b/content/ko/docs/tutorials/_index.md @@ -33,7 +33,7 @@ content_type: concept * [외부 IP 주소를 노출하여 클러스터의 애플리케이션에 접속하기](/ko/docs/tutorials/stateless-application/expose-external-ip-address/) -* [예시: Redis를 사용한 PHP 방명록 애플리케이션 배포하기](/ko/docs/tutorials/stateless-application/guestbook/) +* [예시: MongoDB를 사용한 PHP 방명록 애플리케이션 배포하기](/ko/docs/tutorials/stateless-application/guestbook/) ## 상태 유지가 필요한(stateful) 애플리케이션 diff --git a/content/ko/docs/tutorials/clusters/apparmor.md b/content/ko/docs/tutorials/clusters/apparmor.md index 74008f9961..c28f41a7cc 100644 --- a/content/ko/docs/tutorials/clusters/apparmor.md +++ b/content/ko/docs/tutorials/clusters/apparmor.md @@ -168,8 +168,7 @@ k8s-apparmor-example-deny-write (enforce) *이 예시는 AppArmor를 지원하는 클러스터를 이미 구성하였다고 가정한다.* -먼저 노드에서 사용하려는 프로파일을 적재해야 한다. 사용할 프로파일은 단순히 -파일 쓰기를 거부할 것이다. +먼저 노드에서 사용하려는 프로파일을 적재해야 한다. 사용할 프로파일은 파일 쓰기를 거부한다. ```shell #include diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md index 8f3f515f31..39e57ff501 100644 --- a/content/ko/docs/tutorials/hello-minikube.md +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -61,6 +61,22 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다. 4. Katacoda 환경에서는: 30000 을 입력하고 **Display Port** 를 클릭. +{{< note >}} +`minikube dashboard` 명령을 내리면 대시보드 애드온과 프록시가 활성화되고 해당 프록시로 접속하는 기본 웹 브라우저 창이 열린다. 대시보드에서 디플로이먼트나 서비스와 같은 쿠버네티스 자원을 생성할 수 있다. + +root 환경에서 명령어를 실행하고 있다면, [URL을 이용하여 대시보드 접속하기](#open-dashboard-with-url)를 참고한다. + +`Ctrl+C` 를 눌러 프록시를 종료할 수 있다. 대시보드는 종료되지 않고 실행 상태로 남아 있다. +{{< /note >}} + +## URL을 이용하여 대시보드 접속하기 {#open-dashboard-with-url} + +자동으로 웹 브라우저가 열리는 것을 원치 않는다면, 다음과 같은 명령어를 실행하여 대시보드 접속 URL을 출력할 수 있다: + +```shell +minikube dashboard --url +``` + ## 디플로이먼트 만들기 쿠버네티스 [*파드*](/ko/docs/concepts/workloads/pods/)는 관리와 diff --git a/content/ko/docs/tutorials/kubernetes-basics/_index.html b/content/ko/docs/tutorials/kubernetes-basics/_index.html index 4be0e94231..f1ab593581 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/_index.html +++ b/content/ko/docs/tutorials/kubernetes-basics/_index.html @@ -41,7 +41,7 @@ card:

    쿠버네티스가 어떤 도움이 될까?

    -

    오늘날의 웹서비스에 대해서, 사용자는 애플리케이션이 24/7 가용하기를 바라고, 개발자는 하루에도 몇 번이고 새로운 버전의 애플리케이션을 배포하기를 바란다. 컨테이너화를 통해 소프트웨어를 패키지하면 애플리케이션을 다운타임 없이 쉽고 빠르게 릴리스 및 업데이트할 수 있게 되어서 이런 목표를 달성하는데 도움이 된다. 쿠버네티스는 이렇게 컨테이너화된 애플리케이션을 원하는 곳 어디에든 또 언제든 구동시킬 수 있다는 확신을 갖는데 도움을 주며, 그 애플리케이션이 작동하는데 필요한 자원과 도구를 찾는 것을 도와준다. 쿠버네티스는 구글의 컨테이너 오케스트레이션 부문의 축적된 경험으로 설계되고 커뮤니티로부터 도출된 최고의 아이디어가 결합된 운영 수준의 오픈 소스 플랫폼이다.

    +

    오늘날의 웹서비스에 대해서, 사용자는 애플리케이션이 24/7 가용하기를 바라고, 개발자는 하루에도 몇 번이고 새로운 버전의 애플리케이션을 배포하기를 바란다. 컨테이너화를 통해 소프트웨어를 패키지하면 애플리케이션을 다운타임 없이 릴리스 및 업데이트할 수 있게 되어서 이런 목표를 달성하는데 도움이 된다. 쿠버네티스는 이렇게 컨테이너화된 애플리케이션을 원하는 곳 어디에든 또 언제든 구동시킬 수 있다는 확신을 갖는데 도움을 주며, 그 애플리케이션이 작동하는데 필요한 자원과 도구를 찾는 것을 도와준다. 쿠버네티스는 구글의 컨테이너 오케스트레이션 부문의 축적된 경험으로 설계되고 커뮤니티로부터 도출된 최고의 아이디어가 결합된 운영 수준의 오픈 소스 플랫폼이다.

    diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index f51d68e866..da8cce3e17 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -33,7 +33,7 @@ weight: 10

    쿠버네티스 클러스터는 두 가지 형태의 자원으로 구성된다.

      -
    • 마스터는 클러스터를 조율한다.
    • +
    • 컨트롤 플레인은 클러스터를 조율한다.
    • 노드는 애플리케이션을 구동하는 작업자(worker)이다.

    @@ -71,20 +71,20 @@ weight: 10
    -

    마스터는 클러스터 관리를 담당한다. 마스터는 애플리케이션을 스케줄링하거나, 애플리케이션의 항상성을 유지하거나, 애플리케이션을 스케일링하고, 새로운 변경사항을 순서대로 반영(rolling out)하는 일과 같은 클러스터 내 모든 활동을 조율한다.

    -

    노드는 쿠버네티스 클러스터 내 워커 머신으로 동작하는 VM 또는 물리적인 컴퓨터다. 각 노드는 노드를 관리하고 쿠버네티스 마스터와 통신하는 Kubelet이라는 에이전트를 갖는다. 노드는 컨테이너 운영을 담당하는 containerd 또는 도커와 같은 툴도 갖는다. 운영 트래픽을 처리하는 쿠버네티스 클러스터는 최소 세 대의 노드를 가져야 한다.

    +

    컨트롤 플레인은 클러스터 관리를 담당한다. 컨트롤 플레인은 애플리케이션을 스케줄링하거나, 애플리케이션의 항상성을 유지하거나, 애플리케이션을 스케일링하고, 새로운 변경사항을 순서대로 반영(rolling out)하는 일과 같은 클러스터 내 모든 활동을 조율한다.

    +

    노드는 쿠버네티스 클러스터 내 워커 머신으로 동작하는 VM 또는 물리적인 컴퓨터다. 각 노드는 노드를 관리하고 쿠버네티스 컨트롤 플레인과 통신하는 Kubelet이라는 에이전트를 갖는다. 노드는 컨테이너 운영을 담당하는 containerd 또는 도커와 같은 툴도 갖는다. 운영 트래픽을 처리하는 쿠버네티스 클러스터는 최소 세 대의 노드를 가져야 한다.

    -

    마스터는 실행 중인 애플리케이션을 호스팅하기 위해 사용되는 노드와 클러스터를 관리한다.

    +

    컨트롤 플레인은 실행 중인 애플리케이션을 호스팅하기 위해 사용되는 노드와 클러스터를 관리한다.

    -

    애플리케이션을 쿠버네티스에 배포하기 위해서는, 마스터에 애플리케이션 컨테이너의 구동을 지시하면 된다. 그러면 마스터는 컨테이너를 클러스터의 어느 노드에 구동시킬지 스케줄한다. 노드는 마스터가 제공하는 쿠버네티스 API를 통해서 마스터와 통신한다. 최종 사용자도 쿠버네티스 API를 사용해서 클러스터와 직접 상호작용(interact)할 수 있다.

    +

    애플리케이션을 쿠버네티스에 배포하기 위해서는, 컨트롤 플레인에 애플리케이션 컨테이너의 구동을 지시하면 된다. 그러면 컨트롤 플레인은 컨테이너를 클러스터의 어느 노드에 구동시킬지 스케줄한다. 노드는 컨트롤 플레인이 제공하는 쿠버네티스 API를 통해서 컨트롤 플레인과 통신한다. 최종 사용자도 쿠버네티스 API를 사용해서 클러스터와 직접 상호작용(interact)할 수 있다.

    쿠버네티스 클러스터는 물리 및 가상 머신 모두에 설치될 수 있다. 쿠버네티스 개발을 시작하려면 Minikube를 사용할 수 있다. Minikube는 가벼운 쿠버네티스 구현체이며, 로컬 머신에 VM을 만들고 하나의 노드로 구성된 간단한 클러스터를 생성한다. Minikube는 리눅스, 맥, 그리고 윈도우 시스템에서 구동이 가능하다. Minikube CLI는 클러스터에 대해 시작, 중지, 상태 조회 및 삭제 등의 기본적인 부트스트래핑(bootstrapping) 기능을 제공한다. 하지만, 본 튜토리얼에서는 Minikube가 미리 설치된 채로 제공되는 온라인 터미널을 사용할 것이다.
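
    예를 들어, Minikube CLI의 기본적인 부트스트래핑 명령은 다음과 같이 사용할 수 있다. 아래는 로컬에 Minikube가 이미 설치되어 있다고 가정한 간단한 예시이다.

    ```shell
    minikube start     # 로컬 단일 노드 클러스터를 시작한다
    minikube status    # 클러스터 상태를 조회한다
    minikube stop      # 클러스터를 중지한다
    minikube delete    # 클러스터를 삭제한다
    ```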

    diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 5b41fe207a..4c250c1272 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,7 @@ weight: 10 일단 쿠버네티스 클러스터를 구동시키면, 그 위에 컨테이너화된 애플리케이션을 배포할 수 있다. 그러기 위해서, 쿠버네티스 디플로이먼트 설정을 만들어야 한다. 디플로이먼트는 쿠버네티스가 애플리케이션의 인스턴스를 어떻게 생성하고 업데이트해야 하는지를 지시한다. 디플로이먼트가 만들어지면, - 쿠버네티스 마스터가 해당 디플로이먼트에 포함된 애플리케이션 인스턴스가 클러스터의 개별 노드에서 실행되도록 스케줄한다. + 쿠버네티스 컨트롤 플레인이 해당 디플로이먼트에 포함된 애플리케이션 인스턴스가 클러스터의 개별 노드에서 실행되도록 스케줄한다.

    애플리케이션 인스턴스가 생성되면, 쿠버네티스 디플로이먼트 컨트롤러는 지속적으로 이들 인스턴스를 diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html index ebc880dbdd..e44e68df85 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -64,12 +64,6 @@ weight: 10

    -
    -
    -

    -
    -
    -

    서비스는 파드 셋에 걸쳐서 트래픽을 라우트한다. 여러분의 애플리케이션에 영향을 주지 않으면서 쿠버네티스에서 파드들이 죽게도 하고, 복제가 되게도 해주는 추상적 개념이다. 종속적인 파드들 사이에서의 디스커버리와 라우팅은 (하나의 애플리케이션에서 프론트엔드와 백엔드 컴포넌트와 같은) 쿠버네티스 서비스들에 의해 처리된다.
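
    예를 들어, 디플로이먼트를 서비스로 노출하는 것은 다음과 같은 명령으로 할 수 있다. 디플로이먼트 이름 `kubernetes-bootcamp` 와 포트 번호 8080은 설명을 위해 가정한 예시 값이다.

    ```shell
    kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
    ```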

    diff --git a/content/ko/docs/tutorials/services/source-ip.md b/content/ko/docs/tutorials/services/source-ip.md index 9d47599589..dec30dc54b 100644 --- a/content/ko/docs/tutorials/services/source-ip.md +++ b/content/ko/docs/tutorials/services/source-ip.md @@ -412,7 +412,7 @@ client_address=198.51.100.79 HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2) 또는 [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 헤더 또는 -[프록시 프로토콜](https://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)과 +[프록시 프로토콜](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)과 같은 로드밸런서와 백엔드 간에 합의된 프로토콜을 사용해야 한다. 두 번째 범주의 로드밸런서는 서비스의 `service.spec.healthCheckNodePort` 필드의 저장된 포트를 가르키는 HTTP 헬스 체크를 생성하여 diff --git a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md index 35eb540724..a17ae9f320 100644 --- a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md @@ -921,7 +921,7 @@ web-2 0/1 Terminating 0 3m `web` 스테이트풀셋이 다시 생성될 때 먼저 `web-0` 시작한다. `web-1`은 이미 Running과 Ready 상태이므로 `web-0`이 Running과 Ready 상태로 -전환될 때는 단순히 이 파드에 적용됬다. 스테이트풀셋에`replicas`를 2로 하고 +전환될 때는 이 파드에 적용됐다. 스테이트풀셋에 `replicas`를 2로 하고 `web-0`을 재생성했다면 `web-1`이 이미 Running과 Ready 상태이고, `web-2`은 종료되었을 것이다. @@ -932,6 +932,7 @@ web-2 0/1 Terminating 0 3m ```shell for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done ``` + ``` web-0 web-1 @@ -957,6 +958,7 @@ kubectl get pods -w -l app=nginx ```shell kubectl delete statefulset web ``` + ``` statefulset.apps "web" deleted ``` @@ -966,6 +968,7 @@ statefulset.apps "web" deleted ```shell kubectl get pods -w -l app=nginx ``` + ``` NAME READY STATUS RESTARTS AGE web-0 1/1 Running 0 11m @@ -997,6 +1000,7 @@ web-1 0/1 Terminating 0 29m ```shell kubectl delete service nginx ``` + ``` service "nginx" deleted ``` @@ -1006,6 +1010,7 @@ service "nginx" deleted ```shell kubectl apply -f web.yaml ``` + ``` service/nginx created statefulset.apps/web created @@ -1017,6 +1022,7 @@ statefulset.apps/web created ```shell for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done ``` + ``` web-0 web-1 @@ -1031,13 +1037,16 @@ web-1 ```shell kubectl delete service nginx ``` + ``` service "nginx" deleted ``` + 그리고 `web` 스테이트풀셋을 삭제한다. ```shell kubectl delete statefulset web ``` + ``` statefulset "web" deleted ``` diff --git a/content/ko/docs/tutorials/stateful-application/cassandra.md b/content/ko/docs/tutorials/stateful-application/cassandra.md index 8273f3bcd9..0a420100ce 100644 --- a/content/ko/docs/tutorials/stateful-application/cassandra.md +++ b/content/ko/docs/tutorials/stateful-application/cassandra.md @@ -114,7 +114,7 @@ cassandra ClusterIP None 9042/TCP 45s kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml ``` -클러스터에 맞게 `cassandra-statefulset.yaml` 를 수정해야 하는 경우 다음을 다운로드 한 다음 +클러스터에 맞게 `cassandra-statefulset.yaml` 를 수정해야 하는 경우 다음을 다운로드한 다음 수정된 버전을 저장한 폴더에서 해당 매니페스트를 적용한다. 
https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml ```shell diff --git a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 5c27b55183..fd893ca5de 100644 --- a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -91,7 +91,7 @@ EOF ## MySQL과 WordPress에 필요한 리소스 구성 추가하기 -다음 매니페스트는 MySQL 디플로이먼트 단일 인스턴스를 기술한다. MySQL 컨케이너는 퍼시스턴트볼륨을 /var/lib/mysql에 마운트한다. `MYSQL_ROOT_PASSWORD` 환경변수는 시크릿에서 가져와 데이터베이스 암호로 설정한다. +다음 매니페스트는 MySQL 디플로이먼트 단일 인스턴스를 기술한다. MySQL 컨테이너는 퍼시스턴트볼륨을 /var/lib/mysql에 마운트한다. `MYSQL_ROOT_PASSWORD` 환경변수는 시크릿에서 가져와 데이터베이스 암호로 설정한다. {{< codenew file="application/wordpress/mysql-deployment.yaml" >}} diff --git a/content/ko/docs/tutorials/stateful-application/zookeeper.md b/content/ko/docs/tutorials/stateful-application/zookeeper.md index b1e6dbe523..3fca0a6749 100644 --- a/content/ko/docs/tutorials/stateful-application/zookeeper.md +++ b/content/ko/docs/tutorials/stateful-application/zookeeper.md @@ -15,17 +15,17 @@ weight: 40 이 튜토리얼을 시작하기 전에 다음 쿠버네티스 개념에 친숙해야 한다. -- [파드](/ko/docs/concepts/workloads/pods/) -- [클러스터 DNS](/ko/docs/concepts/services-networking/dns-pod-service/) -- [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스) -- [퍼시스턴트볼륨](/ko/docs/concepts/storage/persistent-volumes/) -- [퍼시스턴트볼륨 프로비저닝](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) -- [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/) -- [PodDisruptionBudget](/ko/docs/concepts/workloads/pods/disruptions/#파드-disruption-budgets) -- [파드안티어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) -- [kubectl CLI](/ko/docs/reference/kubectl/kubectl/) +- [파드](/ko/docs/concepts/workloads/pods/) +- [클러스터 DNS](/ko/docs/concepts/services-networking/dns-pod-service/) +- [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스) +- [퍼시스턴트볼륨](/ko/docs/concepts/storage/persistent-volumes/) +- [퍼시스턴트볼륨 프로비저닝](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) +- [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/) +- [PodDisruptionBudget](/ko/docs/concepts/workloads/pods/disruptions/#파드-disruption-budgets) +- [파드안티어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) +- [kubectl CLI](/ko/docs/reference/kubectl/kubectl/) -최소한 4개의 노드가 있는 클러스터가 필요하며, 각 노드는 적어도 2 개의 CPU와 4 GiB 메모리가 필요하다. 이 튜토리얼에서 클러스터 노드를 통제(cordon)하고 비우게(drain) 할 것이다. **이것은 클러스터를 종료하여 노드의 모든 파드를 퇴출(evict)하는 것으로, 모든 파드는 임시로 언스케줄된다는 의미이다.** 이 튜토리얼을 위해 전용 클러스터를 이용하거나, 다른 테넌트에 간섭을 하는 혼란이 발생하지 않도록 해야 합니다. +반드시 최소한 4개의 노드가 있는 클러스터가 필요하며, 각 노드는 적어도 2 개의 CPU와 4 GiB 메모리가 필요하다. 이 튜토리얼에서 클러스터 노드를 통제(cordon)하고 비우게(drain) 할 것이다. **이것은 클러스터를 종료하여 노드의 모든 파드를 축출(evict)하는 것으로, 모든 파드는 임시로 언스케줄된다는 의미이다.** 이 튜토리얼을 위해 전용 클러스터를 이용하거나, 다른 테넌트에 간섭을 하는 혼란이 발생하지 않도록 해야 합니다. 이 튜토리얼은 클러스터가 동적으로 퍼시스턴트볼륨을 프로비저닝하도록 구성한다고 가정한다. 그렇게 설정되어 있지 않다면 @@ -37,15 +37,15 @@ weight: 40 이 튜토리얼을 마치면 다음에 대해 알게 된다. -- 어떻게 스테이트풀셋을 이용하여 ZooKeeper 앙상블을 배포하는가. -- 어떻게 앙상블을 일관되게 설정하는가. -- 어떻게 ZooKeeper 서버 디플로이먼트를 앙상블 안에서 퍼뜨리는가. -- 어떻게 PodDisruptionBudget을 이용하여 계획된 점검 기간 동안 서비스 가용성을 보장하는가. +- 어떻게 스테이트풀셋을 이용하여 ZooKeeper 앙상블을 배포하는가. +- 어떻게 앙상블을 일관되게 설정하는가. 
+- 어떻게 ZooKeeper 서버 디플로이먼트를 앙상블 안에서 퍼뜨리는가. +- 어떻게 PodDisruptionBudget을 이용하여 계획된 점검 기간 동안 서비스 가용성을 보장하는가. -### ZooKeeper 기본 {#zookeeper-basics} +### ZooKeeper [아파치 ZooKeeper](https://zookeeper.apache.org/doc/current/)는 분산 애플리케이션을 위한 분산 오픈 소스 코디네이션 서비스이다. @@ -438,8 +438,8 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R ```shell volumeMounts: - - name: datadir - mountPath: /var/lib/zookeeper +- name: datadir + mountPath: /var/lib/zookeeper ``` `zk` 스테이트풀셋이 (재)스케줄링될 때 항상 동일한 `퍼시스턴트볼륨`을 @@ -462,6 +462,7 @@ ZooKeeper 앙상블에 서버는 리더 선출과 쿼럼을 구성하기 위한 ```shell kubectl get sts zk -o yaml ``` + ``` … command: @@ -551,11 +552,9 @@ kubectl logs zk-0 --tail 20 2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client) ``` -쿠버네티스는 더 강력하지만 조금 복잡한 로그 통합을 -[스택드라이버](/docs/tasks/debug-application-cluster/logging-stackdriver/)와 -[Elasticsearch와 Kibana](/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)를 지원한다. -클러스터 수준의 로그 적재(ship)와 통합을 위해서는 로그 순환과 적재를 위해 -[사이드카](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) 컨테이너를 배포하는 것을 고려한다. +쿠버네티스는 많은 로그 솔루션과 통합된다. 클러스터와 애플리케이션에 +가장 적합한 로그 솔루션을 선택할 수 있다. 클러스터 수준의 +로그 적재(ship)와 통합을 위해서는 로그 순환과 적재를 위해 [사이드카 컨테이너](/ko/docs/concepts/cluster-administration/logging/#로깅-에이전트와-함께-사이드카-컨테이너-사용)를 배포하는 것을 고려한다. ### 권한 없는 사용자를 위해 구성하기 @@ -623,6 +622,7 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ```shell kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]' ``` + ``` statefulset.apps/zk patched ``` @@ -632,6 +632,7 @@ statefulset.apps/zk patched ```shell kubectl rollout status sts/zk ``` + ``` waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664... Waiting for 1 pods to be ready... @@ -872,8 +873,8 @@ kubernetes-node-2g2d ## 생존 유지 -**이 섹션에서는 노드를 통제(cordon)하고 비운다(drain). 공유된 클러스터에서 이 튜토리얼을 진행한다면, -다른 테넌트에 부정적인 영향을 비치지 않음을 보증해야 한다.** +이 섹션에서는 노드를 통제(cordon)하고 비운다(drain). 공유된 클러스터에서 이 튜토리얼을 진행한다면, +다른 테넌트에 부정적인 영향을 비치지 않음을 보증해야 한다. 이전 섹션은 계획되지 않은 노드 실패에서 살아남도록 어떻게 파드를 확산할 것인가에 대해 알아보았다. @@ -1008,6 +1009,7 @@ zk-1 0/1 Pending 0 0s ```shell kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data ``` + ``` node "kubernetes-node-i4c4" cordoned @@ -1051,6 +1053,7 @@ numChildren = 0 ```shell kubectl uncordon kubernetes-node-pb41 ``` + ``` node "kubernetes-node-pb41" uncordoned ``` @@ -1060,6 +1063,7 @@ node "kubernetes-node-pb41" uncordoned ```shell kubectl get pods -w -l app=zk ``` + ``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 2 1h @@ -1125,7 +1129,6 @@ drain으로 노드를 통제하고 유지보수를 위해 노드를 오프라인 - `kubectl uncordon`은 클러스터 내에 모든 노드를 통제 해제한다. -- 이 튜토리얼에서 사용한 퍼시스턴트 볼륨을 위한 - 퍼시스턴트 스토리지 미디어를 삭제하자. +- 반드시 이 튜토리얼에서 사용한 퍼시스턴트 볼륨을 위한 퍼시스턴트 스토리지 미디어를 삭제하자. 귀하의 환경과 스토리지 구성과 프로비저닝 방법에서 필요한 절차를 따라서 모든 스토리지가 재확보되도록 하자. 
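The cleanup bullet above asks the reader to reclaim the persistent storage media behind the tutorial's PersistentVolumes. A minimal sketch of that step, assuming the dynamically provisioned claims keep the `datadir-zk-N` names shown earlier in the tutorial and that the StorageClass reclaim policy releases the underlying volumes:

```shell
# List the claims created for the zk StatefulSet, then delete them;
# the bound PersistentVolumes are reclaimed according to the StorageClass policy.
kubectl get pvc
kubectl delete pvc datadir-zk-0 datadir-zk-1 datadir-zk-2
```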
diff --git a/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md deleted file mode 100644 index faf5fd5303..0000000000 --- a/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md +++ /dev/null @@ -1,457 +0,0 @@ ---- -title: "예제: PHP / Redis 방명록 예제에 로깅과 메트릭 추가" -content_type: tutorial -weight: 21 -card: - name: tutorials - weight: 31 - title: "예제: PHP / Redis 방명록 예제에 로깅과 메트릭 추가" ---- - - -이 튜토리얼은 [Redis를 이용한 PHP 방명록](/ko/docs/tutorials/stateless-application/guestbook) 튜토리얼을 기반으로 한다. Elastic의 경량 로그, 메트릭, 네트워크 데이터 오픈소스 배송기인 *Beats* 를 방명록과 동일한 쿠버네티스 클러스터에 배포한다. Beats는 데이터를 수집하고 구문분석하여 Elasticsearch에 색인화하므로, Kibana에서 동작 정보를 결과로 보며 분석할 수 있다. 이 예시는 다음과 같이 구성되어 있다. - -* [Redis를 이용한 PHP 방명록](/ko/docs/tutorials/stateless-application/guestbook)을 실행한 인스턴스 -* Elasticsearch와 Kibana -* Filebeat -* Metricbeat -* Packetbeat - -## {{% heading "objectives" %}} - -* Redis를 이용한 PHP 방명록 시작. -* kube-state-metrics 설치. -* 쿠버네티스 시크릿 생성. -* Beats 배포. -* 로그와 메트릭의 대시보드 보기. - -## {{% heading "prerequisites" %}} - - -{{< include "task-tutorial-prereqs.md" >}} -{{< version-check >}} - -추가로 다음이 필요하다. - -* 실행 중인 [Redis를 이용한 PHP 방명록](/ko/docs/tutorials/stateless-application/guestbook) 튜토리얼의 배포본. - -* 실행 중인 Elasticsearch와 Kibana 디플로이먼트. [Elastic Cloud의 Elasticsearch 서비스](https://cloud.elastic.co)를 사용하거나, - [파일을 내려받아](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) - 워크스테이션이나 서버에서 운영하거나, [Elastic의 Helm 차트](https://github.com/elastic/helm-charts)를 이용한다. - - - - - -## Redis를 이용한 PHP 방명록 시작 - -이 튜토리얼은 [Redis를 이용한 PHP 방명록](/ko/docs/tutorials/stateless-application/guestbook)을 기반으로 한다. 방명록 애플리케이션을 실행 중이라면, 이를 모니터링할 수 있다. 실행되지 않은 경우라면 지침을 따라 방명록을 배포하고 **정리하기** 단계는 수행하지 말자. 방명록을 실행할 때 이 페이지로 돌아오자. - -## 클러스터 롤 바인딩 추가 - -[클러스터 단위 롤 바인딩](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding)을 생성하여, 클러스터 수준(kube-system 안에)으로 kube-state-metrics와 Beats를 배포할 수 있게 한다. - -```shell -kubectl create clusterrolebinding cluster-admin-binding \ - --clusterrole=cluster-admin --user= -``` - -## kube-state-metrics 설치 - -[*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics)는 쿠버네티스 API 서버를 모니터링하며 오브젝트 상태에 대한 메트릭을 생성하는 간단한 서비스이다. 이런 메트릭을 Metricbeat이 보고한다. 방명록이 실행된 쿠버네티스 클러스터에서 kube-state-metrics을 추가한다. - -```shell -git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics -kubectl apply -f kube-state-metrics/examples/standard -``` - -### kube-state-metrics 실행 여부 확인 - -```shell -kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics -``` - -출력 - -``` -NAME READY STATUS RESTARTS AGE -kube-state-metrics-89d656bf8-vdthm 1/1 Running 0 21s -``` - -## Elastic의 예제를 GitHub 리포지터리에 클론한다. - -```shell -git clone https://github.com/elastic/examples.git -``` - -나머지 커맨드는 `examples/beats-k8s-send-anywhere` 디렉터리의 파일을 참조할 것이라서, 그쪽으로 현재 디렉터리를 변경한다. - -```shell -cd examples/beats-k8s-send-anywhere -``` - -## 쿠버네티스 시크릿 만들기 - -쿠버네티스 {{< glossary_tooltip text="시크릿" term_id="secret" >}}은 암호나 토큰, 키 같이 소량의 민감한 데이터를 포함하는 오브젝트이다. 이러한 정보는 다른 방식으로도 파드 스펙이나 이미지에 넣을 수 있을 것이다. 시크릿 오브젝트에 넣으면 이것이 어떻게 사용되는지 다양하게 제어할 수 있고, 우발적인 노출 사고의 위험이 줄일 수 있다. - -{{< note >}} -여기에는 방식이 나뉘는데, 하나는 *자체 관리(Self managed)* 로 Elasticsearch와 Kibana(Elastic의 Helm 차트를 이용하여 사용자 서버를 구동하는)를 사용하는 경우이고 다른 경우는 Elastic Cloud의 Elasticsearch 서비스의 *관리 서비스(Managed service)* 를 사용하는 방식이다. 
이 튜토리얼에서는 사용할 Elasticsearch와 Kibana 시스템의 종류에 따라 시크릿을 만들어야 한다. -{{< /note >}} - -{{< tabs name="tab_with_md" >}} -{{% tab name="자체 관리(Self Managed)" %}} - -### 자체 관리 - -Elastic Cloud의 Elasticsearch 서비스로 연결한다면 **관리 서비스** 탭으로 전환한다. - -### 자격증명(credentials) 설정 - -자체 관리 Elasticsearch와 Kibana(자체 관리는 사실상 Elastic Cloud의 관리 서비스 Elasticsearch와 다르다) 서비스에 접속할 때에 4개 파일을 수정하여 쿠버네티스 시크릿을 생성한다. 파일은 다음과 같다. - -1. `ELASTICSEARCH_HOSTS` -1. `ELASTICSEARCH_PASSWORD` -1. `ELASTICSEARCH_USERNAME` -1. `KIBANA_HOST` - -이 정보를 Elasticsearch 클러스터와 Kibana 호스트에 지정한다. 여기 예시(또는 [*이 구성*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)을 본다)가 있다. - -#### `ELASTICSEARCH_HOSTS` - -1. Elastic의 Elasticsearch Helm 차트에서 노드 그룹(nodeGroup). - - ``` - ["http://elasticsearch-master.default.svc.cluster.local:9200"] - ``` - -1. Mac을 위한 Docker에서 Beats를 운영 중인 Mac에서 운영하는 단일 Elasticsearch 노드. - - ``` - ["http://host.docker.internal:9200"] - ``` - -1. VM이나 물리 장비에서 운영 중인 두 개의 ELASTICSEARCH 노드. - - ``` - ["http://host1.example.com:9200", "http://host2.example.com:9200"] - ``` - -`ELASTICSEARCH_HOSTS` 를 수정한다. - -```shell -vi ELASTICSEARCH_HOSTS -``` - -#### `ELASTICSEARCH_PASSWORD` -화이트 스페이스나 인용 부호나 `<` 또는 `>` 도 없는 암호이다. - -``` -<사용자시크릿암호> -``` - -`ELASTICSEARCH_PASSWORD` 를 수정한다. - -```shell -vi ELASTICSEARCH_PASSWORD -``` - -#### `ELASTICSEARCH_USERNAME` -화이트 스페이스나 인용 부호나 `<` 또는 `>` 도 없는 이름이다. - -``` - -``` - -`ELASTICSEARCH_USERNAME` 을 수정한다. - -```shell -vi ELASTICSEARCH_USERNAME -``` - -#### `KIBANA_HOST` - -1.Elastic의 Kibana Helm 차트의 인스턴스이다. 하위 도메인 `default`는 기본 네임스페이스를 참조한다. 다른 네임스페이스를 사용하여 Helm 차트를 배포한 경우 하위 도메인이 다릅니다. - - ``` - "kibana-kibana.default.svc.cluster.local:5601" - ``` - -1. Mac 용 Docker에서 실행하는 Beats가 있는 Mac에서 실행하는 Kibana 인스턴스 - - ``` - "host.docker.internal:5601" - ``` -1. 가상머신이나 물리적 하드웨어에서 실행 중인 두 개의 Elasticsearch 노드 - - ``` - "host1.example.com:5601" - ``` - -`KIBANA_HOST` 를 편집한다. - -```shell -vi KIBANA_HOST -``` - -### 쿠버네티스 시크릿 만들기 - -이 커맨드는 방금 편집한 파일을 기반으로 쿠버네티스의 시스템 수준의 네임스페이스(`kube-system`)에 시크릿을 만든다. - -```shell -kubectl create secret generic dynamic-logging \ - --from-file=./ELASTICSEARCH_HOSTS \ - --from-file=./ELASTICSEARCH_PASSWORD \ - --from-file=./ELASTICSEARCH_USERNAME \ - --from-file=./KIBANA_HOST \ - --namespace=kube-system -``` - -{{% /tab %}} -{{% tab name="관리 서비스(Managed service)" %}} - -## 관리 서비스 - -이 탭은 Elastic Cloud에서 Elasticsearch 서비스 만에 대한 것으로, 이미 자체 관리 Elasticsearch와 Kibana 배포로 시크릿을 생성했다면, [Beats 배포](#deploy-the-beats)를 계속한다. - -### 자격증명(credentials) 설정 - -Elastic Cloud에서 관리되는 Elastic 서비스에 연결할 때, 쿠버네티스 시크릿을 생성하기 위해 편집할 두 파일이 있다. 파일은 다음과 같다. - -1. `ELASTIC_CLOUD_AUTH` -1. `ELASTIC_CLOUD_ID` - -디플로이먼트를 생성할 때에 Elasticsearch 콘솔에서 제공한 정보로 이를 설정한다. 여기 예시들이 있다. - -#### `ELASTIC_CLOUD_ID` - -``` -devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ== -``` - -#### `ELASTIC_CLOUD_AUTH` - -사용자 이름, 콜론(`:`) 및 비밀번호인데, 공백 또는 따옴표는 없다. - -``` -elastic:VFxJJf9Tjwer90wnfTghsn8w -``` - -### 필요 파일 편집하기 - -```shell -vi ELASTIC_CLOUD_ID -vi ELASTIC_CLOUD_AUTH -``` - -### 쿠버네티스 시크릿 생성하기 - -이 커맨드는 방금 편집한 파일을 기반으로 쿠버네티스의 시스템 수준의 네임스페이스(`kube-system`)에 시크릿을 생성한다. - -```shell -kubectl create secret generic dynamic-logging \ - --from-file=./ELASTIC_CLOUD_ID \ - --from-file=./ELASTIC_CLOUD_AUTH \ - --namespace=kube-system -``` - -{{% /tab %}} -{{< /tabs >}} - -## Beats 배포하기 {#deploy-the-beats} - -각 Beat마다 메니페스트 파일을 제공한다. 
이 메니페스트 파일은 앞서 생성한 시크릿을 사용하여, Elasticsearch 및 Kibana 서버에 연결하도록 Beats를 구성한다. - -### Filebeat에 대해 - -Filebeat는 쿠버네티스 노드와 해당 노두에서 실행되는 각 파드에서 실행되는 컨테이너의 로그를 수집한다. Filebeat는 {{< glossary_tooltip text="데몬 셋" term_id="daemonset" >}}으로 배포한다. Filebeat는 쿠버네티스 클러스터에서 실행 중인 애플리케이션을 자동 검색할 수 있다. 시작시에 Filebeat는 기존 컨테이너를 검색하고 이에 적절한 구성을 시작하고 새 시작/종료 이벤트를 감시한다. - -아래 내용은 Filebeat가 방명록 애플리케이션과 함께 배포된 Redis 컨테이너에서 Redis 로그를 찾아 구문분석할 수 있게 하는 자동 검색 구성이다. 이 구성은 `filebeat-kubernetes.yaml`파일에 있다. - -```yaml -- condition.contains: - kubernetes.labels.app: redis - config: - - module: redis - log: - input: - type: docker - containers.ids: - - ${data.kubernetes.container.id} - slowlog: - enabled: true - var.hosts: ["${data.host}:${data.port}"] -``` - -이것은 `redis` 컨테이너가 `app` 문자열을 포함하는 레이블로 감지될 때에 Filebeat 모듈 `redis`를 적용하도록 Filebeat를 구성한다. Redis 모듈은 Docker 입력 유형을 사용하여 컨테이너에서 `로그` 스트림을 수집할 수 있다(이 Redis 컨테이너의 STDOUT 스트림과 연관된 쿠버네티스 노드에서 파일 읽기). 또한 이 모듈은 컨테이너 메타 데이터에 제공되는 적절한 파드 호스트와 포트에 연결하여 Redis의 `slowlog` 항목을 수집할 수 있다. - -### Filebeat 배포 - -```shell -kubectl create -f filebeat-kubernetes.yaml -``` - -#### 확인 - -```shell -kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic -``` - -### Metricbeat에 대해 - -Metricbeat 자동 검색은 Filebeat와 같은 방식으로 구성된다. 다음은 Redis 컨테이너에 대한 Metricbeat의 자동 검색 구성이다. 이 구성은 `metricbeat-kubernetes.yaml`에 있다. - -```yaml -- condition.equals: - kubernetes.labels.tier: backend - config: - - module: redis - metricsets: ["info", "keyspace"] - period: 10s - - # Redis hosts - hosts: ["${data.host}:${data.port}"] -``` - -이것은 컨테이너가 `tier` 레이블이 `backend` 문자열과 같은 레이블로 감지될 때에 Metricbeat 모듈 `redis`를 적용하도록 Metricbeat를 구성한다. `redis` 모듈은 컨테이너 메타데이터에 제공되는 적절한 파드 호스트와 포트에 연결하여 컨테이너에서 `info` 및 `keyspace` 메트릭을 수집할 수 있다. - -### Metricbeat 배포 - -```shell -kubectl create -f metricbeat-kubernetes.yaml -``` - -#### 확인 - -```shell -kubectl get pods -n kube-system -l k8s-app=metricbeat -``` - -### Packetbeat에 대해 - -Packetbeat 구성은 Filebeat와 Metricbeat와는 다르다. 컨테이너 레이블과 일치시킬 패턴을 지정하지 않고, 구성은 관련 프로토콜 및 포트 번호를 기반으로 한다. 아래는 포트 번호의 하위 집합이다. - -{{< note >}} -비표준 포트로 서비스를 실행했다면 해당 포트를 `filebeat.yaml`에 적절한 유형에 추가하고, Packetbeat 데몬 셋을 삭제하고 생성한다. -{{< /note >}} - -```yaml -packetbeat.interfaces.device: any - -packetbeat.protocols: -- type: dns - ports: [53] - include_authorities: true - include_additionals: true - -- type: http - ports: [80, 8000, 8080, 9200] - -- type: mysql - ports: [3306] - -- type: redis - ports: [6379] - -packetbeat.flows: - timeout: 30s - period: 10s -``` - -#### Packetbeat 배포하기 - -```shell -kubectl create -f packetbeat-kubernetes.yaml -``` - -#### 확인하기 - -```shell -kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic -``` - -## Kibana에서 보기 - -브라우저에서 Kibana를 열고, **대시보드** 애플리케이션을 열어보자. 검색창에 kubernetes를 입력하고 쿠버네티스를 위한 Metricbeat 대시보드를 클릭한다. 이 대시보드에는 노드 상태, 배포 등의 보고 내용이 있다. - -대시보드 페이지에 Packetbeat를 검색하고 Packetbeat의 개요 페이지를 살펴보자. - -마찬가지로 Apache와 Redis를 위한 대시보드를 확인한다. 각 로그와 메트릭에 대한 대시보드가 표시된다. 이 Apache Metricbeat 대시보드는 비어 있다. Apache Filebeat 대시보드를 보고, 맨 아래로 스크롤하여 Apache 오류 로그를 확인한다. Apache에서 보여줄 메트릭이 없는 이유를 알려줄 것이다. - -Metricbeat에서 Apache 메트릭을 가져올 수 있게 하려면, mod-status 구성 파일을 포함한 컨피그맵을 추가하고 방명록을 재배포하여 서버 상태를 활성화해야 한다. - - -## 디플로이먼트를 확장하고 모니터링중인 새 파드를 확인하기 - -기존 디플로이먼트를 확인한다. - -```shell -kubectl get deployments -``` - -출력 - -``` -NAME READY UP-TO-DATE AVAILABLE AGE -frontend 3/3 3 3 3h27m -redis-master 1/1 1 1 3h27m -redis-slave 2/2 2 2 3h27m -``` - -front의 디플로이먼트를 두 개의 파드로 축소한다. 
- -```shell -kubectl scale --replicas=2 deployment/frontend -``` - -출력 - -``` -deployment.extensions/frontend scaled -``` - -frontend의 파드를 최대 3개의 파드로 확장한다. - -```shell -kubectl scale --replicas=3 deployment/frontend -``` - -## Kibana에서 변화 확인하기 - -스크린 캡처를 확인하여, 표시된 필터를 추가하고 해당 열을 뷰에 추가한다. ScalingReplicaSet 항목이 표시되고, 여기에서 이벤트 목록의 맨 위에 풀링되는 이미지, 마운트된 볼륨, 파드 시작 등을 보여준다. -![Kibana 디스커버리](https://raw.githubusercontent.com/elastic/examples/master/beats-k8s-send-anywhere/scaling-up.png) - -## {{% heading "cleanup" %}} - -디플로이먼트와 서비스를 삭제하면 실행중인 파드도 삭제된다. 한 커맨드로 여러 개의 리소스를 삭제하기 위해 레이블을 이용한다. - -1. 다음 커맨드를 실행하여 모든 파드, 디플로이먼트, 서비스를 삭제한다. - - ```shell - kubectl delete deployment -l app=redis - kubectl delete service -l app=redis - kubectl delete deployment -l app=guestbook - kubectl delete service -l app=guestbook - kubectl delete -f filebeat-kubernetes.yaml - kubectl delete -f metricbeat-kubernetes.yaml - kubectl delete -f packetbeat-kubernetes.yaml - kubectl delete secret dynamic-logging -n kube-system - ``` - -1. 실행 중인 파드가 없음을 확인하기 위해 파드 목록을 조회한다. - - ```shell - kubectl get pods - ``` - - 출력은 다음과 같아야 한다. - - ``` - No resources found. - ``` - -## {{% heading "whatsnext" %}} - -* [리소스 모니터링 도구](/ko/docs/tasks/debug-application-cluster/resource-usage-monitoring/)를 공부한다. -* [로깅 아키텍처](/ko/docs/concepts/cluster-administration/logging/)를 더 읽어본다. -* [애플리케이션 검사 및 디버깅](/ko/docs/tasks/debug-application-cluster/)을 더 읽어본다. -* [애플리케이션 문제 해결](/ko/docs/tasks/debug-application-cluster/resource-usage-monitoring/)을 더 읽어본다. diff --git a/content/ko/docs/tutorials/stateless-application/guestbook.md b/content/ko/docs/tutorials/stateless-application/guestbook.md index cce67800a6..24e05ab77f 100644 --- a/content/ko/docs/tutorials/stateless-application/guestbook.md +++ b/content/ko/docs/tutorials/stateless-application/guestbook.md @@ -1,26 +1,25 @@ --- -title: "예시: Redis를 사용한 PHP 방명록 애플리케이션 배포하기" +title: "예시: MongoDB를 사용한 PHP 방명록 애플리케이션 배포하기" content_type: tutorial weight: 20 card: name: tutorials weight: 30 - title: "상태를 유지하지 않는 예제: Redis를 사용한 PHP 방명록" + title: "상태를 유지하지 않는 예제: MongoDB를 사용한 PHP 방명록" +min-kubernetes-server-version: v1.14 --- -이 튜토리얼에서는 쿠버네티스와 [Docker](https://www.docker.com/)를 사용하여 간단한 멀티 티어 웹 애플리케이션을 빌드하고 배포하는 방법을 보여준다. 이 예제는 다음과 같은 구성으로 이루어져 있다. +이 튜토리얼에서는 쿠버네티스와 [Docker](https://www.docker.com/)를 사용하여 간단한 _(운영 준비가 아닌)_ 멀티 티어 웹 애플리케이션을 빌드하고 배포하는 방법을 보여준다. 이 예제는 다음과 같은 구성으로 이루어져 있다. -* 방명록을 저장하는 단일 인스턴스 [Redis](https://redis.io/) 마스터 -* 읽기를 제공하는 여러 개의 [복제된 Redis](https://redis.io/topics/replication) 인스턴스 +* 방명록을 저장하는 단일 인스턴스 [MongoDB](https://www.mongodb.com/) * 여러 개의 웹 프론트엔드 인스턴스 ## {{% heading "objectives" %}} -* Redis 마스터를 시작 -* Redis 슬레이브를 시작 +* Mongo 데이터베이스를 시작 * 방명록 프론트엔드를 시작 * 프론트엔드 서비스를 노출하고 확인 * 정리 하기 @@ -37,24 +36,28 @@ card: -## Redis 마스터를 실행하기 +## Mongo 데이터베이스를 실행 -방명록 애플리케이션은 Redis를 사용하여 데이터를 저장한다. Redis 마스터 인스턴스에 데이터를 기록하고 여러 Redis 슬레이브 인스턴스에서 데이터를 읽는다. +방명록 애플리케이션은 MongoDB를 사용해서 데이터를 저장한다. -### Redis 마스터의 디플로이먼트를 생성하기 +### Mongo 디플로이먼트를 생성하기 -아래의 매니페스트 파일은 단일 복제본 Redis 마스터 파드를 실행하는 디플로이먼트 컨트롤러를 지정한다. +아래의 매니페스트 파일은 단일 복제본 Mongo 파드를 실행하는 디플로이먼트 컨트롤러를 지정한다. -{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}} +{{< codenew file="application/guestbook/mongo-deployment.yaml" >}} 1. 매니페스트 파일을 다운로드한 디렉터리에서 터미널 창을 시작한다. -1. `redis-master-deployment.yaml` 파일을 통해 Redis 마스터의 디플로이먼트에 적용한다. +1. `mongo-deployment.yaml` 파일을 통해 MongoDB 디플로이먼트에 적용한다. 
```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml ``` + -1. 파드의 목록을 질의하여 Redis 마스터 파드가 실행 중인지 확인한다. +1. 파드의 목록을 질의하여 MongoDB 파드가 실행 중인지 확인한다. ```shell kubectl get pods @@ -64,32 +67,32 @@ card: ```shell NAME READY STATUS RESTARTS AGE - redis-master-1068406935-3lswp 1/1 Running 0 28s + mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s ``` -1. Redis 마스터 파드에서 로그를 보려면 다음 명령어를 실행한다. +2. MongoDB 파드에서 로그를 보려면 다음 명령어를 실행한다. ```shell - kubectl logs -f POD-NAME + kubectl logs -f deployment/mongo ``` -{{< note >}} -POD-NAME을 해당 파드 이름으로 수정해야 한다. -{{< /note >}} +### MongoDB 서비스 생성하기 -### Redis 마스터 서비스 생성하기 +방명록 애플리케이션에서 데이터를 쓰려면 MongoDB와 통신해야 한다. MongoDB 파드로 트래픽을 프록시하려면 [서비스](/ko/docs/concepts/services-networking/service/)를 적용해야 한다. 서비스는 파드에 접근하기 위한 정책을 정의한다. -방명록 애플리케이션에서 데이터를 쓰려면 Redis 마스터와 통신해야 한다. Redis 마스터 파드로 트래픽을 프록시하려면 [서비스](/ko/docs/concepts/services-networking/service/)를 적용해야 한다. 서비스는 파드에 접근하기 위한 정책을 정의한다. +{{< codenew file="application/guestbook/mongo-service.yaml" >}} -{{< codenew file="application/guestbook/redis-master-service.yaml" >}} - -1. `redis-master-service.yaml` 파일을 통해 Redis 마스터 서비스에 적용한다. +1. `mongo-service.yaml` 파일을 통해 MongoDB 서비스에 적용한다. ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml ``` + -1. 서비스의 목록을 질의하여 Redis 마스터 서비스가 실행 중인지 확인한다. +1. 서비스의 목록을 질의하여 MongoDB 서비스가 실행 중인지 확인한다. ```shell kubectl get service @@ -100,77 +103,17 @@ POD-NAME을 해당 파드 이름으로 수정해야 한다. ```shell NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 443/TCP 1m - redis-master ClusterIP 10.0.0.151 6379/TCP 8s + mongo ClusterIP 10.0.0.151 6379/TCP 8s ``` {{< note >}} -이 매니페스트 파일은 이전에 정의된 레이블과 일치하는 레이블 집합을 가진 `redis-master`라는 서비스를 생성하므로, 서비스는 네트워크 트래픽을 Redis 마스터 파드로 라우팅한다. +이 매니페스트 파일은 이전에 정의된 레이블과 일치하는 레이블 집합을 가진 `mongo`라는 서비스를 생성하므로, 서비스는 네트워크 트래픽을 MongoDB 파드로 라우팅한다. {{< /note >}} -## Redis 슬레이브 실행하기 - -Redis 마스터는 단일 파드이지만, 복제된 Redis 슬레이브를 추가하여 트래픽 요구 사항을 충족시킬 수 있다. - -### Redis 슬레이브의 디플로이먼트 생성하기 - -디플로이먼트는 매니페스트 파일에 설정된 구성에 따라 확장된다. 이 경우, 디플로이먼트 오브젝트는 두 개의 복제본을 지정한다. - -실행 중인 복제본이 없으면, 이 디플로이먼트는 컨테이너 클러스터에 있는 두 개의 복제본을 시작한다. 반대로 두 개 이상의 복제본이 실행 중이면, 두 개의 복제본이 실행될 때까지 축소된다. - -{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}} - -1. `redis-slave-deployment.yaml` 파일을 통해 Redis 슬레이브의 디플로이먼트에 적용한다. - - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml - ``` - -1. 파드의 목록을 질의하여 Redis 슬레이브 파드가 실행 중인지 확인한다. - - ```shell - kubectl get pods - ``` - - 결과는 아래와 같은 형태로 나타난다. - - ```shell - NAME READY STATUS RESTARTS AGE - redis-master-1068406935-3lswp 1/1 Running 0 1m - redis-slave-2005841000-fpvqc 0/1 ContainerCreating 0 6s - redis-slave-2005841000-phfv9 0/1 ContainerCreating 0 6s - ``` - -### Redis 슬레이브 서비스 생성하기 - -방명록 애플리케이션은 Redis 슬레이브와 통신하여 데이터를 읽는다. Redis 슬레이브를 확인할 수 있도록 하기 위해 서비스를 설정해야 한다. 서비스는 파드 집합에 투명한 로드 밸런싱을 제공한다. - -{{< codenew file="application/guestbook/redis-slave-service.yaml" >}} - -1. `redis-slave-service.yaml` 파일을 통해 Redis 슬레이브 서비스에 적용한다. - - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml - ``` - -1. 서비스의 목록을 질의하여 Redis 슬레이브 서비스가 실행 중인지 확인한다. - - ```shell - kubectl get services - ``` - - 결과는 아래와 같은 형태로 나타난다. 
- - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - kubernetes ClusterIP 10.0.0.1 443/TCP 2m - redis-master ClusterIP 10.0.0.151 6379/TCP 1m - redis-slave ClusterIP 10.0.0.223 6379/TCP 6s - ``` - ## 방명록 프론트엔드를 설정하고 노출하기 -방명록 애플리케이션에는 PHP로 작성된 HTTP 요청을 처리하는 웹 프론트엔드가 있다. 쓰기 요청을 위한 `redis-master` 서비스와 읽기 요청을 위한 `redis-slave` 서비스에 연결하도록 설정된다. +방명록 애플리케이션에는 PHP로 작성된 HTTP 요청을 처리하는 웹 프론트엔드가 있다. 방명록 항목들을 저장하기 위해 `mongo` 서비스에 연결하도록 구성 한다. ### 방명록 프론트엔드의 디플로이먼트 생성하기 @@ -182,10 +125,15 @@ Redis 마스터는 단일 파드이지만, 복제된 Redis 슬레이브를 추 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml ``` + + 1. 파드의 목록을 질의하여 세 개의 프론트엔드 복제본이 실행되고 있는지 확인한다. ```shell - kubectl get pods -l app=guestbook -l tier=frontend + kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend ``` 결과는 아래와 같은 형태로 나타난다. @@ -199,12 +147,12 @@ Redis 마스터는 단일 파드이지만, 복제된 Redis 슬레이브를 추 ### 프론트엔드 서비스 생성하기 -서비스의 기본 유형은 [ClusterIP](/ko/docs/concepts/services-networking/service/#publishing-services-service-types)이기 때문에 적용한 redis-slave 및 redis-master 서비스는 컨테이너 클러스터 내에서만 접근할 수 있다. `ClusterIP`는 서비스가 가리키는 파드 집합에 대한 단일 IP 주소를 제공한다. 이 IP 주소는 클러스터 내에서만 접근할 수 있다. +서비스의 기본 유형은 [ClusterIP](/ko/docs/concepts/services-networking/service/#publishing-services-service-types)이기 때문에 적용한 `mongo` 서비스는 컨테이너 클러스터 내에서만 접근할 수 있다. `ClusterIP`는 서비스가 가리키는 파드 집합에 대한 단일 IP 주소를 제공한다. 이 IP 주소는 클러스터 내에서만 접근할 수 있다. -게스트가 방명록에 접근할 수 있도록 하려면, 외부에서 볼 수 있도록 프론트엔드 서비스를 구성해야 한다. 그렇게 하면 클라이언트가 컨테이너 클러스터 외부에서 서비스를 요청할 수 있다. Minikube는 `NodePort`를 통해서만 서비스를 노출할 수 있다. +게스트가 방명록에 접근할 수 있도록 하려면, 외부에서 볼 수 있도록 프론트엔드 서비스를 구성해야 한다. 그렇게 하면 클라이언트가 쿠버네티스 클러스터 외부에서 서비스를 요청할 수 있다. 그러나 쿠버네티스 사용자는 `ClusterIP`를 사용하더라도 `kubectl port-forward`를 사용해서 서비스에 접근할 수 있다. {{< note >}} -Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우드 공급자는 외부 로드 밸런서를 지원한다. 클라우드 공급자가 로드 밸런서를 지원하고 이를 사용하려면 `type : NodePort`를 삭제하거나 주석 처리하고 `type : LoadBalancer`의 주석을 제거해야 한다. +Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우드 공급자는 외부 로드 밸런서를 지원한다. 클라우드 공급자가 로드 밸런서를 지원하고 이를 사용하려면 `type : LoadBalancer`의 주석을 제거해야 한다. {{< /note >}} {{< codenew file="application/guestbook/frontend-service.yaml" >}} @@ -215,6 +163,11 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml ``` + + 1. 서비스의 목록을 질의하여 프론트엔드 서비스가 실행 중인지 확인한다. ```shell @@ -225,29 +178,27 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend NodePort 10.0.0.112 80:31323/TCP 6s + frontend ClusterIP 10.0.0.112 80/TCP 6s kubernetes ClusterIP 10.0.0.1 443/TCP 4m - redis-master ClusterIP 10.0.0.151 6379/TCP 2m - redis-slave ClusterIP 10.0.0.223 6379/TCP 1m + mongo ClusterIP 10.0.0.151 6379/TCP 2m ``` -### `NodePort`를 통해 프론트엔드 서비스 확인하기 +### `kubectl port-forward`를 통해 프론트엔드 서비스 확인하기 -애플리케이션을 Minikube 또는 로컬 클러스터에 배포한 경우, 방명록을 보려면 IP 주소를 찾아야 한다. - -1. 프론트엔드 서비스의 IP 주소를 얻기 위해 아래 명령어를 실행한다. +1. 다음 명령어를 실행해서 로컬 머신의 `8080` 포트를 서비스의 `80` 포트로 전달한다. ```shell - minikube service frontend --url + kubectl port-forward svc/frontend 8080:80 ``` 결과는 아래와 같은 형태로 나타난다. ``` - http://192.168.99.100:31323 + Forwarding from 127.0.0.1:8080 -> 80 + Forwarding from [::1]:8080 -> 80 ``` -1. IP 주소를 복사하고, 방명록을 보기 위해 브라우저에서 페이지를 로드한다. +1. 방명록을 보기위해 브라우저에서 [http://localhost:8080](http://localhost:8080) 페이지를 로드한다. 
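If no browser is available on the machine running `kubectl`, the forwarded port can also be checked from a second terminal; a small sketch, assuming the `kubectl port-forward svc/frontend 8080:80` command above is still running:

```shell
# The guestbook frontend should answer on the locally forwarded port
curl http://localhost:8080
```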
### `LoadBalancer`를 통해 프론트엔드 서비스 확인하기 @@ -270,7 +221,7 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 ## 웹 프론트엔드 확장하기 -서버가 디플로이먼트 컨트롤러를 사용하는 서비스로 정의되어 있기 때문에 확장 또는 축소가 쉽다. +서버가 디플로이먼트 컨르롤러를 사용하는 서비스로 정의되어 있기에 필요에 따라 확장 또는 축소할 수 있다. 1. 프론트엔드 파드의 수를 확장하기 위해 아래 명령어를 실행한다. @@ -293,9 +244,7 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 frontend-3823415956-k22zn 1/1 Running 0 54m frontend-3823415956-w9gbt 1/1 Running 0 54m frontend-3823415956-x2pld 1/1 Running 0 5s - redis-master-1068406935-3lswp 1/1 Running 0 56m - redis-slave-2005841000-fpvqc 1/1 Running 0 55m - redis-slave-2005841000-phfv9 1/1 Running 0 55m + mongo-1068406935-3lswp 1/1 Running 0 56m ``` 1. 프론트엔드 파드의 수를 축소하기 위해 아래 명령어를 실행한다. @@ -316,9 +265,7 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 NAME READY STATUS RESTARTS AGE frontend-3823415956-k22zn 1/1 Running 0 1h frontend-3823415956-w9gbt 1/1 Running 0 1h - redis-master-1068406935-3lswp 1/1 Running 0 1h - redis-slave-2005841000-fpvqc 1/1 Running 0 1h - redis-slave-2005841000-phfv9 1/1 Running 0 1h + mongo-1068406935-3lswp 1/1 Running 0 1h ``` @@ -330,19 +277,18 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 1. 모든 파드, 디플로이먼트, 서비스를 삭제하기 위해 아래 명령어를 실행한다. ```shell - kubectl delete deployment -l app=redis - kubectl delete service -l app=redis - kubectl delete deployment -l app=guestbook - kubectl delete service -l app=guestbook + kubectl delete deployment -l app.kubernetes.io/name=mongo + kubectl delete service -l app.kubernetes.io/name=mongo + kubectl delete deployment -l app.kubernetes.io/name=guestbook + kubectl delete service -l app.kubernetes.io/name=guestbook ``` 결과는 아래와 같은 형태로 나타난다. ``` - deployment.apps "redis-master" deleted - deployment.apps "redis-slave" deleted - service "redis-master" deleted - service "redis-slave" deleted + deployment.apps "mongo" deleted + service "mongo" deleted + deployment.apps "frontend" deleted deployment.apps "frontend" deleted service "frontend" deleted ``` @@ -363,7 +309,6 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 ## {{% heading "whatsnext" %}} -* [ELK 로깅과 모니터링](/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/)을 방명록 애플리케이션에 추가하기 * [쿠버네티스 기초](/ko/docs/tutorials/kubernetes-basics/) 튜토리얼을 완료 * [MySQL과 Wordpress을 위한 퍼시스턴트 볼륨](/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)을 사용하여 블로그 생성하는데 쿠버네티스 이용하기 * [애플리케이션 접속](/ko/docs/concepts/services-networking/connect-applications-service/)에 대해 더 알아보기 diff --git a/content/ko/examples/application/guestbook/frontend-deployment.yaml b/content/ko/examples/application/guestbook/frontend-deployment.yaml index 23d64be644..613c654aa9 100644 --- a/content/ko/examples/application/guestbook/frontend-deployment.yaml +++ b/content/ko/examples/application/guestbook/frontend-deployment.yaml @@ -3,22 +3,24 @@ kind: Deployment metadata: name: frontend labels: - app: guestbook + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: selector: matchLabels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend replicas: 3 template: metadata: labels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: containers: - - name: php-redis - image: gcr.io/google-samples/gb-frontend:v4 + - name: guestbook + image: paulczar/gb-frontend:v5 + # image: gcr.io/google-samples/gb-frontend:v4 resources: requests: cpu: 100m @@ 
-26,13 +28,5 @@ spec: env: - name: GET_HOSTS_FROM value: dns - # Using `GET_HOSTS_FROM=dns` requires your cluster to - # provide a dns service. As of Kubernetes 1.3, DNS is a built-in - # service launched automatically. However, if the cluster you are using - # does not have a built-in DNS service, you can instead - # access an environment variable to find the master - # service's host. To do so, comment out the 'value: dns' line above, and - # uncomment the line below: - # value: env ports: - containerPort: 80 diff --git a/content/ko/examples/application/guestbook/frontend-service.yaml b/content/ko/examples/application/guestbook/frontend-service.yaml index 6f283f347b..34ad3771d7 100644 --- a/content/ko/examples/application/guestbook/frontend-service.yaml +++ b/content/ko/examples/application/guestbook/frontend-service.yaml @@ -3,16 +3,14 @@ kind: Service metadata: name: frontend labels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: - # comment or delete the following line if you want to use a LoadBalancer - type: NodePort # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend diff --git a/content/ko/examples/application/guestbook/mongo-deployment.yaml b/content/ko/examples/application/guestbook/mongo-deployment.yaml new file mode 100644 index 0000000000..04908ce25b --- /dev/null +++ b/content/ko/examples/application/guestbook/mongo-deployment.yaml @@ -0,0 +1,31 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: mongo + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend +spec: + selector: + matchLabels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend + replicas: 1 + template: + metadata: + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend + spec: + containers: + - name: mongo + image: mongo:4.2 + args: + - --bind_ip + - 0.0.0.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 27017 diff --git a/content/ko/examples/application/guestbook/mongo-service.yaml b/content/ko/examples/application/guestbook/mongo-service.yaml new file mode 100644 index 0000000000..b9cef607bc --- /dev/null +++ b/content/ko/examples/application/guestbook/mongo-service.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Service +metadata: + name: mongo + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend +spec: + ports: + - port: 27017 + targetPort: 27017 + selector: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend diff --git a/content/ko/examples/application/guestbook/redis-master-deployment.yaml b/content/ko/examples/application/guestbook/redis-master-deployment.yaml deleted file mode 100644 index 478216d1ac..0000000000 --- a/content/ko/examples/application/guestbook/redis-master-deployment.yaml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-master - labels: - app: redis -spec: - selector: - matchLabels: - app: redis - role: master - tier: backend - replicas: 1 - template: - metadata: - labels: - app: redis - role: master - tier: backend - spec: - containers: - - name: master - image: k8s.gcr.io/redis:e2e # or just image: redis - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - 
containerPort: 6379 diff --git a/content/ko/examples/application/guestbook/redis-master-service.yaml b/content/ko/examples/application/guestbook/redis-master-service.yaml deleted file mode 100644 index 65cef2191c..0000000000 --- a/content/ko/examples/application/guestbook/redis-master-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: redis-master - labels: - app: redis - role: master - tier: backend -spec: - ports: - - name: redis - port: 6379 - targetPort: 6379 - selector: - app: redis - role: master - tier: backend diff --git a/content/ko/examples/application/guestbook/redis-slave-deployment.yaml b/content/ko/examples/application/guestbook/redis-slave-deployment.yaml deleted file mode 100644 index 1a7b04386a..0000000000 --- a/content/ko/examples/application/guestbook/redis-slave-deployment.yaml +++ /dev/null @@ -1,40 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-slave - labels: - app: redis -spec: - selector: - matchLabels: - app: redis - role: slave - tier: backend - replicas: 2 - template: - metadata: - labels: - app: redis - role: slave - tier: backend - spec: - containers: - - name: slave - image: gcr.io/google_samples/gb-redisslave:v3 - resources: - requests: - cpu: 100m - memory: 100Mi - env: - - name: GET_HOSTS_FROM - value: dns - # Using `GET_HOSTS_FROM=dns` requires your cluster to - # provide a dns service. As of Kubernetes 1.3, DNS is a built-in - # service launched automatically. However, if the cluster you are using - # does not have a built-in DNS service, you can instead - # access an environment variable to find the master - # service's host. To do so, comment out the 'value: dns' line above, and - # uncomment the line below: - # value: env - ports: - - containerPort: 6379 diff --git a/content/ko/examples/application/guestbook/redis-slave-service.yaml b/content/ko/examples/application/guestbook/redis-slave-service.yaml deleted file mode 100644 index 238fd63fb6..0000000000 --- a/content/ko/examples/application/guestbook/redis-slave-service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: redis-slave - labels: - app: redis - role: slave - tier: backend -spec: - ports: - - port: 6379 - selector: - app: redis - role: slave - tier: backend diff --git a/content/ko/examples/application/job/cronjob.yaml b/content/ko/examples/application/job/cronjob.yaml index 3ca130289e..816d682f28 100644 --- a/content/ko/examples/application/job/cronjob.yaml +++ b/content/ko/examples/application/job/cronjob.yaml @@ -12,7 +12,7 @@ spec: - name: hello image: busybox imagePullPolicy: IfNotPresent - args: + command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster diff --git a/content/ko/examples/policy/priority-class-resourcequota.yaml b/content/ko/examples/policy/priority-class-resourcequota.yaml new file mode 100644 index 0000000000..7350d00c8f --- /dev/null +++ b/content/ko/examples/policy/priority-class-resourcequota.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: ResourceQuota +metadata: + name: pods-cluster-services +spec: + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["cluster-services"] \ No newline at end of file diff --git a/content/pl/docs/concepts/overview/kubernetes-api.md b/content/pl/docs/concepts/overview/kubernetes-api.md index cd376dab1d..731b22cacc 100644 --- a/content/pl/docs/concepts/overview/kubernetes-api.md +++ b/content/pl/docs/concepts/overview/kubernetes-api.md @@ -4,7 +4,7 @@ content_type: concept weight: 30 
description: > API Kubernetesa służy do odpytywania i zmiany stanu obiektów Kubernetesa. - Sercem warstwy sterowania Kubernetesa jest serwer API i udostępniane przez niego HTTP API. Przez ten serwer odbywa się komunikacja pomiędzy użytkownikami, różnymi częściami składowymi klastra oraz komponentami zewnętrznymi. + Sercem warstwy sterowania Kubernetesa jest serwer API i udostępniane po HTTP API. Przez ten serwer odbywa się komunikacja pomiędzy użytkownikami, różnymi częściami składowymi klastra oraz komponentami zewnętrznymi. card: name: concepts weight: 30 @@ -14,13 +14,16 @@ card: Sercem {{< glossary_tooltip text="warstwy sterowania" term_id="control-plane" >}} Kubernetes jest {{< glossary_tooltip text="serwer API" term_id="kube-apiserver" >}}. Serwer udostępnia -API poprzez HTTP, umożliwiając wzajemną komunikację pomiędzy użytkownikami, częściami składowymi klastra i komponentami zewnętrznymi. +API poprzez HTTP, umożliwiając wzajemną komunikację pomiędzy użytkownikami, częściami składowymi klastra +i komponentami zewnętrznymi. -API Kubernetes pozwala na sprawdzanie i zmianę stanu obiektów (przykładowo: pody, _Namespaces_, _ConfigMaps_, _Events_). +API Kubernetesa pozwala na sprawdzanie i zmianę stanu obiektów +(przykładowo: pody, _Namespaces_, _ConfigMaps_, _Events_). Większość operacji może zostać wykonana poprzez interfejs linii komend (CLI) [kubectl](/docs/reference/kubectl/overview/) lub inne -programy, takie jak [kubeadm](/docs/reference/setup-tools/kubeadm/), które używają +programy, takie jak +[kubeadm](/docs/reference/setup-tools/kubeadm/), które używają API. Możesz też korzystać z API bezpośrednio przez wywołania typu REST. Jeśli piszesz aplikację używającą API Kubernetesa, @@ -66,54 +69,77 @@ Aby wybrać format odpowiedzi, użyj nagłówków żądania zgodnie z tabelą: -W Kubernetesie zaimplementowany jest alternatywny format serializacji na potrzeby API oparty o Protobuf, -który jest przede wszystkim przeznaczony na potrzeby wewnętrznej komunikacji w klastrze -i opisany w [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md). -Pliki IDL dla każdego ze schematów można znaleźć w pakietach Go, które definiują obiekty API. +W Kubernetesie zaimplementowany jest alternatywny format serializacji na potrzeby API oparty o +Protobuf, który jest przede wszystkim przeznaczony na potrzeby wewnętrznej komunikacji w klastrze. +Więcej szczegółów znajduje się w dokumencie [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md). +oraz w plikach *Interface Definition Language* (IDL) dla każdego ze schematów +zamieszczonych w pakietach Go, które definiują obiekty API. -## Zmiany API +## Przechowywanie stanu + +Kubernetes przechowuje serializowany stan swoich obiektów w +{{< glossary_tooltip term_id="etcd" >}}. + +## Grupy i wersje API + +Aby ułatwić usuwanie poszczególnych pól lub restrukturyzację reprezentacji zasobów, Kubernetes obsługuje +równocześnie wiele wersji API, każde poprzez osobną ścieżkę API, +na przykład: `/api/v1` lub `/apis/rbac.authorization.k8s.io/v1alpha1`. + +Rozdział wersji wprowadzony jest na poziomie całego API, a nie na poziomach poszczególnych zasobów lub pól, +aby być pewnym, że API odzwierciedla w sposób przejrzysty i spójny zasoby systemowe +i ich zachowania oraz pozwala na kontrolowany dostęp do tych API, które są w fazie wycofywania +lub fazie eksperymentalnej. 
+ +Aby ułatwić rozbudowę API Kubernetes, wprowadziliśmy +[*grupy API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md), które mogą +być [włączane i wyłączane](/docs/reference/using-api/#enabling-or-disabling). + +Zasoby API są rozróżniane poprzez przynależność do grupy API, typ zasobu, przestrzeń nazw (_namespace_, +o ile ma zastosowanie) oraz nazwę. Serwer API może przeprowadzać konwersję między +różnymi wersjami API w sposób niewidoczny dla użytkownika: wszystkie te różne wersje +reprezentują w rzeczywistości ten sam zasób. Serwer API może udostępniać te same dane +poprzez kilka różnych wersji API. + +Załóżmy przykładowo, że istnieją dwie wersje `v1` i `v1beta1` tego samego zasobu. +Obiekt utworzony przez wersję `v1beta1` może być odczytany, +zaktualizowany i skasowany zarówno przez wersję +`v1beta1`, jak i `v1`. + +## Trwałość API Z naszego doświadczenia wynika, że każdy system, który odniósł sukces, musi się nieustająco rozwijać w miarę zmieniających się potrzeb. Dlatego Kubernetes został tak zaprojektowany, aby API mogło się zmieniać i rozrastać. Projekt Kubernetes dąży do tego, aby nie wprowadzać zmian niezgodnych z istniejącymi aplikacjami klienckimi i utrzymywać zgodność przez wystarczająco długi czas, aby inne projekty zdążyły się dostosować do zmian. -W ogólności, nowe zasoby i pola definiujące zasoby API są dodawane stosunkowo często. Usuwanie zasobów lub pól -jest regulowane przez [API deprecation policy](/docs/reference/using-api/deprecation-policy/). -Definicja zmiany zgodnej (kompatybilnej) oraz metody wprowadzania zmian w API opisano w szczegółach -w [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md). +W ogólności, nowe zasoby i pola definiujące zasoby API są dodawane stosunkowo często. +Usuwanie zasobów lub pól jest regulowane przez +[API deprecation policy](/docs/reference/using-api/deprecation-policy/). -## Grupy i wersje API +Po osiągnięciu przez API statusu ogólnej dostępności (_general availability_ - GA), +oznaczanej zazwyczaj jako wersja API `v1`, bardzo zależy nam na utrzymaniu jej zgodności w kolejnych wydaniach. +Kubernetes utrzymuje także zgodność dla wersji _beta_ API tam, gdzie jest to możliwe: +jeśli zdecydowałeś się używać API w wersji beta, możesz z niego korzystać także później, +kiedy dana funkcjonalność osiągnie status stabilnej. -Aby ułatwić usuwanie poszczególnych pól lub restrukturyzację reprezentacji zasobów, Kubernetes obsługuje -równocześnie wiele wersji API, każde poprzez osobną ścieżkę API, na przykład: `/api/v1` lub -`/apis/rbac.authorization.k8s.io/v1alpha1`. - -Rozdział wersji wprowadzony jest na poziomie całego API, a nie na poziomach poszczególnych zasobów lub pól, aby być pewnym, -że API odzwierciedla w sposób przejrzysty i spójny zasoby systemowe i ich zachowania i pozwala -na kontrolowany dostęp do tych API, które są w fazie wycofywania lub fazie eksperymentalnej. - -Aby ułatwić rozbudowę API Kubernetes, wprowadziliśmy [*grupy API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md), -które mogą być [włączane i wyłączane](/docs/reference/using-api/#enabling-or-disabling). - -Zasoby API są rozróżniane poprzez przynależność do grupy API, typ zasobu, przestrzeń nazw (_namespace_, -o ile ma zastosowanie) oraz nazwę. Serwer API może obsługiwać -te same dane poprzez różne wersje API i przeprowadzać konwersję między -różnymi wersjami API w sposób niewidoczny dla użytkownika. 
Wszystkie te różne wersje -reprezentują w rzeczywistości ten sam zasób. Załóżmy przykładowo, że istnieją dwie -wersje `v1` i `v1beta1` tego samego zasobu. Obiekt utworzony przez -wersję `v1beta1` może być odczytany, zaktualizowany i skasowany zarówno przez wersję -`v1beta1`, jak i `v1`. +{{< note >}} +Mimo, że Kubernetes stara się także zachować zgodność dla API w wersji _alpha_, zdarzają się przypadki, +kiedy nie jest to możliwe. Jeśli korzystasz z API w wersji alfa, przed aktualizacją klastra do nowej wersji +zalecamy sprawdzenie w informacjach o wydaniu, czy nie nastąpiła jakaś zmiana w tej części API. +{{< /note >}} Zajrzyj do [API versions reference](/docs/reference/using-api/#api-versioning) -po szczegółowe informacje, jak definiuje się poziomy wersji API. +po szczegółowe definicje różnych poziomów wersji API. + + ## Rozbudowa API -API Kubernetesa można rozbudowywać (rozszerzać) na dwa sposoby: +API Kubernetesa można rozszerzać na dwa sposoby: -1. [Definicje zasobów własnych](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) - pozwalają deklaratywnie określać, jak serwer API powinien dostarczać wybrane zasoby API. +1. [Definicje zasobów własnych (_custom resources_)](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) + pozwalają deklaratywnie określać, jak serwer API powinien dostarczać wybrane przez Ciebie zasoby API. 1. Można także rozszerzać API Kubernetesa implementując [warstwę agregacji](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/). @@ -121,6 +147,9 @@ API Kubernetesa można rozbudowywać (rozszerzać) na dwa sposoby: - Naucz się, jak rozbudowywać API Kubernetesa poprzez dodawanie własnych [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). -- [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) opisuje +- [Controlling Access To The Kubernetes API](/docs/concepts/security/controlling-access/) opisuje sposoby, jakimi klaster zarządza dostępem do API. -- Punkty dostępowe API _(endpoints)_, typy zasobów i przykłady zamieszczono w [API Reference](/docs/reference/kubernetes-api/). +- Punkty dostępowe API _(endpoints)_, typy zasobów i przykłady zamieszczono w + [API Reference](/docs/reference/kubernetes-api/). +- Aby dowiedzieć się, jaki rodzaj zmian można określić jako zgodne i jak zmieniać API, zajrzyj do + [API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme). diff --git a/content/pl/docs/concepts/overview/what-is-kubernetes.md b/content/pl/docs/concepts/overview/what-is-kubernetes.md index db8ea18b70..d28c841553 100644 --- a/content/pl/docs/concepts/overview/what-is-kubernetes.md +++ b/content/pl/docs/concepts/overview/what-is-kubernetes.md @@ -42,7 +42,7 @@ Kontenery działają w sposób zbliżony do maszyn wirtualnych, ale mają mniejs Kontenery zyskały popularność ze względu na swoje zalety, takie jak: * Szybkość i elastyczność w tworzeniu i instalacji aplikacji: obraz kontenera buduje się łatwiej niż obraz VM. -* Ułatwienie ciągłego rozwoju, integracji oraz wdrażania aplikacji (*Continuous development, integration, and deployment*): obrazy kontenerów mogą być budowane w sposób wiarygodny i częsty. Wycofanie zmian jest łatwe i szybkie (ponieważ obrazy są niezmienne). +* Ułatwienie ciągłego rozwoju, integracji oraz wdrażania aplikacji (*Continuous development, integration, and deployment*): obrazy kontenerów mogą być budowane w sposób wiarygodny i częsty. 
Wycofywanie zmian jest skuteczne i szybkie (ponieważ obrazy są niezmienne). * Rozdzielenie zadań *Dev* i *Ops*: obrazy kontenerów powstają w fazie *build/release*, oddzielając w ten sposób aplikacje od infrastruktury. * Obserwowalność obejmuje nie tylko informacje i metryki z poziomu systemu operacyjnego, ale także poprawność działania samej aplikacji i inne sygnały. * Spójność środowiska na etapach rozwoju oprogramowania, testowania i działania w trybie produkcyjnym: działa w ten sam sposób na laptopie i w chmurze. diff --git a/content/pl/docs/reference/_index.md b/content/pl/docs/reference/_index.md index bfc120218c..d598908162 100644 --- a/content/pl/docs/reference/_index.md +++ b/content/pl/docs/reference/_index.md @@ -8,13 +8,14 @@ content_type: concept -Tutaj znajdziesz dokumentację źródłową Kubernetes. +Tutaj znajdziesz dokumentację źródłową Kubernetesa. ## Dokumentacja API -* [Dokumentacja źródłowa API Kubernetesa {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/) +* [Kubernetes API Reference](/docs/reference/kubernetes-api/) +* [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) * [Using The Kubernetes API](/docs/reference/using-api/) - ogólne informacje na temat API Kubernetesa. ## Biblioteki klientów API diff --git a/content/pl/docs/reference/tools.md b/content/pl/docs/reference/tools.md index 5d60370ee3..2ec66964ed 100644 --- a/content/pl/docs/reference/tools.md +++ b/content/pl/docs/reference/tools.md @@ -18,7 +18,7 @@ Kubernetes zawiera różne wbudowane narzędzia służące do pracy z systemem: ## Minikube -[`minikube`](https://minikube.sigs.k8s.io/docs/) to narzędzie do łatwego uruchamiania lokalnego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów. +[`minikube`](https://minikube.sigs.k8s.io/docs/) to narzędzie do uruchamiania jednowęzłowego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów. ## Pulpit *(Dashboard)* diff --git a/content/pl/docs/tutorials/_index.md b/content/pl/docs/tutorials/_index.md index e9f8ed32d8..c55fd9c3ff 100644 --- a/content/pl/docs/tutorials/_index.md +++ b/content/pl/docs/tutorials/_index.md @@ -32,7 +32,7 @@ Przed zapoznaniem się z samouczkami warto stworzyć zakładkę do * [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) -* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) +* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/) ## Aplikacje stanowe *(Stateful Applications)* diff --git a/content/pl/docs/tutorials/kubernetes-basics/_index.html b/content/pl/docs/tutorials/kubernetes-basics/_index.html index 39d8bf63c9..e27a3ad6bf 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/_index.html +++ b/content/pl/docs/tutorials/kubernetes-basics/_index.html @@ -41,7 +41,7 @@ card:

    Co Kubernetes może dla Ciebie zrobić?

    -

    Użytkownicy oczekują od współczesnych serwisów internetowych dostępności non-stop, a deweloperzy chcą móc instalować nowe wersje swoich serwisów kilka razy dziennie. Używając kontenerów można przygotowywać oprogramowanie w taki sposób, aby mogło być instalowane i aktualizowane łatwo i nie powodując żadnych przestojów. Kubernetes pomaga uruchamiać te aplikacje w kontenerach tam, gdzie chcesz i kiedy chcesz i znajdować niezbędne zasoby i narzędzia wymagane do ich pracy. Kubernetes może działać w środowiskach produkcyjnych, jest otwartym oprogramowaniem zaprojektowanym z wykorzystaniem nagromadzonego przez Google doświadczenia w zarządzaniu kontenerami, w połączeniu z najcenniejszymi ideami społeczności.

    +

    Użytkownicy oczekują od współczesnych serwisów internetowych dostępności non-stop, a deweloperzy chcą móc instalować nowe wersje swoich serwisów kilka razy dziennie. Używając kontenerów można przygotowywać oprogramowanie w taki sposób, aby mogło być instalowane i aktualizowane nie powodując żadnych przestojów. Kubernetes pomaga uruchamiać te aplikacje w kontenerach tam, gdzie chcesz i kiedy chcesz i znajdować niezbędne zasoby i narzędzia wymagane do ich pracy. Kubernetes może działać w środowiskach produkcyjnych, jest otwartym oprogramowaniem zaprojektowanym z wykorzystaniem nagromadzonego przez Google doświadczenia w zarządzaniu kontenerami, w połączeniu z najcenniejszymi ideami społeczności.

    diff --git a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 5b90420c5a..c879aa82b9 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -91,9 +91,7 @@ weight: 10

- Na potrzeby pierwszej instalacji użyjesz aplikacji na Node.js zapakowaną w kontener Docker-a. (Jeśli jeszcze nie próbowałeś stworzyć - aplikacji na Node.js i uruchomić za pomocą kontenerów, możesz spróbować teraz, kierując się instrukcjami samouczka - Hello Minikube). + Na potrzeby pierwszej instalacji użyjesz aplikacji hello-node zapakowanej w kontener Docker-a, która korzysta z NGINXa i powtarza wszystkie wysłane do niej zapytania. (Jeśli jeszcze nie próbowałeś stworzyć aplikacji hello-node i uruchomić za pomocą kontenerów, możesz spróbować teraz, kierując się instrukcjami samouczka Hello Minikube).

    Teraz, kiedy wiesz, czym są Deploymenty, przejdźmy do samouczka online, żeby zainstalować naszą pierwszą aplikację!
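As a hedged illustration of the deployment described above: the image name below is the one used by the Hello Minikube tutorial and is only an assumption for this sketch, not part of this change.

```shell
# Create a Deployment for the hello-node sample and verify that its Pod comes up
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl get deployments
kubectl get pods
```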

    diff --git a/content/pl/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/pl/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 4ad9a7be76..f9f9134e4a 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/pl/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -64,12 +64,6 @@ weight: 10
    -
    -
    -

    -
    -
    -

    Serwis kieruje przychodzący ruch do grupy Podów. Serwisy są obiektami abstrakcyjnymi, dzięki którym pody mogą się psuć i być zastępowane przez Kubernetes nowymi bez ujemnego wpływu na działanie twoich aplikacji. Detekcją nowych podów i kierowaniem ruchu pomiędzy zależnymi podami (takimi, jak składowe front-end i back-end w aplikacji) zajmują się Serwisy Kubernetes.
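A minimal command-line sketch of creating such a Service, assuming a Deployment named `hello-node` whose containers listen on port 8080 (the name and port are illustrative, not part of this change):

```shell
# Expose the Deployment through a NodePort Service and inspect the result
kubectl expose deployment hello-node --type=NodePort --port=8080
kubectl get services
kubectl describe service hello-node
```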

    diff --git a/content/pt/_index.html b/content/pt/_index.html index 9721bcdd37..628047e85c 100644 --- a/content/pt/_index.html +++ b/content/pt/_index.html @@ -47,7 +47,7 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu


- KubeCon em Shanghai em June 24-26, 2019 + KubeCon em Shanghai, de 24 a 26 de junho de 2019
    @@ -57,4 +57,4 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu {{< blocks/kubernetes-features >}} -{{< blocks/case-studies >}} \ No newline at end of file +{{< blocks/case-studies >}} diff --git a/content/pt/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md b/content/pt/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md new file mode 100644 index 0000000000..7440689d5f --- /dev/null +++ b/content/pt/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md @@ -0,0 +1,47 @@ +--- +layout: blog +title: 'Escalando a rede do Kubernetes com EndpointSlices' +date: 2020-09-02 +slug: scaling-kubernetes-networking-with-endpointslices +--- + +**Autor:** Rob Scott (Google) + +EndpointSlices é um novo tipo de API que provê uma alternativa escalável e extensível à API de Endpoints. EndpointSlices mantém o rastreio dos endereços IP, portas, informações de topologia e prontidão de Pods que compõem um serviço. + +No Kubernetes 1.19 essa funcionalidade está habilitada por padrão, com o kube-proxy lendo os [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) ao invés de Endpoints. Apesar de isso ser uma mudança praticamente transparente, resulta numa melhoria notável de escalabilidade em grandes clusters. Também permite a adição de novas funcionalidades em releases futuras do Kubernetes, como o [Roteamento baseado em topologia.](/docs/concepts/services-networking/service-topology/). + +## Limitações de escalabilidade da API de Endpoints +Na API de Endpoints, existia apenas um recurso de Endpoint por serviço (Service). Isso significa que +era necessário ser possível armazenar endereços IPs e portas para cada Pod que compunha o serviço correspondente. Isso resultava em recursos imensos de API. Para piorar, o kube-proxy rodava em cada um dos nós e observava qualquer alteração nos recursos de Endpoint. Mesmo que fosse uma simples mudança em um Endpoint, todo o objeto precisava ser enviado para cada uma das instâncias do kube-proxy. + +Outra limitação da API de Endpoints era que ela limitava o número de objetos que podiam ser associados a um _Service_. O tamanho padrão de um objeto armazenado no etcd é 1.5MB. Em alguns casos, isso poderia limitar um Endpoint a 5,000 IPs de Pod. Isso não chega a ser um problema para a maioria dos usuários, mas torna-se um problema significativo para serviços que se aproximem desse tamanho. + +Para demonstrar o quão significante se torna esse problema em grande escala, vamos usar de um simples exemplo: Imagine um _Service_ que possua 5,000 Pods, e que possa causar o Endpoint a ter 1.5Mb . Se apenas um Endpoint nessa lista sofra uma alteração, todo o objeto de Endpoint precisará ser redistribuído para cada um dos nós do cluster. Em um cluster com 3.000 nós, essa atualização causará o envio de 4.5Gb de dados (1.5Mb de Endpoints * 3,000 nós) para todo o cluster. Isso é quase que o suficiente para encher um DVD, e acontecerá para cada mudança de Endpoint. Agora imagine uma atualização gradual em um _Deployment_ que resulte nos 5,000 Pods serem substituídos - isso é mais que 22Tb (ou 5,000 DVDs) de dados transferidos. + +## Dividindo os endpoints com a API de EndpointSlice +A API de EndpointSlice foi desenhada para resolver esse problema com um modelo similar de _sharding_. Ao invés de rastrar todos os IPs dos Pods para um _Service_, com um único recurso de Endpoint, nós dividimos eles em múltiplos EndpointSlices menores. + +Usemos por exemplo um serviço com 15 pods. 
Nós teríamos um único recurso de Endpoints referente a todos eles. Se o EndpointSlices for configurado para armazenar 5 _endpoints_ cada, nós teríamos 3 EndpointSlices diferentes: +![EndpointSlices](/images/blog/2020-09-02-scaling-kubernetes-networking-endpointslices/endpoint-slices.png) + +Por padrão, o EndpointSlices armazena um máximo de 100 _endpoints_ cada, podendo isso ser configurado com a flag `--max-endpoints-per-slice` no kube-controller-manager. + +## EndpointSlices provê uma melhoria de escalabilidade em 10x +Essa API melhora dramaticamente a escalabilidade da rede. Agora quando um Pod é adicionado ou removido, apenas 1 pequeno EndpointSlice necessita ser atualizado. Essa diferença começa a ser notada quando centenas ou milhares de Pods compõem um único _Service_. + +Mais significativo, agora que todos os IPs de Pods para um _Service_ não precisam ser armazenados em um único recurso, nós não precisamos nos preocupar com o limite de tamanho para objetos armazendos no etcd. EndpointSlices já foram utilizados para escalar um serviço além de 100,000 endpoints de rede. + +Tudo isso é possível com uma melhoria significativa de performance feita no kube-proxy. Quando o EndpointSlices é usado em grande escala, muito menos dados serão transferidos para as atualizações de endpoints e o kube-proxy torna-se mais rápido para atualizar regras do iptables ou do ipvs. Além disso, os _Services_ podem escalar agora para pelo menos 10x mais além dos limites anteriores. + +## EndpointSlices permitem novas funcionalidades +Introduzido como uma funcionalidade alpha no Kubernetes v1.16, os EndpointSlices foram construídos para permitir algumas novas funcionalidades arrebatadoras em futuras versões do Kubernetes. Isso inclui serviços dual-stack, roteamento baseado em topologia e subconjuntos de _endpoints_. + +Serviços Dual-stack são uma nova funcionalidade que foi desenvolvida juntamente com o EndpointSlices. Eles irão utilizar simultâneamente endereços IPv4 e IPv6 para serviços, e dependem do campo addressType do Endpointslices para conter esses novos tipos de endereço por família de IP. + +O roteamento baseado por topologia irá atualizar o kube-proxy para dar preferência no roteamento de requisições para a mesma região ou zona, utilizando-se de campos de topologia armazenados em cada endpoint dentro de um EndpointSlice. Como uma melhoria futura disso, estamos explorando o potencial de subconjuntos de endpoint. Isso irá permitir o kube-proxy apenas observar um subconjunto de EndpointSlices. Por exemplo, isso pode ser combinado com o roteamento baseado em topologia e assim, o kube-proxy precisará observar apenas EndpointSlices contendo _endpoints_ na mesma zona. Isso irá permitir uma outra melhoria significativa de escalabilidade. + +## O que isso significa para a API de Endpoints? +Apesar da API de EndpointSlice prover uma alternativa nova e escalável à API de Endpoints, a API de Endpoints continuará a ser considerada uma funcionalidade estável. A mudança mais significativa para a API de Endpoints envolve começar a truncar Endpoints que podem causar problemas de escalabilidade. + +A API de Endpoints não será removida, mas muitas novas funcionalidades irão depender da nova API EndpointSlice. Para obter vantágem da funcionalidade e escalabilidade que os EndpointSlices provém, aplicações que hoje consomem a API de Endpoints devem considerar suportar EndpointSlices no futuro. 
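The post explains that each Service's endpoints are split across several EndpointSlice objects, each labeled with the Service that owns it. A quick way to see that sharding on a cluster where the feature is enabled, assuming a Service named `frontend` in the current namespace (an illustrative name):

```shell
# EndpointSlices carry a kubernetes.io/service-name label pointing at the owning Service
kubectl get endpointslices -l kubernetes.io/service-name=frontend

# Compare with the single Endpoints object that the older API keeps for the same Service
kubectl get endpoints frontend
```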
diff --git a/content/pt/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md b/content/pt/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md new file mode 100644 index 0000000000..ada16762a0 --- /dev/null +++ b/content/pt/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md @@ -0,0 +1,45 @@ +--- +layout: blog +title: "Não entre em pânico: Kubernetes e Docker" +date: 2020-12-02 +slug: dont-panic-kubernetes-and-docker +--- + +**Autores / Autoras**: Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas + +**Tradução:** João Brito + +Kubernetes está [deixando de usar Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation) como seu agente de execução após a versão v1.20. + +**Não entre em pânico. Não é tão dramático quanto parece.** + +TL;DR Docker como um agente de execução primário está sendo deixado de lado em favor de agentes de execução que utilizam a Interface de Agente de Execução de Containers (Container Runtime Interface "CRI") criada para o Kubernetes. As imagens criadas com o Docker continuarão a funcionar em seu cluster com os agentes atuais, como sempre estiveram. + +Se você é um usuário final de Kubernetes, quase nada mudará para você. Isso não significa a morte do Docker, e isso não significa que você não pode, ou não deva, usar ferramentas Docker em desenvolvimento mais. Docker ainda é uma ferramenta útil para a construção de containers, e as imagens resultantes de executar `docker build` ainda rodarão em seu cluster Kubernetes. + +Se você está usando um Kubernetes gerenciado como GKE, EKS, ou AKS (que usa como [padrão containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)) você precisará ter certeza que seus nós estão usando um agente de execução de container suportado antes que o suporte ao Docker seja removido nas versões futuras do Kubernetes. Se você tem mudanças em seus nós, talvez você precise atualizá-los baseado em seu ambiente e necessidades do agente de execução. + +Se você está rodando seus próprios clusters, você também precisa fazer mudanças para evitar quebras em seu cluster. Na versão v1.20, você terá o aviso de alerta da perda de suporte ao Docker. Quando o suporte ao agente de execução do Docker for removido em uma versão futura (atualmente planejado para a versão 1.22 no final de 2021) do Kubernetes ele não será mais suportado e você precisará trocar para um dos outros agentes de execução de container compatível, como o containerd ou CRI-O. Mas tenha certeza que esse agente de execução escolhido tenha suporte às configurações do daemon do Docker usadas atualmente (Ex.: logs) + +## Então porque a confusão e toda essa turma surtando? + +Estamos falando aqui de dois ambientes diferentes, e isso está criando essa confusão. Dentro do seu cluster Kubernetes, existe uma coisa chamada de agente de execução de container que é responsável por baixar e executar as imagens de seu container. Docker é a escolha popular para esse agente de execução (outras escolhas comuns incluem containerd e CRI-O), mas Docker não foi projetado para ser embutido no Kubernetes, e isso causa problemas. + +Se liga, o que chamamos de "Docker" não é exatamente uma coisa - é uma stack tecnológica inteira, e uma parte disso é chamado de "containerd", que é o agente de execução de container de alto-nível por si só. 
Docker é legal e útil porque ele possui muitas melhorias de experiência do usuário e isso o torna realmente fácil para humanos interagirem com ele enquanto estão desenvolvendo, mas essas melhorias para o usuário não são necessárias para o Kubernetes, pois ele não é humano. + +Como resultado dessa camada de abstração amigável aos humanos, seu cluster Kubernetes precisa usar outra ferramenta chamada Dockershim para ter o que ele realmente precisa, que é o containerd. Isso não é muito bom, porque adiciona outra coisa a ser mantida e que pode quebrar. O que está atualmente acontecendo aqui é que o Dockershim está sendo removido do Kubelet assim que que a versão v1.23 for lançada, que remove o suporte ao Docker como agente de execução de container como resultado. Você deve estar pensando, mas se o containerd está incluso na stack do Docker, porque o Kubernetes precisa do Dockershim? + +Docker não é compatível com CRI, a [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (interface do agente de execução de container). Se fosse, nós não precisaríamos do shim, e isso não seria nenhum problema. Mas isso não é o fim do mundo, e você não precisa entrar em pânico - você só precisa mudar seu agente de execução de container do Docker para um outro suportado. + +Uma coisa a ser notada: Se você está contando com o socket do Docker (`/var/run/docker.sock`) como parte do seu fluxo de trabalho em seu cluster hoje, mover para um agente de execução diferente acaba com sua habilidade de usá-lo. Esse modelo é conhecido como Docker em Docker. Existem diversas opções por aí para esse caso específico como o [kaniko](https://github.com/GoogleContainerTools/kaniko), [img](https://github.com/genuinetools/img), e [buildah](https://github.com/containers/buildah). + +## O que essa mudança representa para os desenvolvedores? Ainda escrevemos Dockerfiles? Ainda vamos fazer build com Docker? + +Essa mudança aborda um ambiente diferente do que a maioria das pessoas usa para interagir com Docker. A instalação do Docker que você está usando em desenvolvimento não tem relação com o agente de execução de Docker dentro de seu cluster Kubernetes. É confuso, dá pra entender. +Como desenvolvedor, Docker ainda é útil para você em todas as formas que era antes dessa mudança ser anunciada. A imagem que o Docker cria não é uma imagem específica para Docker e sim uma imagem que segue o padrão OCI ([Open Container Initiative](https://opencontainers.org/)). + +Qualquer imagem compatível com OCI, independente da ferramenta usada para construí-la será vista da mesma forma pelo Kubernetes. Ambos [containerd](https://containerd.io/) e [CRI-O](https://cri-o.io/) sabem como baixar e executá-las. Esse é o porque temos um padrão para containers. + +Então, essa mudança está chegando. Isso irá causar problemas para alguns, mas nada catastrófico, no geral é uma boa coisa. Dependendo de como você interage com o Kubernetes, isso tornará as coisas mais fáceis. Se isso ainda é confuso para você, tudo bem, tem muita coisa rolando aqui; Kubernetes tem um monte de partes móveis, e ninguém é 100% especialista nisso. Nós encorajamos toda e qualquer tipo de questão independente do nível de experiência ou de complexidade! Nosso objetivo é ter certeza que todos estão entendendo o máximo possível as mudanças que estão chegando. Esperamos que isso tenha respondido a maioria de suas questões e acalmado algumas ansiedades! ❤️ + +Procurando mais respostas? 
Dê uma olhada em nosso apanhado de [questões quanto ao desuso do Dockershim](/blog/2020/12/02/dockershim-faq/). diff --git a/content/pt/docs/concepts/cluster-administration/_index.md b/content/pt/docs/concepts/cluster-administration/_index.md index 75c4425176..67051766ed 100755 --- a/content/pt/docs/concepts/cluster-administration/_index.md +++ b/content/pt/docs/concepts/cluster-administration/_index.md @@ -1,5 +1,69 @@ --- -title: "Administração de Cluster" +title: Administração de Cluster weight: 100 +content_type: concept +description: > + Detalhes de baixo nível relevantes para criar ou administrar um cluster Kubernetes. +no_list: true --- + +A visão geral da administração do cluster é para qualquer pessoa que crie ou administre um cluster do Kubernetes. +É pressuposto alguma familiaridade com os [conceitos](/docs/concepts) principais do Kubernetes. + + +## Planejando um cluster + +Consulte os guias em [Configuração](/docs/setup) para exemplos de como planejar, instalar e configurar clusters Kubernetes. As soluções listadas neste artigo são chamadas de *distros*. + + {{< note >}} + Nem todas as distros são mantidas ativamente. Escolha distros que foram testadas com uma versão recente do Kubernetes. + {{< /note >}} + +Antes de escolher um guia, aqui estão algumas considerações: + +- Você quer experimentar o Kubernetes em seu computador ou deseja criar um cluster de vários nós com alta disponibilidade? Escolha as distros mais adequadas ás suas necessidades. +- Você vai usar um **cluster Kubernetes gerenciado** , como o [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), ou **vai hospedar seu próprio cluster**? +- Seu cluster será **local**, ou **na nuvem (IaaS)**? O Kubernetes não oferece suporte direto a clusters híbridos. Em vez disso, você pode configurar vários clusters. +- **Se você estiver configurando o Kubernetes local**, leve em consideração qual [modelo de rede](/docs/concepts/cluster-Administration/networking) se encaixa melhor. +- Você vai executar o Kubernetes em um hardware **bare metal** ou em **máquinas virtuais? (VMs)**? +- Você **deseja apenas executar um cluster** ou espera **participar ativamente do desenvolvimento do código do projeto Kubernetes**? Se for a segunda opção, +escolha uma distro desenvolvida ativamente. Algumas distros usam apenas versão binária, mas oferecem uma maior variedade de opções. +- Familiarize-se com os [componentes](/docs/concepts/overview/components/) necessários para executar um cluster. + + +## Gerenciando um cluster + +* Aprenda como [gerenciar nós](/docs/concepts/architecture/nodes/). +* Aprenda a configurar e [gerenciar a quota de recursos](/docs/concepts/policy/resource-quotas/) para clusters compartilhados. + +## Protegendo um cluster + +* [Gerar Certificados](/docs/tasks/administer-cluster/certificates/) descreve os passos para gerar certificados usando diferentes cadeias de ferramentas. + +* [Ambiente de Contêineres do Kubernetes](/docs/concepts/containers/container-environment/) descreve o ambiente para contêineres gerenciados pelo kubelet em um nó Kubernetes. + +* [Controle de Acesso a API do Kubernetes](/docs/concepts/security/controlling-access) descreve como o Kubernetes implementa o controle de acesso para sua própria API. + +* [Autenticação](/docs/reference/access-authn-authz/authentication/) explica a autenticação no Kubernetes, incluindo as várias opções de autenticação. 
+ +* [Autorização](/docs/reference/access-authn-authz/authorization/) é separado da autenticação e controla como as chamadas HTTP são tratadas. + +* [Usando Controladores de Admissão](/docs/reference/access-authn-authz/admission-controllers/) explica plugins que interceptam requisições para o servidor da API Kubernetes após +a autenticação e autorização. + +* [usando Sysctl em um Cluster Kubernetes](/docs/tasks/administer-cluster/sysctl-cluster/) descreve a um administrador como usar a ferramenta de linha de comando `sysctl` para +definir os parâmetros do kernel. + +* [Auditoria](/docs/tasks/debug-application-cluster/audit/) descreve como interagir com *logs* de auditoria do Kubernetes. + +### Protegendo o kubelet + * [Comunicação Control Plane-Nó](/docs/concepts/architecture/control-plane-node-communication/) + * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) + * [Autenticação/autorização do kubelet](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) + +## Serviços Opcionais para o Cluster + +* [Integração com DNS](/docs/concepts/services-networking/dns-pod-service/) descreve como resolver um nome DNS diretamente para um serviço Kubernetes. + +* [Registro e Monitoramento da Atividade do Cluster](/docs/concepts/cluster-administration/logging/) explica como funciona o *logging* no Kubernetes e como implementá-lo. diff --git a/content/pt/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/pt/docs/concepts/configuration/organize-cluster-access-kubeconfig.md new file mode 100644 index 0000000000..4b431b486f --- /dev/null +++ b/content/pt/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -0,0 +1,131 @@ +--- +title: Organizando o acesso ao cluster usando arquivos kubeconfig +content_type: concept +weight: 60 +--- + + + +Utilize arquivos kubeconfig para organizar informações sobre clusters, usuários, namespaces e mecanismos de autenticação. A ferramenta de linha de comando `kubectl` faz uso dos arquivos kubeconfig para encontrar as informações necessárias para escolher e se comunicar com o serviço de API de um cluster. + + +{{< note >}} +Um arquivo que é utilizado para configurar o acesso aos clusters é chamado de *kubeconfig*. Esta á uma forma genérica de referenciamento para um arquivo de configuração desta natureza. Isso não significa que existe um arquivo com o nome `kubeconfig`. +{{< /note >}} + +Por padrão, o `kubectl` procura por um arquivo de nome `config` no diretório `$HOME/.kube` + +Você pode especificar outros arquivos kubeconfig através da variável de ambiente `KUBECONFIG` ou adicionando a opção [`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/). + +Para maiores detalhes na criação e especificação de um kubeconfig, veja o passo a passo em [Configurar Acesso para Múltiplos Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters). + + + + +## Suportando múltiplos clusters, usuários e mecanismos de autenticação + +Imagine que você possua inúmeros clusters, e seus usuários e componentes se autenticam de várias formas. Por exemplo: + +- Um kubelet ativo pode se autenticar utilizando certificados +- Um usuário pode se autenticar através de tokens +- Administradores podem possuir conjuntos de certificados os quais provém acesso aos usuários de forma individual. + +Através de arquivos kubeconfig, você pode organizar os seus clusters, usuários, e namespaces. 
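Apenas como ilustração (o cluster, o usuário e o contexto abaixo são fictícios), um arquivo kubeconfig mínimo que organiza essas informações poderia ter o seguinte formato:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: desenvolvimento            # nome fictício de cluster
  cluster:
    certificate-authority: ca-desenvolvimento.crt
    server: https://1.2.3.4:6443
users:
- name: desenvolvedor              # nome fictício de usuário
  user:
    client-certificate: desenvolvedor.crt
    client-key: desenvolvedor.key
contexts:
- name: dev-frontend               # contexto que agrupa cluster, namespace e usuário
  context:
    cluster: desenvolvimento
    namespace: frontend
    user: desenvolvedor
current-context: dev-frontend
```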
Você também pode definir contextos para uma fácil troca entre clusters e namespaces.
+
+## Contexto
+
+Um elemento de *contexto* em um kubeconfig é utilizado para agrupar parâmetros de acesso sob um nome conveniente. Cada contexto possui três parâmetros: cluster, namespace e usuário.
+
+Por padrão, a ferramenta de linha de comando `kubectl` utiliza os parâmetros do _contexto atual_ para se comunicar com o cluster.
+
+Para escolher o contexto atual:
+
+```shell
+kubectl config use-context
+```
+
+## A variável de ambiente KUBECONFIG
+
+A variável de ambiente `KUBECONFIG` contém uma lista de arquivos kubeconfig. No Linux e no Mac, esta lista é delimitada por dois-pontos. No Windows, a lista é delimitada por ponto e vírgula. A variável de ambiente `KUBECONFIG` não é obrigatória - caso ela não exista, o `kubectl` utilizará o arquivo kubeconfig padrão localizado no caminho `$HOME/.kube/config`.
+
+Se a variável de ambiente `KUBECONFIG` existir, o `kubectl` utilizará uma configuração que é o resultado da combinação dos arquivos listados na variável de ambiente `KUBECONFIG`.
+
+## Combinando arquivos kubeconfig
+
+Para inspecionar a sua configuração atual, execute o seguinte comando:
+
+```shell
+kubectl config view
+```
+
+Como descrito anteriormente, a saída poderá ser o resultado de um único arquivo kubeconfig, ou poderá ser o resultado da junção de vários arquivos kubeconfig.
+
+Aqui estão as regras que o `kubectl` utiliza quando realiza a combinação de arquivos kubeconfig:
+
+1. Se o argumento `--kubeconfig` estiver definido, apenas o arquivo especificado será utilizado. Apenas uma instância desta flag é permitida.
+
+   Caso contrário, se a variável de ambiente `KUBECONFIG` estiver definida, esta deverá ser utilizada como uma lista de arquivos a serem combinados, seguindo as regras a seguir:
+
+   * Ignorar arquivos vazios.
+   * Produzir erros para arquivos cujo conteúdo não for possível desserializar.
+   * O primeiro arquivo que definir um valor ou mapear uma chave determinada será o escolhido.
+   * Nunca modificar um valor ou mapear uma chave.
+     Exemplo: Preservar o contexto do primeiro arquivo que definir `current-context`.
+     Exemplo: Se dois arquivos especificarem um `red-user`, use apenas os valores do primeiro `red-user`. Mesmo que um segundo arquivo possua entradas não conflitantes sobre a mesma entrada `red-user`, estas deverão ser descartadas.
+
+   Para um exemplo de definição da variável de ambiente `KUBECONFIG`, veja [Definindo a variável de ambiente KUBECONFIG](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable).
+
+   Caso contrário, utilize o arquivo kubeconfig padrão encontrado no diretório `$HOME/.kube/config`, sem qualquer tipo de combinação.
+
+1. Determine o contexto a ser utilizado baseado no primeiro padrão encontrado, nesta ordem:
+
+   1. Usar o conteúdo da flag `--context`, caso ela exista.
+   1. Usar o `current-context` resultante da combinação dos arquivos kubeconfig.
+
+   Um contexto vazio é permitido neste momento.
+
+1. Determinar o cluster e o usuário. Neste ponto, poderá ou não existir um contexto.
+   Determinar o cluster e o usuário com base no primeiro padrão encontrado, de acordo com a ordem a seguir. Este procedimento deverá ser executado duas vezes: uma para definir o usuário e outra para definir o cluster.
+
+   1. Utilizar a flag, caso ela exista: `--user` ou `--cluster`.
+   1. Se o contexto não estiver vazio, utilizar o cluster ou o usuário deste contexto.
+
+   O usuário e o cluster poderão estar vazios neste ponto.
+ +1. Determinar as informações do cluster atual a serem utilizadas. Neste ponto, poderá ou não existir informações de um cluster. + + Construir cada peça de informação do cluster baseado nas opções à seguir; a primeira ocorrência encontrada será a opção vencedora: + + 1. Usar as flags de linha de comando caso existirem: `--server`, `--certificate-authority`, `--insecure-skip-tls-verify`. + 1. Se algum atributo do cluster existir a partir da combinação de kubeconfigs, estes deverão ser utilizados. + 1. Se não existir informação de localização do servidor falhar. + +1. Determinar a informação atual de usuário a ser utilizada. Construir a informação de usuário utilizando as mesmas regras utilizadas para o caso de informações de cluster, exceto para a regra de técnica de autenticação que deverá ser única por usuário: + + 1. Usar as flags, caso existirem: `--client-certificate`, `--client-key`, `--username`, `--password`, `--token`. + 1. Usar os campos `user` resultado da combinação de arquivos kubeconfig. + 1. Se existirem duas técnicas conflitantes, falhar. + +1. Para qualquer informação que ainda estiver ausente, utilizar os valores padrão e potencialmente solicitar informações de autenticação a partir do prompt de comando. + + +## Referências de arquivos + +Arquivos e caminhos referenciados em um arquivo kubeconfig são relativos à localização do arquivo kubeconfig. + +Referências de arquivos na linha de comando são relativas ao diretório de trabalho vigente. + +No arquivo `$HOME/.kube/config`, caminhos relativos são armazenados de forma relativa, e caminhos absolutos são armazenados de forma absoluta. + +## {{% heading "whatsnext" %}} + + +* [Configurar Accesso para Multiplos Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) + + + + diff --git a/content/pt/docs/concepts/containers/_index.md b/content/pt/docs/concepts/containers/_index.md new file mode 100644 index 0000000000..6ce26043c5 --- /dev/null +++ b/content/pt/docs/concepts/containers/_index.md @@ -0,0 +1,34 @@ +--- +title: Contêineres +weight: 40 +description: Tecnologia para empacotar aplicações com suas dependências em tempo de execução +content_type: concept +no_list: true +--- + + + +Cada contêiner executado é repetível; a padronização de ter +dependências incluídas significa que você obtém o mesmo comportamento onde quer que você execute. + +Os contêineres separam os aplicativos da infraestrutura de _host_ subjacente. +Isso torna a implantação mais fácil em diferentes ambientes de nuvem ou sistema operacional. + + + + +## Imagem de contêiner +Uma [imagem de contêiner](/docs/concepts/containers/images/) é um pacote de software pronto para executar, contendo tudo que é preciso para executar uma aplicação: +o código e o agente de execução necessário, aplicação, bibliotecas do sistema e valores padrões para qualquer configuração essencial. + +Por _design_, um contêiner é imutável: você não pode mudar o código de um contêiner que já está executando. Se você tem uma aplicação conteinerizada e quer fazer mudanças, você precisa construir uma nova imagem que inclui a mudança, e recriar o contêiner para iniciar a partir da imagem atualizada. 
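Como esboço hipotético (o nome da aplicação e do registro são fictícios), em um Deployment essa troca normalmente se resume a publicar uma nova tag de imagem e atualizar o campo `image` do template do Pod; o Kubernetes então recria os contêineres a partir da imagem atualizada:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minha-aplicacao
spec:
  replicas: 2
  selector:
    matchLabels:
      app: minha-aplicacao
  template:
    metadata:
      labels:
        app: minha-aplicacao
    spec:
      containers:
      - name: minha-aplicacao
        # Para lançar uma mudança, construa e publique uma nova imagem
        # (por exemplo, v1.0.1) e atualize esta tag.
        image: registro.exemplo/minha-aplicacao:v1.0.0
```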
+ +## Agente de execução de contêiner + +{{< glossary_definition term_id="container-runtime" length="all" >}} + +## {{% heading "whatsnext" %}} + +* [Imagens de contêineres](/docs/concepts/containers/images/) +* [Pods](/docs/concepts/workloads/pods/) + diff --git a/content/pt/docs/concepts/containers/container-environment.md b/content/pt/docs/concepts/containers/container-environment.md new file mode 100644 index 0000000000..af28e2dd3f --- /dev/null +++ b/content/pt/docs/concepts/containers/container-environment.md @@ -0,0 +1,56 @@ +--- +title: Ambiente de Contêiner +content_type: concept +weight: 20 +--- + + + +Essa página descreve os recursos disponíveis para contêineres no ambiente de contêiner. + + + + + +## Ambiente de contêiner + +O ambiente de contêiner do Kubernetes fornece recursos importantes para contêineres: + +* Um sistema de arquivos, que é a combinação de uma [imagem](/docs/concepts/containers/images/) e um ou mais [volumes](/docs/concepts/storage/volumes/). +* Informação sobre o contêiner propriamente. +* Informação sobre outros objetos no cluster. + +### Informação de contêiner + +O _hostname_ de um contêiner é o nome do Pod em que o contêiner está executando. +Isso é disponibilizado através do comando `hostname` ou da função [`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) chamada na libc. + +O nome do Pod e o Namespace são expostos como variáveis de ambiente através de um mecanismo chamado [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). + +Variáveis de ambiente definidas pelo usuário a partir da definição do Pod também são disponíveis para o contêiner, assim como qualquer variável de ambiente especificada estáticamente na imagem Docker. + +### Informação do cluster + +Uma lista de todos os serviços que estão executando quando um contêiner foi criado é disponibilizada para o contêiner como variáveis de ambiente. +Essas variáveis de ambiente são compatíveis com a funcionalidade _docker link_ do Docker. + +Para um serviço nomeado *foo* que mapeia para um contêiner nomeado *bar*, as seguintes variáveis são definidas: + +```shell +FOO_SERVICE_HOST= +FOO_SERVICE_PORT= +``` + +Serviços possuem endereço IP dedicado e são disponibilizados para o contêiner via DNS, +se possuírem [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) habilitado. + + + +## {{% heading "whatsnext" %}} + + +* Aprenda mais sobre [hooks de ciclo de vida do contêiner](/docs/concepts/containers/container-lifecycle-hooks/). +* Obtenha experiência prática + [anexando manipuladores a eventos de ciclo de vida do contêiner](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). + + diff --git a/content/pt/docs/concepts/containers/container-lifecycle-hooks.md b/content/pt/docs/concepts/containers/container-lifecycle-hooks.md new file mode 100644 index 0000000000..984f248256 --- /dev/null +++ b/content/pt/docs/concepts/containers/container-lifecycle-hooks.md @@ -0,0 +1,114 @@ +--- +title: Hooks de Ciclo de Vida do Contêiner +content_type: concept +weight: 30 +--- + + + +Essa página descreve como os contêineres gerenciados pelo _kubelet_ podem usar a estrutura de _hook_ de ciclo de vida do contêiner para executar código acionado por eventos durante seu ciclo de vida de gerenciamento. + + + + +## Visão Geral + +Análogo a muitas estruturas de linguagem de programação que tem _hooks_ de ciclo de vida de componentes, como angular, +o Kubernetes fornece aos contêineres _hooks_ de ciclo de vida. 
+Os _hooks_ permitem que os contêineres estejam cientes dos eventos em seu ciclo de vida de gerenciamento +e executem código implementado em um manipulador quando o _hook_ de ciclo de vida correspondente é executado. + +## Hooks do contêiner + +Existem dois _hooks_ que são expostos para os contêiners: + +`PostStart` + +Este _hook_ é executado imediatamente após um contêiner ser criado. +Entretanto, não há garantia que o _hook_ será executado antes do ENTRYPOINT do contêiner. +Nenhum parâmetro é passado para o manipulador. + +`PreStop` + +Esse _hook_ é chamado imediatamente antes de um contêiner ser encerrado devido a uma solicitação de API ou um gerenciamento de evento como liveness/startup probe failure, preemption, resource contention e outros. +Uma chamada ao _hook_ `PreStop` falha se o contêiner já está em um estado finalizado ou concluído e o _hook_ deve ser concluído antes que o sinal TERM seja enviado para parar o contêiner. A contagem regressiva do período de tolerância de término do Pod começa antes que o _hook_ `PreStop` seja executado, portanto, independentemente do resultado do manipulador, o contêiner será encerrado dentro do período de tolerância de encerramento do Pod. Nenhum parâmetro é passado para o manipulador. + +Uma descrição mais detalhada do comportamento de término pode ser encontrada em [Término de Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). + +### Implementações de manipulador de hook + +Os contêineres podem acessar um _hook_ implementando e registrando um manipulador para esse _hook_. +Existem dois tipos de manipuladores de _hooks_ que podem ser implementados para contêineres: + +* Exec - Executa um comando específico, como `pre-stop.sh`, dentro dos cgroups e Namespaces do contêiner. +* HTTP - Executa uma requisição HTTP em um endpoint específico do contêiner. + +### Execução do manipulador de hook + + +Quando um _hook_ de gerenciamento de ciclo de vida do contêiner é chamado, o sistema de gerenciamento do Kubernetes executa o manipulador de acordo com a ação do _hook_, `httpGet` e `tcpSocket` são executados pelo processo kubelet e `exec` é executado pelo contêiner. + +As chamadas do manipulador do _hook_ são síncronas no contexto do Pod que contém o contêiner. +Isso significa que para um _hook_ `PostStart`, o ENTRYPOINT do contêiner e o _hook_ disparam de forma assíncrona. +No entanto, se o _hook_ demorar muito para ser executado ou travar, o contêiner não consegue atingir o estado `running`. + + +Os _hooks_ `PreStop` não são executados de forma assíncrona a partir do sinal para parar o contêiner, o _hook_ precisa finalizar a sua execução antes que o sinal TERM possa ser enviado. +Se um _hook_ `PreStop` travar durante a execução, a fase do Pod será `Terminating` e permanecerá até que o Pod seja morto após seu `terminationGracePeriodSeconds` expirar. Esse período de tolerância se aplica ao tempo total necessário +para o _hook_ `PreStop`executar e para o contêiner parar normalmente. +Se por exemplo, o `terminationGracePeriodSeconds` é 60, e o _hook_ leva 55 segundos para ser concluído, e o contêiner leva 10 segundos para parar normalmente após receber o sinal, então o contêiner será morto antes que possa parar +normalmente, uma vez que o `terminationGracePeriodSeconds` é menor que o tempo total (55 + 10) que é necessário para que essas duas coisas aconteçam. + +Se um _hook_ `PostStart` ou `PreStop` falhar, ele mata o contêiner. + +Os usuários devem tornar seus _hooks_ o mais leve possíveis. 
+Há casos, no entanto, em que comandos de longa duração fazem sentido, como ao salvar o estado +antes de parar um contêiner. + +### Garantias de entrega de _hooks_ + +A entrega do _hook_ é destinada a acontecer *pelo menos uma vez*, +o que quer dizer que um _hook_ pode ser chamado várias vezes para qualquer evento, +como para `PostStart` ou `PreStop`. +Depende da implementação do _hook_ lidar com isso corretamente. + +Geralmente, apenas entregas únicas são feitas. +Se, por exemplo, um receptor de _hook_ HTTP estiver inativo e não puder receber tráfego, +não há tentativa de reenviar. +Em alguns casos raros, no entanto, pode ocorrer uma entrega dupla. +Por exemplo, se um kubelet reiniciar no meio do envio de um _hook_, o _hook_ pode ser +reenviado depois que o kubelet voltar a funcionar. + +### Depurando manipuladores de _hooks_ + +Os logs para um manipulador de _hook_ não são expostos em eventos de Pod. +Se um manipulador falhar por algum motivo, ele transmitirá um evento. +Para `PostStart` é o evento `FailedPostStartHook` e para `PreStop` é o evento +`FailedPreStopHook`. +Você pode ver esses eventos executando `kubectl describe pod `. +Aqui está um exemplo de saída de eventos da execução deste comando: + +``` +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined] + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567 + 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1 + 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1 + 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1" + 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook +``` + + + +## {{% heading "whatsnext" %}} + + +* Saiba mais sobre o [Ambiente de contêiner](/docs/concepts/containers/container-environment/). +* Obtenha experiência prática + [anexando manipuladores a eventos de ciclo de vida do contêiner](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). 
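Para fins de ilustração, um esboço de Pod com manipuladores `postStart` e `preStop` do tipo Exec (no mesmo espírito da página de tarefas referenciada acima; a imagem e os comandos são apenas exemplos) poderia ser declarado assim:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Executado logo após a criação do contêiner (sem garantia de ordem
          # em relação ao ENTRYPOINT do contêiner).
          command: ["/bin/sh", "-c", "echo Manipulador postStart executado > /usr/share/message"]
      preStop:
        exec:
          # Executado antes do envio do sinal TERM ao contêiner.
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```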
+ diff --git a/content/pt/docs/concepts/containers/images.md b/content/pt/docs/concepts/containers/images.md new file mode 100644 index 0000000000..6f8b81dd7c --- /dev/null +++ b/content/pt/docs/concepts/containers/images.md @@ -0,0 +1,290 @@ +--- +reviewers: +- femrtnz +- jcjesus +- hugopfeffer +title: Imagens +content_type: concept +weight: 10 +--- + + + +Uma imagem de contêiner representa dados binários que encapsulam uma aplicação e todas as suas dependências de software. As imagens de contêiner são pacotes de software executáveis que podem ser executados de forma autônoma e que fazem suposições muito bem definidas sobre seu agente de execução do ambiente. + +Normalmente, você cria uma imagem de contêiner da sua aplicação e a envia para um registro antes de fazer referência a ela em um {{< glossary_tooltip text="Pod" term_id="pod" >}} + +Esta página fornece um resumo sobre o conceito de imagem de contêiner. + + + +## Nomes das imagens + +As imagens de contêiner geralmente recebem um nome como `pause`, `exemplo/meuconteiner`, ou `kube-apiserver`. +As imagens também podem incluir um hostname de algum registro; por exemplo: `exemplo.registro.ficticio/nomeimagem`, +e um possível número de porta; por exemplo: `exemplo.registro.ficticio:10443/nomeimagem`. + +Se você não especificar um hostname de registro, o Kubernetes presumirá que você se refere ao registro público do Docker. + +Após a parte do nome da imagem, você pode adicionar uma _tag_ (como também usar com comandos como `docker` e` podman`). +As tags permitem identificar diferentes versões da mesma série de imagens. + +Tags de imagem consistem em letras maiúsculas e minúsculas, dígitos, sublinhados (`_`), +pontos (`.`) e travessões (` -`). +Existem regras adicionais sobre onde você pode colocar o separador +caracteres (`_`,`-` e `.`) dentro de uma tag de imagem. +Se você não especificar uma tag, o Kubernetes presumirá que você se refere à tag `latest` (mais recente). + +{{< caution >}} +Você deve evitar usar a tag `latest` quando estiver realizando o deploy de contêineres em produção, +pois é mais difícil rastrear qual versão da imagem está sendo executada, além de tornar mais difícil o processo de reversão para uma versão funcional. + +Em vez disso, especifique uma tag significativa, como `v1.42.0`. +{{< /caution >}} + +## Atualizando imagens + +A política padrão de pull é `IfNotPresent` a qual faz com que o +{{}} ignore +o processo de *pull* da imagem, caso a mesma já exista. Se você prefere sempre forçar o processo de *pull*, +você pode seguir uma das opções abaixo: + +- defina a `imagePullPolicy` do contêiner para` Always`. +- omita `imagePullPolicy` e use`: latest` como a tag para a imagem a ser usada. +- omita o `imagePullPolicy` e a tag da imagem a ser usada. +- habilite o [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) controlador de admissão. + +Quando `imagePullPolicy` é definido sem um valor específico, ele também é definido como` Always`. + +## Multiarquitetura de imagens com índice de imagens + +Além de fornecer o binário das imagens, um registro de contêiner também pode servir um [índice de imagem do contêiner](https://github.com/opencontainers/image-spec/blob/master/image-index.md). Um índice de imagem pode apontar para múltiplos [manifestos da imagem](https://github.com/opencontainers/image-spec/blob/master/manifest.md) para versões específicas de arquitetura de um contêiner. 
A ideia é que você possa ter um nome para uma imagem (por exemplo: `pause`, `exemple/meuconteiner`, `kube-apiserver`) e permitir que diferentes sistemas busquem o binário da imagem correta para a arquitetura de máquina que estão usando. + +O próprio Kubernetes normalmente nomeia as imagens de contêiner com o sufixo `-$(ARCH)`. Para retrocompatibilidade, gere as imagens mais antigas com sufixos. A ideia é gerar a imagem `pause` que tem o manifesto para todas as arquiteturas e `pause-amd64` que é retrocompatível com as configurações anteriores ou arquivos YAML que podem ter codificado as imagens com sufixos. + +## Usando um registro privado + +Os registros privados podem exigir chaves para acessar as imagens deles. +As credenciais podem ser fornecidas de várias maneiras: + - Configurando nós para autenticação em um registro privado + - todos os pods podem ler qualquer registro privado configurado + - requer configuração de nó pelo administrador do cluster + - Imagens pré-obtidas + - todos os pods podem usar qualquer imagem armazenada em cache em um nó + - requer acesso root a todos os nós para configurar + - Especificando ImagePullSecrets em um Pod + - apenas pods que fornecem chaves próprias podem acessar o registro privado + - Extensões locais ou específicas do fornecedor + - se estiver usando uma configuração de nó personalizado, você (ou seu provedor de nuvem) pode implementar seu mecanismo para autenticar o nó ao registro do contêiner. + +Essas opções são explicadas com mais detalhes abaixo. + +### Configurando nós para autenticação em um registro privado + +Se você executar o Docker em seus nós, poderá configurar o contêiner runtime do Docker +para autenticação em um registro de contêiner privado. + +Essa abordagem é adequada se você puder controlar a configuração do nó. + +{{< note >}} +O Kubernetes padrão é compatível apenas com as seções `auths` e` HttpHeaders` na configuração do Docker. +Auxiliares de credencial do Docker (`credHelpers` ou` credsStore`) não são suportados. +{{< /note >}} + +Docker armazena chaves de registros privados no arquivo `$HOME/.dockercfg` ou `$HOME/.docker/config.json`. Se você colocar o mesmo arquivo na lista de caminhos de pesquisa abaixo, o kubelet o usa como provedor de credenciais ao obter imagens. + +* `{--root-dir:-/var/lib/kubelet}/config.json` +* `{cwd of kubelet}/config.json` +* `${HOME}/.docker/config.json` +* `/.docker/config.json` +* `{--root-dir:-/var/lib/kubelet}/.dockercfg` +* `{cwd of kubelet}/.dockercfg` +* `${HOME}/.dockercfg` +* `/.dockercfg` + +{{< note >}} +Você talvez tenha que definir `HOME=/root` explicitamente no ambiente do processo kubelet. +{{< /note >}} + +Aqui estão as etapas recomendadas para configurar seus nós para usar um registro privado. Neste +exemplo, execute-os em seu desktop/laptop: + + 1. Execute `docker login [servidor]` para cada conjunto de credenciais que deseja usar. Isso atualiza o `$HOME/.docker/config.json` em seu PC. + 1. Visualize `$HOME/.docker/config.json` em um editor para garantir que contém apenas as credenciais que você deseja usar. + 1. Obtenha uma lista de seus nós; por exemplo: + - se você quiser os nomes: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )` + - se você deseja obter os endereços IP: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )` + 1. Copie seu `.docker/config.json` local para uma das listas de caminhos de busca acima. 
+ - por exemplo, para testar isso: `for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done` + +{{< note >}} +Para clusters de produção, use uma ferramenta de gerenciamento de configuração para que você possa aplicar esta +configuração em todos os nós que você precisar. +{{< /note >}} + +Verifique se está funcionando criando um pod que usa uma imagem privada; por exemplo: + +```shell +kubectl apply -f - <}} +Essa abordagem é adequada se você puder controlar a configuração do nó. Isto +não funcionará de forma confiável se o seu provedor de nuvem for responsável pelo gerenciamento de nós e os substituir +automaticamente. +{{< /note >}} + +Por padrão, o kubelet tenta realizar um "pull" para cada imagem do registro especificado. +No entanto, se a propriedade `imagePullPolicy` do contêiner for definida como` IfNotPresent` ou `Never`, +em seguida, uma imagem local é usada (preferencial ou exclusivamente, respectivamente). + +Se você quiser usar imagens pré-obtidas como um substituto para a autenticação do registro, +você deve garantir que todos os nós no cluster tenham as mesmas imagens pré-obtidas. + +Isso pode ser usado para pré-carregar certas imagens com o intuíto de aumentar a velocidade ou como uma alternativa para autenticação em um registro privado. + +Todos os pods terão permissão de leitura a quaisquer imagens pré-obtidas. + +### Especificando imagePullSecrets em um pod + +{{< note >}} +Esta é a abordagem recomendada para executar contêineres com base em imagens +de registros privados. +{{< /note >}} + +O Kubernetes oferece suporte à especificação de chaves de registro de imagem de contêiner em um pod. + +#### Criando um segredo com Docker config + +Execute o seguinte comando, substituindo as palavras em maiúsculas com os valores apropriados: + +```shell +kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL +``` + +Se você já tem um arquivo de credenciais do Docker, em vez de usar o +comando acima, você pode importar o arquivo de credenciais como um Kubernetes +{{< glossary_tooltip text="Secrets" term_id="secret" >}}. +[Criar um segredo com base nas credenciais Docker existentes](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explica como configurar isso. + +Isso é particularmente útil se você estiver usando vários registros privados de contêineres, como `kubectl create secret docker-registry` cria um Segredo que +só funciona com um único registro privado. + +{{< note >}} +Os pods só podem fazer referência a *pull secrets* de imagem em seu próprio namespace, +portanto, esse processo precisa ser feito uma vez por namespace. +{{< /note >}} + +#### Referenciando um imagePullSecrets em um pod + +Agora, você pode criar pods que fazem referência a esse segredo adicionando uma seção `imagePullSecrets` +na definição de Pod. + +Por exemplo: + +```shell +cat < pod.yaml +apiVersion: v1 +kind: Pod +metadata: + name: foo + namespace: awesomeapps +spec: + containers: + - name: foo + image: janedoe/awesomeapp:v1 + imagePullSecrets: + - name: myregistrykey +EOF +cat <> ./kustomization.yaml +resources: +- pod.yaml +EOF +``` + +Isso precisa ser feito para cada pod que está usando um registro privado. + +No entanto, a configuração deste campo pode ser automatizada definindo o imagePullSecrets +em um recurso de [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/). 
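Como esboço, supondo o Secret `myregistrykey` e o namespace `awesomeapps` usados nos exemplos anteriores, a ServiceAccount `default` do namespace poderia ser configurada da seguinte forma; Pods criados com essa ServiceAccount herdam o `imagePullSecrets` automaticamente:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: awesomeapps
imagePullSecrets:
# Referencia o Secret do tipo docker-registry criado anteriormente
- name: myregistrykey
```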
+ +Verifique [Adicionar ImagePullSecrets a uma conta de serviço](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) para obter instruções detalhadas. + +Você pode usar isso em conjunto com um `.docker / config.json` por nó. As credenciais +serão mescladas. + +## Casos de uso + +Existem várias soluções para configurar registros privados. Aqui estão alguns +casos de uso comuns e soluções sugeridas. + +1. Cluster executando apenas imagens não proprietárias (por exemplo, código aberto). Não há necessidade de ocultar imagens. + - Use imagens públicas no Docker hub. + - Nenhuma configuração necessária. + - Alguns provedores de nuvem armazenam em cache ou espelham automaticamente imagens públicas, o que melhora a disponibilidade e reduz o tempo para extrair imagens. +1. Cluster executando algumas imagens proprietárias que devem ser ocultadas para quem está fora da empresa, mas + visível para todos os usuários do cluster. + - Use um [registro Docker](https://docs.docker.com/registry/) privado hospedado. + - Pode ser hospedado no [Docker Hub](https://hub.docker.com/signup) ou em outro lugar. + - Configure manualmente .docker/config.json em cada nó conforme descrito acima. + - Ou execute um registro privado interno atrás de seu firewall com permissão de leitura. + - Nenhuma configuração do Kubernetes é necessária. + - Use um serviço de registro de imagem de contêiner que controla o acesso à imagem + - Funcionará melhor com o escalonamento automático do cluster do que com a configuração manual de nós. + - Ou, em um cluster onde alterar a configuração do nó é inconveniente, use `imagePullSecrets`. +1. Cluster com imagens proprietárias, algumas das quais requerem controle de acesso mais rígido. + - Certifique-se de que o [controlador de admissão AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) está ativo. Caso contrário, todos os pods têm potencialmente acesso a todas as imagens. + - Mova dados confidenciais para um recurso "secreto", em vez de empacotá-los em uma imagem. +1. Um cluster multilocatário em que cada locatário precisa de seu próprio registro privado. + - Certifique-se de que o [controlador de admissão AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) está ativo. Caso contrário, todos os Pods de todos os locatários terão potencialmente acesso a todas as imagens. + - Execute um registro privado com autorização necessária. + - Gere credenciais de registro para cada locatário, coloque em segredo e preencha o segredo para cada namespace de locatário. + - O locatário adiciona esse segredo a imagePullSecrets de cada namespace. + + +Se precisar de acesso a vários registros, você pode criar um segredo para cada registro. +O Kubelet mesclará qualquer `imagePullSecrets` em um único `.docker/config.json` virtual + +## {{% heading "whatsnext" %}} + +* Leia a [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md) diff --git a/content/pt/docs/concepts/overview/_index.md b/content/pt/docs/concepts/overview/_index.md new file mode 100644 index 0000000000..f254849b77 --- /dev/null +++ b/content/pt/docs/concepts/overview/_index.md @@ -0,0 +1,7 @@ +--- +title: "Visão Geral" +weight: 20 +description: Obtenha uma visão em alto-nível do Kubernetes e dos componentes a partir dos quais ele é construído. 
+sitemap: + priority: 0.9 +--- diff --git a/content/pt/docs/concepts/overview/components.md b/content/pt/docs/concepts/overview/components.md new file mode 100644 index 0000000000..b03946c4ae --- /dev/null +++ b/content/pt/docs/concepts/overview/components.md @@ -0,0 +1,117 @@ +--- +reviewers: +title: Componentes do Kubernetes +content_type: concept +description: > + Um cluster Kubernetes consiste de componentes que representam a camada de gerenciamento, e um conjunto de máquinas chamadas nós. +weight: 20 +card: + name: concepts + weight: 20 +--- + + +Ao implantar o Kubernetes, você obtém um cluster. +{{< glossary_definition term_id="cluster" length="all" prepend="Um cluster Kubernetes consiste em">}} + +Este documento descreve os vários componentes que você precisa ter para implantar um cluster Kubernetes completo e funcional. + +Esse é o diagrama de um cluster Kubernetes com todos os componentes interligados. + +![Componentes do Kubernetes](/images/docs/components-of-kubernetes.svg) + + + +## Componentes da camada de gerenciamento + +Os componentes da camada de gerenciamento tomam decisões globais sobre o cluster (por exemplo, agendamento de _pods_), bem como detectam e respondem aos eventos do cluster (por exemplo, iniciando um novo _{{< glossary_tooltip text="pod" term_id="pod" >}}_ quando o campo `replicas` de um _Deployment_ não está atendido). + +Os componentes da camada de gerenciamento podem ser executados em qualquer máquina do cluster. Contudo, para simplificar, os _scripts_ de configuração normalmente iniciam todos os componentes da camada de gerenciamento na mesma máquina, e não executa contêineres de usuário nesta máquina. Veja [Construindo clusters de alta disponibilidade](/docs/admin/high-availability/) para um exemplo de configuração de múltiplas VMs para camada de gerenciamento (_multi-main-VM_). + +### kube-apiserver + +{{< glossary_definition term_id="kube-apiserver" length="all" >}} + +### etcd + +{{< glossary_definition term_id="etcd" length="all" >}} + +### kube-scheduler + +{{< glossary_definition term_id="kube-scheduler" length="all" >}} + +### kube-controller-manager + +{{< glossary_definition term_id="kube-controller-manager" length="all" >}} + +Alguns tipos desses controladores são: + + * Controlador de nó: responsável por perceber e responder quando os nós caem. + * Controlador de _Job_: Observa os objetos _Job_ que representam tarefas únicas e, em seguida, cria _pods_ para executar essas tarefas até a conclusão. + * Controlador de _endpoints_: preenche o objeto _Endpoints_ (ou seja, junta os Serviços e os _pods_). + * Controladores de conta de serviço e de _token_: crie contas padrão e _tokens_ de acesso de API para novos _namespaces_. + +### cloud-controller-manager + +{{< glossary_definition term_id="cloud-controller-manager" length="short" >}} + +O cloud-controller-manager executa apenas controladores que são específicos para seu provedor de nuvem. +Se você estiver executando o Kubernetes em suas próprias instalações ou em um ambiente de aprendizagem dentro de seu +próprio PC, o cluster não possui um gerenciador de controlador de nuvem. + +Tal como acontece com o kube-controller-manager, o cloud-controller-manager combina vários ciclos de controle logicamente independentes em um binário único que você executa como um processo único. Você pode escalar horizontalmente (exectuar mais de uma cópia) para melhorar o desempenho ou para auxiliar na tolerância a falhas. 
+ +Os seguintes controladores podem ter dependências de provedor de nuvem: + + * Controlador de nó: para verificar junto ao provedor de nuvem para determinar se um nó foi excluído da nuvem após parar de responder. + * Controlador de rota: para configurar rotas na infraestrutura de nuvem subjacente. + * Controlador de serviço: Para criar, atualizar e excluir balanceadores de carga do provedor de nuvem. + +## Node Components + +Os componentes de nó são executados em todos os nós, mantendo os _pods_ em execução e fornecendo o ambiente de execução do Kubernetes. + +### kubelet + +{{< glossary_definition term_id="kubelet" length="all" >}} + +### kube-proxy + +{{< glossary_definition term_id="kube-proxy" length="all" >}} + +### Container runtime + +{{< glossary_definition term_id="container-runtime" length="all" >}} + +## Addons + +Complementos (_addons_) usam recursos do Kubernetes ({{< glossary_tooltip term_id="daemonset" >}}, {{< glossary_tooltip term_id="deployment" >}}, etc) para implementar funcionalidades do cluster. Como fornecem funcionalidades em nível do cluster, recursos de _addons_ que necessitem ser criados dentro de um _namespace_ pertencem ao _namespace_ `kube-system`. + +Alguns _addons_ selecionados são descritos abaixo; para uma lista estendida dos _addons_ disponíveis, por favor consulte [Addons](/docs/concepts/cluster-administration/addons/). + +### DNS + +Embora os outros complementos não sejam estritamente necessários, todos os clusters do Kubernetes devem ter um [DNS do cluster](/docs/concepts/services-networking/dns-pod-service/), já que muitos exemplos dependem disso. + +O DNS do cluster é um servidor DNS, além de outros servidores DNS em seu ambiente, que fornece registros DNS para serviços do Kubernetes. + +Os contêineres iniciados pelo Kubernetes incluem automaticamente esse servidor DNS em suas pesquisas DNS. + +### Web UI (Dashboard) + +[Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/) é uma interface de usuário Web, de uso geral, para clusters do Kubernetes. Ele permite que os usuários gerenciem e solucionem problemas de aplicações em execução no cluster, bem como o próprio cluster. + +### Monitoramento de recursos do contêiner + +[Monitoramento de recursos do contêiner](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) registra métricas de série temporal genéricas sobre os contêineres em um banco de dados central e fornece uma interface de usuário para navegar por esses dados. + +### Logging a nivel do cluster + +Um mecanismo de [_logging_ a nível do cluster](/docs/concepts/cluster-administration/logging/) é responsável por guardar os _logs_ dos contêineres em um armazenamento central de _logs_ com um interface para navegação/pesquisa. + +## {{% heading "whatsnext" %}} + +* Aprenda sobre [Nós](/docs/concepts/architecture/nodes/). +* Aprenda sobre [Controladores](/docs/concepts/architecture/controller/). +* Aprenda sobre [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/). +* Leia a [documentação](https://etcd.io/docs/) oficial do **etcd**. diff --git a/content/pt/docs/concepts/overview/what-is-kubernetes.md b/content/pt/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000..29473a7f75 --- /dev/null +++ b/content/pt/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,94 @@ +--- +reviewers: +title: O que é Kubernetes? 
+description: > + Kubernetes é um plataforma de código aberto, portável e extensiva para o gerenciamento de cargas de trabalho e serviços distribuídos em contêineres, que facilita tanto a configuração declarativa quanto a automação. Ele possui um ecossistema grande, e de rápido crescimento. Serviços, suporte, e ferramentas para Kubernetes estão amplamente disponíveis. +content_type: concept +weight: 10 +card: + name: concepts + weight: 10 +sitemap: + priority: 0.9 +--- + + +Essa página é uma visão geral do Kubernetes. + + + +Kubernetes é um plataforma de código aberto, portável e extensiva para o gerenciamento de cargas de trabalho e serviços distribuídos em contêineres, que facilita tanto a configuração declarativa quanto a automação. Ele possui um ecossistema grande, e de rápido crescimento. Serviços, suporte, e ferramentas para Kubernetes estão amplamente disponíveis. + +O Google tornou Kubernetes um projeto de código-aberto em 2014. O Kubernetes combina [mais de 15 anos de experiência do Google](/blog/2015/04/borg-predecessor-to-kubernetes/) executando cargas de trabalho produtivas em escala, com as melhores idéias e práticas da comunidade. + +O nome **Kubernetes** tem origem no Grego, significando _timoneiro_ ou _piloto_. **K8s** é a abreviação derivada pela troca das oito letras "ubernete" por "8", se tornado _K"8"s_. + +## Voltando no tempo + +Vamos dar uma olhada no porque o Kubernetes é tão útil, voltando no tempo. + +![Evolução das implantações](/images/docs/Container_Evolution.svg) + +**Era da implantação tradicional:** No início, as organizações executavam aplicações em servidores físicos. Não havia como definir limites de recursos para aplicações em um mesmo servidor físico, e isso causava problemas de alocação de recursos. Por exemplo, se várias aplicações fossem executadas em um mesmo servidor físico, poderia haver situações em que uma aplicação ocupasse a maior parte dos recursos e, como resultado, o desempenho das outras aplicações seria inferior. Uma solução para isso seria executar cada aplicação em um servidor físico diferente. Mas isso não escalava, pois os recursos eram subutilizados, e se tornava custoso para as organizações manter muitos servidores físicos. + +**Era da implantação virtualizada:** Como solução, a virtualização foi introduzida. Esse modelo permite que você execute várias máquinas virtuais (VMs) em uma única CPU de um servidor físico. A virtualização permite que as aplicações sejam isoladas entre as VMs, e ainda fornece um nível de segurança, pois as informações de uma aplicação não podem ser acessadas livremente por outras aplicações. + +A virtualização permite melhor utilização de recursos em um servidor físico, e permite melhor escalabilidade porque uma aplicação pode ser adicionada ou atualizada facilmente, reduz os custos de hardware e muito mais. Com a virtualização, você pode apresentar um conjunto de recursos físicos como um cluster de máquinas virtuais descartáveis. + +Cada VM é uma máquina completa que executa todos os componentes, incluindo seu próprio sistema operacional, além do hardware virtualizado. + +**Era da implantação em contêineres:** Contêineres são semelhantes às VMs, mas têm propriedades de isolamento flexibilizados para compartilhar o sistema operacional (SO) entre as aplicações. Portanto, os contêineres são considerados leves. Semelhante a uma VM, um contêiner tem seu próprio sistema de arquivos, compartilhamento de CPU, memória, espaço de processo e muito mais. 
Como eles estão separados da infraestrutura subjacente, eles são portáveis entre nuvens e distribuições de sistema operacional. + +Contêineres se tornaram populares porque eles fornecem benefícios extra, tais como: + +* Criação e implantação ágil de aplicações: aumento da facilidade e eficiência na criação de imagem de contêiner comparado ao uso de imagem de VM. +* Desenvolvimento, integração e implantação contínuos: fornece capacidade de criação e de implantação de imagens de contêiner de forma confiável e frequente, com a funcionalidade de efetuar reversões rápidas e eficientes (devido à imutabilidade da imagem). +* Separação de interesses entre Desenvolvimento e Operações: crie imagens de contêineres de aplicações no momento de construção/liberação em vez de no momento de implantação, desacoplando as aplicações da infraestrutura. +* A capacidade de observação (Observabilidade) não apenas apresenta informações e métricas no nível do sistema operacional, mas também a integridade da aplicação e outros sinais. +* Consistência ambiental entre desenvolvimento, teste e produção: funciona da mesma forma em um laptop e na nuvem. +* Portabilidade de distribuição de nuvem e sistema operacional: executa no Ubuntu, RHEL, CoreOS, localmente, nas principais nuvens públicas e em qualquer outro lugar. +* Gerenciamento centrado em aplicações: eleva o nível de abstração da execução em um sistema operacional em hardware virtualizado à execução de uma aplicação em um sistema operacional usando recursos lógicos. +* Microserviços fracamente acoplados, distribuídos, elásticos e livres: as aplicações são divididas em partes menores e independentes e podem ser implantados e gerenciados dinamicamente - não uma pilha monolítica em execução em uma grande máquina de propósito único. +* Isolamento de recursos: desempenho previsível de aplicações. +* Utilização de recursos: alta eficiência e densidade. + +## Por que você precisa do Kubernetes e o que ele pode fazer{#why-you-need-kubernetes-and-what-can-it-do} + +Os contêineres são uma boa maneira de agrupar e executar suas aplicações. Em um ambiente de produção, você precisa gerenciar os contêineres que executam as aplicações e garantir que não haja tempo de inatividade. Por exemplo, se um contêiner cair, outro contêiner precisa ser iniciado. Não seria mais fácil se esse comportamento fosse controlado por um sistema? + +É assim que o Kubernetes vem ao resgate! O Kubernetes oferece uma estrutura para executar sistemas distribuídos de forma resiliente. Ele cuida do escalonamento e do recuperação à falha de sua aplicação, fornece padrões de implantação e muito mais. Por exemplo, o Kubernetes pode gerenciar facilmente uma implantação no método canário para seu sistema. + +O Kubernetes oferece a você: + +* **Descoberta de serviço e balanceamento de carga** +O Kubernetes pode expor um contêiner usando o nome DNS ou seu próprio endereço IP. Se o tráfego para um contêiner for alto, o Kubernetes pode balancear a carga e distribuir o tráfego de rede para que a implantação seja estável. +* **Orquestração de armazenamento** +O Kubernetes permite que você monte automaticamente um sistema de armazenamento de sua escolha, como armazenamentos locais, provedores de nuvem pública e muito mais. +* **Lançamentos e reversões automatizadas** +Você pode descrever o estado desejado para seus contêineres implantados usando o Kubernetes, e ele pode alterar o estado real para o estado desejado em um ritmo controlada. 
Por exemplo, você pode automatizar o Kubernetes para criar novos contêineres para sua implantação, remover os contêineres existentes e adotar todos os seus recursos para o novo contêiner. +* **Empacotamento binário automático** +Você fornece ao Kubernetes um cluster de nós que pode ser usado para executar tarefas nos contêineres. Você informa ao Kubernetes de quanta CPU e memória (RAM) cada contêiner precisa. O Kubernetes pode encaixar contêineres em seus nós para fazer o melhor uso de seus recursos. +* **Autocorreção** +O Kubernetes reinicia os contêineres que falham, substitui os contêineres, elimina os contêineres que não respondem à verificação de integridade definida pelo usuário e não os anuncia aos clientes até que estejam prontos para servir. +* **Gerenciamento de configuração e de segredos** +O Kubernetes permite armazenar e gerenciar informações confidenciais, como senhas, tokens OAuth e chaves SSH. Você pode implantar e atualizar segredos e configuração de aplicações sem reconstruir suas imagens de contêiner e sem expor segredos em sua pilha de configuração. + +## O que o Kubernetes não é + +O Kubernetes não é um sistema PaaS (plataforma como serviço) tradicional e completo. Como o Kubernetes opera no nível do contêiner, e não no nível do hardware, ele fornece alguns recursos geralmente aplicáveis comuns às ofertas de PaaS, como implantação, escalonamento, balanceamento de carga, e permite que os usuários integrem suas soluções de _logging_, monitoramento e alerta. No entanto, o Kubernetes não é monolítico, e essas soluções padrão são opcionais e conectáveis. O Kubernetes fornece os blocos de construção para a construção de plataformas de desenvolvimento, mas preserva a escolha e flexibilidade do usuário onde é importante. + +Kubernetes: + +* Não limita os tipos de aplicações suportadas. O Kubernetes visa oferecer suporte a uma variedade extremamente diversa de cargas de trabalho, incluindo cargas de trabalho sem estado, com estado e de processamento de dados. Se uma aplicação puder ser executada em um contêiner, ele deve ser executado perfeitamente no Kubernetes. +* Não implanta código-fonte e não constrói sua aplicação. Os fluxos de trabalho de integração contínua, entrega e implantação (CI/CD) são determinados pelas culturas e preferências da organização, bem como pelos requisitos técnicos. +* Não fornece serviços em nível de aplicação, tais como middleware (por exemplo, barramentos de mensagem), estruturas de processamento de dados (por exemplo, Spark), bancos de dados (por exemplo, MySQL), caches, nem sistemas de armazenamento em cluster (por exemplo, Ceph), como serviços integrados. Esses componentes podem ser executados no Kubernetes e/ou podem ser acessados por aplicações executadas no Kubernetes por meio de mecanismos portáteis, como o [Open Service Broker](https://openservicebrokerapi.org/). +* Não dita soluções de _logging_, monitoramento ou alerta. Ele fornece algumas integrações como prova de conceito e mecanismos para coletar e exportar métricas. +* Não fornece nem exige um sistema/idioma de configuração (por exemplo, Jsonnet). Ele fornece uma API declarativa que pode ser direcionada por formas arbitrárias de especificações declarativas. +* Não fornece nem adota sistemas abrangentes de configuração de máquinas, manutenção, gerenciamento ou autocorreção. +* Adicionalmente, o Kubernetes não é um mero sistema de orquestração. Na verdade, ele elimina a necessidade de orquestração. 
A definição técnica de orquestração é a execução de um fluxo de trabalho definido: primeiro faça A, depois B e depois C. Em contraste, o Kubernetes compreende um conjunto de processos de controle independentes e combináveis que conduzem continuamente o estado atual em direção ao estado desejado fornecido. Não importa como você vai de A para C. O controle centralizado também não é necessário. Isso resulta em um sistema que é mais fácil de usar e mais poderoso, robusto, resiliente e extensível. + + +## {{% heading "whatsnext" %}} + +* Dê uma olhada em [Componentes do Kubernetes](/docs/concepts/overview/components/). +* Pronto para [Iniciar](/docs/setup/)? diff --git a/content/pt/docs/concepts/scheduling-eviction/_index.md b/content/pt/docs/concepts/scheduling-eviction/_index.md new file mode 100644 index 0000000000..e9e036f0c3 --- /dev/null +++ b/content/pt/docs/concepts/scheduling-eviction/_index.md @@ -0,0 +1,8 @@ +--- +title: "Escalonamento" +weight: 90 +description: > + No Kubernetes, agendamento refere-se a garantia de que os pods correspondam aos nós para que o kubelet possa executá-los. + Remoção é o processo de falha proativa de um ou mais pods em nós com falta de recursos. +--- + diff --git a/content/pt/docs/concepts/scheduling/kube-scheduler.md b/content/pt/docs/concepts/scheduling-eviction/kube-scheduler.md similarity index 93% rename from content/pt/docs/concepts/scheduling/kube-scheduler.md rename to content/pt/docs/concepts/scheduling-eviction/kube-scheduler.md index 575a8e7839..8c8b0ec39a 100644 --- a/content/pt/docs/concepts/scheduling/kube-scheduler.md +++ b/content/pt/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -91,4 +91,7 @@ do escalonador: * Aprenda como [configurar vários escalonadores](/docs/tasks/administer-cluster/configure-multiple-schedulers/) * Aprenda sobre [políticas de gerenciamento de topologia](/docs/tasks/administer-cluster/topology-manager/) * Aprenda sobre [Pod Overhead](/docs/concepts/configuration/pod-overhead/) - +* Saiba mais sobre o agendamento de pods que usam volumes em: + * [Suporte de topologia de volume](/docs/concepts/storage/storage-classes/#volume-binding-mode) + * [Rastreamento de capacidade de armazenamento](/docs/concepts/storage/storage-capacity/) + * [Limites de volumes específicos do nó](/docs/concepts/storage/storage-limits/) \ No newline at end of file diff --git a/content/pt/docs/concepts/configuration/pod-overhead.md b/content/pt/docs/concepts/scheduling-eviction/pod-overhead.md similarity index 54% rename from content/pt/docs/concepts/configuration/pod-overhead.md rename to content/pt/docs/concepts/scheduling-eviction/pod-overhead.md index 78ba1d6ffd..c3788b22fa 100644 --- a/content/pt/docs/concepts/configuration/pod-overhead.md +++ b/content/pt/docs/concepts/scheduling-eviction/pod-overhead.md @@ -1,9 +1,5 @@ --- -reviewers: -- dchen1107 -- egernst -- tallclair -title: Pod Overhead +title: Sobrecarga de Pod content_type: concept weight: 50 --- @@ -12,10 +8,10 @@ weight: 50 {{< feature-state for_k8s_version="v1.18" state="beta" >}} -Quando executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes -recursos são adicionais aos recursos necessários para executar o(s) _container(s)_ dentro do Pod. +Quando você executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes +recursos são adicionais aos recursos necessários para executar o(s) contêiner(s) dentro do Pod. 
Sobrecarga de Pod, do inglês _Pod Overhead_, é uma funcionalidade que serve para contabilizar os recursos consumidos pela -infraestrutura do Pod para além das solicitações e limites do _container_. +infraestrutura do Pod para além das solicitações e limites do contêiner. @@ -23,27 +19,27 @@ infraestrutura do Pod para além das solicitações e limites do _container_. -No Kubernetes, a sobrecarga de _Pods_ é definido no tempo de +No Kubernetes, a sobrecarga de Pods é definido no tempo de [admissão](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) de acordo com a sobrecarga associada à -[RuntimeClass](/docs/concepts/containers/runtime-class/) do _Pod_. +[RuntimeClass](/docs/concepts/containers/runtime-class/) do Pod. Quando é ativada a Sobrecarga de Pod, a sobrecarga é considerada adicionalmente à soma das -solicitações de recursos do _container_ ao agendar um Pod. Semelhantemente, o _kubelet_ +solicitações de recursos do contêiner ao agendar um Pod. Semelhantemente, o _kubelet_ incluirá a sobrecarga do Pod ao dimensionar o cgroup do Pod e ao -executar a classificação de despejo do Pod. +executar a classificação de prioridade de migração do Pod em caso de _drain_ do Node. -## Possibilitando a Sobrecarga do Pod {#set-up} +## Habilitando a Sobrecarga de Pod {#set-up} -Terá de garantir que o [portão de funcionalidade](/docs/reference/command-line-tools-reference/feature-gates/) -`PodOverhead` está ativo (está ativo por defeito a partir da versão 1.18) -por todo o cluster, e uma `RuntimeClass` é utilizada que defina o campo `overhead`. +Terá de garantir que o [Feature Gate](/docs/reference/command-line-tools-reference/feature-gates/) +`PodOverhead` esteja ativo (está ativo por padrão a partir da versão 1.18) +em todo o cluster, e uma `RuntimeClass` utilizada que defina o campo `overhead`. ## Exemplo de uso Para usar a funcionalidade PodOverhead, é necessário uma RuntimeClass que define o campo `overhead`. -Por exemplo, poderia usar a definição da RuntimeClass abaixo com um _container runtime_ virtualizado -que usa cerca de 120MiB por Pod para a máquina virtual e o sistema operativo convidado: +Por exemplo, poderia usar a definição da RuntimeClass abaixo com um agente de execução de contêiner virtualizado +que use cerca de 120MiB por Pod para a máquina virtual e o sistema operacional convidado: ```yaml --- @@ -88,9 +84,9 @@ spec: memory: 100Mi ``` -Na altura de admissão o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass +No tempo de admissão o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass atualiza o _PodSpec_ da carga de trabalho de forma a incluir o `overhead` como descrito na RuntimeClass. Se o _PodSpec_ já tiver este campo definido -o _Pod_ será rejeitado. No exemplo dado, como apenas o nome do RuntimeClass é especificado, o controlador de admissão muda o _Pod_ de forma a +o Pod será rejeitado. No exemplo dado, como apenas o nome do RuntimeClass é especificado, o controlador de admissão muda o Pod de forma a incluir um `overhead`. 
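+
+A título de ilustração, um esboço de como o campo `overhead` pode aparecer no _PodSpec_ após essa mutação. O nome da RuntimeClass e a imagem do contêiner abaixo são hipotéticos; os valores de `overhead` e de limites correspondem ao exemplo desta página:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pod
+spec:
+  runtimeClassName: runtime-virtualizado   # nome hipotético, apenas para ilustração
+  overhead:                                # injetado pelo controlador de admissão RuntimeClass
+    cpu: 250m
+    memory: 120Mi
+  containers:
+  - name: exemplo
+    image: busybox                         # imagem hipotética
+    resources:
+      limits:
+        cpu: 500m
+        memory: 100Mi
+```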
Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atualizado: @@ -99,44 +95,43 @@ Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atua kubectl get pod test-pod -o jsonpath='{.spec.overhead}' ``` -O output é: +A saída é: ``` map[cpu:250m memory:120Mi] ``` -Se for definido um _ResourceQuota_, a soma dos pedidos dos _containers_ assim como o campo `overhead` são contados. +Se for definido um _ResourceQuota_, a soma das requisições dos contêineres assim como o campo `overhead` são contados. -Quando o kube-scheduler está a decidir que nó deve executar um novo _Pod_, o agendador considera o `overhead` do _Pod_, -assim como a soma de pedidos aos _containers_ para esse _Pod_. Para este exemplo, o agendador adiciona os -pedidos e a sobrecarga, depois procura um nó com 2.25 CPU e 320 MiB de memória disponível. +Quando o kube-scheduler está decidindo que nó deve executar um novo Pod, o agendador considera o `overhead` do pod, +assim como a soma de pedidos aos contêineres para esse _Pod_. Para este exemplo, o agendador adiciona as requisições e a sobrecarga, depois procura um nó com 2.25 CPU e 320 MiB de memória disponível. -Assim que um _Pod_ é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}} -para o _Pod_. É dentro deste _pod_ que o _container runtime_ subjacente vai criar _containers_. +Assim que um Pod é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}} +para o Pod. É dentro deste Pod que o agente de execução de contêiners subjacente vai criar contêineres. -Se o recurso tiver um limite definido para cada _container_ (_QoS_ garantida ou _Burstrable QoS_ com limites definidos), -o kubelet definirá um limite superior para o cgroup do _pod_ associado a esse recurso (cpu.cfs_quota_us para CPU -e memory.limit_in_bytes de memória). Este limite superior é baseado na soma dos limites do _container_ mais o `overhead` +Se o recurso tiver um limite definido para cada contêiner (_QoS_ garantida ou _Burstrable QoS_ com limites definidos), +o kubelet definirá um limite superior para o cgroup do Pod associado a esse recurso (cpu.cfs_quota_us para CPU +e memory.limit_in_bytes de memória). Este limite superior é baseado na soma dos limites do contêiner mais o `overhead` definido no _PodSpec_. -Para o CPU, se o _Pod_ for QoS garantida ou _Burstrable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos -pedidos ao _container_ mais o `overhead` definido no _PodSpec_. +Para CPU, se o Pod for QoS garantida ou _Burstrable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos +pedidos ao contêiner mais o `overhead` definido no _PodSpec_. 
-Olhando para o nosso exemplo, verifique os pedidos ao _container_ para a carga de trabalho: +Olhando para o nosso exemplo, verifique as requisições ao contêiner para a carga de trabalho: ```bash kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}' ``` -O total de pedidos ao _container_ são 2000m CPU e 200MiB de memória: +O total de requisições ao contêiner são 2000m CPU e 200MiB de memória: ``` map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi] ``` -Verifique isto contra o que é observado pelo nó: +Verifique isto comparado ao que é observado pelo nó: ```bash kubectl describe node | grep test-pod -B2 ``` -O output mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui _PodOverhead_: +A saída mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui _PodOverhead_: ``` Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- @@ -145,12 +140,12 @@ O output mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui ## Verificar os limites cgroup do Pod -Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`] (https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) -é usado no nó, que fornece uma CLI para _container runtimes_ compatíveis com CRI. Isto é um -exemplo avançado para mostrar o comportamento do _PodOverhead_, e não é esperado que os utilizadores precisem de verificar +Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) +é usado no nó, que fornece uma CLI para agentes de execução compatíveis com CRI. Isto é um +exemplo avançado para mostrar o comportamento do _PodOverhead_, e não é esperado que os usuários precisem verificar cgroups diretamente no nó. -Primeiro, no nó em particular, determine o identificador do _Pod_: +Primeiro, no nó em particular, determine o identificador do Pod: ```bash # Execute no nó onde o Pod está agendado @@ -163,15 +158,15 @@ A partir disto, pode determinar o caminho do cgroup para o _Pod_: sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath ``` -O caminho do cgroup resultante inclui o _container_ `pause` do _Pod_. O cgroup no nível do _Pod_ está um diretório acima. +O caminho do cgroup resultante inclui o contêiner `pause` do Pod. O cgroup no nível do Pod está um diretório acima. ``` "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a" ``` -Neste caso especifico, o caminho do cgroup do pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração cgroup de nível do _Pod_ para a memória: +Neste caso especifico, o caminho do cgroup do Pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração cgroup de nível do Pod para a memória: ```bash # Execute no nó onde o Pod está agendado -# Mude também o nome do cgroup de forma a combinar com o cgroup alocado ao pod. +# Mude também o nome do cgroup para combinar com o cgroup alocado ao Pod. 
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes ``` @@ -182,10 +177,10 @@ Isto é 320 MiB, como esperado: ### Observabilidade -Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics] (https://github.com/kubernetes/kube-state-metrics) -para ajudar a identificar quando o _PodOverhead_ está a ser utilizado e para ajudar a observar a estabilidade das cargas de trabalho +Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) +para ajudar a identificar quando o _PodOverhead_ está sendo utilizado e para ajudar a observar a estabilidade das cargas de trabalho em execução com uma sobrecarga (_Overhead_) definida. Esta funcionalidade não está disponível na versão 1.9 do kube-state-metrics, -mas é esperado num próximo _release_. Os utilizadores necessitarão entretanto de construir kube-state-metrics a partir da fonte. +mas é esperada em uma próxima versão. Os usuários necessitarão, entretanto, construir o kube-state-metrics a partir do código-fonte. diff --git a/content/pt/docs/concepts/scheduling/_index.md b/content/pt/docs/concepts/scheduling/_index.md deleted file mode 100644 index 577dbb8c87..0000000000 --- a/content/pt/docs/concepts/scheduling/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Escalonamento" -weight: 90 ---- - diff --git a/content/pt/docs/concepts/security/_index.md b/content/pt/docs/concepts/security/_index.md new file mode 100644 index 0000000000..63fca06b9a --- /dev/null +++ b/content/pt/docs/concepts/security/_index.md @@ -0,0 +1,5 @@ +--- +title: "Segurança" +weight: 81 +--- + diff --git a/content/pt/docs/concepts/security/overview.md b/content/pt/docs/concepts/security/overview.md new file mode 100644 index 0000000000..1f75e051ae --- /dev/null +++ b/content/pt/docs/concepts/security/overview.md @@ -0,0 +1,153 @@ +--- +title: Visão Geral da Segurança Cloud Native +content_type: concept +weight: 10 +--- + + + +Esta visão geral define um modelo para pensar sobre a segurança em Kubernetes no contexto da Segurança em Cloud Native. + +{{< warning >}} +Este modelo de segurança no contêiner fornece sugestões, e não políticas comprovadas de segurança da informação. +{{< /warning >}} + + + +## Os 4C da Segurança Cloud Native + +Você pode pensar na segurança em camadas. Os 4C da segurança Cloud Native são a Cloud, +Clusters, Contêineres e Código. + +{{< note >}} +Esta abordagem em camadas aumenta a [defesa em profundidade](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) +para segurança, que é amplamente considerada como uma boa prática de segurança para software de sistemas. +{{< /note >}} + +{{< figure src="/images/docs/4c.png" title="Os 4C da Segurança Cloud Native" >}} + +Cada camada do modelo de segurança Cloud Native é construída sobre a próxima camada mais externa. +A camada de código se beneficia de uma base forte (Cloud, Cluster, Contêiner) de camadas seguras. +Você não pode proteger contra padrões ruins de segurança nas camadas de base através de +segurança no nível do Código. + +## Cloud + +De muitas maneiras, a Cloud (ou servidores co-localizados, ou o datacenter corporativo) é a +[base de computação confiável](https://en.wikipedia.org/wiki/Trusted_computing_base) +de um cluster Kubernetes. Se a camada de Cloud é vulnerável (ou +configurada de alguma maneira vulnerável), então não há garantia de que os componentes construídos +em cima desta base estejam seguros.
Cada provedor de Cloud faz recomendações de segurança +para executar as cargas de trabalho com segurança nos ambientes. + +### Segurança no provedor da Cloud + +Se você estiver executando um cluster Kubernetes em seu próprio hardware ou em um provedor de nuvem diferente, +consulte sua documentação para melhores práticas de segurança. +Aqui estão os links para as documentações de segurança dos provedores mais populares de nuvem: + +{{< table caption="Cloud provider security" >}} + +Provedor IaaS | Link | +-------------------- | ------------ | +Alibaba Cloud | https://www.alibabacloud.com/trust-center | +Amazon Web Services | https://aws.amazon.com/security/ | +Google Cloud Platform | https://cloud.google.com/security/ | +IBM Cloud | https://www.ibm.com/cloud/security | +Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security | +VMWare VSphere | https://www.vmware.com/security/hardening-guides.html | + +{{< /table >}} + +### Segurança de Infraestrutura {#infrastructure-security} + +Sugestões para proteger sua infraestrutura em um cluster Kubernetes: + +{{< table caption="Infrastructure security" >}} + +Área de Interesse para Infraestrutura Kubernetes | Recomendação | +--------------------------------------------- | -------------- | +Acesso de rede ao servidor API (Control plane) | Todo o acesso ao control plane do Kubernetes publicamente na Internet não é permitido e é controlado por listas de controle de acesso à rede restritas ao conjunto de endereços IP necessários para administrar o cluster.| +Acesso de rede aos Nós (nodes) | Os nós devem ser configurados para _só_ aceitar conexões (por meio de listas de controle de acesso à rede) do control plane nas portas especificadas e aceitar conexões para serviços no Kubernetes do tipo NodePort e LoadBalancer. Se possível, esses nós não devem ser expostos inteiramente na Internet pública. +Acesso do Kubernetes à API do provedor de Cloud | Cada provedor de nuvem precisa conceder um conjunto diferente de permissões para o control plane e nós do Kubernetes. É melhor fornecer ao cluster permissão de acesso ao provedor de nuvem que segue o [princípio do menor privilégio](https://en.wikipedia.org/wiki/Principle_of_least_privilege) para os recursos que ele precisa administrar. A [documentação do Kops](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles) fornece informações sobre as políticas e roles do IAM. +Acesso ao etcd | O acesso ao etcd (o armazenamento de dados do Kubernetes) deve ser limitado apenas ao control plane. Dependendo de sua configuração, você deve tentar usar etcd sobre TLS. Mais informações podem ser encontradas na [documentação do etcd](https://github.com/etcd-io/etcd/tree/master/Documentation). +Encriptação etcd | Sempre que possível, é uma boa prática encriptar todas as unidades de armazenamento, mas como o etcd mantém o estado de todo o cluster (incluindo os Secrets), seu disco deve ser criptografado. + +{{< /table >}} + +## Cluster + +Existem duas áreas de preocupação para proteger o Kubernetes: + +* Protegendo os componentes do cluster que são configuráveis. +* Protegendo as aplicações que correm no cluster. + +### Componentes do Cluster {#cluster-components} + +Se você deseja proteger seu cluster de acesso acidental ou malicioso e adotar +boas práticas de informação, leia e siga os conselhos sobre +[protegendo seu cluster](/docs/tasks/administer-cluster/securing-a-cluster/). 
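+
+Como complemento à recomendação de encriptação do etcd na tabela acima, segue um esboço mínimo (assumindo que você administra as opções do kube-apiserver) de um arquivo `EncryptionConfiguration` para criptografar Secrets em repouso; o arquivo é referenciado pela opção `--encryption-provider-config` do kube-apiserver, e o nome e o valor da chave abaixo são hipotéticos:
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets                     # criptografa objetos Secret gravados no etcd
+    providers:
+      - aescbc:
+          keys:
+            - name: chave-exemplo   # nome hipotético
+              secret: <chave de 32 bytes codificada em base64>
+      - identity: {}                # permite ler dados gravados antes da criptografia
+```
+
+O procedimento completo está descrito em [Criptografia de dados em repouso](/docs/tasks/administer-cluster/encrypt-data/).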
+ +### Componentes no cluster (sua aplicação) {#cluster-applications} + +Dependendo da superfície de ataque de sua aplicação, você pode querer se concentrar em +tópicos específicos de segurança. Por exemplo: se você estiver executando um serviço (Serviço A) que é crítico +numa cadeia de outros recursos e outra carga de trabalho separada (Serviço B) que é +vulnerável a um ataque de exaustão de recursos, então o risco de comprometer o Serviço A +é alto se você não limitar os recursos do Serviço B. A tabela a seguir lista +áreas de atenção na segurança e recomendações para proteger cargas de trabalho em execução no Kubernetes: + +Área de interesse para a segurança do Workload | Recomendação | +------------------------------ | --------------------- | +Autorização RBAC (acesso à API Kubernetes) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/ +Autenticação | https://kubernetes.io/docs/concepts/security/controlling-access/ +Gerenciamento de segredos na aplicação (e encriptando-os no etcd em repouso) | https://kubernetes.io/docs/concepts/configuration/secret/
    https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/ +Políticas de segurança do Pod | https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +Qualidade de serviço (e gerenciamento de recursos de cluster) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ +Políticas de Rede | https://kubernetes.io/docs/concepts/services-networking/network-policies/ +TLS para Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls + +## Contêiner + +A segurança do contêiner está fora do escopo deste guia. Aqui estão recomendações gerais e +links para explorar este tópico: + +Área de Interesse para Contêineres | Recomendação | +------------------------------ | -------------- | +Scanners de Vulnerabilidade de Contêiner e Segurança de Dependência de SO | Como parte da etapa de construção de imagem, você deve usar algum scanner em seus contêineres em busca de vulnerabilidades. +Assinatura Imagem e Enforcement | Assinatura de imagens de contêineres para manter um sistema de confiança para o conteúdo de seus contêineres. +Proibir Usuários Privilegiados | Ao construir contêineres, consulte a documentação para criar usuários dentro dos contêineres que tenham o menor nível de privilégio no sistema operacional necessário para cumprir o objetivo do contêiner. +Use o Contêiner em Runtime com Isolamento mais Forte | Selecione [classes de contêiner runtime](/docs/concepts/containers/runtime-class/) com o provedor de isolamento mais forte. + +## Código + +O código da aplicação é uma das principais superfícies de ataque sobre a qual você tem maior controle. +Embora a proteção do código do aplicativo esteja fora do tópico de segurança do Kubernetes, aqui +são recomendações para proteger o código do aplicativo: + +### Segurança de código + +{{< table caption="Code security" >}} + +Área de Atenção para o Código | Recomendação | +-------------------------| -------------- | +Acesso só através de TLS | Se seu código precisar se comunicar por TCP, execute um handshake TLS com o cliente antecipadamente. Com exceção de alguns casos, encripte tudo em trânsito. Indo um passo adiante, é uma boa ideia encriptar o tráfego de rede entre os serviços. Isso pode ser feito por meio de um processo conhecido como mutual ou [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication), que realiza uma verificação bilateral da comunicação mediante os certificados nos serviços. | +Limitando intervalos de porta de comunicação | Essa recomendação pode ser um pouco autoexplicativa, mas, sempre que possível, você só deve expor as portas em seu serviço que são absolutamente essenciais para a comunicação ou coleta de métricas. | +Segurança na Dependência de Terceiros | É uma boa prática verificar regularmente as bibliotecas de terceiros de sua aplicação em busca de vulnerabilidades de segurança. Cada linguagem de programação possui uma ferramenta para realizar essa verificação automaticamente. | +Análise de Código Estático | A maioria das linguagens fornece uma maneira para analisar um extrato do código referente a quaisquer práticas de codificação potencialmente inseguras. Sempre que possível, você deve automatizar verificações usando ferramentas que podem verificar as bases de código em busca de erros de segurança comuns. Algumas das ferramentas podem ser encontradas em [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools). 
| +Ataques de sondagem dinâmica | Existem algumas ferramentas automatizadas que você pode executar contra seu serviço para tentar alguns dos ataques mais conhecidos. Isso inclui injeção de SQL, CSRF e XSS. Uma das ferramentas de análise dinâmica mais populares é o [OWASP Zed Attack proxy](https://owasp.org/www-project-zap/). | + +{{< /table >}} + +## {{% heading "whatsnext" %}} + +Saiba mais sobre os tópicos de segurança do Kubernetes: + +* [Padrões de segurança do Pod](/docs/concepts/security/pod-security-standards/) +* [Políticas de rede para Pods](/docs/concepts/services-networking/network-policies/) +* [Controle de acesso à API Kubernetes](/docs/concepts/security/controlling-access) +* [Protegendo seu cluster](/docs/tasks/administer-cluster/securing-a-cluster/) +* [Criptografia de dados em trânsito](/docs/tasks/tls/managing-tls-in-a-cluster/) para a camada de gerenciamento +* [Criptografia de dados em repouso](/docs/tasks/administer-cluster/encrypt-data/) +* [Secrets no Kubernetes](/docs/concepts/configuration/secret/) +* [Runtime class](/docs/concepts/containers/runtime-class) \ No newline at end of file diff --git a/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md new file mode 100644 index 0000000000..67f23e2bb6 --- /dev/null +++ b/content/pt/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -0,0 +1,169 @@ +--- +title: Autenticando com Tokens de Inicialização +content_type: concept +weight: 20 +--- + + + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +Os tokens de inicialização são um _bearer token_ simples que devem ser utilizados +ao criar novos clusters ou quando novos nós são registrados em clusters existentes. Eles foram construídos +para suportar a ferramenta [kubeadm](/docs/reference/setup-tools/kubeadm/), mas podem ser utilizados em outros contextos para usuários que desejam inicializar clusters sem utilizar o `kubeadm`. +Foram também construídos para funcionar, via políticas RBAC, com o sistema de [Inicialização do Kubelet via TLS](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). + + +## Visão geral dos tokens de inicialização + +Os tokens de inicialização são definidos com um tipo específico de _secrets_ (`bootstrap.kubernetes.io/token`) que existem no namespace `kube-system`. Estes _secrets_ são então lidos pelo autenticador de inicialização do servidor de API. +Tokens expirados são removidos pelo controlador _TokenCleaner_ no gerenciador de controle - kube-controller-manager. +Os tokens também são utilizados para criar uma assinatura para um ConfigMap específico usado no processo de descoberta através de um controlador denominado `BootstrapSigner`. + +## Formato do Token + +Tokens de inicialização têm o formato `abcdef.0123456789abcdef`. Mais formalmente, eles devem corresponder à expressão regular `[a-z0-9]{6}\.[a-z0-9]{16}`. + +A primeira parte do token é um identificador ("Token ID") e é considerado informação pública. +Ele é utilizado para se referir a um token sem vazar a parte secreta usada para autenticação. +A segunda parte é o _secret_ do token e somente deve ser compartilhado com partes confiáveis.
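+
+Por exemplo, um esboço (assumindo que a ferramenta `kubeadm` esteja disponível) de como gerar um valor nesse formato e identificar as duas partes:
+
+```bash
+# Gera um token aleatório no formato esperado (não cria nada no cluster)
+kubeadm token generate
+# Saída ilustrativa: 07401b.f395accd246ae52d
+# "07401b" é o identificador público (Token ID); "f395accd246ae52d" é a parte secreta
+```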
+ +## Habilitando autenticação com tokens de inicialização + +O autenticador de tokens de inicialização pode ser habilitado utilizando a seguinte opção no servidor de API: + +``` +--enable-bootstrap-token-auth +``` + +Quando habilitado, tokens de inicialização podem ser utilizado como credenciais _bearer token_ +para autenticar requisições no servidor de API. + +```http +Authorization: Bearer 07401b.f395accd246ae52d +``` + +Tokens são autenticados como o usuário `system:bootstrap:` e são membros +do grupo `system:bootstrappers`. Grupos adicionais podem ser +especificados dentro do _secret_ do token. + +Tokens expirados podem ser removidos automaticamente ao habilitar o controlador `tokencleaner` +do gerenciador de controle - kube-controller-manager. + +``` +--controllers=*,tokencleaner +``` + +## Formato do _secret_ dos tokens de inicialização + +Cada token válido possui um _secret_ no namespace `kube-system`. Você pode +encontrar a documentação completa [aqui](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). + +Um _secret_ de token se parece com o exemplo abaixo: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + # Nome DEVE seguir o formato "bootstrap-token-" + name: bootstrap-token-07401b + namespace: kube-system + +# Tipo DEVE ser 'bootstrap.kubernetes.io/token' +type: bootstrap.kubernetes.io/token +stringData: + # Descrição legível. Opcional. + description: "The default bootstrap token generated by 'kubeadm init'." + + # identificador do token e _secret_. Obrigatório. + token-id: 07401b + token-secret: f395accd246ae52d + + # Validade. Opcional. + expiration: 2017-03-10T03:22:11Z + + # Usos permitidos. + usage-bootstrap-authentication: "true" + usage-bootstrap-signing: "true" + + # Grupos adicionais para autenticar o token. Devem começar com "system:bootstrappers:" + auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress +``` + +O tipo do _secret_ deve ser `bootstrap.kubernetes.io/token` e o nome deve seguir o formato `bootstrap-token-`. Ele também tem que existir no namespace `kube-system`. + +Os membros listados em `usage-bootstrap-*` indicam qual a intenção de uso deste _secret_. O valor `true` deve ser definido para que seja ativado. + +* `usage-bootstrap-authentication` indica que o token pode ser utilizado para autenticar no servidor de API como um _bearer token_. +* `usage-bootstrap-signing` indica que o token pode ser utilizado para assinar o ConfigMap `cluster-info` como descrito abaixo. + +O campo `expiration` controla a expiração do token. Tokens expirados são +rejeitados quando usados para autenticação e ignorados durante assinatura de ConfigMaps. +O valor de expiração é codificado como um tempo absoluto UTC utilizando a RFC3339. Para automaticamente +remover tokens expirados basta habilitar o controlador `tokencleaner`. + +## Gerenciamento de tokens com kubeadm + +Você pode usar a ferramenta `kubeadm` para gerenciar tokens em um cluster. Veja [documentação de tokens kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm-token/) para mais detalhes. + +## Assinatura de ConfigMap + +Além de autenticação, os tokens podem ser utilizados para assinar um ConfigMap. Isto pode +ser utilizado em estágio inicial do processo de inicialização de um cluster, antes que o cliente confie +no servidor de API. O Configmap assinado pode ser autenticado por um token compartilhado. 
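+
+Em clusters criados com o `kubeadm`, por exemplo, é possível inspecionar esse ConfigMap (mostrado mais adiante) com um comando como o esboço abaixo:
+
+```bash
+# O ConfigMap cluster-info fica no namespace kube-public
+kubectl get configmap cluster-info --namespace=kube-public --output=yaml
+```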
+ +Habilite a assinatura de ConfigMap ao habilitar o controlador `bootstrapsigner` no gerenciador de controle - kube-controller-manager. + +``` +--controllers=*,bootstrapsigner +``` +O ConfigMap assinado é o `cluster-info` no namespace `kube-public`. +No fluxo típico, um cliente lê o ConfigMap enquanto ainda não autenticado +e ignora os erros da camada de transporte seguro (TLS). +Ele então valida o conteúdo do ConfigMap ao verificar a assinatura contida no ConfigMap. + +O ConfigMap pode se parecer com o exemplo abaixo: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: cluster-info + namespace: kube-public +data: + jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U + kubeconfig: | + apiVersion: v1 + clusters: + - cluster: + certificate-authority-data: + server: https://10.138.0.2:6443 + name: "" + contexts: [] + current-context: "" + kind: Config + preferences: {} + users: [] +``` + +O membro `kubeconfig` do ConfigMap é um arquivo de configuração contendo somente +as informações do cluster preenchidas. A informação chave sendo comunicada aqui +está em `certificate-authority-data`. Isto poderá ser expandido no futuro. + +A assinatura é feita utilizando-se assinatura JWS em modo "separado". Para validar +a assinatura, o usuário deve codificar o conteúdo do `kubeconfig` de acordo com as regras do JWS +(codificando em base64 e descartando qualquer `=` ao final). O conteúdo codificado +e então usado para formar um JWS inteiro, inserindo-o entre os 2 pontos. Você pode +verificar o JWS utilizando o esquema `HS256` (HMAC-SHA256) com o token completo +(por exemplo: `07401b.f395accd246ae52d`) como o _secret_ compartilhado. Usuários _devem_ +verificar que o algoritmo HS256 (que é um método de assinatura simétrica) está sendo utilizado. + + +{{< warning >}} +Qualquer parte em posse de um token de inicialização pode criar uma assinatura válida +daquele token. Não é recomendável, quando utilizando assinatura de ConfigMap, que se compartilhe +o mesmo token com muitos clientes, uma vez que um cliente comprometido pode abrir brecha para potenciais +"homem no meio" entre outro cliente que confia na assinatura para estabelecer inicialização via camada de transporte seguro (TLS). +{{< /warning >}} + +Consulte a seção de [detalhes de implementação do kubeadm](/docs/reference/setup-tools/kubeadm/implementation-details/) para mais informações. \ No newline at end of file diff --git a/content/pt/docs/reference/glossary/cloud-controller-manager.md b/content/pt/docs/reference/glossary/cloud-controller-manager.md new file mode 100644 index 0000000000..622d3f842e --- /dev/null +++ b/content/pt/docs/reference/glossary/cloud-controller-manager.md @@ -0,0 +1,21 @@ +--- +title: Gerenciador de controle de nuvem +id: cloud-controller-manager +date: 2018-04-12 +full_link: /docs/concepts/architecture/cloud-controller/ +short_description: > + Componente da camada de gerenciamento que integra Kubernetes com provedores de nuvem de terceiros. +aka: +tags: +- core-object +- architecture +- operation +--- + + Um componente da {{< glossary_tooltip text="camada de gerenciamento" term_id="control-plane" >}} do Kubernetes + que incorpora a lógica de controle específica da nuvem. O gerenciador de controle de nuvem permite que você vincule seu + _cluster_ na API do seu provedor de nuvem, e separar os componentes que interagem com essa plataforma de nuvem a partir de componentes que apenas interagem com seu cluster. 
+ + + +Desassociando a lógica de interoperabilidade entre o Kubernetes e a infraestrutura de nuvem subjacente, o componente gerenciador de controle de nuvem permite que os provedores de nuvem desenvolvam e disponibilizem recursos em um ritmo diferente em comparação com o projeto principal do Kubernetes. diff --git a/content/pt/docs/reference/glossary/cncf.md b/content/pt/docs/reference/glossary/cncf.md new file mode 100644 index 0000000000..f9ad249547 --- /dev/null +++ b/content/pt/docs/reference/glossary/cncf.md @@ -0,0 +1,20 @@ +--- +title: Cloud Native Computing Foundation (CNCF) +id: cncf +date: 2019-05-26 +full_link: https://cncf.io/ +short_description: > + Cloud Native Computing Foundation + +aka: +tags: +- community +--- + A **Cloud Native Computing Foundation (CNCF)** constrói um ecossistema sustentável e promove uma comunidade no entorno dos [projetos](https://www.cncf.io/projects/) que orquestram contêineres como parte de uma arquitetura de microserviços. + +**Kubernetes** é um projeto CNCF. + + + +A **CNCF** é uma sub-fundação da [Linux Foundation](https://www.linuxfoundation.org/). +Sua missão é tornar a computação nativa em nuvem onipresente. diff --git a/content/pt/docs/reference/glossary/container-runtime.md b/content/pt/docs/reference/glossary/container-runtime.md new file mode 100644 index 0000000000..8c1cb808ef --- /dev/null +++ b/content/pt/docs/reference/glossary/container-runtime.md @@ -0,0 +1,18 @@ +--- +title: Agente de execução de contêiner +id: container-runtime +date: 2019-06-05 +full_link: /docs/setup/production-environment/container-runtimes +short_description: > + O agente de execução de contêiner é o software responsável por executar os contêineres. + +aka: +tags: +- fundamental +- workload +--- + O agente de execução (_runtime_) de contêiner é o software responsável por executar os contêineres. + + + +O Kubernetes suporta diversos agentes de execução de contêineres: {{< glossary_tooltip term_id="docker">}}, {{< glossary_tooltip term_id="containerd" >}}, {{< glossary_tooltip term_id="cri-o" >}}, e qualquer implementação do [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). diff --git a/content/pt/docs/reference/glossary/control-plane.md b/content/pt/docs/reference/glossary/control-plane.md index 0465d5a2b8..c65759b83b 100644 --- a/content/pt/docs/reference/glossary/control-plane.md +++ b/content/pt/docs/reference/glossary/control-plane.md @@ -1,13 +1,13 @@ --- -title: Ambiente de gerenciamento +title: Camada de gerenciamento id: control-plane date: 2020-04-19 full_link: short_description: > - A camada de orquestração de contêiner que expõe a API e as interfaces para definir, implantar e gerenciar o ciclo de vida dos contêineres. + A camada de gerenciamento de contêiner que expõe a API e as interfaces para definir, implantar e gerenciar o ciclo de vida dos contêineres. aka: tags: - fundamental --- - A camada de orquestração de contêiner que expõe a API e as interfaces para definir, implantar e gerenciar o ciclo de vida dos contêineres. + A camada de gerenciamento de contêiner que expõe a API e as interfaces para definir, implantar e gerenciar o ciclo de vida dos contêineres. 
diff --git a/content/pt/docs/reference/glossary/etcd.md b/content/pt/docs/reference/glossary/etcd.md new file mode 100644 index 0000000000..0761a53865 --- /dev/null +++ b/content/pt/docs/reference/glossary/etcd.md @@ -0,0 +1,19 @@ +--- +title: etcd +id: etcd +date: 2018-04-12 +full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/ +short_description: > + Armazenamento do tipo Chave-Valor consistente e em alta-disponibilidade usado como repositório de apoio do Kubernetes para todos os dados do cluster. +aka: +tags: +- architecture +- storage +--- + Armazenamento do tipo Chave-Valor consistente e em alta-disponibilidade usado como repositório de apoio do Kubernetes para todos os dados do cluster. + + + +Se o seu cluster Kubernetes usa **etcd** como seu armazenamento de apoio, certifique-se de ter um plano de [back up](/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) para seus dados. + +Você pode encontrar informações detalhadas sobre o etcd na seção oficial da [documentação](https://etcd.io/docs/). diff --git a/content/pt/docs/reference/glossary/kube-apiserver.md b/content/pt/docs/reference/glossary/kube-apiserver.md new file mode 100644 index 0000000000..f5ce3dba1a --- /dev/null +++ b/content/pt/docs/reference/glossary/kube-apiserver.md @@ -0,0 +1,22 @@ +--- +title: API server +id: kube-apiserver +date: 2018-04-12 +full_link: /docs/concepts/overview/components/#kube-apiserver +short_description: > + O componente da camada de gerenciamento que serve a API do Kubernetes. + +aka: +- kube-apiserver +tags: +- architecture +- fundamental +--- + O servidor de API é um componente da {{< glossary_tooltip text="Camada de gerenciamento" term_id="control-plane" >}} do Kubernetes que expõe a API do Kubernetes. +O servidor de API é o _front end_ para a camada de gerenciamento do Kubernetes. + + + +A principal implementação de um servidor de API do Kubernetes é [kube-apiserver](/docs/reference/generated/kube-apiserver/). +O kube-apiserver foi projetado para ser escalonado horizontalmente — ou seja, ele pode ser escalado com a implantação de mais instâncias. +Você pode executar várias instâncias do kube-apiserver e balancear (balanceamento de carga, etc) o tráfego entre essas instâncias. diff --git a/content/pt/docs/reference/glossary/kube-controller-manager.md b/content/pt/docs/reference/glossary/kube-controller-manager.md new file mode 100644 index 0000000000..0a52ec27ea --- /dev/null +++ b/content/pt/docs/reference/glossary/kube-controller-manager.md @@ -0,0 +1,18 @@ +--- +title: kube-controller-manager +id: kube-controller-manager +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-controller-manager/ +short_description: > + Componente da camada de gerenciamento que executa os processos de controle. + +aka: +tags: +- architecture +- fundamental +--- + Componente da camada de gerenciamento que executa os processos de {{< glossary_tooltip text="controlador" term_id="controller" >}}. + + + +Logicamente, cada _{{< glossary_tooltip text="controlador" term_id="controller" >}}_ está em um processo separado, mas para reduzir a complexidade, eles todos são compilados num único binário e executam em um processo único. 
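+
+Como referência rápida, um esboço (assumindo um cluster criado com `kubeadm`, no qual esses componentes executam como Pods estáticos) para visualizar os componentes da camada de gerenciamento:
+
+```bash
+# Lista os Pods do namespace kube-system; os nomes exatos variam conforme o cluster
+kubectl get pods --namespace=kube-system
+```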
diff --git a/content/pt/docs/reference/glossary/kube-proxy.md b/content/pt/docs/reference/glossary/kube-proxy.md new file mode 100644 index 0000000000..1f9a075bd6 --- /dev/null +++ b/content/pt/docs/reference/glossary/kube-proxy.md @@ -0,0 +1,22 @@ +--- +title: kube-proxy +id: kube-proxy +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-proxy/ +short_description: > + `kube-proxy` é um _proxy_ de rede executado em cada nó do _cluster_. + +aka: +tags: +- fundamental +- networking +--- + kube-proxy é um _proxy_ de rede executado em cada {{< glossary_tooltip text="nó" term_id="node" >}} no seu _cluster_, +implementando parte do conceito de {{< glossary_tooltip text="serviço" term_id="service">}} do Kubernetes. + + + +[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) +mantém regras de rede nos nós. Estas regras de rede permitem a comunicação de rede com seus _pods_ a partir de sessões de rede dentro ou fora de seu _cluster_. + +kube-proxy usa a camada de filtragem de pacotes do sistema operacional se houver uma e estiver disponível. Caso contrário, o kube-proxy encaminha o tráfego ele mesmo. diff --git a/content/pt/docs/reference/glossary/kube-scheduler.md b/content/pt/docs/reference/glossary/kube-scheduler.md new file mode 100644 index 0000000000..1030d27853 --- /dev/null +++ b/content/pt/docs/reference/glossary/kube-scheduler.md @@ -0,0 +1,17 @@ +--- +title: kube-scheduler +id: kube-scheduler +date: 2018-04-12 +full_link: /docs/reference/generated/kube-scheduler/ +short_description: > + Componente da camada de gerenciamento que observa os _pods_ recém-criados sem nenhum nó atribuído, e seleciona um nó para executá-los. +aka: +tags: +- architecture +--- +Componente da camada de gerenciamento que observa os _{{< glossary_tooltip term_id="pod" text="pods" >}}_ recém-criados sem nenhum {{< glossary_tooltip term_id="node" text="nó">}} atribuído, e seleciona um nó para executá-los. + + + +Os fatores levados em consideração para as decisões de agendamento incluem: +requisitos de recursos individuais e coletivos, hardware/software/política de restrições, especificações de afinidade e antiafinidade, localidade de dados, interferência entre cargas de trabalho, e prazos. diff --git a/content/pt/docs/tutorials/_index.md b/content/pt/docs/tutorials/_index.md index a488f84388..bc39fd817a 100644 --- a/content/pt/docs/tutorials/_index.md +++ b/content/pt/docs/tutorials/_index.md @@ -21,7 +21,7 @@ Antes de iniciar um tutorial, é interessante que vocẽ salve a página de [Glo * [Introdução ao Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) é um curso gratuíto da edX que te guia no entendimento do Kubernetes, seus conceitos, bem como na execução de tarefas mais simples. -* [Hello Minikube](/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube +* [Olá, Minikube!](/pt/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube ## Configuração diff --git a/content/pt/docs/tutorials/hello-minikube.md b/content/pt/docs/tutorials/hello-minikube.md new file mode 100644 index 0000000000..0db5d20ddc --- /dev/null +++ b/content/pt/docs/tutorials/hello-minikube.md @@ -0,0 +1,258 @@ +--- +title: Olá, Minikube! +content_type: tutorial +weight: 5 +menu: + main: + title: "Iniciar" + weight: 10 + post: > +

    Pronto para meter a mão na massa? Vamos criar um cluster Kubernetes simples e executar uma aplicação exemplo.

    +card: + name: tutorials + weight: 10 +--- + + + +Este tutorial mostra como executar uma aplicação exemplo no Kubernetes utilizando o [Minikube](https://minikube.sigs.k8s.io) e o [Katacoda](https://www.katacoda.com). O Katacoda disponibiliza um ambiente Kubernetes gratuito e acessível via navegador. + +{{< note >}} +Você também consegue seguir os passos desse tutorial instalando o Minikube localmente. Para instruções de instalação, acesse: [iniciando com minikube](https://minikube.sigs.k8s.io/docs/start/). +{{< /note >}} + +## Objetivos + +* Instalar uma aplicação exemplo no minikube. +* Executar a aplicação. +* Visualizar os logs da aplicação. + +## Antes de você iniciar + +Este tutorial disponibiliza uma imagem de contêiner que utiliza o NGINX para retornar todas as requisições. + + + +## Criando um cluster do Minikube + +1. Clique no botão abaixo **para iniciar o terminal do Katacoda**. + + {{< kat-button >}} + +{{< note >}} +Se você instalou o Minikube localmente, execute: `minikube start`. +{{< /note >}} + +2. Abra o painel do Kubernetes em um navegador: + + ```shell + minikube dashboard + ``` + +3. Apenas no ambiente do Katacoda: Na parte superior do terminal, clique em **Preview Port 30000**. + +## Criando um Deployment + +Um [*Pod*](/docs/concepts/workloads/pods/) Kubernetes consiste em um ou mais contêineres agrupados para fins de administração e gerenciamento de rede. O Pod desse tutorial possui apenas um contêiner. Um [*Deployment*](/docs/concepts/workloads/controllers/deployment/) Kubernetes verifica a saúde do seu Pod e reinicia o contêiner do Pod caso o mesmo seja finalizado. Deployments são a maneira recomendada de gerenciar a criação e escalonamento dos Pods. + +1. Usando o comando `kubectl create` para criar um Deployment que gerencia um Pod. O Pod executa um contêiner baseado na imagem docker disponibilizada. + + ```shell + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + ``` + +2. Visualizando o Deployment: + + ```shell + kubectl get deployments + ``` + + A saída será semelhante a: + + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + hello-node 1/1 1 1 1m + ``` + +3. Visualizando o Pod: + + ```shell + kubectl get pods + ``` + + A saída será semelhante a: + + ``` + NAME READY STATUS RESTARTS AGE + hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m + ``` + +4. Visualizando os eventos do cluster: + + ```shell + kubectl get events + ``` + +5. Visualizando a configuração do `kubectl`: + + ```shell + kubectl config view + ``` + +{{< note >}} +Para mais informações sobre o comando `kubectl`, veja o [kubectl overview](/docs/reference/kubectl/overview/). +{{< /note >}} + +## Criando um serviço + +Por padrão, um Pod só é acessível utilizando o seu endereço IP interno no cluster Kubernetes. Para dispobiblilizar o contêiner `hello-node` fora da rede virtual do Kubernetes, você deve expor o Pod como um [*serviço*](/docs/concepts/services-networking/service/) Kubernetes. + +1. Expondo o Pod usando o comando `kubectl expose`: + + ```shell + kubectl expose deployment hello-node --type=LoadBalancer --port=8080 + ``` + + O parâmetro `--type=LoadBalancer` indica que você deseja expor o seu serviço fora do cluster Kubernetes. + + A aplicação dentro da imagem `k8s.gcr.io/echoserver` "escuta" apenas na porta TCP 8080. Se você usou + `kubectl expose` para expor uma porta diferente, os clientes não conseguirão se conectar a essa outra porta. + +2. 
Visualizando o serviço que você acabou de criar: + + ```shell + kubectl get services + ``` + + A saída será semelhante a: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-node LoadBalancer 10.108.144.78 8080:30369/TCP 21s + kubernetes ClusterIP 10.96.0.1 443/TCP 23m + ``` + + Em provedores de Cloud que fornecem serviços de balanceamento de carga para o Kubernetes, um IP externo seria provisionado para acessar o serviço. No Minikube, o tipo `LoadBalancer` torna o serviço acessível por meio do comando `minikube service`. + +3. Executar o comando a seguir: + + ```shell + minikube service hello-node + ``` + +4. (**Apenas no ambiente do Katacoda**) Clicar no sinal de mais e então clicar em **Select port to view on Host 1**. + +5. (**Apenas no ambiente do Katacoda**) Observe o número da porta com 5 dígitos exibido ao lado de `8080` na saída do serviço. Este número de porta é gerado aleatoriamente e pode ser diferente para você. Digite seu número na caixa de texto do número da porta e clique em **Display Port**. Usando o exemplo anterior, você digitaria `30369`. + +Isso abre uma janela do navegador, acessa o seu aplicativo e mostra o retorno da requisição. + +## Habilitando Complementos (addons) + +O Minikube inclui um conjunto integrado de {{< glossary_tooltip text="complementos" term_id="addons" >}} que podem ser habilitados, desabilitados e executados no ambiente Kubernetes local. + +1. Listando os complementos suportados atualmente: + + ```shell + minikube addons list + ``` + + A saída será semelhante a: + + ``` + addon-manager: enabled + dashboard: enabled + default-storageclass: enabled + efk: disabled + freshpod: disabled + gvisor: disabled + helm-tiller: disabled + ingress: disabled + ingress-dns: disabled + logviewer: disabled + metrics-server: disabled + nvidia-driver-installer: disabled + nvidia-gpu-device-plugin: disabled + registry: disabled + registry-creds: disabled + storage-provisioner: enabled + storage-provisioner-gluster: disabled + ``` + +2. Habilitando um complemento, por exemplo, `metrics-server`: + + ```shell + minikube addons enable metrics-server + ``` + + A saída será semelhante a: + + ``` + metrics-server was successfully enabled + ``` + +3. Visualizando os Pods e os Serviços que você acabou de criar: + + ```shell + kubectl get pod,svc -n kube-system + ``` + + A saída será semelhante a: + + ``` + NAME READY STATUS RESTARTS AGE + pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m + pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m + pod/metrics-server-67fb648c5 1/1 Running 0 26s + pod/etcd-minikube 1/1 Running 0 34m + pod/influxdb-grafana-b29w8 2/2 Running 0 26s + pod/kube-addon-manager-minikube 1/1 Running 0 34m + pod/kube-apiserver-minikube 1/1 Running 0 34m + pod/kube-controller-manager-minikube 1/1 Running 0 34m + pod/kube-proxy-rnlps 1/1 Running 0 34m + pod/kube-scheduler-minikube 1/1 Running 0 34m + pod/storage-provisioner 1/1 Running 0 34m + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/metrics-server ClusterIP 10.96.241.45 80/TCP 26s + service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 34m + service/monitoring-grafana NodePort 10.99.24.54 80:30002/TCP 26s + service/monitoring-influxdb ClusterIP 10.111.169.94 8083/TCP,8086/TCP 26s + ``` + +4. 
Desabilitando o complemento `metrics-server`: + + ```shell + minikube addons disable metrics-server + ``` + + A saída será semelhante a: + + ``` + metrics-server was successfully disabled + ``` + +## Removendo os recursos do Minikube + +Agora você pode remover todos os recursos criados no seu cluster: + +```shell +kubectl delete service hello-node +kubectl delete deployment hello-node +``` +(**Opcional**) Pare a máquina virtual (VM) do Minikube: + +```shell +minikube stop +``` +(**Opcional**) Remova a VM do Minikube: + +```shell +minikube delete +``` + +## Próximos passos + +* Aprender mais sobre [Deployment objects](/docs/concepts/workloads/controllers/deployment/). +* Aprender mais sobre [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/). +* Aprender mais sobre [Service objects](/docs/concepts/services-networking/service/). + diff --git a/content/pt/docs/tutorials/kubernetes-basics/_index.html b/content/pt/docs/tutorials/kubernetes-basics/_index.html index 90f89ac3da..b397afba37 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/_index.html +++ b/content/pt/docs/tutorials/kubernetes-basics/_index.html @@ -24,7 +24,7 @@ card:

    Básico do Kubernetes

    -

    Este tutorial fornece instruções básicas sobre o sistema de orquestração de cluster do Kubernetes. Cada módulo contém algumas informações básicas sobre os principais recursos e conceitos do Kubernetes e inclui um tutorial online interativo. Esses tutoriais interativos permitem que você mesmo gerencie um cluster simples e seus aplicativos em contêineres.

    +

    Este tutorial fornece instruções básicas sobre o sistema de orquestração de cluster do Kubernetes. Cada módulo contém algumas informações básicas sobre os principais recursos e conceitos do Kubernetes e inclui um tutorial online interativo. Esses tutoriais interativos permitem que você mesmo gerencie um cluster simples e seus aplicativos em contêineres.

    Usando os tutoriais interativos, você pode aprender a:

    • Implante um aplicativo em contêiner em um cluster.
    • @@ -46,7 +46,7 @@ card:

    - +

    Módulos básicos do Kubernetes

    @@ -54,25 +54,25 @@ card:
    @@ -82,17 +82,17 @@ card:
    diff --git a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index 9be46e849d..5ef10a9920 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -25,7 +25,7 @@ weight: 20
    diff --git a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index fd5025ab45..8301e8890c 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -29,16 +29,16 @@ weight: 10

    Clusters do Kubernetes

    - O Kubernetes coordena um cluster altamente disponível de computadores conectados para funcionar como uma única unidade. - As abstrações no Kubernetes permitem implantar aplicativos em contêineres em um cluster sem amarrá-los especificamente a máquinas individuais. - Para fazer uso desse novo modelo de implantação, os aplicativos precisam ser empacotados de uma forma que os desacople dos hosts individuais: eles precisam ser colocados em contêineres. Os aplicativos em contêineres são mais flexíveis e disponíveis do que nos modelos de implantação anteriores, nos quais os aplicativos eram instalados diretamente em máquinas específicas como pacotes profundamente integrados ao host. + O Kubernetes coordena um cluster com alta disponibilidade de computadores conectados para funcionar como uma única unidade. + As abstrações no Kubernetes permitem implantar aplicativos em contêineres em um cluster sem amarrá-los especificamente as máquinas individuais. + Para fazer uso desse novo modelo de implantação, os aplicativos precisam ser empacotados de uma forma que os desacoplem dos hosts individuais: eles precisam ser empacotados em contêineres. Os aplicativos em contêineres são mais flexíveis e disponíveis do que nos modelos de implantação anteriores, nos quais os aplicativos eram instalados diretamente em máquinas específicas como pacotes profundamente integrados ao host. O Kubernetes automatiza a distribuição e o agendamento de contêineres de aplicativos em um cluster de maneira mais eficiente. O Kubernetes é uma plataforma de código aberto e está pronto para produção.

    Um cluster Kubernetes consiste em dois tipos de recursos:

      -
    • O Master coordena o cluster
    • -
    • Os Nodes são os trabalhadores que executam aplicativos
    • +
    • A Camada de gerenciamento (Control Plane) coordena o cluster
    • +
    • Os Nós (Nodes) são os nós de processamento que executam aplicativos

    @@ -75,22 +75,22 @@ weight: 10
    -

    O mestre é responsável por gerenciar o cluster. O mestre coordena todas as atividades em seu cluster, como programação de aplicativos, manutenção do estado desejado dos aplicativos, escalonamento de aplicativos e lançamento de novas atualizações.

    -

    Um nó é uma VM ou um computador físico que atua como uma máquina de trabalho em um cluster Kubernetes. Cada nó tem um Kubelet, que é um agente para gerenciar o nó e se comunicar com o mestre do Kubernetes. O nó também deve ter ferramentas para lidar com operações de contêiner, como containerd ou Docker. Um cluster Kubernetes que lida com o tráfego de produção deve ter no mínimo três nós.

    +

    A camada de gerenciamento é responsável por gerenciar o cluster. A camada de gerenciamento coordena todas as atividades em seu cluster, como programação de aplicativos, manutenção do estado desejado dos aplicativos, escalonamento de aplicativos e lançamento de novas atualizações.

    +

    Um nó é uma VM ou um computador físico que atua como um nó de processamento em um cluster Kubernetes. Cada nó tem um Kubelet, que é um agente para gerenciar o nó e se comunicar com a camada de gerenciamento do Kubernetes. O nó também deve ter ferramentas para lidar com operações de contêiner, como containerd ou Docker. Um cluster Kubernetes que lida com o tráfego de produção deve ter no mínimo três nós.
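A título de ilustração, segue um esboço de como inspecionar os nós de um cluster com o kubectl, assumindo um cluster já em execução e o kubectl configurado; o nome do nó é apenas um exemplo e não faz parte do texto original:

```shell
# Lista os nós do cluster e o estado de cada um
kubectl get nodes

# Mostra os detalhes de um nó específico (kubelet, capacidade, condições)
kubectl describe node <nome-do-no>
```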

    -

    Os mestres gerenciam o cluster e os nós que são usados ​​para hospedar os aplicativos em execução.

    +

    As camadas de gerenciamento gerenciam o cluster e os nós que são usados ​​para hospedar os aplicativos em execução.

    -

    Ao implantar aplicativos no Kubernetes, você diz ao mestre para iniciar os contêineres de aplicativos. O mestre agenda os contêineres para serem executados nos nós do cluster. Os nós se comunicam com o mestre usando a API Kubernetes , que o mestre expõe. Os usuários finais também podem usar a API Kubernetes diretamente para interagir com o cluster.

    +

    Ao implantar aplicativos no Kubernetes, você diz à camada de gerenciamento para iniciar os contêineres de aplicativos. A camada de gerenciamento agenda os contêineres para serem executados nos nós do cluster. Os nós se comunicam com o camada de gerenciamento usando a API do Kubernetes , que a camada de gerenciamento expõe. Os usuários finais também podem usar a API do Kubernetes diretamente para interagir com o cluster.

    -

    Um cluster Kubernetes pode ser implantado em máquinas físicas ou virtuais. Para começar o desenvolvimento do Kubernetes, você pode usar o Minikube. O Minikube é uma implementação leve do Kubernetes que cria uma VM em sua máquina local e implanta um cluster simples contendo apenas um nó. O Minikube está disponível para sistemas Linux, macOS e Windows. O Minikube CLI fornece operações básicas de inicialização para trabalhar com seu cluster, incluindo iniciar, parar, status e excluir. Para este tutorial, no entanto, você usará um terminal online fornecido com o Minikube pré-instalado.

    +

    Um cluster Kubernetes pode ser implantado em máquinas físicas ou virtuais. Para começar o desenvolvimento do Kubernetes, você pode usar o Minikube. O Minikube é uma implementação leve do Kubernetes que cria uma VM em sua máquina local e implanta um cluster simples contendo apenas um nó. O Minikube está disponível para sistemas Linux, macOS e Windows. A linha de comando (cli) do Minikube fornece operações básicas de inicialização para trabalhar com seu cluster, incluindo iniciar, parar, status e excluir. Para este tutorial, no entanto, você usará um terminal online fornecido com o Minikube pré-instalado.
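Como referência, um esboço das operações básicas da linha de comando do Minikube mencionadas acima, assumindo que o Minikube já esteja instalado na máquina local (no terminal online usado neste tutorial esses comandos podem não ser necessários):

```shell
# Inicia um cluster local com um único nó
minikube start

# Verifica o estado do cluster local
minikube status

# Para a máquina virtual sem removê-la
minikube stop

# Remove o cluster local
minikube delete
```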

    Agora que você sabe o que é Kubernetes, vamos para o tutorial online e iniciar nosso primeiro cluster!

    @@ -100,7 +100,7 @@ weight: 10 diff --git a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index cbc22b0d60..96f73e9250 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -37,7 +37,7 @@ weight: 20
    diff --git a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index a4f60e374c..1d927cf038 100644 --- a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,7 @@ weight: 10 Assim que o seu cluster Kubernetes estiver em execução você pode implementar seu aplicativo em contêiners nele. Para fazer isso, você precisa criar uma configuração do tipo Deployment do Kubernetes. O Deployment define como criar e atualizar instâncias do seu aplicativo. Depois de criar um Deployment, o Master do Kubernetes - agenda as instâncias do aplicativo incluídas nesse Deployment para ser executado em nós individuais do CLuster. + agenda as instâncias do aplicativo incluídas nesse Deployment para ser executado em nós individuais do Cluster.

    Depois que as instâncias do aplicativo são criadas, um Controlador do Kubernetes Deployment monitora continuamente essas instâncias. @@ -93,7 +93,7 @@ weight: 10

- Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker.(Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do tutorial Hello Minikube). + Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker. (Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do tutorial Olá, Minikube!).
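Um esboço hipotético de como essa primeira implantação pode ser criada com o kubectl; o nome do Deployment e a imagem abaixo são apenas exemplos e não fazem parte do texto original:

```shell
# Cria um Deployment a partir de uma imagem de contêiner de exemplo
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# Lista os Deployments existentes no cluster
kubectl get deployments
```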

    Agora que você sabe o que são implantações (Deployment), vamos para o tutorial online e implantar nosso primeiro aplicativo!

    @@ -103,7 +103,7 @@ weight: 10 diff --git a/content/pt/docs/tutorials/kubernetes-basics/explore/_index.md b/content/pt/docs/tutorials/kubernetes-basics/explore/_index.md new file mode 100644 index 0000000000..c95e536676 --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/explore/_index.md @@ -0,0 +1,4 @@ +--- +title: Explore seu aplicativo +weight: 30 +--- diff --git a/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html new file mode 100644 index 0000000000..d4d93e7f7d --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -0,0 +1,41 @@ +--- +title: Tutorial Interativo - Explorando seu aplicativo +weight: 20 +--- + + + + + + + + + + + +
    + +
    + +
    +
    + +
+ Para interagir com o Terminal, por favor, use a versão para desktop ou tablet.
    + +
    +
    +
    + + +
    + +
    + + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-intro.html new file mode 100644 index 0000000000..c9720995dc --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -0,0 +1,143 @@ +--- +title: Visualizando Pods e Nós (Nodes) +weight: 10 +--- + + + + + + + + + + +
    + +
    + +
    + +
    +

    Objetivos

    +
      +
    • Aprenda sobre Pods do Kubernetes.
    • +
    • Aprenda sobre Nós do Kubernetes.
    • +
    • Solucionar problemas de aplicativos implantados no Kubernetes.
    • +
    +
    + +
    +

    Kubernetes Pods

    +

    Quando você criou um Deployment no Módulo 2, o Kubernetes criou um Pod para hospedar a instância do seu aplicativo. Um Pod é uma abstração do Kubernetes que representa um grupo de um ou mais contêineres de aplicativos (como Docker) e alguns recursos compartilhados para esses contêineres. Esses recursos incluem:

    +
      +
    • Armazenamento compartilhado, como Volumes
    • +
    • Rede, como um endereço IP único no cluster
    • +
    • Informações sobre como executar cada contêiner, como a versão da imagem do contêiner ou portas específicas a serem usadas
    • +
    +

    Um Pod define um "host lógico" específico para o aplicativo e pode conter diferentes contêineres que, na maioria dos casos, são fortemente acoplados. Por exemplo, um Pod pode incluir o contêiner com seu aplicativo Node.js, bem como um outro contêiner que alimenta os dados a serem publicados pelo servidor web Node.js. Os contêineres de um Pod compartilham um endereço IP e intervalo de portas; são sempre localizados, programados e executam em um contexto compartilhado no mesmo Nó.

    + +

    Pods são a unidade atômica na plataforma Kubernetes. Quando criamos um Deployment no Kubernetes, esse Deployment cria Pods com contêineres dentro dele (em vez de você criar contêineres diretamente). Cada Pod está vinculado ao nó onde está programado (scheduled) e lá permanece até o encerramento (de acordo com a política de reinicialização) ou exclusão. Em caso de falha do nó, Pods idênticos são programados em outros nós disponíveis no cluster.
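Para visualizar em qual nó cada Pod foi agendado, um exemplo simples, assumindo um cluster em execução:

```shell
# A coluna NODE indica o nó ao qual cada Pod está vinculado
kubectl get pods -o wide
```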

    + +
    +
    +
    +

    Sumário:

    +
      +
    • Pods
    • +
    • Nós (Nodes)
    • +
    • Principais comandos do Kubectl
    • +
    +
    +
    +

    + Um Pod é um grupo de um ou mais contêineres de aplicativos (como Docker) que inclui armazenamento compartilhado (volumes), endereço IP e informações sobre como executá-los. +

    +
    +
    +
    +
    + +
    +
    +

    Visão geral sobre os Pods

    +
    +
    + +
    +
    +

    +
    +
    +
    + +
    +
    +

    Nós (Nodes)

    +

Um Pod sempre será executado em um Nó. Um Nó é uma máquina de processamento em um cluster Kubernetes e pode ser uma máquina física ou virtual. Cada Nó é gerenciado pelo Control Plane. Um Nó pode possuir múltiplos Pods e o Control Plane do Kubernetes gerencia automaticamente o agendamento dos Pods nos nós do cluster. Para o agendamento automático dos Pods, o Control Plane leva em consideração os recursos disponíveis em cada Nó.

    + +

    Cada Nó do Kubernetes executa pelo menos:

    +
      +
    • Kubelet, o processo responsável pela comunicação entre o Control Plane e o Nó; gerencia os Pods e os contêineres rodando em uma máquina.
    • +
    • Um runtime de contêiner (por exemplo o Docker) é responsável por baixar a imagem do contêiner de um registro de imagens (por exemplo o Docker Hub), extrair o contêiner e executar a aplicação.
    • +
    + +
    +
    +
    +

    Os contêineres só devem ser agendados juntos em um único Pod se estiverem fortemente acoplados e precisarem compartilhar recursos, como disco e IP.

    +
    +
    +
    + +
    + +
    +
    +

    Visão Geral sobre os Nós

    +
    +
    + +
    +
    +

    +
    +
    +
    + +
    +
    +

    Solucionar problemas usando o comando kubectl

    +

No Módulo 2, você usou a ferramenta de linha de comando Kubectl. Você pode continuar utilizando o Kubectl no Módulo 3 para obter informações sobre o Deployment realizado e seus recursos. As operações mais comuns podem ser realizadas com os comandos abaixo:

    +
      +
    • kubectl get - listar recursos
    • +
    • kubectl describe - mostrar informações detalhadas sobre um recurso
    • +
    • kubectl logs - mostrar os logs de um container em um Pod
    • +
    • kubectl exec - executar um comando em um contêiner em um Pod
    • +
    + +

Você pode usar esses comandos para verificar quando o Deployment foi realizado, qual seu status atual, onde os Pods estão rodando e quais são as suas configurações.
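Um esboço de uso desses comandos, assumindo que exista um Deployment em execução; o nome do Pod é apenas um marcador de exemplo:

```shell
# Lista os Pods existentes
kubectl get pods

# Mostra detalhes dos Pods (contêineres, imagens, eventos)
kubectl describe pods

# Exibe os logs do contêiner de um Pod
kubectl logs $POD_NAME

# Executa um comando interativo dentro do contêiner do Pod
kubectl exec -ti $POD_NAME -- bash
```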

    + +

    Agora que sabemos mais sobre os componentes de um cluster Kubernetes e o comando kubectl, vamos explorar a nossa aplicação.

    + +
    +
    +
    +

Um nó é uma máquina de processamento (worker) do Kubernetes e pode ser uma VM ou máquina física, dependendo do cluster. Vários Pods podem ser executados em um nó.

    +
    +
    +
    +
    + + + +
    + +
    + + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md b/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md new file mode 100644 index 0000000000..c8f0d50a0e --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/_index.md @@ -0,0 +1,4 @@ +--- +title: Exponha publicamente seu aplicativo +weight: 40 +--- diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html new file mode 100644 index 0000000000..cf24ae985e --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -0,0 +1,38 @@ +--- +title: Tutorial Interativo - Expondo seu aplicativo +weight: 20 +--- + + + + + + + + + + + +
    + +
    + +
    +
    + Para interagir com o terminal, favor utilizar a versão desktop/tablet +
    +
    +
    +
    + + +
    + +
    + + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html new file mode 100644 index 0000000000..4e66601116 --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -0,0 +1,103 @@ +--- +title: Utilizando um serviço para expor seu aplicativo +weight: 10 +--- + + + + + + + + + +
    + +
    + +
    +
    +

    Objetivos

    +
      +
    • Aprenda sobre um Serviço no Kubernetes
    • +
    • Entenda como os objetos labels e LabelSelector se relacionam a um Serviço
    • +
    • Exponha uma aplicação externamente ao cluster Kubernetes usando um Serviço
    • +
    +
    + +
    +

    Visão Geral de Serviços Kubernetes

    + +

    Pods Kubernetes são efêmeros. Na verdade, Pods possuem um ciclo de vida. Quando um nó de processamento morre, os Pods executados no nó também são perdidos. A partir disso, o ReplicaSet pode dinamicamente retornar o cluster ao estado desejado através da criação de novos Pods para manter sua aplicação em execução. Como outro exemplo, considere um backend de processamento de imagens com 3 réplicas. Estas réplicas são intercambiáveis; o sistema front-end não deveria se importar com as réplicas backend ou ainda se um Pod é perdido ou recriado. Dito isso, cada Pod em um cluster Kubernetes tem um único endereço IP, mesmo Pods no mesmo nó, então há necessidade de ter uma forma de reconciliar automaticamente mudanças entre Pods de modo que sua aplicação continue funcionando.

    + +

    Um serviço no Kubernetes é uma abstração que define um conjunto lógico de Pods e uma política pela qual acessá-los. Serviços permitem um baixo acoplamento entre os Pods dependentes. Um serviço é definido usando YAML (preferencialmente) ou JSON, como todos objetos Kubernetes. O conjunto de Pods selecionados por um Serviço é geralmente determinado por um seletor de rótulos LabelSelector (veja abaixo o motivo pelo qual você pode querer um Serviço sem incluir um seletor selector na especificação spec).

    + +

    Embora cada Pod tenha um endereço IP único, estes IPs não são expostos externamente ao cluster sem um Serviço. Serviços permitem que suas aplicações recebam tráfego. Serviços podem ser expostos de formas diferentes especificando um tipo type na especificação do serviço ServiceSpec:

    +
      +
• ClusterIP (padrão) - Expõe o serviço sob um endereço IP interno no cluster. Este tipo faz com que o serviço seja alcançável somente de dentro do cluster.
    • +
    • NodePort - Expõe o serviço sob a mesma porta em cada nó selecionado no cluster usando NAT. Faz o serviço acessível externamente ao cluster usando <NodeIP>:<NodePort>. Superconjunto de ClusterIP.
    • +
    • LoadBalancer - Cria um balanceador de carga externo no provedor de nuvem atual (se suportado) e assinala um endereço IP fixo e externo para o serviço. Superconjunto de NodePort.
    • +
    • ExternalName - Expõe o serviço usando um nome arbitrário (especificado através de externalName na especificação spec) retornando um registro de CNAME com o nome. Nenhum proxy é utilizado. Este tipo requer v1.7 ou mais recente de kube-dns.
    • +
    +

    Mais informações sobre diferentes tipos de Serviços podem ser encontradas no tutorial Utilizando IP de origem. Também confira Conectando aplicações com serviços.
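A título de exemplo, um esboço de como expor um Deployment usando um dos tipos acima, assumindo o Deployment hello-node criado no tutorial do Minikube mais acima neste documento (nome e porta são apenas exemplos):

```shell
# Expõe o Deployment como um serviço do tipo LoadBalancer na porta 8080
kubectl expose deployment hello-node --type=LoadBalancer --port=8080

# Confere o serviço criado e o tipo atribuído
kubectl get services
```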

    +

    Adicionalmente, note que existem alguns casos de uso com serviços que envolvem a não definição de selector em spec. Serviços criados sem selector também não criarão objetos Endpoints correspondentes. Isto permite usuários mapear manualmente um serviço a endpoints específicos. Outra possibilidade na qual pode não haver seletores é ao se utilizar estritamente type: ExternalName.

    +
    +
    +
    +

    Resumo

    +
      +
    • Expõe Pods ao tráfego externo
    • +
    • Tráfego de balanceamento de carga entre múltiplos Pods
    • +
    • Uso de rótulos labels
    • +
    +
    +
    +

    Um serviço Kubernetes é uma camada de abstração que define um conjunto lógico de Pods e habilita a exposição ao tráfego externo, balanceamento de carga e descoberta de serviço para esses Pods.

    +
    +
    +
    +
    + +
    +
    +

    Serviços e Rótulos

    +
    +
    + +
    +
    +

Um serviço roteia tráfego entre um conjunto de Pods. O Serviço é a abstração que permite que os Pods morram e se repliquem no Kubernetes sem impactar sua aplicação. A descoberta e o roteamento entre Pods dependentes (tal como componentes frontend e backend dentro de uma aplicação) são controlados por serviços Kubernetes.

    +

Serviços relacionam um conjunto de Pods usando Rótulos e seletores, uma primitiva de agrupamento que permite operações lógicas sobre objetos Kubernetes. Rótulos são pares de chave/valor anexados a objetos e podem ser usados de inúmeras formas:

    +
      +
    • Designar objetos para desenvolvimento, teste e produção
    • +
    • Adicionar tags de versão
    • +
    • Classificar um objeto usando tags
    • +
    +
    + +
    + +
    + +
    +
    +

    +
    +
    +
    +
    +
    +

Rótulos podem ser anexados a objetos no momento de sua criação ou posteriormente. Eles podem ser modificados a qualquer tempo. Vamos agora expor sua aplicação usando um serviço e aplicar alguns rótulos.
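Um esboço de como aplicar e consultar rótulos com o kubectl; o nome do Pod e o rótulo abaixo são apenas exemplos:

```shell
# Aplica um rótulo a um Pod existente
kubectl label pod $POD_NAME versao=v1

# Lista apenas os Pods que possuem esse rótulo
kubectl get pods -l versao=v1
```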

    +
    +
    +
    + +
    +
    + + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md b/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md new file mode 100644 index 0000000000..9e6d5b418e --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/_index.md @@ -0,0 +1,4 @@ +--- +title: Escale seu aplicativo +weight: 50 +--- diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html new file mode 100644 index 0000000000..a4ce38ded1 --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -0,0 +1,40 @@ +--- +title: Tutorial Interativo - Escalando seu aplicativo +weight: 20 +--- + + + + + + + + + + + +
    + +
    + +
    +
    + Para interagir com o terminal, favor utilizar a versão desktop/tablet +
    +
    +
    +
    + + +
    + + + +
    + + + diff --git a/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html new file mode 100644 index 0000000000..351f4e01fe --- /dev/null +++ b/content/pt/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -0,0 +1,121 @@ +--- +title: Executando múltiplas instâncias de seu aplicativo +weight: 10 +--- + + + + + + + + + +
    + +
    + +
    + +
    +

    Objetivos

    +
      +
    • Escalar uma aplicação usando kubectl.
    • +
    +
    + +
    +

    Escalando uma aplicação

    + +

    Nos módulos anteriores nós criamos um Deployment, e então o expusemos publicamente através de um serviço (Service). O Deployment criou apenas um único Pod para executar nossa aplicação. Quando o tráfego aumentar nós precisaremos escalar a aplicação para suportar a demanda de usuários.

    + +

    O escalonamento é obtido pela mudança do número de réplicas em um Deployment
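Como ilustração, um esboço de como alterar o número de réplicas com o kubectl, assumindo o Deployment hello-node mencionado anteriormente neste documento (nome e quantidade são apenas exemplos):

```shell
# Escala o Deployment para 4 réplicas
kubectl scale deployment hello-node --replicas=4

# Verifica as réplicas desejadas, atuais e disponíveis
kubectl get deployments
```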

    + +
    +
    +
    +

    Resumo:

    +
      +
    • Escalando um Deployment
    • +
    +
    +
    +

Você pode criar, desde o início, um Deployment com múltiplas instâncias usando o parâmetro --replicas do comando kubectl create deployment.
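Um exemplo hipotético do uso desse parâmetro; o nome, a imagem e a quantidade são apenas exemplos, e o parâmetro --replicas do kubectl create deployment está disponível apenas nas versões mais recentes do kubectl:

```shell
# Cria, desde o início, um Deployment com 3 réplicas
kubectl create deployment hello-node --image=<imagem-do-aplicativo> --replicas=3
```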

    +
    +
    +
    +
    + +
    +
    +

    Visão geral sobre escalonamento

    +
    +
    + +
    +
    +
    + +
    +
    + +
    + +
    +
    + +

    Escalar um Deployment garantirá que novos Pods serão criados e agendados para nós de processamento com recursos disponíveis. O escalonamento aumentará o número de Pods para o novo estado desejado. O Kubernetes também suporta o auto-escalonamento (autoscaling) de Pods, mas isso está fora do escopo deste tutorial. Escalar para zero também é possível, e isso terminará todos os Pods do Deployment especificado.

    + +

    Executar múltiplas instâncias de uma aplicação irá requerer uma forma de distribuir o tráfego entre todas elas. Serviços possuem um balanceador de carga integrado que distribuirá o tráfego de rede entre todos os Pods de um Deployment exposto. Serviços irão monitorar continuamente os Pods em execução usando endpoints para garantir que o tráfego seja enviado apenas para Pods disponíveis.
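Para observar esse comportamento, um esboço de verificação dos endpoints de um serviço, assumindo o serviço hello-node usado como exemplo neste documento:

```shell
# Mostra os detalhes do serviço, incluindo os endpoints associados
kubectl describe services/hello-node

# Lista diretamente os endpoints; após escalar, cada réplica aparece como um endpoint
kubectl get endpoints hello-node
```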

    + +
    +
    +
    +

    O Escalonamento é obtido pela mudança do número de réplicas em um Deployment.

    +
    +
    +
    + +
    + +
    +
    +

    No momento em que tiver múltiplas instâncias de uma aplicação em execução, será capaz de fazer atualizações graduais sem indisponibilidade. Nós cobriremos isso no próximo módulo. Agora, vamos ao terminal online e escalar nossa aplicação.

    +
    +
    +
    + + + +
    + +
    + + + diff --git a/content/ru/docs/reference/kubectl/cheatsheet.md b/content/ru/docs/reference/kubectl/cheatsheet.md index d2be7e9c0c..02a8a9bc4a 100644 --- a/content/ru/docs/reference/kubectl/cheatsheet.md +++ b/content/ru/docs/reference/kubectl/cheatsheet.md @@ -186,6 +186,9 @@ kubectl get pods --show-labels JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" +# Вывод декодированных секретов без внешних инструментов +kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}' + # Вывести все секреты, используемые сейчас в поде. kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq diff --git a/content/zh/blog/_posts/2020-12-02-dockershim-faq.md b/content/zh/blog/_posts/2020-12-02-dockershim-faq.md new file mode 100644 index 0000000000..8552f9fd48 --- /dev/null +++ b/content/zh/blog/_posts/2020-12-02-dockershim-faq.md @@ -0,0 +1,315 @@ +--- +layout: blog +title: "弃用 Dockershim 的常见问题" +date: 2020-12-02 +slug: dockershim-faq +--- + + + +本文回顾了自 Kubernetes v1.20 版宣布弃用 Dockershim 以来所引发的一些常见问题。 +关于 Kubernetes kubelets 从容器运行时的角度弃用 Docker 的细节以及这些细节背后的含义,请参考博文 +[别慌: Kubernetes 和 Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) + + +### 为什么弃用 dockershim {#why-is-dockershim-being-deprecated} + + +维护 dockershim 已经成为 Kubernetes 维护者肩头一个沉重的负担。 +创建 CRI 标准就是为了减轻这个负担,同时也可以增加不同容器运行时之间平滑的互操作性。 +但反观 Docker 却至今也没有实现 CRI,所以麻烦就来了。 + + +Dockershim 向来都是一个临时解决方案(因此得名:shim)。 +你可以进一步阅读 +[移除 Kubernetes 增强方案 Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1985-remove-dockershim) +以了解相关的社区讨论和计划。 + + +此外,与 dockershim 不兼容的一些特性,例如:控制组(cgoups)v2 和用户名字空间(user namespace),已经在新的 CRI 运行时中被实现。 +移除对 dockershim 的支持将加速这些领域的发展。 + + +### 在 Kubernetes 1.20 版本中,我还可以用 Docker 吗? {#can-I-still-use-docker-in-kubernetes-1.20} + + +当然可以,在 1.20 版本中仅有的改变就是:如果使用 Docker 运行时,启动 +[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) +的过程中将打印一条警告日志。 + + +### 什么时候移除 dockershim {#when-will-dockershim-be-removed} + + +考虑到此改变带来的影响,我们使用了一个加长的废弃时间表。 +在 Kubernetes 1.22 版之前,它不会被彻底移除;换句话说,dockershim 被移除的最早版本会是 2021 年底发布 1.23 版。 +我们将与供应商以及其他生态团队紧密合作,确保顺利过渡,并将依据事态的发展评估后续事项。 + + +### 我现有的 Docker 镜像还能正常工作吗? {#will-my-existing-docker-image-still-work} + + +当然可以,`docker build` 创建的镜像适用于任何 CRI 实现。 +所有你的现有镜像将和往常一样工作。 + + +### 私有镜像呢?{#what-about-private-images} + + +当然可以。所有 CRI 运行时均支持 Kubernetes 中相同的拉取(pull)Secret 配置, +不管是通过 PodSpec 还是通过 ServiceAccount 均可。 + + +### Docker 和容器是一回事吗? {#are-docker-and-containers-the-same-thing} + + +虽然 Linux 的容器技术已经存在了很久, +但 Docker 普及了 Linux 容器这种技术模式,并在开发底层技术方面发挥了重要作用。 +容器的生态相比于单纯的 Docker,已经进化到了一个更宽广的领域。 +像 OCI 和 CRI 这类标准帮助许多工具在我们的生态中成长和繁荣, +其中一些工具替代了 Docker 的某些部分,另一些增强了现有功能。 + + +### 现在是否有在生产系统中使用其他运行时的例子? 
{#are-there-example-of-folks-using-other-runtimes-in-production-today} + + +Kubernetes 所有项目在所有版本中出产的工件(Kubernetes 二进制文件)都经过了验证。 + + +此外,[kind](https://kind.sigs.k8s.io/) 项目使用 containerd 已经有年头了, +并且在这个场景中,稳定性还明显得到提升。 +Kind 和 containerd 每天都会做多次协调,以验证对 Kubernetes 代码库的所有更改。 +其他相关项目也遵循同样的模式,从而展示了其他容器运行时的稳定性和可用性。 +例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O](https://cri-o.io/) 运行时。 + + +至于其他示例和参考资料,你可以查看 containerd 和 CRI-O 的使用者列表, +这两个容器运行时是云原生基金会([CNCF])下的项目。 + +- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md) +- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md) + + +### 人们总在谈论 OCI,那是什么? {#people-keep-referenceing-oci-what-is-that} + + +OCI 代表[开放容器标准](https://opencontainers.org/about/overview/), +它标准化了容器工具和底层实现(technologies)之间的大量接口。 +他们维护了打包容器镜像(OCI image-spec)和运行容器(OCI runtime-spec)的标准规范。 +他们还以 [runc](https://github.com/opencontainers/runc) +的形式维护了一个 runtime-spec 的真实实现, +这也是 [containerd](https://containerd.io/) 和 [CRI-O](https://cri-o.io/) 依赖的默认运行时。 +CRI 建立在这些底层规范之上,为管理容器提供端到端的标准。 + + +### 我应该用哪个 CRI 实现? {#which-cri-implementation-should-I-use} + + +这是一个复杂的问题,依赖于许多因素。 +在 Docker 工作良好的情况下,迁移到 containerd 是一个相对容易的转换,并将获得更好的性能和更少的开销。 +然而,我们建议你先探索 [CNCF 全景图](https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category) +提供的所有选项,以做出更适合你的环境的选择。 + + +### 当切换 CRI 底层实现时,我应该注意什么? {#what-should-I-look-out-for-when-changing-CRI-implementation} + + +Docker 和大多数 CRI(包括 containerd)的底层容器化代码是相同的,但其周边部分却存在一些不同。 +迁移时一些常见的关注点是: + + + +- 日志配置 +- 运行时的资源限制 +- 直接访问 docker 命令或通过控制套接字调用 Docker 的节点供应脚本 +- 需要访问 docker 命令或控制套接字的 kubectl 插件 +- 需要直接访问 Docker 的 Kubernetes 工具(例如:kube-imagepuller) +- 像 `registry-mirrors` 和不安全的注册表这类功能的配置 +- 需要 Docker 保持可用、且运行在 Kubernetes 之外的,其他支持脚本或守护进程(例如:监视或安全代理) +- GPU 或特殊硬件,以及它们如何与你的运行时和 Kubernetes 集成 + + +如果你只是用了 Kubernetes 资源请求/限制或基于文件的日志收集 DaemonSet,它们将继续稳定工作, +但是如果你用了自定义了 dockerd 配置,则可能需要为新容器运行时做一些适配工作。 + + +另外还有一个需要关注的点,那就是当创建镜像时,系统维护或嵌入容器方面的任务将无法工作。 +对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案 +(参见 [从 docker 命令映射到 crictl](https://kubernetes.io/zh/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)); +对于后者,可以用新的容器创建选项,比如 +[img](https://github.com/genuinetools/img)、 +[buildah](https://github.com/containers/buildah)、 +[kaniko](https://github.com/GoogleContainerTools/kaniko)、或 +[buildkit-cli-for-kubectl](https://github.com/vmware-tanzu/buildkit-cli-for-kubectl +), +他们均不需要访问 Docker。 + + +对于 containerd,你可以从它们的 +[文档](https://github.com/containerd/cri/blob/master/docs/registry.md) +开始,看看在迁移过程中有哪些配置选项可用。 + + +对于如何协同 Kubernetes 使用 containerd 和 CRI-O 的说明,参见 Kubernetes 文档中这部分: +[容器运行时](/zh/docs/setup/production-environment/container-runtimes)。 + + +### 我还有问题怎么办?{#what-if-I-have-more-question} + + +如果你使用了一个有供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。 +对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。 + + +你也可以看看这篇优秀的博文: +[等等,Docker 刚刚被 Kubernetes 废掉了?](https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m) +一个对此变化更深入的技术讨论。 + + +### 我可以加入吗?{#can-I-have-a-hug} + + +只要你愿意,随时随地欢迎加入! 
+ diff --git a/content/zh/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md b/content/zh/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md new file mode 100644 index 0000000000..788b28420d --- /dev/null +++ b/content/zh/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md @@ -0,0 +1,210 @@ +--- +layout: blog +title: "别慌: Kubernetes 和 Docker" +date: 2020-12-02 +slug: dont-panic-kubernetes-and-docker +--- + + +**作者:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas + + +Kubernetes 从版本 v1.20 之后,[弃用 Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation) +这个容器运行时。 + + +**不必慌张,这件事并没有听起来那么吓人。** + + +弃用 Docker 这个底层运行时,转而支持符合为 Kubernetes 创建的 +[Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) +的运行时。 +Docker 构建的镜像,将在你的集群的所有运行时中继续工作,一如既往。 + + +如果你是 Kubernetes 的终端用户,这对你不会有太大影响。 +这事并不意味着 Dockder 已死、也不意味着你不能或不该继续把 Docker 用作开发工具。 +Docker 仍然是构建容器的利器,使用命令 `docker build` 构建的镜像在 Kubernetes 集群中仍然可以运行。 + + +如果你正在使用 GKE、EKS、或 AKS +([默认使用 containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)) +这类托管 Kubernetes 服务,你需要在 Kubernetes 后续版本移除对 Docker 支持之前, +确认工作节点使用了被支持的容器运行时。 +如果你的节点被定制过,你可能需要根据你自己的环境和运行时需求更新它们。 +请与你的服务供应商协作,确保做出适当的升级测试和计划。 + + +如果你正在运营你自己的集群,那还应该做些工作,以避免集群中断。 +在 v1.20 版中,你仅会得到一个 Docker 的弃用警告。 +当对 Docker 运行时的支持在 Kubernetes 某个后续发行版(目前的计划是 2021 年晚些时候的 1.22 版)中被移除时, +你需要切换到 containerd 或 CRI-O 等兼容的容器运行时。 +只要确保你选择的运行时支持你当前使用的 Docker 守护进程配置(例如 logging)。 + + +## 那为什么会有这样的困惑,为什么每个人要害怕呢?{#so-why-the-confusion-and-what-is-everyone-freaking-out-about} + + +我们在这里讨论的是两套不同的环境,这就是造成困惑的根源。 +在你的 Kubernetes 集群中,有一个叫做容器运行时的东西,它负责拉取并运行容器镜像。 +Docker 对于运行时来说是一个流行的选择(其他常见的选择包括 containerd 和 CRI-O), +但 Docker 并非设计用来嵌入到 Kubernetes,这就是问题所在。 + + +你看,我们称之为 “Docker” 的物件实际上并不是一个物件——它是一个完整的技术堆栈, +它其中一个叫做 “containerd” 的部件本身,才是一个高级容器运行时。 +Docker 既酷炫又实用,因为它提供了很多用户体验增强功能,而这简化了我们做开发工作时的操作, +Kubernetes 用不到这些增强的用户体验,毕竟它并非人类。 + + +因为这个用户友好的抽象层,Kubernetes 集群不得不引入一个叫做 Dockershim 的工具来访问它真正需要的 containerd。 +这不是一件好事,因为这引入了额外的运维工作量,而且还可能出错。 +实际上正在发生的事情就是:Dockershim 将在不早于 v1.23 版中从 kubelet 中被移除,也就取消对 Docker 容器运行时的支持。 +你心里可能会想,如果 containerd 已经包含在 Docker 堆栈中,为什么 Kubernetes 需要 Dockershim。 + + +Docker 不兼容 CRI, +[容器运行时接口](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)。 +如果支持,我们就不需要这个 shim 了,也就没问题了。 +但这也不是世界末日,你也不需要恐慌——你唯一要做的就是把你的容器运行时从 Docker 切换到其他受支持的容器运行时。 + + +要注意一点:如果你依赖底层的 Docker 套接字(`/var/run/docker.sock`),作为你集群中工作流的一部分, +切换到不同的运行时会导致你无法使用它。 +这种模式经常被称之为嵌套 Docker(Docker in Docker)。 +对于这种特殊的场景,有很多选项,比如: +[kaniko](https://github.com/GoogleContainerTools/kaniko)、 +[img](https://github.com/genuinetools/img)、和 +[buildah](https://github.com/containers/buildah)。 + + +## 那么,这一改变对开发人员意味着什么?我们还要写 Dockerfile 吗?还能用 Docker 构建镜像吗?{#what-does-this-change-mean-for-developers} + + +此次改变带来了一个不同的环境,这不同于我们常用的 Docker 交互方式。 +你在开发环境中用的 Docker 和你 Kubernetes 集群中的 Docker 运行时无关。 +我们知道这听起来让人困惑。 +对于开发人员,Docker 从所有角度来看仍然有用,就跟这次改变之前一样。 +Docker 构建的镜像并不是 Docker 特有的镜像——它是一个 +OCI([开放容器标准](https://opencontainers.org/))镜像。 +任一 OCI 兼容的镜像,不管它是用什么工具构建的,在 Kubernetes 的角度来看都是一样的。 +[containerd](https://containerd.io/) 和 +[CRI-O](https://cri-o.io/) +两者都知道怎么拉取并运行这些镜像。 +这就是我们制定容器标准的原因。 + + +所以,改变已经发生。 +它确实带来了一些问题,但这不是一个灾难,总的说来,这还是一件好事。 +根据你操作 Kubernetes 的方式的不同,这可能对你不构成任何问题,或者也只是意味着一点点的工作量。 +从一个长远的角度看,它使得事情更简单。 +如果你还在困惑,也没问题——这里还有很多事情; +Kubernetes 有很多变化中的功能,没有人是100%的专家。 +我们鼓励你提出任何问题,无论水平高低、问题难易。 
+我们的目标是确保所有人都能在即将到来的改变中获得足够的了解。 +我们希望这已经回答了你的大部分问题,并缓解了一些焦虑!❤️ + + +还在寻求更多答案吗?请参考我们附带的 +[弃用 Dockershim 的常见问题](/zh/blog/2020/12/02/dockershim-faq/)。 diff --git a/content/zh/docs/concepts/architecture/cloud-controller.md b/content/zh/docs/concepts/architecture/cloud-controller.md index 47043364b8..7d84826583 100644 --- a/content/zh/docs/concepts/architecture/cloud-controller.md +++ b/content/zh/docs/concepts/architecture/cloud-controller.md @@ -22,7 +22,7 @@ components. 使用云基础设施技术,你可以在公有云、私有云或者混合云环境中运行 Kubernetes。 Kubernetes 的信条是基于自动化的、API 驱动的基础设施,同时避免组件间紧密耦合。 -{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是">}} +{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是指云控制器管理器,">}} 想要连接到 apiserver 的 Pod 可以使用服务账号安全地进行连接。 当 Pod 被实例化时,Kubernetes 自动把公共根证书和一个有效的持有者令牌注入到 Pod 里。 -`kubernetes` 服务(位于所有名字空间中)配置了一个虚拟 IP 地址,用于(通过 kube-proxy)转发 +`kubernetes` 服务(位于 `default` 名字空间中)配置了一个虚拟 IP 地址,用于(通过 kube-proxy)转发 请求到 apiserver 的 HTTPS 末端。 控制面组件也通过安全端口与集群的 apiserver 通信。 diff --git a/content/zh/docs/concepts/architecture/controller.md b/content/zh/docs/concepts/architecture/controller.md index 55bc1e996e..e6660f1e1c 100644 --- a/content/zh/docs/concepts/architecture/controller.md +++ b/content/zh/docs/concepts/architecture/controller.md @@ -155,7 +155,7 @@ that horizontally scales the nodes in your cluster.) 并使当前状态更接近期望状态。 (实际上有一个[控制器](https://github.com/kubernetes/autoscaler/) -可以水平地扩展集群中的节点。请参阅 +可以水平地扩展集群中的节点。) 在温度计的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。 @@ -198,7 +198,7 @@ Kubernetes 采用了系统的云原生视图,并且可以处理持续的变化 在任务执行时,集群随时都可能被修改,并且控制回路会自动修复故障。 这意味着很可能集群永远不会达到稳定状态。 -只要集群中控制器的在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。 +只要集群中的控制器在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。 Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。 -节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。每个节点都包含用于运行 -{{< glossary_tooltip text="Pod" term_id="pod" >}} 所需要的服务,这些服务由 -{{< glossary_tooltip text="控制面" term_id="control-plane" >}}负责管理。 +节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。 +每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务, +这些 Pods 由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。 通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能 只有一个节点。 @@ -120,7 +120,7 @@ register itself with the API server. This is the preferred pattern, used by mos For self-registration, the kubelet is started with the following options: --> -### 节点自注册 +### 节点自注册 {#self-registration-of-nodes} 当 kubelet 标志 `--register-node` 为 true(默认)时,它会尝试向 API 服务注册自己。 这是首选模式,被绝大多数发行版选用。 @@ -170,7 +170,7 @@ When you want to create Node objects manually, set the kubelet flag `--register- You can modify Node objects regardless of the setting of `--register-node`. For example, you can set labels on an existing Node, or mark it unschedulable. --> -### 手动节点管理 +### 手动节点管理 {#manual-node-administration} 你可以使用 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 来创建和修改 Node 对象。 @@ -456,8 +456,7 @@ of the node heartbeats as the cluster scales. 
#### 心跳机制 {#heartbeats} Kubernetes 节点发送的心跳(Heartbeats)有助于确定节点的可用性。 -心跳有两种形式:`NodeStatus` 和 [`Lease` 对象] -(/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io)。 +心跳有两种形式:`NodeStatus` 和 [`Lease` 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io)。 每个节点在 `kube-node-lease`{{< glossary_tooltip term_id="namespace" text="名字空间">}} 中都有一个与之关联的 `Lease` 对象。 `Lease` 是一种轻量级的资源,可在集群规模扩大时提高节点心跳机制的性能。 diff --git a/content/zh/docs/concepts/cluster-administration/flow-control.md b/content/zh/docs/concepts/cluster-administration/flow-control.md index 022596a0f2..7dde282034 100644 --- a/content/zh/docs/concepts/cluster-administration/flow-control.md +++ b/content/zh/docs/concepts/cluster-administration/flow-control.md @@ -95,10 +95,10 @@ kube-apiserver \ 或者,你也可以通过 -`--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=true` +`--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true` 启用 API 组的 v1alpha1 版本。 应用日志可以让你了解应用内部的运行状况。日志对调试问题和监控集群活动非常有用。 -大部分现代化应用都有某种日志记录机制;同样地,大多数容器引擎也被设计成支持某种日志记录机制。 -针对容器化应用,最简单且受欢迎的日志记录方式就是写入标准输出和标准错误流。 +大部分现代化应用都有某种日志记录机制。同样地,容器引擎也被设计成支持日志记录。 +针对容器化应用,最简单且最广泛采用的日志记录方式就是写入标准输出和标准错误流。 -但是,由容器引擎或运行时提供的原生功能通常不足以满足完整的日志记录方案。 -例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你仍然想访问到应用日志。 -因此,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。 -这个概念叫 _集群级的日志_ 。集群级日志方案需要一个独立的后台来存储、分析和查询日志。 -Kubernetes 没有为日志数据提供原生存储方案,但是你可以集成许多现有的日志解决方案到 Kubernetes 集群中。 +但是,由容器引擎或运行时提供的原生功能通常不足以构成完整的日志记录方案。 +例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你可能想访问应用日志。 +在集群中,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。 +这个概念叫 _集群级的日志_ 。 -集群级日志架构假定在集群内部或者外部有一个日志后台。 -如果你对集群级日志不感兴趣,你仍会发现关于如何在节点上存储和处理日志的描述对你是有用的。 +集群级日志架构需要一个独立的后端用来存储、分析和查询日志。 +Kubernetes 并不为日志数据提供原生的存储解决方案。 +相反,有很多现成的日志方案可以集成到 Kubernetes 中。 +下面各节描述如何在节点上处理和存储日志。 ## Kubernetes 中的基本日志记录 -本节,你会看到一个kubernetes 中生成基本日志的例子,该例子中数据被写入到标准输出。 -这里的示例为包含一个容器的 Pod 规约,该容器每秒钟向标准输出写入数据。 +这里的示例使用包含一个容器的 Pod 规约,每秒钟向标准输出写入数据。 {{< codenew file="debug/counter-pod.yaml" >}} @@ -76,7 +75,7 @@ pod/counter created -使用 `kubectl logs` 命令获取日志: +像下面这样,使用 `kubectl logs` 命令获取日志: ```shell kubectl logs counter @@ -95,10 +94,10 @@ The output is: ``` -一旦发生容器崩溃,你可以使用命令 `kubectl logs` 和参数 `--previous` 检索之前的容器日志。 -如果 pod 中有多个容器,你应该向该命令附加一个容器名以访问对应容器的日志。 +你可以使用命令 `kubectl logs --previous` 检索之前容器实例的日志。 +如果 Pod 中有多个容器,你应该为该命令附加容器名以访问对应容器的日志。 详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。 容器化应用写入 `stdout` 和 `stderr` 的任何数据,都会被容器引擎捕获并被重定向到某个位置。 例如,Docker 容器引擎将这两个输出流重定向到某个 -[日志驱动](https://docs.docker.com/engine/admin/logging/overview) , +[日志驱动(Logging Driver)](https://docs.docker.com/engine/admin/logging/overview) , 该日志驱动在 Kubernetes 中配置为以 JSON 格式写入文件。 -节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上所有的可用空间。 -Kubernetes 当前并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。 -例如,在 Kubernetes 集群中,用 `kube-up.sh` 部署一个每小时运行的工具 -[`logrotate`](https://linux.die.net/man/8/logrotate)。 -你也可以设置容器 runtime 来自动地轮转应用日志,比如使用 Docker 的 `log-opt` 选项。 -在 `kube-up.sh` 脚本中,使用后一种方式来处理 GCP 上的 COS 镜像,而使用前一种方式来处理其他环境。 -这两种方式,默认日志超过 10MB 大小时都会触发日志轮转。 +节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上全部可用空间。 +Kubernetes 并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。 +例如,在用 `kube-up.sh` 部署的 Kubernetes 集群中,存在一个 +[`logrotate`](https://linux.die.net/man/8/logrotate),每小时运行一次。 +你也可以设置容器运行时来自动地轮转应用日志。 例如,你可以找到关于 `kube-up.sh` 为 GCP 环境的 COS 镜像设置日志的详细信息, -相应的脚本在 -[这里](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。 +脚本为 +[`configure-helper` 脚本](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。 当运行 
[`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) 时, 节点上的 kubelet 处理该请求并直接读取日志文件,同时在响应中返回日志文件内容。 {{< note >}} -当前,如果有其他系统机制执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。 -比如,一个 10MB 大小的文件,通过`logrotate` 执行轮转后生成两个文件,一个 10MB 大小, -一个为空,所以 `kubectl logs` 将返回空。 +如果有外部系统执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。 +比如,对于一个 10MB 大小的文件,通过 `logrotate` 执行轮转后生成两个文件, +一个 10MB 大小,一个为空,`kubectl logs` 返回最新的日志文件,而该日志文件 +在这个例子中为空。 {{< /note >}} * 在容器中运行的 kube-scheduler 和 kube-proxy。 -* 不在容器中运行的 kubelet 和容器运行时(例如 Docker)。 +* 不在容器中运行的 kubelet 和容器运行时。 -在使用 systemd 机制的服务器上,kubelet 和容器 runtime 写入日志到 journald。 -如果没有 systemd,他们写入日志到 `/var/log` 目录的 `.log` 文件。 -容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。他们使用 -[klog](https://github.com/kubernetes/klog) 日志库。 -你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)找到这些组件的日志告警级别协议。 +在使用 systemd 机制的服务器上,kubelet 和容器容器运行时将日志写入到 journald 中。 +如果没有 systemd,它们将日志写入到 `/var/log` 目录下的 `.log` 文件中。 +容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。 +他们使用 [klog](https://github.com/kubernetes/klog) 日志库。 +你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md) +找到这些组件的日志告警级别约定。 和容器日志类似,`/var/log` 目录中的系统组件日志也应该被轮转。 -通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate` 执行每日轮转, -或者日志大小超过 100MB 时触发轮转。 +通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate` +执行每日轮转,或者日志大小超过 100MB 时触发轮转。 ## 集群级日志架构 -虽然Kubernetes没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。以下是一些选项: +虽然Kubernetes没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。 +以下是一些选项: * 使用在每个节点上运行的节点级日志记录代理。 -* 在应用程序的 pod 中,包含专门记录日志的 sidecar 容器。 +* 在应用程序的 Pod 中,包含专门记录日志的边车(Sidecar)容器。 * 将日志直接从应用程序中推送到日志记录后端。 -由于日志记录代理必须在每个节点上运行,它可以用 DaemonSet 副本,Pod 或 本机进程来实现。 -然而,后两种方法被弃用并且非常不别推荐。 +由于日志记录代理必须在每个节点上运行,通常可以用 `DaemonSet` 的形式运行该代理。 +节点级日志在每个节点上仅创建一个代理,不需要对节点上的应用做修改。 -对于 Kubernetes 集群来说,使用节点级的日志代理是最常用和被推荐的方式, -因为在每个节点上仅创建一个代理,并且不需要对节点上的应用做修改。 -但是,节点级的日志 _仅适用于应用程序的标准输出和标准错误输出_。 +容器向标准输出和标准错误输出写出数据,但在格式上并不统一。 +节点级代理 +收集这些日志并将其进行转发以完成汇总。 -Kubernetes 并不指定日志代理,但是有两个可选的日志代理与 Kubernetes 发行版一起发布。 -[Stackdriver 日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/) -适用于 Google Cloud Platform,和 -[Elasticsearch](/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)。 -你可以在专门的文档中找到更多的信息和说明。 -两者都使用 [fluentd](https://www.fluentd.org/) 与自定义配置作为节点上的代理。 - - -### 使用 sidecar 容器和日志代理 +### 使用 sidecar 容器运行日志代理 {#sidecar-container-with-logging-agent} -你可以通过以下方式之一使用 sidecar 容器: +你可以通过以下方式之一使用边车(Sidecar)容器: -* sidecar 容器将应用程序日志传送到自己的标准输出。 -* sidecar 容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。 +* 边车容器将应用程序日志传送到自己的标准输出。 +* 边车容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。 #### 传输数据流的 sidecar 容器 -![数据流容器的 Sidecar 容器](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png) +![带数据流容器的边车容器](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png) -利用 sidecar 容器向自己的 `stdout` 和 `stderr` 传输流的方式, +利用边车容器向自己的 `stdout` 和 `stderr` 传输流的方式, 你就可以利用每个节点上的 kubelet 和日志代理来处理日志。 -sidecar 容器从文件、套接字或 journald 读取日志。 -每个 sidecar 容器打印其自己的 `stdout` 和 `stderr` 流。 +边车容器从文件、套接字或 journald 读取日志。 +每个边车容器向自己的 `stdout` 和 `stderr` 流中输出日志。 -考虑接下来的例子。pod 的容器向两个文件写不同格式的日志,下面是这个 pod 的配置文件: +例如,某 Pod 中运行一个容器,该容器向两个文件写不同格式的日志。 +下面是这个 pod 的配置文件: {{< codenew file="admin/logging/two-files-counter-pod.yaml" >}} -在同一个日志流中有两种不同格式的日志条目,这有点混乱,即使你试图重定向它们到容器的 `stdout` 流。 -取而代之的是,你可以引入两个 sidecar 容器。 -每一个 sidecar 容器可以从共享卷跟踪特定的日志文件,并重定向文件内容到各自的 `stdout` 流。 +不建议在同一个日志流中写入不同格式的日志条目,即使你成功地将其重定向到容器的 +`stdout` 流。相反,你可以创建两个边车容器。每个边车容器可以从共享卷 +跟踪特定的日志文件,并将文件内容重定向到各自的 `stdout` 流。 -这是运行两个 sidecar 容器的 Pod 文件。 
+下面是运行两个边车容器的 Pod 的配置文件: {{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}} @@ -358,12 +350,18 @@ Here's a configuration file for a pod that has two sidecar containers: Now when you run this pod, you can access each log stream separately by running the following commands: --> -现在当你运行这个 Pod 时,你可以分别地访问每一个日志流,运行如下命令: +现在当你运行这个 Pod 时,你可以运行如下命令分别访问每个日志流: ```shell kubectl logs counter count-log-1 ``` -``` + + +输出为: + +```console 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -373,7 +371,13 @@ kubectl logs counter count-log-1 ```shell kubectl logs counter count-log-2 ``` -``` + + +输出为: + +```console Mon Jan 1 00:00:00 UTC 2001 INFO 0 Mon Jan 1 00:00:01 UTC 2001 INFO 1 Mon Jan 1 00:00:02 UTC 2001 INFO 2 @@ -385,7 +389,8 @@ The node-level agent installed in your cluster picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines depending on the source container. --> -集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。如果你愿意,你可以配置代理程序来解析源容器的日志行。 +集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。 +如果你愿意,你也可以配置代理程序来解析源容器的日志行。 -注意,尽管 CPU 和内存使用率都很低(以多个 cpu millicores 指标排序或者按内存的兆字节排序), +注意,尽管 CPU 和内存使用率都很低(以多个 CPU 毫核指标排序或者按内存的兆字节排序), 向文件写日志然后输出到 `stdout` 流仍然会成倍地增加磁盘使用率。 -如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,而不是使用流式的 sidecar 容器方式。 +如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径, +而不是使用流式的边车容器方式。 -应用本身如果不具备轮转日志文件的功能,可以通过 sidecar 容器实现。 -该方式的一个例子是运行一个定期轮转日志的容器。 -然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略交给 kubelet。 +应用本身如果不具备轮转日志文件的功能,可以通过边车容器实现。 +该方式的一个例子是运行一个小的、定期轮转日志的容器。 +然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略 +交给 kubelet。 -### 具有日志代理功能的 sidecar 容器 +### 具有日志代理功能的边车容器 -![日志记录代理功能的 sidecar 容器](/images/docs/user-guide/logging/logging-with-sidecar-agent.png) +![含日志代理的边车容器](/images/docs/user-guide/logging/logging-with-sidecar-agent.png) -如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个带有单独日志记录代理程序的 -sidecar 容器,将代理程序专门配置为与你的应用程序一起运行。 +如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个 +带有单独日志记录代理的边车容器,将代理程序专门配置为与你的应用程序一起运行。 - -{{< note >}} -在 sidecar 容器中使用日志代理会导致严重的资源损耗。 +在边车容器中使用日志代理会带来严重的资源损耗。 此外,你不能使用 `kubectl logs` 命令访问日志,因为日志并没有被 kubelet 管理。 {{< /note >}} -例如,你可以使用 [Stackdriver](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/), -它使用 fluentd 作为日志记录代理。 -以下是两个可用于实现此方法的配置文件。 -第一个文件包含配置 fluentd 的 +下面是两个配置文件,可以用来实现一个带日志代理的边车容器。 +第一个文件包含用来配置 fluentd 的 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 {{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}} +{{< note >}} -{{< note >}} -配置 fluentd 超出了本文的范围。要进一步了解如何配置 fluentd, -请参考 [fluentd 官方文档](https://docs.fluentd.org/). 
+要进一步了解如何配置 fluentd,请参考 [fluentd 官方文档](https://docs.fluentd.org/)。 {{< /note >}} -第二个文件描述了运行 fluentd sidecar 容器的 Pod 。flutend 通过 Pod 的挂载卷获取它的配置数据。 +第二个文件描述了运行 fluentd 边车容器的 Pod 。 +flutend 通过 Pod 的挂载卷获取它的配置数据。 {{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}} -一段时间后,你可以在 Stackdriver 界面看到日志消息。 - - -记住,这只是一个例子,事实上你可以用任何一个日志代理替换 fluentd ,并从应用容器中读取任何资源。 +在示例配置中,你可以将 fluentd 替换为任何日志代理,从应用容器内 +的任何来源读取数据。 - ### 从应用中直接暴露日志目录 ![直接从应用程序暴露日志](/images/docs/user-guide/logging/logging-from-application.png) -通过暴露或推送每个应用的日志,你可以实现集群级日志记录; -然而,这种日志记录机制的实现已超出 Kubernetes 的范围。 - +从各个应用中直接暴露和推送日志数据的集群日志机制 +已超出 Kubernetes 的范围。 diff --git a/content/zh/docs/concepts/cluster-administration/networking.md b/content/zh/docs/concepts/cluster-administration/networking.md index 4af04d60ee..dc8687c99a 100644 --- a/content/zh/docs/concepts/cluster-administration/networking.md +++ b/content/zh/docs/concepts/cluster-administration/networking.md @@ -24,8 +24,8 @@ problems to address: 1. 高度耦合的容器间通信:这个已经被 {{< glossary_tooltip text="Pods" term_id="pod" >}} 和 `localhost` 通信解决了。 2. Pod 间通信:这个是本文档的重点要讲述的。 -3. Pod 和服务间通信:这个已经在[服务](/zh/docs/concepts/services-networking/service/) 里讲述过了。 -4. 外部和服务间通信:这也已经在[服务](/zh/docs/concepts/services-networking/service/) 讲述过了。 +3. Pod 和服务间通信:这个已经在[服务](/zh/docs/concepts/services-networking/service/)里讲述过了。 +4. 外部和服务间通信:这也已经在[服务](/zh/docs/concepts/services-networking/service/)讲述过了。 @@ -82,9 +82,9 @@ Linux): Kubernetes 对所有网络设施的实施,都需要满足以下的基本要求(除非有设置一些特定的网络分段策略): * 节点上的 Pod 可以不通过 NAT 和其他任何节点上的 Pod 通信 -* 节点上的代理(比如:系统守护进程、kubelet) 可以和节点上的所有Pod通信 +* 节点上的代理(比如:系统守护进程、kubelet)可以和节点上的所有Pod通信 -备注:仅针对那些支持 `Pods` 在主机网络中运行的平台(比如:Linux) : +备注:仅针对那些支持 `Pods` 在主机网络中运行的平台(比如:Linux): * 那些运行在节点的主机网络里的 Pod 可以不通过 NAT 和所有节点上的 Pod 通信 @@ -107,7 +107,7 @@ usage, but this is no different from processes in a VM. 
This is called the Kubernetes 的 IP 地址存在于 `Pod` 范围内 - 容器共享它们的网络命名空间 - 包括它们的 IP 地址和 MAC 地址。 这就意味着 `Pod` 内的容器都可以通过 `localhost` 到达各个端口。 这也意味着 `Pod` 内的容器都需要相互协调端口的使用,但是这和虚拟机中的进程似乎没有什么不同, -这也被称为“一个 Pod 一个 IP” 模型。 +这也被称为“一个 Pod 一个 IP”模型。 如何实现这一点是正在使用的容器运行时的特定信息。 -也可以在 `node` 本身通过端口去请求你的 `Pod` (称之为主机端口), +也可以在 `node` 本身通过端口去请求你的 `Pod`(称之为主机端口), 但这是一个很特殊的操作。转发方式如何实现也是容器运行时的细节。 `Pod` 自己并不知道这些主机端口是否存在。 @@ -196,7 +196,7 @@ AOS 具有一组丰富的 REST API 端点,这些端点使 Kubernetes 能够根 从而为私有云和公共云提供端到端管理系统。 AOS 支持使用包括 Cisco、Arista、Dell、Mellanox、HPE 在内的制造商提供的通用供应商设备, -以及大量白盒系统和开放网络操作系统,例如 Microsoft SONiC、Dell OPX 和 Cumulus Linux 。 +以及大量白盒系统和开放网络操作系统,例如 Microsoft SONiC、Dell OPX 和 Cumulus Linux。 想要更详细地了解 AOS 系统是如何工作的可以点击这里:https://www.apstra.com/products/how-it-works/ @@ -218,10 +218,10 @@ AWS 虚拟私有云(VPC)网络。该 CNI 插件提供了高吞吐量和可 使用该 CNI 插件,可使 Kubernetes Pod 拥有与在 VPC 网络上相同的 IP 地址。 CNI 将 AWS 弹性网络接口(ENI)分配给每个 Kubernetes 节点,并将每个 ENI 的辅助 IP 范围用于该节点上的 Pod 。 -CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间,并且能够支持多达2000个节点的大型集群。 +CNI 包含用于 ENI 和 IP 地址的预分配的控件,以便加快 Pod 的启动时间,并且能够支持多达 2000 个节点的大型集群。 此外,CNI 可以与 -[用于执行网络策略的 Calico](https://docs.aws.amazon.com/eks/latest/userguide/calico.html)一起运行。 +[用于执行网络策略的 Calico](https://docs.aws.amazon.com/eks/latest/userguide/calico.html) 一起运行。 AWS VPC CNI 项目是开源的,请查看 [GitHub 上的文档](https://github.com/aws/amazon-vpc-cni-k8s)。 ConfigMap 既可以通过 watch 操作实现内容传播(默认形式),也可实现基于 TTL -的缓存,还可以直接将所有请求重定向到 API 服务器。 +的缓存,还可以直接经过所有请求重定向到 API 服务器。 因此,从 ConfigMap 被更新的那一刻算起,到新的主键被投射到 Pod 中去,这一 时间跨度可能与 kubelet 的同步周期加上高速缓存的传播延迟相等。 这里的传播延迟取决于所选的高速缓存类型 diff --git a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index db4176391a..9907772704 100644 --- a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -266,7 +266,7 @@ In `$HOME/.kube/config`, relative paths are stored relatively, and absolute path are stored absolutely. 
--> kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位置。 -命令行上的文件引用是相当对于当前工作目录的。 +命令行上的文件引用是相对于当前工作目录的。 在 `$HOME/.kube/config` 中,相对路径按相对路径存储,绝对路径按绝对路径存储。 ## {{% heading "whatsnext" %}} diff --git a/content/zh/docs/concepts/extend-kubernetes/operator.md b/content/zh/docs/concepts/extend-kubernetes/operator.md index fb9323d0dc..ca3d0f790f 100644 --- a/content/zh/docs/concepts/extend-kubernetes/operator.md +++ b/content/zh/docs/concepts/extend-kubernetes/operator.md @@ -185,57 +185,63 @@ kubectl edit SampleDB/example-database # 手动修改某些配置 可以了!Operator 会负责应用所作的更改并保持现有服务处于良好的状态。 + + ## 编写你自己的 Operator {#writing-operator} -如果生态系统中没可以实现你目标的 Operator,你可以自己编写代码。在 -[接下来](#what-s-next)一节中,你会找到编写自己的云原生 Operator -需要的库和工具的链接。 +如果生态系统中没可以实现你目标的 Operator,你可以自己编写代码。 你还可以使用任何支持 [Kubernetes API 客户端](/zh/docs/reference/using-api/client-libraries/) 的语言或运行时来实现 Operator(即控制器)。 + +以下是一些库和工具,你可用于编写自己的云原生 Operator。 + +{{% thirdparty-content %}} + +* [kubebuilder](https://book.kubebuilder.io/) +* [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator) +* [Metacontroller](https://metacontroller.app/),可与 Webhooks 结合使用,以实现自己的功能。 +* [Operator Framework](https://operatorframework.io) + ## {{% heading "whatsnext" %}} -* 详细了解[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +* 详细了解 [定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合你的 Operator -* 借助已有的工具来编写你自己的 Operator,例如: - * [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator) - * [kubebuilder](https://book.kubebuilder.io/) - * [Metacontroller](https://metacontroller.app/),可与 Webhook 结合使用,以实现自己的功能。 - * [Operator Framework](https://operatorframework.io) * [发布](https://operatorhub.io/)你的 Operator,让别人也可以使用 -* 阅读 [CoreOS 原文](https://coreos.com/blog/introducing-operators.html),其介绍了 Operator 介绍 +* 阅读 [CoreOS 原始文章](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html),它介绍了 Operator 模式(这是一个存档版本的原始文章)。 * 阅读这篇来自谷歌云的关于构建 Operator 最佳实践的 [文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) - diff --git a/content/zh/docs/concepts/extend-kubernetes/service-catalog.md b/content/zh/docs/concepts/extend-kubernetes/service-catalog.md index aa38076e44..1169ac9ecf 100644 --- a/content/zh/docs/concepts/extend-kubernetes/service-catalog.md +++ b/content/zh/docs/concepts/extend-kubernetes/service-catalog.md @@ -167,11 +167,11 @@ kind: ClusterServiceBroker metadata: name: cloud-broker spec: - # 指向服务代理的末端。(这里的 URL 是无法使用的) + # 指向服务代理的末端。(这里的 URL 是无法使用的。) url: https://servicebroker.somecloudprovider.com/v1alpha1/projects/service-catalog/brokers/default ##### # 这里可以添加额外的用来与服务代理通信的属性值, - # 例如持有者令牌信息或者 TLS 的 CA 包 + # 例如持有者令牌信息或者 TLS 的 CA 包。 ##### ``` diff --git a/content/zh/docs/concepts/overview/components.md b/content/zh/docs/concepts/overview/components.md index 1e42cdcbaa..090468282a 100644 --- a/content/zh/docs/concepts/overview/components.md +++ b/content/zh/docs/concepts/overview/components.md @@ -92,10 +92,10 @@ These controllers include: --> 这些控制器包括: -* 节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应。 -* 副本控制器(Replication Controller): 负责为系统中的每个副本控制器对象维护正确数量的 Pod。 -* 端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod)。 -* 服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌. 
+* 节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应 +* 副本控制器(Replication Controller): 负责为系统中的每个副本控制器对象维护正确数量的 Pod +* 端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod) +* 服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌 **传统部署时代:** -早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。 +早期,各个组织机构在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。 例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况, 结果可能导致其他应用程序的性能下降。 一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展, -并且组织维护许多物理服务器的成本很高。 +并且维护许多物理服务器的成本很高。 ### LIST 和 WATCH 过滤 -LIST and WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。 +LIST 和 WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。 两种需求都是允许的。(这里显示的是它们出现在 URL 查询字符串中) 支持以下两种方式配置调度器的过滤和打分行为: - -1. [调度策略](/zh/docs/reference/scheduling/policies) 允许你配置过滤的 _谓词(Predicates)_ +1. [调度策略](/zh/docs/reference/scheduling/policies) 允许你配置过滤的 _断言(Predicates)_ 和打分的 _优先级(Priorities)_ 。 2. [调度配置](/zh/docs/reference/scheduling/config/#profiles) 允许你配置实现不同调度阶段的插件, 包括:`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 等等。 你也可以配置 kube-scheduler 运行不同的配置文件。 ## {{% heading "whatsnext" %}} - 节点亲和性(详见[这里](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)) @@ -67,7 +67,7 @@ kubectl taint nodes node1 key1=value1:NoSchedule- 若要移除上述命令所添加的污点,你可以执行: ```shell -kubectl taint nodes node1 key:NoSchedule- +kubectl taint nodes node1 key1=value1:NoSchedule- ``` -上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。 +上述例子中 `effect` 使用的值为 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。 这是“优化”或“软”版本的 `NoSchedule` —— 系统会 *尽量* 避免将 Pod 调度到存在其不能容忍污点的节点上, 但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 @@ -438,7 +438,7 @@ by the user already has a toleration for `node.kubernetes.io/unreachable`. {{< note >}} Kubernetes 会自动给 Pod 添加一个 key 为 `node.kubernetes.io/not-ready` 的容忍度 -并配置 `tolerationSeconds=300`,除非用户提供的 Pod 配置中已经已存在了 key 为 +并配置 `tolerationSeconds=300`,除非用户提供的 Pod 配置中已经已存在了 key 为 `node.kubernetes.io/not-ready` 的容忍度。 同样,Kubernetes 会给 Pod 添加一个 key 为 `node.kubernetes.io/unreachable` 的容忍度 @@ -517,5 +517,3 @@ arbitrary tolerations to DaemonSets. 
--> * 阅读[资源耗尽的处理](/zh/docs/tasks/administer-cluster/out-of-resource/),以及如何配置其行为 * 阅读 [Pod 优先级](/zh/docs/concepts/configuration/pod-priority-preemption/) - - diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md index bf54a0f2d0..2476d56f73 100644 --- a/content/zh/docs/concepts/services-networking/connect-applications-service.md +++ b/content/zh/docs/concepts/services-networking/connect-applications-service.md @@ -363,7 +363,7 @@ You can acquire all these from the [nginx https example](https://github.com/kube * 使用证书配置的 Nginx 服务器 * 使证书可以访问 Pod 的 [Secret](/zh/docs/concepts/configuration/secret/) -你可以从 [Nginx https 示例](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/staging/https-nginx/) +你可以从 [Nginx https 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/) 获取所有上述内容。你需要安装 go 和 make 工具。如果你不想安装这些软件,可以按照 后文所述的手动执行步骤执行操作。简要过程如下: @@ -438,7 +438,7 @@ kind: "Secret" metadata: name: "nginxsecret" namespace: "default" - type: kubernetes.io/tls +type: kubernetes.io/tls data: tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" tls.key: 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" @@ -462,7 +462,7 @@ nginxsecret kubernetes.io/tls 2 1m -现在修改 nginx 副本,启动一个使用在秘钥中的证书的 HTTPS 服务器和 Servcie,暴露端口(80 和 443): +现在修改 nginx 副本,启动一个使用在秘钥中的证书的 HTTPS 服务器和 Service,暴露端口(80 和 443): {{< codenew file="service/networking/nginx-secure-app.yaml" >}} diff --git a/content/zh/docs/concepts/services-networking/dual-stack.md b/content/zh/docs/concepts/services-networking/dual-stack.md index 4c44d75016..f01bd78038 100644 --- a/content/zh/docs/concepts/services-networking/dual-stack.md +++ b/content/zh/docs/concepts/services-networking/dual-stack.md @@ -9,11 +9,17 @@ weight: 70 --- @@ -85,6 +91,20 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack 要启用 IPv4/IPv6 双协议栈,为集群的相关组件启用 `IPv6DualStack` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), @@ -95,8 +115,8 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/ * `--service-cluster-ip-range=,` * kube-controller-manager: * `--feature-gates="IPv6DualStack=true"` - * `--cluster-cidr=,` 例如 `--cluster-cidr=10.244.0.0/16,fc00::/48` - * `--service-cluster-ip-range=,` 例如 `--service-cluster-ip-range=10.0.0.0/16,fd00::/108` + * `--cluster-cidr=,` + * `--service-cluster-ip-range=,` * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` 对于 IPv4 
默认为 /24,对于 IPv6 默认为 /64 * kubelet: * `--feature-gates="IPv6DualStack=true"` @@ -125,14 +145,14 @@ IPv6 CIDR 的一个例子:`fdXY:IJKL:MNOP:15::/64`(这里演示的是格式 如果你的集群启用了 IPv4/IPv6 双协议栈网络,则可以使用 IPv4 或 IPv6 地址来创建 {{< glossary_tooltip text="Service" term_id="service" >}}。 -服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-controller-manager 的 `--service-cluster-ip-range` 参数配置) +服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-apiserver 的 `--service-cluster-ip-range` 参数配置)。 当你定义服务时,可以选择将其配置为双栈。若要指定所需的行为,你可以设置 `.spec.ipFamilyPolicy` 字段为以下值之一: 2. 在集群上启用双栈时,带有选择算符的现有 [无头服务](/zh/docs/concepts/services-networking/service/#headless-services) 由控制面设置 `.spec.ipFamilyPolicy` 为 `SingleStack` - 并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-controller-manager 的 + 并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-apiserver 的 `--service-cluster-ip-range` 参数),即使 `.spec.ClusterIP` 的设置值为 `None` 也如此。 {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} @@ -396,9 +416,9 @@ For [Headless Services without selectors](/docs/concepts/services-networking/ser 若没有显式设置 `.spec.ipFamilyPolicy`,则 `.spec.ipFamilyPolicy` 字段默认设置为 `RequireDualStack`。 -### LoadBalancer 类型 +### LoadBalancer 类型服务 ## 出站流量 @@ -440,6 +460,4 @@ Ensure your {{< glossary_tooltip text="CNI" term_id="cni" >}} provider supports -* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络 - - +* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络 \ No newline at end of file diff --git a/content/zh/docs/concepts/services-networking/ingress.md b/content/zh/docs/concepts/services-networking/ingress.md index 97304e33cc..90bd23d2d7 100644 --- a/content/zh/docs/concepts/services-networking/ingress.md +++ b/content/zh/docs/concepts/services-networking/ingress.md @@ -705,7 +705,7 @@ sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for `https-example.foo.com`. --> 在 Ingress 中引用此 Secret 将会告诉 Ingress 控制器使用 TLS 加密从客户端到负载均衡器的通道。 -你需要确保创建的 TLS Secret 创建自包含 `sslexample.foo.com` 的公用名称(CN)的证书。 +你需要确保创建的 TLS Secret 创建自包含 `https-example.foo.com` 的公用名称(CN)的证书。 这里的公共名称也被称为全限定域名(FQDN)。 {{< note >}} diff --git a/content/zh/docs/concepts/storage/dynamic-provisioning.md b/content/zh/docs/concepts/storage/dynamic-provisioning.md index ae6d82ec4e..14b72ac157 100644 --- a/content/zh/docs/concepts/storage/dynamic-provisioning.md +++ b/content/zh/docs/concepts/storage/dynamic-provisioning.md @@ -112,13 +112,13 @@ parameters: Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation. However, this annotation -is deprecated since v1.6. Users now can and should instead use the +is deprecated since v1.9. Users now can and should instead use the `storageClassName` field of the `PersistentVolumeClaim` object. The value of this field must match the name of a `StorageClass` configured by the administrator (see [below](#enabling-dynamic-provisioning)). 
--> 用户通过在 `PersistentVolumeClaim` 中包含存储类来请求动态供应的存储。 -在 Kubernetes v1.6 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。然而,这个注解自 v1.6 起就不被推荐使用了。 +在 Kubernetes v1.9 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。然而,这个注解自 v1.6 起就不被推荐使用了。 用户现在能够而且应该使用 `PersistentVolumeClaim` 对象的 `storageClassName` 字段。 这个字段的值必须能够匹配到集群管理员配置的 `StorageClass` 名称(见[下面](#enabling-dynamic-provisioning))。 diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh/docs/concepts/storage/volumes.md index 4c49adb4b2..cd2c09bcb2 100644 --- a/content/zh/docs/concepts/storage/volumes.md +++ b/content/zh/docs/concepts/storage/volumes.md @@ -56,13 +56,14 @@ can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a -pod ceases to exist, the volume is destroyed. +pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not +destroy persistent volumes. --> Kubernetes 支持很多类型的卷。 {{< glossary_tooltip term_id="pod" text="Pod" >}} 可以同时使用任意数目的卷类型。 临时卷类型的生命周期与 Pod 相同,但持久卷可以比 Pod 的存活期长。 因此,卷的存在时间会超出 Pod 中运行的所有容器,并且在容器重新启动时数据也会得到保留。 -当 Pod 不再存在时,卷也将不再存在。 +当 Pod 不再存在时,临时卷也将不再存在。但是持久卷会继续存在。 +如果 EBS 卷是分区的,你可以提供可选的字段 `partition: ""` 来指定要挂载到哪个分区上。 + @@ -355,14 +361,14 @@ spec: 启用 Cinder 的 `CSIMigration` 功能后,所有插件操作会从现有的树内插件重定向到 `cinder.csi.openstack.org` 容器存储接口(CSI)驱动程序。 -为了使用此功能,必须在集群中安装 [Openstack Cinder CSI 驱动程序](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md), +为了使用此功能,必须在集群中安装 [OpenStack Cinder CSI 驱动程序](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md), 并且 `CSIMigration` 和 `CSIMigrationOpenStack` Beta 功能必须被启用。 ### configMap @@ -1479,13 +1485,13 @@ Quobyte 的 GitHub 项目包含以 CSI 形式部署 Quobyte 的 -`rbd` 卷允许将 [Rados 块设备](https://ceph.com/docs/master/rbd/rbd/) 卷挂载到你的 Pod 中. +`rbd` 卷允许将 [Rados 块设备](https://docs.ceph.com/en/latest/rbd/) 卷挂载到你的 Pod 中. 
不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`rbd` 卷的内容在删除 Pod 时 会被保存,卷只是被卸载。 这意味着 `rbd` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。 @@ -2087,7 +2093,7 @@ persistent volume: --> - `volumeHandle`:唯一标识卷的字符串值。 该值必须与 CSI 驱动在 `CreateVolumeResponse` 的 `volume_id` 字段中返回的值相对应; - 接口定义在 [CSI spec](https://github.com/container-storageinterface/spec/blob/master/spec.md#createvolume) 中。 + 接口定义在 [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume) 中。 在所有对 CSI 卷驱动程序的调用中,引用该 CSI 卷时都使用此值作为 `volume_id` 参数。 -Job `pi-with-ttl` 在结束 100 秒之后,可以成为被自动删除的标的。 +Job `pi-with-ttl` 在结束 100 秒之后,可以成为被自动删除的对象。 如果该字段设置为 `0`,Job 在结束之后立即成为可被自动删除的对象。 如果该字段没有设置,Job 不会在结束之后被 TTL 控制器自动清除。 diff --git a/content/zh/docs/concepts/workloads/controllers/statefulset.md b/content/zh/docs/concepts/workloads/controllers/statefulset.md index cdc0383316..4cd6606a38 100644 --- a/content/zh/docs/concepts/workloads/controllers/statefulset.md +++ b/content/zh/docs/concepts/workloads/controllers/statefulset.md @@ -1,7 +1,7 @@ --- title: StatefulSets content_type: concept -weight: 40 +weight: 30 --- +上述例子中: + * 名为 `nginx` 的 Headless Service 用来控制网络域名。 * 名为 `web` 的 StatefulSet 有一个 Spec,它表明将在独立的 3 个 Pod 副本中启动 nginx 容器。 * `volumeClaimTemplates` 将通过 PersistentVolumes 驱动提供的 [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) 来提供稳定的存储。 +StatefulSet 的命名需要遵循[DNS 子域名](zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)规范。 + @@ -217,9 +227,48 @@ StatefulSet 可以使用 [无头服务](/zh/docs/concepts/services-networking/se 一旦每个 Pod 创建成功,就会得到一个匹配的 DNS 子域,格式为: `$(pod 名称).$(所属服务的 DNS 域名)`,其中所属服务由 StatefulSet 的 `serviceName` 域来设定。 + +取决于集群域内部 DNS 的配置,有可能无法查询一个刚刚启动的 Pod 的 DNS 命名。 +当集群内其他客户端在 Pod 创建完成前发出 Pod 主机名查询时,就会发生这种情况。 +负缓存 (在 DNS 中较为常见) 意味着之前失败的查询结果会被记录和重用至少若干秒钟, +即使 Pod 已经正常运行了也是如此。 + +如果需要在 Pod 被创建之后及时发现它们,有以下选项: + +- 直接查询 Kubernetes API(比如,利用 watch 机制)而不是依赖于 DNS 查询 +- 缩短 Kubernetes DNS 驱动的缓存时长(通常这意味着修改 CoreDNS 的 ConfigMap,目前缓存时长为 30 秒) + +正如[限制](#limitations)中所述,你需要负责创建[无头服务](/zh/docs/concepts/services-networking/service/#headless-services) +以便为 Pod 提供网络标识。 + 下面给出一些选择集群域、服务名、StatefulSet 名、及其怎样影响 StatefulSet 的 Pod 上的 DNS 名称的示例: @@ -350,12 +399,14 @@ described [above](#deployment-and-scaling-guarantees). `Parallel` pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and to not wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another -Pod. +Pod. This option only affects the behavior for scaling operations. Updates are not affected. + --> #### 并行 Pod 管理 {#parallel-pod-management} `Parallel` Pod 管理让 StatefulSet 控制器并行的启动或终止所有的 Pod, 启动或者终止其他 Pod 前,无需等待 Pod 进入 Running 和 ready 或者完全停止状态。 +这个选项只会影响伸缩操作的行为,更新则不会被影响。 - -本页面将介绍定制 Hugo 短代码,可以用于 Kubernetes markdown 文档书写。 + +本页面将介绍 Hugo 自定义短代码,可以用于 Kubernetes Markdown 文档书写。 关于短代码的更多信息可参见 [Hugo 文档](https://gohugo.io/content-management/shortcodes)。 @@ -20,31 +20,31 @@ content_type: concept ## 功能状态 -在本站的 markdown 页面中,你可以加入短代码来展示所描述的功能特性的版本和状态。 +在本站的 Markdown 页面中,你可以加入短代码来展示所描述的功能特性的版本和状态。 ### 功能状态示例 -下面是一个功能状态代码段的演示,表明这个功能已经在 Kubernetes v1.10 时就已经稳定了。 +下面是一个功能状态代码段的演示,表明这个功能已经在最新版 Kubernetes 中稳定了。 ``` -{{}} +{{}} ``` - + 会转换为: -{{< feature-state for_k8s_version="v1.10" state="stable" >}} +{{< feature-state state="stable" >}} `state` 的可选值如下: @@ -57,91 +57,42 @@ in Kubernetes version 1.10. 
### 功能状态代码 所显示的 Kubernetes 默认为该页或站点版本。 -可以通过修改 for_k8s_version 短代码参数来调整要显示的版本。 +修改 for_k8s_version 短代码参数可以调整要显示的版本。例如 ``` -{{}} +{{}} ``` 会转换为: -{{< feature-state for_k8s_version="v1.11" state="stable" >}} - - -#### Alpha 功能 - -``` -{{}} -``` - - -会转换为: - -{{< feature-state state="alpha" >}} - - -#### Beta 功能 - -``` -{{}} -``` - - -会转换为: - -{{< feature-state state="beta" >}} - - -#### 稳定功能 - -``` -{{}} -``` - - -会转换为: - -{{< feature-state state="stable" >}} - - -#### 废弃功能 - -``` -{{}} -``` - - -会转换为: - -{{< feature-state state="deprecated" >}} +{{< feature-state for_k8s_version="v1.10" state="beta" >}} ## 词汇 -有两种词汇表提示。 +有两种词汇表提示:`glossary_tooltip` 和 `glossary_definition`。 你可以通过加入术语词汇的短代码,来自动更新和替换相应链接中的内容 ([我们的词汇库](/zh/docs/reference/glossary/)) -这样,在浏览在线文档,鼠标移到术语上时,术语解释就会显示在提示框中。 +在浏览在线文档时,术语会显示为超链接的样式,当鼠标移到术语上时,其解释就会显示在提示框中。 除了包含工具提示外,你还可以重用页面内容中词汇表中的定义。 ### 词汇演示 -例如,下面的代码在 markdown 中将会转换为 `{{< glossary_tooltip text="cluster" term_id="cluster" >}}`, +例如,下面的代码在 Markdown 中将会转换为 `{{< glossary_tooltip text="cluster" term_id="cluster" >}}`, 然后在提示框中显示。 ``` @@ -191,7 +142,6 @@ You can also include a full definition: 呈现为: {{< glossary_definition term_id="cluster" length="all" >}} @@ -236,7 +186,7 @@ Parameter | Description | Default {{< /table >}} --> -​```go-html-template +```go-html-template {{}} 参数 | 描述 | 默认值 :---------|:------------|:------- @@ -278,7 +228,7 @@ The `tabs` shortcode takes these parameters: --> ## 标签页 -在本站的 markdown 页面(`.md` 文件)中,你可以加入一个标签页集来显示 +在本站的 Markdown 页面(`.md` 文件)中,你可以加入一个标签页集来显示 某解决方案的不同形式。 标签页的短代码包含以下参数: @@ -398,6 +348,120 @@ println "This is tab 2." {{< tab name="JSON File" include="podtemplate.json" />}} {{< /tabs >}} + +## 版本号信息 + +要在文档中生成版本号信息,可以从以下几种短代码中选择。每个短代码可以基于站点配置文件 +`config.toml` 中的版本参数生成一个版本号取值。最常用的参数为 `latest` 和 `version`。 + + +### `{{}}` + +`{{}}` 短代码可以基于站点参数 `version` 生成 Kubernetes +文档的当前版本号取值。短代码 `param` 允许传入一个站点参数名称,在这里是 `version`。 + + +{{< note >}} +在先前已经发布的文档中,`latest` 和 `version` 参数值并不完全等价。新版本文档发布后,参数 +`latest` 会增加,而 `version` 则保持不变。例如,在上一版本的文档中使用 `version` 会得到 +`v1.19`,而使用 `latest` 则会得到 `v1.20`。 +{{< /note >}} + + +转换为: + +{{< param "version" >}} + + +### `{{}}` + +`{{}}` 返回站点参数 `latest` 的取值。每当新版本文档发布时,该参数均会被更新。 +因此,参数 `latest` 与 `version` 并不总是相同。 + +转换为: + +{{< latest-version >}} + + +### `{{}}` + +`{{}}` 短代码可以生成站点参数 `latest` 不含前缀 `v` 的版本号取值。 + +转换为: + +{{< latest-semver >}} + + +### `{{}}` + +`{{}}` 会检查是否设置了页面参数 `min-kubernetes-server-version` +并将其与 `version` 进行比较。 + +转换为: + +{{< version-check >}} + + +### `{{}}` + +`{{}}` 短代码基于站点参数 `latest` 生成不含前缀 `v` +的版本号取值,并输出该版本更新日志的超链接地址。 + +转换为: + +{{< latest-release-notes >}} + ## {{% heading "whatsnext" %}} -* 了解 [Hugo](https://gohugo.io/)。 +* 了解[Hugo](https://gohugo.io/)。 * 了解[撰写新的话题](/zh/docs/contribute/style/write-new-topic/)。 * 了解[使用页面内容类型](/zh/docs/contribute/style/page-content-types/)。 * 了解[发起 PR](/zh/docs/contribute/new-content/open-a-pr/)。 * 了解[高级贡献](/zh/docs/contribute/advanced/)。 - diff --git a/content/zh/docs/reference/glossary/api-group.md b/content/zh/docs/reference/glossary/api-group.md index 3c23f3dd0d..a219943be7 100644 --- a/content/zh/docs/reference/glossary/api-group.md +++ b/content/zh/docs/reference/glossary/api-group.md @@ -2,7 +2,7 @@ title: API Group id: api-group date: 2019-09-02 -full_link: /zh/docs/concepts/overview/kubernetes-api/#api-groups +full_link: /zh/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning short_description: > Kubernetes API 中的一组相关路径 @@ -17,7 +17,7 @@ tags: title: API Group id: api-group date: 2019-09-02 -full_link: 
/docs/concepts/overview/kubernetes-api/#api-groups +full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning short_description: > A set of related paths in the Kubernetes API. diff --git a/content/zh/docs/reference/glossary/cluster-operator.md b/content/zh/docs/reference/glossary/cluster-operator.md index 4be92284b0..0e366d76dd 100644 --- a/content/zh/docs/reference/glossary/cluster-operator.md +++ b/content/zh/docs/reference/glossary/cluster-operator.md @@ -36,10 +36,10 @@ tags: Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.
    -**NOTE:** Cluster operators are different from the [Operator pattern](https://coreos.com/operators) that extends the Kubernetes API. +**NOTE:** Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API. --> 他们的主要责任是保持集群正常运行,可能需要进行周期性的维护和升级活动。
    -**注意:** 集群操作者不同于[操作者模式(Operator Pattern)](https://coreos.com/operators),操作者模式是用来扩展 Kubernetes API 的。 +**注意:** 集群操作者不同于[操作者模式(Operator Pattern)](https://www.openshift.com/learn/topics/operators),操作者模式是用来扩展 Kubernetes API 的。 diff --git a/content/zh/docs/reference/issues-security/issues.md b/content/zh/docs/reference/issues-security/issues.md index d2ac9ae7be..23a015a519 100644 --- a/content/zh/docs/reference/issues-security/issues.md +++ b/content/zh/docs/reference/issues-security/issues.md @@ -1,7 +1,6 @@ --- title: Kubernetes 问题追踪 weight: 10 -aliases: [cve/,cves/] --- {{< feature-state for_k8s_version="v1.19" state="beta" >}} - 你可以通过编写配置文件,并将其路径传给 `kube-scheduler` 的命令行参数,定制 `kube-scheduler` 的行为。 @@ -82,14 +82,14 @@ extension points: --> 1. `QueueSort`:这些插件对调度队列中的悬决的 Pod 排序。 一次只能启用一个队列排序插件。 - 2. `PreFilter`:这些插件用于在过滤之前预处理或检查 Pod 或集群的信息。 它们可以将 Pod 标记为不可调度。 - @@ -127,13 +127,13 @@ extension points: least one bind plugin is required. --> 9. `Bind`:这个插件将 Pod 与节点绑定。绑定插件是按顺序调用的,只要有一个插件完成了绑定,其余插件都会跳过。绑定插件至少需要一个。 - 10. `PostBind`:这是一个信息扩展点,在 Pod 绑定了节点之后调用。 - @@ -154,18 +154,18 @@ profiles: weight: 1 ``` - 你可以在 `disabled` 数组中使用 `*` 禁用该扩展点的所有默认插件。 如果需要,这个字段也可以用来对插件重新顺序。 - + ### 调度插件 {#scheduling-plugin} - @@ -190,7 +190,7 @@ extension points: - `SelectorSpread`:对于属于 {{< glossary_tooltip text="Services" term_id="service" >}}、 {{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} 和 {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} 的 Pod,偏好跨多个节点部署。 - + 实现的扩展点:`PreScore`,`Score`。 - `ImageLocality`:选择已经存在 Pod 运行所需容器镜像的节点。 - + 实现的扩展点:`Score`。 - `TaintToleration`:实现了[污点和容忍](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 - + 实现的扩展点:`Filter`,`Prescore`,`Score`。 - `NodeName`:检查 Pod 指定的节点名称与当前节点是否匹配。 - + 实现的扩展点:`Filter`。 - `NodePorts`:检查 Pod 请求的端口在节点上是否可用。 - + 实现的扩展点:`PreFilter`,`Filter`。 -- `NodePreferAvoidPods`:基于节点的 {{< glossary_tooltip text="注解" term_id="annotation" >}} +- `NodePreferAvoidPods`:基于节点的 {{< glossary_tooltip text="注解" term_id="annotation" >}} `scheduler.alpha.kubernetes.io/preferAvoidPods` 打分。 - + 实现的扩展点:`Score`。 - `NodeAffinity`:实现了[节点选择器](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) 和[节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)。 - + 实现的扩展点:`Filter`,`Score`. 
- `PodTopologySpread`:实现了 [Pod 拓扑分布](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。 - + 实现的扩展点:`PreFilter`,`Filter`,`PreScore`,`Score`。 - `NodeUnschedulable`:过滤 `.spec.unschedulable` 值为 true 的节点。 - + 实现的扩展点:`Filter`。 - `NodeResourcesFit`:检查节点是否拥有 Pod 请求的所有资源。 - + 实现的扩展点:`PreFilter`,`Filter`。 - `NodeResourcesBalancedAllocation`:调度 Pod 时,选择资源使用更为均衡的节点。 - + 实现的扩展点:`Score`。 - `NodeResourcesLeastAllocated`:选择资源分配较少的节点。 - + 实现的扩展点:`Score`。 - `VolumeBinding`:检查节点是否有请求的卷,或是否可以绑定请求的卷。 - + 实现的扩展点: `PreFilter`,`Filter`,`Reserve`,`PreBind`。 - - `VolumeRestrictions`:检查挂载到节点上的卷是否满足卷提供程序的限制。 - + 实现的扩展点:`Filter`。 - `VolumeZone`:检查请求的卷是否在任何区域都满足。 - + 实现的扩展点:`Filter`。 - - `NodeVolumeLimits`:检查该节点是否满足 CSI 卷限制。 - + 实现的扩展点:`Filter`。 - `EBSLimits`:检查节点是否满足 AWS EBS 卷限制。 - + 实现的扩展点:`Filter`。 - `GCEPDLimits`:检查该节点是否满足 GCP-PD 卷限制。 - + 实现的扩展点:`Filter`。 - `AzureDiskLimits`:检查该节点是否满足 Azure 卷限制。 - + 实现的扩展点:`Filter`。 - `InterPodAffinity`:实现 [Pod 间亲和性与反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)。 - + 实现的扩展点:`PreFilter`,`Filter`,`PreScore`,`Score`。 - `PrioritySort`:提供默认的基于优先级的排序。 - + 实现的扩展点:`QueueSort`。 - `DefaultBinder`:提供默认的绑定机制。 - + 实现的扩展点:`Bind`。 - `DefaultPreemption`:提供默认的抢占机制。 - + 实现的扩展点:`PostFilter`。 - `NodeResourcesMostAllocated`:选择已分配资源多的节点。 - + 实现的扩展点:`Score`。 - `RequestedToCapacityRatio`:根据已分配资源的某函数设置选择节点。 - + 实现的扩展点:`Score`。 - `NodeResourceLimits`:选择满足 Pod 资源限制的节点。 - + 实现的扩展点:`PreScore`,`Score`。 - `CinderVolume`:检查该节点是否满足 OpenStack Cinder 卷限制。 - + 实现的扩展点:`Filter`。 -- `NodeLabel`:根据配置的 {{< glossary_tooltip text="标签" term_id="label" >}} +- `NodeLabel`:根据配置的 {{< glossary_tooltip text="标签" term_id="label" >}} 过滤节点和/或给节点打分。 - + 实现的扩展点:`Filter`,`Score`。 @@ -462,14 +462,14 @@ profiles: Pods that want to be scheduled according to a specific profile can include the corresponding scheduler name in its `.spec.schedulerName`. --> -希望根据特定配置文件调度的 Pod,可以在 `.spec.schedulerName` 字段指定相应的调度器名称。 +对于那些希望根据特定配置文件来进行调度的 Pod,可以在 `.spec.schedulerName` 字段指定相应的调度器名称。 -默认情况下,将创建一个名为 `default-scheduler` 的配置文件。 +默认情况下,将创建一个调度器名为 `default-scheduler` 的配置文件。 这个配置文件包括上面描述的所有默认插件。 声明多个配置文件时,每个配置文件中调度器名称必须唯一。 @@ -478,8 +478,8 @@ If a Pod doesn't specify a scheduler name, kube-apiserver will set it to `default-scheduler`. Therefore, a profile with this scheduler name should exist to get those pods scheduled. --> -如果 Pod 未指定调度器名称,kube-apiserver 将会把它设置为 `default-scheduler`。 -因此,应该存在一个名为 `default-scheduler` 的配置文件来调度这些 Pod。 +如果 Pod 未指定调度器名称,kube-apiserver 将会把调度器名设置为 `default-scheduler`。 +因此,应该存在一个调度器名为 `default-scheduler` 的配置文件来调度这些 Pod。 {{< note >}} Pod 的调度事件把 `.spec.schedulerName` 字段值作为 ReportingController。 -领导者选择事件使用列表中第一个配置文件的调度器名称。 +领导者选举事件使用列表中第一个配置文件的调度器名称。 {{< /note >}} {{< note >}} @@ -498,7 +498,7 @@ the same configuration parameters (if applicable). This is because the scheduler only has one pending pods queue. --> 所有配置文件必须在 QueueSort 扩展点使用相同的插件,并具有相同的配置参数(如果适用)。 -这是因为调度器只有一个的队列保存悬决的 Pod。 +这是因为调度器只有一个保存 pending 状态 Pod 的队列。 {{< /note >}} @@ -509,4 +509,4 @@ only has one pending pods queue. 
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) --> * 阅读 [kube-scheduler 参考](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) -* 了解[调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/) \ No newline at end of file +* 了解[调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/) diff --git a/content/zh/docs/reference/using-api/_index.md b/content/zh/docs/reference/using-api/_index.md index 88b6bce641..303f2152ac 100644 --- a/content/zh/docs/reference/using-api/_index.md +++ b/content/zh/docs/reference/using-api/_index.md @@ -1,6 +1,10 @@ --- -title: 使用 Kubernetes API +title: API 概述 weight: 10 +no_list: true +card: + name: reference + weight: 50 --- @@ -68,7 +72,7 @@ can find more information about the criteria for each level in the [API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). --> 不同的 API 版本代表着不同的稳定性和支持级别。 -你可以在 [API 变更文档]https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) +你可以在 [API 变更文档](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) 中查看到更多的不同级别的判定标准。 -## 启用或禁用 API 组 {#enabling-or-disabling} +## 启用或禁用 API 组 {#enabling-or-disabling} 资源和 API 组是在默认情况下被启用的。 你可以通过在 API 服务器上设置 `--runtime-config` 参数来启用或禁用它们。 `--runtime-config` 参数接受逗号分隔的 `[=]` 对, diff --git a/content/zh/docs/reference/using-api/client-libraries.md b/content/zh/docs/reference/using-api/client-libraries.md index 599b1bf6df..cf01bcc56e 100644 --- a/content/zh/docs/reference/using-api/client-libraries.md +++ b/content/zh/docs/reference/using-api/client-libraries.md @@ -111,12 +111,13 @@ their authors, not the Kubernetes team. | Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | +| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | -| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | +| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | | DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) | @@ -145,12 +146,13 @@ their authors, not the Kubernetes team. 
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | +| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | -| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | +| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) | | DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | diff --git a/content/zh/docs/setup/best-practices/certificates.md b/content/zh/docs/setup/best-practices/certificates.md index 3a72705416..f8a79245c0 100644 --- a/content/zh/docs/setup/best-practices/certificates.md +++ b/content/zh/docs/setup/best-practices/certificates.md @@ -17,12 +17,12 @@ weight: 40 Kubernetes 需要 PKI 证书才能进行基于 TLS 的身份验证。如果你是使用 -[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 安装的 Kubernetes, +[kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 安装的 Kubernetes, 则会自动生成集群所需的证书。你还可以生成自己的证书。 例如,不将私钥存储在 API 服务器上,可以让私钥更加安全。此页面说明了集群必需的证书。 @@ -144,13 +144,13 @@ Required certificates: | front-proxy-client | kubernetes-front-proxy-ca | | client | | [1]: 用来连接到集群的不同 IP 或 DNS 名 -(就像 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 为负载均衡所使用的固定 +(就像 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 为负载均衡所使用的固定 IP 或 DNS 名,`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、 `kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。 @@ -193,11 +193,11 @@ For kubeadm users only: ### 证书路径 -证书应放置在建议的路径中(以便 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)使用)。无论使用什么位置,都应使用给定的参数指定路径。 +证书应放置在建议的路径中(以便 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/)使用)。无论使用什么位置,都应使用给定的参数指定路径。 | 默认 CN | 建议的密钥路径 | 建议的证书路径 | 命令 | 密钥参数 | 证书参数 | |------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------| diff --git a/content/zh/docs/setup/production-environment/container-runtimes.md b/content/zh/docs/setup/production-environment/container-runtimes.md index 08f3ea9fc7..99739e8b10 100644 --- a/content/zh/docs/setup/production-environment/container-runtimes.md +++ b/content/zh/docs/setup/production-environment/container-runtimes.md @@ -9,7 +9,7 @@ reviewers: - bart0sh title: Container runtimes content_type: concept -weight: 10 +weight: 20 --> @@ -108,7 +108,7 @@ configuration, or reinstall it using automation. 
### containerd -本节包含使用 `containerd` 作为 CRI 运行时的必要步骤。 +本节包含使用 containerd 作为 CRI 运行时的必要步骤。 使用以下命令在系统上安装容器: @@ -156,7 +156,7 @@ net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF -# Apply sysctl params without reboot +# 应用 sysctl 参数而无需重新启动 sudo sysctl --system ``` @@ -166,310 +166,85 @@ Install containerd: 安装 containerd: {{< tabs name="tab-cri-containerd-installation" >}} -{{% tab name="Ubuntu 16.04" %}} +{{% tab name="Linux" %}} -```shell -# (安装 containerd) -## (设置仓库) -### (安装软件包以允许 apt 通过 HTTPS 使用存储库) -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` - -```shell -## 安装 Docker 的官方 GPG 密钥 -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` - -```shell -## 新增 Docker apt 仓库。 -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` - -```shell -## 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` - -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{< /tab >}} -{{% tab name="Ubuntu 18.04/20.04" %}} +1. 从官方Docker仓库安装 `containerd.io` 软件包。可以在 [安装 Docker 引擎](https://docs.docker.com/engine/install/#server) 中找到有关为各自的 Linux 发行版设置 Docker 存储库和安装 `containerd.io` 软件包的说明。 -```shell -# 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd -``` +2. 配置 containerd: -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -sudo containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="Debian 9+" %}} + ```shell + sudo mkdir -p /etc/containerd + containerd config default | sudo tee /etc/containerd/config.toml + ``` -```shell -# 安装 containerd -## 配置仓库 -### 安装软件包以使 apt 能够使用 HTTPS 访问仓库 -sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common -``` +3. 
重新启动 containerd: -```shell -## 添加 Docker 的官方 GPG 密钥 -curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add - -``` + ```shell + sudo systemctl restart containerd + ``` -```shell -## 添加 Docker apt 仓库 -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/debian \ - $(lsb_release -cs) \ - stable" -``` - - -```shell -## 安装 containerd -sudo apt-get update && sudo apt-get install -y containerd.io -``` - -```shell -# 设置 containerd 的默认配置 -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` -{{% /tab %}} -{{% tab name="CentOS/RHEL 7.4+" %}} - - -```shell -# 安装 containerd -## 设置仓库 -### 安装所需包 -sudo yum install -y yum-utils device-mapper-persistent-data lvm2 -``` - -```shell -### 添加 Docker 仓库 -sudo yum-config-manager \ - --add-repo \ - https://download.docker.com/linux/centos/docker-ce.repo -``` - -```shell -## 安装 containerd -sudo yum update -y && sudo yum install -y containerd.io -``` - -```shell -# 配置 containerd -sudo mkdir -p /etc/containerd -containerd config default | sudo tee /etc/containerd/config.toml -``` - -```shell -# 重启 containerd -sudo systemctl restart containerd -``` {{% /tab %}} {{% tab name="Windows (PowerShell)" %}} - +启动 Powershell 会话,将 `$Version` 设置为所需的版本(例如:`$ Version=1.4.3`),然后运行以下命令: -```powershell -# extract and configure -Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force -cd $Env:ProgramFiles\containerd\ -.\containerd.exe config default | Out-File config.toml -Encoding ascii + +1. 下载 containerd: -# review the configuration. depending on setup you may want to adjust: -# - the sandbox_image (kubernetes pause image) -# - cni bin_dir and conf_dir locations -Get-Content config.toml -``` + ```powershell + curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz + tar.exe xvf .\containerd-windows-amd64.tar.gz + ``` + +2. 提取并配置: -```powershell -# start containerd -.\containerd.exe --register-service -Start-Service containerd -``` - --> -```powershell -# 安装 containerd -# 下载 containerd -cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz -cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz -``` + ```powershell + Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force + cd $Env:ProgramFiles\containerd\ + .\containerd.exe config default | Out-File config.toml -Encoding ascii -```powershell -# 解压并配置 -Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force -cd $Env:ProgramFiles\containerd\ -.\containerd.exe config default | Out-File config.toml -Encoding ascii + # Review the configuration. Depending on setup you may want to adjust: + # - the sandbox_image (Kubernetes pause image) + # - cni bin_dir and conf_dir locations + Get-Content config.toml -# 检查配置文件,基于你可能想要调整的设置: -# - sandbox_image (kubernetes pause 镜像) -# - CNI 的 bin_dir 和 conf_dir 路径 -Get-Content config.toml -``` + # (Optional - but highly recommended) Exclude containerd from Windows Defender Scans + Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe" + ``` + + +3. 
启动 containerd: + + ```powershell + .\containerd.exe --register-service + Start-Service containerd + ``` -```powershell -# 启动 containerd -.\containerd.exe --register-service -Start-Service containerd -``` {{% /tab %}} {{< /tabs >}} -#### systemd {#containerd-systemd} + + +#### 使用 `systemd` cgroup 驱动程序 {#containerd-systemd} + 结合 `runc` 使用 `systemd` cgroup 驱动,在 `/etc/containerd/config.toml` 中设置 ``` @@ -493,6 +266,19 @@ When using kubeadm, manually configure the SystemdCgroup = true ``` + +如果您应用此更改,请确保再次重新启动 containerd: + +```shell +sudo systemctl restart containerd +``` + + 当使用 kubeadm 时,请手动配置 [kubelet 的 cgroup 驱动](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node). @@ -505,7 +291,7 @@ Use the following commands to install CRI-O on your system: {{< note >}} The CRI-O major and minor versions must match the Kubernetes major and minor versions. -For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o). +For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes). {{< /note >}} Install and configure prerequisites: @@ -536,7 +322,7 @@ sudo sysctl --system 使用以下命令在系统中安装 CRI-O: 提示:CRI-O 的主要以及次要版本必须与 Kubernetes 的主要和次要版本相匹配。 -更多信息请查阅 [CRI-O 兼容性列表](https://github.com/cri-o/cri-o). +更多信息请查阅 [CRI-O 兼容性列表](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes)。 安装以及配置的先决条件: @@ -569,31 +355,31 @@ To install CRI-O on the following operating systems, set the environment variabl to the appropriate value from the following table: | Operating system | `$OS` | -|------------------|-------------------| +| ---------------- | ----------------- | | Debian Unstable | `Debian_Unstable` | | Debian Testing | `Debian_Testing` |
    Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
    Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|-----------------|-------------------| -| Debian Unstable | `Debian_Unstable` | -| Debian Testing | `Debian_Testing` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Debian Unstable | `Debian_Unstable` | +| Debian Testing | `Debian_Testing` |
    然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`. +例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`. 你也可以安装一个特定的发行版本。 -例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`. +例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
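例如,在 Debian Testing 上安装与 Kubernetes 1.20 匹配的 CRI-O 时,可以先按上表设置这两个环境变量(以下取值仅作示意,请替换为你实际的操作系统和版本):

```shell
# 示意取值:请按上表和你的 Kubernetes 版本替换
OS=Debian_Testing
VERSION=1.20
```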
    然后执行 @@ -605,8 +391,8 @@ cat < Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
    Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|--------------|-----------------| -| Ubuntu 20.04 | `xUbuntu_20.04` | -| Ubuntu 19.10 | `xUbuntu_19.10` | -| Ubuntu 19.04 | `xUbuntu_19.04` | -| Ubuntu 18.04 | `xUbuntu_18.04` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Ubuntu 20.04 | `xUbuntu_20.04` | +| Ubuntu 19.10 | `xUbuntu_19.10` | +| Ubuntu 19.04 | `xUbuntu_19.04` | +| Ubuntu 18.04 | `xUbuntu_18.04` |
    然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`. +例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`. 你也可以安装一个特定的发行版本。 -例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`. +例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
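如果不想手动查表,也可以参考下面的示意脚本,从 `/etc/os-release` 推导 `$OS` 的取值(假设系统是上表列出的 Ubuntu 版本之一,`$VERSION` 仍需按你的 Kubernetes 版本设置):

```shell
# 示意:读取 Ubuntu 的版本号并拼出 $OS,例如 xUbuntu_20.04
. /etc/os-release
OS="xUbuntu_${VERSION_ID}"
# 按你的 Kubernetes 版本设置 CRI-O 版本
VERSION=1.20
echo "OS=${OS} VERSION=${VERSION}"
```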
    然后执行 @@ -661,8 +447,8 @@ cat < Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version. -For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`. +For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`. You can pin your installation to a specific release. -To install version 1.18.3, set `VERSION=1.18:1.18.3`. +To install version 1.20.0, set `VERSION=1.20:1.20.0`.
    Then run --> 在下列操作系统上安装 CRI-O, 使用下表中合适的值设置环境变量 `OS`: -| 操作系统 | `$OS` | -|-----------------|-------------------| -| Centos 8 | `CentOS_8` | -| Centos 8 Stream | `CentOS_8_Stream` | -| Centos 7 | `CentOS_7` | +| 操作系统 | `$OS` | +| ---------------- | ----------------- | +| Centos 8 | `CentOS_8` | +| Centos 8 Stream | `CentOS_8_Stream` | +| Centos 7 | `CentOS_7` |
    然后,将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果你要安装 CRI-O 1.18, 请设置 `VERSION=1.18`. +例如,如果你要安装 CRI-O 1.20, 请设置 `VERSION=1.20`. 你也可以安装一个特定的发行版本。 -例如要安装 1.18.3 版本,设置 `VERSION=1.18:1.18.3`. +例如要安装 1.20.0 版本,设置 `VERSION=1.20:1.20.0`.
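如果希望在 CentOS 上锁定到某个具体的发行版本,可以同时给出次要版本和完整版本号(以下取值仅作示意):

```shell
# 示意取值:1.20 是次要版本流,1.20.0 是要锁定的具体版本
OS=CentOS_8
VERSION=1.20:1.20.0
```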
    然后执行 @@ -725,7 +511,7 @@ sudo zypper install cri-o 将 `$VERSION` 设置为与你的 Kubernetes 相匹配的 CRI-O 版本。 -例如,如果要安装 CRI-O 1.18,请设置 `VERSION=1.18`。 +例如,如果要安装 CRI-O 1.20,请设置 `VERSION=1.20`。 你可以用下列命令查找可用的版本: ```shell @@ -751,7 +537,7 @@ CRI-O 不支持在 Fedora 上固定到特定的版本。 然后执行 ```shell sudo dnf module enable cri-o:$VERSION -sudo dnf install cri-o +sudo dnf install cri-o --now ``` {{% /tab %}} @@ -762,272 +548,90 @@ Start CRI-O: ```shell sudo systemctl daemon-reload -sudo systemctl start crio +sudo systemctl enable crio --no ``` -Refer to the [CRI-O installation guide](https://github.com/kubernetes-sigs/cri-o#getting-started) +Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md) for more information. --> -启动 CRI-O: +#### cgroup driver + +默认情况下,CRI-O 使用 systemd cgroup 驱动程序。切换到` +`cgroupfs` +cgroup 驱动程序,或者编辑 `/ etc / crio / crio.conf` 或放置一个插件 +在 `/etc/crio/crio.conf.d/02-cgroup-manager.conf` 中的配置,例如: -```shell -sudo systemctl daemon-reload -sudo systemctl start crio +```toml +[crio.runtime] +conmon_cgroup = "pod" +cgroup_manager = "cgroupfs" ``` - -更多信息请参阅 [CRI-O 安装指南](https://github.com/kubernetes-sigs/cri-o#getting-started)。 + +另请注意更改后的 `conmon_cgroup` ,必须将其设置为 +`pod`将 CRI-O 与 `cgroupfs` 一起使用时。通常有必要保持 +kubelet 的 cgroup 驱动程序配置(通常透过 kubeadm 完成)和CRI-O 同步中。 ### Docker - - -在你的所有节点上安装 Docker CE. - -Kubernetes 发布说明中列出了 Docker 的哪些版本与该版本的 Kubernetes 相兼容。 - -在你的操作系统上使用如下命令安装 Docker: - -{{< tabs name="tab-cri-docker-installation" >}} -{{% tab name="Ubuntu 16.04+" %}} +1. 在每个节点上,根据[安装 Docker 引擎](https://docs.docker.com/engine/install/#server) 为你的 Linux 发行版安装 Docker。 + 你可以在此文件中找到最新的经过验证的 Docker 版本[依赖关系](https://git.k8s.io/kubernetes/build/dependencies.yaml)。 +2. 配置 Docker 守护程序,尤其是使用 systemd 来管理容器的cgroup。 -```shell -# Add Docker's official GPG key: -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add --keyring /etc/apt/trusted.gpg.d/docker.gpg - -``` ---> - -```shell -# (安装 Docker CE) -## 设置仓库: -### 安装软件包以允许 apt 通过 HTTPS 使用存储库 -sudo apt-get update && sudo apt-get install -y \ - apt-transport-https ca-certificates curl software-properties-common gnupg2 -``` - -```shell -### 新增 Docker 的 官方 GPG 秘钥: -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add --keyring /etc/apt/trusted.gpg.d/docker.gpg - -``` + ```shell + sudo mkdir /etc/docker + cat <}} + + + 对于运行 Linux 内核版本 4.0 或更高版本,或使用 3.10.0-51 及更高版本的 RHEL 或 CentOS 的系统,`overlay2`是首选的存储驱动程序。 + {{< /note >}} -```shell -### 添加 Docker apt 仓库: -sudo add-apt-repository \ - "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) \ - stable" -``` +3. 重新启动 Docker 并在启动时启用: + ```shell + sudo systemctl enable docker + sudo systemctl daemon-reload + sudo systemctl restart docker + ``` -```shell -## 安装 Docker CE -sudo apt-get update && sudo apt-get install -y \ - containerd.io=1.2.13-2 \ - docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) -``` - -```shell -# 设置 Docker daemon -cat <}} -```shell -# 重启 docker. 
-sudo systemctl daemon-reload -sudo systemctl restart docker -``` -{{% /tab %}} -{{% tab name="CentOS/RHEL 7.4+" %}} +{{< /note >}} - -```shell -# (安装 Docker CE) -## 设置仓库 -### 安装所需包 -sudo yum install -y yum-utils device-mapper-persistent-data lvm2 -``` - -```shell -### 新增 Docker 仓库 -sudo yum-config-manager --add-repo \ - https://download.docker.com/linux/centos/docker-ce.repo -``` - -```shell -## 安装 Docker CE -sudo yum update -y && sudo yum install -y \ - containerd.io-1.2.13 \ - docker-ce-19.03.11 \ - docker-ce-cli-19.03.11 -``` - -```shell -## 创建 /etc/docker 目录 -sudo mkdir /etc/docker -``` - -```shell -# 设置 Docker daemon -cat < -```shell -# 重启 Docker -sudo systemctl daemon-reload -sudo systemctl restart docker -``` -{{% /tab %}} -{{% /tabs %}} - - -如果你想开机即启动 `docker` 服务,执行以下命令: - -```shell -sudo systemctl enable docker -``` - - -请参阅[官方 Docker 安装指南](https://docs.docker.com/engine/installation/) -获取更多的信息。 +有关更多信息,请参阅 + - [配置 Docker 守护程序](https://docs.docker.com/config/daemon/) + - [使用 systemd 控制 Docker](https://docs.docker.com/config/daemon/systemd/) diff --git a/content/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 6fa2bb7158..c6619d0034 100644 --- a/content/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -266,9 +266,9 @@ kubeadm 不支持将没有 `--control-plane-endpoint` 参数的单个控制平 ### 更多信息 -有关 `kubeadm init` 参数的更多信息,请参见 [kubeadm 参考指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)。 +有关 `kubeadm init` 参数的更多信息,请参见 [kubeadm 参考指南](/zh/docs/reference/setup-tools/kubeadm/)。 * 使用 [Sonobuoy](https://github.com/heptio/sonobuoy) 验证集群是否正常运行 * 有关使用kubeadm升级集群的详细信息,请参阅[升级 kubeadm 集群](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。 -* 在[kubeadm 参考文档](/zh/docs/reference/setup-tools/kubeadm/kubeadm)中了解有关高级 `kubeadm` 用法的信息 +* 在[kubeadm 参考文档](/zh/docs/reference/setup-tools/kubeadm)中了解有关高级 `kubeadm` 用法的信息 * 了解有关Kubernetes[概念](/zh/docs/concepts/)和[`kubectl`](/zh/docs/reference/kubectl/overview/)的更多信息。 * 有关Pod网络附加组件的更多列表,请参见[集群网络](/zh/docs/concepts/cluster-administration/networking/)页面。 * 请参阅[附加组件列表](/zh/docs/concepts/cluster-administration/addons/)以探索其他附加组件, diff --git a/content/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index 8eeb160775..4dd6304c6d 100644 --- a/content/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -150,7 +150,7 @@ such as systemd. 
由于硬件、操作系统、网络或者其他主机特定参数的差异。某些主机需要特定的 kubelet 配置。 以下列表提供了一些示例。 -- 由 kubelet 配置标志 `--resolv-confkubelet` 指定的 DNS 解析文件的路径在操作系统之间可能有所不同, +- 由 kubelet 配置标志 `--resolv-conf` 指定的 DNS 解析文件的路径在操作系统之间可能有所不同, 它取决于你是否使用 `systemd-resolved`。 如果此路径错误,则在其 kubelet 配置错误的节点上 DNS 解析也将失败。 diff --git a/content/zh/docs/setup/production-environment/tools/kubespray.md b/content/zh/docs/setup/production-environment/tools/kubespray.md index d436dbad6b..eabd14c7f5 100644 --- a/content/zh/docs/setup/production-environment/tools/kubespray.md +++ b/content/zh/docs/setup/production-environment/tools/kubespray.md @@ -48,10 +48,10 @@ Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单 要选择最适合你的用例的工具,请阅读[此比较](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md)以 - [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。 + [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 和 [kops](/zh/docs/setup/production-environment/tools/kops/) 。 * 进一步了解 [Service](/zh/docs/concepts/services-networking/service/) * 进一步了解 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) -* 进一步了解 [Service 和 Pods 的 DNS](/docs/concepts/services-networking/dns-pod-service/) +* 进一步了解 [Service 和 Pods 的 DNS](/zh/docs/concepts/services-networking/dns-pod-service/) diff --git a/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md b/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md index 4694ac5dbc..01fba70e92 100644 --- a/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md @@ -277,7 +277,7 @@ The following file is an Ingress resource that sends traffic to your Service via If you are running Minikube locally, you can visit hello-world.info from your browser. --> {{< note >}} - 如果你在使用本地 Minikube 环境,你可以从浏览器中访问 hellow-world.info。 + 如果你在使用本地 Minikube 环境,你可以从浏览器中访问 hello-world.info。 {{< /note >}} {{< note >}} 如果你在本地运行 Minikube 环境,你可以使用浏览器来访问 - hellow-world.info 和 hello-world.info/v2。 + hello-world.info 和 hello-world.info/v2。 {{< /note >}} ## {{% heading "whatsnext" %}} diff --git a/content/zh/docs/tasks/administer-cluster/access-cluster-api.md b/content/zh/docs/tasks/administer-cluster/access-cluster-api.md index 59b4fa758f..4f2ee3b05a 100644 --- a/content/zh/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/zh/docs/tasks/administer-cluster/access-cluster-api.md @@ -251,11 +251,12 @@ Kubernetes 官方支持 [Go](#go-client)、[Python](#python-client)、[Java](#ja #### Go 客户端 {#go-client} +* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-` See [https://github.com/kubernetes/client-go/releases](https://github.com/kubernetes/client-go/releases) to see which versions are supported. +* Write an application atop of the client-go clients. 
+--> -* 要获取库,运行下列命令:`go get k8s.io/client-go/<版本号>/kubernetes`, - 参见 [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) 查看受支持的版本。 +* 要获取库,运行下列命令:`go get k8s.io/client-go/kubernetes-`, + 参见 [https://github.com/kubernetes/client-go/releases](https://github.com/kubernetes/client-go/releases) 查看受支持的版本。 * 基于 client-go 客户端编写应用程序。 -* 要安装 [Java 客户端](https://github.com/kubernetes-client/java),只需执行: +要安装 [Java 客户端](https://github.com/kubernetes-client/java),运行: ```shell # 克隆 Java 库 @@ -522,168 +523,8 @@ exampleWithKubeConfig = do >>= print ``` - +## {{% heading "whatsnext" %}} -### 从 Pod 中访问 API - -从 Pod 内部访问 API 时,定位 API 服务器和向服务器认证身份的操作 -与上面描述的外部客户场景不同。 - - -从 Pod 使用 Kubernetes API 的最简单的方法就是使用官方的 -[客户端库](/zh/docs/reference/using-api/client-libraries/)。 -这些库可以自动发现 API 服务器并进行身份验证。 - - -#### 使用官方客户端库 - -从一个 Pod 内部连接到 Kubernetes API 的推荐方式为: - -- 对于 Go 语言客户端,使用官方的 [Go 客户端库](https://github.com/kubernetes/client-go/)。 - 函数 `rest.InClusterConfig()` 自动处理 API 主机发现和身份认证。 - 参见[这里的一个例子](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)。 - -- 对于 Python 客户端,使用官方的 [Python 客户端库](https://github.com/kubernetes-client/python/)。 - 函数 `config.load_incluster_config()` 自动处理 API 主机的发现和身份认证。 - 参见[这里的一个例子](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)。 - -- 还有一些其他可用的客户端库,请参阅[客户端库](/zh/docs/reference/using-api/client-libraries/)页面。 - -在以上场景中,客户端库都使用 Pod 的服务账号凭据来与 API 服务器安全地通信。 - - -#### 直接访问 REST API - -在运行在 Pod 中时,可以通过 `default` 命名空间中的名为 `kubernetes` 的服务访问 -Kubernetes API 服务器。也就是说,Pod 可以使用 `kubernetes.default.svc` 主机名 -来查询 API 服务器。官方客户端库自动完成这个工作。 - - -向 API 服务器进行身份认证的推荐做法是使用 -[服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/)凭据。 -默认情况下,每个 Pod 与一个服务账号关联,该服务账户的凭证(令牌)放置在此 Pod 中 -每个容器的文件系统树中的 `/var/run/secrets/kubernetes.io/serviceaccount/token` 处。 - - -如果由证书包可用,则凭证包被放入每个容器的文件系统树中的 -`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 处, -且将被用于验证 API 服务器的服务证书。 - - -最后,用于命名空间域 API 操作的默认命名空间放置在每个容器中的 -`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 文件中。 - - -#### 使用 kubectl proxy {#use-kubectl-proxy} - -如果你希望不实用官方客户端库就完成 API 查询,可以将 `kubectl proxy` 作为 -[command](/zh/docs/tasks/inject-data-application/define-command-argument-container/) -在 Pod 启动一个边车(Sidecar)容器。这样,`kubectl proxy` 自动完成对 API -的身份认证,并将其暴露到 Pod 的 `localhost` 接口,从而 Pod 中的其他容器可以 -直接使用 API。 - - -#### 不使用代理 {#without-using-a-proxy} - -通过将认证令牌直接发送到 API 服务器,也可以避免运行 kubectl proxy 命令。 -内部的证书机制能够为链接提供保护。 - -```shell -# 指向内部 API 服务器的主机名 -APISERVER=https://kubernetes.default.svc - -# 服务账号令牌的路径 -SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount - -# 读取 Pod 的名字空间 -NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) - -# 读取服务账号的持有者令牌 -TOKEN=$(cat ${SERVICEACCOUNT}/token) - -# 引用内部整数机构(CA) -CACERT=${SERVICEACCOUNT}/ca.crt - -# 使用令牌访问 API -curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api -``` - - -输出类似于: - -```json -{ - "kind": "APIVersions", - "versions": [ - "v1" - ], - "serverAddressByClientCIDRs": [ - { - "clientCIDR": "0.0.0.0/0", - "serverAddress": "10.0.1.149:443" - } - ] -} -``` - +* [从 Pod 中访问 API](/zh/docs/tasks/run-application/access-api-from-pod/) diff --git a/content/zh/docs/tasks/administer-cluster/certificates.md b/content/zh/docs/tasks/administer-cluster/certificates.md new file mode 100644 index 0000000000..a61ad671c0 --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/certificates.md @@ -0,0 +1,366 @@ +--- +title: 证书 +content_type: task +weight: 20 +--- + + + + + 
+在使用客户端证书认证的场景下,你可以通过 `easyrsa`、`openssl` 或 `cfssl` 等工具以手工方式生成证书。 + + + +### easyrsa + + +**easyrsa** 支持以手工方式为你的集群生成证书。 + + +1. 下载、解压、初始化打过补丁的 easyrsa3。 + + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki + + +1. 生成新的证书颁发机构(CA)。参数 `--batch` 用于设置自动模式; + 参数 `--req-cn` 用于设置新的根证书的通用名称(CN)。 + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass + + +1. 生成服务器证书和秘钥。 + 参数 `--subject-alt-name` 设置 API 服务器的 IP 和 DNS 名称。 + `MASTER_CLUSTER_IP` 用于 API 服务器和控制管理器,通常取 CIDR 的第一个 IP,由 `--service-cluster-ip-range` 的参数提供。 + 参数 `--days` 用于设置证书的过期时间。 + 下面的示例假定你的默认 DNS 域名为 `cluster.local`。 + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass + + +1. 拷贝文件 `pki/ca.crt`、`pki/issued/server.crt` 和 `pki/private/server.key` 到你的目录中。 +1. 在 API 服务器的启动参数中添加以下参数: + + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + + +**openssl** 支持以手工方式为你的集群生成证书。 + + +1. 生成一个 2048 位的 ca.key 文件 + + openssl genrsa -out ca.key 2048 + + +1. 在 ca.key 文件的基础上,生成 ca.crt 文件(用参数 -days 设置证书有效期) + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt + + +1. 生成一个 2048 位的 server.key 文件: + + openssl genrsa -out server.key 2048 + + +1. 创建一个用于生成证书签名请求(CSR)的配置文件。 + 保存文件(例如:`csr.conf`)前,记得用真实值替换掉尖括号中的值(例如:``)。 + 注意:`MASTER_CLUSTER_IP` 就像前一小节所述,它的值是 API 服务器的服务集群 IP。 + 下面的例子假定你的默认 DNS 域名为 `cluster.local`。 + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = + ST = + L = + O = + OU = + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names + + +1. 基于上面的配置文件生成证书签名请求: + + openssl req -new -key server.key -out server.csr -config csr.conf + + +1. 基于 ca.key、ca.key 和 server.csr 等三个文件生成服务端证书: + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf + + +1. 查看证书: + + openssl x509 -noout -text -in ./server.crt + + +最后,为 API 服务器添加相同的启动参数。 + +### cfssl + + +**cfssl** 是另一个用于生成证书的工具。 + + +1. 下载、解压并准备如下所示的命令行工具。 + 注意:你可能需要根据所用的硬件体系架构和 cfssl 版本调整示例命令。 + + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo + + +1. 创建一个目录,用它保存所生成的构件和初始化 cfssl: + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json + + +1. 
创建一个 JSON 配置文件来生成 CA 文件,例如:`ca-config.json`: + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } + + +1. 创建一个 JSON 配置文件,用于 CA 证书签名请求(CSR),例如:`ca-csr.json`。 + 确认用你需要的值替换掉尖括号中的值。 + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } + + +1. 生成 CA 秘钥文件(`ca-key.pem`)和证书文件(`ca.pem`): + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca + + +1. 创建一个 JSON 配置文件,用来为 API 服务器生成秘钥和证书,例如:`server-csr.json`。 + 确认用你需要的值替换掉尖括号中的值。`MASTER_CLUSTER_IP` 是为 API 服务器 指定的服务集群 IP,就像前面小节描述的那样。 + 以下示例假定你的默认 DSN 域名为`cluster.local`。 + + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } + + +1. 为 API 服务器生成秘钥和证书,默认会分别存储为`server-key.pem` 和 `server.pem` 两个文件。 + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## 分发自签名的 CA 证书 + + +客户端节点可能不认可自签名 CA 证书的有效性。 +对于非生产环境,或者运行在公司防火墙后的环境,你可以分发自签名的 CA 证书到所有客户节点,并刷新本地列表以使证书生效。 + +在每一个客户节点,执行以下操作: + +```bash +sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +sudo update-ca-certificates +``` + +``` +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. +``` + + +## 证书 API {#certificates-api} + + +你可以通过 `certificates.k8s.io` API 提供 x509 证书,用来做身份验证, +如[本](/zh/docs/tasks/tls/managing-tls-in-a-cluster)文档所述。 + diff --git a/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md b/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md index e2d6bd6ad9..ca4efbb7e2 100644 --- a/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md @@ -43,11 +43,11 @@ dynamic provisioning of storage. 如果是这样的话,你可以改变默认 StorageClass,或者完全禁用它以防止动态配置存储。 -简单的删除默认 StorageClass 可能行不通,因为它可能会被你集群中的扩展管理器自动重建。 +删除默认 StorageClass 可能行不通,因为它可能会被你集群中的扩展管理器自动重建。 请查阅你的安装文档中关于扩展管理器的细节,以及如何禁用单个扩展。 @@ -107,7 +107,7 @@ for details about addon manager and how to disable individual addons. 3. 标记一个 StorageClass 为默认的: 和前面的步骤类似,你需要添加/设置注解 `storageclass.kubernetes.io/is-default-class=true`。 diff --git a/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md index 330416050c..386809548b 100644 --- a/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -33,8 +33,6 @@ explains how to use `kubeadm` to migrate from `kube-dns`. 
文档[迁移到 CoreDNS](/zh/docs/tasks/administer-cluster/coredns/#migrating-to-coredns) 解释了如何使用 `kubeadm` 从 `kube-dns` 迁移到 CoreDNS。 -{{% version-check %}} - ### 创建证书签名请求 (CSR) -你可以用 `kubeadm alpha certs renew --use-api` 为 Kubernetes 证书 API 创建一个证书签名请求。 - -如果你设置例如 [cert-manager](https://github.com/jetstack/cert-manager) -等外部签名者,证书签名请求(CSRs)会被自动批准。 -否则,你必须使用 [`kubectl certificate`](/zh/docs/setup/best-practices/certificates/) -命令手动批准证书。 -以下 kubeadm 命令输出要批准的证书名称,然后阻塞等待批准发生: - -```shell -sudo kubeadm alpha certs renew apiserver --use-api & -``` - - -输出类似于以下内容: -``` -[1] 2890 -[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created -``` - - - -### 批准证书签名请求 (CSR) - -如果你设置了一个外部签名者, 证书签名请求 (CSRs) 会自动被批准。 - -否则,你必须用 [`kubectl certificate`](/zh/docs/setup/best-practices/certificates/) -命令手动批准证书,例如: - -```shell -kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 -``` - - -输出类似于以下内容: - -``` -certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved -``` - - -你可以使用 `kubectl get csr` 查看待处理证书列表。 +有关使用 Kubernetes API 创建 CSR 的信息, +请参见[创建 CertificateSigningRequest](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest)。 ### 附加信息 -- 在对 kubelet 作次版本升级时需要[腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。 +- 在对 kubelet 作次版本升版时需要[腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。 对于控制面节点,其上可能运行着 CoreDNS Pods 或者其它非常重要的负载。 - 升级后,因为容器规约的哈希值已更改,所有容器都会被重新启动。 diff --git a/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md new file mode 100644 index 0000000000..b22c7d4b67 --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md @@ -0,0 +1,157 @@ +--- +title: 从 dockershim 迁移遥测和安全代理 +content_type: task +weight: 70 +--- + + + + + +在 Kubernetes 1.20 版本中,dockershim 被弃用。 +在博文[弃用 Dockershim 常见问题](/zh/blog/2020/12/02/dockershim-faq/)中, +你大概已经了解到,大多数应用并没有直接通过运行时来托管容器。 +但是,仍然有大量的遥测和安全代理依赖 docker 来收集容器元数据、日志和指标。 +本文汇总了一些信息和链接:信息用于阐述如何探查这些依赖,链接用于解释如何迁移这些代理去使用通用的工具或其他容器运行。 + + +## 遥测和安全代理 {#telemetry-and-security-agents} + + +为了让代理运行在 Kubernetes 集群中,我们有几种办法。 +代理既可以直接在节点上运行,也可以作为守护进程运行。 + + +### 为什么遥测代理依赖于 Docker? 
{#why-do-telemetry-agents-relyon-docker} + + +因为历史原因,Kubernetes 建立在 Docker 之上。 +Kubernetes 管理网络和调度,Docker 则在具体的节点上定位并操作容器。 +所以,你可以从 Kubernetes 取得调度相关的元数据,比如 Pod 名称;从 Docker 取得容器状态信息。 +后来,人们开发了更多的运行时来管理容器。 +同时一些项目和 Kubernetes 特性也不断涌现,支持跨多个运行时收集容器状态信息。 + + +一些代理和 Docker 工具紧密绑定。此类代理可以这样运行命令,比如用 +[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/) +或 [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) +这类命令来列出容器和进程,用 +[docker logs](https://docs.docker.com/engine/reference/commandline/logs/) +订阅 Docker 的日志。 +但随着 Docker 作为容器运行时被弃用,这些命令将不再工作。 + + +### 识别依赖于 Docker 的 DaemonSet {#identify-docker-dependency} + + +如果某 Pod 想调用运行在节点上的 `dockerd`,该 Pod 必须满足以下两个条件之一: + +- 将包含 Docker 守护进程特权套接字的文件系统挂载为一个{{< glossary_tooltip text="卷" term_id="volume" >}};或 +- 直接以卷的形式挂载 Docker 守护进程特权套接字的特定路径。 + + +举例来说:在 COS 镜像中,Docker 通过 `/var/run/docker.sock` 开放其 Unix 域套接字。 +这意味着 Pod 的规约中需要包含 `hostPath` 卷以挂载 `/var/run/docker.sock`。 + + +下面是一个 shell 示例脚本,用于查找包含直接映射 Docker 套接字的挂载点的 Pod。 +你也可以删掉 grep `/var/run/docker.sock` 这一代码片段以查看其它挂载信息。 + +```bash +kubectl get pods --all-namespaces \ +-o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \ +| sort \ +| grep '/var/run/docker.sock' +``` + + +{{< note >}} +对于 Pod 来说,访问宿主机上的 Docker 还有其他方式。 +例如,可以挂载 `/var/run` 的父目录而非其完整路径 +(就像[这个例子](https://gist.github.com/itaysk/7bc3e56d69c4d72a549286d98fd557dd))。 +上述脚本只检测最常见的使用方式。 +{{< /note >}} + + +### 检测节点代理对 Docker 的依赖性 {#detecting-docker-dependency-from-node-agents} + + +在你的集群节点被定制、且在各个节点上均安装了额外的安全和遥测代理的场景下, +一定要和代理的供应商确认:该代理是否依赖于 Docker。 + + +### 遥测和安全代理的供应商 {#telemetry-and-security-agent-vendors} + + +我们通过 +[谷歌文档](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#) +提供了为各类遥测和安全代理供应商准备的持续更新的迁移指导。 +请与供应商联系,获取从 dockershim 迁移的最新说明。 diff --git a/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md index 7a93c9c932..381697d352 100644 --- a/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -256,12 +256,11 @@ If the Kubelet **does not** have `--system-reserved-cgroup` and `--kube-reserved the explicit cpuset provided by `reserved-cpus` will take precedence over the CPUs defined by `--kube-reserved` and `--system-reserved` options. --> -`reserved-cpus` 旨在为操作系统守护程序和 kubernetes 系统守护程序定义一个显式 CPU -集合。`reserved-cpus` 适用于不打算针对 cpuset 资源为操作系统守护程序和 kubernetes +`reserved-cpus` 旨在为操作系统守护程序和 kubernetes 系统守护程序保留一组明确指定编号的 +CPU。`reserved-cpus` 适用于不打算针对 cpuset 资源为操作系统守护程序和 kubernetes 系统守护程序定义独立的顶级 cgroups 的系统。 如果 Kubelet **没有** 指定参数 `--system-reserved-cgroup` 和 `--kube-reserved-cgroup`, -则 `reserved-cpus` 提供的显式 cpuset 将优先于 `--kube-reserved` 和 `--system-reserved` -选项定义的 cpuset。 +则 `reserved-cpus` 的设置将优先于 `--kube-reserved` 和 `--system-reserved` 选项。 @@ -127,7 +125,7 @@ Once it returns (without giving an error), you can power down the node If you leave the node in the cluster during the maintenance operation, you need to run --> 一旦它返回(没有报错), -你就可以下电此节点(或者等价地,如果在云平台上,删除支持该节点的虚拟机)。 +你就可以下线此节点(或者等价地,如果在云平台上,删除支持该节点的虚拟机)。 如果要在维护操作期间将节点留在集群中,则需要运行: ```shell @@ -264,7 +262,14 @@ eviction API will never return anything other than 429 or 500. For example: this can happen if ReplicaSet is creating Pods for your application but the replacement Pods do not become `Ready`. 
You can also see similar symptoms if the last Pod evicted has a very long termination grace period. +--> +## 驱逐阻塞 +在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。 +例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个 +Pod 有很长的终止宽限期,就会发生这种情况。 + + -## 驱逐阻塞 - -在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。 -例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个 -Pod 有很长的终止宽限期,就会发生这种情况。 - 在这种情况下,有两种可能的解决方案: - 中止或暂停自动操作。调查应用程序卡住的原因,并重新启动自动化。 -- 经过适当的长时间等待后, 从集群中删除 Pod 而不是使用驱逐 API。 +- 经过适当的长时间等待后,从集群中删除 Pod 而不是使用驱逐 API。 Kubernetes 并没有具体说明在这种情况下应该采取什么行为, 这应该由应用程序所有者和集群所有者紧密沟通,并达成对行动一致意见。 ## {{% heading "whatsnext" %}} - +--> * 执行[配置 PDB](/zh/docs/tasks/run-application/configure-pdb/)中的各个步骤, 保护你的应用 -* 进一步了解[节点维护](/zh/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node)。 - diff --git a/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 2039a68e19..7aaf473337 100644 --- a/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/zh/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -173,11 +173,13 @@ kubectl get secret db-user-pass -o jsonpath='{.data}' 输出类似于: ```json -{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="} ``` - -现在你可以解码 `password.txt` 的数据: + +现在你可以解码 `password` 的数据: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode diff --git a/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 6250e452a8..23577048b2 100644 --- a/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -40,26 +40,25 @@ We have multiple ways to install Kompose. Our preferred method is downloading th 我们有很多种方式安装 Kompose。首选方式是从最新的 GitHub 发布页面下载二进制文件。 - -## GitHub 发布版本 - -Kompose 通过 GitHub 发布版本,发布周期为三星期。 +Kompose 通过 GitHub 发布,发布周期为三星期。 你可以在 [GitHub 发布页面](https://github.com/kubernetes/kompose/releases) 上看到所有当前版本。 ```shell # Linux -curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-linux-amd64 -o kompose +curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose # macOS -curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-darwin-amd64 -o kompose +curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose # Windows -curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-windows-amd64.exe -o kompose.exe +curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe chmod +x kompose sudo mv ./kompose /usr/local/bin/kompose @@ -68,9 +67,10 @@ sudo mv ./kompose /usr/local/bin/kompose -或者,你可以下载 [tarball](https://github.com/kubernetes/kompose/releases)。 +或者,你可以下载 [tar 包](https://github.com/kubernetes/kompose/releases)。 -## Go +{{% /tab %}} +{{% tab name="基于源代码构建" %}} @@ -135,129 +141,139 @@ you need is an existing `docker-compose.yml` file. 再需几步,我们就把你从 Docker Compose 带到 Kubernetes。 你只需要一个现有的 `docker-compose.yml` 文件。 -1. - 进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。 +1. 
+ 进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。 - ```yaml - version: "2" + ```yaml + version: "2" - services: + services: - redis-master: - image: k8s.gcr.io/redis:e2e - ports: - - "6379" + redis-master: + image: k8s.gcr.io/redis:e2e + ports: + - "6379" - redis-slave: - image: gcr.io/google_samples/gb-redisslave:v3 - ports: - - "6379" - environment: - - GET_HOSTS_FROM=dns + redis-slave: + image: gcr.io/google_samples/gb-redisslave:v3 + ports: + - "6379" + environment: + - GET_HOSTS_FROM=dns - frontend: - image: gcr.io/google-samples/gb-frontend:v4 - ports: - - "80:80" - environment: - - GET_HOSTS_FROM=dns - labels: - kompose.service.type: LoadBalancer - ``` + frontend: + image: gcr.io/google-samples/gb-frontend:v4 + ports: + - "80:80" + environment: + - GET_HOSTS_FROM=dns + labels: + kompose.service.type: LoadBalancer + ``` -2. - 运行 `kompose up` 命令直接部署到 Kubernetes,或者跳到下一步,生成 `kubectl` 使用的文件。 + +2. 要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert` + 命令进行转换,然后运行 `kubectl create -f ` 进行创建。 - ```bash - $ kompose up - We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. - If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. + ```shell + kompose convert + ``` - INFO Successfully created Service: redis - INFO Successfully created Service: web - INFO Successfully created Deployment: redis - INFO Successfully created Deployment: web + ```none + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "frontend-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-master-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "redis-slave-service.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "frontend-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-master-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + INFO Kubernetes file "redis-slave-deployment.yaml" created + ``` - Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details. - ``` + ```bash + kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml, + ``` -3. 
- 要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert` 命令进行转换, - 然后运行 `kubectl create -f ` 进行创建。 + + 输出类似于: - ```shell - kompose convert - ``` + ```none + service/frontend created + service/redis-master created + service/redis-slave created + deployment.apps/frontend created + deployment.apps/redis-master created + deployment.apps/redis-slave created + ``` - ``` - INFO Kubernetes file "frontend-service.yaml" created - INFO Kubernetes file "redis-master-service.yaml" created - INFO Kubernetes file "redis-slave-service.yaml" created - INFO Kubernetes file "frontend-deployment.yaml" created - INFO Kubernetes file "redis-master-deployment.yaml" created - INFO Kubernetes file "redis-slave-deployment.yaml" created - ``` + + 你部署的应用在 Kubernetes 中运行起来了。 - ```shell - kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml - ``` + +3. 访问你的应用 - ``` - service/frontend created - service/redis-master created - service/redis-slave created - deployment.apps/frontend created - deployment.apps/redis-master created - deployment.apps/redis-slave created - ``` + - - 你部署的应用在 Kubernetes 中运行起来了。 + 如果你在开发过程中使用 `minikube`,请执行: -4. - 访问你的应用 + ```shell + minikube service frontend + ``` - + + 否则,我们要查看一下你的服务使用了什么 IP! - 如果你在开发过程中使用 `minikube`,请执行: + ```shell + kubectl describe svc frontend + ``` - ```shell - minikube service frontend - ``` + ```none + Name: frontend + Namespace: default + Labels: service=frontend + Selector: service=frontend + Type: LoadBalancer + IP: 10.0.0.183 + LoadBalancer Ingress: 192.0.2.89 + Port: 80 80/TCP + NodePort: 80 31144/TCP + Endpoints: 172.17.0.4:80 + Session Affinity: None + No events. + ``` - - 否则,我们要查看一下你的服务使用了什么 IP! + + 如果你使用的是云提供商,你的 IP 将在 `LoadBalancer Ingress` 字段给出。 - ```shell - kubectl describe svc frontend - ``` - - ``` - Name: frontend - Namespace: default - Labels: service=frontend - Selector: service=frontend - Type: LoadBalancer - IP: 10.0.0.183 - LoadBalancer Ingress: 192.0.2.89 - Port: 80 80/TCP - NodePort: 80 31144/TCP - Endpoints: 172.17.0.4:80 - Session Affinity: None - No events. - ``` - - - 如果你使用的是云提供商,你的 IP 将在 `LoadBalancer Ingress` 字段给出。 - - ```shell - curl http://192.0.2.89 - ``` + ```shell + curl http://192.0.2.89 + ``` @@ -284,29 +300,37 @@ you need is an existing `docker-compose.yml` file. 
- [`kompose down`](#kompose-down) - 文档 - - [构建和推送 Docker 镜像](#构建和推送-docker-镜像) + - [构建和推送 Docker 镜像](#build-and-push-docker-images) - [其他转换方式](#其他转换方式) - - [标签](#标签) - - [重启](#重启) - - [Docker Compose 版本](#docker-compose-版本) + - [标签](#labels) + - [重启](#restart) + - [Docker Compose 版本](#docker-compose-versions) Kompose 支持两种驱动:OpenShift 和 Kubernetes。 -你可以通过全局选项 `--provider` 选择驱动方式。如果没有指定,会将 Kubernetes 作为默认驱动。 +你可以通过全局选项 `--provider` 选择驱动。如果没有指定, +会将 Kubernetes 作为默认驱动。 ## `kompose convert` + Kompose 支持将 V1、V2 和 V3 版本的 Docker Compose 文件转换为 Kubernetes 和 OpenShift 资源对象。 -### Kubernetes + +### Kubernetes `kompose convert` 示例 ```shell kompose --file docker-voting.yml convert ``` -``` + +```none WARN Unsupported key networks - ignoring WARN Unsupported key build - ignoring INFO Kubernetes file "worker-svc.yaml" created @@ -325,7 +349,7 @@ INFO Kubernetes file "db-deployment.yaml" created ls ``` -``` +```none db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml ``` @@ -338,7 +362,8 @@ You can also provide multiple docker-compose files at the same time: ```shell kompose -f docker-compose.yml -f docker-guestbook.yml convert ``` -``` + +```none INFO Kubernetes file "frontend-service.yaml" created INFO Kubernetes file "mlbparks-service.yaml" created INFO Kubernetes file "mongodb-service.yaml" created @@ -368,7 +393,10 @@ When multiple docker-compose files are provided the configuration is merged. Any --> 当提供多个 docker-compose 文件时,配置将会合并。任何通用的配置都将被后续文件覆盖。 -### OpenShift + +### OpenShift `kompose convert` 示例 ```shell kompose --provider openshift --file docker-voting.yml convert @@ -403,7 +431,7 @@ kompose 还支持为服务中的构建指令创建 buildconfig。 kompose --provider openshift --file buildconfig/docker-compose.yml convert ``` -``` +```none WARN [foo] Service cannot be created because of missing port. INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source. INFO OpenShift file "foo-deploymentconfig.yaml" created @@ -424,15 +452,19 @@ imagestream 工件,以解决 Openshift 的这个问题:https://github.com/op -Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)" 应用程序部署到 Kubernetes 或 OpenShift。 +Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)" 应用程序 +部署到 Kubernetes 或 OpenShift。 -### Kubernetes + +### Kubernetes `kompose up` 示例 ```shell kompose --file ./examples/docker-guestbook.yml up ``` -``` +```none We are going to create Kubernetes deployments and services for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. @@ -468,26 +500,27 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m ``` + +{{< note >}} - -**注意**: - - 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。 - 此操作仅生成 Deployment 和 Service 对象并将其部署到 Kubernetes。 如果需要部署其他不同类型的资源,请使用 `kompose convert` 和 `kubectl create -f` 命令。 +{{< /note >}} - -### OpenShift + +### OpenShift `kompose up` 示例 ```shell kompose --file ./examples/docker-guestbook.yml --provider openshift up ``` -``` +```none We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. @@ -508,7 +541,7 @@ Your application has been deployed to OpenShift. 
You can run 'oc get dc,svc,is' oc get dc,svc,is ``` -``` +```none NAME REVISION DESIRED CURRENT TRIGGERED BY dc/frontend 0 1 0 config,image(frontend:v4) dc/redis-master 0 1 0 config,image(redis-master:e2e) @@ -523,20 +556,18 @@ is/redis-master 172.30.12.200:5000/fff/redis-master is/redis-slave 172.30.12.200:5000/fff/redis-slave v1 ``` +{{< note >}} -**注意**: - -- 你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文 (`oc login`)。 +你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文 (`oc login`)。 +{{< /note >}} ## `kompose down` - 你一旦将"复合(composed)" 应用部署到 Kubernetes,`kompose down` 命令将能帮你通过删除 Deployment 和 Service 对象来删除应用。 如果需要删除其他资源,请使用 'kubectl' 命令。 @@ -554,26 +585,27 @@ INFO Successfully deleted service: frontend INFO Successfully deleted deployment: frontend ``` +{{< note >}} +- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。 +{{< /note >}} + +## 构建和推送 Docker 镜像 {#build-and-push-docker-images} -**注意**: - -- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。 - -## 构建和推送 Docker 镜像 - -Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build` 关键字,你的镜像将会: +Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build` +关键字,你的镜像将会: - 使用文档中指定的 `image` 键自动构建 Docker 镜像 - 使用本地凭据推送到正确的 Docker 仓库 @@ -598,7 +630,7 @@ Using `kompose up` with a `build` key: kompose up ``` -``` +```none INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar' INFO Building image 'docker.io/foo/bar' from directory 'build' INFO Image 'docker.io/foo/bar' from directory 'build' built successfully @@ -621,10 +653,10 @@ In order to disable the functionality, or choose to use BuildConfig generation ( 可以通过传递 `--build (local|build-config|none)` 参数来实现。 ```shell -# Disable building/pushing Docker images +# 禁止构造和推送 Docker 镜像 kompose up --build none -# Generate Build Config artifacts for OpenShift +# 为 OpenShift 生成 Build Config 工件 kompose up --provider openshift --build build-config ``` @@ -633,7 +665,7 @@ kompose up --provider openshift --build build-config The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts. --> -## 其他转换方式 +## 其他转换方式 {#alternative-conversions} 默认的 `kompose` 转换会生成 yaml 格式的 Kubernetes [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和 @@ -646,7 +678,8 @@ The default `kompose` transformation will generate Kubernetes [Deployments](/doc ```shell kompose convert -j ``` -``` + +```none INFO Kubernetes file "redis-svc.json" created INFO Kubernetes file "web-svc.json" created INFO Kubernetes file "redis-deployment.json" created @@ -661,7 +694,8 @@ The `*-deployment.json` files contain the Deployment objects. 
```shell kompose convert --replication-controller ``` -``` + +```none INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-replicationcontroller.yaml" created @@ -671,7 +705,6 @@ INFO Kubernetes file "web-replicationcontroller.yaml" created - `*-replicationcontroller.yaml` 文件包含 Replication Controller 对象。 如果你想指定副本数(默认为 1),可以使用 `--replicas` 参数: `kompose convert --replication-controller --replicas 3` @@ -680,7 +713,7 @@ The `*-replicationcontroller.yaml` files contain the Replication Controller obje kompose convert --daemon-set ``` -``` +```none INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-daemonset.yaml" created @@ -688,17 +721,19 @@ INFO Kubernetes file "web-daemonset.yaml" created ``` -`*-daemonset.yaml` 文件包含 Daemon Set 对象。 +`*-daemonset.yaml` 文件包含 DaemonSet 对象。 -如果你想生成 [Helm](https://github.com/kubernetes/helm) 可用的 Chart,只需简单的执行下面的命令: +如果你想生成 [Helm](https://github.com/kubernetes/helm) 可用的 Chart, +只需简单的执行下面的命令: ```shell kompose convert -c ``` -``` + +```none INFO Kubernetes file "web-svc.yaml" created INFO Kubernetes file "redis-svc.yaml" created INFO Kubernetes file "web-deployment.yaml" created @@ -734,9 +769,10 @@ The chart structure is aimed at providing a skeleton for building your Helm char For example: --> -## 标签 +## 标签 {#labels} -`kompose` 支持 `docker-compose.yml` 文件中用于 Kompose 的标签,以便在转换时明确定义 Service 的行为。 +`kompose` 支持 `docker-compose.yml` 文件中用于 Kompose 的标签,以便 +在转换时明确定义 Service 的行为。 - `kompose.service.type` 定义要创建的 Service 类型。例如: @@ -761,11 +797,13 @@ For example: For example: --> - `kompose.service.expose` 定义是否允许从集群外部访问 Service。 - 如果该值被设置为 "true",提供程序将自动设置端点,对于任何其他值,该值将被设置为主机名。 + 如果该值被设置为 "true",提供程序将自动设置端点, + 对于任何其他值,该值将被设置为主机名。 如果在 Service 中定义了多个端口,则选择第一个端口作为公开端口。 - - 对于 Kubernetes 驱动程序,创建了一个 Ingress 资源,并且假定已经配置了相应的 Ingress 控制器。 - - 对于 OpenShift 驱动程序, 创建一个 route。 + - 如果使用 Kubernetes 驱动,会有一个 Ingress 资源被创建,并且假定 + 已经配置了相应的 Ingress 控制器。 + - 如果使用 OpenShift 驱动, 则会有一个 route 被创建。 例如: @@ -793,19 +831,18 @@ The currently supported options are: | kompose.service.type | nodeport / clusterip / loadbalancer | | kompose.service.expose| true / hostname | --> - 当前支持的选项有: -| 键 | 值 | -|----------------------|-------------------------------------| -| kompose.service.type | nodeport / clusterip / loadbalancer | -| kompose.service.expose| true / hostname | +| 键 | 值 | +|------------------------|-------------------------------------| +| kompose.service.type | nodeport / clusterip / loadbalancer | +| kompose.service.expose | true / hostname | +{{< note >}} -{{< note >}} -`kompose.service.type` 标签应该只用`ports`来定义,否则 `kompose` 会失败。 +`kompose.service.type` 标签应该只用 `ports` 来定义,否则 `kompose` 会失败。 {{< /note >}} -## 重启 +## 重启 {#restart} -如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart` 结构来定义它。 -请参考下表了解 `restart` 的不同参数。 +如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart` +结构来指定这一行为。请参考下表了解 `restart` 的不同参数。 -| `docker-compose` `restart` | 创建的对象 | Pod `restartPolicy` | +| `docker-compose` `restart` | 创建的对象 | Pod `restartPolicy` | |----------------------------|-------------------|---------------------| -| `""` | 控制器对象 | `Always` | -| `always` | 控制器对象 | `Always` | +| `""` | 控制器对象 | `Always` | +| `always` | 控制器对象 | `Always` | | `on-failure` | Pod | `OnFailure` | | `no` | Pod | `Never` | @@ -843,9 +880,9 @@ The controller object could be `deployment` or `replicationcontroller`, etc. 
{{< /note >}} -例如,`pival` Service 将在这里变成 Pod。这个容器的计算值为 `pi`。 +例如,`pival` Service 将在这里变成 Pod。这个容器计算 `pi` 的取值。 ```yaml version: '2' @@ -858,23 +895,22 @@ services: ``` - ### 关于 Deployment Config 的提醒 -如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或 DeploymentConfig (OpenShift) -的策略会从 "RollingUpdate" (默认) 变为 "Recreate"。 +如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或 +DeploymentConfig (OpenShift) 策略会从 "RollingUpdate" (默认) 变为 "Recreate"。 这样做的目的是为了避免服务的多个实例同时访问卷。 -如果 Docker Compose 文件中的服务名包含 `_` (例如 `web_service`), -那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。 +如果 Docker Compose 文件中的服务名包含 `_`(例如 `web_service`), +那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。 Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`。 请注意,更改服务名称可能会破坏一些 `docker-compose` 文件。 @@ -883,14 +919,15 @@ Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`。 ## Docker Compose Versions Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on versions 2.1 and 3.2 due to their experimental nature. + A full list on compatibility between all three versions is listed in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md) including a list of all incompatible Docker Compose keys. --> -## Docker Compose 版本 +## Docker Compose 版本 {#docker-compose-versions} -Kompose 支持的 Docker Compose 版本包括:1、2 和 3。有限支持 2.1 和 3.2 版本,因为它们还在实验阶段。 +Kompose 支持的 Docker Compose 版本包括:1、2 和 3。 +对 2.1 和 3.2 版本的支持还有限,因为它们还在实验阶段。 所有三个版本的兼容性列表请查看我们的 [转换文档](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md), 文档中列出了所有不兼容的 Docker Compose 关键字。 - diff --git a/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md b/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md deleted file mode 100644 index 04a20a7174..0000000000 --- a/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -content_type: concept -title: StackDriver 中的事件 ---- - - - - - - - -Kubernetes 事件是一种对象,它为用户提供了洞察集群内发生的事情的能力, -例如调度程序做出了什么决定,或者为什么某些 Pod 被逐出节点。 -你可以在[应用程序自检和调试](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/) -中阅读有关使用事件调试应用程序的更多信息。 - - -因为事件是 API 对象,所以它们存储在主控节点上的 API 服务器中。 -为了避免主节点磁盘空间被填满,将强制执行保留策略:事件在最后一次发生的一小时后将会被删除。 -为了提供更长的历史记录和聚合能力,应该安装第三方解决方案来捕获事件。 - - -本文描述了一个将 Kubernetes 事件导出为 Stackdriver Logging 的解决方案,在这里可以对它们进行处理和分析。 - - -{{< note >}} -不能保证集群中发生的所有事件都将导出到 Stackdriver。 -事件不能导出的一种可能情况是事件导出器没有运行(例如,在重新启动或升级期间)。 -在大多数情况下,可以将事件用于设置 -[metrics](https://cloud.google.com/logging/docs/view/logs_based_metrics) 和 -[alerts](https://cloud.google.com/logging/docs/view/logs_based_metrics#creating_an_alerting_policy) -等目的,但你应该注意其潜在的不准确性。 -{{< /note >}} - - - - -## 部署 {#deployment} - -### Google Kubernetes Engine - - - -在 Google Kubernetes Engine 中,如果启用了云日志,那么事件导出器默认部署在主节点运行版本为 1.7 及更高版本的集群中。 -为了防止干扰你的工作负载,事件导出器没有设置资源,并且处于尽力而为的 QoS 类型中,这意味着它将在资源匮乏的情况下第一个被杀死。 -如果要导出事件,请确保有足够的资源给事件导出器 Pod 使用。 -这可能会因为工作负载的不同而有所不同,但平均而言,需要大约 100MB 的内存和 100m 的 CPU。 - - -### 部署到现有集群 - -使用下面的命令将事件导出器部署到你的集群: - -```shell -kubectl create -f https://k8s.io/examples/debug/event-exporter.yaml -``` - - - -由于事件导出器访问 Kubernetes API,因此它需要权限才能访问。 -以下的部署配置为使用 RBAC 授权。 -它设置服务帐户和集群角色绑定,以允许事件导出器读取事件。 -为了确保事件导出器 Pod 不会从节点中退出,你可以另外设置资源请求。 -如前所述,100MB 内存和 100m CPU 应该就足够了。 - -{{< codenew file="debug/event-exporter.yaml" >}} - - -## 用户指南 {#user-guide} - -事件在 Stackdriver Logging 中被导出到 `GKE Cluster` 资源。 -你可以通过从可用资源的下拉菜单中选择适当的选项来找到它们: - - -Stackdriver 日志接口中事件的位置 - - -你可以使用 Stackdriver Logging 的 -[过滤机制](https://cloud.google.com/logging/docs/view/advanced_filters) 
-基于事件对象字段进行过滤。 -例如,下面的查询将显示调度程序中有关 Deployment `nginx-deployment` 中的 Pod 的事件: - -``` -resource.type="gke_cluster" -jsonPayload.kind="Event" -jsonPayload.source.component="default-scheduler" -jsonPayload.involvedObject.name:"nginx-deployment" -``` - -{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="在 Stackdriver 接口中过滤的事件" width="500" >}} - - diff --git a/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md deleted file mode 100644 index 2dbabad039..0000000000 --- a/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -content_type: concept -title: 使用 ElasticSearch 和 Kibana 进行日志管理 ---- - - - - - - -在 Google Compute Engine (GCE) 平台上,默认的日志管理支持目标是 -[Stackdriver Logging](https://cloud.google.com/logging/), -在[使用 Stackdriver Logging 管理日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/) -中详细描述了这一点。 - - -本文介绍了如何设置一个集群,将日志导入 -[Elasticsearch](https://www.elastic.co/products/elasticsearch),并使用 -[Kibana](https://www.elastic.co/products/kibana) 查看日志,作为在 GCE 上 -运行应用时使用 Stackdriver Logging 管理日志的替代方案。 - - -{{< note >}} -你不能在 Google Kubernetes Engine 平台运行的 Kubernetes 集群上自动部署 -Elasticsearch 和 Kibana。你必须手动部署它们。 -{{< /note >}} - - - - -要使用 Elasticsearch 和 Kibana 处理集群日志,你应该在使用 kube-up.sh -脚本创建集群时设置下面所示的环境变量: - -```shell -KUBE_LOGGING_DESTINATION=elasticsearch -``` - - -你还应该确保设置了 `KUBE_ENABLE_NODE_LOGGING=true` (这是 GCE 平台的默认设置)。 - - -现在,当你创建集群时,将有一条消息将指示每个节点上运行的 fluentd 日志收集守护进程 -以 ElasticSearch 为日志输出目标: - -```shell -cluster/kube-up.sh -``` - -``` -... -Project: kubernetes-satnam -Zone: us-central1-b -... calling kube-up -Project: kubernetes-satnam -Zone: us-central1-b -+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel -+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d) -+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0) -Looking for already existing resources -Starting master and configuring firewalls -Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd]. -NAME ZONE SIZE_GB TYPE STATUS -kubernetes-master-pd us-central1-b 20 pd-ssd READY -Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip]. 
-+++ Logging using Fluentd to elasticsearch -``` - - -每个节点的 Fluentd Pod、Elasticsearch Pod 和 Kibana Pod 都应该在集群启动后不久运行在 -kube-system 名字空间中。 - -```shell -kubectl get pods --namespace=kube-system -``` - -``` -NAME READY STATUS RESTARTS AGE -elasticsearch-logging-v1-78nog 1/1 Running 0 2h -elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h -fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h -kibana-logging-v1-bhpo8 1/1 Running 0 2h -kube-dns-v3-7r1l9 3/3 Running 0 2h -monitoring-heapster-v4-yl332 1/1 Running 1 2h -monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h -``` - - -`fluentd-elasticsearch` Pod 从每个节点收集日志并将其发送到 `elasticsearch-logging` Pod, -该 Pod 是名为 `elasticsearch-logging` 的 -[服务](/zh/docs/concepts/services-networking/service/)的一部分。 -这些 ElasticSearch pod 存储日志,并通过 REST API 将其公开。 -`kibana-logging` pod 提供了一个用于读取 ElasticSearch 中存储的日志的 Web UI, -它是名为 `kibana-logging` 的服务的一部分。 - - - -Elasticsearch 和 Kibana 服务都位于 `kube-system` 名字空间中,并且没有通过 -可公开访问的 IP 地址直接暴露。要访问它们,请参照 -[访问集群中运行的服务](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster) -的说明进行操作。 - - -如果你想在浏览器中访问 `elasticsearch-logging` 服务,你将看到类似下面的状态页面: - -![Elasticsearch Status](/images/docs/es-browser.png) - - -现在你可以直接在浏览器中输入 Elasticsearch 查询,如果你愿意的话。 -请参考 [Elasticsearch 的文档](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html) -以了解这样做的更多细节。 - - - -或者,你可以使用 Kibana 查看集群的日志(再次使用 -[访问集群中运行的服务的说明](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster))。 -第一次访问 Kibana URL 时,将显示一个页面,要求你配置所接收日志的视图。 -选择时间序列值的选项,然后选择 `@timestamp`。 -在下面的页面中选择 `Discover` 选项卡,然后你应该能够看到所摄取的日志。 -你可以将刷新间隔设置为 5 秒,以便定期刷新日志。 - - - -以下是从 Kibana 查看器中摄取日志的典型视图: - -![Kibana logs](/images/docs/kibana-logs.png) - -## {{% heading "whatsnext" %}} - - -Kibana 为浏览你的日志提供了各种强大的选项!有关如何深入研究它的一些想法, -请查看 [Kibana 的文档](https://www.elastic.co/guide/en/kibana/current/discover.html)。 - diff --git a/content/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index c428192679..daa4f8829e 100644 --- a/content/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -1788,8 +1788,6 @@ resources that have the scale subresource enabled. 
--> ### 分类 {#categories} -{{< feature-state state="beta" for_k8s_version="v1.10" >}} - [Kustomize](https://github.com/kubernetes-sigs/kustomize) 是一个独立的工具,用来通过 -[kustomization 文件](https://kubernetes-sigs.github.io/kustomize/api-reference/glossary/#kustomization) +[kustomization 文件](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization) 定制 Kubernetes 对象。 + + + + +本指南演示了如何从 Pod 中访问 Kubernetes API。 + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + + +### 从 Pod 中访问 API {#accessing-the-api-from-within-a-pod} + +从 Pod 内部访问 API 时,定位 API 服务器和向服务器认证身份的操作 +与外部客户端场景不同。 + + +从 Pod 使用 Kubernetes API 的最简单的方法就是使用官方的 +[客户端库](/zh/docs/reference/using-api/client-libraries/)。 +这些库可以自动发现 API 服务器并进行身份验证。 + + +#### 使用官方客户端库 {#using-official-client-libraries} + +从一个 Pod 内部连接到 Kubernetes API 的推荐方式为: + +- 对于 Go 语言客户端,使用官方的 [Go 客户端库](https://github.com/kubernetes/client-go/)。 + 函数 `rest.InClusterConfig()` 自动处理 API 主机发现和身份认证。 + 参见[这里的一个例子](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)。 + +- 对于 Python 客户端,使用官方的 [Python 客户端库](https://github.com/kubernetes-client/python/)。 + 函数 `config.load_incluster_config()` 自动处理 API 主机的发现和身份认证。 + 参见[这里的一个例子](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py)。 + +- 还有一些其他可用的客户端库,请参阅[客户端库](/zh/docs/reference/using-api/client-libraries/)页面。 + +在以上场景中,客户端库都使用 Pod 的服务账号凭据来与 API 服务器安全地通信。 + + +#### 直接访问 REST API {#directly-accessing-the-rest-api} + +在运行在 Pod 中时,可以通过 `default` 命名空间中的名为 `kubernetes` 的服务访问 +Kubernetes API 服务器。也就是说,Pod 可以使用 `kubernetes.default.svc` 主机名 +来查询 API 服务器。官方客户端库自动完成这个工作。 + + +向 API 服务器进行身份认证的推荐做法是使用 +[服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/)凭据。 +默认情况下,每个 Pod 与一个服务账号关联,该服务账户的凭证(令牌)放置在此 Pod 中 +每个容器的文件系统树中的 `/var/run/secrets/kubernetes.io/serviceaccount/token` 处。 + + +如果证书包可用,则凭证包被放入每个容器的文件系统树中的 +`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` 处, +且将被用于验证 API 服务器的服务证书。 + + +最后,用于命名空间域 API 操作的默认命名空间放置在每个容器中的 +`/var/run/secrets/kubernetes.io/serviceaccount/namespace` 文件中。 + + +#### 使用 kubectl proxy {#use-kubectl-proxy} + +如果你希望不使用官方客户端库就完成 API 查询,可以将 `kubectl proxy` 作为 +[command](/zh/docs/tasks/inject-data-application/define-command-argument-container/) +在 Pod 中启动一个边车(Sidecar)容器。这样,`kubectl proxy` 自动完成对 API +的身份认证,并将其暴露到 Pod 的 `localhost` 接口,从而 Pod 中的其他容器可以 +直接使用 API。 + + +### 不使用代理 {#without-using-a-proxy} + +通过将认证令牌直接发送到 API 服务器,也可以避免运行 kubectl proxy 命令。 +内部的证书机制能够为链接提供保护。 + +```shell +# 指向内部 API 服务器的主机名 +APISERVER=https://kubernetes.default.svc + +# 服务账号令牌的路径 +SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount + +# 读取 Pod 的名字空间 +NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) + +# 读取服务账号的持有者令牌 +TOKEN=$(cat ${SERVICEACCOUNT}/token) + +# 引用内部证书机构(CA) +CACERT=${SERVICEACCOUNT}/ca.crt + +# 使用令牌访问 API +curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api +``` + + +输出类似于: + +```json +{ + "kind": "APIVersions", + "versions": [ + "v1" + ], + "serverAddressByClientCIDRs": [ + { + "clientCIDR": "0.0.0.0/0", + "serverAddress": "10.0.1.149:443" + } + ] +} +``` diff --git a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md index fa96247305..4d4c8416c2 100644 --- a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -434,6 +434,127 @@ usual. 
如果延迟(冷却)时间设置的太短,那么副本数量有可能跟以前一样出现抖动。 {{< /note >}} + +## 对资源指标的支持 {#support-for-resource-metrics} + +HPA 的任何目标资源都可以基于其中的 Pods 的资源用量来实现扩缩。 +在定义 Pod 规约时,类似 `cpu` 和 `memory` 这类资源请求必须被设定。 +这些设定值被用来确定资源利用量并被 HPA 控制器用来对目标资源完成扩缩操作。 +要使用基于资源利用率的扩缩,可以像下面这样指定一个指标源: + +```yaml +type: Resource +resource: + name: cpu + target: + type: Utilization + averageUtilization: 60 +``` + + +基于这一指标设定,HPA 控制器会维持扩缩目标中的 Pods 的平均资源利用率在 60%。 +利用率是 Pod 的当前资源用量与其请求值之间的比值。关于如何计算利用率以及如何计算平均值 +的细节可参考[算法](#algorithm-details)小节。 + +{{< note >}} + +由于所有的容器的资源用量都会被累加起来,Pod 的总体资源用量值可能不会精确体现 +各个容器的资源用量。这一现象也会导致一些问题,例如某个容器运行时的资源用量非常 +高,但因为 Pod 层面的资源用量总值让人在可接受的约束范围内,HPA 不会执行扩大 +目标对象规模的操作。 +{{< /note >}} + + +### 容器资源指标 {#container-resource-metrics} + +{{< feature-state for_k8s_version="v1.20" state="alpha" >}} + + +`HorizontalPodAutoscaler` 也支持容器指标源,这时 HPA 可以跟踪记录一组 Pods 中各个容器的 +资源用量,进而触发扩缩目标对象的操作。 +容器资源指标的支持使得你可以为特定 Pod 中最重要的容器配置规模缩放阈值。 +例如,如果你有一个 Web 应用和一个执行日志操作的边车容器,你可以基于 Web 应用的 +资源用量来执行扩缩,忽略边车容器的存在及其资源用量。 + + +如果你更改缩放目标对象,令其使用新的、包含一组不同的容器的 Pod 规约,你就需要 +修改 HPA 的规约才能基于新添加的容器来执行规模扩缩操作。 +如果指标源中指定的容器不存在或者仅存在于部分 Pods 中,那么这些 Pods 会被忽略, +HPA 会重新计算资源用量值。参阅[算法](#algorithm-details)小节进一步了解计算细节。 +要使用容器资源用量来完成自动扩缩,可以像下面这样定义指标源: + +```yaml +type: ContainerResource +containerResource: + name: cpu + container: application + target: + type: Utilization + averageUtilization: 60 +``` + + +在上面的例子中,HPA 控制器会对目标对象执行扩缩操作以确保所有 Pods 中 +`application` 容器的平均 CPU 用量为 60%。 + +{{< note >}} + +如果你要更改 HorizontalPodAutoscaler 所跟踪记录的容器的名称,你可以按一定顺序 +来执行这一更改,确保在应用更改的过程中用来判定扩缩行为的容器可用。 +在更新定义容器的资源(如 Deployment)之前,你需要更新相关的 HPA,使之能够同时 +跟踪记录新的和老的容器名称。这样,HPA 就能够在整个更新过程中继续计算并提供扩缩操作建议。 + + +一旦你已经将容器名称变更这一操作应用到整个负载对象至上,就可以从 HPA +的规约中去掉老的容器名称,完成清理操作。 +{{< /note >}} + * 本教程假定你熟悉 [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) @@ -63,6 +64,7 @@ on general patterns for running stateful applications in Kubernetes. [服务](/zh/docs/concepts/services-networking/service/) 与 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/). * 熟悉 MySQL 会有所帮助,但是本教程旨在介绍对其他系统应该有用的常规模式。 +* 您正在使用默认命名空间或不包含任何冲突对象的另一个命名空间。 ## {{% heading "objectives" %}} @@ -280,21 +282,20 @@ properties. The script in the `init-mysql` container also applies either `primary.cnf` or `replica.cnf` from the ConfigMap by copying the contents into `conf.d`. Because the example topology consists of a single primary MySQL server and any number of -replicas, the script simply assigns ordinal `0` to be the primary server, and everyone +replicas, the script assigns ordinal `0` to be the primary server, and everyone else to be replicas. Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/), +[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees), this ensures the primary MySQL server is Ready before creating replicas, so they can begin replicating. 
--> -通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的 -`primary.cnf` 或 `replica.cnf`。 -由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成,因此脚本仅将序数 -`0` 指定为主节点,而将其他所有节点指定为副本节点。 +通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的 `primary.cnf` 或 `replica.cnf`。 +由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成, +因此脚本仅将序数 `0` 指定为主节点,而将其他所有节点指定为副本节点。 与 StatefulSet 控制器的 -[部署顺序保证](/zh/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/) +[部署顺序保证](/zh/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) 相结合, 可以确保 MySQL 主服务器在创建副本服务器之前已准备就绪,以便它们可以开始复制。 diff --git a/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md index ecc001e62d..66f0dd1e1e 100644 --- a/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -39,7 +39,7 @@ Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes clust * 如果你正在使用 `hack/local-up-cluster.sh`,请确保设置了 `KUBE_ENABLE_CLUSTER_DNS` 环境变量,然后运行安装脚本。 * [安装和设置 v1.7 或更高版本的 kubectl](/zh/docs/tasks/tools/install-kubectl/),确保将其配置为连接到 Kubernetes 集群。 * 安装 v2.7.0 或更高版本的 [Helm](https://helm.sh/)。 - * 遵照 [Helm 安装说明](https://github.com/kubernetes/helm/blob/master/docs/install.md)。 + * 遵照 [Helm 安装说明](https://helm.sh/docs/intro/install/)。 * 如果已经安装了适当版本的 Helm,请执行 `helm init` 来安装 Helm 的服务器端组件 Tiller。 diff --git a/content/zh/docs/tasks/tools/_index.md b/content/zh/docs/tasks/tools/_index.md index 244191cc89..028f22fb38 100644 --- a/content/zh/docs/tasks/tools/_index.md +++ b/content/zh/docs/tasks/tools/_index.md @@ -15,34 +15,34 @@ no_list: true ## kubectl -Kubernetes 命令行工具,`kubectl`,使得你可以对 Kubernetes 集群运行命令。 -你可以使用 `kubectl` 来部署应用、监测和管理集群资源以及查看日志。 +Kubernetes 命令行工具,[kubectl](/docs/reference/kubectl/kubectl/),使得你可以对 Kubernetes 集群运行命令。 +你可以使用 kubectl 来部署应用、监测和管理集群资源以及查看日志。 -关于如何下载和安装 `kubectl` 并配置其访问你的集群,可参阅 -[安装和配置 `kubectl`](/zh/docs/tasks/tools/install-kubectl/)。 +有关更多信息,包括 kubectl 操作的完整列表,请参见[`kubectl` +参考文件](/zh/docs/reference/kubectl/)。 - -查看 kubectl 安装和配置指南 - +kubectl 可安装在各种 Linux 平台、 macOS 和 Windows 上。 +在下面找到你喜欢的操作系统。 -你也可以阅读 [`kubectl` 参考文档](/zh/docs/reference/kubectl/). +- [在 Linux 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-linux) +- [在 macOS 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-macos) +- [在 Windows 上安装 kubectl](/zh/docs/tasks/tools/install-kubectl-windows) + + +kubectl 可以作为 Google Cloud SDK 的一部分被安装。 + + +1. 安装 [Google Cloud SDK](https://cloud.google.com/sdk/)。 +1. 运行安装 `kubectl` 的命令: + + ```shell + gcloud components install kubectl + ``` + + +1. 验证一下,确保安装的是最新的版本: + + ```shell + kubectl version --client + ``` \ No newline at end of file diff --git a/content/zh/docs/tasks/tools/included/kubectl-whats-next.md b/content/zh/docs/tasks/tools/included/kubectl-whats-next.md new file mode 100644 index 0000000000..3990922e0e --- /dev/null +++ b/content/zh/docs/tasks/tools/included/kubectl-whats-next.md @@ -0,0 +1,27 @@ +--- +title: "后续内容" +description: "安装 kubectl 之后,还可以做些什么?" +headless: true +--- + + + +* [安装 Minikube](https://minikube.sigs.k8s.io/docs/start/) +* 有关创建集群的更多信息,请参阅[入门指南](/zh/docs/setup/). +* [学习如何启动并对外公开你的应用程序。](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/) +* 如果你需要访问其他人创建的集群,请参阅 + [共享集群接入文档](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). 
+* 阅读 [kubectl 参考文档](/zh/docs/reference/kubectl/kubectl/) diff --git a/content/zh/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md new file mode 100644 index 0000000000..d92055f6a3 --- /dev/null +++ b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -0,0 +1,173 @@ +--- +title: "macOS 系统上的 bash 自动补全" +description: "在 macOS 上实现 Bash 自动补全的一些可选配置。" +headless: true +--- + + + +### 简介 + + +kubectl 的 Bash 补全脚本可以通过 `kubectl completion bash` 命令生成。 +在你的 shell 中导入(Sourcing)这个脚本即可启用补全功能。 + +此外,kubectl 补全脚本依赖于工具 [**bash-completion**](https://github.com/scop/bash-completion), +所以你必须先安装它。 + +{{< warning>}} + +bash-completion 有两个版本:v1 和 v2。v1 对应 Bash3.2(也是 macOS 的默认安装版本),v2 对应 Bash 4.1+。 +kubectl 的补全脚本**无法适配** bash-completion v1 和 Bash 3.2。 +必须为它配备 **bash-completion v2** 和 **Bash 4.1+**。 +有鉴于此,为了在 macOS 上使用 kubectl 补全功能,你必须要安装和使用 Bash 4.1+ +([*说明*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba))。 +后续说明假定你用的是 Bash 4.1+(也就是 Bash 4.1 或更新的版本) +{{< /warning >}} + + +### 升级 Bash + + +后续说明假定你已使用 Bash 4.1+。你可以运行以下命令检查 Bash 版本: + +```bash +echo $BASH_VERSION +``` + + +如果版本太旧,可以用 Homebrew 安装/升级: + +```bash +brew install bash +``` + + +重新加载 shell,并验证所需的版本已经生效: + +```bash +echo $BASH_VERSION $SHELL +``` + + +Homebrew 通常把它安装为 `/usr/local/bin/bash`。 + + +### 安装 bash-completion + + +{{< note >}} + +如前所述,本说明假定你使用的 Bash 版本为 4.1+,这意味着你要安装 bash-completion v2 +(不同于 Bash 3.2 和 bash-completion v1,kubectl 的补全功能在该场景下无法工作)。 +{{< /note >}} + + +你可以用命令 `type _init_completion` 测试 bash-completion v2 是否已经安装。 +如未安装,用 Homebrew 来安装它: + +```bash +brew install bash-completion@2 +``` + + +如命令的输出信息所显示的,将如下内容添加到文件 `~/.bash_profile` 中: + +```bash +export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" +[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . 
"/usr/local/etc/profile.d/bash_completion.sh" +``` + + +重新加载 shell,并用命令 `type _init_completion` 验证 bash-completion v2 已经恰当的安装。 + + +### 启用 kubectl 自动补全功能 + + +你现在需要确保在所有的 shell 环境中均已导入(sourced) kubectl 的补全脚本, +有若干种方法可以实现这一点: + +- 在文件 `~/.bash_profile` 中导入(Source)补全脚本: + + ```bash + echo 'source <(kubectl completion bash)' >>~/.bash_profile + ``` + + +- 将补全脚本添加到目录 `/usr/local/etc/bash_completion.d` 中: + + ```bash + kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl + ``` + + +- 如果你为 kubectl 定义了别名,则可以扩展 shell 补全来兼容该别名: + + ```bash + echo 'alias k=kubectl' >>~/.bash_profile + echo 'complete -F __start_kubectl k' >>~/.bash_profile + ``` + + +- 如果你是用 Homebrew 安装的 kubectl([如上所述](#install-with-homebrew-on-macos)), + 那么 kubectl 补全脚本应该已经安装到目录 `/usr/local/etc/bash_completion.d/kubectl` 中了。 + 这种情况下,你什么都不需要做。 + + {{< note >}} + + 用 Hommbrew 安装的 bash-completion v2 会初始化 目录 `BASH_COMPLETION_COMPAT_DIR` 中的所有文件,这就是后两种方法能正常工作的原因。 + {{< /note >}} + + +总之,重新加载 shell 之后,kubectl 补全功能将立即生效。 diff --git a/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md new file mode 100644 index 0000000000..0f32593d55 --- /dev/null +++ b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md @@ -0,0 +1,50 @@ +--- +title: "zsh 自动补全" +description: "zsh 自动补全的一些可选配置" +headless: true +--- + + + +kubectl 通过命令 `kubectl completion zsh` 生成 Zsh 自动补全脚本。 +在 shell 中导入(Sourcing)该自动补全脚本,将启动 kubectl 自动补全功能。 + +为了在所有的 shell 会话中实现此功能,请将下面内容加入到文件 `~/.zshrc` 中。 + +```zsh +source <(kubectl completion zsh) +``` + + +如果你为 kubectl 定义了别名,可以扩展脚本补全,以兼容该别名。 + +```zsh +echo 'alias k=kubectl' >>~/.zshrc +echo 'complete -F __start_kubectl k' >>~/.zshrc +``` + + +重新加载 shell 后,kubectl 自动补全功能将立即生效。 + +如果你收到 `complete:13: command not found: compdef` 这样的错误提示,那请将下面内容添加到 `~/.zshrc` 文件的开头: + +```zsh +autoload -Uz compinit +compinit +``` diff --git a/content/zh/docs/tasks/tools/included/verify-kubectl.md b/content/zh/docs/tasks/tools/included/verify-kubectl.md new file mode 100644 index 0000000000..a587e26cd0 --- /dev/null +++ b/content/zh/docs/tasks/tools/included/verify-kubectl.md @@ -0,0 +1,62 @@ +--- +title: "验证 kubectl 的安装效果" +description: "如何验证 kubectl。" +headless: true +--- + + + +为了让 kubectl 能发现并访问 Kubernetes 集群,你需要一个 +[kubeconfig 文件](/docs/zh/concepts/configuration/organize-cluster-access-kubeconfig/), +该文件在 +[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) +创建集群时,或成功部署一个 Miniube 集群时,均会自动生成。 +通常,kubectl 的配置信息存放于文件 `~/.kube/config` 中。 + +通过获取集群状态的方法,检查是否已恰当的配置了 kubectl: + +```shell +kubectl cluster-info +``` + + +如果返回一个 URL,则意味着 kubectl 成功的访问到了你的集群。 + +如果你看到如下所示的消息,则代表 kubectl 配置出了问题,或无法连接到 Kubernetes 集群。 + +``` +The connection to the server was refused - did you specify the right host or port? +(访问 被拒绝 - 你指定的主机和端口是否有误?) 
+``` + + +例如,如果你想在自己的笔记本上(本地)运行 Kubernetes 集群,你需要先安装一个 Minikube 这样的工具,然后再重新运行上面的命令。 + +如果命令 `kubectl cluster-info` 返回了 url,但你还不能访问集群,那可以用以下命令来检查配置是否妥当: + +```shell +kubectl cluster-info dump +``` \ No newline at end of file diff --git a/content/zh/docs/tasks/tools/install-kubectl-macos.md b/content/zh/docs/tasks/tools/install-kubectl-macos.md new file mode 100644 index 0000000000..bd5e670e86 --- /dev/null +++ b/content/zh/docs/tasks/tools/install-kubectl-macos.md @@ -0,0 +1,263 @@ +--- +title: 在 macOS 系统上安装和设置 kubectl +content_type: task +weight: 10 +card: + name: tasks + weight: 20 + title: 在 macOS 系统上安装 kubectl +--- + + +## {{% heading "prerequisites" %}} + + +kubectl 版本和集群之间的差异必须在一个小版本号之内。 +例如:v1.2 版本的客户端只能与 v1.1、v1.2 和 v1.3 版本的集群一起工作。 +用最新版本的 kubectl 有助于避免不可预见的问题。 + + +## 在 macOS 系统上安装 kubectl {#install-kubectl-on-macos} + + +在 macOS 系统上安装 kubectl 有如下方法: + +- [{{% heading "prerequisites" %}}](#{{% heading "prerequisites" %}}) +- [在 macOS 系统上安装 kubectl](#install-kubectl-on-macos) + - [用 curl 在 macOS 系统上安装 kubectl](#install-kubectl-binary-with-curl-on-macos) + - [用 Homebrew 在 macOS 系统上安装](#install-with-homebrew-on-macos) + - [用 Macports 在 macOS 上安装](#install-with-macports-on-macos) + - [作为谷歌云 SDK 的一部分,在 macOS 上安装](#install-on-macos-as-part-of-the-google-cloud-sdk) +- [验证 kubectl 配置](#verify-kubectl-configuration) +- [可选的 kubectl 配置](#optional-kubectl-configurations) + - [启用 shell 自动补全功能](#enable-shell-autocompletion) +- [{{% heading "whatsnext" %}}](#{{% heading "whatsnext" %}}) + + +### 用 curl 在 macOS 系统上安装 kubectl {#install-kubectl-binary-with-curl-on-macos} + + +1. 下载最新的发行版: + + ```bash + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" + ``` + + {{< note >}} + + 如果需要下载某个指定的版本,用该指定版本号替换掉命令的这个部分:`$(curl -L -s https://dl.k8s.io/release/stable.txt)`。 + 例如:要在 macOS 系统中下载 {{< param "fullversion" >}} 版本,则输入: + + ```bash + curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl + ``` + + {{< /note >}} + + +1. 验证可执行文件(可选操作) + + 下载 kubectl 的校验和文件: + + ```bash + curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256" + ``` + + + 根据校验和文件,验证 kubectl: + + ```bash + echo "$( + 验证通过时,输出如下: + + ```console + kubectl: OK + ``` + + + 验证失败时,`shasum` 将以非零值退出,并打印如下输出: + + ``` + kubectl: FAILED + shasum: WARNING: 1 computed checksum did NOT match + ``` + + {{< note >}} + + 下载的 kubectl 与校验和文件版本要相同。 + {{< /note >}} + + +1. 将 kubectl 置为可执行文件: + + ```bash + chmod +x ./kubectl + ``` + + +1. 将可执行文件 kubectl 移动到系统可寻址路径 `PATH` 内的一个位置: + + ```bash + sudo mv ./kubectl /usr/local/bin/kubectl + sudo chown root: /usr/local/bin/kubectl + ``` + + +1. 测试一下,确保你安装的是最新的版本: + + ```bash + kubectl version --client + ``` + + +### 用 Homebrew 在 macOS 系统上安装 {#install-with-homebrew-on-macos} + + +如果你是 macOS 系统,且用的是 [Homebrew](https://brew.sh/) 包管理工具, +则可以用 Homebrew 安装 kubectl。 + + +1. 运行安装命令: + + ```bash + brew install kubectl + ``` + + 或 + + ```bash + brew install kubernetes-cli + ``` + + +1. 测试一下,确保你安装的是最新的版本: + + ```bash + kubectl version --client + ``` + + +### 用 Macports 在 macOS 上安装 {#install-with-macports-on-macos} + + +如果你用的是 macOS,且用 [Macports](https://macports.org/) 包管理工具,则你可以用 Macports 安装kubectl。 + + +1. 运行安装命令: + + ```bash + sudo port selfupdate + sudo port install kubectl + ``` + + +1. 
测试一下,确保你安装的是最新的版本: + + ```bash + kubectl version --client + ``` + + +### 作为谷歌云 SDK 的一部分,在 macOS 上安装 {#install-on-macos-as-part-of-the-google-cloud-sdk} + +{{< include "included/install-kubectl-gcloud.md" >}} + + +## 验证 kubectl 配置 {#verify-kubectl-configuration} + +{{< include "included/verify-kubectl.md" >}} + + +## 可选的 kubectl 配置 {#optional-kubectl-configurations} + +### 启用 shell 自动补全功能 {#enable-shell-autocompletion} + + +kubectl 为 Bash 和 Zsh 提供自动补全功能,这可以节省许多输入的麻烦。 + +下面是为 Bash 和 Zsh 设置自动补全功能的操作步骤。 + +{{< tabs name="kubectl_autocompletion" >}} +{{< tab name="Bash" include="included/optional-kubectl-configs-bash-mac.md" />}} +{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +{{< include "included/kubectl-whats-next.md" >}} \ No newline at end of file diff --git a/content/zh/docs/test.md b/content/zh/docs/test.md index d873d6cb6d..b6ee5be151 100644 --- a/content/zh/docs/test.md +++ b/content/zh/docs/test.md @@ -783,7 +783,7 @@ sequenceDiagram {{}}
    在官方网站上有更多的[示例](https://mermaid-js.github.io/mermaid/#/examples)。 diff --git a/content/zh/docs/tutorials/_index.md b/content/zh/docs/tutorials/_index.md index f283d60ea6..42415cafe2 100644 --- a/content/zh/docs/tutorials/_index.md +++ b/content/zh/docs/tutorials/_index.md @@ -62,13 +62,13 @@ Kubernetes 文档的这一部分包含教程。每个教程展示了如何完成 * [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) -* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) +* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/) --> ## 无状态应用程序 * [公开外部 IP 地址访问集群中的应用程序](/zh/docs/tutorials/stateless-application/expose-external-ip-address/) -* [示例:使用 Redis 部署 PHP 留言板应用程序](/zh/docs/tutorials/stateless-application/guestbook/) +* [示例:使用 MongoDB 部署 PHP 留言板应用程序](/zh/docs/tutorials/stateless-application/guestbook/) 为了完成本教程中的所有步骤,你必须安装 [kind](https://kind.sigs.k8s.io/docs/user/quick-start/) -和 [kubectl](/zh/doc/tasks/tools/install-kubectl/)。本教程将显示同时具有 alpha(v1.19 之前的版本) +和 [kubectl](/zh/docs/tasks/tools/install-kubectl/)。本教程将显示同时具有 alpha(v1.19 之前的版本) 和通常可用的 seccomp 功能的示例,因此请确保为所使用的版本[正确配置](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)了集群。 diff --git a/content/zh/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md b/content/zh/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md index b6357ed284..b7c99d0e8e 100644 --- a/content/zh/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md +++ b/content/zh/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md @@ -34,7 +34,7 @@ Dockerfile、kubernetes.yml、Kubernetes ConfigMaps、和 Kubernetes Secrets。 比如赋值给不同的容器中的不同环境变量。 @@ -90,4 +90,4 @@ CDI & MicroProfile 都会被用在互动教程中, ### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) --> ## 示例:使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置 -### [启动互动教程](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) +### [启动互动教程](/zh/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) diff --git a/content/zh/docs/tutorials/hello-minikube.md b/content/zh/docs/tutorials/hello-minikube.md index b3a23e348b..c07f99998b 100644 --- a/content/zh/docs/tutorials/hello-minikube.md +++ b/content/zh/docs/tutorials/hello-minikube.md @@ -81,9 +81,13 @@ This tutorial provides a container image that uses NGINX to echo back all the re {{< kat-button >}} - + {{< note >}} - 如果你在本地安装了 Minikube,运行 `minikube start`。 + 如果你在本地安装了 Minikube,运行 `minikube start`。 + 在运行 `minikube dashboard` 之前,你应该打开一个新终端, + 在此启动 `minikube dashboard` ,然后切换回主终端。 {{< /note >}} 3. 仅限 Katacoda 环境:在终端窗口的顶部,单击加号,然后单击 **选择要在主机 1 上查看的端口**。 + 4. 仅限 Katacoda 环境:输入“30000”,然后单击 **显示端口**。 + +{{< note >}} +`dashboard` 命令启用仪表板插件,并在默认的 Web 浏览器中打开代理。你可以在仪表板上创建 Kubernetes 资源,例如 Deployment 和 Service。 + +如果你以 root 用户身份在环境中运行, +请参见[使用 URL 打开仪表板](/zh/docs/tutorials/hello-minikube#open-dashboard-with-url)。 + +要停止代理,请运行 `Ctrl+C` 退出该进程。仪表板仍在运行中。 +{{< /note >}} + + +## 使用 URL 打开仪表板 + + +如果你不想打开 Web 浏览器,请使用 url 标志运行显示板命令以得到 URL: + +```shell +minikube dashboard --url +``` + - v | - endpoint -``` +{{< mermaid >}} +graph LR; + client(client)-->node2[节点 2]; + node2-->client; + node2-. SNAT .->node1[节点 1]; + node1-. 
SNAT .->node2; + node1-->endpoint(端点); + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + class node1,node2,endpoint k8s; + class client plain; +{{}} 为了防止这种情况发生,Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints,发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下,你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。 @@ -229,17 +246,18 @@ client_address=104.132.1.79 用图表示: -``` - client - ^ / \ - / / \ - / v X - node 1 node 2 - ^ | - | | - | v - endpoint -``` +{{< mermaid >}} +graph TD; + client --> node1[节点 1]; + client(client) --x node2[节点 2]; + node1 --> endpoint(端点); + endpoint --> node1; + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + class node1,node2,endpoint k8s; + class client plain; +{{}} @@ -285,17 +303,7 @@ client_address=10.240.0.5 用图表示: -``` - client - | - lb VIP - / ^ - v / -health check ---> node 1 node 2 <--- health check - 200 <--- ^ | ---> 500 - | V - endpoint -``` +![Source IP with externalTrafficPolicy](/images/docs/sourceip-externaltrafficpolicy.svg) 你可以设置 annotation 来进行测试: @@ -367,7 +375,7 @@ __跨平台支持__ 2. 使用一个包转发器,因此从客户端发送到负载均衡器 VIP 的请求在拥有客户端源 IP 地址的节点终止,而不被中间代理。 -第一类负载均衡器必须使用一种它和后端之间约定的协议来和真实的客户端 IP 通信,例如 HTTP [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 头,或者 [proxy 协议](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)。 +第一类负载均衡器必须使用一种它和后端之间约定的协议来和真实的客户端 IP 通信,例如 HTTP [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 头,或者 [proxy 协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。 第二类负载均衡器可以通过简单的在保存于 Service 的 `service.spec.healthCheckNodePort` 字段上创建一个 HTTP 健康检查点来使用上面描述的特性。 @@ -394,6 +402,4 @@ $ kubectl delete deployment source-ip-app ## {{% heading "whatsnext" %}} -* 学习更多关于 [通过 services 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/) -* 学习更多关于 [负载均衡](/zh/docs/user-guide/load-balancer) - +* 进一步学习 [通过 services 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md index 500259f5fa..3f5baf3c27 100644 --- a/content/zh/docs/tutorials/stateful-application/zookeeper.md +++ b/content/zh/docs/tutorials/stateful-application/zookeeper.md @@ -1,5 +1,10 @@ --- -approvers: +title: 运行 ZooKeeper,一个分布式协调系统 +content_type: tutorial +weight: 40 +--- + @@ -20,8 +25,11 @@ Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget), and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). 
--> - -本教程展示了在 Kubernetes 上使用 [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/),[PodDisruptionBudgets](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) 和 [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 +本教程展示了在 Kubernetes 上使用 +[StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/), +[PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) 和 +[PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和) +特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 ## {{% heading "prerequisites" %}} @@ -29,44 +37,45 @@ and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affini Before starting this tutorial, you should be familiar with the following Kubernetes concepts. --> - 在开始本教程前,你应该熟悉以下 Kubernetes 概念。 -- [Pods](/zh/docs/concepts/workloads/pods/) -- [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) -- [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) -- [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) -- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) -- [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/) -- [PodDisruptionBudgets](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) -- [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和) -- [kubectl CLI](/zh/docs/reference/kubectl/kubectl/) +- [Pods](/zh/docs/concepts/workloads/pods/) +- [集群 DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +- [无头服务(Headless Service)](/zh/docs/concepts/services-networking/service/#headless-services) +- [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) +- [PersistentVolume 制备](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) +- [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/) +- [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) +- [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和) +- [kubectl CLI](/zh/docs/reference/kubectl/kubectl/) +你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。 +在本教程中你将会隔离(Cordon)和腾空(Drain )集群的节点。 +**这意味着集群节点上所有的 Pods 将会被终止并移除。这些节点也会暂时变为不可调度**。 +在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。 + - -你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。在本教程中你将会 cordon 和 drain 集群的节点。**这意味着集群节点上所有的 Pods 将会被终止并移除**。**这些节点也会暂时变为不可调度**。在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。 - -本教程假设你的集群配置为动态的提供 PersistentVolumes。如果你的集群没有配置成这样,在开始本教程前,你需要手动准备三个 20 GiB 的卷。 - +本教程假设你的集群配置为动态的提供 PersistentVolumes。 +如果你的集群没有配置成这样,在开始本教程前,你需要手动准备三个 20 GiB 的卷。 ## {{% heading "objectives" %}} - 在学习本教程后,你将熟悉下列内容。 * 如何使用 StatefulSet 部署一个 ZooKeeper ensemble。 @@ -74,11 +83,10 @@ After this tutorial, you will know the following. 
* 如何在 ensemble 中 分布 ZooKeeper 服务器的部署。 * 如何在计划维护中使用 PodDisruptionBudgets 确保服务可用性。 - +### ZooKeeper {#zookeeper-basics} +[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) +是一个分布式的开源协调服务,用于分布式系统。 +ZooKeeper 允许你读取、写入数据和发现数据更新。 +数据按层次结构组织在文件系统中,并复制到 ensemble(一个 ZooKeeper 服务器的集合) +中所有的 ZooKeeper 服务器。对数据的所有操作都是原子的和顺序一致的。 +ZooKeeper 通过 +[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) +一致性协议在 ensemble 的所有服务器之间复制一个状态机来确保这个特性。 + + +Ensemble 使用 Zab 协议选举一个领导者,在选举出领导者前不能写入数据。 +一旦选举出了领导者,ensemble 使用 Zab 保证所有写入被复制到一个 quorum, +然后这些写入操作才会被确认并对客户端可用。 +如果没有遵照加权 quorums,一个 quorum 表示包含当前领导者的 ensemble 的多数成员。 +例如,如果 ensemble 有 3 个服务器,一个包含领导者的成员和另一个服务器就组成了一个 +quorum。 +如果 ensemble 不能达成一个 quorum,数据将不能被写入。 + - -### ZooKeeper 基础 - -[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) 是一个分布式的开源协调服务,用于分布式系统。ZooKeeper 允许你读取、写入数据和发现数据更新。数据按层次结构组织在文件系统中,并复制到 ensemble(一个 ZooKeeper 服务器的集合) 中所有的 ZooKeeper 服务器。对数据的所有操作都是原子的和顺序一致的。ZooKeeper 通过 [Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) 一致性协议在 ensemble 的所有服务器之间复制一个状态机来确保这个特性。 - -ensemble 使用 Zab 协议选举一个 leader,在选举出 leader 前不能写入数据。一旦选举出了 leader,ensemble 使用 Zab 保证所有写入被复制到一个 quorum,然后这些写入操作才会被确认并对客户端可用。如果没有遵照加权 quorums,一个 quorum 表示包含当前 leader 的 ensemble 的多数成员。例如,如果 ensemble 有3个服务器,一个包含 leader 的成员和另一个服务器就组成了一个 quorum。如果 ensemble 不能达成一个 quorum,数据将不能被写入。 - -ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被写入一个在存储介质上的持久 WAL(Write Ahead Log)。当一个服务器故障时,它能够通过回放 WAL 恢复之前的状态。为了防止 WAL 无限制的增长,ZooKeeper 服务器会定期的将内存状态快照保存到存储介质。这些快照能够直接加载到内存中,所有在这个快照之前的 WAL 条目都可以被安全的丢弃。 +ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被写入一个在存储介质上的 +持久 WAL(Write Ahead Log)。 +当一个服务器出现故障时,它能够通过回放 WAL 恢复之前的状态。 +为了防止 WAL 无限制的增长,ZooKeeper 服务器会定期的将内存状态快照保存到存储介质。 +这些快照能够直接加载到内存中,所有在这个快照之前的 WAL 条目都可以被安全的丢弃。 - ## 创建一个 ZooKeeper Ensemble 下面的清单包含一个 -[Headless Service](/zh/docs/concepts/services-networking/service/#headless-services), +[无头服务](/zh/docs/concepts/services-networking/service/#headless-services), 一个 [Service](/zh/docs/concepts/services-networking/service/), 一个 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget), 和一个 [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)。 @@ -127,8 +152,8 @@ Open a terminal, and use the [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) command to create the manifest. --> - -打开一个命令行终端,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) +打开一个命令行终端,使用命令 +[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) 创建这个清单。 ```shell @@ -139,8 +164,8 @@ kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml This creates the `zk-hs` Headless Service, the `zk-cs` Service, the `zk-pdb` PodDisruptionBudget, and the `zk` StatefulSet. --> - -这个操作创建了 `zk-hs` Headless Service、`zk-cs` Service、`zk-pdb` PodDisruptionBudget 和 `zk` StatefulSet。 +这个操作创建了 `zk-hs` 无头服务、`zk-cs` 服务、`zk-pdb` PodDisruptionBudget +和 `zk` StatefulSet。 ``` service/zk-hs created @@ -153,8 +178,9 @@ statefulset.apps/zk created Use [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to watch the StatefulSet controller create the StatefulSet's Pods. 
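For reference, what makes a Service "headless" is simply `clusterIP: None`: instead of a load-balanced virtual IP, DNS returns the individual Pod addresses, which is what gives each ensemble member a stable network identity. Below is a minimal sketch consistent with the ports that appear in `zoo.cfg` later in this tutorial; the real `zk-hs` in `zookeeper.yaml` may differ in detail.

```yaml
# Minimal headless Service sketch (assumption: the real zk-hs may vary in detail).
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  clusterIP: None        # headless: DNS resolves to the Pods themselves
  selector:
    app: zk
  ports:
  - name: server
    port: 2888           # quorum/server port referenced in zoo.cfg
  - name: leader-election
    port: 3888           # leader-election port referenced in zoo.cfg
```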
--> - -使用 [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) 查看 StatefulSet 控制器创建的 Pods。 +使用命令 +[`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) +查看 StatefulSet 控制器创建的 Pods。 ```shell kubectl get pods -w -l app=zk @@ -163,7 +189,6 @@ kubectl get pods -w -l app=zk - 一旦 `zk-2` Pod 变成 Running 和 Ready 状态,使用 `CRTL-C` 结束 kubectl。 ``` @@ -189,8 +214,8 @@ zk-2 1/1 Running 0 40s The StatefulSet controller creates three Pods, and each Pod has a container with a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server. --> - -StatefulSet 控制器创建了3个 Pods,每个 Pod 包含一个 [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) 服务器。 +StatefulSet 控制器创建 3 个 Pods,每个 Pod 包含一个 +[ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) 服务器。 +### 促成 Leader 选举 {#facilitating-leader-election} -### 促成 Leader 选举 +由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置, +以执行 leader 选举。Ensemble 中的每个服务器都需要具有一个独一无二的标识符, +所有的服务器均需要知道标识符的全集,并且每个标识符都需要和一个网络地址相关联。 -由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置,以执行 leader 选举。Ensemble 中的每个服务器都需要具有一个独一无二的标识符,所有的服务器均需要知道标识符的全集,并且每个标识符都需要和一个网络地址相关联。 - -使用 [`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 +使用命令 +[`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec) +获取 `zk` StatefulSet 中 Pods 的主机名。 ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done @@ -215,8 +243,10 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `-`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and `zk-2`. --> - -StatefulSet 控制器基于每个 Pod 的序号索引为它们各自提供一个唯一的主机名。主机名采用 `-` 的形式。由于 `zk` StatefulSet 的 `replicas` 字段设置为3,这个 Set 的控制器将创建3个 Pods,主机名为:`zk-0`、`zk-1` 和 `zk-2`。 +StatefulSet 控制器基于每个 Pod 的序号索引为它们各自提供一个唯一的主机名。 +主机名采用 `-<序数索引>` 的形式。 +由于 `zk` StatefulSet 的 `replicas` 字段设置为 3,这个集合的控制器将创建 +3 个 Pods,主机名为:`zk-0`、`zk-1` 和 `zk-2`。 ``` zk-0 @@ -229,8 +259,8 @@ The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, a To examine the contents of the `myid` file for each server use the following command. --> - -ZooKeeper ensemble 中的服务器使用自然数作为唯一标识符,每个服务器的标识符都保存在服务器的数据目录中一个名为 `myid` 的文件里。 +ZooKeeper ensemble 中的服务器使用自然数作为唯一标识符, +每个服务器的标识符都保存在服务器的数据目录中一个名为 `myid` 的文件里。 检查每个服务器的 `myid` 文件的内容。 @@ -241,7 +271,6 @@ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeepe - 由于标识符为自然数并且序号索引是非负整数,你可以在序号上加 1 来生成一个标识符。 ``` @@ -256,8 +285,7 @@ myid zk-2 - -获取 `zk` StatefulSet 中每个 Pod 的 FQDN (Fully Qualified Domain Name,正式域名)。 +获取 `zk` StatefulSet 中每个 Pod 的全限定域名(Fully Qualified Domain Name,FQDN)。 ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done @@ -267,8 +295,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done The `zk-hs` Service creates a domain for all of the Pods, `zk-hs.default.svc.cluster.local`. --> - -`zk-hs` Service 为所有 Pods 创建了一个 domain:`zk-hs.default.svc.cluster.local`。 +`zk-hs` Service 为所有 Pods 创建了一个域:`zk-hs.default.svc.cluster.local`。 ``` zk-0.zk-hs.default.svc.cluster.local @@ -281,10 +308,13 @@ The A records in [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-ser ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod. 
--> +[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。 +如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址完成更新, +但 A 记录的名称不会改变。 -[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 - -ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。 +ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。 +使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。 ```shell kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg @@ -296,8 +326,9 @@ the file, the `1`, `2`, and `3` correspond to the identifiers in the ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in the `zk` StatefulSet. --> - -文件底部为 `server.1`、`server.2` 和 `server.3`,其中的 `1`、`2`和`3`分别对应 ZooKeeper 服务器的 `myid` 文件中的标识符。它们被设置为 `zk` StatefulSet 中的 Pods 的 FQDNs。 +文件底部为 `server.1`、`server.2` 和 `server.3`,其中的 `1`、`2` 和 `3` +分别对应 ZooKeeper 服务器的 `myid` 文件中的标识符。 +它们被设置为 `zk` StatefulSet 中的 Pods 的 FQDNs。 ``` clientPort=2181 @@ -317,14 +348,17 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888 ``` +### 达成共识 {#achieving-consensus} -### 达成一致 - - 一致性协议要求每个参与者的标识符唯一。在 Zab 协议里任何两个参与者都不应该声明相同的唯一标识符。对于让系统中的进程协商哪些进程已经提交了哪些数据而言,这是必须的。如果有两个 Pods 使用相同的序号启动,这两个 ZooKeeper 服务器会将自己识别为相同的服务器。 + 一致性协议要求每个参与者的标识符唯一。 +在 Zab 协议里任何两个参与者都不应该声明相同的唯一标识符。 +对于让系统中的进程协商哪些进程已经提交了哪些数据而言,这是必须的。 +如果有两个 Pods 使用相同的序号启动,这两个 ZooKeeper 服务器 +会将自己识别为相同的服务器。 ```shell kubectl get pods -w -l app=zk @@ -355,8 +389,10 @@ the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper server claiming the identity configured in its `myid` file. --> - -每个 Pod 的 A 记录仅在 Pod 变成 Ready状态时被录入。因此,ZooKeeper 服务器的 FQDNs 只会解析到一个 endpoint,而那个 endpoint 将会是一个唯一的 ZooKeeper 服务器,这个服务器声明了配置在它的 `myid` 文件中的标识符。 +每个 Pod 的 A 记录仅在 Pod 变成 Ready状态时被录入。 +因此,ZooKeeper 服务器的 FQDNs 只会解析到一个端点,而那个端点将会是 +一个唯一的 ZooKeeper 服务器,这个服务器声明了配置在它的 `myid` +文件中的标识符。 ``` zk-0.zk-hs.default.svc.cluster.local @@ -369,7 +405,8 @@ This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files represents a correctly configured ensemble. --> -这保证了 ZooKeepers 的 `zoo.cfg` 文件中的 `servers` 属性代表了一个正确配置的 ensemble。 +这保证了 ZooKeepers 的 `zoo.cfg` 文件中的 `servers` 属性代表了 +一个正确配置的 ensemble。 ``` server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888 @@ -380,8 +417,10 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888 - -当服务器使用 Zab 协议尝试提交一个值的时候,它们会达成一致并成功提交这个值(如果 leader 选举成功并且至少有两个 Pods 处于 Running 和 Ready状态),或者将会失败(如果没有满足上述条件中的任意一条)。当一个服务器承认另一个服务器的代写时不会有状态产生。 +当服务器使用 Zab 协议尝试提交一个值的时候,它们会达成一致并成功提交这个值 +(如果领导者选举成功并且至少有两个 Pods 处于 Running 和 Ready状态), +或者将会失败(如果没有满足上述条件中的任意一条)。 +当一个服务器承认另一个服务器的代写时不会有状态产生。 - ### Ensemble 健康检查 -最基本的健康检查是向一个 ZooKeeper 服务器写入一些数据,然后从另一个服务器读取这些数据。 +最基本的健康检查是向一个 ZooKeeper 服务器写入一些数据,然后从 +另一个服务器读取这些数据。 使用 `zkCli.sh` 脚本在 `zk-0` Pod 上写入 `world` 到路径 `/hello`。 @@ -411,8 +450,7 @@ Created /hello - -从 `zk-1` Pod 获取数据。 +使用下面的命令从 `zk-1` Pod 获取数据。 ```shell kubectl exec zk-1 zkCli.sh get /hello @@ -422,8 +460,7 @@ kubectl exec zk-1 zkCli.sh get /hello The data that you created on `zk-0` is available on all the servers in the ensemble. --> - -你在 `zk-0` 创建的数据在 ensemble 中所有的服务器上都是可用的。 +你在 `zk-0` 上创建的数据在 ensemble 中所有的服务器上都是可用的。 ``` WATCHER:: @@ -455,12 +492,15 @@ state machine. Use the [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) command to delete the `zk` StatefulSet. 
--> +### 提供持久存储 -### 准备持久存储 +如同在 [ZooKeeper](#zookeeper-basics) 一节所提到的,ZooKeeper 提交 +所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。 +对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化 +是一种常用的技术,对于普通的存储应用也是如此。 -如同在 [ZooKeeper 基础](#zookeeper-基础) 一节所提到的,ZooKeeper 提交所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化是一种常用的技术,对于普通的存储应用也是如此。 - -使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 `zk` StatefulSet。 +使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) +删除 `zk` StatefulSet。 ```shell kubectl delete statefulset zk @@ -473,7 +513,6 @@ statefulset.apps "zk" deleted - 观察 StatefulSet 中的 Pods 变为终止状态。 ```shell @@ -483,7 +522,6 @@ kubectl get pods -w -l app=zk - 当 `zk-0` 完全终止时,使用 `CRTL-C` 结束 kubectl。 ``` @@ -504,8 +542,7 @@ zk-0 0/1 Terminating 0 11m - -重新应用 `zookeeper.yaml` 中的代码清单。 +重新应用 `zookeeper.yaml` 中的清单。 ```shell kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml @@ -516,7 +553,6 @@ This creates the `zk` StatefulSet object, but the other API objects in the manif Watch the StatefulSet controller recreate the StatefulSet's Pods. --> - `zk` StatefulSet 将会被创建。由于清单中的其他 API 对象已经存在,所以它们不会被修改。 观察 StatefulSet 控制器重建 StatefulSet 的 Pods。 @@ -528,7 +564,6 @@ kubectl get pods -w -l app=zk - 一旦 `zk-2` Pod 处于 Running 和 Ready 状态,使用 `CRTL-C` 停止 kubectl命令。 ``` @@ -554,7 +589,6 @@ zk-2 1/1 Running 0 40s Use the command below to get the value you entered during the [sanity test](#sanity-testing-the-ensemble), from the `zk-2` Pod. --> - 从 `zk-2` Pod 中获取你在[健康检查](#Ensemble-健康检查)中输入的值。 ```shell @@ -564,8 +598,8 @@ kubectl exec zk-2 zkCli.sh get /hello - -尽管 `zk` StatefulSet 中所有的 Pods 都已经被终止并重建过,ensemble 仍然使用原来的数值提供服务。 +尽管 `zk` StatefulSet 中所有的 Pods 都已经被终止并重建过,ensemble +仍然使用原来的数值提供服务。 ``` WATCHER:: @@ -588,8 +622,8 @@ numChildren = 0 - -`zk` StatefulSet 的 `spec` 中的 `volumeClaimTemplates` 字段标识了将要为每个 Pod 准备的 PersistentVolume。 +`zk` StatefulSet 的 `spec` 中的 `volumeClaimTemplates` 字段标识了 +将要为每个 Pod 准备的 PersistentVolume。 ```yaml volumeClaimTemplates: @@ -610,10 +644,9 @@ the `StatefulSet`. Use the following command to get the `StatefulSet`'s `PersistentVolumeClaims`. --> +`StatefulSet` 控制器为 `StatefulSet` 中的每个 Pod 生成一个 `PersistentVolumeClaim`。 -StatefulSet 控制器为 StatefulSet 中的每个 Pod 生成一个 PersistentVolumeClaim。 - -获取 StatefulSet 的 PersistentVolumeClaims。 +获取 `StatefulSet` 的 `PersistentVolumeClaim`。 ```shell kubectl get pvc -l app=zk @@ -622,8 +655,7 @@ kubectl get pvc -l app=zk - -当 StatefulSet 重新创建它的 Pods时,Pods 的 PersistentVolumes 会被重新挂载。 +当 `StatefulSet` 重新创建它的 Pods 时,Pods 的 PersistentVolumes 会被重新挂载。 ``` NAME STATUS VOLUME CAPACITY ACCESSMODES AGE @@ -635,8 +667,8 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R - -StatefulSet 的容器 `template` 中的 `volumeMounts` 一节使得 PersistentVolumes 被挂载到 ZooKeeper 服务器的数据目录。 +StatefulSet 的容器 `template` 中的 `volumeMounts` 一节使得 +PersistentVolumes 被挂载到 ZooKeeper 服务器的数据目录。 ```shell volumeMounts: @@ -650,11 +682,13 @@ same `PersistentVolume` mounted to the ZooKeeper server's data directory. Even when the Pods are rescheduled, all the writes made to the ZooKeeper servers' WALs, and all their snapshots, remain durable. 
--> - -当 `zk` StatefulSet 中的一个 Pod 被(重新)调度时,它总是拥有相同的 PersistentVolume,挂载到 ZooKeeper 服务器的数据目录。即使在 Pods 被重新调度时,所有对 ZooKeeper 服务器的 WALs 的写入和它们的全部快照都仍然是持久的。 +当 `zk` StatefulSet 中的一个 Pod 被(重新)调度时,它总是拥有相同的 PersistentVolume, +挂载到 ZooKeeper 服务器的数据目录。 +即使在 Pods 被重新调度时,所有对 ZooKeeper 服务器的 WALs 的写入和它们的 +全部快照都仍然是持久的。 - ## 确保一致性配置 -如同在 [促成 leader 选举](#促成-Leader-选举) 和 [达成一致](#达成一致) 小节中提到的,ZooKeeper ensemble 中的服务器需要一致性的配置来选举一个 leader 并形成一个 quorum。它们还需要 Zab 协议的一致性配置来保证这个协议在网络中正确的工作。在这次的样例中,我们通过直接将配置写入代码清单中来达到该目的。 +如同在[促成领导者选举](#facilitating-leader-election) 和[达成一致](#achieving-consensus) +小节中提到的,ZooKeeper ensemble 中的服务器需要一致性的配置来选举一个领导者并形成一个 +quorum。它们还需要 Zab 协议的一致性配置来保证这个协议在网络中正确的工作。 +在这次的示例中,我们通过直接将配置写入代码清单中来达到该目的。 获取 `zk` StatefulSet。 @@ -677,8 +713,8 @@ Get the `zk` StatefulSet. kubectl get sts zk -o yaml ``` ``` -… -command: + ... + command: - sh - -c - "start-zookeeper \ @@ -699,14 +735,14 @@ command: --max_session_timeout=40000 \ --min_session_timeout=4000 \ --log_level=INFO" -… +... ``` - -用于启动 ZooKeeper 服务器的命令将这些配置作为命令行参数传给了 ensemble。你也可以通过环境变量来传入这些配置。 +用于启动 ZooKeeper 服务器的命令将这些配置作为命令行参数传给了 ensemble。 +你也可以通过环境变量来传入这些配置。 +### 配置日志 {#configuring-logging} -### 配置日志 - -`zkGenConfig.sh` 脚本产生的一个文件控制了 ZooKeeper 的日志行为。ZooKeeper 使用了 [Log4j](http://logging.apache.org/log4j/2.x/) 并默认使用基于文件大小和时间的滚动文件追加器作为日志配置。 +`zkGenConfig.sh` 脚本产生的一个文件控制了 ZooKeeper 的日志行为。 +ZooKeeper 使用了 [Log4j](http://logging.apache.org/log4j/2.x/) 并默认使用 +基于文件大小和时间的滚动文件追加器作为日志配置。 从 `zk` StatefulSet 的一个 Pod 中获取日志配置。 @@ -732,7 +769,6 @@ kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream. --> - 下面的日志配置会使 ZooKeeper 进程将其所有的日志写入标志输出文件流中。 ``` @@ -753,11 +789,13 @@ standard out and standard error do not exhaust local storage media. Use [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) to retrieve the last 20 log lines from one of the Pods. 
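Referring back to the configuration section above: the same ensemble settings that are passed to `start-zookeeper` as command-line flags could in principle be supplied through the container's `env` block instead. The fragment below is only a sketch; the variable names are hypothetical and are not documented options of the image used in this tutorial.

```yaml
# Hypothetical Pod-template fragment: ensemble settings as environment variables
# instead of command-line flags. These names are illustrative, not real options
# of the tutorial's image.
env:
- name: ZK_REPLICAS      # hypothetical
  value: "3"
- name: ZK_CLIENT_PORT   # hypothetical
  value: "2181"
- name: ZK_LOG_LEVEL     # hypothetical
  value: "INFO"
```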
--> +这是在容器里安全记录日志的最简单的方法。 +由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。 +Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流 +的应用日志不会耗尽本地存储媒介。 -这是在容器里安全记录日志的最简单的方法。由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流的应用日志不会耗尽本地存储媒介。 - - -使用 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) 从一个 Pod 中取回最后几行日志。 +使用命令 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) +从一个 Pod 中取回最后 20 行日志。 ```shell kubectl logs zk-0 --tail 20 @@ -766,7 +804,6 @@ kubectl logs zk-0 --tail 20 - 使用 `kubectl logs` 或者从 Kubernetes Dashboard 可以查看写入到标准输出和标准错误流中的应用日志。 ``` @@ -793,18 +830,17 @@ You can view application logs written to standard out or standard error using `k ``` - -Kubernetes 支持与 [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/) 和 [Elasticsearch and Kibana](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/) 的整合以获得复杂但更为强大的日志功能。 -对于集群级别的日志输出与整合,可以考虑部署一个 [sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) 容器。 +Kubernetes 支持与多种日志方案集成。你可以选择一个最适合你的集群和应用 +的日志解决方案。对于集群级别的日志输出与整合,可以考虑部署一个 +[边车容器](/zh/docs/concepts/cluster-administration/logging#sidecar-container-with-logging-agent) +来轮转和提供日志数据。 - ### 配置非特权用户 -在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 +在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。 +如果你的组织要求应用以非特权用户运行,你可以使用 +[SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) +控制运行容器入口点所使用的用户。 `zk` StatefulSet 的 Pod 的 `template` 包含了一个 `SecurityContext`。 @@ -833,8 +871,7 @@ corresponds to the zookeeper group. Get the ZooKeeper process information from the `zk-0` Pod. --> - -在 Pods 的容器内部,UID 1000 对应用户 zookeeper,GID 1000对应用户组 zookeeper。 +在 Pods 的容器内部,UID 1000 对应用户 zookeeper,GID 1000 对应用户组 zookeeper。 从 `zk-0` Pod 获取 ZooKeeper 进程信息。 @@ -846,8 +883,8 @@ kubectl exec zk-0 -- ps -elf As the `runAsUser` field of the `securityContext` object is set to 1000, instead of running as root, the ZooKeeper process runs as the zookeeper user. --> - -由于 `securityContext` 对象的 `runAsUser` 字段被设置为1000而不是 root,ZooKeeper 进程将以 zookeeper 用户运行。 +由于 `securityContext` 对象的 `runAsUser` 字段被设置为 1000 而不是 root, +ZooKeeper 进程将以 zookeeper 用户运行。 ``` F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD @@ -860,8 +897,8 @@ By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server' Use the command below to get the file permissions of the ZooKeeper data directory on the `zk-0` Pod. --> - -默认情况下,当 Pod 的 PersistentVolume 被挂载到 ZooKeeper 服务器的数据目录时,它只能被 root 用户访问。这个配置将阻止 ZooKeeper 进程写入它的 WAL 及保存快照。 +默认情况下,当 Pod 的 PersistentVolume 被挂载到 ZooKeeper 服务器的数据目录时, +它只能被 root 用户访问。这个配置将阻止 ZooKeeper 进程写入它的 WAL 及保存快照。 在 `zk-0` Pod 上获取 ZooKeeper 数据目录的文件权限。 @@ -872,8 +909,9 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data - -由于 `securityContext` 对象的 `fsGroup` 字段设置为1000,Pods 的 PersistentVolumes 的所有权属于 zookeeper 用户组,因而 ZooKeeper 进程能够成功的读写数据。 +由于 `securityContext` 对象的 `fsGroup` 字段设置为 1000,Pods 的 +PersistentVolumes 的所有权属于 zookeeper 用户组,因而 ZooKeeper +进程能够成功地读写数据。 ``` drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data @@ -890,19 +928,19 @@ common pattern. When deploying an application in Kubernetes, rather than using an external utility as a supervisory process, you should use Kubernetes as the watchdog for your application. 
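The `securityContext` described in the non-privileged-user section above is a short Pod-level block. A sketch consistent with the UID/GID 1000 values stated in the text (the tutorial's actual manifest may carry additional fields):

```yaml
# Pod-template fragment: run ZooKeeper as the non-root zookeeper user (UID 1000)
# and give mounted volumes the zookeeper group (GID 1000), so the process can
# write its WAL and snapshots to the PersistentVolume.
securityContext:
  runAsUser: 1000   # zookeeper user
  fsGroup: 1000     # zookeeper group, applied to the mounted PersistentVolume
```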
--> - ## 管理 ZooKeeper 进程 -[ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) 文档指出“你将需要一个监管程序用于管理每个 ZooKeeper 服务进程(JVM)”。在分布式系统中,使用一个看门狗(监管程序)来重启故障进程是一种常用的模式。 +[ZooKeeper 文档](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) +指出“你将需要一个监管程序用于管理每个 ZooKeeper 服务进程(JVM)”。 +在分布式系统中,使用一个看门狗(监管程序)来重启故障进程是一种常用的模式。 - ### 更新 Ensemble `zk` `StatefulSet` 的更新策略被设置为了 `RollingUpdate`。 @@ -912,6 +950,7 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv ```shell kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]' ``` + ``` statefulset.apps/zk patched ``` @@ -919,12 +958,12 @@ statefulset.apps/zk patched - 使用 `kubectl rollout status` 观测更新状态。 ```shell kubectl rollout status sts/zk ``` + ``` waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664... Waiting for 1 pods to be ready... @@ -943,8 +982,8 @@ This terminates the Pods, one at a time, in reverse ordinal order, and recreates Use the `kubectl rollout history` command to view a history or previous configurations. --> - -这项操作会逆序地依次终止每一个 Pod,并用新的配置重新创建。这样做确保了在滚动更新的过程中 quorum 依旧保持工作。 +这项操作会逆序地依次终止每一个 Pod,并用新的配置重新创建。 +这样做确保了在滚动更新的过程中 quorum 依旧保持工作。 使用 `kubectl rollout history` 命令查看历史或先前的配置。 @@ -962,7 +1001,6 @@ REVISION - 使用 `kubectl rollout undo` 命令撤销这次的改动。 ```shell @@ -974,7 +1012,7 @@ statefulset.apps/zk rolled back ``` - ### 处理进程故障 -[Restart Policies](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 +[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。 +对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,也是默认值。 +你应该**绝不**覆盖有状态应用的默认策略。 检查 `zk-0` Pod 中运行的 ZooKeeper 服务器的进程树。 @@ -999,8 +1039,8 @@ kubectl exec zk-0 -- ps -ef The command used as the container's entry point has PID 1, and the ZooKeeper process, a child of the entry point, has PID 27. --> - -作为容器入口点的命令的 PID 为 1,Zookeeper 进程是入口点的子进程,PID 为27。 +作为容器入口点的命令的 PID 为 1,Zookeeper 进程是入口点的子进程, +PID 为 27。 ``` UID PID PPID C STIME TTY TIME CMD @@ -1011,8 +1051,7 @@ zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd6 - -在一个终端观察 `zk` StatefulSet 中的 Pods。 +在一个终端观察 `zk` `StatefulSet` 中的 Pods。 ```shell kubectl get pod -w -l app=zk @@ -1021,7 +1060,6 @@ kubectl get pod -w -l app=zk - 在另一个终端杀掉 Pod `zk-0` 中的 ZooKeeper 进程。 ```shell @@ -1031,8 +1069,8 @@ In another terminal, terminate the ZooKeeper process in Pod `zk-0` with the foll - -ZooKeeper 进程的终结导致了它父进程的终止。由于容器的 RestartPolicy 是 Always,父进程被重启。 +ZooKeeper 进程的终结导致了它父进程的终止。由于容器的 `RestartPolicy` +是 Always,父进程被重启。 ``` NAME READY STATUS RESTARTS AGE @@ -1051,11 +1089,12 @@ that implements the application's business logic, the script must terminate with child process. This ensures that Kubernetes will restart the application's container when the process implementing the application's business logic fails. 
--> - -如果你的应用使用一个脚本(例如 zkServer.sh)来启动一个实现了应用业务逻辑的进程,这个脚本必须和子进程一起结束。这保证了当实现应用业务逻辑的进程故障时,Kubernetes 会重启这个应用的容器。 +如果你的应用使用一个脚本(例如 `zkServer.sh`)来启动一个实现了应用业务逻辑的进程, +这个脚本必须和子进程一起结束。这保证了当实现应用业务逻辑的进程故障时, +Kubernetes 会重启这个应用的容器。 - ### 存活性测试 -你的应用配置为自动重启故障进程,但这对于保持一个分布式系统的健康来说是不够的。许多场景下,一个系统进程可以是活动状态但不响应请求,或者是不健康状态。你应该使用 liveness probes 来通知 Kubernetes 你的应用进程处于不健康状态,需要被重启。 +你的应用配置为自动重启故障进程,但这对于保持一个分布式系统的健康来说是不够的。 +许多场景下,一个系统进程可以是活动状态但不响应请求,或者是不健康状态。 +你应该使用存活性探针来通知 Kubernetes 你的应用进程处于不健康状态,需要被重启。 `zk` StatefulSet 的 Pod 的 `template` 一节指定了一个存活探针。 ```yaml livenessProbe: - exec: - command: - - sh - - -c - - "zookeeper-ready 2181" - initialDelaySeconds: 15 - timeoutSeconds: 5 + exec: + command: + - sh + - -c + - "zookeeper-ready 2181" + initialDelaySeconds: 15 + timeoutSeconds: 5 ``` - -这个探针调用一个简单的 bash 脚本,使用 ZooKeeper 的四字缩写 `ruok` 来测试服务器的健康状态。 +这个探针调用一个简单的 Bash 脚本,使用 ZooKeeper 的四字缩写 `ruok` +来测试服务器的健康状态。 ``` OK=$(echo ruok | nc 127.0.0.1 $1) @@ -1102,8 +1142,7 @@ fi - -在一个终端窗口观察 `zk` StatefulSet 中的 Pods。 +在一个终端窗口中使用下面的命令观察 `zk` StatefulSet 中的 Pods。 ```shell kubectl get pod -w -l app=zk @@ -1112,7 +1151,6 @@ kubectl get pod -w -l app=zk - 在另一个窗口中,从 Pod `zk-0` 的文件系统中删除 `zookeeper-ready` 脚本。 ```shell @@ -1124,8 +1162,8 @@ When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted. --> - -当 ZooKeeper 进程的存活探针探测失败时,Kubernetes 将会为你自动重启这个进程,从而保证 ensemble 中不健康状态的进程都被重启。 +当 ZooKeeper 进程的存活探针探测失败时,Kubernetes 将会为你自动重启这个进程, +从而保证 ensemble 中不健康状态的进程都被重启。 ```shell kubectl get pod -w -l app=zk @@ -1143,28 +1181,32 @@ zk-0 1/1 Running 1 1h ``` +### 就绪性测试 +就绪不同于存活。如果一个进程是存活的,它是可调度和健康的。 +如果一个进程是就绪的,它应该能够处理输入。存活是就绪的必要非充分条件。 +在许多场景下,特别是初始化和终止过程中,一个进程可以是存活但没有就绪的。 + + +如果你指定了一个就绪探针,Kubernetes 将保证在就绪检查通过之前, +你的应用不会接收到网络流量。 -### 就绪性测试 - -就绪不同于存活。如果一个进程是存活的,它是可调度和健康的。如果一个进程是就绪的,它应该能够处理输入。存活是就绪的必要非充分条件。在许多场景下,特别是初始化和终止过程中,一个进程可以是存活但没有就绪的。 - -如果你指定了一个就绪探针,Kubernetes将保证在就绪检查通过之前,你的应用不会接收到网络流量。 - -对于一个 ZooKeeper 服务器来说,存活即就绪。因此 `zookeeper.yaml` 清单中的就绪探针和存活探针完全相同。 +对于一个 ZooKeeper 服务器来说,存活即就绪。 +因此 `zookeeper.yaml` 清单中的就绪探针和存活探针完全相同。 ```yaml readinessProbe: @@ -1182,11 +1224,11 @@ Even though the liveness and readiness probes are identical, it is important to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic. 
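Because the readiness check for ZooKeeper is the same as the liveness check, the readiness probe in `zookeeper.yaml` simply mirrors the liveness probe shown above; spelled out in full it reads:

```yaml
# Readiness probe: the same "zookeeper-ready 2181" check as the liveness probe,
# so a Pod only receives traffic once its ZooKeeper server answers correctly.
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
```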
--> - -虽然存活探针和就绪探针是相同的,但同时指定它们两者仍然重要。这保证了 ZooKeeper ensemble 中只有健康的服务器能接收网络流量。 +虽然存活探针和就绪探针是相同的,但同时指定它们两者仍然重要。 +这保证了 ZooKeeper ensemble 中只有健康的服务器能接收网络流量。 +## 容忍节点故障 +ZooKeeper 需要一个 quorum 来提交数据变动。对于一个拥有 3 个服务器的 ensemble 来说, +必须有两个服务器是健康的,写入才能成功。 +在基于 quorum 的系统里,成员被部署在多个故障域中以保证可用性。 +为了防止由于某台机器断连引起服务中断,最佳实践是防止应用的多个实例在相同的机器上共存。 + + +默认情况下,Kubernetes 可以把 StatefulSet 的 Pods 部署在相同节点上。 +对于你创建的 3 个服务器的 ensemble 来说,如果有两个服务器并存于 +相同的节点上并且该节点发生故障时,ZooKeeper 服务将中断, +直至至少一个 Pods 被重新调度。 + - -## 容忍节点故障 - -ZooKeeper 需要一个 quorum 来提交数据变动。对于一个拥有 3 个服务器的 ensemble来说,必须有两个服务器是健康的,写入才能成功。在基于 quorum 的系统里,成员被部署在故障域之间以保证可用性。为了防止由于某台机器断连引起服务中断,最佳实践是防止应用的多个示例在相同的机器上共存。 - -默认情况下,Kubernetes 可以把 StatefulSet 的 Pods 部署在相同节点上。对于你创建的 3 个服务器的 ensemble 来说,如果有两个服务器并存于相同的节点上并且该节点发生故障时,ZooKeeper 服务将中断,直至至少一个 Pods 被重新调度。 - -你应该总是提供额外的容量以允许关键系统进程在节点故障时能够被重新调度。如果你这样做了,服务故障就只会持续到 Kubernetes 调度器重新调度某个 ZooKeeper 服务器为止。但是,如果希望你的服务在容忍节点故障时无停服时间,你应该设置 `podAntiAffinity`。 +你应该总是提供多余的容量以允许关键系统进程在节点故障时能够被重新调度。 +如果你这样做了,服务故障就只会持续到 Kubernetes 调度器重新调度某个 +ZooKeeper 服务器为止。 +但是,如果希望你的服务在容忍节点故障时无停服时间,你应该设置 `podAntiAffinity`。 获取 `zk` Stateful Set 中的 Pods 的节点。 @@ -1225,8 +1277,7 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; - -`zk` StatefulSe 中所有的 Pods 都被部署在不同的节点。 +`zk` `StatefulSet` 中所有的 Pods 都被部署在不同的节点。 ``` kubernetes-node-cxpk @@ -1238,19 +1289,19 @@ kubernetes-node-2g2d This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` specified. --> -这是因为 `zk` StatefulSet 中的 Pods 指定了 `PodAntiAffinity`。 +这是因为 `zk` `StatefulSet` 中的 Pods 指定了 `PodAntiAffinity`。 ```yaml - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: "app" - operator: In - values: - - zk - topologyKey: "kubernetes.io/hostname" +affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: "app" + operator: In + values: + - zk + topologyKey: "kubernetes.io/hostname" ``` - -`requiredDuringSchedulingIgnoredDuringExecution` 告诉 Kubernetes 调度器,在以 `topologyKey` 指定的域中,绝对不要把带有键为 `app`,值为 `zk` 的标签的两个 Pods 调度到相同的节点。`topologyKey` -`kubernetes.io/hostname` 表示这个域是一个单独的节点。使用不同的 rules、labels 和 selectors,你能够通过这种技术把你的 ensemble 分布在不同的物理、网络和电力故障域之间。 +`requiredDuringSchedulingIgnoredDuringExecution` 告诉 Kubernetes 调度器, +在以 `topologyKey` 指定的域中,绝对不要把带有键为 `app`、值为 `zk` 的标签 +的两个 Pods 调度到相同的节点。`topologyKey` `kubernetes.io/hostname` 表示 +这个域是一个单独的节点。 +使用不同的规则、标签和选择算符,你能够通过这种技术把你的 ensemble 分布 +在不同的物理、网络和电力故障域之间。 +## 节点维护期间保持应用可用 -## 存活管理 +**在本节中你将会隔离(Cordon)和腾空(Drain)节点。 +如果你是在一个共享的集群里使用本教程,请保证不会影响到其他租户。** -**在本节中你将会 cordon 和 drain 节点。如果你是在一个共享的集群里使用本教程,请保证不会影响到其他租户** +上一小节展示了如何在节点之间分散 Pods 以在计划外的节点故障时保证服务存活。 +但是你也需要为计划内维护引起的临时节点故障做准备。 -上一小节展示了如何在节点之间分散 Pods 以在计划外的节点故障时保证服务存活。但是你也需要为计划内维护引起的临时节点故障做准备。 - -获取你集群中的节点。 +使用此命令获取你的集群中的节点。 ```shell kubectl get nodes @@ -1295,7 +1350,8 @@ Use [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordo cordon all but four of the nodes in your cluster. 
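As noted above, the same anti-affinity technique can spread the ensemble across larger failure domains by changing the `topologyKey`. Here is a hedged variant of the rule shown earlier, using the zone label; older clusters may expose the zone under `failure-domain.beta.kubernetes.io/zone` instead.

```yaml
# Variant sketch: spread zk Pods across zones rather than individual nodes by
# swapping the topologyKey; the rest of the rule is unchanged.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app"
          operator: In
          values:
          - zk
      topologyKey: "topology.kubernetes.io/zone"
```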
--> -使用 [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon) cordon 你的集群中除4个节点以外的所有节点。 +使用 [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon) +隔离你的集群中除 4 个节点以外的所有节点。 ```shell kubectl cordon @@ -1304,8 +1360,7 @@ kubectl cordon - -获取 `zk-pdb` `PodDisruptionBudget`。 +使用下面的命令获取 `zk-pdb` `PodDisruptionBudget`。 ```shell kubectl get pdb zk-pdb @@ -1315,8 +1370,8 @@ kubectl get pdb zk-pdb The `max-unavailable` field indicates to Kubernetes that at most one Pod from `zk` `StatefulSet` can be unavailable at any time. --> - -`max-unavailable` 字段指示 Kubernetes 在任何时候,`zk` `StatefulSet` 至多有一个 Pod 是不可用的。 +`max-unavailable` 字段指示 Kubernetes 在任何时候,`zk` `StatefulSet` +至多有一个 Pod 是不可用的。 ``` NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE @@ -1326,8 +1381,7 @@ zk-pdb N/A 1 1 - -在一个终端观察 `zk` `StatefulSet` 中的 Pods。 +在一个终端中,使用下面的命令观察 `zk` `StatefulSet` 中的 Pods。 ```shell kubectl get pods -w -l app=zk @@ -1337,7 +1391,7 @@ kubectl get pods -w -l app=zk In another terminal, use this command to get the nodes that the Pods are currently scheduled on. --> -在另一个终端获取 Pods 当前调度的节点。 +在另一个终端中,使用下面的命令获取 Pods 当前调度的节点。 ```shell for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done @@ -1354,7 +1408,8 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) drain the node on which the `zk-0` Pod is scheduled. --> -使用 [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 +使用 [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) +来隔离和腾空 `zk-0` Pod 调度所在的节点。 ```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1372,8 +1427,7 @@ node "kubernetes-node-pb41" drained As there are four nodes in your cluster, `kubectl drain`, succeeds and the `zk-0` is rescheduled to another node. --> - -由于你的集群中有4个节点, `kubectl drain` 执行成功,`zk-0 被调度到其它节点。 +由于你的集群中有 4 个节点, `kubectl drain` 执行成功,`zk-0` 被调度到其它节点。 ``` NAME READY STATUS RESTARTS AGE @@ -1396,8 +1450,7 @@ zk-0 1/1 Running 0 1m Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node on which `zk-1` is scheduled. --> - -在第一个终端持续观察 StatefulSet 的 Pods并 drain `zk-1` 调度的节点。 +在第一个终端中持续观察 StatefulSet 的 Pods 并腾空 `zk-1` 调度所在的节点。 ```shell kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned @@ -1413,42 +1466,42 @@ node "kubernetes-node-ixsl" drained The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state. 
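For reference, the `zk-pdb` object queried above behaves as if it were declared roughly as follows (a sketch: the `apiVersion` depends on your cluster version). It is this budget that later blocks draining a third node while one ZooKeeper Pod is already unavailable.

```yaml
# Sketch of a PodDisruptionBudget consistent with the zk-pdb output above:
# at most one Pod selected by app=zk may be voluntarily disrupted at a time.
apiVersion: policy/v1          # policy/v1beta1 on clusters older than v1.21
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: zk
```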
--> - -`zk-1` Pod 不能被调度。由于 `zk` StatefulSet 包含了一个防止 Pods 共存的 PodAntiAffinity 规则,而且只有两个节点可用于调度,这个 Pod 将保持在 Pending 状态。 +`zk-1` Pod 不能被调度,这是因为 `zk` `StatefulSet` 包含了一个防止 Pods +共存的 PodAntiAffinity 规则,而且只有两个节点可用于调度, +这个 Pod 将保持在 Pending 状态。 ```shell kubectl get pods -w -l app=zk ``` ``` -NAME READY STATUS RESTARTS AGE -zk-0 1/1 Running 2 1h -zk-1 1/1 Running 0 1h -zk-2 1/1 Running 0 1h -NAME READY STATUS RESTARTS AGE -zk-0 1/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Pending 0 0s -zk-0 0/1 Pending 0 0s -zk-0 0/1 ContainerCreating 0 0s -zk-0 0/1 Running 0 51s -zk-0 1/1 Running 0 1m -zk-1 1/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Pending 0 0s -zk-1 0/1 Pending 0 0s +NAME READY STATUS RESTARTS AGE +zk-0 1/1 Running 2 1h +zk-1 1/1 Running 0 1h +zk-2 1/1 Running 0 1h +NAME READY STATUS RESTARTS AGE +zk-0 1/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Pending 0 0s +zk-0 0/1 Pending 0 0s +zk-0 0/1 ContainerCreating 0 0s +zk-0 0/1 Running 0 51s +zk-0 1/1 Running 0 1m +zk-1 1/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Pending 0 0s +zk-1 0/1 Pending 0 0s ``` - -继续观察 stateful set 的 Pods 并 drain `zk-2` 调度的节点。 +继续观察 StatefulSet 中的 Pods 并腾空 `zk-2` 调度所在的节点。 ```shell kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1469,11 +1522,10 @@ You cannot drain the third node because evicting `zk-2` would violate `zk-budget Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`. --> - 使用 `CRTL-C` 终止 kubectl。 -你不能 drain 第三个节点,因为删除 `zk-2` 将和 `zk-budget` 冲突。然而这个节点仍然保持 cordoned。 - +你不能腾空第三个节点,因为驱逐 `zk-2` 将和 `zk-budget` 冲突。 +然而这个节点仍然处于隔离状态(Cordoned)。 使用 `zkCli.sh` 从 `zk-0` 取回你的健康检查中输入的数值。 @@ -1484,7 +1536,6 @@ kubectl exec zk-0 zkCli.sh get /hello - 由于遵守了 PodDisruptionBudget,服务仍然可用。 ``` @@ -1506,12 +1557,13 @@ numChildren = 0 - -使用 [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) 来取消对第一个节点的隔离。 +使用 [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) +来取消对第一个节点的隔离。 ```shell kubectl uncordon kubernetes-node-pb41 ``` + ``` node "kubernetes-node-pb41" uncordoned ``` @@ -1519,44 +1571,43 @@ node "kubernetes-node-pb41" uncordoned - `zk-1` 被重新调度到了这个节点。等待 `zk-1` 变为 Running 和 Ready 状态。 ```shell kubectl get pods -w -l app=zk ``` + ``` -NAME READY STATUS RESTARTS AGE -zk-0 1/1 Running 2 1h -zk-1 1/1 Running 0 1h -zk-2 1/1 Running 0 1h -NAME READY STATUS RESTARTS AGE -zk-0 1/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Terminating 2 2h -zk-0 0/1 Pending 0 0s -zk-0 0/1 Pending 0 0s -zk-0 0/1 ContainerCreating 0 0s -zk-0 0/1 Running 0 51s -zk-0 1/1 Running 0 1m -zk-1 1/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Terminating 0 2h -zk-1 0/1 Pending 0 0s -zk-1 0/1 Pending 0 0s -zk-1 0/1 Pending 0 12m -zk-1 0/1 ContainerCreating 0 12m -zk-1 0/1 Running 0 13m -zk-1 1/1 Running 0 13m +NAME READY STATUS RESTARTS AGE +zk-0 1/1 Running 2 1h +zk-1 1/1 Running 0 1h +zk-2 1/1 Running 0 1h +NAME READY STATUS RESTARTS AGE +zk-0 1/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Terminating 2 2h +zk-0 0/1 Pending 0 0s +zk-0 0/1 Pending 0 0s +zk-0 0/1 ContainerCreating 0 0s +zk-0 0/1 Running 0 51s +zk-0 1/1 Running 0 1m 
+zk-1 1/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Terminating 0 2h +zk-1 0/1 Pending 0 0s +zk-1 0/1 Pending 0 0s +zk-1 0/1 Pending 0 12m +zk-1 0/1 ContainerCreating 0 12m +zk-1 0/1 Running 0 13m +zk-1 1/1 Running 0 13m ``` - -尝试 drain `zk-2` 调度的节点。 +尝试腾空 `zk-2` 调度所在的节点。 ```shell kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1565,7 +1616,6 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae - 输出: ``` @@ -1581,10 +1631,9 @@ This time `kubectl drain` succeeds. Uncordon the second node to allow `zk-2` to be rescheduled. --> - 这次 `kubectl drain` 执行成功。 -Uncordon 第二个节点以允许 `zk-2` 被重新调度。 +取消第二个节点的隔离,以允许 `zk-2` 被重新调度。 ```shell kubectl uncordon kubernetes-node-ixsl @@ -1600,19 +1649,20 @@ If drain is used to cordon nodes and evict pods prior to taking the node offline services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled. --> - -你可以同时使用 `kubectl drain` 和 `PodDisruptionBudgets` 来保证你的服务在维护过程中仍然可用。如果使用了 drain 来隔离节点并在节点离线之前排出了 pods,那么表达了 disruption budget 的服务将会遵守该 budget。你应该总是为关键服务分配额外容量,这样它们的 Pods 就能够迅速的重新调度。 +你可以同时使用 `kubectl drain` 和 `PodDisruptionBudgets` 来保证你的服务 +在维护过程中仍然可用。如果使用了腾空操作来隔离节点并在节点离线之前驱逐了 pods, +那么设置了干扰预算的服务将会遵守该预算。 +你应该总是为关键服务分配额外容量,这样它们的 Pods 就能够迅速的重新调度。 ## {{% heading "cleanup" %}} - - * 使用 `kubectl uncordon` 解除你集群中所有节点的隔离。 -* 你需要删除在本教程中使用的 PersistentVolumes 的持久存储媒介。请遵循必须的步骤,基于你的环境、存储配置和准备方法,保证回收所有的存储。 +* 你需要删除在本教程中使用的 PersistentVolumes 的持久存储媒介。 + 请遵循必须的步骤,基于你的环境、存储配置和制备方法,保证回收所有的存储。 + diff --git a/content/zh/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/zh/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md deleted file mode 100644 index fb264dbeba..0000000000 --- a/content/zh/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md +++ /dev/null @@ -1,723 +0,0 @@ ---- -title: "示例: 添加日志和指标到 PHP / Redis Guestbook 案例" -content_type: tutorial -weight: 21 -card: - name: tutorials - weight: 31 - title: "示例: 添加日志和指标到 PHP / Redis Guestbook 案例" ---- - - - - -本教程建立在 -[使用 Redis 部署 PHP Guestbook](/zh/docs/tutorials/stateless-application/guestbook) 教程之上。 -*Beats*,是 Elastic 出品的开源的轻量级日志、指标和网络数据采集器, -将和 Guestbook 一同部署在 Kubernetes 集群中。 -Beats 收集、分析、索引数据到 Elasticsearch,使你可以用 Kibana 查看并分析得到的运营信息。 -本示例由以下内容组成: -* [带 Redis 的 PHP Guestbook 教程](/zh/docs/tutorials/stateless-application/guestbook) - 的一个实例部署 -* Elasticsearch 和 Kibana -* Filebeat -* Metricbeat -* Packetbeat - -## {{% heading "objectives" %}} - - -* 启动用 Redis 部署的 PHP Guestbook。 -* 安装 kube-state-metrics。 -* 创建 Kubernetes secret。 -* 部署 Beats。 -* 用仪表板查看日志和指标。 - -## {{% heading "prerequisites" %}} - - -{{< include "task-tutorial-prereqs.md" >}} -{{< version-check >}} - - -此外,你还需要: - -* 依照教程[使用 Redis 的 PHP Guestbook](/zh/docs/tutorials/stateless-application/guestbook)得到的一套运行中的部署环境。 -* 一套运行中的 Elasticsearch 和 Kibana 部署环境。你可以使用 [Elastic 云中的Elasticsearch 服务](https://cloud.elastic.co)、在工作站或者服务器上运行此[下载文件](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html)、或运行 [Elastic Helm Charts](https://github.com/elastic/helm-charts)。 - - - - -## 启动用 Redis 部署的 PHP Guestbook {#start-up-the-php-guestbook-with-redis} - -本教程建立在 -[使用 Redis 部署 PHP Guestbook](/zh/docs/tutorials/stateless-application/guestbook) 之上。 -如果你已经有一个运行的 Guestbook 应用程序,那就监控它。 -如果还没有,那就按照说明先部署 Guestbook 
,但不要执行**清理**的步骤。 -当 Guestbook 运行起来后,再返回本页。 - - -## 添加一个集群角色绑定 {#add-a-cluster-role-binding} - -创建一个[集群范围的角色绑定](/zh/docs/reference/access-authn-authz/rbac/#rolebinding-和-clusterrolebinding), -以便你可以在集群范围(在 kube-system 中)部署 kube-state-metrics 和 Beats。 - -```shell -kubectl create clusterrolebinding cluster-admin-binding \ - --clusterrole=cluster-admin --user= -``` - - -### 安装 kube-state-metrics {#install-kube-state-metrics} - -Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) -是一个简单的服务,它侦听 Kubernetes API 服务器并生成对象状态的指标。 -Metricbeat 报告这些指标。 -添加 kube-state-metrics 到运行 Guestbook 的 Kubernetes 集群。 - -```shell -git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics -kubectl apply -f kube-state-metrics/examples/standard -``` - - -### 检查 kube-state-metrics 是否正在运行 {#check-to-see-if-kube-state-metrics-is-running} - -```shell -kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics -``` - - -输出: - -``` -NAME READY STATUS RESTARTS AGE -kube-state-metrics-89d656bf8-vdthm 1/1 Running 0 21s -``` - - -## 从 GitHub 克隆 Elastic examples 库 {#clone-the-elastic-examples-github-repo} - -```shell -git clone https://github.com/elastic/examples.git -``` - - -后续命令将引用目录 `examples/beats-k8s-send-anywhere` 中的文件, -所以把目录切换过去。 - -```shell -cd examples/beats-k8s-send-anywhere -``` - - -## 创建 Kubernetes Secret {#create-a-kubernetes-secret} - -Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} -是包含少量敏感数据(类似密码、令牌、秘钥等)的对象。 -这类信息也可以放在 Pod 规格定义或者镜像中; -但放在 Secret 对象中,能更好的控制它的使用方式,也能减少意外泄露的风险。 - -{{< note >}} -这里有两套步骤,一套用于*自管理*的 Elasticsearch 和 Kibana(运行在你的服务器上或使用 Helm Charts), -另一套用于在 Elastic 云服务中 *Managed service* 的 Elasticsearch 服务。 -在本教程中,只需要为 Elasticsearch 和 Kibana 系统创建 secret。 -{{< /note >}} - -{{< tabs name="tab_with_md" >}} -{{% tab name="自管理" %}} - - -### 自管理系统 {#self-managed} - -如果你使用 Elastic 云中的 Elasticsearch 服务,切换到 **Managed service** 标签页。 - -### 设置凭据 {#set-the-credentials} - -当你使用自管理的 Elasticsearch 和 Kibana (对比托管于 Elastic 云中的 Elasticsearch 服务,自管理更有效率), -创建 k8s secret 需要准备四个文件。这些文件是: - -1. `ELASTICSEARCH_HOSTS` -1. `ELASTICSEARCH_PASSWORD` -1. `ELASTICSEARCH_USERNAME` -1. `KIBANA_HOST` - - -为你的 Elasticsearch 集群和 Kibana 主机设置这些信息。这里是一些例子 -(另见[*此配置*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)) - -#### `ELASTICSEARCH_HOSTS` {#elasticsearch-hosts} - - -1. 来自于 Elastic Elasticsearch Helm Chart 的节点组: - - ``` - ["http://elasticsearch-master.default.svc.cluster.local:9200"] - ``` - - -1. Mac 上的单节点的 Elasticsearch,Beats 运行在 Mac 的容器中: - - ``` - ["http://host.docker.internal:9200"] - ``` - - -1. 运行在虚拟机或物理机上的两个 Elasticsearch 节点 - - ``` - ["http://host1.example.com:9200", "http://host2.example.com:9200"] - ``` - - -编辑 `ELASTICSEARCH_HOSTS` -```shell -vi ELASTICSEARCH_HOSTS -``` - -#### `ELASTICSEARCH_PASSWORD` {#elasticsearch-password} - - -只有密码;没有空格、引号、< 和 >: - -``` - -``` - - -编辑 `ELASTICSEARCH_PASSWORD`: - -```shell -vi ELASTICSEARCH_PASSWORD -``` - -#### `ELASTICSEARCH_USERNAME` {#elasticsearch-username} - - -只有用名;没有空格、引号、< 和 >: - - -``` -<为 Elasticsearch 注入的用户名> -``` - - -编辑 `ELASTICSEARCH_USERNAME`: - -```shell -vi ELASTICSEARCH_USERNAME -``` - -#### `KIBANA_HOST` {#kibana-host} - - -1. 从 Elastic Kibana Helm Chart 安装的 Kibana 实例。子域 `default` 指默认的命名空间。如果你把 Helm Chart 指定部署到不同的命名空间,那子域会不同: - - ``` - "kibana-kibana.default.svc.cluster.local:5601" - ``` - - -1. 
Mac 上的 Kibana 实例,Beats 运行于 Mac 的容器: - - ``` - "host.docker.internal:5601" - ``` - - -1. 运行于虚拟机或物理机上的两个 Elasticsearch 节点: - - ``` - "host1.example.com:5601" - ``` - - -编辑 `KIBANA_HOST`: - -```shell -vi KIBANA_HOST -``` - - -### 创建 Kubernetes secret {#create-a-kubernetes-secret} - -在上面编辑完的文件的基础上,本命令在 Kubernetes 系统范围的命名空间(kube-system)创建一个 secret。 - -``` - kubectl create secret generic dynamic-logging \ - --from-file=./ELASTICSEARCH_HOSTS \ - --from-file=./ELASTICSEARCH_PASSWORD \ - --from-file=./ELASTICSEARCH_USERNAME \ - --from-file=./KIBANA_HOST \ - --namespace=kube-system -``` - -{{% /tab %}} -{{% tab name="Managed service" %}} - - -## Managed service {#managed-service} - -本标签页只用于 Elastic 云 的 Elasticsearch 服务,如果你已经为自管理的 Elasticsearch 和 Kibana 创建了secret,请继续[部署 Beats](#deploy-the-beats)并继续。 - -### 设置凭据 {#set-the-credentials} - -在 Elastic 云中的托管 Elasticsearch 服务中,为了创建 k8s secret,你需要先编辑两个文件。它们是: - -1. `ELASTIC_CLOUD_AUTH` -1. `ELASTIC_CLOUD_ID` - - -当你完成部署的时候,Elasticsearch 服务控制台会提供给你一些信息,用这些信息完成设置。 -这里是一些示例: - -#### ELASTIC_CLOUD_ID {#elastic-cloud-id} - -``` -devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ== -``` - -#### ELASTIC_CLOUD_AUTH {#elastic-cloud-auth} - - -只要用户名;没有空格、引号、< 和 >: - -``` -elastic:VFxJJf9Tjwer90wnfTghsn8w -``` - - -### 编辑要求的文件 {#edit-the-required-files} -```shell -vi ELASTIC_CLOUD_ID -vi ELASTIC_CLOUD_AUTH -``` - - -### 创建 Kubernetes secret {#create-a-kubernetes-secret} - -基于上面刚编辑过的文件,在 Kubernetes 系统范围命名空间(kube-system)中,用下面命令创建一个的secret: - - kubectl create secret generic dynamic-logging \ - --from-file=./ELASTIC_CLOUD_ID \ - --from-file=./ELASTIC_CLOUD_AUTH \ - --namespace=kube-system - - {{% /tab %}} -{{< /tabs >}} - - -## 部署 Beats {#deploy-the-beats} - -为每一个 Beat 提供 清单文件。清单文件使用已创建的 secret 接入 Elasticsearch 和 Kibana 服务器。 - -### 关于 Filebeat {#about-filebeat} - -Filebeat 收集日志,日志来源于 Kubernetes 节点以及这些节点上每一个 Pod 中的容器。Filebeat 部署为 -{{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}。 -Filebeat 支持自动发现 Kubernetes 集群中的应用。 -在启动时,Filebeat 扫描存量的容器,并为它们提供适当的配置, -然后开始监听新的启动/中止信号。 - -下面是一个自动发现的配置,它支持 Filebeat 定位并分析来自于 Guestbook 应用部署的 Redis 容器的日志文件。 -下面的配置片段来自文件 `filebeat-kubernetes.yaml`: - -```yaml -- condition.contains: - kubernetes.labels.app: redis - config: - - module: redis - log: - input: - type: docker - containers.ids: - - ${data.kubernetes.container.id} - slowlog: - enabled: true - var.hosts: ["${data.host}:${data.port}"] -``` - - - -这样配置 Filebeat,当探测到容器拥有 `app` 标签,且值为 `redis`,那就启用 Filebeat 的 `redis` 模块。 -`redis` 模块可以根据 docker 的输入类型(在 Kubernetes 节点上读取和 Redis 容器的标准输出流关联的文件) ,从容器收集 `log` 流。 -另外,此模块还可以使用容器元数据中提供的配置信息,连到 Pod 适当的主机和端口,收集 Redis 的 `slowlog` 。 - -### 部署 Filebeat {#deploy-filebeat} - -```shell -kubectl create -f filebeat-kubernetes.yaml -``` - - -#### 验证 {#verify} - -```shell -kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic -``` - - -### 关于 Metricbeat {#about-metricbeat} - -Metricbeat 自动发现的配置方式与 Filebeat 完全相同。 -这里是针对 Redis 容器的 Metricbeat 自动发现配置。 -此配置片段来自于文件 `metricbeat-kubernetes.yaml`: - -```yaml -- condition.equals: - kubernetes.labels.tier: backend - config: - - module: redis - metricsets: ["info", "keyspace"] - period: 10s - - # Redis hosts - hosts: ["${data.host}:${data.port}"] -``` - -配置 Metricbeat,在探测到标签 `tier` 的值等于 `backend` 时,应用 Metricbeat 模块 `redis`。 -`redis` 模块可以获取容器元数据,连接到 Pod 适当的主机和端口,从 Pod 中收集指标 `info` 和 `keyspace`。 - -### 部署 Metricbeat {#deploy-metricbeat} - -```shell -kubectl create -f metricbeat-kubernetes.yaml -``` - - -#### 验证 {#verify2} - 
-```shell -kubectl get pods -n kube-system -l k8s-app=metricbeat -``` - - -### 关于 Packetbeat {#about-packetbeat} - -Packetbeat 的配置方式不同于 Filebeat 和 Metricbeat。 -相比于匹配容器标签的模式,它的配置基于相关协议和端口号。 -下面展示的是端口号的一个子集: - -{{< note >}} -如果你的服务运行在非标准的端口上,那就打开文件 `filebeat.yaml`,把这个端口号添加到合适的类型中,然后删除/启动 Packetbeat 的守护进程。 -{{< /note >}} - -```yaml -packetbeat.interfaces.device: any - -packetbeat.protocols: -- type: dns - ports: [53] - include_authorities: true - include_additionals: true - -- type: http - ports: [80, 8000, 8080, 9200] - -- type: mysql - ports: [3306] - -- type: redis - ports: [6379] - -packetbeat.flows: - timeout: 30s - period: 10s -``` - - -### 部署 Packetbeat {#deploy-packetbeat} - -```shell -kubectl create -f packetbeat-kubernetes.yaml -``` - - -#### 验证 {#verify3} - -```shell -kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic -``` - - -## 在 kibana 中浏览 {#view-in-kibana} - -在浏览器中打开 kibana,再打开 **Dashboard**。 -在搜索栏中键入 Kubernetes,再点击 Metricbeat 的 Kubernetes Dashboard。 -此 Dashboard 展示节点状态、应用部署等。 - -在 Dashboard 页面,搜索 Packetbeat,并浏览 Packetbeat 概览信息。 - -同样地,浏览 Apache 和 Redis 的 Dashboard。 -可以看到日志和指标各自独立 Dashboard。 -Apache Metricbeat Dashboard 是空的。 -找到 Apache Filebeat Dashboard,拉到最下面,查看 Apache 的错误日志。 -日志会揭示出没有 Apache 指标的原因。 - -要让 metricbeat 得到 Apache 的指标,需要添加一个包含模块状态配置文件的 ConfigMap,并重新部署 Guestbook。 - -## 缩放部署规模,查看新 Pod 已被监控 {#scale-your-deployments-and-see-new-pods-being-monitored} - -列出现有的 deployments: - -```shell -kubectl get deployments -``` - - -输出: - -``` -NAME READY UP-TO-DATE AVAILABLE AGE -frontend 3/3 3 3 3h27m -redis-master 1/1 1 1 3h27m -redis-slave 2/2 2 2 3h27m -``` - - -缩放前端到两个 Pod: - -```shell -kubectl scale --replicas=2 deployment/frontend -``` - - -输出: - -``` -deployment.extensions/frontend scaled -``` - - -将前端应用缩放回三个 Pod: - -```shell -kubectl scale --replicas=3 deployment/frontend -``` - - -## 在 Kibana 中查看变化 {#view-the-chagnes-in-kibana} - -参见屏幕截图,添加指定的过滤器,然后将列添加到视图。 -你可以看到,ScalingReplicaSet 被做了标记,从标记的点开始,到消息列表的顶部,展示了拉取的镜像、挂载的卷、启动的 Pod 等。 -![Kibana 发现](https://raw.githubusercontent.com/elastic/examples/master/beats-k8s-send-anywhere/scaling-up.png) - -## {{% heading "cleanup" %}} - - -删除 Deployments 和 Services, 删除运行的 Pod。 -用标签功能在一个命令中删除多个资源。 - -1. 执行下列命令,删除所有的 Pod、Deployment 和 Services。 - - ```shell - kubectl delete deployment -l app=redis - kubectl delete service -l app=redis - kubectl delete deployment -l app=guestbook - kubectl delete service -l app=guestbook - kubectl delete -f filebeat-kubernetes.yaml - kubectl delete -f metricbeat-kubernetes.yaml - kubectl delete -f packetbeat-kubernetes.yaml - kubectl delete secret dynamic-logging -n kube-system - ``` - -2. 查询 Pod,以核实没有 Pod 还在运行: - - ```shell - kubectl get pods - ``` - - - 响应应该是这样: - - ``` - No resources found. 
- ``` - - -## {{% heading "whatsnext" %}} - - -* 了解[监控资源的工具](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/) -* 进一步阅读[日志体系架构](/zh/docs/concepts/cluster-administration/logging/) -* 进一步阅读[应用内省和调试](/zh/docs/tasks/debug-application-cluster/) -* 进一步阅读[应用程序的故障排除](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/) diff --git a/content/zh/docs/tutorials/stateless-application/guestbook.md b/content/zh/docs/tutorials/stateless-application/guestbook.md index 0343c58f44..b7ef978490 100644 --- a/content/zh/docs/tutorials/stateless-application/guestbook.md +++ b/content/zh/docs/tutorials/stateless-application/guestbook.md @@ -1,15 +1,16 @@ --- -title: "示例:使用 Redis 部署 PHP 留言板应用程序" +title: "示例:使用 MongoDB 部署 PHP 留言板应用程序" content_type: tutorial weight: 20 card: name: tutorials weight: 30 - title: "无状态应用示例:基于 Redis 的 PHP Guestbook" + title: "无状态应用示例:基于 MongoDB 的 PHP Guestbook" +min-kubernetes-server-version: v1.14 --- 本教程向您展示如何使用 Kubernetes 和 [Docker](https://www.docker.com/) 构建和部署 -一个简单的多层 web 应用程序。本例由以下组件组成: +一个简单的_(非面向生产)的_多层 web 应用程序。本例由以下组件组成: -* 单实例 [Redis](https://redis.io/) 主节点保存留言板条目 -* 多个[从 Redis](https://redis.io/topics/replication) 节点用来读取数据 +* 单实例 [MongoDB](https://www.mongodb.com/) 以保存留言板条目 * 多个 web 前端实例 @@ -45,15 +45,13 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica -* 启动 Redis 主节点。 -* 启动 Redis 从节点。 +* 启动 Mongo 数据库。 * 启动留言板前端。 * 公开并查看前端服务。 * 清理。 @@ -72,44 +70,50 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica -## 启动 Redis 主节点 +## 启动 Mongo 数据库 -留言板应用程序使用 Redis 存储数据。它将数据写入一个 Redis 主实例,并从多个 Redis 读取数据。 +留言板应用程序使用 MongoDB 存储数据。 -### 创建 Redis 主节点的 Deployment +### 创建 Mongo 的 Deployment -下面包含的清单文件指定了一个 Deployment 控制器,该控制器运行一个 Redis 主节点 Pod 副本。 +下面包含的清单文件指定了一个 Deployment 控制器,该控制器运行一个 MongoDB Pod 副本。 -{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}} +{{< codenew file="application/guestbook/mongo-deployment.yaml" >}} 1. 在下载清单文件的目录中启动终端窗口。 -2. 从 `redis-master-deployment.yaml` 文件中应用 Redis 主 Deployment: +2. 从 `mongo-deployment.yaml` 文件中应用 MongoDB Deployment: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml ``` - -3. 查询 Pod 列表以验证 Redis 主节点 Pod 是否正在运行: + + + +3. 查询 Pod 列表以验证 MongoDB Pod 是否正在运行: ```shell kubectl get pods @@ -122,53 +126,49 @@ The manifest file, included below, specifies a Deployment controller that runs a ```shell NAME READY STATUS RESTARTS AGE - redis-master-1068406935-3lswp 1/1 Running 0 28s + mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s ``` -4. 运行以下命令查看 Redis 主节点 Pod 中的日志: +4. 运行以下命令查看 MongoDB Deployment 中的日志: ```shell - kubectl logs -f POD-NAME + kubectl logs -f deployment/mongo ``` -{{< note >}} - -将 POD-NAME 替换为您的 Pod 名称。 - -{{< /note >}} - - -### 创建 Redis 主节点的服务 +### 创建 MongoDB 服务 -留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](/zh/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 +留言板应用程序需要往 MongoDB 中写数据。因此,需要创建 [Service](/zh/docs/concepts/services-networking/service/) 来代理 MongoDB Pod 的流量。Service 定义了访问 Pod 的策略。 -{{< codenew file="application/guestbook/redis-master-service.yaml" >}} +{{< codenew file="application/guestbook/mongo-service.yaml" >}} -1. 使用下面的 `redis-master-service.yaml` 文件创建 Redis 主节点的服务: +1. 
使用下面的 `mongo-service.yaml` 文件创建 MongoDB 的服务: ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml ``` - -2. 查询服务列表验证 Redis 主节点服务是否正在运行: + + +2. 查询服务列表验证 MongoDB 服务是否正在运行: ```shell kubectl get service @@ -182,134 +182,26 @@ The guestbook application needs to communicate to the Redis master to write its ```shell NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 443/TCP 1m - redis-master ClusterIP 10.0.0.151 6379/TCP 8s + mongo ClusterIP 10.0.0.151 6379/TCP 8s ``` + {{< note >}} - - -这个清单文件创建了一个名为 `Redis-master` 的 Service,其中包含一组与前面定义的标签匹配的标签,因此服务将网络流量路由到 Redis 主节点 Pod 上。 - +这个清单文件创建了一个名为 `mongo` 的 Service,其中包含一组与前面定义的标签匹配的标签,因此服务将网络流量路由到 MongoDB Pod 上。 {{< /note >}} - - -## 启动 Redis 从节点 - - -尽管 Redis 主节点是一个单独的 pod,但是您可以通过添加 Redis 从节点的方式来使其高可用性,以满足流量需求。 - - - -### 创建 Redis 从节点 Deployment - - -Deployments 根据清单文件中设置的配置进行伸缩。在这种情况下,Deployment 对象指定两个副本。 - - -如果没有任何副本正在运行,则此 Deployment 将启动容器集群上的两个副本。相反, -如果有两个以上的副本在运行,那么它的规模就会缩小,直到运行两个副本为止。 - -{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}} - - -1. 从 `redis-slave-deployment.yaml` 文件中应用 Redis Slave Deployment: - - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml - ``` - - -2. 查询 Pod 列表以验证 Redis Slave Pod 正在运行: - - ```shell - kubectl get pods - ``` - - - 响应应该与此类似: - - ```shell - NAME READY STATUS RESTARTS AGE - redis-master-1068406935-3lswp 1/1 Running 0 1m - redis-slave-2005841000-fpvqc 0/1 ContainerCreating 0 6s - redis-slave-2005841000-phfv9 0/1 ContainerCreating 0 6s - ``` - - - -### 创建 Redis 从节点的 Service - - -留言板应用程序需要从 Redis 从节点中读取数据。 -为了便于 Redis 从节点可发现, -您需要设置一个 Service。Service 为一组 Pod 提供负载均衡。 - -{{< codenew file="application/guestbook/redis-slave-service.yaml" >}} - - -1. 从以下 `redis-slave-service.yaml` 文件应用 Redis Slave 服务: - - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml - ``` - - -2. 查询服务列表以验证 Redis 在服务是否正在运行: - - ```shell - kubectl get services - ``` - - - 响应应该与此类似: - - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - kubernetes ClusterIP 10.0.0.1 443/TCP 2m - redis-master ClusterIP 10.0.0.151 6379/TCP 1m - redis-slave ClusterIP 10.0.0.223 6379/TCP 6s - ``` - - ## 设置并公开留言板前端 - + 留言板应用程序有一个 web 前端,服务于用 PHP 编写的 HTTP 请求。 -它被配置为连接到写请求的 `redis-master` 服务和读请求的 `redis-slave` 服务。 +它被配置为连接到 `mongo` 服务以存储留言版条目。 + 2. 
查询 Pod 列表,验证三个前端副本是否正在运行: ```shell - kubectl get pods -l app=guestbook -l tier=frontend + kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend ``` -应用的 `redis-slave` 和 `redis-master` 服务只能在容器集群中访问,因为服务的默认类型是 -[ClusterIP](/zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 +应用的 `mongo` 服务只能在 Kubernetes 集群中访问,因为服务的默认类型是 +[ClusterIP](/zh/docs/concepts/services-networking/service/#publishing-services---service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 -如果您希望客人能够访问您的留言板,您必须将前端服务配置为外部可见的,以便客户机可以从容器集群之外请求服务。Minikube 只能通过 `NodePort` 公开服务。 +如果您希望访客能够访问您的留言板,您必须将前端服务配置为外部可见的,以便客户端可以从 Kubernetes 集群之外请求服务。然而即便使用了 `ClusterIP` Kubernets 用户仍可以通过 `kubectl port-forwart` 访问服务。 + {{< note >}} - - 一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine,支持外部负载均衡器。如果您的云提供商支持负载均衡器,并且您希望使用它, -只需删除或注释掉 `type: NodePort`,并取消注释 `type: LoadBalancer` 即可。 - +只需取消注释 `type: LoadBalancer` 即可。 {{< /note >}} {{< codenew file="application/guestbook/frontend-service.yaml" >}} @@ -387,6 +282,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml ``` + + @@ -403,30 +303,24 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend NodePort 10.0.0.112 80:31323/TCP 6s + frontend ClusterIP 10.0.0.112 80/TCP 6s kubernetes ClusterIP 10.0.0.1 443/TCP 4m - redis-master ClusterIP 10.0.0.151 6379/TCP 2m - redis-slave ClusterIP 10.0.0.223 6379/TCP 1m + mongo ClusterIP 10.0.0.151 6379/TCP 2m ``` -### 通过 `NodePort` 查看前端服务 +### 通过 `kubectl port-forward` 查看前端服务 -如果您将此应用程序部署到 Minikube 或本地集群,您需要找到 IP 地址来查看您的留言板。 - - -1. 运行以下命令获取前端服务的 IP 地址。 +1. 运行以下命令将本机的 `8080` 端口转发到服务的 `80` 端口。 ```shell - minikube service frontend --url + kubectl port-forward svc/frontend 8080:80 ``` -2. 复制 IP 地址,然后在浏览器中加载页面以查看留言板。 +2. 在浏览器中加载 [http://localhost:8080](http://localhost:8080) 页面以查看留言板。 -5. 运行以下命令以删除所有 Pod,Deployments 和 Services。 +1. 运行以下命令以删除所有 Pod,Deployments 和 Services。 ```shell - kubectl delete deployment -l app=redis - kubectl delete service -l app=redis - kubectl delete deployment -l app=guestbook - kubectl delete service -l app=guestbook + kubectl delete deployment -l app.kubernetes.io/name=mongo + kubectl delete service -l app.kubernetes.io/name=mongo + kubectl delete deployment -l app.kubernetes.io/name=guestbook + kubectl delete service -l app.kubernetes.io/name=guestbook ``` -6. 查询 Pod 列表,确认没有 Pod 在运行: +2. 查询 Pod 列表,确认没有 Pod 在运行: ```shell kubectl get pods @@ -616,15 +505,12 @@ Deleting the Deployments and Services also deletes any running Pods. 
Use labels -* 为 Guestbook 应用添加 - [ELK 日志与监控](/zh/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) * 完成 [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) 交互式教程 * 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) * 阅读更多关于[连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/examples/access/certificate-signing-request/clusterrole-sign.yaml b/content/zh/examples/access/certificate-signing-request/clusterrole-sign.yaml index 29bbc6a9cd..6d1a2f7882 100644 --- a/content/zh/examples/access/certificate-signing-request/clusterrole-sign.yaml +++ b/content/zh/examples/access/certificate-signing-request/clusterrole-sign.yaml @@ -21,7 +21,7 @@ rules: - certificates.k8s.io resources: - signers - resourceName: + resourceNames: - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain verbs: - sign diff --git a/content/zh/examples/application/guestbook/frontend-deployment.yaml b/content/zh/examples/application/guestbook/frontend-deployment.yaml index 23d64be644..613c654aa9 100644 --- a/content/zh/examples/application/guestbook/frontend-deployment.yaml +++ b/content/zh/examples/application/guestbook/frontend-deployment.yaml @@ -3,22 +3,24 @@ kind: Deployment metadata: name: frontend labels: - app: guestbook + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: selector: matchLabels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend replicas: 3 template: metadata: labels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: containers: - - name: php-redis - image: gcr.io/google-samples/gb-frontend:v4 + - name: guestbook + image: paulczar/gb-frontend:v5 + # image: gcr.io/google-samples/gb-frontend:v4 resources: requests: cpu: 100m @@ -26,13 +28,5 @@ spec: env: - name: GET_HOSTS_FROM value: dns - # Using `GET_HOSTS_FROM=dns` requires your cluster to - # provide a dns service. As of Kubernetes 1.3, DNS is a built-in - # service launched automatically. However, if the cluster you are using - # does not have a built-in DNS service, you can instead - # access an environment variable to find the master - # service's host. To do so, comment out the 'value: dns' line above, and - # uncomment the line below: - # value: env ports: - containerPort: 80 diff --git a/content/zh/examples/application/guestbook/frontend-service.yaml b/content/zh/examples/application/guestbook/frontend-service.yaml index 6f283f347b..34ad3771d7 100644 --- a/content/zh/examples/application/guestbook/frontend-service.yaml +++ b/content/zh/examples/application/guestbook/frontend-service.yaml @@ -3,16 +3,14 @@ kind: Service metadata: name: frontend labels: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend spec: - # comment or delete the following line if you want to use a LoadBalancer - type: NodePort # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: - app: guestbook - tier: frontend + app.kubernetes.io/name: guestbook + app.kubernetes.io/component: frontend diff --git a/content/zh/examples/application/guestbook/mongo-deployment.yaml b/content/zh/examples/application/guestbook/mongo-deployment.yaml new file mode 100644 index 0000000000..04908ce25b --- /dev/null +++ b/content/zh/examples/application/guestbook/mongo-deployment.yaml @@ -0,0 +1,31 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: mongo + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend +spec: + selector: + matchLabels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend + replicas: 1 + template: + metadata: + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend + spec: + containers: + - name: mongo + image: mongo:4.2 + args: + - --bind_ip + - 0.0.0.0 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 27017 diff --git a/content/zh/examples/application/guestbook/mongo-service.yaml b/content/zh/examples/application/guestbook/mongo-service.yaml new file mode 100644 index 0000000000..b9cef607bc --- /dev/null +++ b/content/zh/examples/application/guestbook/mongo-service.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Service +metadata: + name: mongo + labels: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend +spec: + ports: + - port: 27017 + targetPort: 27017 + selector: + app.kubernetes.io/name: mongo + app.kubernetes.io/component: backend diff --git a/content/zh/examples/application/guestbook/redis-master-deployment.yaml b/content/zh/examples/application/guestbook/redis-master-deployment.yaml deleted file mode 100644 index 478216d1ac..0000000000 --- a/content/zh/examples/application/guestbook/redis-master-deployment.yaml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-master - labels: - app: redis -spec: - selector: - matchLabels: - app: redis - role: master - tier: backend - replicas: 1 - template: - metadata: - labels: - app: redis - role: master - tier: backend - spec: - containers: - - name: master - image: k8s.gcr.io/redis:e2e # or just image: redis - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6379 diff --git a/content/zh/examples/application/guestbook/redis-master-service.yaml b/content/zh/examples/application/guestbook/redis-master-service.yaml deleted file mode 100644 index a484014f1f..0000000000 --- a/content/zh/examples/application/guestbook/redis-master-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: redis-master - labels: - app: redis - role: master - tier: backend -spec: - ports: - - port: 6379 - targetPort: 6379 - selector: - app: redis - role: master - tier: backend diff --git a/content/zh/examples/application/guestbook/redis-slave-deployment.yaml b/content/zh/examples/application/guestbook/redis-slave-deployment.yaml deleted file mode 100644 index 1a7b04386a..0000000000 --- a/content/zh/examples/application/guestbook/redis-slave-deployment.yaml +++ /dev/null @@ -1,40 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-slave - labels: - app: redis -spec: - selector: - matchLabels: - app: redis - role: slave - tier: backend - replicas: 2 - template: - metadata: - labels: - app: redis - role: slave - tier: backend - spec: - containers: - - name: slave - image: gcr.io/google_samples/gb-redisslave:v3 - resources: - requests: - cpu: 100m - 
memory: 100Mi - env: - - name: GET_HOSTS_FROM - value: dns - # Using `GET_HOSTS_FROM=dns` requires your cluster to - # provide a dns service. As of Kubernetes 1.3, DNS is a built-in - # service launched automatically. However, if the cluster you are using - # does not have a built-in DNS service, you can instead - # access an environment variable to find the master - # service's host. To do so, comment out the 'value: dns' line above, and - # uncomment the line below: - # value: env - ports: - - containerPort: 6379 diff --git a/content/zh/examples/application/guestbook/redis-slave-service.yaml b/content/zh/examples/application/guestbook/redis-slave-service.yaml deleted file mode 100644 index 238fd63fb6..0000000000 --- a/content/zh/examples/application/guestbook/redis-slave-service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: redis-slave - labels: - app: redis - role: slave - tier: backend -spec: - ports: - - port: 6379 - selector: - app: redis - role: slave - tier: backend diff --git a/content/zh/examples/application/job/cronjob.yaml b/content/zh/examples/application/job/cronjob.yaml index 3ca130289e..816d682f28 100644 --- a/content/zh/examples/application/job/cronjob.yaml +++ b/content/zh/examples/application/job/cronjob.yaml @@ -12,7 +12,7 @@ spec: - name: hello image: busybox imagePullPolicy: IfNotPresent - args: + command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster diff --git a/layouts/shortcodes/cncf-landscape.html b/layouts/shortcodes/cncf-landscape.html index a97d4f9f8a..22b05d3ff0 100644 --- a/layouts/shortcodes/cncf-landscape.html +++ b/layouts/shortcodes/cncf-landscape.html @@ -15,7 +15,7 @@ function updateLandscapeSource(button,shouldUpdateFragment) { } else { var landscapeElements = document.querySelectorAll("#landscape"); let categories=button.dataset.landscapeTypes; - let link = "https://landscape.cncf.io/category="+encodeURIComponent(categories)+"&format=card-mode&grouping=category&embed=yes"; + let link = "https://landscape.cncf.io/card-mode?category="+encodeURIComponent(categories)+"&grouping=category&embed=yes"; landscapeElements[0].src = link; } } @@ -58,9 +58,9 @@ document.addEventListener("DOMContentLoaded", function () { {{- end -}}
    {{ if ( .Get "category" ) }} - + {{ else }} - + {{ end }}
    diff --git a/scripts/README.md b/scripts/README.md index e90d65861d..4f7ee7eec3 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -7,6 +7,10 @@ | `test_examples.sh` | This script tests whether a change affects example files bundled in the website. | | `check-headers-file.sh` | This script checks the headers if you are in a production environment. | | `diff_l10n_branches.py` | This script generates a report of outdated contents in `content/` directory by comparing two l10n team milestone branches. | +| `hash-files.sh` | This script emits a hash for the files listed in $@ | +| `linkchecker.py` | This is a link checker for the Kubernetes documentation website. | +| `lsync.sh` | This script checks if the English version of a page has changed since a localized page has been committed. | +| `replace-capture.sh` | Set K8S_WEBSITE in your env to your docs website root, or rely on this script to determine it automatically | @@ -88,3 +92,63 @@ Options: --src-lang TEXT Source language --help Show this message and exit. ``` + +## hash-files.sh + +This script emits a hash for the files listed in $@. + + $ ./scripts/hash-files.sh + +## linkchecker.py + +This is a link checker for the Kubernetes documentation website. +- We cover the following cases for the language you provide via `-l`, which + defaults to 'en'. +- If the language specified is not English (`en`), we check if you are + actually using the localized links. For example, if you specify `zh` as + the language, and for link target `/docs/foo/bar`, we check if the English + version exists AND if the Chinese version exists as well. A checking record + is produced if the link can use the localized version. + +``` + +Usage: linkchecker.py -h + +Cases handled: + +- [foo](#bar) : ignored currently ++ [foo](http://bar) : insecure links to external site ++ [foo](https://k8s.io/website/...) : hardcoded site domain name + ++ [foo](/<lang>/docs/bar/...) : where <lang> is not 'en' + + /<lang>/docs/bar : contains shortcode, so ignore, or + + /<lang>/docs/bar : is an image link (ignore currently), or + + /<lang>/docs/bar : points to a shared (non-localized) page, or + + /<lang>/docs/bar.md : exists for current lang, or + + /<lang>/docs/bar/_index.md : exists for current lang, or + + /<lang>/docs/bar/ : is a redirect entry, or + + /<lang>/docs/bar : is something we don't understand, then ERR + ++ [foo](/docs/bar/...) + + /docs/bar : contains shortcode, so ignore, or + + /docs/bar : is an image link (ignore currently), or + + /docs/bar : points to a shared (non-localized) page, or + + /docs/bar.md : exists for current lang, or + + /docs/bar/_index.md : exists for current lang, or + + /docs/bar : is a redirect entry, or + + /docs/bar : is something we don't understand + +``` +## lsync.sh + +This script checks if the English version of some localized content has changed +since a localized version has been committed. 
+ +The following example checks a single file: + + ./scripts/lsync.sh content/zh/docs/concepts/_index.md + +The following command checks a subdirectory: + + ./scripts/lsync.sh content/zh/docs/concepts/ + diff --git a/static/_redirects b/static/_redirects index fd7eba2713..8ff4ed3722 100644 --- a/static/_redirects +++ b/static/_redirects @@ -275,7 +275,8 @@ /docs/tasks/job/work-queue-1/ /docs/concepts/workloads/controllers/job/ 301 /docs/tasks/setup-konnectivity/setup-konnectivity/ /docs/tasks/extend-kubernetes/setup-konnectivity/ 301 /docs/tasks/kubectl/get-shell-running-container/ /docs/tasks/debug-application-cluster/get-shell-running-container/ 301 -/docs/tasks/kubectl/install/ /docs/tasks/tools/install-kubectl/ 301 +/docs/tasks/kubectl/install/ /docs/tasks/tools/ 301 +/docs/tasks/tools/install-kubectl/ /docs/tasks/tools/ 301 /docs/tasks/kubectl/list-all-running-container-images/ /docs/tasks/access-application-cluster/list-all-running-container-images/ 301 /docs/tasks/manage-stateful-set/debugging-a-statefulset/ /docs/tasks/debug-application-cluster/debug-stateful-set/ 301 /docs/tasks/manage-stateful-set/delete-pods/ /docs/tasks/run-application/delete-stateful-set/ 301 @@ -284,6 +285,8 @@ /docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/ /docs/tasks/run-application/upgrade-pet-set-to-stateful-set/ 301 /docs/tasks/run-application/update-api-object-kubectl-patch/ /docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/ 301 /docs/tasks/stateful-sets/deleting-pods/ /docs/tasks/run-application/force-delete-stateful-set-pod/ 301 +/ja/docs/tasks/tools/install-minikube/ https://minikube.sigs.k8s.io/docs/start/ 302 +/id/docs/tasks/tools/install-minikube/ https://minikube.sigs.k8s.io/docs/start/ 302 /docs/tasks/troubleshoot/debug-init-containers/ /docs/tasks/debug-application-cluster/debug-init-containers/ 301 /docs/tasks/web-ui-dashboard/ /docs/tasks/access-application-cluster/web-ui-dashboard/ 301 diff --git a/themes/docsy b/themes/docsy index 0f6717470e..a7dc77412c 160000 --- a/themes/docsy +++ b/themes/docsy @@ -1 +1 @@ -Subproject commit 0f6717470e74b274e9f554a5ebf2465f2123d6a9 +Subproject commit a7dc77412c533fefc71730927350677fed35f576
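For reference, a minimal smoke test of the MongoDB-backed guestbook introduced by the changes above might look like the sketch below. This is illustrative only: it assumes the updated manifests have already been applied to a running cluster, and local port `8080` is an arbitrary choice.

```shell
# Verify the MongoDB backend and the guestbook frontend are running,
# selecting Pods by the app.kubernetes.io/* labels used in the new manifests.
kubectl get pods -l app.kubernetes.io/name=mongo,app.kubernetes.io/component=backend
kubectl get pods -l app.kubernetes.io/name=guestbook,app.kubernetes.io/component=frontend

# The frontend Service is now ClusterIP-only, so forward a local port to it
# and browse http://localhost:8080 to reach the guestbook.
kubectl port-forward svc/frontend 8080:80
```

The label selectors match the `app.kubernetes.io/*` labels in the new frontend and mongo manifests, and `kubectl port-forward` replaces the earlier Minikube `NodePort` flow.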