Merge branch 'master' into master

Commit 2ae6da3c19
Sahadat Hossain, 2021-02-12 17:04:39 +06:00, committed by GitHub
254 changed files with 7361 additions and 3987 deletions

View File

@ -92,15 +92,21 @@ aliases:
- daminisatya
- mittalyashu
sig-docs-id-owners: # Admins for Indonesian content
- girikuncoro
- irvifa
sig-docs-id-reviews: # PR reviews for Indonesian content
- ariscahyadi
- danninov
- girikuncoro
- habibrosyad
- irvifa
- wahyuoi
- phanama
- wahyuoi
sig-docs-id-reviews: # PR reviews for Indonesian content
- ariscahyadi
- danninov
- girikuncoro
- habibrosyad
- irvifa
- phanama
- wahyuoi
sig-docs-it-owners: # Admins for Italian content
- fabriziopandini
- Fale

View File

@ -1,76 +1,184 @@
# The Kubernetes documentation
[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository holds all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're very glad that you want to contribute!
Welcome! This repository contains all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're very glad that you want to contribute!
## Contributing to the docs
# Using this repository
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make the changes you want in your fork and, when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
Once your **pull request** is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.** Also note that you may end up having more than one Kubernetes reviewer providing you feedback, or you may end up getting feedback from a Kubernetes reviewer that is different from the one originally assigned to provide it. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
## Prerequisites
To use this repository, you need to install:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, for example [Docker](https://www.docker.com/).
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
```
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
## Running the website using a container
To build the site in a container, run the following to build the container image and run it:
```
make container-image
make container-serve
```
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file.
To build and test the website locally, run:
```bash
# install dependencies
npm ci
make serve
```
This will start Hugo locally on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.
To update the reference pages for a new Kubernetes release, follow these steps:
Note: replace "v1.20" in the following example with the version to be updated.
1. Pull in the `kubernetes-resources-reference` submodule:
```
git submodule update --init --recursive --depth 1
```
2. Create the new API revision in the submodule, and add the Swagger specification:
```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
```
3. Copy the table of contents and fields configuration for the new release from a previous one:
```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
```
4. Adjust the `toc.yaml` and `fields.yaml` files to reflect the differences between the two releases.
5. Next, build the pages:
```
make api-reference
```
You can review the result locally by building and serving the site from the container image:
```
make container-image
make container-serve
```
Open your browser to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference pages.
6. When all changes are reflected in the configuration files `toc.yaml` and `fields.yaml`, create a pull request with the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
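For instance, a quick check looks like this (the version string below is only an illustration; yours will differ):
```bash
# confirm that the installed Hugo binary is the extended build
hugo version | grep -i extended

# example output (version and platform will vary):
# hugo v0.75.1/extended darwin/amd64 BuildDate: unknown
```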
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and get the following error:
```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
Check the current limit for open files:
`launchctl limit maxfiles`
Then run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
```shell
#!/bin/sh
# These are the original gist links, now pointing to my gists.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This works for macOS Catalina as well as macOS Mojave.
# Community, discussion, contribution, and support
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](http://kubernetes.io/community/).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs) ([get an invite for this Slack](https://slack.k8s.io/))
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make the changes you want in your fork and, when you are ready to send those changes to us, go to your fork and create a new **pull request** to let us know about it.
Once your **pull request** is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**
Also note that you may end up having more than one Kubernetes reviewer providing you feedback, or you may end up getting feedback from a Kubernetes reviewer that is different from the one originally assigned to provide it.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see:
* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Staging your documentation changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-templates/)
* [Contribute to the Kubernetes documentation](https://kubernetes.io/docs/contribute/)
* [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
You can reach the maintainers of the Portuguese localization at:
* Felipe ([GitHub - @femrtnz](https://github.com/femrtnz))
* [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-pt)
## Running the website locally using Docker
The recommended way to run the Kubernetes website locally is to run a specialized [Docker](https://docker.com) image that includes the [Hugo](https://gohugo.io) static site generator.
> If you are running on Windows, you'll need a few more tools, which you can install with [Chocolatey](https://chocolatey.org): `choco install make`
> If you'd prefer to run the website locally without Docker, see [Running the website locally using Hugo](#executando-o-site-localmente-usando-o-hugo) below.
If you have Docker [up and running](https://www.docker.com/get-started), build the `kubernetes-hugo` Docker image locally:
```bash
make container-image
```
Once the image has been built, you can run the website locally:
```bash
make container-serve
```
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file.
To run the website locally when you have Hugo installed:
```bash
make serve
```
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Community, discussion, contribution, and support
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
You can reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
### Code of conduct
# Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
## Thank you!
# Thank you!
Kubernetes relies on community participation, and we really appreciate your contributions to our website and our documentation!
Kubernetes thrives on community participation, and we really appreciate your contributions to our website and our documentation!

View File

@ -100,6 +100,8 @@ make container-image
make container-serve
```
In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.
6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
## Troubleshooting

View File

@ -810,6 +810,13 @@ section#cncf {
}
}
.td-search {
header > .header-filler {
height: $hero-padding-top;
background-color: black;
}
}
// Docs specific
#editPageButton {

View File

@ -66,6 +66,7 @@ Vagrant.configure("2") do |config|
end
end
end
end
```
### Step 2: Create an Ansible playbook for Kubernetes master.

View File

@ -19,6 +19,7 @@ cid: community
<div class="community__navbar">
<a href="#values">Community Values</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="#conduct">Code of conduct </a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="#videos">Videos</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="#discuss">Discussions</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
@ -41,10 +42,28 @@ cid: community
<img src="/images/community/kubernetes-community-final-05.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-right:0% important" class="desktop">
</div>
<img src="/images/community/kubernetes-community-04-mobile.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-bottom:3%" class="mobile">
<a name="conduct"></a>
<a name="values"></a>
</div>
<div><a name="values"></a></div>
<div class="conduct">
<div class="conducttext">
<br class="mobile"><br class="mobile">
<br class="tablet"><br class="tablet">
<div class="conducttextnobutton" style="margin-bottom:2%"><h1>Community Values</h1>
The Kubernetes Community values are the keystone to the ongoing success of the project.<br>
These principles guide every aspect of the Kubernetes project.
<br>
<a href="/community/values/">
<br class="mobile"><br class="mobile">
<span class="fullbutton">
READ MORE
</span>
</a>
</div><a name="conduct"></a>
</div>
</div>
<div class="conduct">

View File

@ -0,0 +1,28 @@
<!-- Do not edit this file directly. Get the latest from
https://git.k8s.io/community/values.md -->
# Kubernetes Community Values
Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values which have evolved over the last many years in our community pushing our project and peers toward constant improvement.
## Distribution is better than centralization
The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstone of our world-wide community.
## Community over product or company
We are here as a community first, our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work, companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.
## Automation over process
Large projects have a lot of less exciting, yet hard, work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.
## Inclusive is better than exclusive
Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path.
## Evolution is better than stagnation
Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty for leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.
**"Culture eats strategy for breakfast." --Peter Drucker**

View File

@ -0,0 +1,13 @@
---
title: Community
layout: basic
cid: community
css: /css/community.css
---
<div class="community_main">
<div class="cncf_coc_container">
{{< include "/static/community-values.md" >}}
</div>
</div>

View File

@ -102,7 +102,7 @@ Other control loops can observe that reported data and take their own actions.
In the thermostat example, if the room is very cold then a different controller
might also turn on a frost protection heater. With Kubernetes clusters, the control
plane indirectly works with IP address management tools, storage services,
cloud provider APIs, and other services by
[extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
## Desired versus current state {#desired-vs-current}

View File

@ -11,9 +11,10 @@ weight: 10
Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}.
Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have just one.

View File

@ -26,12 +26,12 @@ See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and
Before choosing a guide, here are some considerations:
- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.

View File

@ -9,23 +9,22 @@ weight: 60
<!-- overview -->
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
<!-- body -->
Cluster-level logging architectures are described in assumption that
a logging backend is present inside or outside of your cluster. If you're
not interested in having cluster-level logging, you might still find
the description of how logs are stored and handled on the node to be useful.
Cluster-level logging architectures require a separate backend to store, analyze, and query logs. Kubernetes
does not provide a native storage solution for log data. Instead, there are many logging solutions that
integrate with Kubernetes. The following sections describe how to handle and store logs on nodes.
## Basic logging in Kubernetes
In this section, you can see an example of basic logging in Kubernetes that
outputs data to the standard output stream. This demonstration uses
a pod specification with a container that writes some text to standard output
once per second.
This example uses a `Pod` specification with a container
to write text to the standard output stream once per second.
{{< codenew file="debug/counter-pod.yaml" >}}
@ -34,8 +33,10 @@ To run this pod, use the following command:
```shell
kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
```
The output is:
```console
pod/counter created
```
@ -44,73 +45,73 @@ To fetch the logs, use the `kubectl logs` command, as follows:
```shell
kubectl logs counter
```
The output is:
```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```
You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
You can use `kubectl logs --previous` to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
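For instance, assuming the `counter` Pod from the example above and a container named `count` (the name here is just for illustration), the commands look like:
```shell
# logs from the previous, crashed instantiation of the container
kubectl logs counter --previous

# logs from one specific container in a multi-container Pod
kubectl logs counter -c count
```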
## Logging at the node level
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.
A container engine handles and redirects any output that a containerized application writes to its `stdout` and `stderr` streams.
For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in JSON format.
{{< note >}}
The Docker JSON logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
{{< /note >}}
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
An important consideration in node-level logging is implementing log rotation,
so that logs don't consume all available storage on the node. Kubernetes
is not responsible for rotating logs, but rather a deployment tool
should set up a solution to address that.
For example, in Kubernetes clusters, deployed by the `kube-up.sh` script,
there is a [`logrotate`](https://linux.die.net/man/8/logrotate)
tool configured to run each hour. You can also set up a container runtime to
rotate an application's logs automatically.
As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
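As a hedged illustration of runtime-level rotation (not something `kube-up.sh` configures for you, and the size and file-count values are only examples), a Docker daemon can be told to rotate the JSON log files it writes:
```shell
# example /etc/docker/daemon.json: rotate container logs at 10MB, keep 5 files
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker
```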
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.
{{< note >}}
If an external system has performed the rotation,
only the contents of the latest log file will be available through
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
the rotation and there are two files: one file that is 10MB in size and a second file that is empty.
`kubectl logs` returns the latest log file which in this example is an empty response.
{{< /note >}}
[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
### System component logs
There are two types of system components: those that run in a container and those
that do not run in a container. For example:
* The Kubernetes scheduler and kube-proxy run in a container.
* The kubelet and container runtime do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If
systemd is not present, the kubelet and container runtime write to `.log` files
in the `/var/log` directory. System components inside containers always write
to the `/var/log` directory, bypassing the default logging mechanism.
They use the [`klog`](https://github.com/kubernetes/klog)
logging library. You can find the conventions for logging severity for those
components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
Similar to the container logs, system component logs in the `/var/log`
directory should be rotated. In Kubernetes clusters brought up by
the `kube-up.sh` script, those logs are configured to be rotated by
the `logrotate` tool daily or once the size exceeds 100MB.
@ -129,13 +130,14 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged.
Because the logging agent must run on every node, it is recommended to run the agent
as a `DaemonSet`.
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents. Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node.
Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
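As a rough sketch only (the image, namespace, and paths below are placeholders rather than an official manifest), a node-level agent is typically deployed as a `DaemonSet` that mounts the node's log directory and runs one collector Pod per node:
```shell
# hypothetical node-level logging agent; adjust the image and configuration for your backend
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:v1.11-1   # example collector image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF
```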
### Using a sidecar container with the logging agent
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
You can use a sidecar container in one of the following ways:
@ -146,28 +148,27 @@ You can use a sidecar container in one of the following ways:
![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)
By having your sidecar containers write to their own `stdout` and `stderr`
streams, you can take advantage of the kubelet and the logging agent that
already run on each node. The sidecar containers read logs from a file, a socket,
or journald. Each sidecar container prints a log to its own `stdout` or `stderr` stream.
This approach allows you to separate several log streams from different
parts of your application, some of which can lack support
for writing to `stdout` or `stderr`. The logic behind redirecting logs
is minimal, so it's not a significant overhead. Additionally, because
`stdout` and `stderr` are handled by the kubelet, you can use built-in tools
like `kubectl logs`.
For example, a pod runs a single container, and the container
writes to two different log files using two different formats. Here's a
configuration file for the Pod:
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
It is not recommended to write log entries with different formats to the same log
stream, even if you managed to redirect both components to the `stdout` stream of
the container. Instead, you can create two sidecar containers. Each sidecar
container could tail a particular log file from a shared volume and then redirect
the logs to its own `stdout` stream.
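A condensed sketch of such a streaming sidecar is shown below; the volume and file names follow the counter example above, but treat it as an illustration rather than the exact manifest from these docs:
```shell
# application container writes to a file; the sidecar tails that file to its own stdout
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: counter-with-sidecar   # hypothetical name
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/1.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
EOF
```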
@ -181,7 +182,10 @@ running the following commands:
```shell
kubectl logs counter count-log-1
```
The output is:
```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@ -191,7 +195,10 @@ kubectl logs counter count-log-1
```shell
kubectl logs counter count-log-2
```
The output is:
```console
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
@ -202,16 +209,15 @@ The node-level agent installed in your cluster picks up those log streams
automatically without any further configuration. If you like, you can configure
the agent to parse log lines depending on the source container.
Note that despite low CPU and memory usage (on the order of a couple of millicores
for CPU and several megabytes for memory), writing logs to a file and
then streaming them to `stdout` can double disk usage. If you have
an application that writes to a single file, it's recommended to set
`/dev/stdout` as the destination rather than implement the streaming sidecar
container approach.
Sidecar containers can also be used to rotate log files that cannot be
rotated by the application itself. An example of this approach is a small container running `logrotate` periodically.
However, it's recommended to use `stdout` and `stderr` directly and leave rotation
and retention policies to the kubelet.
@ -226,21 +232,17 @@ configured specifically to run with your application.
{{< note >}}
Using a logging agent in a sidecar container can lead
to significant resource consumption. Moreover, you won't be able to access
those logs using `kubectl logs` because they are not controlled
by the kubelet.
{{< /note >}}
As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that
you can use to implement this approach. The first file contains
a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
Here are two configuration files that you can use to implement a sidecar container with a logging agent. The first file contains
a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
{{< note >}}
The configuration of fluentd is beyond the scope of this article. For
information about configuring fluentd, see the
[official fluentd documentation](https://docs.fluentd.org/).
For information about configuring fluentd, see the [fluentd documentation](https://docs.fluentd.org/).
{{< /note >}}
The second file describes a pod that has a sidecar container running fluentd.
@ -248,18 +250,10 @@ The pod mounts a volume where fluentd can pick up its configuration data.
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
After some time you can find log messages in the Stackdriver interface.
Remember, that this is just an example and you can actually replace fluentd
with any logging agent, reading from any source inside an application
container.
In the sample configurations, you can replace fluentd with any logging agent, reading from any source inside an application container.
### Exposing logs directly from the application
![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)
You can implement cluster-level logging by exposing or pushing logs directly from
every application; however, the implementation for such a logging mechanism
is outside the scope of Kubernetes.
Cluster-logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.

View File

@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
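For example (the manifest URL here is just an illustration of the pattern):
```shell
# apply a manifest fetched directly from a URL
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```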
@ -70,7 +70,7 @@ deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
In the case of two resources, you can specify both resources on the command line using the resource/name syntax:
```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
@ -87,10 +87,11 @@ deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`:
```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {}
```
```shell
@ -264,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, simply run:
For example, if you want to label all your nginx pods as frontend tier, run:
```shell
kubectl label pods -l app=nginx tier=fe
@ -301,6 +302,7 @@ Sometimes you would want to attach annotations to resources. Annotations are arb
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
kubectl get pods my-nginx-v4-9gw19 -o yaml
```
```shell
apiVersion: v1
kind: pod
@ -314,11 +316,12 @@ For more information, please see [annotations](/docs/concepts/overview/working-w
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do:
```shell
kubectl scale deployment/my-nginx --replicas=1
```
```shell
deployment.apps/my-nginx scaled
```
@ -328,6 +331,7 @@ Now you only have one pod managed by the deployment.
```shell
kubectl get pods -l app=nginx
```
```shell
NAME READY STATUS RESTARTS AGE
my-nginx-2035384211-j5fhi 1/1 Running 0 30m
@ -338,6 +342,7 @@ To have the system automatically choose the number of nginx replicas as needed,
```shell
kubectl autoscale deployment/my-nginx --min=1 --max=3
```
```shell
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```
@ -406,11 +411,12 @@ and
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```
```shell
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
@ -427,19 +433,22 @@ Let's say you were running version 1.14.2 of nginx:
```shell
kubectl create deployment my-nginx --image=nginx:1.14.2
```
```shell
deployment.apps/my-nginx created
```
with 3 replicas (so the old and new revisions can coexist):
```shell
kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
```
```
deployment.apps/my-nginx scaled
```
To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
```shell
kubectl edit deployment/my-nginx

View File

@ -31,22 +31,24 @@ I1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
{{< warning >}}
Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.
Log formatting and value serialization are subject to change.
{{< /warning>}}
Structured logging is a effort to introduce a uniform structure in log messages allowing for easy extraction of information, making logs easier and cheaper to store and process.
Structured logging introduces a uniform structure in log messages allowing for programmatic extraction of information. You can store and process structured logs with less effort and cost.
The new message format is backward compatible and enabled by default.
Format of structured logs:
```ini
<klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
```
Example:
```ini
I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
```

View File

@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache

View File

@ -59,13 +59,13 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for easy service discovery when you don't need `kube-proxy` load balancing.
- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for service discovery when you don't need `kube-proxy` load balancing.
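A minimal headless Service sketch (the name, selector, and ports are placeholders) looks like this:
```shell
# headless Service: DNS returns the Pod IPs directly, with no kube-proxy load balancing
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless   # hypothetical name
spec:
  clusterIP: None         # this is what makes the Service headless
  selector:
    app: my-app           # hypothetical label
  ports:
  - port: 80
    targetPort: 8080
EOF
```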
## Using Labels
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.

View File

@ -28,10 +28,10 @@ also creates some Secrets.
Kubernetes Secrets are, by default, stored as unencrypted base64-encoded
strings. By default they can be retrieved - as plain text - by anyone with API
access, or anyone with access to Kubernetes' underlying data store, etcd. In
order to safely use Secrets, it is recommended you (at a minimum):
1. [Enable Encryption at Rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
2. [Enable RBAC rules that restrict reading and writing the Secret](https://kubernetes.io/docs/reference/access-authn-authz/authorization/). Be aware that secrets can be obtained implicitly by anyone with the permission to create a Pod.
1. [Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
2. [Enable or configure RBAC rules](/docs/reference/access-authn-authz/authorization/) that restrict reading and writing the Secret. Be aware that secrets can be obtained implicitly by anyone with the permission to create a Pod.
{{< /caution >}}
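To make the first recommendation concrete, here is a hedged sketch of an `EncryptionConfiguration` that the API server can be pointed at via `--encryption-provider-config`; the key name and the base64 key value are placeholders you must generate yourself:
```shell
# example only: encrypt Secrets at rest with an AES-CBC key; keep identity for reading old data
cat <<'EOF' > encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # e.g. head -c 32 /dev/urandom | base64
      - identity: {}
EOF
```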
<!-- body -->
@ -116,7 +116,7 @@ In this case, `0` means we have just created an empty Secret.
A `kubernetes.io/service-account-token` type of Secret is used to store a
token that identifies a service account. When using this Secret type, you need
to ensure that the `kubernetes.io/service-account.name` annotation is set to an
existing service account name. A Kubernetes controller fills in some other
fields such as the `kubernetes.io/service-account.uid` annotation and the
`token` key in the `data` field set to actual token content.
@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A Secret can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
@ -801,11 +801,6 @@ field set to that of the service account.
See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
for a detailed explanation of that process.
### Automatic mounting of manually created Secrets
Manually created secrets (for example, one containing a token for accessing a GitHub account)
can be automatically attached to pods based on their service account.
## Details
### Restrictions

View File

@ -36,10 +36,13 @@ No parameters are passed to the handler.
`PreStop`
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.
This hook is called immediately before a container is terminated due to an API request or management
event such as a liveness/startup probe failure, preemption, resource contention and others. A call
to the `PreStop` hook fails if the container is already in a terminated or completed state and the
hook must complete before the TERM signal to stop the container can be sent. The Pod's termination
grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of
the handler, the container will eventually terminate within the Pod's termination grace period. No
parameters are passed to the handler.
A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
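As a hedged illustration of wiring up such a hook (the image, command, and grace period below are arbitrary example values, not prescribed ones):
```shell
# Pod with a preStop hook that lets the app drain before the TERM signal is sent
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo   # hypothetical name
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx:1.14.2
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 15"]   # give open connections time to drain
EOF
```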
@ -65,19 +68,15 @@ the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.
`PreStop` hooks are not executed asynchronously from the signal
to stop the Container; the hook must complete its execution before
the signal can be sent.
If a `PreStop` hook hangs during execution,
the Pod's phase will be `Terminating` and remain there until the Pod is
killed after its `terminationGracePeriodSeconds` expires.
This grace period applies to the total time it takes for both
the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop
normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is
less than the total time (55+10) it takes for these two things to happen.
`PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must
complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during
execution, the Pod's phase will be `Terminating` and remain there until the Pod is killed after its
`terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for
both the `PreStop` hook to execute and for the Container to stop normally. If, for example,
`terminationGracePeriodSeconds` is 60, and the hook takes 55 seconds to complete, and the Container
takes 10 seconds to stop normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is less than the total time
(55+10) it takes for these two things to happen.
If either a `PostStart` or `PreStop` hook fails,
it kills the Container.

View File

@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi
## Custom controllers
On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:
Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.
CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
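For orientation, a minimal CRD sketch looks like the following; the group, kind, and schema are made-up examples:
```shell
# hypothetical CustomResourceDefinition that adds a CronTab resource
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
EOF
```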

View File

@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
## Network Plugin Requirements

View File

@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed
A cluster operator can set up Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
The application can simply use it as a service.
The application can access the message queue as a service.
## Architecture

View File

@ -43,7 +43,7 @@ Each VM is a full machine running all the components, including its own operatin
Containers have become popular because they provide extra benefits, such as:
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.

View File

@ -42,7 +42,7 @@ Example labels:
* `"partition" : "customerA"`, `"partition" : "customerB"`
* `"track" : "daily"`, `"track" : "weekly"`
These are just examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.
These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label keys must be unique for a given object.
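For instance, a minimal Pod manifest that carries two of the labels above could look like the following sketch (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-demo            # illustrative name
  labels:
    partition: customerA      # which customer partition this Pod serves
    track: daily              # release track
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
```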
## Syntax and character set
@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
Three kinds of operators are admitted: `=`, `==`, `!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
```
environment = production

View File

@ -31,7 +31,7 @@ When using imperative commands, a user operates directly on live objects
in a cluster. The user provides operations to
the `kubectl` command as arguments or flags.
This is the simplest way to get started or to run a one-off task in
This is the recommended way to get started or to run a one-off task in
a cluster. Because this technique operates directly on live
objects, it provides no history of previous configurations.
@ -47,7 +47,7 @@ kubectl create deployment nginx --image nginx
Advantages compared to object configuration:
- Commands are simple, easy to learn and easy to remember.
- Commands are expressed as a single action word.
- Commands require only a single step to make changes to the cluster.
Disadvantages compared to object configuration:

View File

@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
simply prevents the creation of privileged pods.
prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

View File

@ -610,17 +610,28 @@ plugins:
values: ["cluster-services"]
```
Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.
For example:
Then, create a resource quota object in the `kube-system` namespace:
```yaml
scopeSelector:
matchExpressions:
- scopeName: PriorityClass
operator: In
values: ["cluster-services"]
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
```shell
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
```
resourcequota/pods-cluster-services created
```
In this case, a pod creation will be allowed if:
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
in the `kube-system` namespace, and it has passed the resource quota check.
A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
and it is to be created in a namespace other than `kube-system`.
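As an illustrative sketch (the Pod name and image are assumptions, and a PriorityClass named `cluster-services` must already exist), the following Pod satisfies the third condition only when the matching quota in `kube-system` admits it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cluster-service-pod            # illustrative name
  namespace: kube-system               # the only namespace where this priority class is allowed
spec:
  priorityClassName: cluster-services  # matched by the quota's scopeSelector
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2        # placeholder image
```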
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.

View File

@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`

View File

@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.
{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
checks all the nodes, simply because there are not enough feasible nodes to stop
checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your

View File

@ -10,11 +10,9 @@ weight: 70
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
The scheduling framework is a pluggable architecture for Kubernetes Scheduler
that makes scheduler customizations easy. It adds a new set of "plugin" APIs to
the existing scheduler. Plugins are compiled into the scheduler. The APIs
allow most scheduling features to be implemented as plugins, while keeping the
scheduling "core" simple and maintainable. Refer to the [design proposal of the
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the
scheduling "core" lightweight and maintainable. Refer to the [design proposal of the
scheduling framework][kep] for more technical information on the design of the
framework.

View File

@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
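For example, a Pod can opt into a more isolated runtime through `runtimeClassName`. This is only a sketch: the handler name `runsc` is an assumption and must match a handler that the nodes' container runtime is actually configured with.

```yaml
apiVersion: node.k8s.io/v1    # use node.k8s.io/v1beta1 on clusters older than v1.20
kind: RuntimeClass
metadata:
  name: sandboxed             # illustrative name
handler: runsc                # assumed CRI handler (for example, a gVisor-based runtime)
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed # selects the RuntimeClass defined above
  containers:
  - name: app
    image: nginx              # placeholder image
```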
## Code
@ -152,3 +153,4 @@ Learn about related Kubernetes security topics:
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Runtime class](/docs/concepts/containers/runtime-class)

View File

@ -25,9 +25,9 @@ assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
in namespace `bar` can look up this service by simply doing a DNS query for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
in namespace `bar` can look up this service by querying a DNS service for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
The following sections detail the supported record types and layout that is

View File

@ -74,7 +74,7 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but
If your cluster has dual-stack enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager).
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver).
When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you
set the `.spec.ipFamilyPolicy` field to one of the following values:
@ -163,7 +163,7 @@ status:
loadBalancer: {}
```
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`.
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}

View File

@ -31,14 +31,15 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
* The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
is also an ingress controller for [HAProxy](http://www.haproxy.org/#desc).
is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
is an [Istio](https://istio.io/) based ingress controller.
* The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
@ -49,7 +50,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](http://www.haproxy.org/#desc).
[HAProxy](https://www.haproxy.org/#desc).
## Using multiple Ingress controllers

View File

@ -151,9 +151,9 @@ spec:
targetPort: 9376
```
Because this Service has no selector, the corresponding Endpoint object is not
Because this Service has no selector, the corresponding Endpoints object is not
created automatically. You can map the Service to the network address and port
where it's running, by adding an Endpoint object manually:
where it's running, by adding an Endpoints object manually:
```yaml
apiVersion: v1
@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find it by simply doing a name lookup for `my-service`
should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
@ -463,7 +463,7 @@ selectors defined:
For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return
records (addresses) that point directly to the `Pods` backing the `Service`.
A records (IP addresses) that point directly to the `Pods` backing the `Service`.
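A headless Service with selectors is declared by setting `clusterIP: None`; a minimal sketch (names, labels, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service  # illustrative name
spec:
  clusterIP: None            # marks the Service as headless
  selector:
    app: my-app              # assumed Pod label
  ports:
  - port: 80
    targetPort: 8080
```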
### Without selectors
@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware
collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.
#### iptables

View File

@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume
* iSCSI
Mount options are not validated, so mount will simply fail if one is invalid.
Mount options are not validated. If a mount option is invalid, the mount fails.
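For example, an NFS-backed PersistentVolume can pass options straight through to the mount; a sketch, where the server address, export path, and options are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example   # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard                 # passed to the mount unvalidated
  - nfsvers=4.1
  nfs:
    server: 172.17.0.2   # assumed NFS server address
    path: /exported/path # assumed export path
```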
In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however,
@ -629,6 +629,11 @@ spec:
PersistentVolume binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.
### PersistentVolumes typed `hostPath`
A `hostPath` PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
See [an example of `hostPath` typed volume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
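A minimal sketch of such a PersistentVolume (the path and storage class name are assumptions, and the directory must already exist on the node):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath-example  # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual   # assumed class name used to match a claim
  hostPath:
    path: /mnt/data          # assumed directory on the node
```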
## Raw Block Volume Support
{{< feature-state for_k8s_version="v1.18" state="stable" >}}

View File

@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.
If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
the class or PV, so mount of the PV will simply fail if one is invalid.
the class or PV. If a mount option is invalid, the PV mount fails.
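As a sketch, a StorageClass that forwards mount options to dynamically provisioned PVs might look like this (the provisioner name and options are assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-with-options      # illustrative name
provisioner: example.com/nfs  # assumed external provisioner
mountOptions:
- hard                        # forwarded to the PV mount without validation
- nfsvers=4.1
```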
### Volume Binding Mode

View File

@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
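A sketch of a clone request (names, class, and size are assumptions; the requested size must be at least that of the source, and the storage class must be backed by a CSI driver that supports cloning):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-existing-pvc   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-storage # assumed CSI-backed StorageClass
  resources:
    requests:
      storage: 10Gi             # must be >= the source PVC's capacity
  dataSource:
    kind: PersistentVolumeClaim
    name: existing-source-pvc   # the bound, unused PVC being cloned
```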
Users need to be aware of the following when using this feature:

View File

@ -210,8 +210,8 @@ spec:
The `CSIMigration` feature for Cinder, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `cinder.csi.openstack.org` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Openstack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md)
Storage Interface (CSI) Driver. In order to use this feature, the [OpenStack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationOpenStack`
beta features must be enabled.

View File

@ -90,6 +90,11 @@ If `startingDeadlineSeconds` is set to a large value or left unset (the default)
and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
at least once.
{{< caution >}}
If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks for missed schedules every 10 seconds.
{{< /caution >}}
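A sketch of a CronJob that keeps `startingDeadlineSeconds` comfortably above that check interval (the name, schedule, and image are illustrative):

```yaml
apiVersion: batch/v1beta1        # batch/v1 on clusters where CronJob is stable
kind: CronJob
metadata:
  name: report-generator         # illustrative name
spec:
  schedule: "*/5 * * * *"        # every five minutes
  startingDeadlineSeconds: 200   # well above the 10-second controller check interval
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox       # placeholder image
            command: ["/bin/sh", "-c", "date"]
```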
For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error
````
@ -128,4 +133,3 @@ documents the format of CronJob `schedule` fields.
For instructions on creating and working with cron jobs, and for an example of CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).

View File

@ -47,7 +47,7 @@ In this example:
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
```
or simply use the following command:
or use the following command:
```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
```
The output is similar to this:
The output is similar to:
```
deployment.apps/nginx-deployment image updated
```
@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
kubectl edit deployment.v1.apps/nginx-deployment
```
The output is similar to this:
The output is similar to:
```
deployment.apps/nginx-deployment edited
```
@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
```
The output is similar to this:
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```
or
```
deployment "nginx-deployment" successfully rolled out
```
@ -212,10 +218,11 @@ Get more details on your updated Deployment:
* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this:
```
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 36s
```
```ini
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 36s
```
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

View File

@ -13,7 +13,7 @@ weight: 50
<!-- overview -->
A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
the Pods it created.

View File

@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).
### Deleting just a ReplicationController
### Deleting only a ReplicationController
You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
When using the REST API or go client library, simply delete the ReplicationController object.
When using the REST API or Go client library, you can delete the ReplicationController object.
Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
@ -208,7 +208,8 @@ As mentioned above, whether you have 1 pod you want to keep running, or 1000, a
### Scaling
The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
The ReplicationController scales the number of replicas up or down by setting the `replicas` field.
You can configure the ReplicationController to manage the replicas manually or by an auto-scaling control agent.
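As a sketch, scaling amounts to changing the `replicas` field of a manifest like the following (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc        # illustrative name
spec:
  replicas: 3           # the field a manual change or autoscaler adjusts
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx      # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx    # placeholder image
```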
### Rolling updates
@ -239,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic
## Responsibilities of the ReplicationController
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).

View File

@ -78,5 +78,5 @@ very small. Please be aware of this risk when setting a non-zero TTL.
* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)

View File

@ -78,7 +78,7 @@ sharing](/docs/tasks/configure-pod-container/share-process-namespace/) so
you can view processes in other containers.
See [Debugging with Ephemeral Debug Container](
/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)
/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)
for examples of troubleshooting using ephemeral containers.
## Ephemeral containers API
@ -103,7 +103,7 @@ the ephemeral container to add as an `EphemeralContainers` list:
"apiVersion": "v1",
"kind": "EphemeralContainers",
"metadata": {
"name": "example-pod"
"name": "example-pod"
},
"ephemeralContainers": [{
"command": [

View File

@ -52,7 +52,7 @@ Members can:
{{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding
approval, simply commenting "LGTM" works too!
approval, commenting "LGTM" works too!
{{< /note >}}
- Use the `/hold` comment to block merging for a pull request

View File

@ -44,22 +44,25 @@ The English-language documentation uses U.S. English spelling and grammar.
### Use upper camel case for API objects
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
Refer to API objects without saying "object," unless omitting "object"
leads to an awkward construction.
The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects.
{{< table caption = "Do and Don't - API objects" >}}
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
:--| :-----
The pod has two containers. | The Pod has two containers.
The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
A PodList is a list of pods. | A Pod List is a list of pods.
The two ContainerPorts ... | The two ContainerPort objects ...
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
A PodList object is a list of pods. | A Pod List object is a list of pods.
The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}}
@ -113,12 +116,12 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting
### Use code style for inline code, commands, and API objects
### Use code style for inline code, commands, and API objects {#code-style-inline-code}
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).
{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.

View File

@ -110,7 +110,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
@ -462,8 +462,6 @@ and the [example of Limit Range](/docs/tasks/administer-cluster/manage-resources
### MutatingAdmissionWebhook {#mutatingadmissionwebhook}
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
This admission controller calls any mutating webhooks which match the request. Matching
webhooks are called in serial; each one may modify the object if it desires.
@ -474,7 +472,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
webhooks or validating admission controllers will permit the request to finish.
If you disable the MutatingAdmissionWebhook, you must also disable the
`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag (both are on by default in
versions >= 1.9).
@ -486,8 +484,6 @@ versions >= 1.9).
different when read back.
* Setting originally unset fields is less likely to cause problems than
overwriting fields set in the original request. Avoid doing the latter.
* This is a beta feature. Future versions of Kubernetes may restrict the types of
mutations these webhooks can make.
* Future changes to control loops for built-in resources or third-party resources
may break webhooks that work well today. Even when the webhook installation API
is finalized, not all possible webhook behaviors will be guaranteed to be supported
@ -766,8 +762,6 @@ This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}}
### ValidatingAdmissionWebhook {#validatingadmissionwebhook}
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
This admission controller calls any validating webhooks which match the request. Matching
webhooks are called in parallel; if any of them rejects the request, the request
fails. This admission controller only runs in the validation phase; the webhooks it calls may not
@ -778,7 +772,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
webhooks or other validating admission controllers will permit the request to finish.
If you disable the ValidatingAdmissionWebhook, you must also disable the
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag (both are on by default in
versions 1.9 and later).

View File

@ -68,8 +68,8 @@ when interpreted by an [authorizer](/docs/reference/access-authn-authz/authoriza
You can enable multiple authentication methods at once. You should usually use at least two methods:
- service account tokens for service accounts
- at least one other method for user authentication.
- service account tokens for service accounts
- at least one other method for user authentication.
When multiple authenticator modules are enabled, the first module
to successfully authenticate the request short-circuits evaluation.
@ -205,8 +205,10 @@ spec:
```
Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long-standing jobs that wish to talk to the
Kubernetes API. To manually create a service account, simply use the `kubectl
create serviceaccount (NAME)` command. This creates a service account in the
current namespace and an associated secret.
@ -320,14 +322,13 @@ sequenceDiagram
8. Once authorized the API server returns a response to `kubectl`
9. `kubectl` provides feedback to the user
Since all the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable
solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
3. There's no easy way to authenticate to the Kubernetes dashboard without using the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
#### Configuring the API Server
@ -422,12 +423,12 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option
The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option:
The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
@ -1004,14 +1005,12 @@ RFC3339 timestamp. Presence or absence of an expiry has the following impact:
}
}
```
The plugin can optionally be called with an environment variable, `KUBERNETES_EXEC_INFO`,
that contains information about the cluster for which this plugin is obtaining
credentials. This information can be used to perform cluster-specific credential
acquisition logic. In order to enable this behavior, the `provideClusterInfo` field must
be set on the exec user field in the
[kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/). Here is an
example of the aforementioned `KUBERNETES_EXEC_INFO` environment variable.
To enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec`
field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
The plugin will then be supplied with an environment variable, `KUBERNETES_EXEC_INFO`.
Information from this environment variable can be used to perform cluster-specific
credential acquisition logic.
The following `ExecCredential` manifest describes a cluster information sample.
```json
{

View File

@ -104,6 +104,9 @@ a given action, and works regardless of the authorization mode used.
```bash
kubectl auth can-i create deployments --namespace dev
```
The output is similar to this:
```
yes
```
@ -111,6 +114,9 @@ yes
```shell
kubectl auth can-i create deployments --namespace prod
```
The output is similar to this:
```
no
```
@ -121,6 +127,9 @@ to determine what action other users can perform.
```bash
kubectl auth can-i list secrets --namespace dev --as dave
```
The output is similar to this:
```
no
```
@ -150,7 +159,7 @@ EOF
```
The generated `SelfSubjectAccessReview` is:
```
```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:

View File

@ -1093,8 +1093,8 @@ be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an apiserver which might need to make calls to this
webhook. Such installs are likely to be non-portable, i.e., not easy
to turn up in a new cluster.
webhook. Such installations are likely to be non-portable and not readily
reproduced in a new cluster.
The scheme must be "https"; the URL must begin with "https://".

View File

@ -1,6 +1,6 @@
---
weight: 10
title: Feature Gates
weight: 10
content_type: concept
---
@ -48,13 +48,15 @@ different Kubernetes components.
| Feature | Default | Stage | Since | Until |
|---------|---------|-------|-------|-------|
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `APIListChunking` | `false` | Alpha | 1.8 | 1.8 |
| `APIListChunking` | `true` | Beta | 1.9 | |
| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | 1.19 |
| `APIPriorityAndFairness` | `true` | Beta | 1.20 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 |
| `APIResponseCompression` | `false` | Beta | 1.16 | |
| `APIServerIdentity` | `false` | Alpha | 1.20 | |
| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `AppArmor` | `true` | Beta | 1.4 | |
| `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | |
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |
@ -77,7 +79,8 @@ different Kubernetes components.
| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigrationGCE` | `false` | Beta | 1.17 | |
| `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | |
| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 |
| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | |
| `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationvSphere` | `false` | Beta | 1.19 | |
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | |
@ -89,26 +92,23 @@ different Kubernetes components.
| `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | |
| `CronJobControllerV2` | `false` | Alpha | 1.20 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
| `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 |
| `CustomResourceDefaulting` | `true` | Beta | 1.16 | |
| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | |
| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
| `DevicePlugins` | `true` | Beta | 1.10 | |
| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.22 |
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | |
| `DownwardAPIHugePages` | `false` | Alpha | 1.20 | |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | |
| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
| `DynamicKubeletConfig` | `true` | Beta | 1.11 | |
| `EfficientWatchResumption` | `false` | Alpha | 1.20 | |
| `EndpointSlice` | `false` | Alpha | 1.16 | 1.16 |
| `EndpointSlice` | `false` | Beta | 1.17 | |
| `EndpointSlice` | `true` | Beta | 1.18 | |
| `EndpointSliceNodeName` | `false` | Alpha | 1.20 | |
| `EndpointSliceProxying` | `false` | Alpha | 1.18 | 1.18 |
| `EndpointSliceProxying` | `true` | Beta | 1.19 | |
| `EndpointSliceTerminating` | `false` | Alpha | 1.20 | |
| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | |
| `EphemeralContainers` | `false` | Alpha | 1.16 | |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | |
@ -119,19 +119,22 @@ different Kubernetes components.
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `GenericEphemeralVolume` | `false` | Alpha | 1.19 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | |
| `HPAContainerMetrics` | `false` | Alpha | 1.20 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `HugePageStorageMediumSize` | `false` | Alpha | 1.18 | 1.18 |
| `HugePageStorageMediumSize` | `true` | Beta | 1.19 | |
| `HyperVContainer` | `false` | Alpha | 1.10 | |
| `IPv6DualStack` | `false` | Alpha | 1.15 | |
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | |
| `IPv6DualStack` | `false` | Alpha | 1.16 | |
| `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | |
| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | |
| `KubeletPodResources` | `true` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `LegacyNodeRoleBehavior` | `false` | Alpha | 1.16 | 1.18 |
| `LegacyNodeRoleBehavior` | `true` | Beta | 1.19 | |
| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | |
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | |
| `MixedProtocolLBService` | `false` | Alpha | 1.20 | |
| `MountContainers` | `false` | Alpha | 1.9 | |
| `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | 1.18 |
| `NodeDisruptionExclusion` | `true` | Beta | 1.19 | |
| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
@ -143,25 +146,27 @@ different Kubernetes components.
| `ProcMountType` | `false` | Alpha | 1.12 | |
| `QOSReserved` | `false` | Alpha | 1.11 | |
| `RemainingItemCount` | `false` | Alpha | 1.15 | |
| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 |
| `RemoveSelfLink` | `true` | Beta | 1.20 | |
| `RootCAConfigMap` | `false` | Alpha | 1.13 | 1.19 |
| `RootCAConfigMap` | `true` | Beta | 1.20 | |
| `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
| `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
| `RunAsGroup` | `true` | Beta | 1.14 | |
| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 |
| `RuntimeClass` | `true` | Beta | 1.14 | |
| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 |
| `SCTPSupport` | `true` | Beta | 1.19 | |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.20 |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 |
| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | |
| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 |
| `ServiceNodeExclusion` | `true` | Beta | 1.19 | |
| `ServiceTopology` | `false` | Alpha | 1.17 | |
| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | |
| `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | 1.19 |
| `SetHostnameAsFQDN` | `true` | Beta | 1.20 | |
| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | |
| `StorageVersionAPI` | `false` | Alpha | 1.20 | |
| `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 |
| `StorageVersionHash` | `true` | Beta | 1.15 | |
| `Sysctls` | `true` | Beta | 1.11 | |
@ -170,11 +175,11 @@ different Kubernetes components.
| `TopologyManager` | `true` | Beta | 1.18 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |
| `WindowsGMSA` | `false` | Alpha | 1.14 | |
| `WindowsGMSA` | `true` | Beta | 1.16 | |
| `WarningHeaders` | `true` | Beta | 1.19 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |
{{< /table >}}
### Feature gates for graduated or deprecated features
@ -228,6 +233,9 @@ different Kubernetes components.
| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | 1.18 |
| `DryRun` | `true` | GA | 1.19 | - |
| `DynamicAuditing` | `false` | Alpha | 1.13 | 1.18 |
| `DynamicAuditing` | - | Deprecated | 1.19 | - |
| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 |
@ -247,23 +255,28 @@ different Kubernetes components.
| `HugePages` | `false` | Alpha | 1.8 | 1.9 |
| `HugePages` | `true` | Beta| 1.10 | 1.13 |
| `HugePages` | `true` | GA | 1.14 | - |
| `HyperVContainer` | `false` | Alpha | 1.10 | 1.19 |
| `HyperVContainer` | `false` | Deprecated | 1.20 | - |
| `Initializers` | `false` | Alpha | 1.7 | 1.13 |
| `Initializers` | - | Deprecated | 1.14 | - |
| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 |
| `KubeletConfigFile` | - | Deprecated | 1.10 | - |
| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | 1.20 |
| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 |
| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 |
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `KubeletPodResources` | `true` | GA | 1.20 | |
| `MountContainers` | `false` | Alpha | 1.9 | 1.16 |
| `MountContainers` | `false` | Deprecated | 1.17 | - |
| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 |
| `MountPropagation` | `true` | Beta | 1.10 | 1.11 |
| `MountPropagation` | `true` | GA | 1.12 | - |
| `NodeLease` | `false` | Alpha | 1.12 | 1.13 |
| `NodeLease` | `true` | Beta | 1.14 | 1.16 |
| `NodeLease` | `true` | GA | 1.17 | - |
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
| `PVCProtection` | - | Deprecated | 1.10 | - |
| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 |
| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - |
@ -276,8 +289,6 @@ different Kubernetes components.
| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 |
| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 |
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - |
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
| `PVCProtection` | - | Deprecated | 1.10 | - |
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 |
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 |
| `ResourceLimitsPriorityFunction` | - | Deprecated | 1.19 | - |
@ -398,65 +409,134 @@ A *General Availability* (GA) feature is also referred to as a *stable* feature.
Each feature gate is designed for enabling/disabling a specific feature:
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`)
resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with
prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
- `APIServerIdentity`: Assign each API server an ID in a cluster.
- `Accelerators`: Enable Nvidia GPU support when using Docker.
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`(*deprecated*): Enable setting [Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
- `AffinityInAnnotations`(*deprecated*): Enable setting
[Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
- `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
- `AllowInsecureBackendProxy`: Enable the users to skip TLS verification of
kubelets on Pod log requests.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
{{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`) resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
- `APIServerIdentity`: Assign each kube-apiserver an ID in a cluster.
- `AppArmor`: Enable AppArmor based mandatory access control on Linux nodes when using Docker.
See [AppArmor Tutorial](/docs/tutorials/clusters/apparmor/) for more details.
See [AppArmor Tutorial](/docs/tutorials/clusters/apparmor/) for more details.
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
that can be attached to a node.
See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) for more details.
See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) for more details.
- `BalanceAttachedNodeVolumes`: Include volume count on node to be considered for balanced resource allocation
while scheduling. A node which has closer CPU, memory utilization, and volume count is favored by the scheduler
while making decisions.
- `BlockVolume`: Enable the definition and consumption of raw block devices in Pods.
See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
for more details.
See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
for more details.
- `BoundServiceAccountTokenVolume`: Migrate ServiceAccount volumes to use a projected volume consisting of a
ServiceAccountTokenVolumeProjection. Cluster admins can use metric `serviceaccount_stale_tokens_total` to
monitor workloads that are depending on the extended tokens. If there are no such workloads, turn off
extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`.
Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
ServiceAccountTokenVolumeProjection. Cluster admins can use metric `serviceaccount_stale_tokens_total` to
monitor workloads that are depending on the extended tokens. If there are no such workloads, turn off
extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`.
Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
for more details.
- `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy for fsGroups when mounting a volume in a Pod. See [Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) for more details.
- `CronJobControllerV2`: Use an alternative implementation of the {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} controller. Otherwise, version 1 of the same controller is selected. The version 2 controller provides experimental performance improvements.
- `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CRIContainerLogRotation`: Enable container log rotation for cri container runtime.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage. See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support) documentation for more details.
- `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in csi.storage.k8s.io.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage.
See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
documentation for more details.
- `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in
csi.storage.k8s.io.
- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
- `CSIMigration`: Enables shims and translation logic to route volume
operations from in-tree plugins to corresponding pre-installed CSI plugins
- `CSIMigrationAWS`: Enables shims and translation logic to route volume
operations from the AWS-EBS in-tree plugin to EBS CSI plugin. Supports
falling back to in-tree EBS plugin if a node does not have EBS CSI plugin
installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationAWSComplete`: Stops registering the EBS in-tree plugin in
kubelet and volume controllers and enables shims and translation logic to
route volume operations from the AWS-EBS in-tree plugin to EBS CSI plugin.
Requires CSIMigration and CSIMigrationAWS feature flags enabled and EBS CSI
plugin installed and configured on all nodes in the cluster.
- `CSIMigrationAzureDisk`: Enables shims and translation logic to route volume
operations from the Azure-Disk in-tree plugin to AzureDisk CSI plugin.
Supports falling back to in-tree AzureDisk plugin if a node does not have
AzureDisk CSI plugin installed and configured. Requires CSIMigration feature
flag enabled.
- `CSIMigrationAzureDiskComplete`: Stops registering the Azure-Disk in-tree
plugin in kubelet and volume controllers and enables shims and translation
logic to route volume operations from the Azure-Disk in-tree plugin to
AzureDisk CSI plugin. Requires CSIMigration and CSIMigrationAzureDisk feature
flags enabled and AzureDisk CSI plugin installed and configured on all nodes
in the cluster.
- `CSIMigrationAzureFile`: Enables shims and translation logic to route volume
operations from the Azure-File in-tree plugin to AzureFile CSI plugin.
Supports falling back to in-tree AzureFile plugin if a node does not have
AzureFile CSI plugin installed and configured. Requires CSIMigration feature
flag enabled.
- `CSIMigrationAzureFileComplete`: Stops registering the Azure-File in-tree
plugin in kubelet and volume controllers and enables shims and translation
logic to route volume operations from the Azure-File in-tree plugin to
AzureFile CSI plugin. Requires CSIMigration and CSIMigrationAzureFile feature
flags enabled and AzureFile CSI plugin installed and configured on all nodes
in the cluster.
- `CSIMigrationGCE`: Enables shims and translation logic to route volume
operations from the GCE-PD in-tree plugin to PD CSI plugin. Supports falling
back to in-tree GCE plugin if a node does not have PD CSI plugin installed and
configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationGCEComplete`: Stops registering the GCE-PD in-tree plugin in
kubelet and volume controllers and enables shims and translation logic to
route volume operations from the GCE-PD in-tree plugin to PD CSI plugin.
Requires CSIMigration and CSIMigrationGCE feature flags enabled and PD CSI
plugin installed and configured on all nodes in the cluster.
- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume
operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports
falling back to in-tree Cinder plugin if a node does not have Cinder CSI
plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationOpenStackComplete`: Stops registering the Cinder in-tree plugin in
kubelet and volume controllers and enables shims and translation logic to route
volume operations from the Cinder in-tree plugin to Cinder CSI plugin.
Requires CSIMigration and CSIMigrationOpenStack feature flags enabled and Cinder
CSI plugin installed and configured on all nodes in the cluster.
- `CSIMigrationvSphere`: Enables shims and translation logic to route volume operations
from the vSphere in-tree plugin to vSphere CSI plugin.
Supports falling back to in-tree vSphere plugin if a node does not have vSphere
CSI plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationvSphereComplete`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers and enables shims and translation logic to route volume operations
from the vSphere in-tree plugin to vSphere CSI plugin. Requires CSIMigration and
CSIMigrationvSphere feature flags enabled and vSphere CSI plugin installed and
configured on all nodes in the cluster.
- `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in csi.storage.k8s.io.
- `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a
[CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
compatible volume plugin.
- `CSIServiceAccountToken`: Enable CSI drivers to receive the pods' service account token
that they mount volumes for. See
[Token Requests](https://kubernetes-csi.github.io/docs/token-requests.html).
- `CSIStorageCapacity`: Enables CSI drivers to publish storage capacity information
and the Kubernetes scheduler to use that information when scheduling pods. See
[Storage Capacity](/docs/concepts/storage/storage-capacity/).
Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details.
- `CSIVolumeFSGroupPolicy`: Allows CSIDrivers to use the `fsGroupPolicy` field.
This field controls whether volumes created by a CSIDriver support volume ownership
and permission modifications when these volumes are mounted.
- `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy
for fsGroups when mounting a volume in a Pod. See
[Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)
for more details.
- `CronJobControllerV2`: Use an alternative implementation of the
{{< glossary_tooltip text="CronJob" term_id="cronjob" >}} controller. Otherwise,
version 1 of the same controller is selected.
The version 2 controller provides experimental performance improvements.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
[kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property.
Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)
for more details.
- `CustomResourceDefaulting`: Enable CRD support for default values in OpenAPI v3 validation schemas.
- `CustomResourcePublishOpenAPI`: Enables publishing of CRD OpenAPI specs.
- `CustomResourceSubresources`: Enable `/status` and `/scale` subresources
@ -466,147 +546,253 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
troubleshoot a running Pod.
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
[default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
based resource provisioning on nodes.
- `DisableAcceleratorUsageMetrics`:
[Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics).
- `DownwardAPIHugePages`: Enables usage of hugepages in
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
so that validation, merging, and mutation can be tested without committing.
- `DynamicAuditing`(*deprecated*): Used to enable dynamic auditing before v1.19.
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. See
[Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of
volume topology and handle PV provisioning.
This feature is superseded by the `VolumeScheduling` feature completely in v1.12.
- `DynamicVolumeProvisioning`(*deprecated*): Enable the
[dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/) of persistent volumes to Pods.
- `EfficientWatchResumption`: Allows for storage-originated bookmark (progress
notify) events to be delivered to the users. This is only applied to watch
operations.
- `EnableAggregatedDiscoveryTimeout` (*deprecated*): Enable the five second
timeout on aggregated discovery calls.
- `EnableEquivalenceClassCache`: Enable the scheduler to cache equivalence of
nodes when scheduling Pods.
- `EndpointSlice`: Enables EndpointSlices for more scalable and extensible
network endpoints. See [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `EndpointSliceNodeName`: Enables EndpointSlice `nodeName` field.
- `EndpointSliceProxying`: When enabled, kube-proxy running
on Linux will use EndpointSlices as the primary data source instead of
Endpoints, enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving`
condition fields.
- `EphemeralContainers`: Enable the ability to add
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
to running pods.
- `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See
[Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
- `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts.
This feature gate exists in case any of your existing workloads depend on a
now-corrected fault where Kubernetes ignored exec probe timeouts. See
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
[Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See
[Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
- `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical*
so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
This feature is deprecated by Pod Priority and Preemption as of v1.13.
- `ExperimentalHostUserNamespaceDefaulting`: Enable defaulting the user
  namespace to host. This is for containers that are using other host namespaces,
  host mounts, or containers that are privileged or using specific non-namespaced
  capabilities (e.g. `MKNOD`, `SYS_MODULE`, etc.). This should only be enabled
  if user namespace remapping is enabled in the Docker daemon.
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
- `GenericEphemeralVolume`: Enables ephemeral, inline volumes that support all features
of normal volumes (can be provided by third-party storage vendors, storage capacity tracking,
restore from snapshot, etc.).
See [Ephemeral Volumes](/docs/concepts/storage/ephemeral-volumes/).
- `GracefulNodeShutdown`: Enables support for graceful shutdown in kubelet.
During a system shutdown, kubelet will attempt to detect the shutdown event
and gracefully terminate pods running on the node. See
[Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown)
for more details.
- `HPAContainerMetrics`: Enable the `HorizontalPodAutoscaler` to scale based on
metrics from individual containers in target pods.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
resources when using custom or external metrics.
- `HugePages`: Enable the allocation and consumption of pre-allocated
[huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
- `HugePageStorageMediumSize`: Enable support for multiple sizes of pre-allocated
  [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
- `HyperVContainer`: Enable
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
for Windows containers.
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
support for IPv6.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
immutable for better safety and performance.
- `KubeletConfigFile` (*deprecated*): Enable loading kubelet configuration from
a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
for more details.
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
- `LocalStorageCapacityIsolation`: Enable the consumption of
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
and also the `sizeLimit` property of an
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation`
is enabled for
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir)
supports project quotas and they are enabled, use project quotas to monitor
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than
filesystem walk for better performance and accuracy.
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
Service instance.
- `MountContainers` (*deprecated*): Enable using utility containers on host as
the volume mounter.
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).
- `NodeDisruptionExclusion`: Enable use of the Node label `node.kubernetes.io/exclude-disruption`
which prevents nodes from being evacuated during zone failures.
- `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal.
- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
being deleted when it is still used by any Pod.
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
Pod affinity has to be specified if requesting a `local` volume.
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/)
feature to account for pod overheads.
- `PodPriority`: Enable the descheduling and preemption of Pods based on their
[priorities](/docs/concepts/configuration/pod-priority-preemption/).
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
for more details.
- `PodShareProcessNamespace`: Enable the setting of `shareProcessNamespace` in a Pod for sharing
a single process namespace between containers running in a pod. More details can be found in
[Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/).
- `ProcMountType`: Enables control over the type of proc mount for containers
  by setting the `procMount` field of a SecurityContext.
- `QOSReserved`: Allows resource reservations at the QoS level preventing pods
at lower QoS levels from bursting into resources requested at higher QoS levels
(memory only for now).
- `RemainingItemCount`: Allow the API servers to show a count of remaining
items in the response to a
[chunking list request](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks).
- `RemoveSelfLink`: Deprecates and removes `selfLink` from ObjectMeta and
ListMeta.
- `ResourceLimitsPriorityFunction` (*deprecated*): Enable a scheduler priority function that
assigns a lowest possible score of 1 to a node that satisfies at least one of
the input Pod's cpu and memory limits. The intent is to break ties between
nodes with same scores.
- `ResourceQuotaScopeSelectors`: Enable resource quota scope selectors.
- `RootCAConfigMap`: Configure the `kube-controller-manager` to publish a
{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} named `kube-root-ca.crt`
to every namespace. This ConfigMap contains a CA bundle used for verifying connections
to the kube-apiserver. See
[Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
for more details.
- `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
- `RunAsGroup`: Enable control over the primary group ID set on the init
processes of containers.
- `RuntimeClass`: Enable the [RuntimeClass](/docs/concepts/containers/runtime-class/) feature
for selecting container runtime configurations.
- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler
instead of the DaemonSet controller.
- `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service,
Endpoints, EndpointSlice, and NetworkPolicy definitions.
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
  feature on the API Server.
- `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and
JWKS URLs) for the service account issuer in the API server. See
[Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)
for more details.
- `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
- `ServiceLBNodePortControl`: Enables the `spec.allocateLoadBalancerNodePorts`
field on Services.
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers
created by a cloud provider. A node is eligible for exclusion if labelled with
"`node.kubernetes.io/exclude-from-external-load-balancers`".
- `ServiceTopology`: Enable service to route traffic based upon the Node
topology of the cluster. See
[ServiceTopology](/docs/concepts/services-networking/service-topology/)
for more details.
- `SetHostnameAsFQDN`: Enable the ability to set the Fully Qualified Domain
  Name (FQDN) as the hostname of a Pod. See
  [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
memory-backed volumes (mainly `emptyDir` volumes).
- `StartupProbe`: Enable the
[startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe)
probe in the kubelet.
- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
PersistentVolumeClaim objects if they are still being used.
- `StorageVersionAPI`: Enable the
[storage version API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io).
- `StorageVersionHash`: Allow API servers to expose the storage version hash in the
discovery.
- `StreamingProxyRedirects`: Instructs the API server to intercept (and follow)
redirects from the backend (kubelet) for streaming requests.
Examples of streaming requests include the `exec`, `attach` and `port-forward` requests.
- `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS.
See [service proxies](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) for more details.
- `SupportPodPidsLimit`: Enable the support to limiting PIDs in Pods.
- `SupportNodePidsLimit`: Enable the support to limiting PIDs on the Node.
The parameter `pid=<number>` in the `--system-reserved` and `--kube-reserved`
options can be specified to ensure that the specified number of process IDs
will be reserved for the system as a whole and for Kubernetes system daemons
respectively.
- `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be
set for each pod. See
[sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details.
- `TTLAfterFinished`: Allow a
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
to clean up resources after they finish execution.
- `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on Nodes
and tolerations on Pods.
See [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
for more details.
- `TaintNodesByCondition`: Enable automatic tainting nodes based on
[node conditions](/docs/concepts/architecture/nodes/#condition).
- `TokenRequest`: Enable the `TokenRequest` endpoint on service account resources.
- `TokenRequestProjection`: Enable the injection of service account tokens into a
Pod through a [`projected` volume](/docs/concepts/storage/volumes/#projected).
- `TopologyManager`: Enable a mechanism to coordinate fine-grained hardware resource
assignments for different components in Kubernetes. See
[Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/).
- `VolumePVCDataSource`: Enable support for specifying an existing PVC as a DataSource.
- `VolumeScheduling`: Enable volume topology aware scheduling and make the
PersistentVolumeClaim (PVC) binding aware of scheduling decisions. It also
enables the usage of [`local`](/docs/concepts/storage/volumes/#local) volume
type when used together with the `PersistentLocalVolumes` feature gate.
- `VolumeSnapshotDataSource`: Enable volume snapshot data source support.
- `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment
variables into a `subPath`.
- `WarningHeaders`: Allow sending warning headers in API responses.
- `WatchBookmark`: Enable support for watch bookmark events.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsRunAsUserName`: Enable support for running applications in Windows containers
  as a non-default user. See
  [Configuring RunAsUserName](/docs/tasks/configure-pod-container/configure-runasusername)
  for more details.
- `WindowsEndpointSliceProxying`: When enabled, kube-proxy running on Windows
will use EndpointSlices as the primary data source instead of Endpoints,
enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
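
As a brief, hedged illustration of how the gates listed above are consumed, the sketch below toggles feature gates on individual components via the `--feature-gates` flag; the gate names and values are only examples, and all other flags are omitted.

```shell
# Sketch: enable one gate and disable another on the API server
# (gate names here are illustrative examples taken from the list above).
kube-apiserver --feature-gates=ServerSideApply=true,DynamicKubeletConfig=false <other flags>

# The kubelet accepts the same comma-separated syntax, either on the command
# line or through the featureGates map in its configuration file.
kubelet --feature-gates=GracefulNodeShutdown=true <other flags>
```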
## {{% heading "whatsnext" %}}

View File

@ -2,7 +2,7 @@
title: API Group
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
A set of related paths in the Kubernetes API.

View File

@ -12,9 +12,8 @@ tags:
---
Facilitates the discussion and/or implementation of a short-lived, narrow, or decoupled project for a committee, {{< glossary_tooltip text="SIG" term_id="sig" >}}, or cross-SIG effort.
<!--more-->
Working groups are a way of organizing people to accomplish a discrete task.
For more information, see the [kubernetes/community](https://github.com/kubernetes/community) repo and the current list of [SIGs and working groups](https://github.com/kubernetes/community/blob/master/sig-list.md).

View File

@ -195,7 +195,7 @@ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.ty
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
@ -337,7 +337,7 @@ kubectl taint nodes foo dedicated=special-user:NoSchedule
### Resource types
List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
```bash
kubectl api-resources
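# (Added illustration, assuming standard kubectl flags.) A few common
# variations on the command above:
kubectl api-resources --namespaced=true      # All namespaced resources
kubectl api-resources --namespaced=false     # All non-namespaced resources
kubectl api-resources -o wide                # With extra columns, such as supported verbs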

View File

@ -250,15 +250,15 @@ CustomResourceDefinitionSpec describes how a user wants their resource to appear
- **conversion.webhook.clientConfig.url** (string)
url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
- **preserveUnknownFields** (boolean)

View File

@ -82,15 +82,15 @@ MutatingWebhookConfiguration describes the configuration of and admission webhoo
- **webhooks.clientConfig.url** (string)
`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
- **webhooks.name** (string), required

View File

@ -82,15 +82,15 @@ ValidatingWebhookConfiguration describes the configuration of and admission webh
- **webhooks.clientConfig.url** (string)
`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.
The scheme must be "https"; the URL must begin with "https://".
A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
- **webhooks.name** (string), required

View File

@ -28,7 +28,7 @@ The cluster that `kubeadm init` and `kubeadm join` set up should be:
- lock-down the kubelet API
- locking down access to the API for system components like the kube-proxy and CoreDNS
- locking down what a Bootstrap Token can access
- **User-friendly**: The user should not have to run anything more than a couple of commands:
- `kubeadm init`
- `export KUBECONFIG=/etc/kubernetes/admin.conf`
- `kubectl apply -f <network-of-choice.yaml>`

View File

@ -108,7 +108,7 @@ if the `kubeadm init` command was called with `--upload-certs`.
control-plane node even if other worker nodes or the network are compromised.
- Convenient to execute manually since all of the information required fits
into a single `kubeadm join` command.
**Disadvantages:**

View File

@ -20,8 +20,8 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes
## Minikube
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that
runs a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.
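
As a minimal sketch (assuming a supported driver such as Docker or a hypervisor is already installed), a local cluster can be brought up and inspected like this:

```shell
# Start a single-node local cluster; driver selection depends on your environment.
minikube start

# Verify the cluster and confirm kubectl can reach it.
minikube status
kubectl get nodes
```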
## Dashboard
@ -51,4 +51,3 @@ Use Kompose to:
* Translate a Docker Compose file into Kubernetes objects
* Go from local Docker development to managing your application via Kubernetes
* Convert v1 or v2 Docker Compose `yaml` files or [Distributed Application Bundles](https://docs.docker.com/compose/bundles/)
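
For example, a rough usage sketch (assuming a `docker-compose.yml` in the current directory):

```shell
# Convert a Compose file into Kubernetes manifests in the current directory.
kompose convert -f docker-compose.yml

# Apply the generated manifests to a cluster.
kubectl apply -f .
```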

View File

@ -514,7 +514,7 @@ The meaning of the get and list semantics are:
- **Any:** Return data at any resource version. The newest available resource version is preferred,
but strong consistency is not required; data at any resource version may be served. It is possible
for the request to return data at a much older resource version that the client has previously
observed, particularly in high availability configurations, due to partitions or stale
caches. Clients that cannot tolerate this should not use this semantic.
- **Not older than:** Return data at least as new as the provided resourceVersion. The newest
available data is preferred, but any data not older than the provided resourceVersion may be

View File

@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16
## Introduction
Server Side Apply helps users and controllers manage their resources through
declarative configurations. Clients can create and modify their
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
declaratively by sending their fully specified intent.
A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
@ -297,7 +297,7 @@ is not what the user wants to happen, even temporarily.
There are two solutions:
- (basic) Leave `replicas` in the configuration; when HPA eventually writes to that
field, the system gives the user a conflict over it. At that point, it is safe
to remove from the configuration.
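
For orientation, a minimal sketch of submitting a fully specified intent with Server Side Apply from the command line (the file name and field manager are illustrative):

```shell
# Apply a manifest server-side; the API server merges it field by field.
kubectl apply --server-side --field-manager=my-manager -f deployment.yaml

# If another manager (for example the HPA) owns a field you also set, the
# apply fails with a conflict unless you explicitly take ownership.
kubectl apply --server-side --force-conflicts -f deployment.yaml
```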

View File

@ -29,7 +29,7 @@ one zone also impairs services in another zone.
## Control plane behavior
All [control plane components](/docs/concepts/overview/components/#control-plane-components)
support running as a pool of interchangeable resources, replicated per
component.
When you deploy a cluster control plane, place replicas of

View File

@ -122,7 +122,7 @@ sudo apt-get update && sudo apt-get install -y containerd.io
```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
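
After regenerating the configuration, containerd usually needs to be restarted so it picks up the new file; a small sketch, assuming a systemd-managed service:

```shell
# Restart containerd so it loads the regenerated /etc/containerd/config.toml.
sudo systemctl restart containerd
```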
```shell
@ -140,7 +140,7 @@ sudo apt-get update && sudo apt-get install -y containerd
```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
@ -210,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io
```shell
## Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
@ -420,7 +420,7 @@ Start CRI-O:
```shell
sudo systemctl daemon-reload
sudo systemctl enable crio --now
```
Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)

View File

@ -39,7 +39,7 @@ kops is an automated provisioning system:
#### Installation
Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also convenient to build from source):
{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
@ -147,7 +147,7 @@ You must then set up your NS records in the parent domain, so that records in th
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).
Verify your route53 domain setup (it is the #1 cause of problems!). You can double-check that
your cluster is configured correctly if you have the dig tool by running:
`dig NS dev.example.com`
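
For example, a hedged sketch using the subdomain from this walkthrough (substitute your own; the Route 53 command assumes the AWS CLI is configured):

```shell
# The answer section should list the NS records you created for the subdomain.
dig NS dev.example.com

# Cross-check against the name servers assigned to the hosted zone in Route 53.
aws route53 list-hosted-zones
```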

View File

@ -8,7 +8,7 @@ weight: 30
<!-- overview -->
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
@ -434,7 +434,7 @@ Now remove the node:
kubectl delete node <node name>
```
If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
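
For orientation, a rough sketch of that start-over flow on a single node (arguments omitted; `kubeadm reset` assumes the node state was created by kubeadm):

```shell
# Clear the state that kubeadm installed on this node.
sudo kubeadm reset

# Then, depending on the node's role, re-run one of:
#   sudo kubeadm init <args>    # control-plane node
#   sudo kubeadm join <args>    # worker node
```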
### Clean up the control plane

View File

@ -236,8 +236,8 @@ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_
Define the directory to download command files
{{< note >}}
The `DOWNLOAD_DIR` variable must be set to a writable directory.
If you are running Flatcar Container Linux, set `DOWNLOAD_DIR=/opt/bin`.
{{< /note >}}
```bash
@ -308,13 +308,6 @@ or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it a
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}
Restarting the kubelet is required:
```bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
The automatic detection of cgroup driver for other container runtimes
like CRI-O and containerd is work in progress.

View File

@ -363,7 +363,7 @@ kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-
## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}
On Linux distributions such as Fedora CoreOS or Flatcar Container Linux, the directory `/usr` is mounted as a read-only filesystem.
For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md),
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_

View File

@ -15,7 +15,7 @@ Windows applications constitute a large portion of the services and applications
## Windows containers in Kubernetes
To enable the orchestration of Windows containers in Kubernetes, simply include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is as simple and easy as scheduling Linux-based containers.
To enable the orchestration of Windows containers in Kubernetes, include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to scheduling Linux-based containers.
In order to run Windows containers, your Kubernetes cluster must include multiple operating systems, with control plane nodes running Linux and workers running either Windows or Linux depending on your workload needs. Windows Server 2019 is the only Windows operating system supported, enabling [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including kubelet, [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19).
@ -273,7 +273,7 @@ The behavior of the flags behave differently as described below:
* MemoryPressure Condition is not implemented
* There are no OOM eviction actions taken by the kubelet
* Kubelet running on the windows node does not have memory restrictions. `--kubelet-reserve` and `--system-reserve` do not set limits on kubelet or processes running on the host. This means kubelet or a process on the host could cause memory resource starvation outside the node-allocatable and scheduler
* An additional flag to set the priority of the kubelet process is availabe on the Windows nodes called `--windows-priorityclass`. This flag allows kubelet process to get more CPU time slices when compared to other processes running on the Windows host. More information on the allowable values and their meaning is available at [Windows Priority Classes](https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities#priority-class). In order for kubelet to always have enough CPU cycles it is recommended to set this flag to `ABOVE_NORMAL_PRIORITY_CLASS` and above
* An additional flag to set the priority of the kubelet process is available on the Windows nodes called `--windows-priorityclass`. This flag allows kubelet process to get more CPU time slices when compared to other processes running on the Windows host. More information on the allowable values and their meaning is available at [Windows Priority Classes](https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities#priority-class). In order for kubelet to always have enough CPU cycles it is recommended to set this flag to `ABOVE_NORMAL_PRIORITY_CLASS` and above
#### Storage
@ -547,7 +547,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"
There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:
There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch start.ps1 or relaunch it manually as follows:
```powershell
PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "<Windows_Worker_Hostname>")
@ -634,7 +634,7 @@ We have a lot of features in our roadmap. An abbreviated high level list is incl
### Hyper-V isolation
Hyper-V isolation is requried to enable the following use cases for Windows containers in Kubernetes:
Hyper-V isolation is required to enable the following use cases for Windows containers in Kubernetes:
* Hypervisor-based isolation between pods for additional security
* Backwards compatibility allowing a node to run a newer Windows Server version without requiring containers to be rebuilt

View File

@ -23,7 +23,7 @@ Windows applications constitute a large portion of the services and applications
## Before you begin
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers.
## Getting Started: Deploying a Windows container

View File

@ -92,7 +92,7 @@ We expect this implementation to progress from alpha to beta and GA in coming re
### go1.15.5
go1.15.5 has been integrated to Kubernets project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776).
go1.15.5 has been integrated to Kubernetes project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776).
### CSI Volume Snapshot graduates to General Availability
@ -190,7 +190,7 @@ Currently, cadvisor_stats_provider provides AcceleratorStats but cri_stats_provi
PodSubnet validates against the corresponding cluster "--node-cidr-mask-size" of the kube-controller-manager, it fails if the values are not compatible.
kubeadm no longer sets the node-mask automatically on IPv6 deployments, you must check that your IPv6 service subnet mask is compatible with the default node mask /64 or set it accordingly.
Previously, for IPv6, if the podSubnet had a mask lower than /112, kubeadm calculated a node-mask to be multiple of eight and splitting the available bits to maximise the number used for nodes. ([#95723](https://github.com/kubernetes/kubernetes/pull/95723), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle]
- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration infromation available in --help description for --exprimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123))
- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration information available in --help description for --experimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123))
- Windows hyper-v container featuregate is deprecated in 1.20 and will be removed in 1.21 ([#95505](https://github.com/kubernetes/kubernetes/pull/95505), [@wawa0210](https://github.com/wawa0210)) [SIG Node and Windows]
- The kube-apiserver ability to serve on an insecure port, deprecated since v1.10, has been removed. The insecure address flags `--address` and `--insecure-bind-address` have no effect in kube-apiserver and will be removed in v1.24. The insecure port flags `--port` and `--insecure-port` may only be set to 0 and will be removed in v1.24. ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42), [SIG API Machinery, Node, Testing])
- Add dual-stack Services (alpha). This is a BREAKING CHANGE to an alpha API.
@ -2138,4 +2138,4 @@ filename | sha512 hash
- github.com/godbus/dbus: [ade71ed](https://github.com/godbus/dbus/tree/ade71ed)
- github.com/xlab/handysort: [fb3537e](https://github.com/xlab/handysort/tree/fb3537e)
- sigs.k8s.io/structured-merge-diff/v3: v3.0.0
- vbom.ml/util: db5cfe1
- vbom.ml/util: db5cfe1

View File

@ -280,7 +280,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
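For illustration (namespace, service name, and port name are hypothetical), assuming `kubectl proxy` is serving the API locally on its default port 8001:
```shell
# Reach the `http` port of Service `my-service` in namespace `default` through the apiserver proxy
curl http://localhost:8001/api/v1/namespaces/default/services/my-service:http/proxy/
```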

View File

@ -215,7 +215,7 @@ for i in ret.items:
#### Java client
* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute :
To install the [Java Client](https://github.com/kubernetes-client/java), run:
```shell
# Clone java library

View File

@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.

View File

@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca
you can either change the default StorageClass or disable it completely to avoid
dynamic provisioning of storage.
Simply deleting the default StorageClass may not work, as it may be re-created
Deleting the default StorageClass may not work, as it may be re-created
automatically by the addon manager running in your cluster. Please consult the docs for your installation
for details about addon manager and how to disable individual addons.
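Instead of deleting it, one approach is to clear the default annotation on the StorageClass; a sketch, assuming a StorageClass named `standard`:
```shell
# Mark the StorageClass as no longer default so it is not picked for dynamic provisioning
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```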

View File

@ -163,7 +163,7 @@ Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapsh
### Built-in snapshot
etcd supports built-in snapshot, so backing up an etcd cluster is easy. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.
etcd supports built-in snapshot. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:
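A minimal sketch of such a command, assuming the etcd v3 API via `etcdctl`:
```shell
# Save a snapshot of the keyspace served by $ENDPOINT to the local file "snapshotdb"
ETCDCTL_API=3 etcdctl --endpoints "$ENDPOINT" snapshot save snapshotdb
```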

View File

@ -255,7 +255,7 @@ To migrate from kube-dns to CoreDNS, a detailed
[blog article](https://coredns.io/2018/05/21/migration-from-kube-dns-to-coredns/)
is available to help users adapt CoreDNS in place of kube-dns.
You can also migrate using the offical CoreDNS
You can also migrate using the official CoreDNS
[deploy script](https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh).

View File

@ -72,7 +72,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"VNI" : 4096,
"VNI": 4096,
"Port": 4789
}
}

View File

@ -5,7 +5,7 @@ content_type: task
<!-- overview -->
This example demonstrates an easy way to limit the amount of storage consumed in a namespace.
This example demonstrates how to limit the amount of storage consumed in a namespace.
The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/),

View File

@ -0,0 +1,21 @@
---
title: "Migrating from dockershim"
weight: 10
content_type: task
---
<!-- overview -->
This section presents information you need to know when migrating from
dockershim to other container runtimes.
Since the announcement of [dockershim deprecation](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
in Kubernetes 1.20, there have been questions about how this will affect various workloads and Kubernetes
installations. The [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/) is a useful starting point for understanding the problem.
It is recommended to migrate from dockershim to an alternative container runtime.
Check the [container runtimes](/docs/setup/production-environment/container-runtimes/)
section to learn about your options. Make sure to
[report any issues](https://github.com/kubernetes/kubernetes/issues) you encounter
during the migration, so that they can be fixed in a timely manner and your cluster is
ready for dockershim removal.

View File

@ -0,0 +1,85 @@
---
title: Check whether Dockershim deprecation affects you
content_type: task
reviewers:
- SergeyKanzhelev
weight: 20
---
<!-- overview -->
The `dockershim` component of Kubernetes allows you to use Docker as a Kubernetes
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
Kubernetes' built-in `dockershim` component was deprecated in release v1.20.
This page explains how your cluster could be using Docker as a container runtime,
provides details on the role that `dockershim` plays when in use, and shows steps
you can take to check whether any workloads could be affected by `dockershim` deprecation.
## Finding whether your app has a dependency on Docker {#find-docker-dependencies}
If you are using Docker for building your application containers, you can still
run these containers on any container runtime. This use of Docker does not count
as a dependency on Docker as a container runtime.
When an alternative container runtime is used, executing Docker commands may either
not work or yield unexpected output. This is how you can find out whether you have a
dependency on Docker:
1. Make sure no privileged Pods execute Docker commands.
2. Check that scripts and apps running on nodes outside of the Kubernetes
infrastructure do not execute Docker commands. For example:
- SSH to nodes to troubleshoot;
- Node startup scripts;
- Monitoring and security agents installed on nodes directly.
3. Check third-party tools that perform the privileged operations mentioned above. See
[Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
for more information.
4. Make sure there are no indirect dependencies on dockershim behavior.
This is an edge case and unlikely to affect your application. Some tooling may be configured
to react to Docker-specific behaviors, for example, to raise alerts on specific metrics or to search for
a specific log message as part of troubleshooting instructions.
If you have such tooling configured, test the behavior on a test
cluster before migration.
## Dependency on Docker explained {#role-of-dockershim}
A [container runtime](/docs/concepts/containers/#container-runtimes) is software that can
execute the containers that make up a Kubernetes pod. Kubernetes is responsible for orchestration
and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
uses the container runtime interface as an abstraction so that you can use any compatible
container runtime.
In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker.
Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes.
The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However,
because Docker existed before the CRI specification was invented, the Kubernetes project created an
adapter component, `dockershim`. The dockershim adapter allows the kubelet to interact with Docker as
if Docker were a CRI compatible runtime.
You can read about it in [Kubernetes Containerd integration goes GA](/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/) blog post.
![Dockershim vs. CRI with Containerd](/images/blog/2018-05-24-kubernetes-containerd-integration-goes-ga/cri-containerd.png)
Switching to Containerd as a container runtime eliminates the middleman. All the
same containers can be run by container runtimes like Containerd as before. But
now, since containers are scheduled directly by the container runtime, they are not visible to Docker.
So any Docker tooling or fancy UI you might have used
before to check on these containers is no longer available.
You cannot get container information using the `docker ps` or `docker inspect`
commands. Because you cannot list containers, you also cannot get logs, stop containers,
or execute commands inside a container using `docker exec`.
{{< note >}}
If you're running workloads via Kubernetes, the best way to stop a container is through
the Kubernetes API rather than directly through the container runtime (this advice applies
for all container runtimes, not just Docker).
{{< /note >}}
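If you do need to inspect containers directly on a node that uses a CRI runtime, `crictl` (where it is installed and configured for that runtime) offers rough equivalents; a sketch, with placeholder container IDs:
```shell
crictl ps                           # list running containers
crictl logs <container-id>          # fetch container logs
crictl exec -it <container-id> sh   # run a command inside a container
```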
You can still pull images or build them using the `docker build` command. But images
built or pulled by Docker are not visible to the container runtime and
Kubernetes. They need to be pushed to a registry so that they can be used
by Kubernetes.
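For example, a hedged sketch with a hypothetical image name and registry:
```shell
# Build locally with Docker, then push to a registry the cluster can pull from
docker build -t registry.example.com/demo/my-app:1.0 .
docker push registry.example.com/demo/my-app:1.0
```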

View File

@ -0,0 +1,80 @@
---
title: Migrating telemetry and security agents from dockershim
content_type: task
reviewers:
- SergeyKanzhelev
weight: 70
---
<!-- overview -->
With Kubernetes 1.20, dockershim was deprecated. From the
[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
you might already know that most apps do not have a direct dependency on the runtime hosting
their containers. However, there are still a lot of telemetry and security agents
that have a dependency on Docker to collect container metadata, logs, and
metrics. This document aggregates information on how to detect these
dependencies and links to instructions on how to migrate these agents to use generic tools or
alternative runtimes.
## Telemetry and security agents
There are a few ways agents may run on a Kubernetes cluster. Agents may run on
nodes directly or as DaemonSets.
### Why do telemetry agents rely on Docker?
Historically, Kubernetes was built on top of Docker. Kubernetes managed
networking and scheduling, while Docker placed and operated containers on a
node. So you could get scheduling-related metadata like a pod name from Kubernetes
and container state information from Docker. Over time more runtimes were
created to manage containers. There are also projects and Kubernetes features
that generalize container status information extraction across many runtimes.
Some agents are tied specifically to the Docker tool. The agents may run
commands like [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list
containers and processes, or [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/)
to subscribe to Docker logs. With the deprecation of Docker as a container runtime,
these commands will no longer work.
### Identify DaemonSets that depend on Docker {#identify-docker-dependency}
If a pod wants to make calls to the `dockerd` running on the node, the pod must either:
- mount the filesystem containing the Docker daemon's privileged socket, as a
{{< glossary_tooltip text="volume" term_id="volume" >}}; or
- mount the specific path of the Docker daemon's privileged socket directly, also as a volume.
For example: on COS images, Docker exposes its Unix domain socket at
`/var/run/docker.sock`. This means that the pod spec will include a
`hostPath` volume mount of `/var/run/docker.sock`.
Here's a sample shell script to find Pods that have a mount directly mapping the
Docker socket. This script outputs the namespace and name of the pod. You can
remove the `grep '/var/run/docker.sock'` step to review other mounts.
```bash
kubectl get pods --all-namespaces \
-o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \
| sort \
| grep '/var/run/docker.sock'
```
{{< note >}}
There are alternative ways for a pod to access Docker on the host. For instance, the parent
directory `/var/run` may be mounted instead of the full path (like in [this
example](https://gist.github.com/itaysk/7bc3e56d69c4d72a549286d98fd557dd)).
The script above only detects the most common uses.
{{< /note >}}
### Detecting Docker dependency from node agents
If your cluster nodes are customized and install additional security and
telemetry agents on the node, make sure to check with the vendor of each agent whether it has a dependency on Docker.
### Telemetry and security agent vendors
We keep a work-in-progress version of migration instructions for various telemetry and security agent vendors
in this [Google doc](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#).
Please contact the vendor to get up-to-date instructions for migrating from dockershim.

View File

@ -117,9 +117,10 @@ The `kubelet` has the following default hard eviction threshold:
* `memory.available<100Mi`
* `nodefs.available<10%`
* `nodefs.inodesFree<5%`
* `imagefs.available<15%`
On a Linux node, the default value also includes `nodefs.inodesFree<5%`.
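If you need different thresholds, they can be overridden via the kubelet configuration; a minimal sketch (the values shown are illustrative, not recommendations):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Overrides two of the hard eviction thresholds; signals not listed keep their defaults
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
```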
### Eviction Monitoring Interval
The `kubelet` evaluates eviction thresholds per its configured housekeeping interval.
@ -140,6 +141,7 @@ The following node conditions are defined that correspond to the specified evict
|-------------------|---------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold |
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |
| `PIDPressure` | `pid.available` | Available process identifiers on the (Linux) node have fallen below an eviction threshold |
The `kubelet` continues to report node status updates at the frequency specified by
`--node-status-update-frequency` which defaults to `10s`.

View File

@ -23,7 +23,7 @@ dynamically, you need a strong understanding of how that change will affect your
cluster's behavior. Always carefully test configuration changes on a small set
of nodes before rolling them out cluster-wide. Advice on configuring specific
fields is available in the inline `KubeletConfiguration`
[type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go).
[type documentation (for v1.20)](https://github.com/kubernetes/kubernetes/blob/release-1.20/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
{{< /warning >}}

View File

@ -24,7 +24,7 @@ _Topology Manager_ is a Kubelet component that aims to co-ordinate the set of co
## {{% heading "prerequisites" %}}
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
@ -192,7 +192,7 @@ spec:
This pod runs in the `Burstable` QoS class because requests are less than limits.
If the selected policy is anything other than `none`, Topology Manager would consider these Pod specifications. The Topology Manager would consult the Hint Providers to get topology hints. In the case of the `static`, the CPU Manager policy would return default topology hint, because these Pods do not have explicity request CPU resources.
If the selected policy is anything other than `none`, Topology Manager would consider these Pod specifications. The Topology Manager would consult the Hint Providers to get topology hints. In the case of the `static` CPU Manager policy, it would return the default topology hint, because these Pods do not explicitly request CPU resources.
```yaml

View File

@ -187,7 +187,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
To delete the Secret you have just created:
```shell
kubectl delete secret db-user-pass
kubectl delete secret mysecret
```
## {{% heading "whatsnext" %}}

View File

@ -254,4 +254,4 @@ If the above command corrects the error, you can automate the step by adding the
If you add the `lifecycle` section show above to your Pod spec, the Pod will execute the commands listed to restart the `netlogon` service until the `nltest.exe /query` command exits without error.
## GMSA limitations
When using the [ContainerD runtime for Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#cri-containerd) accessing restricted network shares via the GMSA domain identity fails. The container will recieve the identity of and calls from `nltest.exe /query` will work. It is recommended to use the [Docker EE runtime](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#docker-ee) if access to network shares is required. The Windows Server team is working on resolving the issue in the Windows Kernel and will release a patch to resolve this issue in the future. Look for updates on the [Microsoft Windows Containers issue tracker](https://github.com/microsoft/Windows-Containers/issues/44).
When using the [ContainerD runtime for Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#cri-containerd), accessing restricted network shares via the GMSA domain identity fails. The container will receive the GMSA identity, and calls from `nltest.exe /query` will work. It is recommended to use the [Docker EE runtime](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#docker-ee) if access to network shares is required. The Windows Server team is working on resolving the issue in the Windows Kernel and will release a patch to resolve this issue in the future. Look for updates on the [Microsoft Windows Containers issue tracker](https://github.com/microsoft/Windows-Containers/issues/44).

View File

@ -201,6 +201,9 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```
When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap.
If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' <name>`.
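For example (the ConfigMap name `my-config` is hypothetical):
```shell
# Print only the binaryData entries of the ConfigMap
kubectl get configmap my-config -o jsonpath='{.binaryData}'
```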
Use the option `--from-env-file` to create a ConfigMap from an env-file, for example:
```shell
@ -687,4 +690,3 @@ data:
* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).

View File

@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is
usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (for example, `default`).
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Use the Default Service Account to access the API server.
@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced
You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
To use a non-default service account, simply set the `spec.serviceAccountName`
To use a non-default service account, set the `spec.serviceAccountName`
field of a pod to the name of the service account you wish to use.
The service account has to exist at the time the pod is created, or it will be rejected.
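For illustration, a minimal Pod sketch (the ServiceAccount name `build-robot` is an assumption and must already exist):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo-pod
spec:
  serviceAccountName: build-robot   # hypothetical, pre-existing ServiceAccount
  containers:
  - name: app
    image: nginx
```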

View File

@ -22,6 +22,7 @@ The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Po
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
Pod names will be suffixed with the node hostname with a leading hyphen.
{{< note >}}
If you are running clustered Kubernetes and are using static
@ -237,4 +238,3 @@ CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```

View File

@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Install Kompose
@ -35,13 +29,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re
```sh
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
# macOS
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-darwin-amd64 -o kompose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
# Windows
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-windows-amd64.exe -o kompose.exe
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
{{% /tab %}}
{{% tab name="Build from source" %}}
@ -87,8 +80,8 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
```bash
brew install kompose
```
{{% /tab %}}
{{< /tabs >}}
@ -97,111 +90,117 @@ brew install kompose
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
you need is an existing `docker-compose.yml` file.
1. Go to the directory containing your `docker-compose.yml` file. If you don't
have one, test using this one.
1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
```yaml
version: "2"
```yaml
version: "2"
services:
services:
redis-master:
image: k8s.gcr.io/redis:e2e
ports:
- "6379"
redis-master:
image: k8s.gcr.io/redis:e2e
ports:
- "6379"
redis-slave:
image: gcr.io/google_samples/gb-redisslave:v3
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
redis-slave:
image: gcr.io/google_samples/gb-redisslave:v3
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
frontend:
image: gcr.io/google-samples/gb-frontend:v4
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer
```
frontend:
image: gcr.io/google-samples/gb-frontend:v4
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer
```
2. Run the `kompose up` command to deploy to Kubernetes directly, or skip to
the next step instead to generate a file to use with `kubectl`.
2. To convert the `docker-compose.yml` file to files that you can use with
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
```bash
$ kompose up
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
```bash
kompose convert
```
INFO Successfully created Service: redis
INFO Successfully created Service: web
INFO Successfully created Deployment: redis
INFO Successfully created Deployment: web
The output is similar to:
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
```
```none
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
```
3. To convert the `docker-compose.yml` file to files that you can use with
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
```bash
kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
```
```bash
$ kompose convert
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
```
The output is similar to:
```bash
$ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
service/frontend created
service/redis-master created
service/redis-slave created
deployment.apps/frontend created
deployment.apps/redis-master created
deployment.apps/redis-slave created
```
```none
service/frontend created
service/redis-master created
service/redis-slave created
deployment.apps/frontend created
deployment.apps/redis-master created
deployment.apps/redis-slave created
```
Your deployments are running in Kubernetes.
Your deployments are running in Kubernetes.
4. Access your application.
3. Access your application.
If you're already using `minikube` for your development process:
If you're already using `minikube` for your development process:
```bash
$ minikube service frontend
```
```bash
minikube service frontend
```
Otherwise, let's look up what IP your service is using!
Otherwise, let's look up what IP your service is using!
```sh
$ kubectl describe svc frontend
Name: frontend
Namespace: default
Labels: service=frontend
Selector: service=frontend
Type: LoadBalancer
IP: 10.0.0.183
LoadBalancer Ingress: 192.0.2.89
Port: 80 80/TCP
NodePort: 80 31144/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
No events.
```sh
kubectl describe svc frontend
```
```
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
```sh
$ curl http://192.0.2.89
```
```none
Name: frontend
Namespace: default
Labels: service=frontend
Selector: service=frontend
Type: LoadBalancer
IP: 10.0.0.183
LoadBalancer Ingress: 192.0.2.89
Port: 80 80/TCP
NodePort: 80 31144/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
No events.
```
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
```sh
curl http://192.0.2.89
```
<!-- discussion -->
@ -221,15 +220,17 @@ you need is an existing `docker-compose.yml` file.
Kompose has support for two providers: OpenShift and Kubernetes.
You can choose a targeted provider using the global option `--provider`. If no provider is specified, Kubernetes is set by default.
## `kompose convert`
Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.
### Kubernetes
### Kubernetes `kompose convert` example
```sh
$ kompose --file docker-voting.yml convert
```shell
kompose --file docker-voting.yml convert
```
```none
WARN Unsupported key networks - ignoring
WARN Unsupported key build - ignoring
INFO Kubernetes file "worker-svc.yaml" created
@ -242,16 +243,24 @@ INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
```
$ ls
```shell
ls
```
```none
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
```
You can also provide multiple docker-compose files at the same time:
```sh
$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
```shell
kompose -f docker-compose.yml -f docker-guestbook.yml convert
```
```none
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "mlbparks-service.yaml" created
INFO Kubernetes file "mongodb-service.yaml" created
@ -263,8 +272,13 @@ INFO Kubernetes file "mongodb-deployment.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
```
$ ls
```shell
ls
```
```none
mlbparks-deployment.yaml  mongodb-service.yaml  redis-slave-service.yaml  mlbparks-service.yaml
frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml
frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml
@ -273,10 +287,13 @@ redis-master-deployment.yaml
When multiple docker-compose files are provided, the configuration is merged. Any configuration that is common will be overridden by the subsequent file.
### OpenShift
### OpenShift `kompose convert` example
```sh
$ kompose --provider openshift --file docker-voting.yml convert
kompose --provider openshift --file docker-voting.yml convert
```
```none
WARN [worker] Service cannot be created because of missing port.
INFO OpenShift file "vote-service.yaml" created
INFO OpenShift file "db-service.yaml" created
@ -297,7 +314,10 @@ INFO OpenShift file "result-imagestream.yaml" created
It also supports creating a buildconfig for the build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using the ``--build-repo`` and ``--build-branch`` options respectively.
```sh
$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
kompose --provider openshift --file buildconfig/docker-compose.yml convert
```
```none
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
INFO OpenShift file "foo-deploymentconfig.yaml" created
@ -313,23 +333,31 @@ If you are manually pushing the OpenShift artifacts using ``oc create -f``, you
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
### Kubernetes `kompose up` example
### Kubernetes
```sh
$ kompose --file ./examples/docker-guestbook.yml up
```shell
kompose --file ./examples/docker-guestbook.yml up
```
```none
We are going to create Kubernetes deployments and services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
INFO Successfully created service: redis-master
INFO Successfully created service: redis-slave
INFO Successfully created service: frontend
INFO Successfully created service: redis-master
INFO Successfully created service: redis-slave
INFO Successfully created service: frontend
INFO Successfully created deployment: redis-master
INFO Successfully created deployment: redis-slave
INFO Successfully created deployment: frontend
INFO Successfully created deployment: frontend
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.
```
$ kubectl get deployment,svc,pods
```shell
kubectl get deployment,svc,pods
```
```none
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/frontend 1 1 1 1 4m
deployment.extensions/redis-master 1 1 1 1 4m
@ -347,14 +375,19 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m
pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m
```
**Note**:
{{< note >}}
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
- Only deployments and services are generated and deployed to Kubernetes. If you need different kinds of resources, use the `kompose convert` and `kubectl apply -f` commands instead.
{{< /note >}}
### OpenShift
```sh
$ kompose --file ./examples/docker-guestbook.yml --provider openshift up
### OpenShift `kompose up` example
```shell
kompose --file ./examples/docker-guestbook.yml --provider openshift up
```
```none
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
@ -369,8 +402,13 @@ INFO Successfully created deployment: redis-master
INFO Successfully created ImageStream: redis-master
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details.
```
$ oc get dc,svc,is
```shell
oc get dc,svc,is
```
```none
NAME REVISION DESIRED CURRENT TRIGGERED BY
dc/frontend 0 1 0 config,image(frontend:v4)
dc/redis-master 0 1 0 config,image(redis-master:e2e)
@ -385,16 +423,16 @@ is/redis-master 172.30.12.200:5000/fff/redis-master
is/redis-slave 172.30.12.200:5000/fff/redis-slave v1
```
**Note**:
- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
{{< note >}}
You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`).
{{< /note >}}
## `kompose down`
Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
Once you have deployed "composed" application to Kubernetes, `kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
```sh
$ kompose --file docker-guestbook.yml down
```shell
kompose --file docker-guestbook.yml down
INFO Successfully deleted service: redis-master
INFO Successfully deleted deployment: redis-master
INFO Successfully deleted service: redis-slave
@ -403,16 +441,16 @@ INFO Successfully deleted service: frontend
INFO Successfully deleted deployment: frontend
```
**Note**:
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
{{< note >}}
You must have a running Kubernetes cluster with a pre-configured `kubectl` context.
{{< /note >}}
## Build and Push Docker Images
Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
- Automatically be built with Docker using the `image` key specified within your file
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
- Automatically be built with Docker using the `image` key specified within your file
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
@ -428,7 +466,7 @@ services:
Using `kompose up` with a `build` key:
```none
$ kompose up
kompose up
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
@ -448,10 +486,10 @@ In order to disable the functionality, or choose to use BuildConfig generation (
```sh
# Disable building/pushing Docker images
$ kompose up --build none
kompose up --build none
# Generate Build Config artifacts for OpenShift
$ kompose up --provider openshift --build build-config
kompose up --provider openshift --build build-config
```
## Alternative Conversions
@ -459,45 +497,54 @@ $ kompose up --provider openshift --build build-config
The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have the alternative option of generating JSON with `-j`. You can also generate [Replication Controller](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
```sh
$ kompose convert -j
kompose convert -j
INFO Kubernetes file "redis-svc.json" created
INFO Kubernetes file "web-svc.json" created
INFO Kubernetes file "redis-deployment.json" created
INFO Kubernetes file "web-deployment.json" created
```
The `*-deployment.json` files contain the Deployment objects.
```sh
$ kompose convert --replication-controller
kompose convert --replication-controller
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-replicationcontroller.yaml" created
INFO Kubernetes file "web-replicationcontroller.yaml" created
```
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use the `--replicas` flag: `kompose convert --replication-controller --replicas 3`
```sh
$ kompose convert --daemon-set
```shell
kompose convert --daemon-set
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-daemonset.yaml" created
INFO Kubernetes file "web-daemonset.yaml" created
```
The `*-daemonset.yaml` files contain the Daemon Set objects
The `*-daemonset.yaml` files contain the DaemonSet objects
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm), run:
```sh
$ kompose convert -c
```shell
kompose convert -c
```
```none
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
chart created in "./docker-compose/"
```
$ tree docker-compose/
```shell
tree docker-compose/
```
```none
docker-compose
├── Chart.yaml
├── README.md
@ -578,7 +625,7 @@ If you want to create normal pods without controllers you can use `restart` cons
| `no` | Pod | `Never` |
{{< note >}}
The controller object could be `deployment` or `replicationcontroller`, etc.
The controller object could be `deployment` or `replicationcontroller`.
{{< /note >}}
For example, the `pival` service will become pod down here. This container calculated value of `pi`.
@ -593,7 +640,7 @@ services:
restart: "on-failure"
```
### Warning about Deployment Config's
### Warning about Deployment Configurations
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to prevent multiple instances of a service from accessing a volume at the same time.
@ -606,5 +653,3 @@ Please note that changing service name might break some `docker-compose` files.
Kompose supports Docker Compose versions 1, 2, and 3. We have limited support for versions 2.1 and 3.2 due to their experimental nature.
A full compatibility list for all three versions, including a list of all incompatible Docker Compose keys, is available in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md).

View File

@ -18,10 +18,10 @@ you to figure out what's going wrong.
## Running commands in a Pod
For many steps here you will want to see what a Pod running in the cluster
sees. The simplest way to do this is to run an interactive alpine Pod:
sees. The simplest way to do this is to run an interactive busybox Pod:
```none
kubectl run -it --rm --restart=Never alpine --image=alpine sh
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
```
{{< note >}}
@ -111,7 +111,7 @@ kubectl get pods -l app=hostnames \
10.244.0.7
```
The example container used for this walk-through simply serves its own hostname
The example container used for this walk-through serves its own hostname
via HTTP on port 9376, but if you are debugging your own app, you'll want to
use whatever port number your Pods are listening on.
@ -421,7 +421,7 @@ Earlier you saw that the Pods were running. You can re-check that:
kubectl get pods -l app=hostnames
```
```none
NAME READY STATUS RESTARTS AGE
NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 1h
hostnames-632524106-ly40y 1/1 Running 0 1h
hostnames-632524106-tlaok 1/1 Running 0 1h

View File

@ -1,94 +0,0 @@
---
reviewers:
- piosz
- x13n
content_type: concept
title: Events in Stackdriver
---
<!-- overview -->
Kubernetes events are objects that provide insight into what is happening
inside a cluster, such as what decisions were made by scheduler or why some
pods were evicted from the node. You can read more about using events
for debugging your application in the [Application Introspection and Debugging
](/docs/tasks/debug-application-cluster/debug-application-introspection/)
section.
Since events are API objects, they are stored in the apiserver on master. To
avoid filling up master's disk, a retention policy is enforced: events are
removed one hour after the last occurrence. To provide longer history
and aggregation capabilities, a third party solution should be installed
to capture events.
This article describes a solution that exports Kubernetes events to
Stackdriver Logging, where they can be processed and analyzed.
{{< note >}}
It is not guaranteed that all events happening in a cluster will be
exported to Stackdriver. One possible scenario when events will not be
exported is when event exporter is not running (e.g. during restart or
upgrade). In most cases it's fine to use events for purposes like setting up
[metrics](https://cloud.google.com/logging/docs/logs-based-metrics/) and [alerts](https://cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts), but you should be aware
of the potential inaccuracy.
{{< /note >}}
<!-- body -->
## Deployment
### Google Kubernetes Engine
In Google Kubernetes Engine, if cloud logging is enabled, event exporter
is deployed by default to the clusters with master running version 1.7 and
higher. To prevent disturbing your workloads, event exporter does not have
resources set and is in the best effort QOS class, which means that it will
be the first to be killed in the case of resource starvation. If you want
your events to be exported, make sure you have enough resources to facilitate
the event exporter pod. This may vary depending on the workload, but on
average, approximately 100Mb RAM and 100m CPU is needed.
### Deploying to the Existing Cluster
Deploy event exporter to your cluster using the following command:
```shell
kubectl apply -f https://k8s.io/examples/debug/event-exporter.yaml
```
Since event exporter accesses the Kubernetes API, it requires permissions to
do so. The following deployment is configured to work with RBAC
authorization. It sets up a service account and a cluster role binding
to allow event exporter to read events. To make sure that event exporter
pod will not be evicted from the node, you can additionally set up resource
requests. As mentioned earlier, 100Mb RAM and 100m CPU should be enough.
{{< codenew file="debug/event-exporter.yaml" >}}
## User Guide
Events are exported to the `GKE Cluster` resource in Stackdriver Logging.
You can find them by selecting an appropriate option from a drop-down menu
of available resources:
<img src="/images/docs/stackdriver-event-exporter-resource.png" alt="Events location in the Stackdriver Logging interface" width="500">
You can filter based on the event object fields using Stackdriver Logging
[filtering mechanism](https://cloud.google.com/logging/docs/view/advanced_filters).
For example, the following query will show events from the scheduler
about pods from deployment `nginx-deployment`:
```
resource.type="gke_cluster"
jsonPayload.kind="Event"
jsonPayload.source.component="default-scheduler"
jsonPayload.involvedObject.name:"nginx-deployment"
```
{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="Filtered events in the Stackdriver Logging interface" width="500" >}}

View File

@ -1,126 +0,0 @@
---
reviewers:
- piosz
- x13n
content_type: concept
title: Logging Using Elasticsearch and Kibana
---
<!-- overview -->
On the Google Compute Engine (GCE) platform, the default logging support targets
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).
This article describes how to set up a cluster to ingest logs into
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
Stackdriver Logging when running on GCE.
{{< note >}}
You cannot automatically deploy Elasticsearch and Kibana in the Kubernetes cluster hosted on Google Kubernetes Engine. You have to deploy them manually.
{{< /note >}}
<!-- body -->
To use Elasticsearch and Kibana for cluster logging, you should set the
following environment variable as shown below when creating your cluster with
kube-up.sh:
```shell
KUBE_LOGGING_DESTINATION=elasticsearch
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
Now, when you create a cluster, a message will indicate that the Fluentd log
collection daemons that run on each node will target Elasticsearch:
```shell
cluster/kube-up.sh
```
```
...
Project: kubernetes-satnam
Zone: us-central1-b
... calling kube-up
Project: kubernetes-satnam
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
Looking for already existing resources
Starting master and configuring firewalls
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```
The per-node Fluentd pods, the Elasticsearch pods, and the Kibana pods should
all be running in the kube-system namespace soon after the cluster comes to
life.
```shell
kubectl get pods --namespace=kube-system
```
```
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
```
The `fluentd-elasticsearch` pods gather logs from each node and send them to
the `elasticsearch-logging` pods, which are part of a
[service](/docs/concepts/services-networking/service/) named `elasticsearch-logging`. These
Elasticsearch pods store the logs and expose them via a REST API.
The `kibana-logging` pod provides a web UI for reading the logs stored in
Elasticsearch, and is part of a service named `kibana-logging`.
The Elasticsearch and Kibana services are both in the `kube-system` namespace
and are not directly exposed via a publicly reachable IP address. To reach them,
follow the instructions for
[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).
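For example, one way to reach these services from your workstation is through the API
server proxy. This is only a sketch; the URL path follows the service-proxy convention
described in the page linked above:
```shell
# Start a local proxy to the API server
kubectl proxy --port=8001 &

# The Elasticsearch status page is then reachable through the proxy
curl http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```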
If you try accessing the `elasticsearch-logging` service in your browser, you'll
see a status page that looks something like this:
![Elasticsearch Status](/images/docs/es-browser.png)
You can now type Elasticsearch queries directly into the browser, if you'd
like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
for more details on how to do so.
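For instance, a simple URI search through the proxy started above might look like the
following; the query string is only an illustration:
```shell
# Return a handful of documents from the logging indices
curl 'http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=*:*&size=3&pretty'
```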
Alternatively, you can view your cluster's logs using Kibana (again using the
[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
The first time you visit the Kibana URL you will be presented with a page that
asks you to configure your view of the ingested logs. Select the option for
timeseries values and select `@timestamp`. On the following page select the
`Discover` tab and then you should be able to see the ingested logs.
You can set the refresh interval to 5 seconds to have the logs
regularly refreshed.
Here is a typical view of ingested logs from the Kibana viewer:
![Kibana logs](/images/docs/kibana-logs.png)
## {{% heading "whatsnext" %}}
Kibana opens up all sorts of powerful options for exploring your logs! For some
ideas on how to dig into it, check out [Kibana's documentation](https://www.elastic.co/guide/en/kibana/current/discover.html).

View File

@ -12,20 +12,15 @@ content_type: task
This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). If you think of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, then plugins are a means of combining these building blocks into more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
## {{% heading "prerequisites" %}}
You need to have a working `kubectl` binary installed.
<!-- steps -->
## Installing kubectl plugins
A plugin is nothing more than a standalone executable file, whose name begins with `kubectl-`. To install a plugin, simply move its executable file to anywhere on your `PATH`.
A plugin is a standalone executable file, whose name begins with `kubectl-`. To install a plugin, move its executable file to anywhere on your `PATH`.
You can also discover and install open-source kubectl plugins
using [Krew](https://krew.dev/). Krew is a plugin manager maintained by
@ -60,9 +55,9 @@ You can write a plugin in any programming language or script that allows you to
There is no plugin installation or pre-loading required. Plugin executables receive
the inherited environment from the `kubectl` binary.
A plugin determines which command path it wishes to implement based on its name. For
example, a plugin wanting to provide a new command `kubectl foo`, would simply be named
`kubectl-foo`, and live somewhere in your `PATH`.
A plugin determines which command path it wishes to implement based on its name.
For example, a plugin named `kubectl-foo` provides a command `kubectl foo`. You must
install the plugin executable somewhere in your `PATH`.
### Example plugin
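A minimal plugin consistent with the usage shown below could be a small shell script;
treat this as a sketch rather than the exact example file:
```shell
#!/bin/bash

# Optional argument handling: respond to "kubectl foo version"
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

echo "I am a plugin named kubectl-foo"
```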
@ -88,32 +83,34 @@ echo "I am a plugin named kubectl-foo"
### Using a plugin
To use the above plugin, simply make it executable:
To use a plugin, make the plugin executable:
```
```shell
sudo chmod +x ./kubectl-foo
```
and place it anywhere in your `PATH`:
```
```shell
sudo mv ./kubectl-foo /usr/local/bin
```
You may now invoke your plugin as a `kubectl` command:
```
```shell
kubectl foo
```
```
I am a plugin named kubectl-foo
```
All args and flags are passed as-is to the executable:
```
```shell
kubectl foo version
```
```
1.0.0
```
@ -124,6 +121,7 @@ All environment variables are also passed as-is to the executable:
export KUBECONFIG=~/.kube/config
kubectl foo config
```
```
/home/<user>/.kube/config
```
@ -131,6 +129,7 @@ kubectl foo config
```shell
KUBECONFIG=/etc/kube/config kubectl foo config
```
```
/etc/kube/config
```
@ -376,16 +375,11 @@ set up a build environment (if it needs compiling), and deploy the plugin.
If you also make compiled packages available, or use Krew, that will make
installs easier.
## {{% heading "whatsnext" %}}
* Check the Sample CLI Plugin repository for a
[detailed example](https://github.com/kubernetes/sample-cli-plugin) of a
plugin written in Go.
In case of any questions, feel free to reach out to the
[SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli).
* Read about [Krew](https://krew.dev/), a package manager for kubectl plugins.

View File

@ -583,14 +583,13 @@ and can optionally include a custom CA bundle to use to verify the TLS connectio
The `host` should not refer to a service running in the cluster; use
a service reference by specifying the `service` field instead.
The host might be resolved via external DNS in some apiservers
(i.e., `kube-apiserver` cannot resolve in-cluster DNS as that would
be a layering violation). `host` may also be an IP address.
Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an apiserver which might need to make calls to this
webhook. Such installs are likely to be non-portable, i.e., not easy
to turn up in a new cluster.
webhook. Such installations are likely to be non-portable or not readily reproducible in a new cluster.
The scheme must be "https"; the URL must begin with "https://".

View File

@ -35,7 +35,7 @@ non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/).
## Starting a message queue service
This example uses RabbitMQ, but it should be easy to adapt to another AMQP-type message service.
This example uses RabbitMQ; however, you can adapt it to use another AMQP-type message service.
In practice you could set up a message queue service once in a
cluster and reuse it for many jobs, as well as for long-running services.
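If you just want something to experiment with, one quick way to stand up RabbitMQ is with
imperative `kubectl` commands; this is a sketch, not the manifest used elsewhere in this task:
```shell
# Run a single RabbitMQ instance and expose it inside the cluster on the AMQP port
kubectl create deployment rabbitmq --image=rabbitmq:3
kubectl expose deployment rabbitmq --port=5672
```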

View File

@ -12,7 +12,7 @@ based on a common template. You can use this approach to process batches of work
parallel.
For this example there are only three items: _apple_, _banana_, and _cherry_.
The sample Jobs process each item simply by printing a string then pausing.
The sample Jobs process each item by printing a string then pausing.
See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
this pattern fits more realistic use cases.
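As a sketch of the idea, assuming a template file named `job-tmpl.yaml` that contains an
`$ITEM` placeholder (both names are assumptions for illustration), the per-item Jobs can be
generated and created with a small shell loop:
```shell
# Expand the template once per work item
mkdir -p ./jobs
for item in apple banana cherry
do
  sed "s/\$ITEM/$item/" job-tmpl.yaml > ./jobs/job-$item.yaml
done

# Create all of the generated Jobs
kubectl create -f ./jobs
```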

View File

@ -66,7 +66,7 @@ Use caution when deleting a PVC, as it may lead to data loss.
### Complete deletion of a StatefulSet
To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
```shell
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')

View File

@ -191,7 +191,7 @@ We can create a new autoscaler using `kubectl create` command.
We can list autoscalers with `kubectl get hpa` and get a detailed description with `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
In addition, there is a special `kubectl autoscale` command for creating a HorizontalPodAutoscaler object.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for the ReplicaSet *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
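For example, a quick sketch of that workflow (the ReplicaSet name `foo` is just the
placeholder used in the text above):
```shell
# Create the autoscaler for ReplicaSet foo, then inspect it
kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80
kubectl get hpa foo
kubectl describe hpa foo
```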
@ -221,9 +221,9 @@ the global HPA settings exposed as flags for the `kube-controller-manager` compo
Starting from v1.12, a new algorithmic update removes the need for the
upscale delay.
- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
duration that specifies how long the autoscaler has to wait before another
downscale operation can be performed after the current one has completed.
- `--horizontal-pod-autoscaler-downscale-stabilization`: Specifies the duration of the
downscale stabilization time window. Horizontal Pod Autoscaler remembers
the historical recommended sizes and only acts on the largest size within this time window.
The default value is 5 minutes (`5m0s`).
{{< note >}}
@ -383,7 +383,12 @@ behavior:
periodSeconds: 60
```
When the number of pods is more than 40 the second policy will be used for scaling down.
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
The first policy _(Pods)_ allows at most 4 replicas to be scaled down in one minute. The second policy
_(Percent)_ allows at most 10% of the current replicas to be scaled down in one minute.
Since by default the policy which allows the highest amount of change is selected, the second policy will
only be used when the number of pod replicas is more than 40. With 40 or fewer replicas, the first policy will be applied.
For instance, if there are 80 replicas and the target has to be scaled down to 10 replicas,
then during the first step 8 replicas will be reduced. In the next iteration, when the number
of replicas is 72, 10% of the pods is 7.2, but the number is rounded up to 8. On each loop of
@ -391,10 +396,6 @@ the autoscaler controller the number of pods to be changed is re-calculated based
of current replicas. When the number of replicas falls below 40, the first policy _(Pods)_ is applied
and 4 replicas will be reduced at a time.
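As a quick check of the rounding described above, 10% of 72 replicas is 7.2, which rounds up
to 8; a one-liner reproducing the ceiling calculation:
```shell
# Ceiling of 10% of 72 replicas: (72*10 + 99) / 100 = 8
echo $(( (72 * 10 + 99) / 100 ))
```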
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
allows at most 10% of the current replicas to be scaled down in one minute.
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
direction. Setting the value to `Min` selects the policy which allows the
smallest change in the replica count. Setting the value to `Disabled` completely disables
@ -441,7 +442,7 @@ behavior:
periodSeconds: 15
selectPolicy: Max
```
For scaling down the stabilization window is _300_ seconds(or the value of the
For scaling down the stabilization window is _300_ seconds (or the value of the
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
for scaling down, which allows 100% of the currently running replicas to be removed; this
means the scaling target can be scaled down to the minimum allowed replicas.

Some files were not shown because too many files have changed in this diff.