commit e6ee962e9f
@@ -92,15 +92,21 @@ aliases:
     - daminisatya
     - mittalyashu
   sig-docs-id-owners: # Admins for Indonesian content
-    - girikuncoro
-    - irvifa
-  sig-docs-id-reviews: # PR reviews for Indonesian content
+    - ariscahyadi
+    - danninov
     - girikuncoro
     - habibrosyad
     - irvifa
-    - wahyuoi
     - phanama
+    - wahyuoi
+  sig-docs-id-reviews: # PR reviews for Indonesian content
+    - ariscahyadi
     - danninov
+    - girikuncoro
+    - habibrosyad
+    - irvifa
+    - phanama
+    - wahyuoi
   sig-docs-it-owners: # Admins for Italian content
     - fabriziopandini
     - Fale

README-pt.md
@@ -1,76 +1,184 @@
 # A documentação do Kubernetes
 
-[](https://travis-ci.org/kubernetes/website)
-[](https://github.com/kubernetes/website/releases/latest)
+[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
 
-Bem vindos! Este repositório abriga todos os recursos necessários para criar o [site e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
+Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
 
-## Contribuindo com os documentos
+# Utilizando este repositório
 
-Você pode clicar no botão **Fork** na área superior direita da tela para criar uma cópia desse repositório na sua conta do GitHub. Esta cópia é chamada de *fork*. Faça as alterações desejadas no seu fork e, quando estiver pronto para enviar as alterações para nós, vá até o fork e crie uma nova solicitação de pull para nos informar sobre isso.
+Você pode executar o website localmente utilizando o Hugo (versão Extended), ou pode executá-lo em um container runtime. É altamente recomendável utilizar um container runtime, pois garante a consistência com a implantação do website real.
 
-Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para abordar o feedback que foi fornecido a você pelo revisor do Kubernetes.** Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback ou você pode acabar obtendo feedback de um revisor do Kubernetes que é diferente daquele originalmente designado para lhe fornecer feedback. Além disso, em alguns casos, um de seus revisores pode solicitar uma revisão técnica de um [revisor de tecnologia Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) quando necessário. Os revisores farão o melhor para fornecer feedback em tempo hábil, mas o tempo de resposta pode variar de acordo com as circunstâncias.
+## Pré-requisitos
 
+Para usar este repositório, você precisa instalar:
+
+- [npm](https://www.npmjs.com/)
+- [Go](https://golang.org/)
+- [Hugo (versão Extended)](https://gohugo.io/)
+- Um container runtime, por exemplo [Docker](https://www.docker.com/).
+
+Antes de você iniciar, instale as dependências, clone o repositório e navegue até o diretório:
+
+```
+git clone https://github.com/kubernetes/website.git
+cd website
+```
+
+O website do Kubernetes utiliza o [tema Docsy Hugo](https://github.com/google/docsy#readme). Mesmo que você planeje executar o website em um container, é altamente recomendado baixar os submódulos e outras dependências executando o seguinte comando:
+
+```
+# Baixar o submódulo Docsy
+git submodule update --init --recursive --depth 1
+```
+
+## Executando o website usando um container
+
+Para executar o build do website em um container, execute o comando abaixo para criar a imagem do container e executá-la:
+
+```
+make container-image
+make container-serve
+```
+
+Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fonte, o Hugo atualiza o website e força uma atualização do navegador.
+
+## Executando o website localmente utilizando o Hugo
+
+Consulte a [documentação oficial do Hugo](https://gohugo.io/getting-started/installing/) para instruções de instalação do Hugo. Certifique-se de instalar a versão do Hugo especificada pela variável de ambiente `HUGO_VERSION` no arquivo [`netlify.toml`](netlify.toml#L9).
+
+Para executar o build e testar o website localmente, execute:
+
+```bash
+# instalar dependências
+npm ci
+make serve
+```
+
+Isso iniciará localmente o Hugo na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fonte, o Hugo atualiza o website e força uma atualização no navegador.
+
+## Construindo a página de referência da API
+
+A página de referência da API localizada em `content/en/docs/reference/kubernetes-api` é construída a partir da especificação do Swagger utilizando https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.
+
+Siga os passos abaixo para atualizar a página de referência para uma nova versão do Kubernetes:
+
+OBS: modifique "v1.20" nos exemplos a seguir para a versão a ser atualizada.
+
+1. Obter o submódulo `kubernetes-resources-reference`:
+
+```
+git submodule update --init --recursive --depth 1
+```
+
+2. Criar a nova versão da API no submódulo e adicionar a especificação do Swagger:
+
+```
+mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
+curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
+```
+
+3. Copiar o sumário e os campos de configuração para a nova versão a partir da versão anterior:
+
+```
+mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
+cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
+```
+
+4. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.
+
+5. Em seguida, gerar as páginas:
+
+```
+make api-reference
+```
+
+Você pode validar o resultado localmente gerando e disponibilizando o website a partir da imagem do container:
+
+```
+make container-image
+make container-serve
+```
+
+Abra o seu navegador em http://localhost:1313/docs/reference/kubernetes-api/ para visualizar a página de referência da API.
+
+6. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência da API.
+
+## Troubleshooting
+
+### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
+
+Por motivos técnicos, o Hugo é disponibilizado em dois conjuntos de binários. O website atual funciona apenas na versão **Hugo Extended**. Na [página de releases](https://github.com/gohugoio/hugo/releases), procure por arquivos com `extended` no nome. Para confirmar, execute `hugo version` e procure pela palavra `extended`.
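Uma forma rápida de verificar (a saída exata varia conforme a versão instalada; a linha abaixo é apenas ilustrativa):

```bash
hugo version
# hugo v0.XX.X+extended darwin/amd64 (o sufixo "+extended" confirma o binário correto)
```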
+
+### Troubleshooting macOS for too many open files
+
+Se você executar o comando `make serve` no macOS e receber o seguinte erro:
+
+```
+ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
+make: *** [serve] Error 1
+```
+
+Verifique o limite atual para arquivos abertos:
+
+`launchctl limit maxfiles`
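A saída tem o formato abaixo (valores meramente ilustrativos; eles variam de sistema para sistema):

```bash
launchctl limit maxfiles
# maxfiles    256    unlimited
```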
+
+Em seguida, execute os seguintes comandos (adaptados de https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
+
+```shell
+#!/bin/sh
+
+# Estes são os links do gist original, que agora apontam para os meus gists.
+# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
+# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
+
+curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
+curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
+
+sudo mv limit.maxfiles.plist /Library/LaunchDaemons
+sudo mv limit.maxproc.plist /Library/LaunchDaemons
+
+sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
+sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
+
+sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
+```
+
+Esta solução funciona tanto para o macOS Catalina quanto para o macOS Mojave.
+
+# Comunidade, discussão, contribuição e apoio
+
+Saiba mais sobre a comunidade Kubernetes SIG Docs e suas reuniões na [página da comunidade](http://kubernetes.io/community/).
+
+Você também pode entrar em contato com os mantenedores deste projeto em:
+
+- [Slack](https://kubernetes.slack.com/messages/sig-docs) ([obtenha o convite para este Slack](https://slack.k8s.io/))
+- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
+
+# Contribuindo com os documentos
+
+Você pode clicar no botão **Fork** na área superior direita da tela para criar uma cópia desse repositório na sua conta do GitHub. Esta cópia é chamada de *fork*. Faça as alterações desejadas no seu fork e, quando estiver pronto para enviar as alterações para nós, vá até o fork e crie um novo **pull request** para nos informar sobre isso.
+
+Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para atender ao feedback que foi fornecido a você pelo revisor do Kubernetes.**
+
+Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback, ou pode acabar obtendo feedback de um revisor do Kubernetes diferente daquele originalmente designado.
+
+Além disso, em alguns casos, um de seus revisores pode solicitar uma revisão técnica de um [revisor técnico do Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) quando necessário. Os revisores farão o melhor para fornecer feedback em tempo hábil, mas o tempo de resposta pode variar de acordo com as circunstâncias.
+
 Para mais informações sobre como contribuir com a documentação do Kubernetes, consulte:
 
-* [Comece a contribuir](https://kubernetes.io/docs/contribute/start/)
-* [Preparando suas alterações na documentação](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
-* [Usando Modelos de Página](http://kubernetes.io/docs/contribute/style/page-templates/)
+* [Contribua com a documentação do Kubernetes](https://kubernetes.io/docs/contribute/)
+* [Tipos de conteúdo de página](https://kubernetes.io/docs/contribute/style/page-content-types/)
 * [Guia de Estilo da Documentação](http://kubernetes.io/docs/contribute/style/style-guide/)
 * [Localizando documentação do Kubernetes](https://kubernetes.io/docs/contribute/localization/)
 
-Você pode contactar os mantenedores da localização em Português em:
+Você pode contatar os mantenedores da localização em Português em:
 
 * Felipe ([GitHub - @femrtnz](https://github.com/femrtnz))
 * [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-pt)
 
-## Executando o site localmente usando o Docker
+# Código de conduta
 
-A maneira recomendada de executar o site do Kubernetes localmente é executar uma imagem especializada do [Docker](https://docker.com) que inclui o gerador de site estático [Hugo](https://gohugo.io).
-
-> Se você está rodando no Windows, você precisará de mais algumas ferramentas que você pode instalar com o [Chocolatey](https://chocolatey.org). `choco install make`
-
-> Se você preferir executar o site localmente sem o Docker, consulte [Executando o site localmente usando o Hugo](#executando-o-site-localmente-usando-o-hugo) abaixo.
-
-Se você tiver o Docker [em funcionamento](https://www.docker.com/get-started), crie a imagem do Docker do `kubernetes-hugo` localmente:
-
-```bash
-make container-image
-```
-
-Depois que a imagem foi criada, você pode executar o site localmente:
-
-```bash
-make container-serve
-```
-
-Abra seu navegador para http://localhost:1313 para visualizar o site. Conforme você faz alterações nos arquivos de origem, Hugo atualiza o site e força a atualização do navegador.
-
-## Executando o site localmente usando o Hugo
-
-Veja a [documentação oficial do Hugo](https://gohugo.io/getting-started/installing/) para instruções de instalação do Hugo. Certifique-se de instalar a versão do Hugo especificada pela variável de ambiente `HUGO_VERSION` no arquivo [`netlify.toml`](netlify.toml#L9).
-
-Para executar o site localmente quando você tiver o Hugo instalado:
-
-```bash
-make serve
-```
-
-Isso iniciará o servidor Hugo local na porta 1313. Abra o navegador para http://localhost:1313 para visualizar o site. Conforme você faz alterações nos arquivos de origem, Hugo atualiza o site e força a atualização do navegador.
-
-## Comunidade, discussão, contribuição e apoio
-
-Aprenda a se envolver com a comunidade do Kubernetes na [página da comunidade](http://kubernetes.io/community/).
-
-Você pode falar com os mantenedores deste projeto:
-
-- [Slack](https://kubernetes.slack.com/messages/sig-docs)
-- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
-
-### Código de conduta
 
 A participação na comunidade Kubernetes é regida pelo [Código de Conduta do Kubernetes](code-of-conduct.md).
 
-## Obrigado!
+# Obrigado!
 
-O Kubernetes conta com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso site e nossa documentação!
+O Kubernetes prospera com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
 
@@ -100,6 +100,8 @@ make container-image
 make container-serve
 ```
 
+In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.
+
 6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
 
 ## Troubleshooting
 
@@ -810,6 +810,13 @@ section#cncf {
   }
 }
 
+.td-search {
+  header > .header-filler {
+    height: $hero-padding-top;
+    background-color: black;
+  }
+}
+
 // Docs specific
 
 #editPageButton {
 
@@ -19,6 +19,7 @@ cid: community
 
 <div class="community__navbar">
 
+        <a href="#values">Community Values</a>
         <a href="#conduct">Code of conduct </a>
         <a href="#videos">Videos</a>
         <a href="#discuss">Discussions</a>
 
@@ -41,10 +42,28 @@ cid: community
 <img src="/images/community/kubernetes-community-final-05.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-right:0% important" class="desktop">
 </div>
 <img src="/images/community/kubernetes-community-04-mobile.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-bottom:3%" class="mobile">
-<a name="conduct"></a>
+<a name="values"></a>
 </div>
 
+<div><a name="values"></a></div>
+<div class="conduct">
+<div class="conducttext">
+<br class="mobile"><br class="mobile">
+<br class="tablet"><br class="tablet">
+<div class="conducttextnobutton" style="margin-bottom:2%"><h1>Community Values</h1>
+The Kubernetes Community values are the keystone to the ongoing success of the project.<br>
+These principles guide every aspect of the Kubernetes project.
+<br>
+<a href="/community/values/">
+<br class="mobile"><br class="mobile">
+<span class="fullbutton">
+READ MORE
+</span>
+</a>
+</div><a name="conduct"></a>
+</div>
+</div>
 
 
 
 <div class="conduct">
 
@@ -0,0 +1,28 @@
+<!-- Do not edit this file directly. Get the latest from
+https://git.k8s.io/community/values.md -->
+
+# Kubernetes Community Values
+
+Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values which have evolved over the last many years in our community pushing our project and peers toward constant improvement.
+
+## Distribution is better than centralization
+
+The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstone of our world-wide community.
+
+## Community over product or company
+
+We are here as a community first, our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work, companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.
+
+## Automation over process
+
+Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.
+
+## Inclusive is better than exclusive
+
+Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path.
+
+## Evolution is better than stagnation
+
+Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty for leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.
+
+**"Culture eats strategy for breakfast." --Peter Drucker**
 
@@ -0,0 +1,13 @@
+---
+title: Community
+layout: basic
+cid: community
+css: /css/community.css
+---
+
+<div class="community_main">
+
+<div class="cncf_coc_container">
+{{< include "/static/community-values.md" >}}
+</div>
+</div>
 
@@ -102,7 +102,7 @@ Other control loops can observe that reported data and take their own actions.
 In the thermostat example, if the room is very cold then a different controller
 might also turn on a frost protection heater. With Kubernetes clusters, the control
 plane indirectly works with IP address management tools, storage services,
-cloud provider APIS, and other services by
+cloud provider APIs, and other services by
 [extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
 
 ## Desired versus current state {#desired-vs-current}
 
@@ -11,9 +11,10 @@ weight: 10
 
 Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
 A node may be a virtual or physical machine, depending on the cluster. Each node
-contains the services necessary to run
-{{< glossary_tooltip text="Pods" term_id="pod" >}}, managed by the
-{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
+is managed by the
+{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+and contains the services necessary to run
+{{< glossary_tooltip text="Pods" term_id="pod" >}}
 
 Typically you have several nodes in a cluster; in a learning or resource-limited
 environment, you might have just one.
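For context, you can list the nodes that the control plane currently manages (output is cluster-specific):

```shell
kubectl get nodes
```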
 
@@ -26,12 +26,12 @@ See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and
 
 Before choosing a guide, here are some considerations:
 
-- Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
+- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
 - Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
 - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
 - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
 - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
-- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
+- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
 latter, choose an actively-developed distro. Some distros only use binary releases, but
 offer a greater variety of choices.
 - Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
 
@@ -9,23 +9,22 @@ weight: 60
 
 <!-- overview -->
 
-Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
+Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
 
-However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
+However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
+For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
+In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
 
 <!-- body -->
 
-Cluster-level logging architectures are described in assumption that
-a logging backend is present inside or outside of your cluster. If you're
-not interested in having cluster-level logging, you might still find
-the description of how logs are stored and handled on the node to be useful.
+Cluster-level logging architectures require a separate backend to store, analyze, and query logs. Kubernetes
+does not provide a native storage solution for log data. Instead, there are many logging solutions that
+integrate with Kubernetes. The following sections describe how to handle and store logs on nodes.
 
 ## Basic logging in Kubernetes
 
-In this section, you can see an example of basic logging in Kubernetes that
-outputs data to the standard output stream. This demonstration uses
-a pod specification with a container that writes some text to standard output
-once per second.
+This example uses a `Pod` specification with a container
+to write text to the standard output stream once per second.
 
 {{< codenew file="debug/counter-pod.yaml" >}}
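The referenced `debug/counter-pod.yaml` is not reproduced in this diff; a minimal sketch of such a counter pod (values assumed from the surrounding text) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Print an incrementing counter with a timestamp once per second.
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```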
 
@@ -34,8 +33,10 @@ To run this pod, use the following command:
 
 ```shell
 kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
 ```
 
 The output is:
-```
+
+```console
 pod/counter created
 ```
 
@@ -44,73 +45,73 @@ To fetch the logs, use the `kubectl logs` command, as follows:
 
 ```shell
 kubectl logs counter
 ```
 
 The output is:
-```
+
+```console
 0: Mon Jan 1 00:00:00 UTC 2001
 1: Mon Jan 1 00:00:01 UTC 2001
 2: Mon Jan 1 00:00:02 UTC 2001
 ...
 ```
 
-You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
+You can use `kubectl logs --previous` to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
 
 ## Logging at the node level
 
 ![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
 
-Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.
+A container engine handles and redirects any output written to a containerized application's `stdout` and `stderr` streams.
+For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in JSON format.
 
 {{< note >}}
-The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
+The Docker JSON logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
 {{< /note >}}
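For reference, an entry written by Docker's `json-file` driver has roughly this shape (abbreviated; exact fields depend on the driver configuration):

```json
{"log": "2: Mon Jan 1 00:00:02 UTC 2001\n", "stream": "stdout", "time": "2001-01-01T00:00:02.000000000Z"}
```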
 
 By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
 
 An important consideration in node-level logging is implementing log rotation,
 so that logs don't consume all available storage on the node. Kubernetes
-currently is not responsible for rotating logs, but rather a deployment tool
+is not responsible for rotating logs, but rather a deployment tool
 should set up a solution to address that.
 For example, in Kubernetes clusters deployed by the `kube-up.sh` script,
 there is a [`logrotate`](https://linux.die.net/man/8/logrotate)
 tool configured to run each hour. You can also set up a container runtime to
-rotate application's logs automatically, for example by using Docker's `log-opt`.
-In the `kube-up.sh` script, the latter approach is used for COS image on GCP,
-and the former approach is used in any other environment. In both cases, by
-default rotation is configured to take place when log file exceeds 10MB.
+rotate an application's logs automatically.
 
 As an example, you can find detailed information about how `kube-up.sh` sets
 up logging for COS image on GCP in the corresponding
-[script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
+[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
 
 When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
 the basic logging example, the kubelet on the node handles the request and
-reads directly from the log file, returning the contents in the response.
+reads directly from the log file. The kubelet returns the content of the log file.
 
 {{< note >}}
-Currently, if some external system has performed the rotation,
+If an external system has performed the rotation,
 only the contents of the latest log file will be available through
-`kubectl logs`. E.g. if there's a 10MB file, `logrotate` performs
-the rotation and there are two files, one 10MB in size and one empty,
-`kubectl logs` will return an empty response.
+`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
+the rotation and there are two files: one file that is 10MB in size and a second file that is empty.
+`kubectl logs` returns the latest log file, which in this example is an empty response.
 {{< /note >}}
 
-[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
 
 ### System component logs
 
 There are two types of system components: those that run in a container and those
 that do not run in a container. For example:
 
 * The Kubernetes scheduler and kube-proxy run in a container.
-* The kubelet and container runtime, for example Docker, do not run in containers.
+* The kubelet and container runtime do not run in containers.
 
 On machines with systemd, the kubelet and container runtime write to journald. If
-systemd is not present, they write to `.log` files in the `/var/log` directory.
-System components inside containers always write to the `/var/log` directory,
-bypassing the default logging mechanism. They use the [klog](https://github.com/kubernetes/klog)
+systemd is not present, the kubelet and container runtime write to `.log` files
+in the `/var/log` directory. System components inside containers always write
+to the `/var/log` directory, bypassing the default logging mechanism.
+They use the [`klog`](https://github.com/kubernetes/klog)
 logging library. You can find the conventions for logging severity for those
 components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
 
-Similarly to the container logs, system component logs in the `/var/log`
+Similar to the container logs, system component logs in the `/var/log`
 directory should be rotated. In Kubernetes clusters brought up by
 the `kube-up.sh` script, those logs are configured to be rotated by
 the `logrotate` tool daily or once the size exceeds 100MB.
 
@@ -129,13 +130,14 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
 
 You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
 
-Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged.
+Because the logging agent must run on every node, it is recommended to run the agent
+as a `DaemonSet`.
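A minimal sketch of such a node-level agent run as a `DaemonSet` (the image and names here are placeholders, not a specific supported agent):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent          # hypothetical name
spec:
  selector:
    matchLabels:
      name: logging-agent
  template:
    metadata:
      labels:
        name: logging-agent
    spec:
      containers:
      - name: agent
        image: registry.example/logging-agent:1.0   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true       # the agent only reads node log files
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```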
 
-Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
+Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
 
-Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents. Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node.
+Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
 
-### Using a sidecar container with the logging agent
+### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
 
 You can use a sidecar container in one of the following ways:
 
@@ -146,28 +148,27 @@ You can use a sidecar container in one of the following ways:
 
 ![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)
 
-By having your sidecar containers stream to their own `stdout` and `stderr`
+By having your sidecar containers write to their own `stdout` and `stderr`
 streams, you can take advantage of the kubelet and the logging agent that
 already run on each node. The sidecar containers read logs from a file, a socket,
-or the journald. Each individual sidecar container prints log to its own `stdout`
-or `stderr` stream.
+or journald. Each sidecar container prints a log to its own `stdout` or `stderr` stream.
 
 This approach allows you to separate several log streams from different
 parts of your application, some of which can lack support
 for writing to `stdout` or `stderr`. The logic behind redirecting logs
-is minimal, so it's hardly a significant overhead. Additionally, because
+is minimal, so it's not a significant overhead. Additionally, because
 `stdout` and `stderr` are handled by the kubelet, you can use built-in tools
 like `kubectl logs`.
 
-Consider the following example. A pod runs a single container, and the container
-writes to two different log files, using two different formats. Here's a
+For example, a pod runs a single container, and the container
+writes to two different log files using two different formats. Here's a
 configuration file for the Pod:
 
 {{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
 
-It would be a mess to have log entries of different formats in the same log
+It is not recommended to write log entries with different formats to the same log
 stream, even if you managed to redirect both components to the `stdout` stream of
-the container. Instead, you could introduce two sidecar containers. Each sidecar
+the container. Instead, you can create two sidecar containers. Each sidecar
 container could tail a particular log file from a shared volume and then redirect
 the logs to its own `stdout` stream.
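Sketched out, the two-file counter pod plus its streaming sidecars could look like the following (assuming the application writes `/var/log/1.log` and `/var/log/2.log` to a shared `emptyDir` volume, as in the referenced example files):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count          # application container writing the two files
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)" >> /var/log/1.log;
            echo "$(date) INFO $i" >> /var/log/2.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1    # sidecar: stream the first file to its own stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2    # sidecar: stream the second file to its own stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```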
 
@@ -181,7 +182,10 @@ running the following commands:
 
 ```shell
 kubectl logs counter count-log-1
 ```
-```
+
+The output is:
+
+```console
 0: Mon Jan 1 00:00:00 UTC 2001
 1: Mon Jan 1 00:00:01 UTC 2001
 2: Mon Jan 1 00:00:02 UTC 2001
 
@@ -191,7 +195,10 @@ kubectl logs counter count-log-1
 
 ```shell
 kubectl logs counter count-log-2
 ```
-```
+
+The output is:
+
+```console
 Mon Jan 1 00:00:00 UTC 2001 INFO 0
 Mon Jan 1 00:00:01 UTC 2001 INFO 1
 Mon Jan 1 00:00:02 UTC 2001 INFO 2
 
@@ -202,16 +209,15 @@ The node-level agent installed in your cluster picks up those log streams
 automatically without any further configuration. If you like, you can configure
 the agent to parse log lines depending on the source container.
 
-Note, that despite low CPU and memory usage (order of couple of millicores
-for cpu and order of several megabytes for memory), writing logs to a file and
+Note that despite low CPU and memory usage (on the order of a couple of millicores
+for CPU and several megabytes for memory), writing logs to a file and
 then streaming them to `stdout` can double disk usage. If you have
-an application that writes to a single file, it's generally better to set
-`/dev/stdout` as destination rather than implementing the streaming sidecar
+an application that writes to a single file, it's recommended to set
+`/dev/stdout` as the destination rather than implement the streaming sidecar
 container approach.
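For instance (a hypothetical application and flag; the point is only the destination path):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-stdout-log
spec:
  containers:
  - name: app
    image: registry.example/app:1.0    # placeholder image
    # Assumption: the app takes a flag for its log file location, so it
    # can write straight to the container's stdout device instead of a file.
    args: ["--logfile=/dev/stdout"]
```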
 
 Sidecar containers can also be used to rotate log files that cannot be
-rotated by the application itself. An example
-of this approach is a small container running logrotate periodically.
+rotated by the application itself. An example of this approach is a small container running `logrotate` periodically.
 However, it's recommended to use `stdout` and `stderr` directly and leave rotation
 and retention policies to the kubelet.
 
@@ -226,21 +232,17 @@ configured specifically to run with your application.
 {{< note >}}
 Using a logging agent in a sidecar container can lead
 to significant resource consumption. Moreover, you won't be able to access
-those logs using `kubectl logs` command, because they are not controlled
+those logs using `kubectl logs` because they are not controlled
 by the kubelet.
 {{< /note >}}
 
-As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
-which uses fluentd as a logging agent. Here are two configuration files that
-you can use to implement this approach. The first file contains
-a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
+Here are two configuration files that you can use to implement a sidecar container with a logging agent. The first file contains
+a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
 
 {{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
 
 {{< note >}}
-The configuration of fluentd is beyond the scope of this article. For
-information about configuring fluentd, see the
-[official fluentd documentation](https://docs.fluentd.org/).
+For information about configuring fluentd, see the [fluentd documentation](https://docs.fluentd.org/).
 {{< /note >}}
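The referenced `fluentd-sidecar-config.yaml` is not shown in this diff; a sketch of such a ConfigMap, assuming the two-file counter pod above and a simple pass-through output, could look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluentd.conf: |
    # Tail each application log file from the shared volume.
    <source>
      type tail
      format none
      path /var/log/1.log
      pos_file /var/log/1.log.pos
      tag count.format1
    </source>
    <source>
      type tail
      format none
      path /var/log/2.log
      pos_file /var/log/2.log.pos
      tag count.format2
    </source>
    # Send everything to the configured output (backend-specific in practice).
    <match **>
      type stdout
    </match>
```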
 
 The second file describes a pod that has a sidecar container running fluentd.
 
@@ -248,18 +250,10 @@ The pod mounts a volume where fluentd can pick up its configuration data.
 
 {{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
 
-After some time you can find log messages in the Stackdriver interface.
-
-Remember, that this is just an example and you can actually replace fluentd
-with any logging agent, reading from any source inside an application
-container.
+In the sample configurations, you can replace fluentd with any logging agent, reading from any source inside an application container.
 
 ### Exposing logs directly from the application
 
 ![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)
 
-You can implement cluster-level logging by exposing or pushing logs directly from
-every application; however, the implementation for such a logging mechanism
-is outside the scope of Kubernetes.
+Cluster logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.
 
@@ -70,7 +70,7 @@ deployment.apps "my-nginx" deleted
 service "my-nginx-svc" deleted
 ```
 
-In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
+In the case of two resources, you can specify both resources on the command line using the resource/name syntax:
 
 ```shell
 kubectl delete deployments/my-nginx services/my-nginx-svc
 
@@ -87,10 +87,11 @@ deployment.apps "my-nginx" deleted
 service "my-nginx-svc" deleted
 ```
 
-Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
+Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`:
 
 ```shell
 kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service)
+kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service | xargs -i kubectl get {}
 ```
 
 ```shell
 
@@ -301,6 +302,7 @@ Sometimes you would want to attach annotations to resources. Annotations are arb
 kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
 kubectl get pods my-nginx-v4-9gw19 -o yaml
 ```
+
 ```shell
 apiVersion: v1
 kind: Pod
 
@@ -314,11 +316,12 @@ For more information, please see [annotations](/docs/concepts/overview/working-w
 
 ## Scaling your application
 
-When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do:
+When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do:
 
 ```shell
 kubectl scale deployment/my-nginx --replicas=1
 ```
+
 ```shell
 deployment.apps/my-nginx scaled
 ```
 
@@ -328,6 +331,7 @@ Now you only have one pod managed by the deployment.
 
 ```shell
 kubectl get pods -l app=nginx
 ```
+
 ```shell
 NAME                        READY   STATUS    RESTARTS   AGE
 my-nginx-2035384211-j5fhi   1/1     Running   0          30m
 
@@ -338,6 +342,7 @@ To have the system automatically choose the number of nginx replicas as needed,
 
 ```shell
 kubectl autoscale deployment/my-nginx --min=1 --max=3
 ```
+
 ```shell
 horizontalpodautoscaler.autoscaling/my-nginx autoscaled
 ```
 
@@ -411,6 +416,7 @@ In some cases, you may need to update resource fields that cannot be updated onc
 
 ```shell
 kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
 ```
+
 ```shell
 deployment.apps/my-nginx deleted
 deployment.apps/my-nginx replaced
||||||
|
|
@@ -427,14 +433,17 @@ Let's say you were running version 1.14.2 of nginx:
 ```shell
 kubectl create deployment my-nginx --image=nginx:1.14.2
 ```

 ```shell
 deployment.apps/my-nginx created
 ```

 with 3 replicas (so the old and new revisions can coexist):

 ```shell
 kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
 ```

 ```
 deployment.apps/my-nginx scaled
 ```
@@ -31,22 +31,24 @@ I1025 00:15:15.525108       1 httplog.go:79] GET /api/v1/namespaces/kube-system/

 {{< feature-state for_k8s_version="v1.19" state="alpha" >}}

-{{<warning>}}
+{{< warning >}}
 Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.

 Log formatting and value serialization are subject to change.
 {{< /warning>}}

-Structured logging is a effort to introduce a uniform structure in log messages allowing for easy extraction of information, making logs easier and cheaper to store and process.
+Structured logging introduces a uniform structure in log messages allowing for programmatic extraction of information. You can store and process structured logs with less effort and cost.
 New message format is backward compatible and enabled by default.

 Format of structured logs:
-```
+
+```ini
 <klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
 ```

 Example:
-```
+
+```ini
 I1025 00:15:15.525108       1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
 ```
@@ -59,13 +59,13 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN

 - Avoid using `hostNetwork`, for the same reasons as `hostPort`.

-- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for easy service discovery when you don't need `kube-proxy` load balancing.
+- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (which have a `ClusterIP` of `None`) for service discovery when you don't need `kube-proxy` load balancing.

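As an illustration of that headless pattern, a minimal Service sketch with hypothetical names and labels; with `clusterIP: None`, DNS resolves the Service name to the Pod IPs directly instead of to a proxied virtual IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None   # headless: no cluster IP, no kube-proxy load balancing
  selector:
    app: my-app
  ports:
    - port: 80
```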
 ## Using Labels

 - Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.

-A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime.
+A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).

 A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.

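A sketch of that labeling technique, reusing the example labels above: the Service selector deliberately omits the release-specific `deployment` key, so Pods from `deployment: v3` and any newer revision are all selected (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: myapp
    tier: frontend   # no `deployment: ...` label here, on purpose
  ports:
    - port: 80
```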
@@ -116,7 +116,7 @@ In this case, `0` means we have just created an empty Secret.
 A `kubernetes.io/service-account-token` type of Secret is used to store a
 token that identifies a service account. When using this Secret type, you need
 to ensure that the `kubernetes.io/service-account.name` annotation is set to an
-existing service account name. An Kubernetes controller fills in some other
+existing service account name. A Kubernetes controller fills in some other
 fields such as the `kubernetes.io/service-account.uid` annotation and the
 `token` key in the `data` field set to actual token content.

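A minimal sketch of such a Secret; the ServiceAccount name is hypothetical and must already exist:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot   # must match an existing ServiceAccount
type: kubernetes.io/service-account-token
```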
@@ -801,11 +801,6 @@ field set to that of the service account.
 See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
 for a detailed explanation of that process.

-### Automatic mounting of manually created Secrets
-
-Manually created secrets (for example, one containing a token for accessing a GitHub account)
-can be automatically attached to pods based on their service account.
-
 ## Details

 ### Restrictions

@@ -36,10 +36,13 @@ No parameters are passed to the handler.

 `PreStop`

-This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
-It is blocking, meaning it is synchronous,
-so it must complete before the signal to stop the container can be sent.
-No parameters are passed to the handler.
+This hook is called immediately before a container is terminated due to an API request or management
+event such as a liveness/startup probe failure, preemption, resource contention and others. A call
+to the `PreStop` hook fails if the container is already in a terminated or completed state and the
+hook must complete before the TERM signal to stop the container can be sent. The Pod's termination
+grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of
+the handler, the container will eventually terminate within the Pod's termination grace period. No
+parameters are passed to the handler.

 A more detailed description of the termination behavior can be found in
 [Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

@@ -65,19 +68,15 @@ the Container ENTRYPOINT and hook fire asynchronously.
 However, if the hook takes too long to run or hangs,
 the Container cannot reach a `running` state.

-`PreStop` hooks are not executed asynchronously from the signal
-to stop the Container; the hook must complete its execution before
-the signal can be sent.
-If a `PreStop` hook hangs during execution,
-the Pod's phase will be `Terminating` and remain there until the Pod is
-killed after its `terminationGracePeriodSeconds` expires.
-This grace period applies to the total time it takes for both
-the `PreStop` hook to execute and for the Container to stop normally.
-If, for example, `terminationGracePeriodSeconds` is 60, and the hook
-takes 55 seconds to complete, and the Container takes 10 seconds to stop
-normally after receiving the signal, then the Container will be killed
-before it can stop normally, since `terminationGracePeriodSeconds` is
-less than the total time (55+10) it takes for these two things to happen.
+`PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must
+complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during
+execution, the Pod's phase will be `Terminating` and remain there until the Pod is killed after its
+`terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for
+both the `PreStop` hook to execute and for the Container to stop normally. If, for example,
+`terminationGracePeriodSeconds` is 60, and the hook takes 55 seconds to complete, and the Container
+takes 10 seconds to stop normally after receiving the signal, then the Container will be killed
+before it can stop normally, since `terminationGracePeriodSeconds` is less than the total time
+(55+10) it takes for these two things to happen.

 If either a `PostStart` or `PreStop` hook fails,
 it kills the Container.

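A sketch of these pieces together, with hypothetical names: the grace period covers both the hook and normal shutdown, so the hook's work must leave the container enough time to stop:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  terminationGracePeriodSeconds: 60   # countdown starts before the hook runs
  containers:
    - name: app
      image: nginx
      lifecycle:
        preStop:
          exec:
            # must finish well within the grace period, or the container
            # is killed before it can stop normally
            command: ["/bin/sh", "-c", "sleep 5"]
```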
@@ -43,7 +43,7 @@ Each VM is a full machine running all the components, including its own operatin
 Containers have become popular because they provide extra benefits, such as:

 * Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
-* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
+* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
 * Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
 * Observability not only surfaces OS-level information and metrics, but also application health and other signals.
 * Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.

@@ -42,7 +42,7 @@ Example labels:
 * `"partition" : "customerA"`, `"partition" : "customerB"`
 * `"track" : "daily"`, `"track" : "weekly"`

-These are just examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.
+These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.

 ## Syntax and character set

@@ -31,7 +31,7 @@ When using imperative commands, a user operates directly on live objects
 in a cluster. The user provides operations to
 the `kubectl` command as arguments or flags.

-This is the simplest way to get started or to run a one-off task in
+This is the recommended way to get started or to run a one-off task in
 a cluster. Because this technique operates directly on live
 objects, it provides no history of previous configurations.

@@ -47,7 +47,7 @@ kubectl create deployment nginx --image nginx

 Advantages compared to object configuration:

-- Commands are simple, easy to learn and easy to remember.
+- Commands are expressed as a single action word.
 - Commands require only a single step to make changes to the cluster.

 Disadvantages compared to object configuration:

@@ -10,11 +10,9 @@ weight: 70

 {{< feature-state for_k8s_version="v1.15" state="alpha" >}}

-The scheduling framework is a pluggable architecture for Kubernetes Scheduler
-that makes scheduler customizations easy. It adds a new set of "plugin" APIs to
-the existing scheduler. Plugins are compiled into the scheduler. The APIs
-allow most scheduling features to be implemented as plugins, while keeping the
-scheduling "core" simple and maintainable. Refer to the [design proposal of the
+The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
+It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the
+scheduling "core" lightweight and maintainable. Refer to the [design proposal of the
 scheduling framework][kep] for more technical information on the design of the
 framework.

@@ -74,7 +74,7 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but

 If your cluster has dual-stack enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.

-The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager).
+The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver).

 When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you
 set the `.spec.ipFamilyPolicy` field to one of the following values:

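One of those values in context, as a sketch (Service name and labels are hypothetical; `PreferDualStack` is shown here on the assumption that it is among the documented values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # assign both families when the cluster can
  selector:
    app: MyApp
  ports:
    - port: 80
```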
@@ -31,14 +31,15 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
 * The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
   Citrix Application Delivery Controller.
 * [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
+* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
 * F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
   lets you use an Ingress to configure F5 BIG-IP virtual servers.
 * [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
   which offers API gateway functionality.
 * [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
-  [HAProxy](http://www.haproxy.org/#desc).
+  [HAProxy](https://www.haproxy.org/#desc).
 * The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
-  is also an ingress controller for [HAProxy](http://www.haproxy.org/#desc).
+  is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
 * [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
   is an [Istio](https://istio.io/) based ingress controller.
 * The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)

@@ -49,7 +50,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
 * The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
   ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
 * [Voyager](https://appscode.com/products/voyager) is an ingress controller for
-  [HAProxy](http://www.haproxy.org/#desc).
+  [HAProxy](https://www.haproxy.org/#desc).

 ## Using multiple Ingress controllers

@@ -151,9 +151,9 @@ spec:
       targetPort: 9376
 ```

-Because this Service has no selector, the corresponding Endpoint object is not
+Because this Service has no selector, the corresponding Endpoints object is not
 created automatically. You can manually map the Service to the network address and port
-where it's running, by adding an Endpoint object manually:
+where it's running, by adding an Endpoints object manually:

 ```yaml
 apiVersion: v1
@@ -629,6 +629,11 @@ spec:

 PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.

+### PersistentVolumes typed `hostPath`
+
+A `hostPath` PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
+See [an example of `hostPath` typed volume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
+
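A minimal `hostPath` PersistentVolume sketch along the lines of that linked example; the path, capacity, and name are hypothetical, and this is suitable for single-node testing only:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data   # a file or directory on the Node
```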
 ## Raw Block Volume Support

 {{< feature-state for_k8s_version="v1.18" state="stable" >}}

@@ -210,8 +210,8 @@ spec:

 The `CSIMigration` feature for Cinder, when enabled, redirects all plugin operations
 from the existing in-tree plugin to the `cinder.csi.openstack.org` Container
-Storage Interface (CSI) Driver. In order to use this feature, the [Openstack Cinder CSI
-Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md)
+Storage Interface (CSI) Driver. In order to use this feature, the [OpenStack Cinder CSI
+Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
 must be installed on the cluster and the `CSIMigration` and `CSIMigrationOpenStack`
 beta features must be enabled.

@@ -90,6 +90,11 @@ If `startingDeadlineSeconds` is set to a large value or left unset (the default)
 and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
 at least once.

+{{< caution >}}
+If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks things every 10 seconds.
+{{< /caution >}}
+
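To make the interaction concrete, a CronJob sketch with hypothetical values, assuming `batch/v1beta1` (the CronJob API group at this version):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 200   # keep this comfortably above 10 seconds
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox
              command: ["echo", "hello"]
```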
 For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error

 ````

@@ -128,4 +133,3 @@ documents the format of CronJob `schedule` fields.
 For instructions on creating and working with cron jobs, and for an example of CronJob
 manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).

@@ -147,8 +147,8 @@ the related features.
 | ---------------------------------------- | ---------- | ------- | ----------- |
 | `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
 | `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
-| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | |
-| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | |
+| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | DaemonSet pods tolerate disk-pressure attributes by default scheduler. |
+| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | DaemonSet pods tolerate memory-pressure attributes by default scheduler. |
 | `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet pods tolerate unschedulable attributes by default scheduler. |
 | `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | DaemonSet pods, which use host network, tolerate network-unavailable attributes by default scheduler. |

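For instance, the unschedulable row above corresponds to a toleration like the following being added to DaemonSet Pods automatically (a sketch of the relevant Pod spec fragment, not a full manifest):

```yaml
tolerations:
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule
```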
@@ -208,7 +208,8 @@ As mentioned above, whether you have 1 pod you want to keep running, or 1000, a

 ### Scaling

-The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
+The ReplicationController scales the number of replicas up or down by setting the `replicas` field.
+You can configure the ReplicationController to manage the replicas manually or by an auto-scaling control agent.

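A sketch of the field in question, with hypothetical names; scaling means changing `replicas`, either by hand or via an agent that updates it:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3   # scale up or down by changing this value
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```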
 ### Rolling updates

@@ -78,7 +78,7 @@ sharing](/docs/tasks/configure-pod-container/share-process-namespace/) so
 you can view processes in other containers.

 See [Debugging with Ephemeral Debug Container](
-/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)
+/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)
 for examples of troubleshooting using ephemeral containers.

 ## Ephemeral containers API

@@ -44,22 +44,25 @@ The English-language documentation uses U.S. English spelling and grammar.

 ### Use upper camel case for API objects

-When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
+
+When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+
+You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.

 Don't split the API object name into separate words. For example, use
 PodTemplateList, not Pod Template List.

-Refer to API objects without saying "object," unless omitting "object"
-leads to an awkward construction.
+The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects.

-{{< table caption = "Do and Don't - API objects" >}}
+{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
 Do | Don't
 :--| :-----
-The pod has two containers. | The Pod has two containers.
-The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
-A PodList is a list of pods. | A Pod List is a list of pods.
-The two ContainerPorts ... | The two ContainerPort objects ...
-The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
+The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
+A PodList object is a list of pods. | A Pod List object is a list of pods.
+The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
+Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
+For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
 {{< /table >}}

|
||||||
|
|
||||||
## Inline code formatting
|
## Inline code formatting
|
||||||
|
|
||||||
### Use code style for inline code, commands, and API objects
|
### Use code style for inline code, commands, and API objects {#code-style-inline-code}
|
||||||
|
|
||||||
For inline code in an HTML document, use the `<code>` tag. In a Markdown
|
For inline code in an HTML document, use the `<code>` tag. In a Markdown
|
||||||
document, use the backtick (`` ` ``).
|
document, use the backtick (`` ` ``).
|
||||||
|
|
||||||
{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
|
{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
|
||||||
Do | Don't
|
Do | Don't
|
||||||
:--| :-----
|
:--| :-----
|
||||||
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
|
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
|
||||||
|
|
|
||||||
|
|
@@ -462,8 +462,6 @@ and the [example of Limit Range](/docs/tasks/administer-cluster/manage-resources

 ### MutatingAdmissionWebhook {#mutatingadmissionwebhook}

-{{< feature-state for_k8s_version="v1.13" state="beta" >}}
-
 This admission controller calls any mutating webhooks which match the request. Matching
 webhooks are called in serial; each one may modify the object if it desires.

@@ -474,7 +472,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
 webhooks or validating admission controllers will permit the request to finish.

 If you disable the MutatingAdmissionWebhook, you must also disable the
-`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
+`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
 group/version via the `--runtime-config` flag (both are on by default in
 versions >= 1.9).

@@ -486,8 +484,6 @@ versions >= 1.9).
 different when read back.
 * Setting originally unset fields is less likely to cause problems than
   overwriting fields set in the original request. Avoid doing the latter.
-* This is a beta feature. Future versions of Kubernetes may restrict the types of
-  mutations these webhooks can make.
 * Future changes to control loops for built-in resources or third-party resources
   may break webhooks that work well today. Even when the webhook installation API
   is finalized, not all possible webhook behaviors will be guaranteed to be supported

@@ -766,8 +762,6 @@ This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}}

 ### ValidatingAdmissionWebhook {#validatingadmissionwebhook}

-{{< feature-state for_k8s_version="v1.13" state="beta" >}}
-
 This admission controller calls any validating webhooks which match the request. Matching
 webhooks are called in parallel; if any of them rejects the request, the request
 fails. This admission controller only runs in the validation phase; the webhooks it calls may not

@@ -778,7 +772,7 @@ If a webhook called by this has side effects (for example, decrementing quota) i
 webhooks or other validating admission controllers will permit the request to finish.

 If you disable the ValidatingAdmissionWebhook, you must also disable the
-`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1beta1`
+`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
 group/version via the `--runtime-config` flag (both are on by default in
 versions 1.9 and later).

@@ -68,8 +68,8 @@ when interpreted by an [authorizer](/docs/reference/access-authn-authz/authoriza

 You can enable multiple authentication methods at once. You should usually use at least two methods:

 - service account tokens for service accounts
 - at least one other method for user authentication.

 When multiple authenticator modules are enabled, the first module
 to successfully authenticate the request short-circuits evaluation.

@@ -321,13 +321,11 @@ sequenceDiagram
 9. `kubectl` provides feedback to the user

 Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
-"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable
-solution for authentication. It does offer a few challenges:
+"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:

-1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
-2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
-3. There's no easy way to authenticate to the Kubernetes dashboard without using the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
-
+1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
+2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
+3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.

 #### Configuring the API Server

@@ -1004,14 +1002,12 @@ RFC3339 timestamp. Presence or absence of an expiry has the following impact:
   }
 }
 ```

-The plugin can optionally be called with an environment variable, `KUBERNETES_EXEC_INFO`,
-that contains information about the cluster for which this plugin is obtaining
-credentials. This information can be used to perform cluster-specific credential
-acquisition logic. In order to enable this behavior, the `provideClusterInfo` field must
-be set on the exec user field in the
-[kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/). Here is an
-example of the aforementioned `KUBERNETES_EXEC_INFO` environment variable.
+To enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec`
+field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
+The plugin will then be supplied with an environment variable, `KUBERNETES_EXEC_INFO`.
+Information from this environment variable can be used to perform cluster-specific
+credential acquisition logic.
+The following `ExecCredential` manifest describes a cluster information sample.

 ```json
 {
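The kubeconfig side of that setting, sketched with a hypothetical user entry and plugin command:

```yaml
users:
  - name: my-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: example-credential-plugin   # hypothetical plugin binary
        provideClusterInfo: true             # makes the client set KUBERNETES_EXEC_INFO
```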
@@ -104,6 +104,9 @@ a given action, and works regardless of the authorization mode used.
 ```bash
 kubectl auth can-i create deployments --namespace dev
 ```

+The output is similar to this:
+
 ```
 yes
 ```

@@ -111,6 +114,9 @@ yes
 ```shell
 kubectl auth can-i create deployments --namespace prod
 ```

+The output is similar to this:
+
 ```
 no
 ```

@@ -121,6 +127,9 @@ to determine what action other users can perform.
 ```bash
 kubectl auth can-i list secrets --namespace dev --as dave
 ```

+The output is similar to this:
+
 ```
 no
 ```

@@ -150,7 +159,7 @@ EOF
 ```

 The generated `SelfSubjectAccessReview` is:
-```
+```yaml
 apiVersion: authorization.k8s.io/v1
 kind: SelfSubjectAccessReview
 metadata:
@@ -1093,8 +1093,8 @@ be a layering violation). `host` may also be an IP address.
 Please note that using `localhost` or `127.0.0.1` as a `host` is
 risky unless you take great care to run this webhook on all hosts
 which run an apiserver which might need to make calls to this
-webhook. Such installs are likely to be non-portable, i.e., not easy
-to turn up in a new cluster.
+webhook. Such installations are likely to be non-portable or not readily
+run in a new cluster.

 The scheme must be "https"; the URL must begin with "https://".

@@ -1,6 +1,6 @@
 ---
-weight: 10
 title: Feature Gates
+weight: 10
 content_type: concept
 ---

@@ -48,13 +48,15 @@ different Kubernetes components.

 | Feature | Default | Stage | Since | Until |
 |---------|---------|-------|-------|-------|
-| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
 | `APIListChunking` | `false` | Alpha | 1.8 | 1.8 |
 | `APIListChunking` | `true` | Beta | 1.9 | |
 | `APIPriorityAndFairness` | `false` | Alpha | 1.17 | 1.19 |
 | `APIPriorityAndFairness` | `true` | Beta | 1.20 | |
-| `APIResponseCompression` | `false` | Alpha | 1.7 | |
+| `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 |
+| `APIResponseCompression` | `false` | Beta | 1.16 | |
 | `APIServerIdentity` | `false` | Alpha | 1.20 | |
+| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
+| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
 | `AppArmor` | `true` | Beta | 1.4 | |
 | `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | |
 | `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |

@@ -77,7 +79,8 @@ different Kubernetes components.
 | `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 |
 | `CSIMigrationGCE` | `false` | Beta | 1.17 | |
 | `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | |
-| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | |
+| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 |
+| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | |
 | `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | |
 | `CSIMigrationvSphere` | `false` | Beta | 1.19 | |
 | `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | |

@@ -89,26 +92,23 @@ different Kubernetes components.
 | `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | |
 | `CronJobControllerV2` | `false` | Alpha | 1.20 | |
 | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
-| `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 |
-| `CustomResourceDefaulting` | `true` | Beta | 1.16 | |
 | `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
 | `DefaultPodTopologySpread` | `true` | Beta | 1.20 | |
 | `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
 | `DevicePlugins` | `true` | Beta | 1.10 | |
 | `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
-| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.22 |
+| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | |
 | `DownwardAPIHugePages` | `false` | Alpha | 1.20 | |
-| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
-| `DryRun` | `true` | Beta | 1.13 | |
 | `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
 | `DynamicKubeletConfig` | `true` | Beta | 1.11 | |
+| `EfficientWatchResumption` | `false` | Alpha | 1.20 | |
 | `EndpointSlice` | `false` | Alpha | 1.16 | 1.16 |
 | `EndpointSlice` | `false` | Beta | 1.17 | |
 | `EndpointSlice` | `true` | Beta | 1.18 | |
 | `EndpointSliceNodeName` | `false` | Alpha | 1.20 | |
 | `EndpointSliceProxying` | `false` | Alpha | 1.18 | 1.18 |
 | `EndpointSliceProxying` | `true` | Beta | 1.19 | |
-| `EndpointSliceTerminating` | `false` | Alpha | 1.20 | |
+| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | |
 | `EphemeralContainers` | `false` | Alpha | 1.16 | |
 | `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
 | `ExpandCSIVolumes` | `true` | Beta | 1.16 | |

@@ -119,19 +119,22 @@ different Kubernetes components.
 | `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
 | `GenericEphemeralVolume` | `false` | Alpha | 1.19 | |
 | `GracefulNodeShutdown` | `false` | Alpha | 1.20 | |
+| `HPAContainerMetrics` | `false` | Alpha | 1.20 | |
 | `HPAScaleToZero` | `false` | Alpha | 1.16 | |
 | `HugePageStorageMediumSize` | `false` | Alpha | 1.18 | 1.18 |
 | `HugePageStorageMediumSize` | `true` | Beta | 1.19 | |
-| `HyperVContainer` | `false` | Alpha | 1.10 | |
+| `IPv6DualStack` | `false` | Alpha | 1.15 | |
 | `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
 | `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | |
-| `IPv6DualStack` | `false` | Alpha | 1.16 | |
-| `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | |
+| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | |
+| `KubeletPodResources` | `true` | Alpha | 1.13 | 1.14 |
+| `KubeletPodResources` | `true` | Beta | 1.15 | |
+| `LegacyNodeRoleBehavior` | `false` | Alpha | 1.16 | 1.18 |
+| `LegacyNodeRoleBehavior` | `true` | Beta | 1.19 | |
 | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
 | `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | |
 | `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | |
 | `MixedProtocolLBService` | `false` | Alpha | 1.20 | |
-| `MountContainers` | `false` | Alpha | 1.9 | |
 | `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | 1.18 |
 | `NodeDisruptionExclusion` | `true` | Beta | 1.19 | |
 | `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |

@@ -143,25 +146,27 @@ different Kubernetes components.
 | `ProcMountType` | `false` | Alpha | 1.12 | |
 | `QOSReserved` | `false` | Alpha | 1.11 | |
 | `RemainingItemCount` | `false` | Alpha | 1.15 | |
+| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 |
+| `RemoveSelfLink` | `true` | Beta | 1.20 | |
 | `RootCAConfigMap` | `false` | Alpha | 1.13 | 1.19 |
 | `RootCAConfigMap` | `true` | Beta | 1.20 | |
 | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
 | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
 | `RunAsGroup` | `true` | Beta | 1.14 | |
-| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 |
-| `RuntimeClass` | `true` | Beta | 1.14 | |
 | `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 |
 | `SCTPSupport` | `true` | Beta | 1.19 | |
 | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
 | `ServerSideApply` | `true` | Beta | 1.16 | |
-| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
-| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.20 |
+| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 |
+| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | |
+| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | |
 | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 |
 | `ServiceNodeExclusion` | `true` | Beta | 1.19 | |
 | `ServiceTopology` | `false` | Alpha | 1.17 | |
-| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | |
 | `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | 1.19 |
 | `SetHostnameAsFQDN` | `true` | Beta | 1.20 | |
+| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | |
+| `StorageVersionAPI` | `false` | Alpha | 1.20 | |
 | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 |
 | `StorageVersionHash` | `true` | Beta | 1.15 | |
 | `Sysctls` | `true` | Beta | 1.11 | |

@@ -170,11 +175,11 @@ different Kubernetes components.
| `TopologyManager` | `true` | Beta | 1.18 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `WarningHeaders` | `true` | Beta | 1.19 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |
{{< /table >}}
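
Alpha gates in the table above default to `false` and have to be switched on
explicitly. As a minimal sketch (an editor's illustration, not part of this
diff), a kubelet could opt in to two of the alpha gates listed above through
the `featureGates` map of its configuration file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Each key is a feature gate name; each value overrides that gate's default.
featureGates:
  GracefulNodeShutdown: true      # alpha in 1.20, off by default
  SizeMemoryBackedVolumes: true   # alpha in 1.20, off by default
```

The same overrides can be passed on the command line as
`--feature-gates=GracefulNodeShutdown=true,SizeMemoryBackedVolumes=true`;
the config-file form is easier to keep under version control.
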
### Feature gates for graduated or deprecated features
@@ -228,6 +233,9 @@ different Kubernetes components.
| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | 1.18 |
| `DryRun` | `true` | GA | 1.19 | - |
| `DynamicAuditing` | `false` | Alpha | 1.13 | 1.18 |
| `DynamicAuditing` | - | Deprecated | 1.19 | - |
| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 |
@@ -247,23 +255,28 @@ different Kubernetes components.
| `HugePages` | `false` | Alpha | 1.8 | 1.9 |
| `HugePages` | `true` | Beta | 1.10 | 1.13 |
| `HugePages` | `true` | GA | 1.14 | - |
| `HyperVContainer` | `false` | Alpha | 1.10 | 1.19 |
| `HyperVContainer` | `false` | Deprecated | 1.20 | - |
| `Initializers` | `false` | Alpha | 1.7 | 1.13 |
| `Initializers` | - | Deprecated | 1.14 | - |
| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 |
| `KubeletConfigFile` | - | Deprecated | 1.10 | - |
| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 |
| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 |
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `KubeletPodResources` | `true` | GA | 1.20 | |
| `MountContainers` | `false` | Alpha | 1.9 | 1.16 |
| `MountContainers` | `false` | Deprecated | 1.17 | - |
| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 |
| `MountPropagation` | `true` | Beta | 1.10 | 1.11 |
| `MountPropagation` | `true` | GA | 1.12 | - |
| `NodeLease` | `false` | Alpha | 1.12 | 1.13 |
| `NodeLease` | `true` | Beta | 1.14 | 1.16 |
| `NodeLease` | `true` | GA | 1.17 | - |
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
| `PVCProtection` | - | Deprecated | 1.10 | - |
| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 |
| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - |
@@ -276,8 +289,6 @@ different Kubernetes components.
| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 |
| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 |
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - |
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 |
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 |
| `ResourceLimitsPriorityFunction` | - | Deprecated | 1.19 | - |
@@ -398,65 +409,134 @@ A *General Availability* (GA) feature is also referred to as a *stable* feature.
Each feature gate is designed for enabling/disabling a specific feature:
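
As an illustrative sketch (not part of this diff), components take these gates
through their `--feature-gates` flag; on a kubeadm cluster that usually means
editing the component's static Pod manifest. The fragment below is hypothetical
and elides every flag except the one under discussion:

```yaml
# Hypothetical fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.20.0
    command:
    - kube-apiserver
    # Comma-separated key=value pairs; an unrecognized gate name is a startup error.
    - --feature-gates=APIListChunking=true,RemainingItemCount=true
```

Both gates named here are described in the list that follows.
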
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`)
  resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with
  prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
- `APIServerIdentity`: Assign each API server an ID in a cluster.
- `Accelerators`: Enable Nvidia GPU support when using Docker
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`(*deprecated*): Enable setting
  [Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
- `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
- `AllowInsecureBackendProxy`: Enable the users to skip TLS verification of
  kubelets on Pod log requests.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
  {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `AppArmor`: Enable AppArmor based mandatory access control on Linux nodes when using Docker.
  See [AppArmor Tutorial](/docs/tutorials/clusters/apparmor/) for more details.
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
  that can be attached to a node.
  See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) for more details.
- `BalanceAttachedNodeVolumes`: Include volume count on node to be considered for balanced resource allocation
  while scheduling. A node which has closer CPU, memory utilization, and volume count is favored by the scheduler
  while making decisions.
- `BlockVolume`: Enable the definition and consumption of raw block devices in Pods.
  See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
  for more details.
- `BoundServiceAccountTokenVolume`: Migrate ServiceAccount volumes to use a projected volume consisting of a
  ServiceAccountTokenVolumeProjection. Cluster admins can use metric `serviceaccount_stale_tokens_total` to
  monitor workloads that are depending on the extended tokens. If there are no such workloads, turn off
  extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`.
  Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
  for more details.
- `CPUManager`: Enable container level CPU affinity support, see
  [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage.
  See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
  documentation for more details.
- `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in
  csi.storage.k8s.io.
- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
- `CSIMigration`: Enables shims and translation logic to route volume
  operations from in-tree plugins to corresponding pre-installed CSI plugins.
- `CSIMigrationAWS`: Enables shims and translation logic to route volume
  operations from the AWS-EBS in-tree plugin to EBS CSI plugin. Supports
  falling back to in-tree EBS plugin if a node does not have EBS CSI plugin
  installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationAWSComplete`: Stops registering the EBS in-tree plugin in
  kubelet and volume controllers and enables shims and translation logic to
  route volume operations from the AWS-EBS in-tree plugin to EBS CSI plugin.
  Requires CSIMigration and CSIMigrationAWS feature flags enabled and EBS CSI
  plugin installed and configured on all nodes in the cluster.
- `CSIMigrationAzureDisk`: Enables shims and translation logic to route volume
  operations from the Azure-Disk in-tree plugin to AzureDisk CSI plugin.
  Supports falling back to in-tree AzureDisk plugin if a node does not have
  AzureDisk CSI plugin installed and configured. Requires CSIMigration feature
  flag enabled.
- `CSIMigrationAzureDiskComplete`: Stops registering the Azure-Disk in-tree
  plugin in kubelet and volume controllers and enables shims and translation
  logic to route volume operations from the Azure-Disk in-tree plugin to
  AzureDisk CSI plugin. Requires CSIMigration and CSIMigrationAzureDisk feature
  flags enabled and AzureDisk CSI plugin installed and configured on all nodes
  in the cluster.
- `CSIMigrationAzureFile`: Enables shims and translation logic to route volume
  operations from the Azure-File in-tree plugin to AzureFile CSI plugin.
  Supports falling back to in-tree AzureFile plugin if a node does not have
  AzureFile CSI plugin installed and configured. Requires CSIMigration feature
  flag enabled.
- `CSIMigrationAzureFileComplete`: Stops registering the Azure-File in-tree
  plugin in kubelet and volume controllers and enables shims and translation
  logic to route volume operations from the Azure-File in-tree plugin to
  AzureFile CSI plugin. Requires CSIMigration and CSIMigrationAzureFile feature
  flags enabled and AzureFile CSI plugin installed and configured on all nodes
  in the cluster.
- `CSIMigrationGCE`: Enables shims and translation logic to route volume
  operations from the GCE-PD in-tree plugin to PD CSI plugin. Supports falling
  back to in-tree GCE plugin if a node does not have PD CSI plugin installed and
  configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationGCEComplete`: Stops registering the GCE-PD in-tree plugin in
  kubelet and volume controllers and enables shims and translation logic to
  route volume operations from the GCE-PD in-tree plugin to PD CSI plugin.
  Requires CSIMigration and CSIMigrationGCE feature flags enabled and PD CSI
  plugin installed and configured on all nodes in the cluster.
- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume
  operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports
  falling back to in-tree Cinder plugin if a node does not have Cinder CSI
  plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationOpenStackComplete`: Stops registering the Cinder in-tree plugin in
  kubelet and volume controllers and enables shims and translation logic to route
  volume operations from the Cinder in-tree plugin to Cinder CSI plugin.
  Requires CSIMigration and CSIMigrationOpenStack feature flags enabled and Cinder
  CSI plugin installed and configured on all nodes in the cluster.
- `CSIMigrationvSphere`: Enables shims and translation logic to route volume operations
  from the vSphere in-tree plugin to vSphere CSI plugin.
  Supports falling back to in-tree vSphere plugin if a node does not have vSphere
  CSI plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationvSphereComplete`: Stops registering the vSphere in-tree plugin in kubelet
  and volume controllers and enables shims and translation logic to route volume operations
  from the vSphere in-tree plugin to vSphere CSI plugin. Requires CSIMigration and
  CSIMigrationvSphere feature flags enabled and vSphere CSI plugin installed and
  configured on all nodes in the cluster.
- `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in csi.storage.k8s.io.
- `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a
  [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
  compatible volume plugin.
- `CSIServiceAccountToken`: Enable CSI drivers to receive the pods' service account token
  that they mount volumes for. See
  [Token Requests](https://kubernetes-csi.github.io/docs/token-requests.html).
- `CSIStorageCapacity`: Enables CSI drivers to publish storage capacity information
  and the Kubernetes scheduler to use that information when scheduling pods. See
  [Storage Capacity](/docs/concepts/storage/storage-capacity/).
  Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details.
- `CSIVolumeFSGroupPolicy`: Allows CSIDrivers to use the `fsGroupPolicy` field.
  This field controls whether volumes created by a CSIDriver support volume ownership
  and permission modifications when these volumes are mounted.
- `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy
  for fsGroups when mounting a volume in a Pod. See
  [Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)
  for more details.
- `CronJobControllerV2`: Use an alternative implementation of the
  {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} controller. Otherwise,
  version 1 of the same controller is selected.
  The version 2 controller provides experimental performance improvements.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
  [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property.
  Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)
  for more details.
- `CustomResourceDefaulting`: Enable CRD support for default values in OpenAPI v3 validation schemas.
- `CustomResourcePublishOpenAPI`: Enables publishing of CRD OpenAPI specs.
- `CustomResourceSubresources`: Enable `/status` and `/scale` subresources
@ -466,147 +546,253 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
||||||
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
|
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
|
||||||
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
||||||
troubleshoot a running Pod.
|
troubleshoot a running Pod.
|
||||||
- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics).
|
|
||||||
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
|
|
||||||
based resource provisioning on nodes.
|
|
||||||
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
|
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
|
||||||
[default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
|
[default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
|
||||||
- `DownwardAPIHugePages`: Enables usage of hugepages in downward API.
|
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
|
||||||
|
based resource provisioning on nodes.
|
||||||
|
- `DisableAcceleratorUsageMetrics`:
|
||||||
|
[Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics).
|
||||||
|
- `DownwardAPIHugePages`: Enables usage of hugepages in
|
||||||
|
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
|
||||||
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
|
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
|
||||||
so that validation, merging, and mutation can be tested without committing.
|
so that validation, merging, and mutation can be tested without committing.
|
||||||
- `DynamicAuditing`(*deprecated*): Used to enable dynamic auditing before v1.19.
|
- `DynamicAuditing`(*deprecated*): Used to enable dynamic auditing before v1.19.
|
||||||
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
|
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. See
|
||||||
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of volume topology and handle PV provisioning.
|
[Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
|
||||||
|
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of
|
||||||
|
volume topology and handle PV provisioning.
|
||||||
This feature is superseded by the `VolumeScheduling` feature completely in v1.12.
|
This feature is superseded by the `VolumeScheduling` feature completely in v1.12.
|
||||||
- `DynamicVolumeProvisioning`(*deprecated*): Enable the [dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/) of persistent volumes to Pods.
|
- `DynamicVolumeProvisioning`(*deprecated*): Enable the
|
||||||
- `EnableAggregatedDiscoveryTimeout` (*deprecated*): Enable the five second timeout on aggregated discovery calls.
|
[dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/) of persistent volumes to Pods.
|
||||||
- `EnableEquivalenceClassCache`: Enable the scheduler to cache equivalence of nodes when scheduling Pods.
|
- `EfficientWatchResumption`: Allows for storage-originated bookmark (progress
|
||||||
- `EphemeralContainers`: Enable the ability to add {{< glossary_tooltip text="ephemeral containers"
|
notify) events to be delivered to the users. This is only applied to watch
|
||||||
term_id="ephemeral-container" >}} to running pods.
|
operations.
|
||||||
- `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See [Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
- `EnableAggregatedDiscoveryTimeout` (*deprecated*): Enable the five second
|
||||||
- `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts. This feature gate exists in case any of your existing workloads depend on a now-corrected fault where Kubernetes ignored exec probe timeouts. See [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
|
timeout on aggregated discovery calls.
|
||||||
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See [Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
|
- `EnableEquivalenceClassCache`: Enable the scheduler to cache equivalence of
|
||||||
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See [Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
|
nodes when scheduling Pods.
|
||||||
- `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical* so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
|
- `EndpointSlice`: Enables EndpointSlices for more scalable and extensible
|
||||||
This feature is deprecated by Pod Priority and Preemption as of v1.13.
|
network endpoints. See [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices/).
|
||||||
- `ExperimentalHostUserNamespaceDefaultingGate`: Enabling the defaulting user
|
|
||||||
namespace to host. This is for containers that are using other host namespaces,
|
|
||||||
host mounts, or containers that are privileged or using specific non-namespaced
|
|
||||||
capabilities (e.g. `MKNODE`, `SYS_MODULE` etc.). This should only be enabled
|
|
||||||
if user namespace remapping is enabled in the Docker daemon.
|
|
||||||
- `EndpointSlice`: Enables Endpoint Slices for more scalable and extensible
|
|
||||||
network endpoints. See [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
|
|
||||||
- `EndpointSliceNodeName`: Enables EndpointSlice `nodeName` field.
|
- `EndpointSliceNodeName`: Enables EndpointSlice `nodeName` field.
|
||||||
- `EndpointSliceTerminating`: Enables EndpointSlice `terminating` and `serving`
|
- `EndpointSliceProxying`: When enabled, kube-proxy running
|
||||||
condition fields.
|
|
||||||
- `EndpointSliceProxying`: When this feature gate is enabled, kube-proxy running
|
|
||||||
on Linux will use EndpointSlices as the primary data source instead of
|
on Linux will use EndpointSlices as the primary data source instead of
|
||||||
Endpoints, enabling scalability and performance improvements. See
|
Endpoints, enabling scalability and performance improvements. See
|
||||||
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
|
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
|
||||||
- `WindowsEndpointSliceProxying`: When this feature gate is enabled, kube-proxy
|
- `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving`
|
||||||
running on Windows will use EndpointSlices as the primary data source instead
|
condition fields.
|
||||||
of Endpoints, enabling scalability and performance improvements. See
|
- `EphemeralContainers`: Enable the ability to add
|
||||||
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
|
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
|
||||||
|
to running pods.
|
||||||
|
- `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See
|
||||||
|
[Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
||||||
|
- `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts.
|
||||||
|
This feature gate exists in case any of your existing workloads depend on a
|
||||||
|
now-corrected fault where Kubernetes ignored exec probe timeouts. See
|
||||||
|
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
|
||||||
|
- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.
|
||||||
|
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
|
||||||
|
[Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
|
||||||
|
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See
|
||||||
|
[Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
|
||||||
|
- `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical*
|
||||||
|
so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
|
||||||
|
This feature is deprecated by Pod Priority and Preemption as of v1.13.
|
||||||
|
- `ExperimentalHostUserNamespaceDefaulting`: Enabling the defaulting user
|
||||||
|
namespace to host. This is for containers that are using other host namespaces,
|
||||||
|
host mounts, or containers that are privileged or using specific non-namespaced
|
||||||
|
capabilities (e.g. `MKNODE`, `SYS_MODULE` etc.). This should only be enabled
|
||||||
|
if user namespace remapping is enabled in the Docker daemon.
|
||||||
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
|
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
|
||||||
- `GenericEphemeralVolume`: Enables ephemeral, inline volumes that support all features of normal volumes (can be provided by third-party storage vendors, storage capacity tracking, restore from snapshot, etc.). See [Ephemeral Volumes](/docs/concepts/storage/ephemeral-volumes/).
|
- `GenericEphemeralVolume`: Enables ephemeral, inline volumes that support all features
|
||||||
- `GracefulNodeShutdown`: Enables support for graceful shutdown in kubelet. During a system shutdown, kubelet will attempt to detect the shutdown event and gracefully terminate pods running on the node. See [Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown) for more details.
|
of normal volumes (can be provided by third-party storage vendors, storage capacity tracking,
|
||||||
- `HugePages`: Enable the allocation and consumption of pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
|
restore from snapshot, etc.).
|
||||||
- `HugePageStorageMediumSize`: Enable support for multiple sizes pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
|
See [Ephemeral Volumes](/docs/concepts/storage/ephemeral-volumes/).
|
||||||
- `HyperVContainer`: Enable [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) for Windows containers.
|
- `GracefulNodeShutdown`: Enables support for graceful shutdown in kubelet.
|
||||||
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler` resources when using custom or external metrics.
|
During a system shutdown, kubelet will attempt to detect the shutdown event
|
||||||
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as immutable for better safety and performance.
|
and gracefully terminate pods running on the node. See
|
||||||
- `KubeletConfigFile`: Enable loading kubelet configuration from a file specified using a config file.
|
[Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown)
|
||||||
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/) for more details.
|
for more details.
|
||||||
|
- `HPAContainerMetrics`: Enable the `HorizontalPodAutoscaler` to scale based on
|
||||||
|
metrics from individual containers in target pods.
|
||||||
|
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
|
||||||
|
resources when using custom or external metrics.
|
||||||
|
- `HugePages`: Enable the allocation and consumption of pre-allocated
|
||||||
|
[huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
|
||||||
|
- `HugePageStorageMediumSize`: Enable support for multiple sizes pre-allocated
|
||||||
|
[huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
|
||||||
|
- `HyperVContainer`: Enable
|
||||||
|
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
|
||||||
|
for Windows containers.
|
||||||
|
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
|
||||||
|
support for IPv6.
|
||||||
|
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
|
||||||
|
immutable for better safety and performance.
|
||||||
|
- `KubeletConfigFile` (*deprecated*): Enable loading kubelet configuration from
|
||||||
|
a file specified using a config file.
|
||||||
|
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
|
||||||
|
for more details.
|
||||||
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
|
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
|
||||||
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
|
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
|
||||||
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
|
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
|
||||||
- `KubeletPodResources`: Enable the kubelet's pod resources grpc endpoint.
|
- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See
|
||||||
See [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md) for more details.
|
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)
|
||||||
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
|
for more details.
|
||||||
- `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
|
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
|
||||||
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.
|
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
|
||||||
- `MixedProtocolLBService`: Enable using different protocols in the same LoadBalancer type Service instance.
|
feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
|
||||||
- `MountContainers`: Enable using utility containers on host as the volume mounter.
|
- `LocalStorageCapacityIsolation`: Enable the consumption of
|
||||||
|
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
|
||||||
|
and also the `sizeLimit` property of an
|
||||||
|
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
|
||||||
|
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation`
|
||||||
|
is enabled for
|
||||||
|
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
|
||||||
|
and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir)
|
||||||
|
supports project quotas and they are enabled, use project quotas to monitor
|
||||||
|
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than
|
||||||
|
filesystem walk for better performance and accuracy.
|
||||||
|
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
|
||||||
|
Service instance.
|
||||||
|
- `MountContainers` (*deprecated*): Enable using utility containers on host as
|
||||||
|
the volume mounter.
|
||||||
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
|
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
|
||||||
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).
|
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).
|
||||||
- `NodeDisruptionExclusion`: Enable use of the node label `node.kubernetes.io/exclude-disruption` which prevents nodes from being evacuated during zone failures.
|
- `NodeDisruptionExclusion`: Enable use of the Node label `node.kubernetes.io/exclude-disruption`
|
||||||
|
which prevents nodes from being evacuated during zone failures.
|
||||||
- `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal.
|
- `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal.
|
||||||
- `NonPreemptingPriority`: Enable NonPreempting option for PriorityClass and Pod.
|
- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
|
||||||
|
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
|
||||||
|
being deleted when it is still used by any Pod.
|
||||||
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
|
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
|
||||||
Pod affinity has to be specified if requesting a `local` volume.
|
Pod affinity has to be specified if requesting a `local` volume.
|
||||||
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
|
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
|
||||||
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/) feature to account for pod overheads.
|
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||||
- `PodPriority`: Enable the descheduling and preemption of Pods based on their [priorities](/docs/concepts/configuration/pod-priority-preemption/).
|
feature to account for pod overheads.
|
||||||
|
- `PodPriority`: Enable the descheduling and preemption of Pods based on their
|
||||||
|
[priorities](/docs/concepts/configuration/pod-priority-preemption/).
|
||||||
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
|
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
|
||||||
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
|
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
|
||||||
for more details.
|
for more details.
|
||||||
- `PodShareProcessNamespace`: Enable the setting of `shareProcessNamespace` in a Pod for sharing
|
- `PodShareProcessNamespace`: Enable the setting of `shareProcessNamespace` in a Pod for sharing
|
||||||
a single process namespace between containers running in a pod. More details can be found in
|
a single process namespace between containers running in a pod. More details can be found in
|
||||||
[Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/).
|
[Share Process Namespace between Containers in a Pod](/docs/tasks/configure-pod-container/share-process-namespace/).
|
||||||
- `ProcMountType`: Enables control over ProcMountType for containers.
|
- `ProcMountType`: Enables control over the type proc mounts for containers
|
||||||
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
|
by setting the `procMount` field of a SecurityContext.
|
||||||
being deleted when it is still used by any Pod.
|
- `QOSReserved`: Allows resource reservations at the QoS level preventing pods
|
||||||
- `QOSReserved`: Allows resource reservations at the QoS level preventing pods at lower QoS levels from
|
at lower QoS levels from bursting into resources requested at higher QoS levels
|
||||||
bursting into resources requested at higher QoS levels (memory only for now).
|
(memory only for now).
|
||||||
|
- `RemainingItemCount`: Allow the API servers to show a count of remaining
|
||||||
|
items in the response to a
|
||||||
|
[chunking list request](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks).
|
||||||
|
- `RemoveSelfLink`: Deprecates and removes `selfLink` from ObjectMeta and
|
||||||
|
ListMeta.
|
||||||
- `ResourceLimitsPriorityFunction` (*deprecated*): Enable a scheduler priority function that
|
- `ResourceLimitsPriorityFunction` (*deprecated*): Enable a scheduler priority function that
|
||||||
assigns a lowest possible score of 1 to a node that satisfies at least one of
|
assigns a lowest possible score of 1 to a node that satisfies at least one of
|
||||||
the input Pod's cpu and memory limits. The intent is to break ties between
|
the input Pod's cpu and memory limits. The intent is to break ties between
|
||||||
nodes with same scores.
|
nodes with same scores.
|
||||||
- `ResourceQuotaScopeSelectors`: Enable resource quota scope selectors.
|
- `ResourceQuotaScopeSelectors`: Enable resource quota scope selectors.
|
||||||
- `RootCAConfigMap`: Configure the kube-controller-manager to publish a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} named `kube-root-ca.crt` to every namespace. This ConfigMap contains a CA bundle used for verifying connections to the kube-apiserver.
|
- `RootCAConfigMap`: Configure the `kube-controller-manager` to publish a
|
||||||
See [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md) for more details.
|
{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} named `kube-root-ca.crt`
|
||||||
|
to every namespace. This ConfigMap contains a CA bundle used for verifying connections
|
||||||
|
to the kube-apiserver. See
|
||||||
|
[Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
|
||||||
|
for more details.
|
||||||
- `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
|
- `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
|
||||||
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
|
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
|
||||||
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.
|
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.
|
||||||
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
|
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)
|
||||||
- `RunAsGroup`: Enable control over the primary group ID set on the init processes of containers.
|
for more details.
|
||||||
- `RuntimeClass`: Enable the [RuntimeClass](/docs/concepts/containers/runtime-class/) feature for selecting container runtime configurations.
|
- `RunAsGroup`: Enable control over the primary group ID set on the init
|
||||||
- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller.
|
processes of containers.
|
||||||
- `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service, Endpoints, EndpointSlice, and NetworkPolicy definitions.
|
- `RuntimeClass`: Enable the [RuntimeClass](/docs/concepts/containers/runtime-class/) feature
|
||||||
- `ServerSideApply`: Enables the [Sever Side Apply (SSA)](/docs/reference/using-api/server-side-apply/) path at the API Server.
|
for selecting container runtime configurations.
|
||||||
- `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and JWKS URLs) for the service account issuer in the API server. See [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery) for more details.
|
- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler
|
||||||
|
instead of the DaemonSet controller.
|
||||||
|
- `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service,
|
||||||
|
Endpoints, EndpointSlice, and NetworkPolicy definitions.
|
||||||
|
- `ServerSideApply`: Enables the [Sever Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
|
||||||
|
feature on the API Server.
|
||||||
|
- `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and
|
||||||
|
JWKS URLs) for the service account issuer in the API server. See
|
||||||
|
[Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)
|
||||||
|
for more details.
|
||||||
- `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
|
- `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
|
||||||
- `ServiceLBNodePortControl`: Enables the `spec.allocateLoadBalancerNodePorts` field on Services.
|
- `ServiceLBNodePortControl`: Enables the `spec.allocateLoadBalancerNodePorts`
|
||||||
|
field on Services.
|
||||||
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
|
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
|
||||||
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
|
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers
|
||||||
A node is eligible for exclusion if labelled with "`alpha.service-controller.kubernetes.io/exclude-balancer`" key or `node.kubernetes.io/exclude-from-external-load-balancers`.
|
created by a cloud provider. A node is eligible for exclusion if labelled with
|
||||||
- `ServiceTopology`: Enable service to route traffic based upon the Node topology of the cluster. See [ServiceTopology](/docs/concepts/services-networking/service-topology/) for more details.
|
"`node.kubernetes.io/exclude-from-external-load-balancers`".
|
||||||
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes. See [volumes](docs/concepts/storage/volumes) for more details.
|
- `ServiceTopology`: Enable service to route traffic based upon the Node
|
||||||
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain Name(FQDN) as hostname of pod. See [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
|
topology of the cluster. See
|
||||||
- `StartupProbe`: Enable the [startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe) probe in the kubelet.
|
[ServiceTopology](/docs/concepts/services-networking/service-topology/)
|
||||||
|
for more details.
|
||||||
|
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
|
||||||
|
See [volumes](docs/concepts/storage/volumes) for more details.
|
||||||
|
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
|
||||||
|
Name(FQDN) as the hostname of a pod. See
|
||||||
|
[Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
|
||||||
|
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
|
||||||
|
memory-backed volumes (mainly `emptyDir` volumes).
|
||||||
|
- `StartupProbe`: Enable the
|
||||||
|
[startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe)
|
||||||
|
probe in the kubelet.
|
||||||
- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
|
- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
|
||||||
PersistentVolumeClaim objects if they are still being used.
|
PersistentVolumeClaim objects if they are still being used.
|
||||||
- `StorageVersionAPI`: Enable the
  [storage version API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io).
- `StorageVersionHash`: Allow API servers to expose the storage version hash in the
  discovery.
- `StreamingProxyRedirects`: Instructs the API server to intercept (and follow)
  redirects from the backend (kubelet) for streaming requests.
  Examples of streaming requests include the `exec`, `attach` and `port-forward` requests.
- `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS.
  See [service proxies](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) for more details.
- `SupportPodPidsLimit`: Enable support for limiting PIDs in Pods.
- `SupportNodePidsLimit`: Enable support for limiting PIDs on the Node.
  The parameter `pid=<number>` in the `--system-reserved` and `--kube-reserved`
  options can be specified to ensure that the specified number of process IDs
  will be reserved for the system as a whole and for Kubernetes system daemons
  respectively.
- `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be
  set for each pod. See
  [sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details.
- `TTLAfterFinished`: Allow a
  [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
  to clean up resources after they finish execution.
- `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on Nodes
  and tolerations on Pods. See
  [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
  for more details.
- `TaintNodesByCondition`: Enable automatic tainting of nodes based on
  [node conditions](/docs/concepts/architecture/nodes/#condition).
- `TokenRequest`: Enable the `TokenRequest` endpoint on service account resources.
- `TokenRequestProjection`: Enable the injection of service account tokens into a
  Pod through a [`projected` volume](/docs/concepts/storage/volumes/#projected).
- `TopologyManager`: Enable a mechanism to coordinate fine-grained hardware resource
  assignments for different components in Kubernetes. See
  [Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/).
- `VolumePVCDataSource`: Enable support for specifying an existing PVC as a DataSource.
- `VolumeScheduling`: Enable volume topology aware scheduling and make the
  PersistentVolumeClaim (PVC) binding aware of scheduling decisions. It also
  enables the usage of the [`local`](/docs/concepts/storage/volumes/#local) volume
  type when used together with the `PersistentLocalVolumes` feature gate.
- `VolumeSnapshotDataSource`: Enable volume snapshot data source support.
- `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment
  variables into a `subPath`.
- `WarningHeaders`: Allow sending warning headers in API responses.
- `WatchBookmark`: Enable support for watch bookmark events.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsRunAsUserName`: Enable support for running applications in Windows containers
  as a non-default user. See
  [Configuring RunAsUserName](/docs/tasks/configure-pod-container/configure-runasusername)
  for more details.
- `WindowsEndpointSliceProxying`: When enabled, kube-proxy running on Windows
  will use EndpointSlices as the primary data source instead of Endpoints,
  enabling scalability and performance improvements. See
  [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
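
All of the gates above are toggled the same way, through a component's `--feature-gates` flag. A minimal sketch (the gate names are real entries from this list; the component and chosen values are only an example):

```shell
# Enable one gate and explicitly disable another on the API server;
# the same flag syntax applies to the kubelet and controller manager.
kube-apiserver --feature-gates=ServiceLBNodePortControl=true,WarningHeaders=false
```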

## {{% heading "whatsnext" %}}
@@ -2,7 +2,7 @@
title: API Group
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
  A set of related paths in the Kubernetes API.
@@ -12,9 +12,8 @@ tags:
---
Facilitates the discussion and/or implementation of a short-lived, narrow, or decoupled project for a committee, {{< glossary_tooltip text="SIG" term_id="sig" >}}, or cross-SIG effort.

<!--more-->

Working groups are a way of organizing people to accomplish a discrete task.

For more information, see the [kubernetes/community](https://github.com/kubernetes/community) repo and the current list of [SIGs and working groups](https://github.com/kubernetes/community/blob/master/sig-list.md).
@@ -195,7 +195,7 @@ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.ty
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'

# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

@@ -337,7 +337,7 @@ kubectl taint nodes foo dedicated=special-user:NoSchedule

### Resource types

List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):

```bash
kubectl api-resources
```
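
Two common refinements of that listing, shown as a sketch (both are standard `kubectl api-resources` flags):

```bash
kubectl api-resources --namespaced=true   # only namespaced resources
kubectl api-resources -o name             # plain resource names only
```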
@@ -250,15 +250,15 @@ CustomResourceDefinitionSpec describes how a user wants their resource to appear
- **conversion.webhook.clientConfig.url** (string)

  url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.

  The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.

  Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.

  The scheme must be "https"; the URL must begin with "https://".

  A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.

  Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.

- **preserveUnknownFields** (boolean)
@@ -82,15 +82,15 @@ MutatingWebhookConfiguration describes the configuration of and admission webhoo
- **webhooks.clientConfig.url** (string)

  `url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.

  The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.

  Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.

  The scheme must be "https"; the URL must begin with "https://".

  A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.

  Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.

- **webhooks.name** (string), required
@@ -82,15 +82,15 @@ ValidatingWebhookConfiguration describes the configuration of and admission webh
- **webhooks.clientConfig.url** (string)

  `url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.

  The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.

  Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.

  The scheme must be "https"; the URL must begin with "https://".

  A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.

  Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.

- **webhooks.name** (string), required
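
Pulling those URL constraints together, here is a hedged sketch of a URL-addressed webhook (every name and host below is illustrative; `rules`, `sideEffects`, and `admissionReviewVersions` are the usual required companion fields in `admissionregistration.k8s.io/v1`):

```bash
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook            # illustrative name
webhooks:
- name: validate.example.com       # illustrative name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    # https only; host:port/path form, with no user info, query, or fragment
    url: "https://webhook.example.com:8443/validate"
    caBundle: "<base64-encoded-CA-certificate>"   # placeholder
EOF
```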
@@ -28,7 +28,7 @@ The cluster that `kubeadm init` and `kubeadm join` set up should be:
  - lock-down the kubelet API
  - locking down access to the API for system components like the kube-proxy and CoreDNS
  - locking down what a Bootstrap Token can access
- **User-friendly**: The user should not have to run anything more than a couple of commands:
  - `kubeadm init`
  - `export KUBECONFIG=/etc/kubernetes/admin.conf`
  - `kubectl apply -f <network-of-choice.yaml>`
@@ -108,7 +108,7 @@ if the `kubeadm init` command was called with `--upload-certs`.
  control-plane node even if other worker nodes or the network are compromised.

- Convenient to execute manually since all of the information required fits
  into a single `kubeadm join` command.

**Disadvantages:**
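
As a hedged illustration, the single command mentioned above looks roughly like this (endpoint, token, and hash are placeholders):

```bash
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```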
@@ -20,8 +20,8 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes

## Minikube

[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that
runs a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.
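
A typical session, as a brief sketch (standard minikube commands):

```bash
minikube start        # create the local single-node cluster
kubectl get nodes     # verify the node is Ready
minikube delete       # tear the cluster down again
```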

## Dashboard

@@ -51,4 +51,3 @@ Use Kompose to:
* Translate a Docker Compose file into Kubernetes objects
* Go from local Docker development to managing your application via Kubernetes
* Convert v1 or v2 Docker Compose `yaml` files or [Distributed Application Bundles](https://docs.docker.com/compose/bundles/)
@@ -297,7 +297,7 @@ is not what the user wants to happen, even temporarily.

There are two solutions:

- (basic) Leave `replicas` in the configuration; when HPA eventually writes to that
  field, the system gives the user a conflict over it. At that point, it is safe
  to remove it from the configuration.
@@ -122,7 +122,7 @@ sudo apt-get update && sudo apt-get install -y containerd.io

```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
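
The hunk cuts off at the start of the next command block; in containerd's own setup flow the step that follows is restarting the daemon so the new config is picked up (a safe assumption, shown here as a sketch):

```shell
# Restart containerd
sudo systemctl restart containerd
```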
@@ -140,7 +140,7 @@ sudo apt-get update && sudo apt-get install -y containerd

```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
@@ -210,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io

```shell
## Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
@@ -39,7 +39,7 @@ kops is an automated provisioning system:

#### Installation

Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also convenient to build from source):

{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
@@ -147,7 +147,7 @@ You must then set up your NS records in the parent domain, so that records in the
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).

Verify your route53 domain setup (it is the #1 cause of problems!). You can double-check that
your cluster is configured correctly if you have the dig tool by running:

`dig NS dev.example.com`
@@ -8,7 +8,7 @@ weight: 30

<!-- overview -->

<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
@@ -236,8 +236,8 @@ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_
Define the directory to download command files

{{< note >}}
The `DOWNLOAD_DIR` variable must be set to a writable directory.
If you are running Flatcar Container Linux, set `DOWNLOAD_DIR=/opt/bin`.
{{< /note >}}
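
A minimal sketch of such a definition (the directory is only an example; Flatcar users would substitute `/opt/bin` per the note above):

```bash
DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
```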
@@ -308,13 +308,6 @@ or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it a
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}

The automatic detection of cgroup driver for other container runtimes
like CRI-O and containerd is work in progress.
@@ -363,7 +363,7 @@ kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-

## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}

On Linux distributions such as Fedora CoreOS or Flatcar Container Linux, the directory `/usr` is mounted as a read-only filesystem.
For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md),
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
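
One workaround, sketched under the assumption that you control the component flags (the directory below is an example, but both flags exist on the respective components), is to point them at a writable plugin directory:

```bash
# kubelet
--volume-plugin-dir=/etc/kubernetes/volumeplugins
# kube-controller-manager
--flex-volume-plugin-dir=/etc/kubernetes/volumeplugins
```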
@@ -15,7 +15,7 @@ Windows applications constitute a large portion of the services and applications

## Windows containers in Kubernetes

To enable the orchestration of Windows containers in Kubernetes, include Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to scheduling Linux-based containers.

In order to run Windows containers, your Kubernetes cluster must include multiple operating systems, with control plane nodes running Linux and workers running either Windows or Linux depending on your workload needs. Windows Server 2019 is the only Windows operating system supported, enabling [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including kubelet, [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19).
@@ -92,7 +92,7 @@ We expect this implementation to progress from alpha to beta and GA in coming re

### go1.15.5

go1.15.5 has been integrated into the Kubernetes project as of this release, [including other infrastructure related updates on this effort](https://github.com/kubernetes/kubernetes/pull/95776).

### CSI Volume Snapshot graduates to General Availability
@@ -190,7 +190,7 @@ Currently, cadvisor_stats_provider provides AcceleratorStats but cri_stats_provi
  PodSubnet validates against the corresponding cluster "--node-cidr-mask-size" of the kube-controller-manager; it will fail if the values are not compatible.
  kubeadm no longer sets the node-mask automatically on IPv6 deployments; you must check that your IPv6 service subnet mask is compatible with the default node mask /64 or set it accordingly.
  Previously, for IPv6, if the podSubnet had a mask lower than /112, kubeadm calculated a node-mask to be a multiple of eight, splitting the available bits to maximise the number used for nodes. ([#95723](https://github.com/kubernetes/kubernetes/pull/95723), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle]
- The deprecated flag --experimental-kustomize is now removed from kubeadm commands. Use --experimental-patches instead, which was introduced in 1.19. Migration information is available in the --help description for --experimental-patches. ([#94871](https://github.com/kubernetes/kubernetes/pull/94871), [@neolit123](https://github.com/neolit123))
- The Windows hyper-v container feature gate is deprecated in 1.20 and will be removed in 1.21. ([#95505](https://github.com/kubernetes/kubernetes/pull/95505), [@wawa0210](https://github.com/wawa0210)) [SIG Node and Windows]
- The kube-apiserver ability to serve on an insecure port, deprecated since v1.10, has been removed. The insecure address flags `--address` and `--insecure-bind-address` have no effect in kube-apiserver and will be removed in v1.24. The insecure port flags `--port` and `--insecure-port` may only be set to 0 and will be removed in v1.24. ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42), [SIG API Machinery, Node, Testing])
- Add dual-stack Services (alpha). This is a BREAKING CHANGE to an alpha API.
@@ -2138,4 +2138,4 @@ filename | sha512 hash
- github.com/godbus/dbus: [ade71ed](https://github.com/godbus/dbus/tree/ade71ed)
- github.com/xlab/handysort: [fb3537e](https://github.com/xlab/handysort/tree/fb3537e)
- sigs.k8s.io/structured-merge-diff/v3: v3.0.0
- vbom.ml/util: db5cfe1
@@ -163,7 +163,7 @@ Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapsh

### Built-in snapshot

etcd supports built-in snapshot. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.

Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:
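
A minimal sketch of that command (assuming the etcdctl v3 API and that `$ENDPOINT` is already set):

```bash
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
# verify the snapshot afterwards
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
```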
@@ -72,7 +72,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "VNI": 4096,
      "Port": 4789
    }
  }
@@ -5,7 +5,7 @@ content_type: task

<!-- overview -->

This example demonstrates how to limit the amount of storage consumed in a namespace.

The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/),
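
As a hedged preview of where the demonstration is headed (the namespace and limits here are made-up values), a storage-focused ResourceQuota can be created imperatively:

```bash
kubectl create namespace quota-demo
kubectl create quota storage-quota -n quota-demo \
    --hard=requests.storage=5Gi,persistentvolumeclaims=5
```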
@@ -117,9 +117,10 @@ The `kubelet` has the following default hard eviction threshold:

* `memory.available<100Mi`
* `nodefs.available<10%`
* `imagefs.available<15%`

On a Linux node, the default value also includes `nodefs.inodesFree<5%`.
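
Overriding these defaults is done with the kubelet's `--eviction-hard` flag; a sketch that simply spells out the default values explicitly:

```bash
kubelet --eviction-hard=memory.available<100Mi,nodefs.available<10%,imagefs.available<15%
```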
### Eviction Monitoring Interval

The `kubelet` evaluates eviction thresholds per its configured housekeeping interval.

@@ -140,6 +141,7 @@ The following node conditions are defined that correspond to the specified evict
|-------------------|----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
| `MemoryPressure`  | `memory.available`                                                                      | Available memory on the node has satisfied an eviction threshold                                                                |
| `DiskPressure`    | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree`   | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold    |
| `PIDPressure`     | `pid.available`                                                                         | Available process identifiers on the (Linux) node has fallen below an eviction threshold                                        |

The `kubelet` continues to report node status updates at the frequency specified by
`--node-status-update-frequency` which defaults to `10s`.
@@ -23,7 +23,7 @@ dynamically, you need a strong understanding of how that change will affect your
cluster's behavior. Always carefully test configuration changes on a small set
of nodes before rolling them out cluster-wide. Advice on configuring specific
fields is available in the inline `KubeletConfiguration`
[type documentation (for v1.20)](https://github.com/kubernetes/kubernetes/blob/release-1.20/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
{{< /warning >}}
@@ -187,7 +187,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
To delete the Secret you have just created:

```shell
kubectl delete secret mysecret
```

## {{% heading "whatsnext" %}}
@@ -22,6 +22,7 @@ The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Po
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
The Pod names will be suffixed with the node hostname with a leading hyphen
(for example, a static Pod defined as `static-web` on a node with hostname `mynode` appears as `static-web-mynode`).

{{< note >}}
If you are running clustered Kubernetes and are using static
@@ -237,4 +238,3 @@ CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```
@@ -35,13 +35,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re

```sh
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose

# macOS
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose

# Windows
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe

chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
```
@@ -127,23 +127,7 @@ you need is an existing `docker-compose.yml` file.
       kompose.service.type: LoadBalancer
   ```

2. To convert the `docker-compose.yml` file to files that you can use with
   `kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
@@ -168,7 +152,7 @@ you need is an existing `docker-compose.yml` file.

   Your deployments are running in Kubernetes.

3. Access your application.

   If you're already using `minikube` for your development process:
@@ -18,10 +18,10 @@ you to figure out what's going wrong.
## Running commands in a Pod

For many steps here you will want to see what a Pod running in the cluster
sees. The simplest way to do this is to run an interactive busybox Pod:

```none
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
```
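
Once inside that shell, a reasonable first check (assuming your cluster's DNS serves the default `kubernetes.default` name) is:

```shell
nslookup kubernetes.default
```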

{{< note >}}
@@ -1,94 +0,0 @@
---
reviewers:
- piosz
- x13n
content_type: concept
title: Events in Stackdriver
---

<!-- overview -->

Kubernetes events are objects that provide insight into what is happening
inside a cluster, such as what decisions were made by the scheduler or why some
pods were evicted from the node. You can read more about using events
for debugging your application in the
[Application Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/)
section.

Since events are API objects, they are stored in the apiserver on the master. To
avoid filling up the master's disk, a retention policy is enforced: events are
removed one hour after the last occurrence. To provide longer history
and aggregation capabilities, a third party solution should be installed
to capture events.

This article describes a solution that exports Kubernetes events to
Stackdriver Logging, where they can be processed and analyzed.

{{< note >}}
It is not guaranteed that all events happening in a cluster will be
exported to Stackdriver. One possible scenario when events will not be
exported is when the event exporter is not running (e.g. during restart or
upgrade). In most cases it's fine to use events for purposes like setting up
[metrics](https://cloud.google.com/logging/docs/logs-based-metrics/) and [alerts](https://cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts), but you should be aware
of the potential inaccuracy.
{{< /note >}}

<!-- body -->

## Deployment

### Google Kubernetes Engine

In Google Kubernetes Engine, if cloud logging is enabled, the event exporter
is deployed by default to clusters whose master runs version 1.7 and
higher. To prevent disturbing your workloads, the event exporter does not have
resources set and is in the best-effort QoS class, which means that it will
be the first to be killed in the case of resource starvation. If you want
your events to be exported, make sure you have enough resources to facilitate
the event exporter pod. This may vary depending on the workload, but on
average, approximately 100MB of RAM and 100m of CPU are needed.

### Deploying to the Existing Cluster

Deploy the event exporter to your cluster using the following command:

```shell
kubectl apply -f https://k8s.io/examples/debug/event-exporter.yaml
```

Since the event exporter accesses the Kubernetes API, it requires permissions to
do so. The following deployment is configured to work with RBAC
authorization. It sets up a service account and a cluster role binding
to allow the event exporter to read events. To make sure that the event exporter
pod will not be evicted from the node, you can additionally set up resource
requests. As mentioned earlier, 100MB of RAM and 100m of CPU should be enough.

{{< codenew file="debug/event-exporter.yaml" >}}

## User Guide

Events are exported to the `GKE Cluster` resource in Stackdriver Logging.
You can find them by selecting an appropriate option from a drop-down menu
of available resources:

<img src="/images/docs/stackdriver-event-exporter-resource.png" alt="Events location in the Stackdriver Logging interface" width="500">

You can filter based on the event object fields using the Stackdriver Logging
[filtering mechanism](https://cloud.google.com/logging/docs/view/advanced_filters).
For example, the following query will show events from the scheduler
about pods from the deployment `nginx-deployment`:

```
resource.type="gke_cluster"
jsonPayload.kind="Event"
jsonPayload.source.component="default-scheduler"
jsonPayload.involvedObject.name:"nginx-deployment"
```

{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="Filtered events in the Stackdriver Logging interface" width="500" >}}
@ -1,126 +0,0 @@
|
||||||
---
|
|
||||||
reviewers:
|
|
||||||
- piosz
|
|
||||||
- x13n
|
|
||||||
content_type: concept
|
|
||||||
title: Logging Using Elasticsearch and Kibana
|
|
||||||
---
|
|
||||||
|
|
||||||
<!-- overview -->
|
|
||||||
|
|
||||||
On the Google Compute Engine (GCE) platform, the default logging support targets
|
|
||||||
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
|
|
||||||
in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).
|
|
||||||
|
|
||||||
This article describes how to set up a cluster to ingest logs into
|
|
||||||
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
|
|
||||||
them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
|
|
||||||
Stackdriver Logging when running on GCE.
|
|
||||||
|
|
||||||
{{< note >}}
|
|
||||||
You cannot automatically deploy Elasticsearch and Kibana in the Kubernetes cluster hosted on Google Kubernetes Engine. You have to deploy them manually.
|
|
||||||
{{< /note >}}
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
<!-- body -->
|
|
||||||
|
|
||||||
To use Elasticsearch and Kibana for cluster logging, you should set the
|
|
||||||
following environment variable as shown below when creating your cluster with
|
|
||||||
kube-up.sh:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
KUBE_LOGGING_DESTINATION=elasticsearch
|
|
||||||
```
|
|
||||||
|
|
||||||
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
|
|
||||||
|
|
||||||
Now, when you create a cluster, a message will indicate that the Fluentd log
|
|
||||||
collection daemons that run on each node will target Elasticsearch:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
cluster/kube-up.sh
|
|
||||||
```
|
|
||||||
```
|
|
||||||
...
|
|
||||||
Project: kubernetes-satnam
|
|
||||||
Zone: us-central1-b
|
|
||||||
... calling kube-up
|
|
||||||
Project: kubernetes-satnam
|
|
||||||
Zone: us-central1-b
|
|
||||||
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
|
|
||||||
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
|
|
||||||
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
|
|
||||||
Looking for already existing resources
|
|
||||||
Starting master and configuring firewalls
|
|
||||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
|
|
||||||
NAME ZONE SIZE_GB TYPE STATUS
|
|
||||||
kubernetes-master-pd us-central1-b 20 pd-ssd READY
|
|
||||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
|
|
||||||
+++ Logging using Fluentd to elasticsearch
|
|
||||||
```
|
|
||||||
|
|
||||||
The per-node Fluentd pods, the Elasticsearch pods, and the Kibana pods should
|
|
||||||
all be running in the kube-system namespace soon after the cluster comes to
|
|
||||||
life.
|
|
||||||
|
|
||||||
```shell
|
|
||||||
kubectl get pods --namespace=kube-system
|
|
||||||
```
|
|
||||||
```
|
|
||||||
NAME READY STATUS RESTARTS AGE
|
|
||||||
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
|
|
||||||
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
|
|
||||||
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
|
|
||||||
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
|
|
||||||
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
|
|
||||||
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
|
|
||||||
kibana-logging-v1-bhpo8 1/1 Running 0 2h
|
|
||||||
kube-dns-v3-7r1l9 3/3 Running 0 2h
|
|
||||||
monitoring-heapster-v4-yl332 1/1 Running 1 2h
|
|
||||||
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
|
|
||||||
```
|
|
||||||

The `fluentd-elasticsearch` pods gather logs from each node and send them to
the `elasticsearch-logging` pods, which are part of a
[service](/docs/concepts/services-networking/service/) named `elasticsearch-logging`. These
Elasticsearch pods store the logs and expose them via a REST API.
The `kibana-logging` pod provides a web UI for reading the logs stored in
Elasticsearch, and is part of a service named `kibana-logging`.

The Elasticsearch and Kibana services are both in the `kube-system` namespace
and are not directly exposed via a publicly reachable IP address. To reach them,
follow the instructions for
[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).
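One quick way in is to forward a local port to the Kibana service; a sketch, assuming the service name and port used on this page (adjust if yours differ):

```shell
# Forward local port 5601 to the kibana-logging service,
# then browse to http://localhost:5601/
kubectl port-forward --namespace=kube-system service/kibana-logging 5601:5601
```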

If you try accessing the `elasticsearch-logging` service in your browser, you'll
see a status page that looks something like this:

![Elasticsearch Status](/images/docs/es-browser.png)

You can now type Elasticsearch queries directly into the browser, if you'd
like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
for more details on how to do so.
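For example, assuming you run `kubectl proxy` locally on port 8001 (an assumption; adjust the host and port to your setup), a URI search through the API server's service proxy might look like this:

```shell
# Start a local proxy to the API server.
kubectl proxy --port=8001 &

# Ask Elasticsearch for the five newest log entries via the URI search API.
curl "http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=*&sort=@timestamp:desc&size=5"
```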

Alternatively, you can view your cluster's logs using Kibana (again using the
[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
The first time you visit the Kibana URL you will be presented with a page that
asks you to configure your view of the ingested logs. Select the option for
timeseries values and select `@timestamp`. On the following page select the
`Discover` tab and then you should be able to see the ingested logs.
You can set the refresh interval to 5 seconds to have the logs
regularly refreshed.

Here is a typical view of ingested logs from the Kibana viewer:

![Kibana User Interface](/images/docs/kibana-logs.png)

## {{% heading "whatsnext" %}}

Kibana opens up all sorts of powerful options for exploring your logs! For some
ideas on how to dig into it, check out [Kibana's documentation](https://www.elastic.co/guide/en/kibana/current/discover.html).
@ -583,14 +583,13 @@ and can optionally include a custom CA bundle to use to verify the TLS connectio

The `host` should not refer to a service running in the cluster; use
a service reference by specifying the `service` field instead.
The host might be resolved via external DNS in some apiservers
(i.e., `kube-apiserver` cannot resolve in-cluster DNS as that would
be a layering violation). `host` may also be an IP address.

Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
which run an apiserver which might need to make calls to this
webhook. Such installations are likely to be non-portable and not readily run in a new cluster.

The scheme must be "https"; the URL must begin with "https://".
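As an illustration of the two styles, here is a hypothetical webhook `clientConfig` sketch; the resource names are invented for the example:

```yaml
webhooks:
- name: example.webhook.example.com
  clientConfig:
    # Preferred for in-cluster webhooks: a service reference.
    service:
      namespace: webhook-ns
      name: webhook-svc
      path: /validate
    # For a webhook outside the cluster, use a URL instead;
    # the scheme must be https:
    # url: "https://webhook.example.com/validate"
    caBundle: "<base64-encoded PEM CA bundle>"
```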
@ -35,7 +35,7 @@ non-parallel, use of [Job](/docs/concepts/workloads/controllers/job/).

## Starting a message queue service

This example uses RabbitMQ; however, you can adapt it to use another AMQP-type message service.

In practice you could set up a message queue service once in a
cluster and reuse it for many jobs, as well as for long-running services.
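For instance, a minimal sketch of a throwaway RabbitMQ service (the image tag and names are illustrative, not taken from this example's manifests):

```shell
# Run a single RabbitMQ instance and expose the AMQP port inside the cluster.
kubectl create deployment rabbitmq --image=rabbitmq:3
kubectl expose deployment rabbitmq --port=5672
```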
@ -191,7 +191,7 @@ We can create a new autoscaler using `kubectl create` command.

We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.

In addition, there is a special `kubectl autoscale` command for creating a HorizontalPodAutoscaler object.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for ReplicaSet *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
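Putting those commands together (the ReplicaSet name `foo` comes from the example above):

```shell
# Create an autoscaler for ReplicaSet foo, inspect it, then remove it.
kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80
kubectl get hpa foo
kubectl describe hpa foo
kubectl delete hpa foo
```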
@ -221,9 +221,9 @@ the global HPA settings exposed as flags for the `kube-controller-manager` compo

Starting from v1.12, a new algorithmic update removes the need for the
upscale delay.

- `--horizontal-pod-autoscaler-downscale-stabilization`: Specifies the duration of the
  downscale stabilization time window. Horizontal Pod Autoscaler remembers
  the historical recommended sizes and only acts on the largest size within this time window.
  The default value is 5 minutes (`5m0s`).

{{< note >}}
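For example, a sketch of lengthening the window to 10 minutes (the value is illustrative, and the flag is shown in isolation from the component's other required flags):

```shell
kube-controller-manager --horizontal-pod-autoscaler-downscale-stabilization=10m0s
```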
@ -33,7 +33,7 @@ Before walking through each tutorial, you may want to bookmark the

* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)

* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)

## Stateful Applications
@ -41,7 +41,7 @@ card:

<div class="row">
    <div class="col-md-9">
    <h2>What can Kubernetes do for you?</h2>
    <p>With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.</p>
    </div>
</div>
@ -63,13 +63,7 @@ weight: 10

    <h3>Services and Labels</h3>
    </div>
</div>

<div class="row">
    <div class="col-md-8">
        <p>A Service routes traffic across a set of Pods. Services are the abstraction that allow pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) is handled by Kubernetes Services.</p>
@ -552,7 +552,7 @@ In another terminal, watch the Pods in the StatefulSet:

```shell
kubectl get pod -l app=nginx -w
```
The output is similar to:
```
NAME      READY   STATUS    RESTARTS   AGE
web-0     1/1     Running   0          7m
@ -21,39 +21,37 @@ and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affini

## {{% heading "prerequisites" %}}

Before starting this tutorial, you should be familiar with the following
Kubernetes concepts:

- [Pods](/docs/concepts/workloads/pods/)
- [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
- [Headless Services](/docs/concepts/services-networking/service/#headless-services)
- [PersistentVolumes](/docs/concepts/storage/volumes/)
- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
- [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
- [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
- [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
- [kubectl CLI](/docs/reference/kubectl/kubectl/)

You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and drain the cluster's nodes. **This means that the cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become unschedulable.** You should use a dedicated cluster for this tutorial, or you should ensure that the disruption you cause will not interfere with other tenants.

This tutorial assumes that you have configured your cluster to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually provision three 20 GiB volumes before starting this
tutorial.
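To check whether dynamic provisioning is available, you can list the cluster's StorageClasses (the classes you see depend on your environment):

```shell
kubectl get storageclass
```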

## {{% heading "objectives" %}}

After this tutorial, you will know the following:

- How to deploy a ZooKeeper ensemble using StatefulSet.
- How to consistently configure the ensemble.
- How to spread the deployment of ZooKeeper servers in the ensemble.
- How to use PodDisruptionBudgets to ensure service availability during planned maintenance.

<!-- lessoncontent -->

### ZooKeeper

[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a
distributed, open-source coordination service for distributed applications.
@ -68,7 +66,7 @@ The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot wr

ZooKeeper servers keep their entire state machine in memory, and write every mutation to a durable WAL (Write Ahead Log) on storage media. When a server crashes, it can recover its previous state by replaying the WAL. To prevent the WAL from growing without bound, ZooKeeper servers will periodically snapshot their in-memory state to storage media. These snapshots can be loaded directly into memory, and all WAL entries that preceded the snapshot may be discarded.

## Creating a ZooKeeper ensemble

The manifest below contains a
[Headless Service](/docs/concepts/services-networking/service/#headless-services),
@ -127,7 +125,7 @@ zk-2 1/1 Running 0 40s

The StatefulSet controller creates three Pods, and each Pod has a container with
a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server.

### Facilitating leader election

Because there is no terminating algorithm for electing a leader in an anonymous network, Zab requires explicit membership configuration to perform leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address.
@ -211,7 +209,7 @@ server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888

server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
```

### Achieving consensus

Consensus protocols require that the identifiers of each participant be unique. No two participants in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes in the system to agree on which processes have committed which data. If two Pods are launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server.
@ -260,7 +258,7 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

When the servers use the Zab protocol to attempt to commit a value, they will either achieve consensus and commit the value (if leader election has succeeded and at least two of the Pods are Running and Ready), or they will fail to do so (if either of the conditions are not met). No state will arise where one server acknowledges a write on behalf of another.

### Sanity testing the ensemble

The most basic sanity test is to write data to one ZooKeeper server and
to read the data from another.
@ -270,6 +268,7 @@ The command below executes the `zkCli.sh` script to write `world` to the path `/

```shell
kubectl exec zk-0 zkCli.sh create /hello world
```

```
WATCHER::
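To complete the sanity test, you can read the value back from a different server; a sketch that follows the tutorial's naming:

```shell
# Read the value written above from the zk-1 server.
kubectl exec zk-1 zkCli.sh get /hello
```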
@ -304,7 +303,7 @@ dataLength = 5

numChildren = 0
```

### Providing durable storage

As mentioned in the [ZooKeeper](#zookeeper) section,
ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots
@ -445,8 +444,8 @@ The `volumeMounts` section of the `StatefulSet`'s container `template` mounts th

```yaml
volumeMounts:
- name: datadir
  mountPath: /var/lib/zookeeper
```

When a Pod in the `zk` `StatefulSet` is (re)scheduled, it will always have the
@ -454,7 +453,7 @@ same `PersistentVolume` mounted to the ZooKeeper server's data directory.

Even when the Pods are rescheduled, all the writes made to the ZooKeeper
servers' WALs, and all their snapshots, remain durable.

## Ensuring consistent configuration

As noted in the [Facilitating Leader Election](#facilitating-leader-election) and
[Achieving Consensus](#achieving-consensus) sections, the servers in a
@ -469,6 +468,7 @@ Get the `zk` StatefulSet.

```shell
kubectl get sts zk -o yaml
```

```
…
command:
@ -497,7 +497,7 @@ command:

The command used to start the ZooKeeper servers passed the configuration as command-line parameters. You can also use environment variables to pass configuration to the ensemble.

### Configuring logging

One of the files generated by the `zkGenConfig.sh` script controls ZooKeeper's logging.
ZooKeeper uses [Log4j](https://logging.apache.org/log4j/2.x/), and, by default,
@ -558,13 +558,11 @@ You can view application logs written to standard out or standard error using `k

2016-12-06 19:34:46,230 [myid:1] - INFO  [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client)
```

Kubernetes integrates with many logging solutions. You can choose a logging solution
that best fits your cluster and applications. For cluster-level logging and aggregation,
consider deploying a [sidecar container](/docs/concepts/cluster-administration/logging#sidecar-container-with-logging-agent) to rotate and ship your logs.

### Configuring a non-privileged user

The best practices to allow an application to run as a privileged
user inside of a container are a matter of debate. If your organization requires
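As a sketch, the kind of `securityContext` the tutorial's manifest applies looks like the following; the UID and GID of 1000 match the `fsGroup` discussed in the next hunk:

```yaml
securityContext:
  runAsUser: 1000   # run the ZooKeeper process as a non-root user
  fsGroup: 1000     # volume files are owned by this group
```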
@ -612,7 +610,7 @@ Because the `fsGroup` field of the `securityContext` object is set to 1000, the

drwxr-sr-x 3 zookeeper zookeeper 4096 Dec  5 20:45 /var/lib/zookeeper/data
```

## Managing the ZooKeeper process

The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
mentions that "You will want to have a supervisory process that
@ -622,7 +620,7 @@ common pattern. When deploying an application in Kubernetes, rather than using

an external utility as a supervisory process, you should use Kubernetes as the
watchdog for your application.

### Updating the ensemble

The `zk` `StatefulSet` is configured to use the `RollingUpdate` update strategy.
@ -631,6 +629,7 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv

```shell
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
```

```
statefulset.apps/zk patched
```
@ -640,6 +639,7 @@ Use `kubectl rollout status` to watch the status of the update.

```shell
kubectl rollout status sts/zk
```

```
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
@ -678,7 +678,7 @@ kubectl rollout undo sts/zk

statefulset.apps/zk rolled back
```

### Handling process failure

[Restart Policies](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) control how
Kubernetes handles process failures for the entry point of the container in a Pod.
@ -731,7 +731,7 @@ that implements the application's business logic, the script must terminate with

child process. This ensures that Kubernetes will restart the application's
container when the process implementing the application's business logic fails.

### Testing for liveness

Configuring your application to restart failed processes is not enough to
keep a distributed system healthy. There are scenarios where
@ -795,7 +795,7 @@ zk-0 0/1 Running 1 1h

zk-0      1/1       Running   1          1h
```

### Testing for readiness

Readiness is not the same as liveness. If a process is alive, it is scheduled
and healthy. If a process is ready, it is able to process input. Liveness is
@ -824,7 +824,7 @@ Even though the liveness and readiness probes are identical, it is important

to specify both. This ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic.

## Tolerating Node failure

ZooKeeper needs a quorum of servers to successfully commit mutations
to data. For a three server ensemble, two servers must be healthy for
@ -879,10 +879,10 @@ as `zk` in the domain defined by the `topologyKey`. The `topologyKey`

different rules, labels, and selectors, you can extend this technique to spread
your ensemble across physical, network, and power failure domains.

## Surviving maintenance

In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be sure that this will not adversely affect other tenants.

The previous section showed you how to spread your Pods across nodes to survive
unplanned node failures, but you also need to plan for temporary node failures
@ -1017,6 +1017,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which

```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```

```
node "kubernetes-node-i4c4" cordoned
@ -1059,6 +1060,7 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc

```shell
kubectl uncordon kubernetes-node-pb41
```

```
node "kubernetes-node-pb41" uncordoned
```
@ -1068,6 +1070,7 @@ node "kubernetes-node-pb41" uncordoned

```shell
kubectl get pods -w -l app=zk
```

```
NAME      READY   STATUS    RESTARTS   AGE
zk-0      1/1     Running   2          1h
@ -1130,9 +1133,7 @@ You should always allocate additional capacity for critical services so that the

## {{% heading "cleanup" %}}

- Use `kubectl uncordon` to uncordon all the nodes in your cluster.
- You must delete the persistent storage media for the PersistentVolumes used in this tutorial.
  Follow the necessary steps, based on your environment, storage configuration,
  and provisioning method, to ensure that all storage is reclaimed.
@ -1,460 +0,0 @@

---
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
reviewers:
- sftim
content_type: tutorial
weight: 21
card:
  name: tutorials
  weight: 31
  title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
---

<!-- overview -->
This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:

* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
* Elasticsearch and Kibana
* Filebeat
* Metricbeat
* Packetbeat

## {{% heading "objectives" %}}

* Start up the PHP Guestbook with Redis.
* Install kube-state-metrics.
* Create a Kubernetes Secret.
* Deploy the Beats.
* View dashboards of your logs and metrics.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}
{{< version-check >}}

Additionally you need:

* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.

* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co),
  run the [downloaded files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html)
  on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).
<!-- lessoncontent -->

## Start up the PHP Guestbook with Redis

This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running, then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.

## Add a cluster role binding

Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).

```shell
kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole=cluster-admin --user=<your email associated with the k8s provider account>
```
## Install kube-state-metrics

Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

```shell
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl apply -f kube-state-metrics/examples/standard
```

### Check to see if kube-state-metrics is running

```shell
kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics
```

Output:

```
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-89d656bf8-vdthm   1/1     Running   0          21s
```
## Clone the Elastic examples GitHub repo

```shell
git clone https://github.com/elastic/examples.git
```

The rest of the commands will reference files in the `examples/beats-k8s-send-anywhere` directory, so change directory there:

```shell
cd examples/beats-k8s-send-anywhere
```

## Create a Kubernetes Secret

A Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

{{< note >}}
There are two sets of steps here, one for *self managed* Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the *managed service* Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
{{< /note >}}
{{< tabs name="tab_with_md" >}}
{{% tab name="Self Managed" %}}

### Self managed

Switch to the **Managed service** tab if you are connecting to Elasticsearch Service in Elastic Cloud.

### Set the credentials

There are four files to edit to create a Kubernetes Secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:

1. `ELASTICSEARCH_HOSTS`
1. `ELASTICSEARCH_PASSWORD`
1. `ELASTICSEARCH_USERNAME`
1. `KIBANA_HOST`

Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)).

#### `ELASTICSEARCH_HOSTS`

1. A nodeGroup from the Elastic Elasticsearch Helm Chart:

   ```
   ["http://elasticsearch-master.default.svc.cluster.local:9200"]
   ```

1. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:

   ```
   ["http://host.docker.internal:9200"]
   ```

1. Two Elasticsearch nodes running in VMs or on physical hardware:

   ```
   ["http://host1.example.com:9200", "http://host2.example.com:9200"]
   ```

Edit `ELASTICSEARCH_HOSTS`:

```shell
vi ELASTICSEARCH_HOSTS
```
#### `ELASTICSEARCH_PASSWORD`

Just the password; no whitespace, quotes, `<` or `>`:

```
<yoursecretpassword>
```

Edit `ELASTICSEARCH_PASSWORD`:

```shell
vi ELASTICSEARCH_PASSWORD
```

#### `ELASTICSEARCH_USERNAME`

Just the username; no whitespace, quotes, `<` or `>`:

```
<your ingest username for Elasticsearch>
```

Edit `ELASTICSEARCH_USERNAME`:

```shell
vi ELASTICSEARCH_USERNAME
```

#### `KIBANA_HOST`

1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain `default` refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:

   ```
   "kibana-kibana.default.svc.cluster.local:5601"
   ```

1. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:

   ```
   "host.docker.internal:5601"
   ```

1. A Kibana instance running in a VM or on physical hardware:

   ```
   "host1.example.com:5601"
   ```

Edit `KIBANA_HOST`:

```shell
vi KIBANA_HOST
```
### Create a Kubernetes Secret

This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTICSEARCH_HOSTS \
  --from-file=./ELASTICSEARCH_PASSWORD \
  --from-file=./ELASTICSEARCH_USERNAME \
  --from-file=./KIBANA_HOST \
  --namespace=kube-system
```
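You can confirm that the Secret exists before moving on:

```shell
kubectl get secret dynamic-logging --namespace=kube-system
```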

{{% /tab %}}
{{% tab name="Managed service" %}}
### Managed service

This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with [Deploy the Beats](#deploy-the-beats).

### Set the credentials

There are two files to edit to create a Kubernetes Secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

1. `ELASTIC_CLOUD_AUTH`
1. `ELASTIC_CLOUD_ID`

Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:

#### `ELASTIC_CLOUD_ID`

```
devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
```

#### `ELASTIC_CLOUD_AUTH`

Just the username, a colon (`:`), and the password, no whitespace or quotes:

```
elastic:VFxJJf9Tjwer90wnfTghsn8w
```

### Edit the required files

```shell
vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH
```
### Create a Kubernetes Secret

This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTIC_CLOUD_ID \
  --from-file=./ELASTIC_CLOUD_AUTH \
  --namespace=kube-system
```

{{% /tab %}}
{{< /tabs >}}
## Deploy the Beats

Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.

### About Filebeat

Filebeat collects logs from the Kubernetes nodes and from the containers running in each Pod on those nodes. Filebeat is deployed as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it watches for new start/stop events.

Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file `filebeat-kubernetes.yaml`:

```yaml
- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]
```

This configures Filebeat to apply the Filebeat module `redis` when a container is detected with a label `app` containing the string `redis`. The redis module can collect the `log` stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module can collect Redis `slowlog` entries by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Filebeat

```shell
kubectl create -f filebeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
```
### About Metricbeat

Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file `metricbeat-kubernetes.yaml`:

```yaml
- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s

      # Redis hosts
      hosts: ["${data.host}:${data.port}"]
```

This configures Metricbeat to apply the Metricbeat module `redis` when a container is detected with a label `tier` equal to the string `backend`. The `redis` module can collect the `info` and `keyspace` metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Metricbeat

```shell
kubectl create -f metricbeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=metricbeat
```
### About Packetbeat

Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

{{< note >}}
If you are running a service on a non-standard port, add that port number to the appropriate type in `packetbeat.yaml` and delete/create the Packetbeat DaemonSet. A sketch of such a change follows the configuration below.
{{< /note >}}

```yaml
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8000, 8080, 9200]

- type: mysql
  ports: [3306]

- type: redis
  ports: [6379]

packetbeat.flows:
  timeout: 30s
  period: 10s
```
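As a sketch of the note above, a service speaking HTTP on a non-standard port such as 8443 (an illustrative value) would be added to the `http` entry:

```yaml
- type: http
  ports: [80, 8000, 8080, 8443, 9200]
```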
### Deploy Packetbeat

```shell
kubectl create -f packetbeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
```
## View in Kibana

Open Kibana in your browser and then open the **Dashboard** application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, Deployments, and more.

Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
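A hypothetical sketch of such a ConfigMap; the ConfigMap name, file name, and Apache directives are illustrative, and how it is mounted depends on your frontend image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-server-status
data:
  # Enable the mod_status endpoint so Metricbeat can scrape /server-status.
  status.conf: |
    <Location "/server-status">
      SetHandler server-status
    </Location>
```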
## Scale your Deployments and see new Pods being monitored

List the existing Deployments:

```shell
kubectl get deployments
```
The output:

```
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend       3/3     3            3           3h27m
redis-master   1/1     1            1           3h27m
redis-slave    2/2     2            2           3h27m
```

Scale the frontend down to two pods:

```shell
kubectl scale --replicas=2 deployment/frontend
```

The output:

```
deployment.extensions/frontend scaled
```

Scale the frontend back up to three pods:

```shell
kubectl scale --replicas=3 deployment/frontend
```

## View the changes in Kibana

See the screenshot, add the indicated filters and then add the columns to the view. You can see the ScalingReplicaSet entry that is marked. Following from there to the top of the list of events shows the image being pulled, the volumes mounted, the pod starting, and so on.

<!-- screenshot: Kibana Discover view of the scaling events -->
## {{% heading "cleanup" %}}

Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

   ```shell
   kubectl delete deployment -l app=redis
   kubectl delete service -l app=redis
   kubectl delete deployment -l app=guestbook
   kubectl delete service -l app=guestbook
   kubectl delete -f filebeat-kubernetes.yaml
   kubectl delete -f metricbeat-kubernetes.yaml
   kubectl delete -f packetbeat-kubernetes.yaml
   kubectl delete secret dynamic-logging -n kube-system
   ```

1. Query the list of Pods to verify that no Pods are running:

   ```shell
   kubectl get pods
   ```

   The response should be this:

   ```
   No resources found.
   ```

## {{% heading "whatsnext" %}}

* Learn about [tools for monitoring resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
* Read more about [troubleshooting applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
@ -1,5 +1,5 @@

---
title: "Example: Deploying PHP Guestbook application with MongoDB"
reviewers:
- ahmetb
content_type: tutorial
@ -7,22 +7,19 @@ weight: 20

card:
  name: tutorials
  weight: 30
  title: "Stateless Example: PHP Guestbook with MongoDB"
min-kubernetes-server-version: v1.14
---

<!-- overview -->
This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:

* A single-instance [MongoDB](https://www.mongodb.com/) to store guestbook entries
* Multiple web frontend instances

## {{% heading "objectives" %}}

* Start up a Mongo database.
* Start up the guestbook frontend.
* Expose and view the Frontend Service.
* Clean up.
|
||||||
|
|
||||||
<!-- lessoncontent -->
|
<!-- lessoncontent -->
|
||||||
|
|
||||||
## Start up the Redis Master
|
## Start up the Mongo Database
|
||||||
|
|
||||||
The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
|
The guestbook application uses MongoDB to store its data.
|
||||||
|
|
||||||
### Creating the Redis Master Deployment
|
### Creating the Mongo Deployment
|
||||||
|
|
||||||
The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
|
The manifest file, included below, specifies a Deployment controller that runs a single replica MongoDB Pod.
|
||||||
|
|
||||||
{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
|
{{< codenew file="application/guestbook/mongo-deployment.yaml" >}}
|
||||||
|
|
||||||
1. Launch a terminal window in the directory you downloaded the manifest files.
|
1. Launch a terminal window in the directory you downloaded the manifest files.
|
||||||
1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:
|
1. Apply the MongoDB Deployment from the `mongo-deployment.yaml` file:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
|
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
|
||||||
```
|
```
|
||||||
|
<!---
|
||||||
|
for local testing of the content via relative file path
|
||||||
|
kubectl apply -f ./content/en/examples/application/guestbook/mongo-deployment.yaml
|
||||||
|
-->
|
||||||
|
|
||||||
1. Query the list of Pods to verify that the Redis Master Pod is running:
|
1. Query the list of Pods to verify that the MongoDB Pod is running:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl get pods
|
kubectl get pods
|
||||||
|
|
@ -66,32 +67,33 @@ The manifest file, included below, specifies a Deployment controller that runs a
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE
|
||||||
redis-master-1068406935-3lswp 1/1 Running 0 28s
|
mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s
|
||||||
```
|
```
|
||||||
|
|
||||||
1. Run the following command to view the logs from the Redis Master Pod:
|
1. Run the following command to view the logs from the MongoDB Deployment:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl logs -f POD-NAME
|
kubectl logs -f deployment/mongo
|
||||||
```
|
```
|
||||||
|
|
||||||
{{< note >}}
|
### Creating the MongoDB Service
|
||||||
Replace POD-NAME with the name of your Pod.
|
|
||||||
{{< /note >}}
|
|
||||||
|
|
||||||
### Creating the Redis Master Service
|
The guestbook application needs to communicate to the MongoDB to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the MongoDB Pod. A Service defines a policy to access the Pods.
|
||||||
|
|
||||||
The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
|
{{< codenew file="application/guestbook/mongo-service.yaml" >}}
|
||||||
|
|
||||||
{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
|
1. Apply the MongoDB Service from the following `mongo-service.yaml` file:
|
||||||
|
|
||||||
1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
|
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
|
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
1. Query the list of Services to verify that the Redis Master Service is running:
|
<!---
|
||||||
|
for local testing of the content via relative file path
|
||||||
|
kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
|
||||||
|
-->
|
||||||
|
|
||||||
|
1. Query the list of Services to verify that the MongoDB Service is running:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl get service
|
kubectl get service
|
||||||
|
|
@ -102,77 +104,17 @@ The guestbook application needs to communicate to the Redis master to write its
|
||||||
```shell
|
```shell
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
|
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
|
||||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 8s
|
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 8s
|
||||||
```
|
```
|
||||||
|
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
|
This manifest file creates a Service named `mongo` with a set of labels that match the labels previously defined, so the Service routes network traffic to the MongoDB Pod.
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
|
|
||||||
## Start up the Redis Slaves
|
|
||||||
|
|
||||||
Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.
|
|
||||||
|
|
||||||
### Creating the Redis Slave Deployment
|
|
||||||
|
|
||||||
Deployments scale based off of the configurations set in the manifest file. In this case, the Deployment object specifies two replicas.
|
|
||||||
|
|
||||||
If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
|
|
||||||
|
|
||||||
{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}
|
|
||||||
|
|
||||||
1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
1. Query the list of Pods to verify that the Redis Slave Pods are running:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
kubectl get pods
|
|
||||||
```
|
|
||||||
|
|
||||||
The response should be similar to this:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
NAME READY STATUS RESTARTS AGE
|
|
||||||
redis-master-1068406935-3lswp 1/1 Running 0 1m
|
|
||||||
redis-slave-2005841000-fpvqc 0/1 ContainerCreating 0 6s
|
|
||||||
redis-slave-2005841000-phfv9 0/1 ContainerCreating 0 6s
|
|
||||||
```
|
|
||||||
|
|
||||||
### Creating the Redis Slave Service
|
|
||||||
|
|
||||||
The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
|
|
||||||
|
|
||||||
{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}
|
|
||||||
|
|
||||||
1. Apply the Redis Slave Service from the following `redis-slave-service.yaml` file:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
1. Query the list of Services to verify that the Redis slave service is running:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
kubectl get services
|
|
||||||
```
|
|
||||||
|
|
||||||
The response should be similar to this:
|
|
||||||
|
|
||||||
```
|
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
|
||||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2m
|
|
||||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 1m
|
|
||||||
redis-slave ClusterIP 10.0.0.223 <none> 6379/TCP 6s
|
|
||||||
```
|
|
||||||
|
|
||||||
## Set up and Expose the Guestbook Frontend
|
## Set up and Expose the Guestbook Frontend
|
||||||
|
|
||||||
The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `redis-master` Service for write requests and the `redis-slave` service for Read requests.
|
The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `mongo` Service to store Guestbook entries.
|
||||||
|
|
||||||
### Creating the Guestbook Frontend Deployment
|
### Creating the Guestbook Frontend Deployment
|
||||||
|
|
||||||
|
|
@ -184,6 +126,11 @@ The guestbook application has a web frontend serving the HTTP requests written i
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
|
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
|
<!---
|
||||||
|
for local testing of the content via relative file path
|
||||||
|
kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment.yaml
|
||||||
|
-->
|
||||||
|
|
||||||
1. Query the list of Pods to verify that the three frontend replicas are running:
|
1. Query the list of Pods to verify that the three frontend replicas are running:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
|
|
@ -201,12 +148,12 @@ The guestbook application has a web frontend serving the HTTP requests written i
|
||||||
|
|
||||||
### Creating the Frontend Service
|
### Creating the Frontend Service
|
||||||
|
|
||||||
The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
|
The `mongo` Services you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
|
||||||
|
|
||||||
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
|
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However a Kubernetes user you can use `kubectl port-forward` to access the service even though it uses a `ClusterIP`.
|
||||||
|
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
|
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply uncomment `type: LoadBalancer`.
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
{{< codenew file="application/guestbook/frontend-service.yaml" >}}
|
{{< codenew file="application/guestbook/frontend-service.yaml" >}}
|
||||||
|
|
@ -217,6 +164,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
|
||||||
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
|
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
|
<!---
|
||||||
|
for local testing of the content via relative file path
|
||||||
|
kubectl apply -f ./content/en/examples/application/guestbook/frontend-service.yaml
|
||||||
|
-->
|
||||||
|
|
||||||
1. Query the list of Services to verify that the frontend Service is running:
|
1. Query the list of Services to verify that the frontend Service is running:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
|
|
@ -227,29 +179,27 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
|
||||||
|
|
||||||
```
|
```
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||||
frontend NodePort 10.0.0.112 <none> 80:31323/TCP 6s
|
frontend ClusterIP 10.0.0.112 <none> 80/TCP 6s
|
||||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
|
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
|
||||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 2m
|
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 2m
|
||||||
redis-slave ClusterIP 10.0.0.223 <none> 6379/TCP 1m
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Viewing the Frontend Service via `NodePort`
|
### Viewing the Frontend Service via `kubectl port-forward`
|
||||||
|
|
||||||
If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your Guestbook.
|
1. Run the following command to forward port `8080` on your local machine to port `80` on the service.
|
||||||
|
|
||||||
1. Run the following command to get the IP address for the frontend Service.
|
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
minikube service frontend --url
|
kubectl port-forward svc/frontend 8080:80
|
||||||
```
|
```
|
||||||
|
|
||||||
The response should be similar to this:
|
The response should be similar to this:
|
||||||
|
|
||||||
```
|
```
|
||||||
http://192.168.99.100:31323
|
Forwarding from 127.0.0.1:8080 -> 80
|
||||||
|
Forwarding from [::1]:8080 -> 80
|
||||||
```
|
```
|
||||||
|
|
||||||
1. Copy the IP address, and load the page in your browser to view your guestbook.
|
1. load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.
|
||||||
|
|
||||||
### Viewing the Frontend Service via `LoadBalancer`
|
### Viewing the Frontend Service via `LoadBalancer`
|
||||||
|
|
||||||
|
|
@ -272,7 +222,7 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
|
||||||
|
|
||||||
## Scale the Web Frontend
|
## Scale the Web Frontend
|
||||||
|
|
||||||
Scaling up or down is easy because your servers are defined as a Service that uses a Deployment controller.
|
You can scale up or down as needed because your servers are defined as a Service that uses a Deployment controller.
|
||||||
|
|
||||||
1. Run the following command to scale up the number of frontend Pods:
|
1. Run the following command to scale up the number of frontend Pods:
|
||||||
|
|
||||||
|
|
@ -295,9 +245,7 @@ Scaling up or down is easy because your servers are defined as a Service that us
|
||||||
frontend-3823415956-k22zn 1/1 Running 0 54m
|
frontend-3823415956-k22zn 1/1 Running 0 54m
|
||||||
frontend-3823415956-w9gbt 1/1 Running 0 54m
|
frontend-3823415956-w9gbt 1/1 Running 0 54m
|
||||||
frontend-3823415956-x2pld 1/1 Running 0 5s
|
frontend-3823415956-x2pld 1/1 Running 0 5s
|
||||||
redis-master-1068406935-3lswp 1/1 Running 0 56m
|
mongo-1068406935-3lswp 1/1 Running 0 56m
|
||||||
redis-slave-2005841000-fpvqc 1/1 Running 0 55m
|
|
||||||
redis-slave-2005841000-phfv9 1/1 Running 0 55m
|
|
||||||
```
|
```
|
||||||
|
|
||||||
1. Run the following command to scale down the number of frontend Pods:
|
1. Run the following command to scale down the number of frontend Pods:
|
||||||
|
|
@ -318,9 +266,7 @@ Scaling up or down is easy because your servers are defined as a Service that us
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE
|
||||||
frontend-3823415956-k22zn 1/1 Running 0 1h
|
frontend-3823415956-k22zn 1/1 Running 0 1h
|
||||||
frontend-3823415956-w9gbt 1/1 Running 0 1h
|
frontend-3823415956-w9gbt 1/1 Running 0 1h
|
||||||
redis-master-1068406935-3lswp 1/1 Running 0 1h
|
mongo-1068406935-3lswp 1/1 Running 0 1h
|
||||||
redis-slave-2005841000-fpvqc 1/1 Running 0 1h
|
|
||||||
redis-slave-2005841000-phfv9 1/1 Running 0 1h
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -332,20 +278,18 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
|
||||||
1. Run the following commands to delete all Pods, Deployments, and Services.
|
1. Run the following commands to delete all Pods, Deployments, and Services.
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl delete deployment -l app=redis
|
kubectl delete deployment -l app.kubernetes.io/name=mongo
|
||||||
kubectl delete service -l app=redis
|
kubectl delete service -l app.kubernetes.io/name=mongo
|
||||||
kubectl delete deployment -l app=guestbook
|
kubectl delete deployment -l app.kubernetes.io/name=guestbook
|
||||||
kubectl delete service -l app=guestbook
|
kubectl delete service -l app.kubernetes.io/name=guestbook
|
||||||
```
|
```
|
||||||
|
|
||||||
The responses should be:
|
The responses should be:
|
||||||
|
|
||||||
```
|
```
|
||||||
deployment.apps "redis-master" deleted
|
deployment.apps "mongo" deleted
|
||||||
deployment.apps "redis-slave" deleted
|
service "mongo" deleted
|
||||||
service "redis-master" deleted
|
deployment.apps "frontend" deleted
|
||||||
service "redis-slave" deleted
|
|
||||||
deployment.apps "frontend" deleted
|
|
||||||
service "frontend" deleted
|
service "frontend" deleted
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
@ -365,9 +309,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
|
||||||
|
|
||||||
## {{% heading "whatsnext" %}}
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
* Add [ELK logging and monitoring](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
|
|
||||||
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
|
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
|
||||||
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
|
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
|
||||||
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
|
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
|
||||||
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
|
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
|
||||||
|
|
||||||
|
|
|
||||||
|
|
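
For reference, the scale commands themselves sit outside the hunks quoted above, which only show the resulting Pod listings. Under the tutorial's setup they are presumably of this form — a sketch, not part of the diff:

```shell
# Hypothetical usage: grow the frontend to five replicas, then shrink it back
kubectl scale deployment frontend --replicas=5
kubectl scale deployment frontend --replicas=2
```
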
@@ -21,7 +21,7 @@ rules:
   - certificates.k8s.io
   resources:
   - signers
-  resourceName:
+  resourceNames:
   - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
   verbs:
   - sign

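One way to sanity-check the corrected `resourceNames` field above is to generate an equivalent rule imperatively and compare the output; a sketch, using the hypothetical role name `my-signer-clusterrole`:

```shell
# Emits a ClusterRole whose rule should match the fixed manifest
kubectl create clusterrole my-signer-clusterrole \
  --verb=sign \
  --resource=signers.certificates.k8s.io \
  --resource-name='example.com/my-signer-name' \
  --dry-run=client -o yaml
```
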
@@ -3,22 +3,24 @@ kind: Deployment
 metadata:
   name: frontend
   labels:
-    app: guestbook
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend
 spec:
   selector:
     matchLabels:
-      app: guestbook
-      tier: frontend
+      app.kubernetes.io/name: guestbook
+      app.kubernetes.io/component: frontend
   replicas: 3
   template:
     metadata:
       labels:
-        app: guestbook
-        tier: frontend
+        app.kubernetes.io/name: guestbook
+        app.kubernetes.io/component: frontend
     spec:
       containers:
-      - name: php-redis
-        image: gcr.io/google-samples/gb-frontend:v4
+      - name: guestbook
+        image: paulczar/gb-frontend:v5
+        # image: gcr.io/google-samples/gb-frontend:v4
         resources:
           requests:
             cpu: 100m
@@ -26,13 +28,5 @@ spec:
         env:
         - name: GET_HOSTS_FROM
           value: dns
-          # Using `GET_HOSTS_FROM=dns` requires your cluster to
-          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
-          # service launched automatically. However, if the cluster you are using
-          # does not have a built-in DNS service, you can instead
-          # access an environment variable to find the master
-          # service's host. To do so, comment out the 'value: dns' line above, and
-          # uncomment the line below:
-          # value: env
         ports:
         - containerPort: 80

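The retained `GET_HOSTS_FROM=dns` setting only works when the cluster can resolve Service names. A quick way to confirm that from inside the cluster is a throwaway Pod; a sketch, assuming the `mongo` Service exists in the default namespace:

```shell
# busybox:1.28 ships an nslookup that behaves well for cluster DNS checks
kubectl run dns-check --rm -it --image=busybox:1.28 --restart=Never \
  -- nslookup mongo
```
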
@@ -3,16 +3,14 @@ kind: Service
 metadata:
   name: frontend
   labels:
-    app: guestbook
-    tier: frontend
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend
 spec:
-  # comment or delete the following line if you want to use a LoadBalancer
-  type: NodePort
   # if your cluster supports it, uncomment the following to automatically create
   # an external load-balanced IP for the frontend service.
   # type: LoadBalancer
   ports:
   - port: 80
   selector:
-    app: guestbook
-    tier: frontend
+    app.kubernetes.io/name: guestbook
+    app.kubernetes.io/component: frontend

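Instead of editing the manifest to uncomment `type: LoadBalancer`, an existing Service can also be switched in place; a sketch, assuming a cluster that supports external load balancers:

```shell
kubectl patch service frontend -p '{"spec": {"type": "LoadBalancer"}}'
# watch until an EXTERNAL-IP is assigned
kubectl get service frontend --watch
```
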
@@ -0,0 +1,31 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: mongo
+  labels:
+    app.kubernetes.io/name: mongo
+    app.kubernetes.io/component: backend
+spec:
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: mongo
+      app.kubernetes.io/component: backend
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: mongo
+        app.kubernetes.io/component: backend
+    spec:
+      containers:
+      - name: mongo
+        image: mongo:4.2
+        args:
+        - --bind_ip
+        - 0.0.0.0
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 27017

@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: mongo
+  labels:
+    app.kubernetes.io/name: mongo
+    app.kubernetes.io/component: backend
+spec:
+  ports:
+  - port: 27017
+    targetPort: 27017
+  selector:
+    app.kubernetes.io/name: mongo
+    app.kubernetes.io/component: backend

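A quick smoke test of the two new manifests above is to open a one-off client against the `mongo` Service; a sketch, assuming the default namespace and the same `mongo:4.2` image the Deployment uses:

```shell
# The Pod is deleted automatically when the shell exits
kubectl run mongo-client --rm -it --image=mongo:4.2 --restart=Never \
  -- mongo mongodb://mongo:27017 --eval 'db.runCommand({ ping: 1 })'
```
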
@ -1,29 +0,0 @@
|
||||||
apiVersion: apps/v1
|
|
||||||
kind: Deployment
|
|
||||||
metadata:
|
|
||||||
name: redis-master
|
|
||||||
labels:
|
|
||||||
app: redis
|
|
||||||
spec:
|
|
||||||
selector:
|
|
||||||
matchLabels:
|
|
||||||
app: redis
|
|
||||||
role: master
|
|
||||||
tier: backend
|
|
||||||
replicas: 1
|
|
||||||
template:
|
|
||||||
metadata:
|
|
||||||
labels:
|
|
||||||
app: redis
|
|
||||||
role: master
|
|
||||||
tier: backend
|
|
||||||
spec:
|
|
||||||
containers:
|
|
||||||
- name: master
|
|
||||||
image: k8s.gcr.io/redis:e2e # or just image: redis
|
|
||||||
resources:
|
|
||||||
requests:
|
|
||||||
cpu: 100m
|
|
||||||
memory: 100Mi
|
|
||||||
ports:
|
|
||||||
- containerPort: 6379
|
|
||||||
|
|
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: redis-master
-  labels:
-    app: redis
-    role: master
-    tier: backend
-spec:
-  ports:
-  - name: redis
-    port: 6379
-    targetPort: 6379
-  selector:
-    app: redis
-    role: master
-    tier: backend

@@ -1,40 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: redis-slave
-  labels:
-    app: redis
-spec:
-  selector:
-    matchLabels:
-      app: redis
-      role: slave
-      tier: backend
-  replicas: 2
-  template:
-    metadata:
-      labels:
-        app: redis
-        role: slave
-        tier: backend
-    spec:
-      containers:
-      - name: slave
-        image: gcr.io/google_samples/gb-redisslave:v3
-        resources:
-          requests:
-            cpu: 100m
-            memory: 100Mi
-        env:
-        - name: GET_HOSTS_FROM
-          value: dns
-          # Using `GET_HOSTS_FROM=dns` requires your cluster to
-          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
-          # service launched automatically. However, if the cluster you are using
-          # does not have a built-in DNS service, you can instead
-          # access an environment variable to find the master
-          # service's host. To do so, comment out the 'value: dns' line above, and
-          # uncomment the line below:
-          # value: env
-        ports:
-        - containerPort: 6379

@@ -1,15 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: redis-slave
-  labels:
-    app: redis
-    role: slave
-    tier: backend
-spec:
-  ports:
-  - port: 6379
-  selector:
-    app: redis
-    role: slave
-    tier: backend

@@ -148,6 +148,11 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {
 }

 func validateObject(obj runtime.Object) (errors field.ErrorList) {
+    podValidationOptions := validation.PodValidationOptions{
+        AllowMultipleHugePageResources: true,
+        AllowDownwardAPIHugePages:      true,
+    }
+
     // Enable CustomPodDNS for testing
     // feature.DefaultFeatureGate.Set("CustomPodDNS=true")
     switch t := obj.(type) {
@@ -182,7 +187,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
         opts := validation.PodValidationOptions{
             AllowMultipleHugePageResources: true,
         }
-        errors = validation.ValidatePod(t, opts)
+        errors = validation.ValidatePodCreate(t, opts)
     case *api.PodList:
         for i := range t.Items {
             errors = append(errors, validateObject(&t.Items[i])...)
@@ -191,12 +196,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = validation.ValidatePodTemplate(t)
+        errors = validation.ValidatePodTemplate(t, podValidationOptions)
     case *api.ReplicationController:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = validation.ValidateReplicationController(t)
+        errors = validation.ValidateReplicationController(t, podValidationOptions)
     case *api.ReplicationControllerList:
         for i := range t.Items {
             errors = append(errors, validateObject(&t.Items[i])...)
@@ -215,7 +220,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = validation.ValidateService(t, true)
+        // handle clusterIPs, logic copied from service strategy
+        if len(t.Spec.ClusterIP) > 0 && len(t.Spec.ClusterIPs) == 0 {
+            t.Spec.ClusterIPs = []string{t.Spec.ClusterIP}
+        }
+        errors = validation.ValidateService(t)
     case *api.ServiceAccount:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
@@ -250,12 +259,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = apps_validation.ValidateDaemonSet(t)
+        errors = apps_validation.ValidateDaemonSet(t, podValidationOptions)
     case *apps.Deployment:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = apps_validation.ValidateDeployment(t)
+        errors = apps_validation.ValidateDeployment(t, podValidationOptions)
     case *networking.Ingress:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
@@ -265,18 +274,30 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
             Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
         }
         errors = networking_validation.ValidateIngressCreate(t, gv)
+    case *networking.IngressClass:
+        /*
+            if t.Namespace == "" {
+                t.Namespace = api.NamespaceDefault
+            }
+            gv := schema.GroupVersion{
+                Group:   networking.GroupName,
+                Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
+            }
+        */
+        errors = networking_validation.ValidateIngressClass(t)
+
     case *policy.PodSecurityPolicy:
         errors = policy_validation.ValidatePodSecurityPolicy(t)
     case *apps.ReplicaSet:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = apps_validation.ValidateReplicaSet(t)
+        errors = apps_validation.ValidateReplicaSet(t, podValidationOptions)
     case *batch.CronJob:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
         }
-        errors = batch_validation.ValidateCronJob(t)
+        errors = batch_validation.ValidateCronJob(t, podValidationOptions)
     case *networking.NetworkPolicy:
         if t.Namespace == "" {
             t.Namespace = api.NamespaceDefault
@@ -287,6 +308,9 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
             t.Namespace = api.NamespaceDefault
         }
         errors = policy_validation.ValidatePodDisruptionBudget(t)
+    case *rbac.ClusterRole:
+        // clusterole does not accept namespace
+        errors = rbac_validation.ValidateClusterRole(t)
     case *rbac.ClusterRoleBinding:
         // clusterolebinding does not accept namespace
         errors = rbac_validation.ValidateClusterRoleBinding(t)
@@ -414,6 +438,7 @@ func TestExampleObjectSchemas(t *testing.T) {
             "storagelimits": {&api.LimitRange{}},
         },
         "admin/sched": {
+            "clusterrole":  {&rbac.ClusterRole{}},
             "my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
             "pod1":         {&api.Pod{}},
             "pod2":         {&api.Pod{}},
@@ -539,6 +564,7 @@ func TestExampleObjectSchemas(t *testing.T) {
             "dapi-envars-pod":                  {&api.Pod{}},
             "dapi-volume":                      {&api.Pod{}},
             "dapi-volume-resources":            {&api.Pod{}},
+            "dependent-envars":                 {&api.Pod{}},
             "envars":                           {&api.Pod{}},
             "pod-multiple-secret-env-variable": {&api.Pod{}},
             "pod-secret-envFrom":               {&api.Pod{}},
@@ -596,29 +622,41 @@ func TestExampleObjectSchemas(t *testing.T) {
             "load-balancer-example": {&apps.Deployment{}},
         },
         "service/access": {
-            "frontend":          {&api.Service{}, &apps.Deployment{}},
-            "hello-application": {&apps.Deployment{}},
-            "hello-service":     {&api.Service{}},
-            "hello":             {&apps.Deployment{}},
+            "backend-deployment":  {&apps.Deployment{}},
+            "backend-service":     {&api.Service{}},
+            "frontend-deployment": {&apps.Deployment{}},
+            "frontend-service":    {&api.Service{}},
+            "hello-application":   {&apps.Deployment{}},
         },
         "service/networking": {
             "curlpod":                {&apps.Deployment{}},
             "custom-dns":             {&api.Pod{}},
             "dual-stack-default-svc": {&api.Service{}},
-            "dual-stack-ipv4-svc":                 {&api.Service{}},
-            "dual-stack-ipv6-lb-svc":              {&api.Service{}},
-            "dual-stack-ipv6-svc":                 {&api.Service{}},
-            "hostaliases-pod":                     {&api.Pod{}},
-            "ingress":                             {&networking.Ingress{}},
-            "network-policy-allow-all-egress":     {&networking.NetworkPolicy{}},
-            "network-policy-allow-all-ingress":    {&networking.NetworkPolicy{}},
-            "network-policy-default-deny-egress":  {&networking.NetworkPolicy{}},
-            "network-policy-default-deny-ingress": {&networking.NetworkPolicy{}},
-            "network-policy-default-deny-all":     {&networking.NetworkPolicy{}},
-            "nginx-policy":                        {&networking.NetworkPolicy{}},
-            "nginx-secure-app":                    {&api.Service{}, &apps.Deployment{}},
-            "nginx-svc":                           {&api.Service{}},
-            "run-my-nginx":                        {&apps.Deployment{}},
+            "dual-stack-ipfamilies-ipv6":              {&api.Service{}},
+            "dual-stack-ipv6-svc":                     {&api.Service{}},
+            "dual-stack-prefer-ipv6-lb-svc":           {&api.Service{}},
+            "dual-stack-preferred-ipfamilies-svc":     {&api.Service{}},
+            "dual-stack-preferred-svc":                {&api.Service{}},
+            "external-lb":                             {&networking.IngressClass{}},
+            "example-ingress":                         {&networking.Ingress{}},
+            "hostaliases-pod":                         {&api.Pod{}},
+            "ingress-resource-backend":                {&networking.Ingress{}},
+            "ingress-wildcard-host":                   {&networking.Ingress{}},
+            "minimal-ingress":                         {&networking.Ingress{}},
+            "name-virtual-host-ingress":               {&networking.Ingress{}},
+            "name-virtual-host-ingress-no-third-host": {&networking.Ingress{}},
+            "network-policy-allow-all-egress":         {&networking.NetworkPolicy{}},
+            "network-policy-allow-all-ingress":        {&networking.NetworkPolicy{}},
+            "network-policy-default-deny-egress":      {&networking.NetworkPolicy{}},
+            "network-policy-default-deny-ingress":     {&networking.NetworkPolicy{}},
+            "network-policy-default-deny-all":         {&networking.NetworkPolicy{}},
+            "nginx-policy":                            {&networking.NetworkPolicy{}},
+            "nginx-secure-app":                        {&api.Service{}, &apps.Deployment{}},
+            "nginx-svc":                               {&api.Service{}},
+            "run-my-nginx":                            {&apps.Deployment{}},
+            "simple-fanout-example":                   {&networking.Ingress{}},
+            "test-ingress":                            {&networking.Ingress{}},
+            "tls-example-ingress":                     {&networking.Ingress{}},
         },
         "windows": {
             "configmap-pod": {&api.ConfigMap{}, &api.Pod{}},

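These validation changes are exercised by the example schema test itself; running just that test is presumably along these lines, though the exact package path depends on the repository layout:

```shell
# From a checkout of kubernetes/website
go test ./content/en/examples/ -run TestExampleObjectSchemas
```
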
@@ -24,7 +24,7 @@ Ce Code de conduite s’applique à la fois dans le cadre du projet et dans le c

 Des cas de conduite abusive, de harcèlement ou autre pratique inacceptable ayant cours sur Kubernetes peuvent être signalés en contactant le [comité pour le code de conduite de Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) via l'adresse <conduct@kubernetes.io>. Pour d'autres projets, bien vouloir contacter un responsable de projet CNCF ou notre médiateur, Mishi Choudhary à l'adresse <mishi@linux.com>.

-Ce Code de conduite est inspiré du « Contributor Covenant » (http://contributor-covenant.org) version 1.2.0, disponible à l’adresse http://contributor-covenant.org/version/1/2/0/.
+Ce Code de conduite est inspiré du « Contributor Covenant » (https://contributor-covenant.org) version 1.2.0, disponible à l’adresse https://contributor-covenant.org/version/1/2/0/.

 ### Code de conduite pour les événements de la CNCF

@@ -328,30 +328,10 @@ metadata:
 type: kubernetes.io/tls
 ```

-Référencer ce secret dans un Ingress indiquera au contrôleur d'ingress de sécuriser le canal du client au load-balancer à l'aide de TLS. Vous devez vous assurer que le secret TLS que vous avez créé provenait d'un certificat contenant un CN pour `sslexample.foo.com`.
+Référencer ce secret dans un Ingress indiquera au contrôleur d'Ingress de sécuriser le canal du client au load-balancer à l'aide de TLS. Vous devez vous assurer que le secret TLS que vous avez créé provenait d'un certificat contenant un Common Name (CN), aussi appelé nom de domaine pleinement qualifié (FQDN), pour `https-example.foo.com`.

+{{< codenew file="service/networking/tls-example-ingress.yaml" >}}

-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: tls-example-ingress
-spec:
-  tls:
-  - hosts:
-      - sslexample.foo.com
-    secretName: testsecret-tls
-  rules:
-  - host: sslexample.foo.com
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: service1
-            port:
-              number: 80
-```

 {{< note >}}
 Les fonctionnalités TLS prisent en charge par les différents contrôleurs peuvent être différentes. Veuillez vous référer à la documentation sur

@@ -48,10 +48,10 @@ Suivez les étapes ci-dessous pour commencer et explorer Minikube.
    Starting local Kubernetes cluster...
    ```

-   Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster].(#starting-a-cluster).
+   Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster](#starting-a-cluster).

 2. Vous pouvez maintenant interagir avec votre cluster à l'aide de kubectl.
-   Pour plus d'informations, voir [Interagir avec votre cluster.](#interacting-with-your-cluster).
+   Pour plus d'informations, voir [Interagir avec votre cluster](#interacting-with-your-cluster).

    Créons un déploiement Kubernetes en utilisant une image existante nommée `echoserver`, qui est un serveur HTTP, et exposez-la sur le port 8080 à l’aide de `--port`.

@@ -529,5 +529,3 @@ Les contributions, questions et commentaires sont les bienvenus et sont encourag
 Les développeurs de minikube sont dans le canal #minikube du [Slack](https://kubernetes.slack.com) de Kubernetes (recevoir une invitation [ici](http://slack.kubernetes.io/)).
 Nous avons également la liste de diffusion [kubernetes-dev Google Groupes](https://groups.google.com/forum/#!forum/kubernetes-dev).
 Si vous publiez sur la liste, veuillez préfixer votre sujet avec "minikube:".
-
-

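The `echoserver` step described above continues in the original page; the commands are presumably of this form (the deployment name `hello-minikube` is the one the upstream guide uses):

```shell
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080
```
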
@@ -40,7 +40,7 @@ Pour plus d'informations sur la création d'un cluster avec kubeadm, une fois qu
 ## Vérifiez que les adresses MAC et product_uuid sont uniques pour chaque nœud {#verify-mac-address}

 * Vous pouvez obtenir l'adresse MAC des interfaces réseau en utilisant la commande `ip link` ou` ifconfig -a`
-* Le product_uuid peut être vérifié en utilisant la commande `sudo cat/sys/class/dmi/id/product_uuid`
+* Le product_uuid peut être vérifié en utilisant la commande `sudo cat /sys/class/dmi/id/product_uuid`

 Il est très probable que les périphériques matériels aient des adresses uniques, bien que
 certaines machines virtuelles puissent avoir des valeurs identiques. Kubernetes utilise ces valeurs pour identifier de manière unique les nœuds du cluster.

@@ -114,7 +114,7 @@ cpu-demo 974m <something>
 Souvenez-vous qu'en réglant `-cpu "2"`, vous avez configuré le conteneur pour faire en sorte qu'il utilise 2 CPU, mais que le conteneur ne peut utiliser qu'environ 1 CPU. L'utilisation du CPU du conteneur est entravée, car le conteneur tente d'utiliser plus de ressources CPU que sa limite.

 {{< note >}}
-Une autre explication possible de la la restriction du CPU est que le Nœud pourrait ne pas avoir
+Une autre explication possible de la restriction du CPU est que le Nœud pourrait ne pas avoir
 suffisamment de ressources CPU disponibles. Rappelons que les conditions préalables à cet exercice exigent que chacun de vos Nœuds doit avoir au moins 1 CPU.
 Si votre conteneur fonctionne sur un nœud qui n'a qu'un seul CPU, le conteneur ne peut pas utiliser plus que 1 CPU, quelle que soit la limite de CPU spécifiée pour le conteneur.
 {{< /note >}}

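The `cpu-demo 974m` reading in the hunk header comes from the metrics pipeline; it can be reproduced directly — a sketch, assuming the task's `cpu-example` namespace and a running metrics-server:

```shell
kubectl top pod cpu-demo --namespace=cpu-example
```
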
@@ -84,7 +84,7 @@ Vous pouvez télécharger les packages `.deb` depuis [Docker](https://www.docker

 {{< caution >}}
 Le pilote VM `none` peut entraîner des problèmes de sécurité et de perte de données.
-Avant d'utiliser `--driver=none`, consultez [cette documentation] (https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
+Avant d'utiliser `--driver=none`, consultez [cette documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
 {{</ caution >}}

 Minikube prend également en charge un `vm-driver=podman` similaire au pilote Docker. Podman est exécuté en tant que superutilisateur (utilisateur root), c'est le meilleur moyen de garantir que vos conteneurs ont un accès complet à toutes les fonctionnalités disponibles sur votre système.

@@ -0,0 +1,20 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: tls-example-ingress
+spec:
+  tls:
+  - hosts:
+      - https-example.foo.com
+    secretName: testsecret-tls
+  rules:
+  - host: https-example.foo.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: service1
+            port:
+              number: 80

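The new manifest above references a `testsecret-tls` Secret; creating it is a one-liner, with placeholder paths for the certificate and key:

```shell
kubectl create secret tls testsecret-tls \
  --cert=path/to/tls.cert \
  --key=path/to/tls.key
```
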
@@ -75,5 +75,5 @@ terhadap dokumentasi Kubernetes, tetapi daftar ini dapat membantumu memulainya.

 - Untuk berkontribusi ke komunitas Kubernetes melalui forum-forum daring seperti Twitter atau Stack Overflow, atau mengetahui tentang pertemuan komunitas (_meetup_) lokal dan acara-acara Kubernetes, kunjungi [situs komunitas Kubernetes](/community/).
 - Untuk mulai berkontribusi ke pengembangan fitur, baca [_cheatseet_ kontributor](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet).
+- Untuk kontribusi khusus ke halaman Bahansa Indonesia, baca [Dokumentasi Khusus Untuk Translasi Bahasa Indonesia](/docs/contribute/localization_id.md)

@@ -0,0 +1,178 @@
+---
+title: Dokumentasi Khusus Untuk Translasi Bahasa Indonesia
+content_type: concept
+---
+
+<!-- overview -->
+
+Panduan khusus untuk bergabung ke komunitas SIG DOC Indonesia dan melakukan
+kontribusi untuk mentranslasikan dokumentasi Kubernetes ke dalam Bahasa
+Indonesia.
+
+<!-- body -->
+
+## Manajemen _Milestone_ Tim {#manajemen-milestone-tim}
+
+Secara umum siklus translasi dokumentasi ke Bahasa Indonesia akan dilakukan
+3 kali dalam setahun (sekitar setiap 4 bulan). Untuk menentukan dan mengevaluasi
+pencapaian atau _milestone_ dalam kurun waktu tersebut [jadwal rapat daring
+reguler tim Bahasa Indonesia](https://zoom.us/j/6072809193) dilakukan secara
+konsisten setiap dua minggu sekali. Dalam [agenda rapat ini](https://docs.google.com/document/d/1Qrj-WUAMA11V6KmcfxJsXcPeWwMbFsyBGV4RGbrSRXY)
+juga dilakukan pemilihan PR _Wrangler_ untuk dua minggu ke depan. Tugas PR
+_Wrangler_ tim Bahasa Indonesia serupa dengan PR _Wrangler_ dari proyek
+_upstream_.
+
+Target pencapaian atau _milestone_ tim akan dirilis sebagai
+[_issue tracking_ seperti ini](https://github.com/kubernetes/website/issues/22296)
+pada Kubernetes GitHub Website setiap 4 bulan. Dan bersama dengan informasi
+PR _Wrangler_ yang dipilih setiap dua minggu, keduanya akan diumumkan di Slack
+_channel_ [#kubernetes-docs-id](https://kubernetes.slack.com/archives/CJ1LUCUHM)
+dari Komunitas Kubernetes.
+
+## Cara Memulai Translasi
+
+Untuk menerjemahkan satu halaman Bahasa Inggris ke Bahasa Indonesia, lakukan
+langkah-langkah berikut ini:
+
+* Check halaman _issue_ di GitHub dan pastikan tidak ada orang lain yang sudah
+mengklaim halaman kamu dalam daftar periksa atau komentar-komentar sebelumnya.
+* Klaim halaman kamu pada _issue_ di GitHub dengan memberikan komentar di bawah
+dengan nama halaman yang ingin kamu terjemahkan dan ambillah hanya satu halaman
+dalam satu waktu.
+* _Fork_ [repo ini](https://github.com/kubernetes/website), buat terjemahan
+kamu, dan kirimkan PR (_pull request_) dengan label `language/id`
+* Setelah dikirim, pengulas akan memberikan komentar dalam beberapa hari, dan
+tolong untuk menjawab semua komentar. Direkomendasikan juga untuk melakukan
+[_squash_](https://github.com/wprig/wprig/wiki/How-to-squash-commits) _commit_
+kamu dengan pesan _commit_ yang baik.
+
+
+## Informasi Acuan Untuk Translasi
+
+Tidak ada panduan gaya khusus untuk menulis translasi ke bahasa Indonesia.
+Namun, secara umum kita dapat mengikuti panduan gaya bahasa Inggris dengan
+beberapa tambahan untuk kata-kata impor yang dicetak miring.
+
+Harap berkomitmen dengan terjemahan kamu dan pada saat kamu mendapatkan komentar
+dari pengulas, silahkan atasi sebaik-baiknya. Kami berharap halaman yang
+diklaim akan diterjemahkan dalam waktu kurang lebih dua minggu. Jika ternyata
+kamu tidak dapat berkomitmen lagi, beri tahu para pengulas agar mereka dapat
+meberikan halaman tersebut ke orang lain.
+
+Beberapa acuan tambahan dalam melakukan translasi silahkan lihat informasi
+berikut ini:
+
+### Daftara Glosarium Translasi dari tim SIG DOC Indonesia
+Untuk kata-kata selengkapnya silahkan baca glosariumnya
+[disini](#glosarium-indonesia)
+
+### KBBI
+Konsultasikan dengan KBBI (Kamus Besar Bahasa Indonesia)
+[disini](https://kbbi.web.id/) dari
+[Kemendikbud](https://kbbi.kemdikbud.go.id/).
+
+### RSNI Glosarium dari Ivan Lanin
+[RSNI Glosarium](https://github.com/jk8s/sig-docs-id-localization-how-tos/blob/master/resources/RSNI-glossarium.pdf)
+dapat digunakan untuk memahami bagaimana menerjemahkan berbagai istilah teknis
+dan khusus Kubernetes.
+
+
+## Panduan Penulisan _Source Code_
+
+### Mengikuti kode asli dari dokumentasi bahasa Inggris
+
+Untuk kenyamanan pemeliharaan, ikuti lebar teks asli dalam kode bahasa Inggris.
+Dengan kata lain, jika teks asli ditulis dalam baris yang panjang tanpa putus
+atu baris, maka teks tersebut ditulis panjang dalam satu baris meskipun dalam
+bahasa Indonesia. Jagalah agar tetap serupa.
+
+### Hapus nama reviewer di kode asli bahasa Inggris
+
+Terkadang _reviewer_ ditentukan di bagian atas kode di teks asli Bahasa Inggris.
+Secara umum, _reviewer-reviewer_ halaman aslinya akan kesulitan untuk meninjau
+halaman dalam bahasa Indonesia, jadi hapus kode yang terkait dengan informasi
+_reviewer_ dari metadata kode tersebut.
+
+
+## Panduan Penulisan Kata-kata Translasi
+
+### Panduan umum
+
+* Gunakan "kamu" daripada "Anda" sebagai subyek agar lebih bersahabat dengan
+para pembaca dokumentasi.
+* Tulislah miring untuk kata-kata bahasa Inggris yang diimpor jika kamu tidak
+dapat menemukan kata-kata tersebut dalam bahasa Indonesia.
+*Benar*: _controller_. *Salah*: controller, `controller`
+
+### Panduan untuk kata-kata API Objek Kubernetes
+
+Gunakan gaya "CamelCase" untuk menulis objek API Kubernetes, lihat daftar
+lengkapnya [di sini](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/).
+Sebagai contoh:
+
+* *Benar*: PersistentVolume. *Salah*: volume persisten, `PersistentVolume`,
+persistentVolume
+* *Benar*: Pod. *Salah*: pod, `pod`, "pod"
+
+*Tips* : Biasanya API objek sudah ditulis dalam huruf kapital pada halaman asli
+bahasa Inggris.
+
+### Panduan untuk kata-kata yang sama dengan API Objek Kubernetes
+
+Ada beberapa kata-kata yang serupa dengan nama API objek dari Kubernetes dan
+dapat mengacu ke arti yang lebih umum (tidak selalu dalam konteks Kubernetes).
+Sebagai contoh: _service_, _container_, _node_ , dan lain sebagainya. Kata-kata
+sebaiknya ditranslasikan ke Bahasa Indonesia sebagai contoh _service_ menjadi
+layanan, _container_ menjadi kontainer.
+
+*Tips* : Biasanya kata-kata yang mengacu ke arti yang lebih umum sudah *tidak*
+ditulis dalam huruf kapital pada halaman asli bahasa Inggris.
+
+### Panduan untuk "Feature Gate" Kubernetes
+
+Istilah [_functional gate_](https://kubernetes.io/ko/docs/reference/command-line-tools-reference/feature-gates/)
+Kubernetes tidak perlu diterjemahkan ke dalam bahasa Indonesia dan tetap
+dipertahankan dalam bentuk aslinya.
+
+Contoh dari _functional gate_ adalah sebagai berikut:
+
+- Akselerator
+- AdvancedAuditing
+- AffinityInAnnotations
+- AllowExtTrafficLocalEndpoints
+- ...
+
+### Glosarium Indonesia {#glosarium-indonesia}
+
+Inggris | Tipe Kata | Indonesia | Sumber | Contoh Kalimat
+---|---|---|---|---
+cluster | | klaster | |
+container | | kontainer | |
+node | kata benda | node | |
+file | | berkas | |
+service | kata benda | layanan | |
+set | | sekumpulan | |
+resource | | sumber daya | |
+default | | bawaan atau standar (tergantung context) | | Secara bawaan, ...; Pada konfigurasi dan instalasi standar, ...
+deploy | | menggelar | |
+image | | _image_ | |
+request | | permintaan | |
+object | kata benda | objek | https://kbbi.web.id/objek |
+command | | perintah | https://kbbi.web.id/perintah |
+view | | tampilan | |
+support | | tersedia atau dukungan (tergantung konteks) | "This feature is supported on version X; Fitur ini tersedia pada versi X; Supported by community; Didukung oleh komunitas"
+release | kata benda | rilis | https://kbbi.web.id/rilis |
+tool | | perangkat | |
+deployment | | penggelaran | |
+client | | klien | |
+reference | | rujukan | |
+update | | pembaruan | | The latest update... ; Pembaruan terkini...
+state | | _state_ | |
+task | | _task_ | |
+certificate | | sertifikat | |
+install | | instalasi | https://kbbi.web.id/instalasi |
+scale | | skala | |
+process | kata kerja | memproses | https://kbbi.web.id/proses |
+replica | kata benda | replika | https://kbbi.web.id/replika |
+flag | | tanda, parameter, argumen | |
+event | | _event_ | |

@@ -21,7 +21,7 @@ Project maintainers have the right and responsibility to remove, m

 Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee (CNCF)](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, contact the CNCF project manager or our mediator, Mishi Choudhary <mishi@linux.com>.

-The code of conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/
+The code of conduct is adapted from the Contributor Covenant (https://contributor-covenant.org), version 1.2.0, available at https://contributor-covenant.org/version/1/2/0/

 ### CNCF Code of Conduct for Events
@@ -25,7 +25,7 @@ CNCF Community Code of Conduct v1.0

 Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, please contact the CNCF project administrator or our mediator, <mishi@linux.com>.

-This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/.
+This Code of Conduct is adapted from the Contributor Covenant (https://contributor-covenant.org), version 1.2.0, available at https://contributor-covenant.org/version/1/2/0/.

 ### CNCF Code of Conduct for Events
@@ -52,4 +52,4 @@ To avoid time skew in Kubernetes, all Nodes

 * [Clean up Jobs automatically](/ja/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)

-* [Design documentation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
+* [Design documentation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
@@ -137,7 +137,8 @@ content_type: concept

 | `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 |
 | `TokenRequestProjection` | `true` | Beta | 1.12 | |
 | `TTLAfterFinished` | `false` | Alpha | 1.12 | |
-| `TopologyManager` | `false` | Alpha | 1.16 | |
+| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
+| `TopologyManager` | `true` | Beta | 1.18 | |
 | `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
 | `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
 | `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 |
@@ -0,0 +1,258 @@
---
title: Configure Minimum and Maximum Memory Constraints for a Namespace
content_type: task
weight: 30
---

<!-- overview -->

This page shows how to set minimum and maximum values for the memory used by
containers running in a namespace. You specify the minimum and maximum memory
values in a
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core).
If a Pod does not meet the constraints imposed by the LimitRange, it cannot be
created in that namespace.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

Each node in your cluster must have at least 1 GiB of memory.

<!-- steps -->

## Create a namespace

Create a namespace so that the resources you create in this exercise are
isolated from the rest of your cluster.

```shell
kubectl create namespace constraints-mem-example
```

## Create a LimitRange and a Pod

Here is the configuration file for a LimitRange:

{{< codenew file="admin/resource/memory-constraints.yaml" >}}
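The example file referenced above is not reproduced in this diff. Judging from
the `kubectl get limitrange` output shown below (min 500Mi, max 1Gi, name
`mem-min-max-demo-lr`), it presumably looks roughly like this sketch:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr   # matches the name queried below
spec:
  limits:
  - max:
      memory: 1Gi      # largest memory limit a container may declare
    min:
      memory: 500Mi    # smallest memory request a container may declare
    type: Container
```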
Create the LimitRange:

```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example
```

View detailed information about the LimitRange:

```shell
kubectl get limitrange mem-min-max-demo-lr --namespace=constraints-mem-example --output=yaml
```

The output shows the minimum and maximum memory constraints as expected. But
notice that even though you did not specify default values in the configuration
file for the LimitRange, they were created automatically.

```
  limits:
  - default:
      memory: 1Gi
    defaultRequest:
      memory: 1Gi
    max:
      memory: 1Gi
    min:
      memory: 500Mi
    type: Container
```

Now whenever a container is created in the constraints-mem-example namespace,
Kubernetes performs these steps:

* If the container does not specify its own memory request and limit, assign
  the default memory request and limit to the container.

* Verify that the container has a memory request of at least 500 MiB.

* Verify that the container has a memory limit of at most 1 GiB.

Here is the configuration file for a Pod that has one container. In the
`containers` section of the configuration file, a memory request of 600 MiB and
a memory limit of 800 MiB are specified. These satisfy the minimum and maximum
memory constraints imposed by the LimitRange.

{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
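That Pod manifest is likewise not shown in the diff. Given the 600 MiB / 800 MiB
values described above, a rough sketch of it (container name and image are
assumptions) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo         # matches the kubectl commands below
spec:
  containers:
  - name: constraints-mem-demo-ctr   # hypothetical container name
    image: nginx                     # hypothetical image
    resources:
      limits:
        memory: "800Mi"   # within the 1Gi maximum imposed by the LimitRange
      requests:
        memory: "600Mi"   # above the 500Mi minimum imposed by the LimitRange
```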
Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example
```

Verify that the Pod's container is running:

```shell
kubectl get pod constraints-mem-demo --namespace=constraints-mem-example
```

View detailed information about the Pod:

```shell
kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example
```

The output shows that the container has a memory request of 600 MiB and a
memory limit of 800 MiB. These satisfy the constraints imposed by the
LimitRange.

```yaml
resources:
  limits:
    memory: 800Mi
  requests:
    memory: 600Mi
```

Delete the Pod:

```shell
kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
```

## Attempt to create a Pod that exceeds the maximum memory constraint

Here is the configuration file for a Pod that has one container. The container
specifies a memory request of 800 MiB and a memory limit of 1.5 GiB.

{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
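Again, a sketch of what that manifest presumably contains, based on the values
in the text (name and image assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo-2         # matches the error message below
spec:
  containers:
  - name: constraints-mem-demo-2-ctr   # hypothetical container name
    image: nginx                       # hypothetical image
    resources:
      limits:
        memory: "1.5Gi"   # exceeds the 1Gi maximum; admission should fail
      requests:
        memory: "800Mi"
```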
Attempt to create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example
```

The output shows that the Pod does not get created, because the container
specifies a memory limit that is too large:

```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-2.yaml":
pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container is 1Gi, but limit is 1536Mi.
```

## Attempt to create a Pod that does not meet the minimum memory request

Here is the configuration file for a Pod that has one container. The container
specifies a memory request of 100 MiB and a memory limit of 800 MiB.

{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
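One more sketch of the likely manifest, with the 100 MiB / 800 MiB values from
the text (name and image assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo-3         # matches the error message below
spec:
  containers:
  - name: constraints-mem-demo-3-ctr   # hypothetical container name
    image: nginx                       # hypothetical image
    resources:
      limits:
        memory: "800Mi"
      requests:
        memory: "100Mi"   # below the 500Mi minimum; admission should fail
```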
Attempt to create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example
```

The output shows that the Pod does not get created, because the container
specifies a memory request that is too small:

```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-3.yaml":
pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container is 500Mi, but request is 100Mi.
```

## Create a Pod that does not specify any memory request or limit

Here is the configuration file for a Pod that has one container. The container
specifies neither a memory request nor a memory limit.

{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
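A final sketch of the likely manifest, this time with no resources section
(name and image assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo-4         # matches the kubectl commands below
spec:
  containers:
  - name: constraints-mem-demo-4-ctr   # hypothetical container name
    image: nginx                       # hypothetical image
    # no resources section: the LimitRange defaults (1Gi/1Gi) apply instead
```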
Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example
```

View detailed information about the Pod:

```shell
kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml
```

The output shows that the Pod's container has a memory request of 1 GiB and a
memory limit of 1 GiB. How did the container get those values?

```
resources:
  limits:
    memory: 1Gi
  requests:
    memory: 1Gi
```

Because the container did not specify its own memory request and limit, it was
given the
[default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
from the LimitRange.

At this point, your container might be running or it might not be running.
Recall that a prerequisite for this task is that each of your nodes has at
least 1 GiB of memory. If each of your nodes has only 1 GiB of memory, then
there is not enough allocatable memory on any node to accommodate a memory
request of 1 GiB. If you happen to be using nodes with 2 GiB of memory, then
you probably have enough space to accommodate the 1 GiB request.

Delete the Pod:

```shell
kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example
```

## Enforcement of minimum and maximum memory constraints

The maximum and minimum memory constraints imposed on a namespace by a
LimitRange are enforced only when a Pod is created or updated. If you change
the LimitRange, it does not affect Pods that were created previously.

## Motivation for minimum and maximum memory constraints

As a cluster administrator, you might want to impose restrictions on the
amount of memory that Pods can use.

For example:

* Each node in a cluster has 2 GB of memory. You do not want to accept any Pod
  that requests more than 2 GB of memory, because no node in the cluster can
  support the request.

* A cluster is shared by your production and development departments. You want
  production workloads to consume up to 8 GB of memory, but you want
  development workloads to be limited to 512 MB. You create separate
  namespaces for production and development, and you apply memory constraints
  to each namespace.

## Clean up

Delete your namespace:

```shell
kubectl delete namespace constraints-mem-example
```

## {{% heading "whatsnext" %}}

### For cluster administrators

* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)

* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)

* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)

* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)

* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)

* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)

### For app developers

* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)

* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)

* [Configure Quality of Service for Pods](/docs/tasks/configure-pod-container/quality-service-pod/)
@@ -70,7 +70,7 @@ to get the things that containerd really needs, the dockershim

 You just need to change your container runtime from Docker to another
 supported container runtime.

 One thing to note: if you are relying on the underlying Docker socket
-(/var/run/docker.sock) as part of a workflow within your cluster today, moving to a different
+(`/var/run/docker.sock`) as part of a workflow within your cluster today, moving to a different
 runtime will break your ability to use it. This pattern is often
 called Docker in Docker. For this specific use case there are options, like
 [kaniko](https://github.com/GoogleContainerTools/kaniko),

@@ -82,11 +82,11 @@ to get the things that containerd really needs, the dockershim

 This change addresses a different environment than the one most folks use to
 interact with Docker. The Docker installation you use in development is unrelated to the
-Docker runtime inside your Kubernetes cluster. It's confusing, we know. As a developer,
-Docker is still useful to you in all the same ways it was before this change was
+Docker runtime inside your Kubernetes cluster. It's confusing, we know.
+As a developer, Docker is still useful to you in all the same ways it was before this change was
 announced. The image that Docker produces isn't really
 a Docker-specific image, it's an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
 Any OCI-compliant image, regardless of the tool you use to build it,
 will look the same to Kubernetes. Both [containerd](https://containerd.io/) and
 [CRI-O](https://cri-o.io/) know how to pull those images and run them. This is
 why we have a standard for what containers should look like.

@@ -98,7 +98,7 @@ to get the things that containerd really needs, the dockershim

 It's OK to be confused; there's a lot going on here. Kubernetes has a lot of
 moving parts, and no one is an expert in 100% of it. We encourage any
 questions, regardless of experience level or complexity! Our goal is to
-make sure everyone is educated as much as possible on the upcoming changes. `<3` We hope
-that this has answered most of your questions and soothed some anxieties!
+make sure everyone is educated as much as possible on the upcoming changes. We hope
+that this has answered most of your questions and soothed some anxieties! ❤️

 Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
@@ -7,10 +7,10 @@ weight: 10

 <!-- overview -->

 Kubernetes runs your workload by placing containers into Pods that run on _nodes_.
-A node may be a virtual or physical machine, depending on the cluster. Each node contains
-the services necessary to run
-{{< glossary_tooltip text="Pods" term_id="pod" >}}, known as the
-{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
+A node may be a virtual or physical machine, depending on the cluster. Each node
+is managed by the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+and contains the services necessary to run
+{{< glossary_tooltip text="Pods" term_id="pod" >}}.

 Typically you have several nodes in a cluster; in a learning or resource-limited
 environment, you might have only one.
@@ -239,7 +239,7 @@ that updates the NodeReady condition of the NodeStatus to ConditionUnknown

 Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.

 The two forms of heartbeats are `NodeStatus` and the
-[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
+[Lease object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io).
 Each node has an associated Lease object in the `kube-node-lease`
 {{< glossary_tooltip term_id="namespace" text="namespace">}}.
 The Lease is a lightweight resource which, as the cluster scales,
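Not part of the original page: as a rough illustration of this second heartbeat
form, a node's Lease object looks something like the sketch below (the node
name and timestamp are made up):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                # the Lease name matches the node name
  namespace: kube-node-lease  # the namespace mentioned above
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2021-03-01T12:00:00.000000Z"  # refreshed on every heartbeat
```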
@@ -355,4 +355,3 @@ the kubelet makes sure that Pods follow the normal [pod termination pro

 * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
   section of the architecture design document.
 * Read about [taints and tolerations](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/).