Merge remote-tracking branch 'upstream/master' into dev-1.21
@@ -98,6 +98,7 @@ aliases:

    - irvifa
  sig-docs-id-reviews: # PR reviews for Indonesian content
    - girikuncoro
    - habibrosyad
    - irvifa
    - wahyuoi
    - phanama
@@ -14,9 +14,9 @@ Once your pull request has been created, a Kubernetes reviewer will take

For more information about contributing to the Kubernetes documentation, see:

* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Staging your documentation changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Staging your documentation changes](https://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)

## Localizing Kubernetes Documentation in `README.md`

@@ -65,7 +65,7 @@ This starts the local Hugo server on port 1313. Open your browser

## Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
Learn how to engage with the Kubernetes community on the [community page](https://kubernetes.io/community/).

You can reach the maintainers of this project at:
@@ -17,9 +17,9 @@ Reviewers will do their best to provide all the necessary information

For more information about contributing to the Kubernetes documentation, see:

* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Viewing your changes locally](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Viewing your changes locally](https://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)

## Running the kubernetes.io website locally with Docker
README-ru.md
@@ -2,38 +2,117 @@

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Welcome! This repository contains all the files needed to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for your efforts!
This repository contains all the files needed to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for your willingness to contribute!

## Running the website using Hugo
# Using this repository

Refer to the [official Hugo documentation](https://gohugo.io/getting-started/installing/) to install Hugo. Make sure you install the correct Hugo version, which is set in the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.

After installing Hugo, run the following in your terminal to start the website:
## Prerequisites

```bash
To use this repository, you need the following installed locally:

- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/)

Before you start, install the dependencies. Clone the repository and navigate to the directory:

```
git clone https://github.com/kubernetes/website.git
cd website
hugo server --buildFuture
```

This command starts the Hugo server on port 1313. Open your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo automatically applies them and refreshes the page in your browser.
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:

## Community, discussion, contribution, and support
```
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```

Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
## Running the website using a container

You can reach the maintainers of this project at:
To build the site in a container, run the following commands to build the container image and run it:

- [Slack channel](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
```
make container-image
make container-serve
```

## Contributing to the docs
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.

## Running the website locally using Hugo

Click the **Fork** button in the upper-right corner of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
Make sure you have installed the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.

Once your pull request is created, a Kubernetes reviewer will give you feedback on it. As the owner of the pull request, **it is your responsibility to update your pull request once it has been reviewed by the Kubernetes reviewer.**
To build and test the site locally, run:

It is quite possible that more than one Kubernetes reviewer will leave comments, or even that a new comment will come from a different reviewer than the one originally assigned. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback promptly, but response time can vary based on circumstances.
```bash
# install dependencies
npm ci
make serve
```

These commands start the local Hugo server on port 1313. Open your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.

## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version

For technical reasons, Hugo ships with two sets of binaries. The current website runs only on the **Hugo Extended** version. On the [releases page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended` in the output.

### Troubleshooting "too many open files" on macOS

If you run `make serve` on macOS and receive the following error:

```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```

Try checking the current limit for open files:

`launchctl limit maxfiles`

Then run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):

```shell
#!/bin/sh

# The original gist links are commented out in favor of my adapted ones.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist

curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist

sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons

sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist

sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```

This works for macOS Catalina as well as macOS Mojave.

# Get involved with SIG Docs

Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).

You can reach the maintainers of this project at:

- [Slack channel](https://kubernetes.slack.com/messages/sig-docs) ([get an invite for this Slack](https://slack.k8s.io/))
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)

# Contributing to the docs

Click the **Fork** button in the upper-right corner of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request (PR) to let us know about it.

Once your pull request is created, a reviewer from the Kubernetes project will give you feedback on it. As the owner of the pull request, **it is your responsibility to update your PR once it has been reviewed by the Kubernetes reviewer.**

It is quite possible that more than one Kubernetes reviewer will leave comments. You may even receive feedback from a different reviewer than the one originally assigned. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback promptly, but response time can vary based on circumstances.

For more information about contributing to the Kubernetes documentation, see:
@@ -42,21 +121,22 @@ hugo server --buildFuture

* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)

## The `README.md` file in other languages
# The `README.md` file in other languages

| Other languages                | Other languages                |
|--------------------------------|--------------------------------|
| [English](README.md)           | [French](README-fr.md)         |
| [Korean](README-ko.md)         | [German](README-de.md)         |
| [Portuguese](README-pt.md)     | [Hindi](README-hi.md)          |
| [Spanish](README-es.md)        | [Indonesian](README-id.md)     |
| [Chinese](README-zh.md)        | [Japanese](README-ja.md)       |
| [Vietnamese](README-vi.md)     | [Italian](README-it.md)        |
| [Polish](README-pl.md)         | [Ukrainian](README-uk.md)      |
| [English](README.md)           | [German](README-de.md)         |
| [Vietnamese](README-vi.md)     | [Polish](README-pl.md)         |
| [Indonesian](README-id.md)     | [Portuguese](README-pt.md)     |
| [Spanish](README-es.md)        | [Ukrainian](README-uk.md)      |
| [Italian](README-it.md)        | [French](README-fr.md)         |
| [Chinese](README-zh.md)        | [Hindi](README-hi.md)          |
| [Korean](README-ko.md)         | [Japanese](README-ja.md)       |

### Code of conduct
# Code of conduct

Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/ru.md).

## Thank you!
# Thank you!

Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!
@@ -228,8 +228,8 @@ For more information about contributing to the Kubernetes documentation, see:

For more information about contributing to the Kubernetes documentation, see:

* [Contributing to the Kubernetes documentation](https://kubernetes.io/docs/contribute/)
* [Page content types](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)

# Chinese localization
@@ -8,7 +8,7 @@ cid: community

<main>
<div class="content">
<h3>Ensuring that Kubernetes works well everywhere and for everyone.</h3>
<p>Connect with the Kubernetes community on our <a href="http://slack.k8s.io/">Slack channel</a>, the <a href="https://discuss.kubernetes.io/">discussion forum</a>, or join the <a href="https://groups.google.com/forum/#!forum/kubernetes-dev">kubernetes-dev Google group</a>. A weekly community meeting takes place via video conference to discuss the state of affairs; follow
<p>Connect with the Kubernetes community on our <a href="http://slack.k8s.io/">Slack channel</a>, the <a href="https://discuss.kubernetes.io/">discussion forum</a>, or join the <a href="https://groups.google.com/g/kubernetes-dev">kubernetes-dev Google group</a>. A weekly community meeting takes place via video conference to discuss the state of affairs; follow
<a href="https://github.com/kubernetes/community/blob/master/events/community-meeting.md">these instructions</a> for information on how to participate.</p>
<p>You can also connect with Kubernetes all over the world through our
<a href="https://www.meetup.com/topics/kubernetes/">Kubernetes Meetup Community</a> and the
@@ -23,7 +23,7 @@ This Code of Conduct applies both within project spaces and in public

Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, please contact a CNCF project maintainer or our mediator, Mishi Choudhary <mishi@linux.com>.

This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/
This Code of Conduct is adapted from the Contributor Covenant (https://contributor-covenant.org), version 1.2.0, available at https://contributor-covenant.org/version/1/2/0/

### CNCF Events Code of Conduct
@@ -26,7 +26,7 @@ On the other hand, CNI is more philosophically aligned with Kubernetes. It's far

Additionally, it's trivial to wrap a CNI plugin and produce a more customized CNI plugin — it can be done with a simple shell script. CNM is much more complex in this regard. This makes CNI an attractive option for rapid development and iteration. Early prototypes have proven that it's possible to eject almost 100% of the currently hard-coded network logic in kubelet into a plugin.

We investigated [writing a "bridge" CNM driver](https://groups.google.com/forum/#!topic/kubernetes-sig-network/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes.
We investigated [writing a "bridge" CNM driver](https://groups.google.com/g/kubernetes-sig-network/c/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes.

Unfortunately, Docker drivers are hard to map to other control planes like Kubernetes. Specifically, drivers are not told the name of the network to which a container is being attached — just an ID that Docker allocates internally. This makes it hard for a driver to map back to any concept of network that exists in another system.

@@ -34,6 +34,6 @@ This and other issues have been brought up to Docker developers by network vendors

For all of these reasons we have chosen to invest in CNI as the Kubernetes plugin model. There will be some unfortunate side-effects of this. Most of them are relatively minor (for example, `docker inspect` will not show an IP address), but some are significant. In particular, containers started by `docker run` might not be able to communicate with containers started by Kubernetes, and network integrators will have to provide CNI drivers if they want to fully integrate with Kubernetes. On the other hand, Kubernetes will get simpler and more flexible, and a lot of the ugliness of early bootstrapping (such as configuring Docker to use our bridge) will go away.

As we proceed down this path, we’ll certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).
As we proceed down this path, we’ll certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/g/kubernetes-sig-network).

Tim Hockin, Software Engineer, Google
@@ -56,13 +56,13 @@ Cri-containerd uses containerd to manage the full container lifecycle and all container images

Let’s use an example to demonstrate how cri-containerd works for the case when Kubelet creates a single-container pod:

1. 1.Kubelet calls cri-containerd, via the CRI runtime service API, to create a pod;
2. 2.cri-containerd uses containerd to create and start a special [pause container](https://www.ianlewis.org/en/almighty-pause-container) (the _sandbox container_) and put that container inside the pod’s cgroups and namespace (steps omitted for brevity);
3. 3.cri-containerd configures the pod’s network namespace using CNI;
4. 4.Kubelet subsequently calls cri-containerd, via the CRI image service API, to pull the application container image;
5. 5.cri-containerd further uses containerd to pull the image if the image is not present on the node;
6. 6.Kubelet then calls cri-containerd, via the CRI runtime service API, to create and start the application container inside the pod using the pulled container image;
7. 7.cri-containerd finally calls containerd to create the application container, put it inside the pod’s cgroups and namespace, then to start the pod’s new application container.
1. Kubelet calls cri-containerd, via the CRI runtime service API, to create a pod;
2. cri-containerd uses containerd to create and start a special [pause container](https://www.ianlewis.org/en/almighty-pause-container) (the _sandbox container_) and put that container inside the pod’s cgroups and namespace (steps omitted for brevity);
3. cri-containerd configures the pod’s network namespace using CNI;
4. Kubelet subsequently calls cri-containerd, via the CRI image service API, to pull the application container image;
5. cri-containerd further uses containerd to pull the image if the image is not present on the node;
6. Kubelet then calls cri-containerd, via the CRI runtime service API, to create and start the application container inside the pod using the pulled container image;
7. cri-containerd finally calls containerd to create the application container, put it inside the pod’s cgroups and namespace, then to start the pod’s new application container.

After these steps, a pod and its corresponding application container is created and running.
|
|
@ -44,7 +44,7 @@ As mentioned above, with the promotion of Volume Snapshot to beta, the feature i
|
|||
|
||||
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster:
|
||||
|
||||
- [Kubernetes Volume Snapshot CRDs](https://github.com/kubernetes-csi/external-snapshotter/tree/master/config/crd)
|
||||
- [Kubernetes Volume Snapshot CRDs](https://github.com/kubernetes-csi/external-snapshotter/tree/53469c21962339229dd150cbba50c34359acec73/config/crd)
|
||||
- [Volume snapshot controller](https://github.com/kubernetes-csi/external-snapshotter/tree/master/pkg/common-controller)
|
||||
- CSI Driver supporting Kubernetes volume snapshot beta
|
||||
|
||||
|
|
@@ -180,7 +180,7 @@ If your cluster does not come pre-installed with the correct components, you may

#### Install Snapshot Beta CRDs

- `kubectl create -f config/crd`
- [https://github.com/kubernetes-csi/external-snapshotter/tree/master/config/crd](https://github.com/kubernetes-csi/external-snapshotter/tree/master/config/crd)
- [https://github.com/kubernetes-csi/external-snapshotter/tree/53469c21962339229dd150cbba50c34359acec73/config/crd](https://github.com/kubernetes-csi/external-snapshotter/tree/53469c21962339229dd150cbba50c34359acec73/config/crd)
- Do this once per cluster
@@ -0,0 +1,96 @@

---
layout: blog
title: "A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications"
date: 2020-12-21
slug: writing-crl-scheduler
---

**Author**: Chris Seto (Cockroach Labs)

As long as you're willing to follow the rules, deploying on Kubernetes and air travel can be quite pleasant. More often than not, things will "just work". However, if one is interested in travelling with an alligator that must remain alive or scaling a database that must remain available, the situation is likely to become a bit more complicated. It may even be easier to build one's own plane or database for that matter. Travelling with reptiles aside, scaling a highly available stateful system is no trivial task.

Scaling any system has two main components:
1. Adding or removing infrastructure that the system will run on, and
2. Ensuring that the system knows how to handle additional instances of itself being added and removed.

Most stateless systems, web servers for example, are created without the need to be aware of peers. Stateful systems, which includes databases like CockroachDB, have to coordinate with their peer instances and shuffle around data. As luck would have it, CockroachDB handles data redistribution and replication. The tricky part is being able to tolerate failures during these operations by ensuring that data and instances are distributed across many failure domains (availability zones).

One of Kubernetes' responsibilities is to place "resources" (e.g, a disk or container) into the cluster and satisfy the constraints they request. For example: "I must be in availability zone _A_" (see [Running in multiple zones](/docs/setup/best-practices/multiple-zones/#nodes-are-labeled)), or "I can't be placed onto the same node as this other Pod" (see [Affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)).
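To make the second kind of constraint concrete, here is a minimal sketch of such an anti-affinity rule; the `app: cockroachdb` label is illustrative, not taken from the actual CockroachCloud manifests:

```yaml
# Pod spec fragment: never co-locate two of these Pods on the same node
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cockroachdb   # hypothetical label
        topologyKey: kubernetes.io/hostname
```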
As an addition to those constraints, Kubernetes offers [Statefulsets](/docs/concepts/workloads/controllers/statefulset/) that provide identity to Pods as well as persistent storage that "follows" these identified pods. Identity in a StatefulSet is handled by an increasing integer at the end of a pod's name. It's important to note that this integer must always be contiguous: in a StatefulSet, if pods 1 and 3 exist then pod 2 must also exist.

Under the hood, CockroachCloud deploys each region of CockroachDB as a StatefulSet in its own Kubernetes cluster - see [Orchestrate CockroachDB in a Single Kubernetes Cluster](https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html).
In this article, I'll be looking at an individual region, one StatefulSet and one Kubernetes cluster which is distributed across at least three availability zones.

A three-node CockroachCloud cluster would look something like this:

*(figure: a three-node CockroachCloud cluster spread across three availability zones)*

When adding additional resources to the cluster we also distribute them across zones. For the speediest user experience, we add all Kubernetes nodes at the same time and then scale up the StatefulSet.

*(figure: the scaled-up cluster, with new nodes and pods distributed across zones)*

Note that anti-affinities are satisfied no matter the order in which pods are assigned to Kubernetes nodes. In the example, pods 0, 1 and 2 were assigned to zones A, B, and C respectively, but pods 3 and 4 were assigned in a different order, to zones B and A respectively. The anti-affinity is still satisfied because the pods are still placed in different zones.

To remove resources from a cluster, we perform these operations in reverse order.

We first scale down the StatefulSet and then remove from the cluster any nodes lacking a CockroachDB pod.

*(figure: scaling down the StatefulSet, then removing the emptied nodes)*

Now, remember that pods in a StatefulSet of size _n_ must have ids in the range `[0,n)`. When scaling down a StatefulSet by _m_, Kubernetes removes _m_ pods, starting from the highest ordinals and moving towards the lowest, [the reverse in which they were added](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees).
Consider the cluster topology below:

*(figure: the cluster topology under consideration)*

As ordinals 5 through 3 are removed from this cluster, the StatefulSet continues to have a presence across all 3 availability zones.

*(figure: scale-down that preserves a presence in all three zones)*

However, Kubernetes' scheduler doesn't _guarantee_ the placement above, as we had first expected.

Our combined knowledge of the following is what led to this misconception:
* Kubernetes' ability to [automatically spread Pods across zones](/docs/setup/best-practices/multiple-zones/#pods-are-spread-across-zones)
* The behavior of a StatefulSet with _n_ replicas: when Pods are being deployed, they are created sequentially, in order from `{0..n-1}`. See [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) for more details.

Consider the following topology:

*(figure: an alternative pod placement)*

These pods were created in order and they are spread across all availability zones in the cluster. When ordinals 5 through 3 are terminated, this cluster will lose its presence in zone C!

*(figure: the same cluster after ordinals 5 through 3 are terminated, leaving zone C empty)*

Worse yet, our automation, at the time, would remove Nodes A-2, B-2, and C-2, leaving CRDB-1 in an unscheduled state, since persistent volumes are only available in the zone they are initially created in.

To correct the latter issue, we now employ a "hunt and peck" approach to removing machines from a cluster. Rather than blindly removing Kubernetes nodes from the cluster, only nodes without a CockroachDB pod would be removed. The much more daunting task was to wrangle the Kubernetes scheduler.

## A session of brainstorming left us with 3 options:

### 1. Upgrade to Kubernetes 1.18 and make use of Pod Topology Spread Constraints

While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE.
Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/), which meant that they [weren't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around.
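For illustration only, a hedged sketch of what that option would have looked like in a pod template; the label is a stand-in, not the actual CockroachCloud spec:

```yaml
# Pod template fragment: spread cockroachdb Pods evenly across zones
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # one bucket per availability zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: cockroachdb   # hypothetical label
```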
### 2. Deploy a StatefulSet _per zone_.

Rather than having one StatefulSet distributed across all availability zones, a StatefulSet per zone with node affinities would allow manual control over our zonal topology.
Our team had considered this as an option in the past, which made it particularly appealing.
Ultimately, we decided to forego this option as it would have required a massive overhaul to our codebase, and performing the migration on existing customer clusters would have been an equally large undertaking.

### 3. Write a custom Kubernetes scheduler.

Thanks to an example from [Kelsey Hightower](https://github.com/kelseyhightower/scheduler) and a blog post from [Banzai Cloud](https://banzaicloud.com/blog/k8s-custom-scheduler/), we decided to dive in head first and write our own [custom Kubernetes scheduler](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/).
Once our proof-of-concept was deployed and running, we quickly discovered that the Kubernetes scheduler is also responsible for mapping persistent volumes to the Pods that it schedules.
The output of [`kubectl get events`](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/#verifying-that-the-pods-were-scheduled-using-the-desired-schedulers) had led us to believe there was another system at play.
In our journey to find the component responsible for storage claim mapping, we discovered the [kube-scheduler plugin system](/docs/concepts/scheduling-eviction/scheduling-framework/). Our next POC was a `Filter` plugin that determined the appropriate availability zone by pod ordinal, and it worked flawlessly!

Our [custom scheduler plugin](https://github.com/cockroachlabs/crl-scheduler) is open source and runs in all of our CockroachCloud clusters.
Having control over how our StatefulSet pods are being scheduled has let us scale out with confidence.
We may look into retiring our plugin once pod topology spread constraints are available in GKE and EKS, but the maintenance overhead has been surprisingly low.
Better still: the plugin's implementation is orthogonal to our business logic. Deploying it, or retiring it for that matter, is as simple as changing the `schedulerName` field in our StatefulSet definitions.
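As a trimmed, hypothetical sketch of that swap (the scheduler name and image tag here are illustrative stand-ins, not the actual CockroachCloud definitions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: cockroachdb
  replicas: 6
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
    spec:
      # Hypothetical name; reverting to "default-scheduler" retires the plugin.
      schedulerName: crl-scheduler
      containers:
        - name: cockroachdb
          image: cockroachdb/cockroach:v20.2.0   # illustrative tag
```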
---

_[Chris Seto](https://twitter.com/_ostriches) is a software engineer at Cockroach Labs and works on their Kubernetes automation for [CockroachCloud](https://cockroachlabs.cloud), CockroachDB._
@@ -141,13 +141,14 @@ runtime where possible.

Another thing to look out for is that anything expecting to run for system maintenance
or nested inside a container when building images will no longer work. For the
former, you can use the [`crictl`][cr] tool as a drop-in replacement (see [mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) and for the
latter you can use newer container build options like [img], [buildah], or
[kaniko] that don’t require Docker.
latter you can use newer container build options like [img], [buildah],
[kaniko], or [buildkit-cli-for-kubectl] that don’t require Docker.

[cr]: https://github.com/kubernetes-sigs/cri-tools
[img]: https://github.com/genuinetools/img
[buildah]: https://github.com/containers/buildah
[kaniko]: https://github.com/GoogleContainerTools/kaniko
[buildkit-cli-for-kubectl]: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl

For containerd, you can start with their [documentation] to see what configuration
options are available as you migrate things over.
@@ -0,0 +1,224 @@

---
layout: blog
title: 'Kubernetes 1.20: Kubernetes Volume Snapshot Moves to GA'
date: 2020-12-10
slug: kubernetes-1.20-volume-snapshot-moves-to-ga
---

**Authors**: Xing Yang, VMware & Xiangqian Yu, Google

The Kubernetes Volume Snapshot feature is now GA in Kubernetes v1.20. It was introduced as [alpha](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/) in Kubernetes v1.12, followed by a [second alpha](https://kubernetes.io/blog/2019/01/17/update-on-volume-snapshot-alpha-for-kubernetes/) with breaking changes in Kubernetes v1.13, and promotion to [beta](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/) in Kubernetes 1.17. This blog post summarizes the changes releasing the feature from beta to GA.

## What is a volume snapshot?

Many storage systems (like Google Cloud Persistent Disks, Amazon Elastic Block Storage, and many on-premise storage systems) provide the ability to create a “snapshot” of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to rehydrate a new volume (pre-populated with the snapshot data) or to restore an existing volume to a previous state (represented by the snapshot).

## Why add volume snapshots to Kubernetes?

Kubernetes aims to create an abstraction layer between distributed applications and underlying clusters so that applications can be agnostic to the specifics of the cluster they run on and application deployment requires no “cluster-specific” knowledge.

The Kubernetes Storage SIG identified snapshot operations as critical functionality for many stateful workloads. For example, a database administrator may want to snapshot a database’s volumes before starting a database operation.

By providing a standard way to trigger volume snapshot operations in Kubernetes, this feature allows Kubernetes users to incorporate snapshot operations in a portable manner on any Kubernetes environment regardless of the underlying storage.

Additionally, these Kubernetes snapshot primitives act as basic building blocks that unlock the ability to develop advanced enterprise-grade storage administration features for Kubernetes, including application or cluster level backup solutions.

## What’s new since beta?

With the promotion of Volume Snapshot to GA, the feature is enabled by default on standard Kubernetes deployments and cannot be turned off.

Many enhancements have been made to improve the quality of this feature and to make it production-grade.

- The Volume Snapshot APIs and client library were moved to a separate Go module.

- A snapshot validation webhook has been added to perform necessary validation on volume snapshot objects. More details can be found in the [Volume Snapshot Validation Webhook Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1900-volume-snapshot-validation-webhook).

- Along with the validation webhook, the volume snapshot controller will start labeling invalid snapshot objects that already existed. This allows users to identify and remove any invalid objects, and to correct their workflows. Once the API is switched to the v1 type, those invalid objects will not be deletable from the system.

- To provide better insights into how the snapshot feature is performing, an initial set of operation metrics has been added to the volume snapshot controller.

- There are more end-to-end tests, running on GCP, that validate the feature in a real Kubernetes cluster. Stress tests (based on Google Persistent Disk and `hostPath` CSI Drivers) have been introduced to test the robustness of the system.

Other than introducing tightened validation, there is no difference between the v1beta1 and v1 Kubernetes volume snapshot API. In this release (with Kubernetes 1.20), both v1 and v1beta1 are served while the stored API version is still v1beta1. Future releases will switch the stored version to v1 and gradually remove v1beta1 support.

## Which CSI drivers support volume snapshots?

Snapshots are only supported for CSI drivers, not for in-tree or FlexVolume drivers. Ensure the deployed CSI driver on your cluster has implemented the snapshot interfaces. For more information, see [Container Storage Interface (CSI) for Kubernetes GA](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/).

Currently more than [50 CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html) support the Volume Snapshot feature. The [GCE Persistent Disk CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) has gone through the tests for upgrading from volume snapshots beta to GA. GA level support for other CSI drivers should be available soon.

## Who builds products using volume snapshots?

As of the publishing of this blog, the following participants from the [Kubernetes Data Protection Working Group](https://github.com/kubernetes/community/tree/master/wg-data-protection) are building products or have already built products using Kubernetes volume snapshots.

- [Dell-EMC: PowerProtect](https://www.delltechnologies.com/en-us/data-protection/powerprotect-data-manager.htm)
- [Druva](https://www.druva.com/)
- [Kasten K10](https://www.kasten.io/)
- [NetApp: Project Astra](https://cloud.netapp.com/project-astra)
- [Portworx (PX-Backup)](https://portworx.com/products/px-backup/)
- [Pure Storage (Pure Service Orchestrator)](https://github.com/purestorage/pso-csi)
- [Red Hat OpenShift Container Storage](https://www.redhat.com/en/technologies/cloud-computing/openshift-container-storage)
- [Robin Cloud Native Storage](https://robin.io/storage/)
- [TrilioVault for Kubernetes](https://docs.trilio.io/kubernetes/)
- [Velero plugin for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi)

## How to deploy volume snapshots?

The Volume Snapshot feature contains the following components:

- [Kubernetes Volume Snapshot CRDs](https://github.com/kubernetes-csi/external-snapshotter/tree/master/client/config/crd)
- [Volume snapshot controller](https://github.com/kubernetes-csi/external-snapshotter/tree/master/pkg/common-controller)
- [Snapshot validation webhook](https://github.com/kubernetes-csi/external-snapshotter/tree/master/pkg/validation-webhook)
- CSI Driver along with [CSI Snapshotter sidecar](https://github.com/kubernetes-csi/external-snapshotter/tree/master/pkg/sidecar-controller)

It is strongly recommended that Kubernetes distributors bundle and deploy the volume snapshot controller, CRDs, and validation webhook as part of their Kubernetes cluster management process (independent of any CSI Driver).

{{< warning >}}
The snapshot validation webhook serves as a critical component to transition smoothly from using the v1beta1 to the v1 API. If the webhook is not installed, invalid volume snapshot objects cannot be prevented from being created or updated, which in turn will block the deletion of invalid volume snapshot objects in coming upgrades.
{{< /warning >}}

If your cluster does not come pre-installed with the correct components, you may manually install them. See the [CSI Snapshotter](https://github.com/kubernetes-csi/external-snapshotter#readme) README for details.

## How to use volume snapshots?

Assuming all the required components (including the CSI driver) have already been deployed and are running on your cluster, you can create volume snapshots using the `VolumeSnapshot` API object, or use an existing `VolumeSnapshot` to restore a PVC by specifying the VolumeSnapshot data source on it. For more details, see the [volume snapshot documentation](/docs/concepts/storage/volume-snapshots/).

{{< note >}} The Kubernetes Snapshot API does not provide any application consistency guarantees. You have to prepare your application (pause application, freeze filesystem, etc.) before taking the snapshot for data consistency, either manually or using higher level APIs/controllers. {{< /note >}}

### Dynamically provision a volume snapshot

To dynamically provision a volume snapshot, create a `VolumeSnapshotClass` API object first.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: test-snapclass
driver: testdriver.csi.k8s.io
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: mysecret
  csi.storage.k8s.io/snapshotter-secret-namespace: mysecretnamespace
```

Then create a `VolumeSnapshot` API object from a PVC by specifying the volume snapshot class.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: ns1
spec:
  volumeSnapshotClassName: test-snapclass
  source:
    persistentVolumeClaimName: test-pvc
```

### Importing an existing volume snapshot with Kubernetes

To import a pre-existing volume snapshot into Kubernetes, manually create a `VolumeSnapshotContent` object first.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: test-content
spec:
  deletionPolicy: Delete
  driver: testdriver.csi.k8s.io
  source:
    snapshotHandle: 7bdd0de3-xxx
  volumeSnapshotRef:
    name: test-snapshot
    namespace: default
```

Then create a `VolumeSnapshot` object pointing to the `VolumeSnapshotContent` object.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  source:
    volumeSnapshotContentName: test-content
```

### Rehydrate volume from snapshot

A bound and ready `VolumeSnapshot` object can be used to rehydrate a new volume with data pre-populated from snapshotted data as shown here:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
  namespace: demo-namespace
spec:
  storageClassName: test-storageclass
  dataSource:
    name: test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

## How to add support for snapshots in a CSI driver?

See the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) and the [Kubernetes-CSI Driver Developer Guide](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html) for more details on how to implement the snapshot feature in a CSI driver.

## What are the limitations?

The GA implementation of volume snapshots for Kubernetes has the following limitations:

- Does not support reverting an existing PVC to an earlier state represented by a snapshot (only supports provisioning a new volume from a snapshot).

### How to learn more?

The code repository for snapshot APIs and controller is here: https://github.com/kubernetes-csi/external-snapshotter

Check out additional documentation on the snapshot feature here: http://k8s.io/docs/concepts/storage/volume-snapshots and https://kubernetes-csi.github.io/docs/

## How to get involved?

This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together.

We offer a huge thank you to the contributors who stepped up these last few quarters to help the project reach GA. We want to thank Saad Ali, Michelle Au, Tim Hockin, and Jordan Liggitt for their insightful reviews and thorough consideration with the design, thank Andi Li for his work on adding the support of the snapshot validation webhook, thank Grant Griffiths on implementing metrics support in the snapshot controller and handling password rotation in the validation webhook, thank Chris Henzie, Raunak Shah, and Manohar Reddy for writing critical e2e tests to meet the scalability and stability requirements for graduation, thank Kartik Sharma for moving snapshot APIs and client lib to a separate go module, and thank Raunak Shah and Prafull Ladha for their help with upgrade testing from beta to GA.

There are many more people who have helped to move the snapshot feature from beta to GA. We want to thank everyone who has contributed to this effort:

- [Andi Li](https://github.com/AndiLi99)
- [Ben Swartzlander](https://github.com/bswartz)
- [Chris Henzie](https://github.com/chrishenzie)
- [Christian Huffman](https://github.com/huffmanca)
- [Grant Griffiths](https://github.com/ggriffiths)
- [Humble Devassy Chirammal](https://github.com/humblec)
- [Jan Šafránek](https://github.com/jsafrane)
- [Jiawei Wang](https://github.com/Jiawei0227)
- [Jing Xu](https://github.com/jingxu97)
- [Jordan Liggitt](https://github.com/liggitt)
- [Kartik Sharma](https://github.com/Kartik494)
- [Madhu Rajanna](https://github.com/Madhu-1)
- [Manohar Reddy](https://github.com/boddumanohar)
- [Michelle Au](https://github.com/msau42)
- [Patrick Ohly](https://github.com/pohly)
- [Prafull Ladha](https://github.com/prafull01)
- [Prateek Pandey](https://github.com/prateekpandey14)
- [Raunak Shah](https://github.com/RaunakShah)
- [Saad Ali](https://github.com/saad-ali)
- [Saikat Roychowdhury](https://github.com/saikat-royc)
- [Tim Hockin](https://github.com/thockin)
- [Xiangqian Yu](https://github.com/yuxiangqian)
- [Xing Yang](https://github.com/xing-yang)
- [Zhu Can](https://github.com/zhucan)

For those interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). We’re rapidly growing and always welcome new contributors.

We also hold regular [Data Protection Working Group meetings](https://docs.google.com/document/d/15tLCV3csvjHbKb16DVk-mfUmFry_Rlwo-2uG6KNGsfw/edit#). New attendees are welcome to join in discussions.
@@ -0,0 +1,53 @@

---
layout: blog
title: 'Kubernetes 1.20: Pod Impersonation and Short-lived Volumes in CSI Drivers'
date: 2020-12-18
slug: kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi
---

**Author**: Shihang Zhang (Google)

Typically when a [CSI](https://github.com/container-storage-interface/spec/blob/baa71a34651e5ee6cb983b39c03097d7aa384278/spec.md) driver mounts credentials such as secrets and certificates, it has to authenticate against storage providers to access the credentials. However, access to those credentials is controlled on the basis of the pods' identities rather than the CSI driver's identity. CSI drivers, therefore, need some way to retrieve a pod's service account token.

Currently there are two suboptimal approaches to achieve this: either grant CSI drivers the permission to use the TokenRequest API, or read tokens directly from the host filesystem.

Both of them exhibit the following drawbacks:

- Violating the principle of least privilege
- Every CSI driver needs to re-implement the logic of getting the pod’s service account token

The second approach is more problematic due to:

- The audience of the token defaults to the kube-apiserver
- The token is not guaranteed to be available (e.g. `AutomountServiceAccountToken=false`)
- The approach does not work for CSI drivers that run as a different (non-root) user from the pods. See the [file permission section for service account tokens](https://github.com/kubernetes/enhancements/blob/f40c24a5da09390bd521be535b38a4dbab09380c/keps/sig-storage/20180515-svcacct-token-volumes.md#file-permission)
- The token might be a legacy Kubernetes service account token, which doesn’t expire if `BoundServiceAccountTokenVolume=false`

Kubernetes 1.20 introduces an alpha feature, `CSIServiceAccountToken`, to improve the security posture. The new feature allows CSI drivers to receive pods' [bound service account tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md).
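As a minimal sketch of how a driver opts in (the driver name and audience below are illustrative, and the cluster must have the alpha `CSIServiceAccountToken` feature gate enabled), the driver lists the token audiences it wants in its `CSIDriver` object; the kubelet then passes a bound token for each audience to the driver through the `NodePublishVolume` volume context:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mydriver.example.com   # hypothetical driver name
spec:
  tokenRequests:
    - audience: "vault"        # illustrative audience for the bound token
      expirationSeconds: 3600  # short-lived by design
```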
This feature also provides a knob to re-publish volumes so that short-lived volumes can be refreshed.

## Pod Impersonation

### Using GCP APIs

Using [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), a Kubernetes service account can authenticate as a Google service account when accessing Google Cloud APIs. If a CSI driver needs to access GCP APIs on behalf of the pods that it is mounting volumes for, it can use the pod's service account token to [exchange for GCP tokens](https://cloud.google.com/iam/docs/reference/sts/rest). The pod's service account token is plumbed through the volume context in `NodePublishVolume` RPC calls when the feature `CSIServiceAccountToken` is enabled. For example: accessing [Google Secret Manager](https://cloud.google.com/secret-manager/) via a [secret store CSI driver](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp).

### Using Vault

If users configure [Kubernetes as an auth method](https://www.vaultproject.io/docs/auth/kubernetes), Vault uses the `TokenReview` API to validate the Kubernetes service account token. CSI drivers that use Vault as a resources provider need to present the pod's service account token to Vault. For example, the [secrets store CSI driver](https://github.com/hashicorp/secrets-store-csi-driver-provider-vault) and the [cert manager CSI driver](https://github.com/jetstack/cert-manager-csi).

## Short-lived Volumes

To keep short-lived volumes such as certificates effective, CSI drivers can specify `RequiresRepublish=true` in their `CSIDriver` object to have the kubelet periodically call `NodePublishVolume` on mounted volumes. These republishes allow CSI drivers to ensure that the volume content is up-to-date.
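Continuing the hedged sketch from above, pairing `requiresRepublish` with `tokenRequests` keeps the mounted token fresh:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mydriver.example.com   # hypothetical driver name
spec:
  tokenRequests:
    - audience: "vault"        # illustrative audience
  requiresRepublish: true      # kubelet periodically re-runs NodePublishVolume
```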
## Next steps

This feature is alpha and projected to move to beta in 1.21. See more in the following KEP and CSI documentation:

- [KEP-1855: Service Account Token for CSI Driver](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1855-csi-driver-service-account-token/README.md)
- [Token Requests](https://kubernetes-csi.github.io/docs/token-requests.html)

Your feedback is always welcome!
- SIG-Auth [meets regularly](https://github.com/kubernetes/community/tree/master/sig-auth#meetings) and can be reached via [Slack and the mailing list](https://github.com/kubernetes/community/tree/master/sig-auth#contact)
- SIG-Storage [meets regularly](https://github.com/kubernetes/community/tree/master/sig-storage#meetings) and can be reached via [Slack and the mailing list](https://github.com/kubernetes/community/tree/master/sig-storage#contact).
@ -0,0 +1,59 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Kubernetes 1.20: Granular Control of Volume Permission Changes'
|
||||
date: 2020-12-14
|
||||
slug: kubernetes-release-1.20-fsGroupChangePolicy-fsGroupPolicy
|
||||
---
|
||||
|
||||
**Authors**: Hemant Kumar, Red Hat & Christian Huffman, Red Hat
|
||||
|
||||
Kubernetes 1.20 brings two important beta features, allowing Kubernetes admins and users alike to have more adequate control over how volume permissions are applied when a volume is mounted inside a Pod.
|
||||
|
||||
### Allow users to skip recursive permission changes on mount

Traditionally if your pod is running as a non-root user ([which you should](https://twitter.com/thockin/status/1333892204490735617)), you must specify an `fsGroup` inside the pod’s security context so that the volume can be readable and writable by the Pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).

But one side-effect of setting `fsGroup` is that, each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume - with a few exceptions noted below. This happens even if group ownership of the volume already matches the requested `fsGroup`, and can be pretty expensive for larger volumes with lots of small files, which causes pod startup to take a long time. This scenario has been a [known problem](https://github.com/kubernetes/kubernetes/issues/69699) for a while, and in Kubernetes 1.20 we are providing knobs to opt out of recursive permission changes if the volume already has the correct permissions.

When configuring a pod’s security context, set `fsGroupChangePolicy` to "OnRootMismatch" so that if the root of the volume already has the correct permissions, the recursive permission change can be skipped. Kubernetes ensures that permissions of the top-level directory are changed last the first time it applies permissions.
```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
```
You can learn more about this in [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).

### Allow CSI Drivers to declare support for fsGroup based permissions

Although the previous section implied that Kubernetes _always_ recursively changes permissions of a volume if a Pod has an `fsGroup`, this is not strictly true. For certain multi-writer volume types, such as NFS or Gluster, the cluster doesn’t perform recursive permission changes even if the pod has an `fsGroup`. Other volume types may not even support `chown()`/`chmod()`, which rely on Unix-style permission control primitives.

So how do we know when to apply recursive permission changes and when we shouldn't? For in-tree storage drivers, this was relatively simple. For [CSI](https://kubernetes-csi.github.io/docs/introduction.html#introduction) drivers that could span a multitude of platforms and storage types, this problem can be a bigger challenge.

Previously, whenever a CSI volume was mounted to a Pod, Kubernetes would attempt to automatically determine if the permissions and ownership should be modified. These methods were imprecise and could cause issues as we already mentioned, depending on the storage type.

The CSIDriver custom resource now has a `.spec.fsGroupPolicy` field, allowing storage drivers to explicitly opt in or out of these recursive modifications. By having the CSI driver specify a policy for the backing volumes, Kubernetes can avoid needless modification attempts. This optimization helps to reduce volume mount time and also cuts down on reported errors about modifications that would never succeed.
#### CSIDriver FSGroupPolicy API

Three FSGroupPolicy values are available as of Kubernetes 1.20, with more planned for future releases.

- **ReadWriteOnceWithFSType** - This is the default policy, applied if no `fsGroupPolicy` is defined; this preserves the behavior from previous Kubernetes releases. Each volume is examined at mount time to determine if permissions should be recursively applied.
- **File** - Always attempt to apply permission modifications, regardless of the filesystem type or PersistentVolumeClaim’s access mode.
- **None** - Never apply permission modifications.
#### How do I use it?

The only configuration needed is defining `fsGroupPolicy` inside the `.spec` for a CSIDriver. Once that element is defined, any subsequently mounted volumes will automatically use the defined policy. There’s no additional deployment required!
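As an illustration, here is a minimal sketch of a `CSIDriver` manifest that opts out of recursive permission changes; the driver name is a placeholder for whichever CSI driver you deploy:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.example.com   # placeholder driver name
spec:
  # Never attempt fsGroup-based recursive ownership/permission changes
  # for volumes backed by this driver.
  fsGroupPolicy: None
```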
#### What’s next?

Depending on feedback and adoption, the Kubernetes team plans to push these implementations to GA in either 1.21 or 1.22.

### How can I learn more?

This feature is explained in more detail in the Kubernetes project documentation: [CSI Driver fsGroup Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html) and [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).

### How do I get involved?

The [Kubernetes Slack channel #csi](https://kubernetes.slack.com/messages/csi) and any of the [standard SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact) are great mediums to reach out to the SIG Storage and CSI teams.

Those interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system should join the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage). We’re rapidly growing and always welcome new contributors.
@ -0,0 +1,134 @@
---
layout: blog
title: 'Third Party Device Metrics Reaches GA'
date: 2020-12-16
slug: third-party-device-metrics-reaches-ga
---

**Authors:** Renaud Gaubert (NVIDIA), David Ashpole (Google), and Pramod Ramarao (NVIDIA)
With Kubernetes 1.20, infrastructure teams who manage large scale Kubernetes clusters are seeing the graduation of two exciting and long awaited features:

* The Pod Resources API (introduced in 1.13) is finally graduating to GA. This allows Kubernetes plugins to obtain information about the node’s resource usage and assignment; for example: which pod/container consumes which device.
* The `DisableAcceleratorMetrics` feature (introduced in 1.19) is graduating to beta and will be enabled by default. This removes device metrics reported by the kubelet in favor of the new plugin architecture.

Many of the features related to fundamental device support (device discovery, plugin, and monitoring) are reaching a strong level of stability.
Kubernetes users should see these features as stepping stones to enable more complex use cases (networking, scheduling, storage, etc.)!

One such example is Non Uniform Memory Access (NUMA) placement where, when selecting a device, an application typically wants to ensure that data transfer between CPU Memory and Device Memory is as fast as possible. In some cases, incorrect NUMA placement can nullify the benefit of offloading compute to an external device.

If these are topics of interest to you, consider joining the [Kubernetes Node Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-node) (SIG) for all topics related to the Kubernetes node, the COD (container orchestrated device) workgroup for topics related to runtimes, or the resource management forum for topics related to resource management!
## The Pod Resources API - Why does it need to exist?

Kubernetes is a vendor neutral platform. If we want it to support device monitoring, adding vendor-specific code in the Kubernetes code base is not an ideal solution. Ultimately, devices are a domain where deep expertise is needed and the best people to add and maintain code in that area are the device vendors themselves.

The Pod Resources API was built as a solution to this issue. Each vendor can build and maintain their own out-of-tree monitoring plugin. This monitoring plugin, often deployed as a separate pod within a cluster, can then associate the metrics a device emits with the associated pod that's using it.

For example, use the NVIDIA GPU `dcgm-exporter` to scrape metrics in Prometheus format:
```
$ curl -sL http://127.0.0.1:8080/metrics


# HELP DCGM_FI_DEV_SM_CLOCK SM clock frequency (in MHz).
# TYPE DCGM_FI_DEV_SM_CLOCK gauge
# HELP DCGM_FI_DEV_MEM_CLOCK Memory clock frequency (in MHz).
# TYPE DCGM_FI_DEV_MEM_CLOCK gauge
# HELP DCGM_FI_DEV_MEMORY_TEMP Memory temperature (in C).
# TYPE DCGM_FI_DEV_MEMORY_TEMP gauge
...
DCGM_FI_DEV_SM_CLOCK{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 139
DCGM_FI_DEV_MEM_CLOCK{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 405
DCGM_FI_DEV_MEMORY_TEMP{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 9223372036854775794
```
Each agent is expected to adhere to the node monitoring guidelines. In other words, plugins are expected to generate metrics in Prometheus format, and new metrics should not have any dependency on the Kubernetes base directly.

This allows consumers of the metrics to use a compatible monitoring pipeline to collect and analyze metrics from a variety of agents, even if they are maintained by different vendors.

## Disabling the NVIDIA GPU metrics - Warning {#nvidia-gpu-metrics-deprecated}

With the graduation of the plugin monitoring system, Kubernetes is deprecating the NVIDIA GPU metrics that are being reported by the kubelet.

With the [DisableAcceleratorMetrics](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics) feature being enabled by default in Kubernetes 1.20, NVIDIA GPUs are no longer special citizens in Kubernetes. This is a good thing in the spirit of being vendor-neutral, and enables the most suited people to maintain their plugin on their own release schedule!

Users will now need to either install the [NVIDIA DCGM exporter](https://github.com/NVIDIA/gpu-monitoring-tools) or use [bindings](https://github.com/nvidia/go-nvml) to gather more accurate and complete metrics about NVIDIA GPUs. This deprecation means that you can no longer rely on metrics that were reported by the kubelet, such as `container_accelerator_duty_cycle` or `container_accelerator_memory_used_bytes`, which were used to gather NVIDIA GPU memory utilization.

This means that users who used to rely on the NVIDIA GPU metrics reported by the kubelet will need to update their references and deploy the NVIDIA plugin. Namely, the different metrics reported by Kubernetes map to the following metrics:
| Kubernetes Metrics                          | NVIDIA dcgm-exporter metric                 |
| ------------------------------------------- | ------------------------------------------- |
| `container_accelerator_duty_cycle`          | `DCGM_FI_DEV_GPU_UTIL`                      |
| `container_accelerator_memory_used_bytes`   | `DCGM_FI_DEV_FB_USED`                       |
| `container_accelerator_memory_total_bytes`  | `DCGM_FI_DEV_FB_FREE + DCGM_FI_DEV_FB_USED` |
You might also be interested in other metrics such as `DCGM_FI_DEV_GPU_TEMP` (the GPU temperature) or `DCGM_FI_DEV_POWER_USAGE` (the power usage). The [default set](https://github.com/NVIDIA/gpu-monitoring-tools/blob/d5c9bb55b4d1529ca07068b7f81e690921ce2b59/etc/dcgm-exporter/default-counters.csv) is available in NVIDIA's [Data Center GPU Manager documentation](https://docs.nvidia.com/datacenter/dcgm/latest/dcgm-api/group__dcgmFieldIdentifiers.html).

Note that for this release you can still set the `DisableAcceleratorMetrics` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to _false_, effectively re-enabling the ability for the kubelet to report NVIDIA GPU metrics.

Paired with the graduation of the Pod Resources API, these tools can be used to generate GPU telemetry [that can be used in visualization dashboards](https://grafana.com/grafana/dashboards/12239); below is an example:

## The Pod Resources API - What can I go on to do with this?

As soon as this interface was introduced, many vendors started using it for widely different use cases! To list a few examples:

The [kuryr-kubernetes](https://github.com/openstack/kuryr-kubernetes) CNI plugin in tandem with [intel-sriov-device-plugin](https://github.com/intel/sriov-network-device-plugin). This allowed the CNI plugin to know which allocation of SR-IOV Virtual Functions (VFs) the kubelet made and use that information to correctly set up the container network namespace and use a device with the appropriate NUMA node. We also expect this interface to be used to track the allocated and available resources with information about the NUMA topology of the worker node.

Another use-case is GPU telemetry, where GPU metrics can be associated with the containers and pods that the GPU is assigned to. One such example is the NVIDIA `dcgm-exporter`, but others can be easily built in the same paradigm.

The Pod Resources API is a simple gRPC service which informs clients of the pods the kubelet knows. The information covers the device assignments and the CPU assignments the kubelet has made, obtained from the internal state of the kubelet's Device Manager and CPU Manager respectively.

You can see below a sample of the API and how a Go client could use that information in a few lines:
```
service PodResourcesLister {
    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
    rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}

    // Kubernetes 1.21
    rpc Watch(WatchPodResourcesRequest) returns (stream WatchPodResourcesResponse) {}
}
```
```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
	// Assumed import path for the Pod Resources API client package;
	// adjust it to match the version your project vendors.
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1alpha1"
)

const connectionTimeout = 10 * time.Second

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), connectionTimeout)
	defer cancel()

	// The kubelet serves the Pod Resources API on a local unix socket.
	socket := "/var/lib/kubelet/pod-resources/kubelet.sock"
	conn, err := grpc.DialContext(ctx, socket, grpc.WithInsecure(), grpc.WithBlock(),
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("unix", addr, timeout)
		}),
	)
	if err != nil {
		panic(err)
	}

	// List the pods the kubelet knows about, along with their device
	// and CPU assignments.
	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", resp)
}
```
Finally, note that you can watch the number of requests made to the Pod Resources endpoint by watching the new kubelet metric called `pod_resources_endpoint_requests_total` on the kubelet's `/metrics` endpoint.
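As a quick sanity check, here is a sketch of querying that metric; the port, the bearer token, and running the command on the node itself are all assumptions that depend on how your kubelet is configured:

```shell
# Query the kubelet's /metrics endpoint and filter for the new counter.
curl -sk -H "Authorization: Bearer ${TOKEN}" https://localhost:10250/metrics \
  | grep pod_resources_endpoint_requests_total
```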
## Is device monitoring suitable for production? Can I extend it? Can I contribute?

Yes! This feature, released in 1.13, almost 2 years ago, has seen broad adoption, is already used by different cloud managed services, and with its graduation to GA in Kubernetes 1.20 is production ready!

If you are a device vendor, you can start using it today! If you just want to monitor the devices in your cluster, go get the latest version of your monitoring plugin!

If you feel passionate about that area, join the Kubernetes community, help improve the API, or contribute device monitoring plugins!

## Acknowledgements

We thank the members of the community who have contributed to this feature or given feedback, including members of WG-Resource-Management, SIG-Node and the Resource management forum!
@ -13,7 +13,7 @@ new_case_study_styles: true
heading_background: /images/case-studies/appdirect/banner1.jpg
heading_title_logo: /images/appdirect_logo.png
subheading: >
  AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetess
  AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetes
case_study_details:
- Company: AppDirect
- Location: San Francisco, California
@ -37,8 +37,8 @@ when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, please contact a CNCF project maintainer or our mediator, Mishi Choudhary <mishi@linux.com>.

This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
(https://contributor-covenant.org), version 1.2.0, available at
https://contributor-covenant.org/version/1/2/0/

### CNCF Events Code of Conduct
@ -1,13 +1,13 @@
---
reviewers:
title: Configuring kubelet Garbage Collection
title: Garbage collection for container images
content_type: concept
weight: 70
---

<!-- overview -->

Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.

External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
@ -114,7 +114,7 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen
### Azure CNI for Kubernetes
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node.

Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).

### Big Cloud Fabric from Big Switch Networks
@ -50,39 +50,41 @@ rules:

## Metric lifecycle

Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deletion
Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric

Alpha metrics have no stability guarantees; as such they can be modified or deleted at any time.
Alpha metrics have no stability guarantees. These metrics can be modified or deleted at any time.

Stable metrics can be guaranteed to not change; Specifically, stability means:
Stable metrics are guaranteed to not change. This means:
* A stable metric without a deprecated signature will not be deleted or renamed
* A stable metric's type will not be modified

* the metric itself will not be deleted (or renamed)
* the type of metric will not be modified
Deprecated metrics are slated for deletion, but are still available for use.
These metrics include an annotation about the version in which they became deprecated.

Deprecated metric signal that the metric will eventually be deleted; to find which version, you need to check annotation, which includes from which kubernetes version that metric will be considered deprecated.
For example:

Before deprecation:
* Before deprecation

```
# HELP some_counter this counts things
# TYPE some_counter counter
some_counter 0
```
```
# HELP some_counter this counts things
# TYPE some_counter counter
some_counter 0
```

After deprecation:
* After deprecation

```
# HELP some_counter (Deprecated since 1.15.0) this counts things
# TYPE some_counter counter
some_counter 0
```
```
# HELP some_counter (Deprecated since 1.15.0) this counts things
# TYPE some_counter counter
some_counter 0
```

Once a metric is hidden then by default the metrics is not published for scraping. To use a hidden metric, you need to override the configuration for the relevant cluster component.
Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the [Show hidden metrics](#show-hidden-metrics) section.

Once a metric is deleted, the metric is not published. You cannot change this using an override.
Deleted metrics are no longer published and cannot be used.

## Show Hidden Metrics
## Show hidden metrics

As described above, admins can enable hidden metrics through a command-line flag on a specific binary. This intends to be used as an escape hatch for admins if they missed the migration of the metrics deprecated in the last release.
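For example, a sketch of that escape hatch, assuming you control the component's startup flags; the version you pass must be the previous minor release:

```shell
# Re-expose metrics that were hidden in the current release.
kube-apiserver --show-hidden-metrics-for-version=1.19 <other flags...>
```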
@ -40,7 +40,7 @@ separate database or file service.
A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
that lets you store configuration for other objects to use. Unlike most
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accepts key-value pairs as their values. Both the `data`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data.
@ -396,7 +396,7 @@ The kubelet supports different ways to measure Pod storage use:

{{< tabs name="resource-emphemeralstorage-measurement" >}}
{{% tab name="Periodic scanning" %}}
The kubelet performs regular, schedules checks that scan each
The kubelet performs regular, scheduled checks that scan each
`emptyDir` volume, container log directory, and writeable container layer.

The scan measures how much space is used.
@ -271,6 +271,13 @@ However, using the builtin Secret type helps unify the formats of your credentia
and the API server does verify if the required keys are provided in a Secret
configuration.

{{< caution >}}
SSH private keys do not establish trusted communication between an SSH client and
host server on their own. A secondary means of establishing trust is needed to
mitigate "man in the middle" attacks, such as a `known_hosts` file added to a
ConfigMap.
{{< /caution >}}

### TLS secrets

Kubernetes provides a builtin Secret type `kubernetes.io/tls` for to storing
@ -351,7 +358,7 @@ data:

A bootstrap type Secret has the following keys specified under `data`:

- `token_id`: A random 6 character string as the token identifier. Required.
- `token-id`: A random 6 character string as the token identifier. Required.
- `token-secret`: A random 16 character string as the actual token secret. Required.
- `description`: A human-readable string that describes what the token is
  used for. Optional.
@ -103,7 +103,7 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}

If there isn't an Operator in the ecosystem that implements the behavior you
want, you can code your own. In [What's next](#whats-next) you'll find a few
want, you can code your own. In [What's next](#what-s-next) you'll find a few
links to libraries and tools you can use to write your own cloud native
Operator.
@ -19,7 +19,7 @@ is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.

The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).

Most operations can be performed through the
@ -59,8 +59,8 @@ metadata:
## Applications And Instances Of Applications

An application can be installed one or more times into a Kubernetes cluster and,
in some cases, the same namespace. For example, wordpress can be installed more
than once where different websites are different installations of wordpress.
in some cases, the same namespace. For example, WordPress can be installed more
than once where different websites are different installations of WordPress.

The name of an application and the instance name are recorded separately. For
example, WordPress has a `app.kubernetes.io/name` of `wordpress` while it has
@ -168,6 +168,6 @@ metadata:
...
```

With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and Wordpress, the broader application, are included.
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and WordPress, the broader application, are included.
@ -216,12 +216,17 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pause
  name: pause
spec:
  containers:
  - name: pause
  - name: pause
    image: k8s.gcr.io/pause
EOF
```

The output is similar to this:

```
Error from server (Forbidden): error when creating "STDIN": pods "pause" is forbidden: unable to validate against any pod security policy: []
```
@ -264,12 +269,17 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pause
  name: pause
spec:
  containers:
  - name: pause
  - name: pause
    image: k8s.gcr.io/pause
EOF
```

The output is similar to this:

```
pod "pause" created
```
@ -281,14 +291,19 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged
  name: privileged
spec:
  containers:
  - name: pause
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
EOF
```

The output is similar to this:

```
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
```
@ -30,7 +30,7 @@ time according to the overhead associated with the Pod's
[RuntimeClass](/docs/concepts/containers/runtime-class/).

When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing
resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing
the Pod cgroup, and when carrying out Pod eviction ranking.

## Enabling Pod Overhead {#set-up}
@ -735,7 +735,7 @@ Only statically provisioned volumes are supported for alpha release. Administrat
{{< feature-state for_k8s_version="v1.20" state="stable" >}}

Volume snapshots only support the out-of-tree CSI volume plugins. For details, see [Volume Snapshots](/docs/concepts/storage/volume-snapshots/).
In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ] (https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md).
In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md).

### Create a PersistentVolumeClaim from a Volume Snapshot {#create-persistent-volume-claim-from-volume-snapshot}
@ -34,7 +34,7 @@ text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and
## API

There are two API extensions for this feature:
- [CSIStorageCapacity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csistoragecapacity-v1alpha1-storage-k8s-io) objects:
- CSIStorageCapacity objects:
  these get produced by a CSI driver in the namespace
  where the driver is installed. Each object contains capacity
  information for one storage class and defines which nodes have
@ -8,50 +8,74 @@ no_list: true

{{< glossary_definition term_id="workload" length="short" >}}
Whether your workload is a single component or several that work together, on Kubernetes you run
it inside a set of [Pods](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}}
on your cluster.
it inside a set of [_pods_](/docs/concepts/workloads/pods).
In Kubernetes, a `Pod` represents a set of running
{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.

A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then
a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that
Pod is running means that all the Pods on that node fail. Kubernetes treats that level
of failure as final: you would need to create a new Pod even if the node later recovers.
Kubernetes pods have a [defined lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).
For example, once a pod is running in your cluster then a critical fault on the
{{< glossary_tooltip text="node" term_id="node" >}} where that pod is running means that
all the pods on that node fail. Kubernetes treats that level of failure as final: you
would need to create a new `Pod` to recover, even if the node later becomes healthy.

However, to make life considerably easier, you don't need to manage each Pod directly.
Instead, you can use _workload resources_ that manage a set of Pods on your behalf.
However, to make life considerably easier, you don't need to manage each `Pod` directly.
Instead, you can use _workload resources_ that manage a set of pods on your behalf.
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
that make sure the right number of the right kind of Pod are running, to match the state
that make sure the right number of the right kind of pod are running, to match the state
you specified.

Those workload resources include:
Kubernetes provides several built-in workload resources:

* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
  (replacing the legacy resource {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}});
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/);
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for running Pods that provide
  node-local facilities, such as a storage driver or network plugin;
* [Job](/docs/concepts/workloads/controllers/job/) and
  [CronJob](/docs/concepts/workloads/controllers/cron-jobs/)
  for tasks that run to completion.
* [`Deployment`](/docs/concepts/workloads/controllers/deployment/) and [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/)
  (replacing the legacy resource
  {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}).
  `Deployment` is a good fit for managing a stateless application workload on your cluster,
  where any `Pod` in the `Deployment` is interchangeable and can be replaced if needed.
* [`StatefulSet`](/docs/concepts/workloads/controllers/statefulset/) lets you
  run one or more related Pods that do track state somehow. For example, if your workload
  records data persistently, you can run a `StatefulSet` that matches each `Pod` with a
  [`PersistentVolume`](/docs/concepts/storage/persistent-volumes/). Your code, running in the
  `Pods` for that `StatefulSet`, can replicate data to other `Pods` in the same `StatefulSet`
  to improve overall resilience.
* [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) defines `Pods` that provide
  node-local facilities. These might be fundamental to the operation of your cluster, such
  as a networking helper tool, or be part of an
  {{< glossary_tooltip text="add-on" term_id="addons" >}}.
  Every time you add a node to your cluster that matches the specification in a `DaemonSet`,
  the control plane schedules a `Pod` for that `DaemonSet` onto the new node.
* [`Job`](/docs/concepts/workloads/controllers/job/) and
  [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/)
  define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas
  `CronJobs` recur according to a schedule.

There are also two supporting concepts that you might find relevant:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
  from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
  removes Jobs once a defined time has passed since they completed.
In the wider Kubernetes ecosystem, you can find third-party workload resources that provide
additional behaviors. Using a
[custom resource definition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
you can add in a third-party workload resource if you want a specific behavior that's not part
of Kubernetes' core. For example, if you wanted to run a group of `Pods` for your application but
stop work unless _all_ the Pods are available (perhaps for some high-throughput distributed task),
then you can implement or install an extension that does provide that feature.

## {{% heading "whatsnext" %}}

As well as reading about each resource, you can learn about specific tasks that relate to them:

* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
* [Run a stateless application using a `Deployment`](/docs/tasks/run-application/run-stateless-application-deployment/)
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
  or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
* [Run Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
* [Run automated tasks with a `CronJob`](/docs/tasks/job/automated-tasks-with-cron-jobs/)

To learn about Kubernetes' mechanisms for separating code from configuration,
visit [Configuration](/docs/concepts/configuration/).

There are two supporting concepts that provide backgrounds about how Kubernetes manages pods
for applications:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
  from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
  removes Jobs once a defined time has passed since they completed.

Once your application is running, you might want to make it available on the internet as
a [Service](/docs/concepts/services-networking/service/) or, for web application only,
using an [Ingress](/docs/concepts/services-networking/ingress).
a [`Service`](/docs/concepts/services-networking/service/) or, for web application only,
using an [`Ingress`](/docs/concepts/services-networking/ingress).

You can also visit [Configuration](/docs/concepts/configuration/) to learn about Kubernetes'
mechanisms for separating code from configuration.
@ -49,6 +49,37 @@ This example CronJob manifest prints the current time and a hello message every
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
takes you through this example in more detail).

### Cron schedule syntax

```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │                                   7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
```

| Entry                  | Description                                                | Equivalent to |
| ---------------------- | ---------------------------------------------------------- | ------------- |
| @yearly (or @annually) | Run once a year at midnight of 1 January                   | 0 0 1 1 *     |
| @monthly               | Run once a month at midnight of the first day of the month | 0 0 1 * *     |
| @weekly                | Run once a week at midnight on Sunday morning              | 0 0 * * 0     |
| @daily (or @midnight)  | Run once a day at midnight                                 | 0 0 * * *     |
| @hourly                | Run once an hour at the beginning of the hour              | 0 * * * *     |

For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:

`0 0 13 * 5`

To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).

## CronJob limitations {#cron-job-limitations}

A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there
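Putting the syntax to work, the sample schedule above would appear in a CronJob manifest like this minimal sketch, where the name, image, and command are placeholders:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: friday-and-13th            # placeholder name
spec:
  # Midnight every Friday, plus midnight on the 13th of each month.
  schedule: "0 0 13 * 5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report           # placeholder container
            image: busybox         # placeholder image
            command: ["date"]
          restartPolicy: OnFailure
```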
@ -38,6 +38,7 @@ You can run the example with this command:
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml
```
The output is similar to this:
```
job.batch/pi created
```
@ -47,6 +48,7 @@ Check on the status of the Job with `kubectl`:
```shell
kubectl describe jobs/pi
```
The output is similar to this:
```
Name:           pi
Namespace:      default
@ -91,6 +93,7 @@ To list all the Pods that belong to a Job in a machine readable form, you can us
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```
The output is similar to this:
```
pi-5rwd7
```
@ -398,10 +401,11 @@ Therefore, you delete Job `old` but _leave its pods
running_, using `kubectl delete jobs/old --cascade=false`.
Before deleting it, you make a note of what selector it uses:

```
```shell
kubectl get job old -o yaml
```
```
The output is similar to this:
```yaml
kind: Job
metadata:
  name: old
@ -420,7 +424,7 @@ they are controlled by Job `new` as well.
You need to specify `manualSelector: true` in the new Job since you are not using
the selector that the system normally generates for you automatically.

```
```yaml
kind: Job
metadata:
  name: new
@ -54,6 +54,7 @@ Run the example job by downloading the example file and then running this comman
```shell
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
```
The output is similar to this:
```
replicationcontroller/nginx created
```
@ -63,6 +64,7 @@ Check on the status of the ReplicationController using this command:
```shell
kubectl describe replicationcontrollers/nginx
```
The output is similar to this:
```
Name:        nginx
Namespace:   default
@ -101,6 +103,7 @@ To list all the pods that belong to the ReplicationController in a machine reada
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
```
The output is similar to this:
```
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```
@ -191,6 +191,35 @@ details are abstracted away. That abstraction and separation of concerns simplif
system semantics, and makes it feasible to extend the cluster's behavior without
changing existing code.

## Pod update and replacement

As mentioned in the previous section, when the Pod template for a workload
resource is changed, the controller creates new Pods based on the updated
template instead of updating or patching the existing Pods.

Kubernetes doesn't prevent you from managing Pods directly. It is possible to
update some fields of a running Pod, in place. However, Pod update operations
like
[`patch`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#patch-pod-v1-core), and
[`replace`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replace-pod-v1-core)
have some limitations:

- Most of the metadata about a Pod is immutable. For example, you cannot
  change the `namespace`, `name`, `uid`, or `creationTimestamp` fields;
  the `generation` field is unique. It only accepts updates that increment the
  field's current value.
- If the `metadata.deletionTimestamp` is set, no new entry can be added to the
  `metadata.finalizers` list.
- Pod updates may not change fields other than `spec.containers[*].image`,
  `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or
  `spec.tolerations`. For `spec.tolerations`, you can only add new entries.
- When updating the `spec.activeDeadlineSeconds` field, two types of updates
  are allowed:

  1. setting the unassigned field to a positive number;
  1. updating the field from a positive number to a smaller, non-negative
     number.
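To make the rules concrete, here is a sketch of one allowed and one rejected in-place update; the Pod name `mypod`, the container index, and the image are placeholders:

```shell
# Allowed: changing a container image on a running Pod in place.
kubectl patch pod mypod --type='json' \
  -p='[{"op": "replace", "path": "/spec/containers/0/image", "value": "nginx:1.19"}]'

# Rejected: metadata.name is immutable, so the API server refuses this patch.
kubectl patch pod mypod --type='merge' -p='{"metadata":{"name":"renamed"}}'
```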

## Resource sharing and communication

Pods enable data sharing and communication among their constituent
@ -49,9 +49,9 @@ as documented in [Resources](#resources).
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
`startupProbe` because they must run to completion before the Pod can be ready.

If you specify multiple init containers for a Pod, Kubelet runs each init
If you specify multiple init containers for a Pod, kubelet runs each init
container sequentially. Each init container must succeed before the next can run.
When all of the init containers have run to completion, Kubelet initializes
When all of the init containers have run to completion, kubelet initializes
the application containers for the Pod and runs them as usual.

## Using init containers
@ -257,7 +257,7 @@ if the Pod `restartPolicy` is set to Always, the init containers use

A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initialized` set to true.
is in the `Pending` state but should have a condition `Initialized` set to false.

If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
must execute again.
@ -66,7 +66,7 @@ Instead of manually applying labels, you can also reuse the [well-known labels](

The API field `pod.spec.topologySpreadConstraints` is defined as below:

```
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -95,14 +95,16 @@ deadlines.

### Open a placeholder PR

1. Open a pull request against the
1. Open a **draft** pull request against the
   `dev-{{< skew nextMinorVersion >}}` branch in the `kubernetes/website` repository, with a small
   commit that you will amend later.
   commit that you will amend later. To create a draft pull request, use the
   Create Pull Request drop-down and select **Create Draft Pull Request**,
   then click **Draft Pull Request**.
2. Edit the pull request description to include links to [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
   PR(s) and [kubernetes/enhancements](https://github.com/kubernetes/enhancements) issue(s).
3. Use the Prow command `/milestone {{< skew nextMinorVersion >}}` to
   assign the PR to the relevant milestone. This alerts the docs person managing
   this release that the feature docs are coming.
3. Leave a comment on the related [kubernetes/enhancements](https://github.com/kubernetes/enhancements)
   issue with a link to the PR to notify the docs person managing this release that
   the feature docs are coming and should be tracked for the release.

If your feature does not need
any documentation changes, make sure the sig-release team knows this, by
@ -112,7 +114,9 @@ milestone.

### PR ready for review

When ready, populate your placeholder PR with feature documentation.
When ready, populate your placeholder PR with feature documentation and change
the state of the PR from draft to **ready for review**. To mark a pull request
as ready for review, navigate to the merge box and click **Ready for review**.

Do your best to describe your feature and how to use it. If you need help structuring your documentation, ask in the `#sig-docs` slack channel.
|
|
@ -120,6 +124,13 @@ When you complete your content, the documentation person assigned to your featur
|
|||
To ensure technical accuracy, the content may also require a technical review from corresponding SIG(s).
|
||||
Use their suggestions to get the content to a release ready state.
|
||||
|
||||
If your feature is an Alpha or Beta feature and is behind a feature gate,
|
||||
make sure you add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
|
||||
table as part of your pull request. With new feature gates, a description of
|
||||
the feature gate is also required. If your feature is GA'ed or deprecated,
|
||||
make sure to move it from that table to [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)
|
||||
table with Alpha and Beta history intact.
|
||||
|
||||
If your feature needs documentation and the first draft
|
||||
content is not received, the feature may be removed from the milestone.
|
||||
|
If your PR has not yet been merged into the `dev-{{< skew nextMinorVersion >}}` branch by the release deadline, work with the
docs person managing the release to get it in by the deadline. If your feature needs
documentation and the docs are not ready, the feature may be removed from the
milestone.

If your feature is an Alpha feature and is behind a feature gate, make sure you
add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
as part of your pull request. If your feature is moving out of Alpha, make sure to
remove it from that table.

milestone.
@ -143,7 +143,7 @@ Do | Don't
:--| :-----
Set the value of the `replicas` field in the configuration file. | Set the value of the "replicas" field in the configuration file.
The value of the `exec` field is an ExecAction object. | The value of the "exec" field is an ExecAction object.
Run the process as a Daemonset in the `kube-system` namespace. | Run the process as a Daemonset in the kube-system namespace.
Run the process as a DaemonSet in the `kube-system` namespace. | Run the process as a DaemonSet in the kube-system namespace.
{{< /table >}}

### Use code style for Kubernetes command tool and component names
@ -18,7 +18,7 @@ This section of the Kubernetes documentation contains references.

## API Reference

* [Kubernetes API Reference {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
* [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes.

## API Client Libraries
@ -54,4 +54,3 @@ An archive of the design docs for Kubernetes functionality. Good starting points
[Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and
[Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).

|
|||
|
|
@ -514,7 +514,7 @@ subjects:
  namespace: kube-system
```

For all service accounts in the "qa" namespace:
For all service accounts in the "qa" group in any namespace:

```yaml
subjects:
@ -522,6 +522,15 @@ subjects:
  name: system:serviceaccounts:qa
  apiGroup: rbac.authorization.k8s.io
```
For all service accounts in the "dev" group in the "development" namespace:

```yaml
subjects:
- kind: Group
  name: system:serviceaccounts:dev
  apiGroup: rbac.authorization.k8s.io
  namespace: development
```

For all service accounts in any namespace:
|
|
@ -73,13 +73,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of `system:anonymous`, and a group name of `system:unauthenticated`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>

<tr>
<td colspan="2">--application-metrics-count-limit int     Default: 100</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max number of application metrics to store (per container) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>

<tr>
<td colspan="2">--authentication-token-webhook</td>
</tr>
@ -122,13 +115,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the file containing Azure container registry configuration information.</td>
</tr>

<tr>
<td colspan="2">--boot-id-file string     Default: `/proc/sys/kernel/random/boot_id`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of files to check for `boot-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>

<tr>
<td colspan="2">--bootstrap-kubeconfig string</td>
</tr>
@ -234,13 +220,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">The Kubelet will load its initial configuration from this file. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Omit this flag to use the built-in default configuration values. Command-line flags override configuration from this file.</td>
</tr>

<tr>
<td colspan="2">--container-hints string     Default: `/etc/cadvisor/container_hints.json`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Location of the container hints file. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>

<tr>
<td colspan="2">--container-log-max-files int32     Default: 5</td>
</tr>
@ -269,13 +248,7 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] The endpoint of remote runtime service. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Examples: `unix:///var/run/dockershim.sock`, `npipe:////./pipe/dockershim`.</td>
</tr>

<tr>
<td colspan="2">--containerd string     Default: `/run/containerd/containerd.sock`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The `containerd` endpoint. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>

<tr>
<td colspan="2">--contention-profiling</td>
</tr>
@ -311,13 +284,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><Warning: Alpha feature> CPU Manager reconciliation period. Examples: `10s`, or `1m`. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>

<tr>
<td colspan="2">--docker string     Default: `unix:///var/run/docker.sock`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The `docker` endpoint. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>

<tr>
<td colspan="2">--docker-endpoint string     Default: `unix:///var/run/docker.sock`</td>
</tr>
@ -325,55 +291,6 @@ kubelet [flags]
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use this for the `docker` endpoint to communicate with. This docker-specific flag only works when container-runtime is set to `docker`.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-env-metadata-whitelist string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">a comma-separated list of environment variable keys that needs to be collected for docker containers (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-only</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Only report docker containers in addition to root stats (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-root string Default: `/var/lib/docker`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: docker root is read from docker info (this is a fallback).</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-tls</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">use TLS to connect to docker (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-tls-ca string Default: `ca.pem`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">path to trusted CA. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-tls-cert string Default: `cert.pem`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">path to client certificate. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--docker-tls-key string Default: `key.pem`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to private key. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--dynamic-config-dir string</td>
|
||||
</tr>
|
||||
|
|
@ -402,13 +319,6 @@ kubelet [flags]
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--enable-load-reader</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Whether to enable CPU load reader (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--enable-server Default: `true`</td>
|
||||
</tr>
|
||||
|
|
@ -438,21 +348,7 @@ kubelet [flags]
|
|||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--event-storage-age-limit string Default: `default=0`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is a duration. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--event-storage-event-limit string Default: `default=0`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is an integer. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--eviction-hard mapStringString Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%`</td>
|
||||
<td colspan="2">--eviction-hard mapStringString Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
|
|
@ -528,6 +424,13 @@ kubelet [flags]
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. This flag will be removed in 1.23. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--experimental-log-sanitization bool</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--experimental-mounter-path string Default: `mount`</td>
|
||||
</tr>
|
||||
|
|
@ -548,8 +451,9 @@ kubelet [flags]
|
|||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `key=value` pairs that describe feature gates for alpha/experimental features. Options are:<br/>
|
||||
APIListChunking=true|false (BETA - default=true)<br/>
|
||||
APIPriorityAndFairness=true|false (ALPHA - default=false)<br/>
|
||||
APIPriorityAndFairness=true|false (BETA - default=true)<br/>
|
||||
APIResponseCompression=true|false (BETA - default=true)<br/>
|
||||
APIServerIdentity=true|false (ALPHA - default=false)<br/>
|
||||
AllAlpha=true|false (ALPHA - default=false)<br/>
|
||||
AllBeta=true|false (BETA - default=false)<br/>
|
||||
AllowInsecureBackendProxy=true|false (BETA - default=true)<br/>
|
||||
|
|
@ -573,31 +477,40 @@ CSIMigrationOpenStack=true|false (BETA - default=false)<br/>
|
|||
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)<br/>
|
||||
CSIMigrationvSphere=true|false (BETA - default=false)<br/>
|
||||
CSIMigrationvSphereComplete=true|false (BETA - default=false)<br/>
|
||||
CSIServiceAccountToken=true|false (ALPHA - default=false)<br/>
|
||||
CSIStorageCapacity=true|false (ALPHA - default=false)<br/>
|
||||
CSIVolumeFSGroupPolicy=true|false (ALPHA - default=false)<br/>
|
||||
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)<br/>
|
||||
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)<br/>
|
||||
ConfigurableFSGroupPolicy=true|false (BETA - default=true)<br/>
|
||||
CronJobControllerV2=true|false (ALPHA - default=false)<br/>
|
||||
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
|
||||
DefaultPodTopologySpread=true|false (ALPHA - default=false)<br/>
|
||||
DefaultPodTopologySpread=true|false (BETA - default=true)<br/>
|
||||
DevicePlugins=true|false (BETA - default=true)<br/>
|
||||
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)<br/>
|
||||
DownwardAPIHugePages=true|false (ALPHA - default=false)<br/>
|
||||
DynamicKubeletConfig=true|false (BETA - default=true)<br/>
|
||||
EfficientWatchResumption=true|false (ALPHA - default=false)<br/>
|
||||
EndpointSlice=true|false (BETA - default=true)<br/>
|
||||
EndpointSliceNodeName=true|false (ALPHA - default=false)<br/>
|
||||
EndpointSliceProxying=true|false (BETA - default=true)<br/>
|
||||
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)<br/>
|
||||
EphemeralContainers=true|false (ALPHA - default=false)<br/>
|
||||
ExpandCSIVolumes=true|false (BETA - default=true)<br/>
|
||||
ExpandInUsePersistentVolumes=true|false (BETA - default=true)<br/>
|
||||
ExpandPersistentVolumes=true|false (BETA - default=true)<br/>
|
||||
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>
|
||||
GenericEphemeralVolume=true|false (ALPHA - default=false)<br/>
|
||||
GracefulNodeShutdown=true|false (ALPHA - default=false)<br/>
|
||||
HPAContainerMetrics=true|false (ALPHA - default=false)<br/>
|
||||
HPAScaleToZero=true|false (ALPHA - default=false)<br/>
|
||||
HugePageStorageMediumSize=true|false (BETA - default=true)<br/>
|
||||
HyperVContainer=true|false (ALPHA - default=false)<br/>
|
||||
IPv6DualStack=true|false (ALPHA - default=false)<br/>
|
||||
ImmutableEphemeralVolumes=true|false (BETA - default=true)<br/>
|
||||
KubeletCredentialProviders=true|false (ALPHA - default=false)<br/>
|
||||
KubeletPodResources=true|false (BETA - default=true)<br/>
|
||||
LegacyNodeRoleBehavior=true|false (BETA - default=true)<br/>
|
||||
LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>
|
||||
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
|
||||
MixedProtocolLBService=true|false (ALPHA - default=false)<br/>
|
||||
NodeDisruptionExclusion=true|false (BETA - default=true)<br/>
|
||||
NonPreemptingPriority=true|false (BETA - default=true)<br/>
|
||||
PodDisruptionBudget=true|false (BETA - default=true)<br/>
|
||||
|
|
@ -605,31 +518,26 @@ PodOverhead=true|false (BETA - default=true)<br/>
|
|||
ProcMountType=true|false (ALPHA - default=false)<br/>
|
||||
QOSReserved=true|false (ALPHA - default=false)<br/>
|
||||
RemainingItemCount=true|false (BETA - default=true)<br/>
|
||||
RemoveSelfLink=true|false (ALPHA - default=false)<br/>
|
||||
RemoveSelfLink=true|false (BETA - default=true)<br/>
|
||||
RootCAConfigMap=true|false (BETA - default=true)<br/>
|
||||
RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
|
||||
RunAsGroup=true|false (BETA - default=true)<br/>
|
||||
RuntimeClass=true|false (BETA - default=true)<br/>
|
||||
SCTPSupport=true|false (BETA - default=true)<br/>
|
||||
SelectorIndex=true|false (BETA - default=true)<br/>
|
||||
ServerSideApply=true|false (BETA - default=true)<br/>
|
||||
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)<br/>
|
||||
ServiceAppProtocol=true|false (BETA - default=true)<br/>
|
||||
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)<br/>
|
||||
ServiceLBNodePortControl=true|false (ALPHA - default=false)<br/>
|
||||
ServiceNodeExclusion=true|false (BETA - default=true)<br/>
|
||||
ServiceTopology=true|false (ALPHA - default=false)<br/>
|
||||
SetHostnameAsFQDN=true|false (ALPHA - default=false)<br/>
|
||||
SetHostnameAsFQDN=true|false (BETA - default=true)<br/>
|
||||
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)<br/>
|
||||
StorageVersionAPI=true|false (ALPHA - default=false)<br/>
|
||||
StorageVersionHash=true|false (BETA - default=true)<br/>
|
||||
SupportNodePidsLimit=true|false (BETA - default=true)<br/>
|
||||
SupportPodPidsLimit=true|false (BETA - default=true)<br/>
|
||||
Sysctls=true|false (BETA - default=true)<br/>
|
||||
TTLAfterFinished=true|false (ALPHA - default=false)<br/>
|
||||
TokenRequest=true|false (BETA - default=true)<br/>
|
||||
TokenRequestProjection=true|false (BETA - default=true)<br/>
|
||||
TopologyManager=true|false (BETA - default=true)<br/>
|
||||
ValidateProxyRedirects=true|false (BETA - default=true)<br/>
|
||||
VolumeSnapshotDataSource=true|false (BETA - default=true)<br/>
|
||||
WarningHeaders=true|false (BETA - default=true)<br/>
|
||||
WinDSR=true|false (ALPHA - default=false)<br/>
|
||||
WinOverlay=true|false (ALPHA - default=false)<br/>
|
||||
WinOverlay=true|false (BETA - default=true)<br/>
|
||||
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
||||
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
|
@ -641,13 +549,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--global-housekeeping-interval duration Default: `1m0s`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Interval between global housekeepings. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--hairpin-mode string Default: `promiscuous-bridge`</td>
|
||||
</tr>
|
||||
|
|
@ -697,6 +598,20 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--image-credential-provider-bin-dir string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the directory where credential provider plugin binaries are located.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--image-credential-provider-config string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the credential provider plugin config file.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--image-gc-high-threshold int32 Default: 85</td>
|
||||
</tr>
|
||||
|
|
@ -757,7 +672,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td colspan="2">--kube-api-burst int32 Default: 10</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"> Burst to use while talking with kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Burst to use while talking with kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
@ -778,7 +693,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td colspan="2">--kube-reserved mapStringString Default: <None></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for kubernetes system components. Currently `cpu`, `memory` and local `ephemeral-storage` for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'`) pairs that describe resources reserved for kubernetes system components. Currently `cpu`, `memory` and local `ephemeral-storage` for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
@ -816,13 +731,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">When logging hits line `<file>:<N>`, emit a stack trace.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--log-cadvisor-usage</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Whether to log the usage of the cAdvisor container (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--log-dir string</td>
|
||||
</tr>
|
||||
|
|
@ -855,7 +763,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td colspan="2">--logging-format string Default: `text`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Sets the log format. Permitted formats: `text`, `json`.\nNon-default formats don't honor these flags: `--add-dir-header`, `--alsologtostderr`, `--log-backtrace-at`, `--log_dir`, `--log-file`, `--log-file-max-size`, `--logtostderr`, `--skip_headers`, `--skip_log_headers`, `--stderrthreshold`, `--log-flush-frequency`.\nNon-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Sets the log format. Permitted formats: `text`, `json`.\nNon-default formats don't honor these flags: `--add-dir-header`, `--alsologtostderr`, `--log-backtrace-at`, `--log-dir`, `--log-file`, `--log-file-max-size`, `--logtostderr`, `--skip_headers`, `--skip_log_headers`, `--stderrthreshold`, `--log-flush-frequency`.\nNon-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
@ -865,13 +773,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">log to standard error instead of files.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--machine-id-file string Default: `/etc/machine-id,/var/lib/dbus/machine-id`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of files to check for `machine-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--make-iptables-util-chains Default: `true`</td>
|
||||
</tr>
|
||||
|
|
@ -990,6 +891,14 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Traffic to IPs outside this range will use IP masquerade. Set to `0.0.0.0/0` to never masquerade. (DEPRECATED: will be removed in a future version)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--one-output</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, only write logs to their native severity level (vs. also writing to each lower severity level).
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--oom-score-adj int32 Default: -999</td>
|
||||
</tr>
|
||||
|
|
@ -1082,7 +991,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--register-node</td>
|
||||
<td colspan="2">--register-node Default: `true`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the API server. If `--kubeconfig` is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with.</td>
|
||||
|
|
@ -1096,7 +1005,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--register-with-taints []api.Taint</td>
|
||||
<td colspan="2">--register-with-taints mapStringString</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the given list of taints (comma separated `<key>=<value>:<effect>`). No-op if `--register-node` is `false`.</td>
|
||||
|
|
@ -1202,61 +1111,12 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--stderrthreshold severity Default: 2</td>
|
||||
<td colspan="2">--stderrthreshold int Default: 2</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">logs at or above this threshold go to stderr.</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-buffer-duration duration Default: `1m0s`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-db string Default: `cadvisor`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-host string Default: `localhost:8086`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database `host:port`. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-password string Default: `root`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database password. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-secure</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-table string Default: `stats`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Table name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--storage-driver-user string Default: `root`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database username. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--streaming-connection-idle-timeout duration Default: `4h0m0s`</td>
|
||||
</tr>
|
||||
|
|
@ -1282,7 +1142,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
|
|||
<td colspan="2">--system-reserved mapStringString Default: \<none\></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for non-kubernetes components. Currently only `cpu` and `memory` are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'`) pairs that describe resources reserved for non-kubernetes components. Currently only `cpu` and `memory` are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
@ -1331,6 +1191,13 @@ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_R
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Topology Manager policy to use. Possible values: `none`, `best-effort`, `restricted`, `single-numa-node`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--topology-manager-scope string Default: `container`</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Scope to which topology hints are applied. Topology Manager collects hints from Hint Providers and applies them to the defined scope to ensure pod admission. Possible values: `container` (default), `pod`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">-v, --v Level</td>
|
||||
</tr>
|
||||
|
|
|
|||
|
|
@ -391,6 +391,7 @@ Verbosity | Description
|
|||
`--v=2` | Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
|
||||
`--v=3` | Extended information about changes.
|
||||
`--v=4` | Debug level verbosity.
|
||||
`--v=5` | Trace level verbosity.
|
||||
`--v=6` | Display requested resources.
|
||||
`--v=7` | Display HTTP request headers.
|
||||
`--v=8` | Display HTTP request contents.
|
||||
|
|
|
|||
|
|
@ -74,6 +74,7 @@ their authors, not the Kubernetes team.
|
|||
| Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) |
|
||||
| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) |
|
||||
| Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) |
|
||||
| Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) |
|
||||
| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) |
|
||||
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
|
||||
| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) |
|
||||
|
|
|
|||
|
|
@ -125,6 +125,24 @@ sudo mkdir -p /etc/containerd
|
|||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
```
|
||||
|
||||
```shell
|
||||
# Restart containerd
|
||||
sudo systemctl restart containerd
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="Ubuntu 18.04/20.04" %}}
|
||||
|
||||
```shell
|
||||
# (Install containerd)
|
||||
sudo apt-get update && sudo apt-get install -y containerd
|
||||
```
|
||||
|
||||
```shell
|
||||
# Configure containerd
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
```
|
||||
|
||||
```shell
|
||||
# Restart containerd
|
||||
sudo systemctl restart containerd
|
||||
|
|
@ -218,6 +236,13 @@ For more information, see the [CRI-O compatibility matrix](https://github.com/cr
|
|||
Install and configure prerequisites:
|
||||
|
||||
```shell
|
||||
|
||||
# Create the .conf file to load the modules at bootup
|
||||
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF
|
||||
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
|
|
|
|||
|
|
@ -26,9 +26,9 @@ For information how to create a cluster with kubeadm once you have performed thi
|
|||
- Fedora 25+
|
||||
- HypriotOS v1.0.1+
|
||||
- Flatcar Container Linux (tested with 2512.3.0)
|
||||
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
|
||||
* 2 CPUs or more
|
||||
* Full network connectivity between all machines in the cluster (public or private network is fine)
|
||||
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
|
||||
* 2 CPUs or more.
|
||||
* Full network connectivity between all machines in the cluster (public or private network is fine).
|
||||
* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
|
||||
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
|
||||
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
|
||||
|
|
@ -59,6 +59,10 @@ Make sure that the `br_netfilter` module is loaded. This can be done by running
|
|||
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
|
||||
|
||||
```bash
|
||||
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
|
||||
br_netfilter
|
||||
EOF
|
||||
|
||||
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
|
|
@ -76,7 +80,7 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext
|
|||
|----------|-----------|------------|-------------------------|---------------------------|
|
||||
| TCP | Inbound | 6443* | Kubernetes API server | All |
|
||||
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
|
||||
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 10251 | kube-scheduler | Self |
|
||||
| TCP | Inbound | 10252 | kube-controller-manager | Self |
|
||||
|
||||
|
|
@ -84,7 +88,7 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext
|
|||
|
||||
| Protocol | Direction | Port Range | Purpose | Used By |
|
||||
|----------|-----------|-------------|-----------------------|-------------------------|
|
||||
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
|
||||
|
||||
† Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
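
You can check that a required port is reachable with any port-checking tool; as a sketch, using netcat against the API server port from the control-plane node (the address and port here are illustrative):

```shell
# Probe the default API server port; a refused connection means nothing is
# listening or the port is blocked
nc -v 127.0.0.1 6443
```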
|
||||
|
|
@ -160,7 +164,7 @@ need to ensure they match the version of the Kubernetes control plane you want
|
|||
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
|
||||
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
|
||||
kubelet and the control plane is supported, but the kubelet version may never exceed the API
|
||||
server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
|
||||
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
|
||||
but not vice versa.
|
||||
|
||||
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
|
||||
|
|
@ -299,7 +303,7 @@ Please mind, that you **only** have to do that if the cgroup driver of your CRI
|
|||
is not `cgroupfs`, because that is the default value in the kubelet already.
|
||||
|
||||
{{< note >}}
|
||||
Since `--cgroup-driver` flag has been deprecated by kubelet, if you have that in `/var/lib/kubelet/kubeadm-flags.env`
|
||||
Since `--cgroup-driver` flag has been deprecated by the kubelet, if you have that in `/var/lib/kubelet/kubeadm-flags.env`
|
||||
or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it and use the KubeletConfiguration instead
|
||||
(stored in `/var/lib/kubelet/config.yaml` by default).
|
||||
{{< /note >}}
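
For illustration, a minimal sketch of the corresponding KubeletConfiguration entry (the `systemd` value is an example; use the driver your container runtime is configured with):

```yaml
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```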
|
||||
|
|
|
|||
|
|
@ -1168,23 +1168,7 @@ filename | sha512 hash
|
|||
|
||||
### Other (Cleanup or Flake)
|
||||
|
||||
- **Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.**:
|
||||
|
||||
<!--
|
||||
This section can be blank if this pull request does not require a release note.
|
||||
|
||||
When adding links which point to resources within git repositories, like
|
||||
KEPs or supporting documentation, please reference a specific commit and avoid
|
||||
linking directly to the master branch. This ensures that links reference a
|
||||
specific point in time, rather than a document that may change over time.
|
||||
|
||||
See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files
|
||||
|
||||
Please use the following format for linking documentation:
|
||||
- [KEP]: <link>
|
||||
- [Usage]: <link>
|
||||
- [Other doc]: <link>
|
||||
--> ([#96443](https://github.com/kubernetes/kubernetes/pull/96443), [@alaypatel07](https://github.com/alaypatel07)) [SIG Apps]
|
||||
- Handle slow cronjob lister in cronjob controller v2 and improve memory footprint. ([#96443](https://github.com/kubernetes/kubernetes/pull/96443), [@alaypatel07](https://github.com/alaypatel07)) [SIG Apps]
|
||||
- --redirect-container-streaming is no longer functional. The flag will be removed in v1.22 ([#95935](https://github.com/kubernetes/kubernetes/pull/95935), [@tallclair](https://github.com/tallclair)) [SIG Node]
|
||||
- A new metric `requestAbortsTotal` has been introduced that counts aborted requests for each `group`, `version`, `verb`, `resource`, `subresource` and `scope`. ([#95002](https://github.com/kubernetes/kubernetes/pull/95002), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery, Cloud Provider, Instrumentation and Scheduling]
|
||||
- API priority and fairness metrics use snake_case in label names ([#96236](https://github.com/kubernetes/kubernetes/pull/96236), [@adtac](https://github.com/adtac)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Testing]
|
||||
|
|
@ -1350,23 +1334,7 @@ filename | sha512 hash
|
|||
|
||||
### Deprecation
|
||||
|
||||
- **Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.**:
|
||||
|
||||
<!--
|
||||
This section can be blank if this pull request does not require a release note.
|
||||
|
||||
When adding links which point to resources within git repositories, like
|
||||
KEPs or supporting documentation, please reference a specific commit and avoid
|
||||
linking directly to the master branch. This ensures that links reference a
|
||||
specific point in time, rather than a document that may change over time.
|
||||
|
||||
See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files
|
||||
|
||||
Please use the following format for linking documentation:
|
||||
- [KEP]: <link>
|
||||
- [Usage]: <link>
|
||||
- [Other doc]: <link>
|
||||
--> ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42)) [SIG API Machinery, Node and Testing]
|
||||
- ACTION REQUIRED: The kube-apiserver ability to serve on an insecure port, deprecated since v1.10, has been removed. The insecure address flags `--address` and `--insecure-bind-address` have no effect in kube-apiserver and will be removed in v1.24. The insecure port flags `--port` and `--insecure-port` may only be set to 0 and will be removed in v1.24. ([#95856](https://github.com/kubernetes/kubernetes/pull/95856), [@knight42](https://github.com/knight42)) [SIG API Machinery, Node and Testing]
|
||||
|
||||
### API Change
|
||||
|
||||
|
|
@ -1510,23 +1478,7 @@ filename | sha512 hash
|
|||
|
||||
### Bug or Regression
|
||||
|
||||
- **Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.**:
|
||||
|
||||
<!--
|
||||
This section can be blank if this pull request does not require a release note.
|
||||
|
||||
When adding links which point to resources within git repositories, like
|
||||
KEPs or supporting documentation, please reference a specific commit and avoid
|
||||
linking directly to the master branch. This ensures that links reference a
|
||||
specific point in time, rather than a document that may change over time.
|
||||
|
||||
See here for guidance on getting permanent links to files: https://help.github.com/en/articles/getting-permanent-links-to-files
|
||||
|
||||
Please use the following format for linking documentation:
|
||||
- [KEP]: <link>
|
||||
- [Usage]: <link>
|
||||
- [Other doc]: <link>
|
||||
--> ([#95725](https://github.com/kubernetes/kubernetes/pull/95725), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery and Cloud Provider]
|
||||
- Exposes and sets a default timeout for the SubjectAccessReview client for DelegatingAuthorizationOptions. ([#95725](https://github.com/kubernetes/kubernetes/pull/95725), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery and Cloud Provider]
|
||||
- Alter wording to describe pods using a pvc ([#95635](https://github.com/kubernetes/kubernetes/pull/95635), [@RaunakShah](https://github.com/RaunakShah)) [SIG CLI]
|
||||
- If we set SelectPolicy MinPolicySelect on scaleUp behavior or scaleDown behavior,Horizontal Pod Autoscaler doesn`t automatically scale the number of pods correctly ([#95647](https://github.com/kubernetes/kubernetes/pull/95647), [@JoshuaAndrew](https://github.com/JoshuaAndrew)) [SIG Apps and Autoscaling]
|
||||
- Ignore apparmor for non-linux operating systems ([#93220](https://github.com/kubernetes/kubernetes/pull/93220), [@wawa0210](https://github.com/wawa0210)) [SIG Node and Windows]
|
||||
|
|
|
|||
|
|
@ -158,10 +158,16 @@ for database debugging.
|
|||
Any of the above commands works. The output is similar to this:
|
||||
|
||||
```
|
||||
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379
|
||||
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379
|
||||
Forwarding from 127.0.0.1:7000 -> 6379
|
||||
Forwarding from [::1]:7000 -> 6379
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
||||
`kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal.
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
2. Start the Redis command line interface:
|
||||
|
||||
```shell
|
||||
|
|
@ -180,7 +186,23 @@ for database debugging.
|
|||
PONG
|
||||
```
|
||||
|
||||
### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port}
|
||||
|
||||
If you don't need a specific local port, you can let `kubectl` choose and allocate
|
||||
the local port for you, relieving you of the need to manage local port conflicts, with
|
||||
the slightly simpler syntax:
|
||||
|
||||
```shell
|
||||
kubectl port-forward deployment/redis-master :6379
|
||||
```
|
||||
|
||||
The `kubectl` tool finds a local port number that is not in use (avoiding low port numbers,
|
||||
because these might be used by other applications). The output is similar to:
|
||||
|
||||
```
|
||||
Forwarding from 127.0.0.1:62162 -> 6379
|
||||
Forwarding from [::1]:62162 -> 6379
|
||||
```
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
|
|
@ -203,4 +225,3 @@ The support for UDP protocol is tracked in
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward).
|
||||
|
||||
|
|
|
|||
|
|
@ -30,7 +30,7 @@ The Kubernetes project provides skeleton cloud-controller-manager code with Go i
|
|||
To build an out-of-tree cloud-controller-manager for your cloud:
|
||||
|
||||
1. Create a go package with an implementation that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go).
|
||||
2. Use [`main.go` in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go) from Kubernetes core as a template for your `main.go`. As mentioned above, the only difference should be the cloud package that will be imported.
|
||||
2. Use [`main.go` in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go) from Kubernetes core as a template for your `main.go`. As mentioned above, the only difference should be the cloud package that will be imported.
|
||||
3. Import your cloud package in `main.go`, ensure your package has an `init` block to run [`cloudprovider.RegisterCloudProvider`](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go).
|
||||
|
||||
Many cloud providers publish their controller manager code as open source. If you are creating
|
||||
|
|
|
|||
|
|
@ -231,6 +231,14 @@ without compromising the minimum required capacity for running your workloads.
|
|||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
### Call "kubeadm upgrade"
|
||||
|
||||
- For worker nodes this upgrades the local kubelet configuration:
|
||||
|
||||
```shell
|
||||
sudo kubeadm upgrade node
|
||||
```
|
||||
|
||||
### Drain the node
|
||||
|
||||
- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
|
||||
|
|
@ -240,14 +248,6 @@ without compromising the minimum required capacity for running your workloads.
|
|||
kubectl drain <node-to-drain> --ignore-daemonsets
|
||||
```
|
||||
|
||||
### Call "kubeadm upgrade"
|
||||
|
||||
- For worker nodes this upgrades the local kubelet configuration:
|
||||
|
||||
```shell
|
||||
sudo kubeadm upgrade node
|
||||
```
|
||||
|
||||
### Upgrade kubelet and kubectl
|
||||
|
||||
- Upgrade the kubelet and kubectl:
|
||||
|
|
|
|||
|
|
@ -12,7 +12,6 @@ The following resources are used in the demonstration: [ResourceQuota](/docs/con
|
|||
and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
|
|
@ -41,7 +40,7 @@ the values set by the admin.
|
|||
|
||||
In this example, a PVC requesting 10Gi of storage would be rejected because it exceeds the 2Gi max.
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
|
|
@ -67,7 +66,7 @@ In this example, a 6th PVC in the namespace would be rejected because it exceeds
|
|||
a 5Gi maximum quota when combined with the 2Gi max limit above, cannot have 3 PVCs where each has 2Gi. That would be 6Gi requested
|
||||
for a namespace capped at 5Gi.
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
|
|
@ -78,8 +77,6 @@ spec:
|
|||
requests.storage: "5Gi"
|
||||
```
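
For contrast, a sketch of a claim that stays within the 2Gi per-claim ceiling above (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-within-limits
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```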
|
||||
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
## Summary
|
||||
|
|
@ -87,7 +84,3 @@ spec:
|
|||
A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage
|
||||
consumed by a namespace through claim counts and cumulative storage capacity. This allows a cluster-admin to plan their
|
||||
cluster's storage budget without risk of any one project going over their allotment.
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@ This page shows how to view, work in, and delete {{< glossary_tooltip text="name
|
|||
## {{% heading "prerequisites" %}}
|
||||
|
||||
* Have an [existing Kubernetes cluster](/docs/setup/).
|
||||
2. You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
|
||||
* You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ weight: 140
|
|||
This page shows how to attach handlers to Container lifecycle events. Kubernetes supports
|
||||
the postStart and preStop events. Kubernetes sends the postStart event immediately
|
||||
after a Container is started, and it sends the preStop event immediately before the
|
||||
Container is terminated.
|
||||
Container is terminated. A Container may specify one handler per event.
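
As a sketch, attaching one handler to each event might look like this (the image and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit"]
```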
|
||||
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -366,7 +366,7 @@ have additional fields that can be set on `httpGet`:
|
|||
* `host`: Host name to connect to, defaults to the pod IP. You probably want to
|
||||
set "Host" in httpHeaders instead.
|
||||
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
|
||||
* `path`: Path to access on the HTTP server.
|
||||
* `path`: Path to access on the HTTP server. Defaults to /.
|
||||
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
|
||||
* `port`: Name or number of the port to access on the container. Number must be
|
||||
in the range 1 to 65535.
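
Putting these fields together, a sketch of an HTTPS liveness probe (the path, port, and timings are illustrative):

```yaml
livenessProbe:
  httpGet:
    scheme: HTTPS
    path: /healthz
    port: 8443
  initialDelaySeconds: 5
  periodSeconds: 10
```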
|
||||
|
|
@ -389,24 +389,32 @@ You can override the default headers by defining `.httpHeaders` for the probe; f
|
|||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
httpHeaders:
|
||||
Accept: application/json
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: Accept
|
||||
value: application/json
|
||||
|
||||
startupProbe:
|
||||
httpHeaders:
|
||||
User-Agent: MyUserAgent
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: User-Agent
|
||||
value: MyUserAgent
|
||||
```
|
||||
|
||||
You can also remove these two headers by defining them with an empty value.
|
||||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
httpHeaders:
|
||||
Accept: ""
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: Accept
|
||||
value: ""
|
||||
|
||||
startupProbe:
|
||||
httpHeaders:
|
||||
User-Agent: ""
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: User-Agent
|
||||
value: ""
|
||||
```
|
||||
|
||||
### TCP probes
|
||||
|
|
|
|||
|
|
@ -360,7 +360,9 @@ for more information.
|
|||
|
||||
The exact versions for the mapping table below are docker CLI v1.40 and crictl v1.19.0. Please note that the list is not exhaustive. For example, it doesn't include experimental commands of the docker CLI.
|
||||
|
||||
Warn: the output format of CRICTL is similar to Docker CLI, despite some missing columns for some CLI. Make sure to check output for the specific command if your script output parsing.
|
||||
{{< note >}}
|
||||
The output format of `crictl` is similar to that of the Docker CLI, although some columns are missing for certain subcommands. Make sure to check the output format of a specific command before parsing it in a script.
|
||||
{{< /note >}}
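
For example, listing all containers is nearly identical in both tools, though the columns differ (a sketch, assuming `crictl` is already configured to talk to your runtime's socket):

```shell
# Docker CLI
docker ps -a

# crictl equivalent; compare the column layout before parsing it in scripts
crictl ps -a
```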
|
||||
|
||||
### Retrieve Debugging Information
|
||||
|
||||
|
|
|
|||
|
|
@ -49,7 +49,7 @@ case you can try several things:
|
|||
|
||||
* Add more nodes to the cluster.
|
||||
|
||||
* [Terminate unneeded pods](/docs/concepts/workloads/pods/#pod-termination)
|
||||
* [Terminate unneeded pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
|
||||
to make room for pending pods.
|
||||
|
||||
* Check that the pod is not larger than your nodes. For example, if all
|
||||
|
|
|
|||
|
|
@ -37,8 +37,22 @@ by providing the following flags to the kube-apiserver:
|
|||
1. Create an egress configuration file such as `admin/konnectivity/egress-selector-configuration.yaml` (a minimal sketch is shown after this list).
|
||||
1. Set the `--egress-selector-config-file` flag of the API Server to the path of
|
||||
your API Server egress configuration file.
|
||||
1. If you use a UDS connection, add the volume configuration to the kube-apiserver:
|
||||
```yaml
|
||||
spec:
|
||||
containers:
|
||||
volumeMounts:
|
||||
- name: konnectivity-uds
|
||||
mountPath: /etc/kubernetes/konnectivity-server
|
||||
readOnly: false
|
||||
volumes:
|
||||
- name: konnectivity-uds
|
||||
hostPath:
|
||||
path: /etc/kubernetes/konnectivity-server
|
||||
type: DirectoryOrCreate
|
||||
```
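
A minimal sketch of such an egress configuration, assuming the konnectivity server listens on the UDS path mounted above:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```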
|
||||
|
||||
Generate or obtain a certificate and kubeconfig for konnectivity-server.
|
||||
Generate or obtain a certificate and kubeconfig for konnectivity-server.
|
||||
For example, you can use the OpenSSL command line tool to issue an X.509 certificate,
|
||||
using the cluster CA certificate `/etc/kubernetes/pki/ca.crt` from a control-plane host.
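
A sketch of that flow (the key size, subject, output paths, and validity period are illustrative):

```shell
# Generate a private key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout konnectivity.key -out konnectivity.csr \
  -subj "/CN=system:konnectivity-server"

# Sign the CSR with the cluster CA (run on a control-plane host)
sudo openssl x509 -req -in konnectivity.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out konnectivity.crt -days 375
```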
|
||||
|
||||
|
|
|
|||
|
|
@ -104,17 +104,19 @@ spec:
|
|||
|
||||
- Huge page requests must equal the limits. This is the default if limits are
|
||||
specified, but requests are not.
|
||||
- Huge pages are isolated at a container scope, so each container has own limit on their cgroup sandbox as requested in a container spec.
|
||||
- Huge pages are isolated at a container scope, so each container has its own
|
||||
limit on its cgroup sandbox, as requested in the container spec.
|
||||
- EmptyDir volumes backed by huge pages may not consume more huge page memory
|
||||
than the pod request.
|
||||
- Applications that consume huge pages via `shmget()` with `SHM_HUGETLB` must
|
||||
run with a supplemental group that matches `proc/sys/vm/hugetlb_shm_group`.
|
||||
- Huge page usage in a namespace is controllable via ResourceQuota similar
|
||||
to other compute resources like `cpu` or `memory` using the `hugepages-<size>`
|
||||
token.
|
||||
to other compute resources like `cpu` or `memory` using the `hugepages-<size>`
|
||||
token (see the sketch after this list).
|
||||
- Support of multiple sizes huge pages is feature gated. It can be
|
||||
disabled with the `HugePageStorageMediumSize` [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{<
|
||||
glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{<
|
||||
glossary_tooltip text="kube-apiserver"
|
||||
term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=true`).
|
||||
disabled with the `HugePageStorageMediumSize`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
on the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and
|
||||
{{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}
|
||||
(`--feature-gates=HugePageStorageMediumSize=false`).
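
A sketch of such a quota, assuming 2 MiB pages (the name and amount are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hugepages-quota
spec:
  hard:
    hugepages-2Mi: 4Gi
```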
|
||||
|
||||
|
|
|
|||
|
|
@ -590,7 +590,7 @@ spec:
|
|||
containers:
|
||||
- name: my-nginx
|
||||
image: nginx
|
||||
command: ["start", "--host", "\$(MY_SERVICE_NAME)"]
|
||||
command: ["start", "--host", "$(MY_SERVICE_NAME)"]
|
||||
EOF
|
||||
|
||||
# Create a service.yaml file
|
||||
|
|
|
|||
|
|
@ -236,9 +236,6 @@ You can use a PDB with pods controlled by another type of controller, by an
|
|||
- only an integer value can be used with `.spec.minAvailable`, not a percentage.
|
||||
|
||||
You can use a selector which selects a subset or superset of the pods belonging to a built-in
|
||||
controller. However, when there are multiple PDBs in a namespace, you must be careful not
|
||||
to create PDBs whose selectors overlap.
|
||||
|
||||
|
||||
|
||||
|
||||
controller. The eviction API will disallow eviction of any pod covered by multiple PDBs,
|
||||
so most users will want to avoid overlapping selectors. One reasonable use of overlapping
|
||||
PDBs is when pods are being transitioned from one PDB to another.
|
||||
|
|
|
|||
|
|
@@ -9,78 +9,73 @@ weight: 10
This page shows how to create a Kubernetes Service object that exposes an
external IP address.

## {{% heading "prerequisites" %}}

* Install [kubectl](/docs/tasks/tools/install-kubectl/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
  create a Kubernetes cluster. This tutorial creates an
  [external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
  which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For instructions, see the
  documentation for your cloud provider.

## {{% heading "objectives" %}}

* Run five instances of a Hello World application.
* Create a Service object that exposes an external IP address.
* Use the Service object to access the running application.

<!-- lessoncontent -->

## Creating a service for an application running in five pods

1. Run a Hello World application in your cluster:

   {{< codenew file="service/load-balancer-example.yaml" >}}

   ```shell
   kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
   ```

   The preceding command creates a
   {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
   and an associated
   {{< glossary_tooltip term_id="replica-set" text="ReplicaSet" >}}.
   The ReplicaSet has five
   {{< glossary_tooltip text="Pods" term_id="pod" >}}
   each of which runs the Hello World application.

1. Display information about the Deployment:

   ```shell
   kubectl get deployments hello-world
   kubectl describe deployments hello-world
   ```

1. Display information about your ReplicaSet objects:

   ```shell
   kubectl get replicasets
   kubectl describe replicasets
   ```

1. Create a Service object that exposes the deployment:

   ```shell
   kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
   ```

1. Display information about the Service:

   ```shell
   kubectl get services my-service
   ```

   The output is similar to:

   ```console
   NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
   my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
   ```

{{< note >}}
@@ -96,23 +91,27 @@ The preceding command creates a

1. Display detailed information about the Service:

   ```shell
   kubectl describe services my-service
   ```

   The output is similar to:

   ```console
   Name:                     my-service
   Namespace:                default
   Labels:                   app.kubernetes.io/name=load-balancer-example
   Annotations:              <none>
   Selector:                 app.kubernetes.io/name=load-balancer-example
   Type:                     LoadBalancer
   IP:                       10.3.245.137
   LoadBalancer Ingress:     104.198.205.71
   Port:                     <unset>  8080/TCP
   NodePort:                 <unset>  32377/TCP
   Endpoints:                10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
   Session Affinity:         None
   Events:                   <none>
   ```

   Make a note of the external IP address (`LoadBalancer Ingress`) exposed by
   your service. In this example, the external IP address is 104.198.205.71.
@@ -124,21 +123,27 @@ The preceding command creates a
   addresses of the pods that are running the Hello World application. To
   verify these are pod addresses, enter this command:

   ```shell
   kubectl get pods --output=wide
   ```

   The output is similar to:

   ```console
   NAME                          ...  IP         NODE
   hello-world-2895499144-1jaz9  ...  10.0.1.6   gke-cluster-1-default-pool-e0b8d269-1afc
   hello-world-2895499144-2e5uh  ...  10.0.1.8   gke-cluster-1-default-pool-e0b8d269-1afc
   hello-world-2895499144-9m4h1  ...  10.0.0.6   gke-cluster-1-default-pool-e0b8d269-5v7a
   hello-world-2895499144-o4z13  ...  10.0.1.7   gke-cluster-1-default-pool-e0b8d269-1afc
   hello-world-2895499144-segjf  ...  10.0.2.5   gke-cluster-1-default-pool-e0b8d269-cpuc
   ```

1. Use the external IP address (`LoadBalancer Ingress`) to access the Hello
   World application:

   ```shell
   curl http://<external-ip>:<port>
   ```

   where `<external-ip>` is the external IP address (`LoadBalancer Ingress`)
   of your Service, and `<port>` is the value of `Port` in your Service
@@ -148,29 +153,26 @@ The preceding command creates a

   The response to a successful request is a hello message:

   ```shell
   Hello Kubernetes!
   ```

## {{% heading "cleanup" %}}

To delete the Service, enter this command:

```shell
kubectl delete services my-service
```

To delete the Deployment, the ReplicaSet, and the Pods that are running
the Hello World application, enter this command:

```shell
kubectl delete deployment hello-world
```

## {{% heading "whatsnext" %}}

Learn more about
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
@@ -9,7 +9,7 @@ cid: community
<div class="content">
<h3>Garantizando el funcionamiento de Kubernetes para todo el mundo y en cualquier lugar.</h3>
<p>Conecte con la comunidad Kubernetes en nuestro canal de <a href="http://slack.k8s.io/">Slack</a>, <a href="https://discuss.kubernetes.io/">foro</a>, o únase al
<a href="https://groups.google.com/g/kubernetes-dev">Kubernetes-dev Google group</a>.
Cada semana se lleva a cabo una reunión de la comunidad por videoconferencia para discutir el estado de las cosas; revise el documento
<a href="https://github.com/kubernetes/community/blob/master/events/community-meeting.md">Community Meeting</a> para obtener información sobre cómo participar.</p>
<p>También puede formar parte de la comunidad en cualquier parte del mundo a través de la
@@ -59,4 +59,4 @@ cid: community
  </div>
</div>
</main>
</section>
@@ -1,4 +1,41 @@
---
title: "Instalar herramientas"
description: Configurar las herramientas de Kubernetes en su computadora.
weight: 10
no_list: true
---

## kubectl

Usa la herramienta de línea de comandos de Kubernetes, [kubectl](/docs/user-guide/kubectl/), para desplegar y gestionar aplicaciones en Kubernetes. Usando kubectl, puedes inspeccionar recursos del clúster; crear, eliminar y actualizar componentes; explorar tu nuevo clúster y arrancar aplicaciones.

Ver [Instalar y Configurar `kubectl`](/docs/tasks/tools/install-kubectl/) para más información sobre cómo descargar e instalar `kubectl` y configurarlo para acceder a su clúster.

<a class="btn btn-primary" href="/docs/tasks/tools/install-kubectl/" role="button" aria-label="Ver la guía de instalación y configuración de kubectl">Ver la guía de instalación y configuración de kubectl</a>

También se puede leer [la documentación de referencia](/docs/reference/kubectl) de `kubectl`.
## kind

[`kind`](https://kind.sigs.k8s.io/docs/) le permite usar Kubernetes en su máquina local. Esta herramienta requiere que [Docker](https://docs.docker.com/get-docker/) esté instalado y configurado.

En la página de [inicio rápido](https://kind.sigs.k8s.io/docs/user/quick-start/) encontrará toda la información necesaria para empezar con kind; a continuación se muestra además un pequeño esbozo.

<a class="btn btn-primary" href="https://kind.sigs.k8s.io/docs/user/quick-start/" role="button" aria-label="Ver la guía de inicio rápido">Ver la guía de inicio rápido</a>
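A modo de referencia, un flujo mínimo e ilustrativo con `kind` podría ser el siguiente (los nombres son los valores por defecto de la herramienta):

```shell
# Crea un clúster local (por defecto se llama "kind")
kind create cluster

# Comprueba que kubectl apunta al nuevo clúster
kubectl cluster-info --context kind-kind

# Elimina el clúster cuando termines
kind delete cluster
```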
## minikube

De forma similar a `kind`, [`minikube`](https://minikube.sigs.k8s.io/) es una herramienta que le permite usar Kubernetes en su máquina local. `minikube` le permite ejecutar un único nodo en su computadora personal (PC de Windows, macOS o Linux) para que pueda probar Kubernetes o usarlo en su trabajo de desarrollo.

Puede seguir la guía oficial de [`minikube`](https://minikube.sigs.k8s.io/docs/start/) si su objetivo es instalar la herramienta; después, el flujo básico se parece al esbozo que sigue.

<a class="btn btn-primary" href="https://minikube.sigs.k8s.io/docs/start/" role="button" aria-label="Ver la guía de minikube">Ver la guía de minikube</a>

Una vez `minikube` ha terminado de instalarse, está listo para desplegar una [aplicación de ejemplo](/docs/tutorials/hello-minikube/).
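Como esbozo ilustrativo (los subcomandos pueden variar según la versión de `minikube`):

```shell
# Arranca un clúster local de un único nodo
minikube start

# Comprueba el estado del clúster
minikube status

# Detén el clúster cuando termines
minikube stop
```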
## kubeadm

Se puede usar la utilidad {{< glossary_tooltip term_id="kubeadm" text="kubeadm" >}} para crear y gestionar clústeres de Kubernetes.

En [instalando kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) se muestra cómo instalar kubeadm. Una vez instalado, se puede utilizar [para crear un clúster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/); véase el esbozo tras el enlace de abajo.

<a class="btn btn-primary" href="/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" role="button" aria-label="Ver la guía de instalación">Ver la guía de instalación</a>
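Como esbozo (suponiendo que `kubeadm` ya está instalado en el nodo y que se usan los valores por defecto), la creación del plano de control suele comenzar así:

```shell
# Inicializa el plano de control en el nodo actual
sudo kubeadm init

# Copia la configuración de administrador para poder usar kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```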
@@ -1,6 +1,4 @@
---
title: Instalar y Configurar kubectl
content_type: task
weight: 10
@@ -11,7 +9,7 @@ card:
---

<!-- overview -->
Usa la herramienta de línea de comandos de Kubernetes, [kubectl](/docs/reference/kubectl/kubectl/), para desplegar y gestionar aplicaciones en Kubernetes. Usando kubectl, puedes inspeccionar recursos del clúster; crear, eliminar y actualizar componentes; explorar tu nuevo clúster; y arrancar aplicaciones de ejemplo. Para ver la lista completa de operaciones de kubectl, se puede consultar [el resumen de kubectl](/docs/reference/kubectl/overview/).
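Por ejemplo, algunas operaciones habituales (comandos meramente ilustrativos; el nombre `web` y la imagen `nginx` son suposiciones):

```shell
# Inspeccionar recursos del clúster
kubectl get pods

# Crear un componente
kubectl create deployment web --image=nginx

# Eliminar un componente
kubectl delete deployment web
```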
## {{% heading "prerequisites" %}}
@@ -22,21 +20,55 @@ Debes usar una versión de kubectl que esté a menos de una versión menor de di

<!-- steps -->

## Instalar kubectl en Linux

### Instalar el binario de kubectl con curl en Linux

1. Descarga la última entrega:

   ```bash
   curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
   ```

   Para descargar una versión específica, remplaza el comando `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` con la versión específica.

   Por ejemplo, para descargar la versión {{< param "fullversion" >}} en Linux, teclea:

   ```bash
   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
   ```

2. Habilita los permisos de ejecución del binario `kubectl`:

   ```bash
   chmod +x ./kubectl
   ```

3. Mueve el binario dentro de tu PATH:

   ```bash
   sudo mv ./kubectl /usr/local/bin/kubectl
   ```

4. Comprueba que la versión que se ha instalado es la más reciente:

   ```bash
   kubectl version --client
   ```

### Instalar mediante el gestor de paquetes del sistema

{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}
{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
@@ -49,65 +81,146 @@ yum install -y kubectl
{{< /tab >}}
{{< /tabs >}}

### Instalar usando otro gestor de paquetes

{{< tabs name="other_kubectl_install" >}}
{{% tab name="Snap" %}}
Si usas Ubuntu o alguna de las otras distribuciones de Linux que soportan el gestor de paquetes [snap](https://snapcraft.io/docs/core/install), kubectl está disponible como una aplicación de [snap](https://snapcraft.io/).

```shell
sudo snap install kubectl --classic
kubectl version --client
```

{{% /tab %}}

{{% tab name="Homebrew" %}}
Si usas alguna de las otras distribuciones de Linux que soportan el gestor de paquetes [Homebrew](https://docs.brew.sh/Homebrew-on-Linux), kubectl está disponible como una aplicación de [Homebrew](https://docs.brew.sh/Homebrew-on-Linux#install).

```shell
brew install kubectl
kubectl version --client
```

{{% /tab %}}

{{< /tabs >}}

## Instalar kubectl en macOS

### Instalar el binario de kubectl usando curl en macOS

1. Descarga la última entrega:

   ```bash
   curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
   ```

   Para descargar una versión específica, remplaza el comando `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` con la versión específica.

   Por ejemplo, para descargar la versión {{< param "fullversion" >}} en macOS, teclea:

   ```bash
   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
   ```

2. Habilita los permisos de ejecución del binario `kubectl`:

   ```bash
   chmod +x ./kubectl
   ```

3. Mueve el binario dentro de tu PATH:

   ```bash
   sudo mv ./kubectl /usr/local/bin/kubectl
   ```

4. Para asegurar que la versión utilizada sea la más actual, puedes probar:

   ```bash
   kubectl version --client
   ```

### Instalar con Homebrew en macOS

Si estás usando macOS y el gestor de paquetes es [Homebrew](https://brew.sh/), puedes instalar `kubectl` con `brew`.

1. Ejecuta el comando de instalación:

   ```bash
   brew install kubectl
   ```

   o

   ```bash
   brew install kubernetes-cli
   ```

2. Para asegurar que la versión utilizada sea la más actual, puedes ejecutar:

   ```bash
   kubectl version --client
   ```

### Instalar con Macports en macOS

Si estás en macOS y utilizas el gestor de paquetes [Macports](https://macports.org/), puedes instalar `kubectl` con `port`.

1. Ejecuta los comandos de instalación:

   ```bash
   sudo port selfupdate
   sudo port install kubectl
   ```

2. Para asegurar que la versión utilizada sea la más actual, puedes ejecutar:

   ```bash
   kubectl version --client
   ```

## Instalar kubectl en Windows

### Instalar el binario de kubectl con curl en Windows

1. Descarga la última entrega {{< param "fullversion" >}} desde [este enlace](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

   O si tienes `curl` instalado, utiliza este comando:

   ```bash
   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
   ```

   Para averiguar la última versión estable (por ejemplo, para secuencias de comandos), echa un vistazo a [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt).

2. Añade el binario a tu PATH.

3. Para asegurar que la versión utilizada sea la más actual, puedes ejecutar:

   ```bash
   kubectl version --client
   ```

{{< note >}}
[Docker Desktop para Windows](https://docs.docker.com/docker-for-windows/#kubernetes) añade su propia versión de `kubectl` a PATH.
Si tienes Docker Desktop instalado, es posible que tengas que modificar tu PATH al PATH añadido por Docker Desktop o eliminar la versión de `kubectl` proporcionada por Docker Desktop.
{{< /note >}}

### Instalar con Powershell desde PSGallery

Si estás en Windows y utilizas el gestor de paquetes [Powershell Gallery](https://www.powershellgallery.com/), puedes instalar y actualizar kubectl con Powershell.

1. Ejecuta los comandos de instalación (asegurándote de especificar una `DownloadLocation`):

   ```powershell
   Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
   install-kubectl.ps1 [-DownloadLocation <path>]
   ```

   {{< note >}}Si no especificas una `DownloadLocation`, `kubectl` se instalará en el directorio temporal del usuario.{{< /note >}}
@@ -116,57 +229,66 @@ Si estás en Windows y usando el gestor de paquetes [Powershell Gallery](https:/

2. Para asegurar que la versión utilizada sea la más actual, puedes probar:

   ```powershell
   kubectl version --client
   ```

   {{< note >}}
   Actualizar la instalación se realiza mediante la re-ejecución de los dos comandos listados en el paso 1.
   {{< /note >}}

### Instalar en Windows usando Chocolatey o scoop

1. Para instalar kubectl en Windows puedes usar el gestor de paquetes [Chocolatey](https://chocolatey.org) o el instalador de línea de comandos [scoop](https://scoop.sh).

   {{< tabs name="kubectl_win_install" >}}
   {{% tab name="choco" %}}
   Usando [Chocolatey](https://chocolatey.org):

   ```powershell
   choco install kubernetes-cli
   ```
   {{% /tab %}}
   {{% tab name="scoop" %}}
   Usando [scoop](https://scoop.sh):

   ```powershell
   scoop install kubectl
   ```
   {{% /tab %}}
   {{< /tabs >}}

2. Para asegurar que la versión utilizada sea la más actual, puedes probar:

   ```powershell
   kubectl version --client
   ```

3. Navega a tu directorio de inicio:

   ```powershell
   # Si estás usando cmd.exe, ejecuta: cd %USERPROFILE%
   cd ~
   ```

4. Crea el directorio `.kube`:

   ```powershell
   mkdir .kube
   ```

5. Cambia al directorio `.kube` que acabas de crear:

   ```powershell
   cd .kube
   ```

6. Configura kubectl para usar un clúster remoto de Kubernetes:

   ```powershell
   New-Item config -type file
   ```

   {{< note >}}Edita el fichero de configuración con un editor de texto de tu elección, como Notepad.{{< /note >}}

## Descarga como parte del Google Cloud SDK
@@ -175,107 +297,32 @@ Puedes instalar kubectl como parte del Google Cloud SDK.

1. Instala el [Google Cloud SDK](https://cloud.google.com/sdk/).

2. Ejecuta el comando de instalación de `kubectl`:

   ```shell
   gcloud components install kubectl
   ```

3. Para asegurar que la versión utilizada sea la más actual, puedes probar:

   ```shell
   kubectl version --client
   ```

## Comprobar la configuración kubectl

Para que kubectl pueda encontrar y acceder a un clúster de Kubernetes, necesita un [fichero kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/), que se crea de forma automática cuando creas un clúster usando [kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) o despliegas de forma satisfactoria un clúster de Minikube. Por defecto, la configuración de kubectl se encuentra en `~/.kube/config`.

Comprueba que kubectl está correctamente configurado obteniendo el estado del clúster:

```shell
kubectl cluster-info
```

Si ves una respuesta en forma de URL, kubectl está correctamente configurado para acceder a tu clúster.

Si ves un mensaje similar al siguiente, kubectl no está correctamente configurado o no es capaz de conectar con un clúster de Kubernetes.

```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
@@ -287,7 +334,9 @@ Si kubectl cluster-info devuelve la respuesta en forma de url, pero no puedes ac
kubectl cluster-info dump
```

## Configuraciones opcionales de kubectl

### Habilitar el auto-completado en el intérprete de comandos

kubectl provee de soporte para auto-completado para Bash y Zsh, ¡que te puede ahorrar mucho uso del teclado!
@@ -323,16 +372,23 @@ Debes asegurarte que la secuencia de comandos de completado de kubectl corre en

- Corre la secuencia de comandos de completado en tu `~/.bashrc`:

  ```bash
  echo 'source <(kubectl completion bash)' >>~/.bashrc
  ```

- Añade la secuencia de comandos de completado al directorio `/etc/bash_completion.d`:

  ```bash
  kubectl completion bash >/etc/bash_completion.d/kubectl
  ```

Si tienes un alias para `kubectl`, puedes extender los comandos de shell para funcionar con ese alias:

```bash
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
```

{{< note >}}
bash-completion corre todas las secuencias de comandos de completado en `/etc/bash_completion.d`.
{{< /note >}}
@@ -344,21 +400,43 @@ Ambas estrategias son equivalentes. Tras recargar tu intérprete de comandos, el

{{% tab name="Bash en macOS" %}}

### Introducción

La secuencia de comandos de completado de kubectl para Bash puede generarse con el comando `kubectl completion bash`. Corriendo la secuencia de comandos de completado en tu intérprete de comandos habilita el auto-completado de kubectl.

Sin embargo, la secuencia de comandos de completado depende de [bash-completion](https://github.com/scop/bash-completion), lo que significa que tienes que instalar primero este programa (puedes probar si ya tienes bash-completion instalado ejecutando `type _init_completion`).

{{< warning >}}
macOS incluye Bash 3.2 por defecto. La secuencia de comandos de completado de kubectl requiere Bash 4.1+ y no funciona con Bash 3.2. Una posible alternativa es instalar una nueva versión de Bash en macOS (ver instrucciones [aquí](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). Las instrucciones de abajo sólo funcionan si estás usando Bash 4.1+.
{{< /warning >}}

### Actualizar bash

Las instrucciones asumen que usas Bash 4.1+. Puedes comprobar tu versión de bash con:

```bash
echo $BASH_VERSION
```

Si no es 4.1+, puedes actualizar bash con Homebrew:

```bash
brew install bash
```

Recarga tu intérprete de comandos y verifica que estás usando la versión deseada:

```bash
echo $BASH_VERSION $SHELL
```

Usualmente, Homebrew lo instala en `/usr/local/bin/bash`.

### Instalar bash-completion

Puedes instalar bash-completion con Homebrew:

```bash
brew install bash-completion@2
```
@@ -368,9 +446,9 @@ El `@2` simboliza bash-completion 2, que es requerido por la secuencia de comand

Como se indicaba en la salida de `brew install` (sección "Caveats"), añade las siguientes líneas a tu `~/.bashrc` o `~/.bash_profile`:

```bash
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```

Recarga tu intérprete de comandos y verifica que bash-completion está correctamente instalado tecleando `type _init_completion`.
|
|||
|
||||
Para hacerlo en todas tus sesiones de tu intérprete de comandos, añade lo siguiente a tu `~/.zshrc`:
|
||||
|
||||
```shell
|
||||
```zsh
|
||||
source <(kubectl completion zsh)
|
||||
```
|
||||
|
||||
Si tienes alias para kubectl, puedes extender el completado de intérprete de comandos para funcionar con ese alias.
|
||||
|
||||
```zsh
|
||||
echo 'alias k=kubectl' >>~/.zshrc
|
||||
echo 'complete -F __start_kubectl k' >>~/.zshrc
|
||||
```
|
||||
|
||||
Tras recargar tu intérprete de comandos, el auto-completado de kubectl debería funcionar.
|
||||
|
||||
Si obtienes un error como `complete:13: command not found: compdef`, entonces añade lo siguiente al principio de tu `~/.zshrc`:
|
||||
|
||||
```shell
|
||||
```zsh
|
||||
autoload -Uz compinit
|
||||
compinit
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
[Aprender cómo lanzar y exponer tu aplicación.](/docs/tasks/access-application-cluster/service-access-application-cluster/)
|
||||
|
||||
|
||||
* [Instalar Minikube](https://minikube.sigs.k8s.io/docs/start/)
|
||||
* Ver las [guías](/docs/setup/) para ver mas información sobre como crear clusteres.
|
||||
* [Aprender cómo lanzar y exponer tu aplicación.](/docs/tasks/access-application-cluster/service-access-application-cluster/).
|
||||
* Si necesita acceso a un clúster que no se creó, ver el documento de [compartiendo acceso a clúster](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
|
||||
* Leer ´la documentación de kubectl reference](/docs/reference/kubectl/kubectl/)
|
||||
|
|
|
|||
|
|
@@ -1,131 +0,0 @@
---
title: Instalar Minikube
content_type: task
weight: 20
card:
  name: tasks
  weight: 10
---

<!-- overview -->

Esta página muestra cómo instalar [Minikube](/docs/tutorials/hello-minikube), una herramienta que despliega un clúster de Kubernetes con un único nodo en una máquina virtual.

## {{% heading "prerequisites" %}}

La virtualización VT-x o AMD-v debe estar habilitada en la BIOS de tu ordenador. En Linux, puedes comprobar si la tienes habilitada buscando 'vmx' o 'svm' en el fichero `/proc/cpuinfo`:

```shell
egrep --color 'vmx|svm' /proc/cpuinfo
```

<!-- steps -->

## Instalar un Hipervisor

Si todavía no tienes un hipervisor instalado, puedes instalar uno de los siguientes:

Sistema Operativo | Hipervisores soportados
:-----------------|:------------------------
macOS | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [VMware Fusion](https://www.vmware.com/products/fusion), [HyperKit](https://github.com/moby/hyperkit)
Linux | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [KVM](http://www.linux-kvm.org/)
Windows | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install)

{{< note >}}
Minikube también soporta una opción `--vm-driver=none` que ejecuta los componentes de Kubernetes directamente en el servidor y no en una máquina virtual (MV). Para usar este modo, se requiere Docker y un entorno Linux, pero no es necesario tener un hipervisor.
{{< /note >}}
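Como esbozo ilustrativo de este modo (requiere ejecutarse como root y tener Docker instalado):

```shell
# Ejecuta los componentes de Kubernetes directamente en el host, sin hipervisor
sudo minikube start --vm-driver=none
```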
## Instalar kubectl

* Instala kubectl siguiendo las instrucciones disponibles en [Instalar y Configurar kubectl](/docs/tasks/tools/install-kubectl/).

## Instalar Minikube

### macOS

La forma más fácil de instalar Minikube en macOS es usar [Homebrew](https://brew.sh):

```shell
brew install minikube
```

También puedes instalarlo en macOS descargando un ejecutable autocontenido:

```shell
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube
```

Para tener disponible en la consola el comando `minikube`, puedes añadir el comando al $PATH o moverlo por ejemplo a `/usr/local/bin`:

```shell
sudo mv minikube /usr/local/bin
```

### Linux

{{< note >}}
Este documento muestra cómo instalar Minikube en Linux usando un ejecutable autocontenido. Para métodos alternativos de instalación en Linux, ver [Otros métodos de Instalación](https://github.com/kubernetes/minikube#other-ways-to-install) en el repositorio GitHub oficial de Minikube.
{{< /note >}}

Puedes instalar Minikube en Linux descargando un ejecutable autocontenido:

```shell
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube
```

Para tener disponible en la consola el comando `minikube`, puedes añadir el comando al $PATH o moverlo por ejemplo a `/usr/local/bin`:

```shell
sudo cp minikube /usr/local/bin && rm minikube
```

### Windows

{{< note >}}
Para ejecutar Minikube en Windows, necesitas instalar primero [Hyper-V](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v), que puede ejecutarse en las tres versiones de Windows 10: Windows 10 Enterprise, Windows 10 Professional y Windows 10 Education.
{{< /note >}}

La forma más fácil de instalar Minikube en Windows es usando [Chocolatey](https://chocolatey.org/) (ejecutar como administrador):

```shell
choco install minikube kubernetes-cli
```

Una vez Minikube ha terminado de instalarse, cierra la sesión cliente actual y reinicia. Minikube debería haberse añadido a tu $PATH automáticamente.

#### Instalación manual en Windows

Para instalar Minikube manualmente en Windows, descarga [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), renómbralo a `minikube.exe` y añádelo a tu PATH.

#### Instalador de Windows

Para instalar Minikube manualmente en Windows usando [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), descarga [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest) y ejecuta el instalador.

## Limpiar todo para comenzar de cero

Si habías instalado previamente minikube y ejecutas:

```shell
minikube start
```

y dicho comando devuelve el error:

```shell
machine does not exist
```

necesitas eliminar permanentemente los siguientes archivos de configuración:

```shell
rm -rf ~/.minikube
```

## {{% heading "whatsnext" %}}

* [Ejecutar Kubernetes Localmente via Minikube](/docs/setup/minikube/)
@@ -2,6 +2,8 @@
title: "Solution professionnelle d’orchestration de conteneurs"
abstract: "Déploiement, mise à l'échelle et gestion automatisée des conteneurs"
cid: home
sitemap:
  priority: 1.0
---

{{< blocks/section id="oceanNodes" >}}
@@ -41,12 +43,12 @@ Kubernetes est une solution open-source qui vous permet de tirer parti de vos in
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Voir la video (en)</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna20" button id="desktopKCButton">Venez au KubeCon NA Virtuel du 17 au 20 Novembre 2020</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">Venez au KubeCon EU Virtuel du 4 au 7 Mai 2021</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -56,4 +58,4 @@ Kubernetes est une solution open-source qui vous permet de tirer parti de vos in

{{< blocks/kubernetes-features >}}

{{< blocks/case-studies >}}
@@ -1,16 +1,17 @@
---
title: Installer kubeadm
description: kubeadm installation Kubernetes
content_type: task
weight: 10
card:
  name: setup
  weight: 20
  title: Installez l'outil de configuration kubeadm
---

<!-- overview -->

<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Cette page vous apprend comment installer la boîte à outils `kubeadm`.
Pour plus d'informations sur la création d'un cluster avec kubeadm, une fois que vous avez effectué ce processus d'installation, voir la page: [Utiliser kubeadm pour créer un cluster](/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
@@ -19,39 +20,53 @@ effectué ce processus d'installation, voir la page: [Utiliser kubeadm pour cré

* Une ou plusieurs machines exécutant:
  - Ubuntu 16.04+
  - Debian 9+
  - CentOS 7
  - Red Hat Enterprise Linux (RHEL) 7
  - Fedora 25+
  - HypriotOS v1.0.1+
  - Flatcar Container Linux (testé avec 2512.3.0)
* 2 Go ou plus de RAM par machine (toute quantité inférieure laissera peu de place à vos applications)
* 2 processeurs ou plus
* Connectivité réseau complète entre toutes les machines du cluster (réseau public ou privé)
* Nom d'hôte, adresse MAC et product_uuid uniques pour chaque nœud. Voir [ici](#verify-mac-address) pour plus de détails.
* Certains ports doivent être ouverts sur vos machines. Voir [ici](#check-required-ports) pour plus de détails.
* Swap désactivé. Vous **devez** impérativement désactiver le swap pour que la kubelet fonctionne correctement.

<!-- steps -->

## Vérifiez que les adresses MAC et product_uuid sont uniques pour chaque nœud {#verify-mac-address}

* Vous pouvez obtenir l'adresse MAC des interfaces réseau en utilisant la commande `ip link` ou `ifconfig -a`
* Le product_uuid peut être vérifié en utilisant la commande `sudo cat /sys/class/dmi/id/product_uuid`

Il est très probable que les périphériques matériels aient des adresses uniques, bien que certaines machines virtuelles puissent avoir des valeurs identiques. Kubernetes utilise ces valeurs pour identifier de manière unique les nœuds du cluster.
Si ces valeurs ne sont pas uniques à chaque nœud, le processus d'installation peut [échouer](https://github.com/kubernetes/kubeadm/issues/31).

## Vérifiez les cartes réseaux

Si vous avez plusieurs cartes réseaux et que vos composants Kubernetes ne sont pas accessibles par la route par défaut,
nous vous recommandons d’ajouter une ou plusieurs routes IP afin que les adresses de cluster Kubernetes soient acheminées via la carte appropriée.

## Permettre à iptables de voir le trafic ponté

Assurez-vous que le module `br_netfilter` est chargé. Cela peut être fait en exécutant `lsmod | grep br_netfilter`. Pour le charger explicitement, appelez `sudo modprobe br_netfilter`.

Pour que les iptables de votre nœud Linux voient correctement le trafic ponté, vous devez vous assurer que `net.bridge.bridge-nf-call-iptables` est défini sur 1 dans votre configuration `sysctl`, par ex.

```bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```

Pour plus de détails, veuillez consulter la page [Configuration requise pour le plug-in réseau](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements).

## Vérifiez les ports requis {#check-required-ports}
@@ -80,21 +95,54 @@ les ports personnalisés que vous utilisez sont également ouverts.

Bien que les ports etcd soient inclus dans les nœuds masters, vous pouvez également héberger
votre propre cluster etcd en externe ou sur des ports personnalisés.

Le plug-in de réseau de pod que vous utilisez (voir ci-dessous) peut également nécessiter certains ports à ouvrir.
Étant donné que cela diffère d’un plugin à l’autre, veuillez vous reporter à la
documentation des plugins sur le(s) port(s) requis.

## Installation du runtime {#installing-runtime}

Pour exécuter des conteneurs dans des pods, Kubernetes utilise un
{{< glossary_tooltip term_id="container-runtime" text="container runtime" >}}.

{{< tabs name="container_runtime" >}}
{{% tab name="Linux nodes" %}}

Par défaut, Kubernetes utilise le
{{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI)
pour s'interfacer avec votre environnement d'exécution de conteneur choisi.

Si vous ne spécifiez pas de runtime, kubeadm essaie automatiquement de détecter un
runtime de conteneur en parcourant une liste de sockets de domaine Unix bien connus.
Le tableau suivant répertorie les environnements d'exécution des conteneurs et leurs chemins de socket associés:

{{< table caption = "Les environnements d'exécution des conteneurs et leurs chemins de socket" >}}
| Runtime    | Chemin vers le socket de domaine Unix |
|------------|---------------------------------------|
| Docker     | `/var/run/docker.sock`                |
| containerd | `/run/containerd/containerd.sock`     |
| CRI-O      | `/var/run/crio/crio.sock`             |
{{< /table >}}

<br />
Si Docker et containerd sont détectés, Docker est prioritaire. C'est
nécessaire car Docker 18.09 est livré avec containerd et les deux sont détectables même si vous
installez Docker.
Si deux autres environnements d'exécution ou plus sont détectés, kubeadm se ferme avec une erreur.

Le kubelet s'intègre à Docker via l'implémentation CRI intégrée de `dockershim`.

Voir [runtimes de conteneur](/docs/setup/production-environment/container-runtimes/)
pour plus d'informations.
{{% /tab %}}
{{% tab name="autres systèmes d'exploitation" %}}
Par défaut, kubeadm utilise {{< glossary_tooltip term_id="docker" >}} comme environnement d'exécution du conteneur.
Le kubelet s'intègre à Docker via l'implémentation CRI intégrée de `dockershim`.

Voir [runtimes de conteneur](/docs/setup/production-environment/container-runtimes/)
pour plus d'informations.
{{% /tab %}}
{{< /tabs >}}
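À titre d'esquisse (le chemin du socket dépend de votre environnement), vous pouvez aussi indiquer explicitement le socket CRI à kubeadm au lieu de vous reposer sur l'auto-détection :

```bash
# Exemple : forcer l'utilisation de containerd lors de l'initialisation du cluster
sudo kubeadm init --cri-socket /run/containerd/containerd.sock
```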
## Installation de kubeadm, des kubelets et de kubectl
@@ -108,17 +156,17 @@ Vous installerez ces paquets sur toutes vos machines:

* `kubectl`: la ligne de commande utilisée pour parler à votre cluster.

kubeadm **n'installera pas** ni ne gèrera les `kubelet` ou `kubectl` pour vous.
Vous devez vous assurer qu'ils correspondent à la version du control plane de Kubernetes que vous souhaitez que kubeadm installe pour vous. Si vous ne le faites pas, vous risquez qu'une
erreur de version se produise, qui pourrait conduire à un comportement inattendu.
Cependant, un écart d'une version mineure entre les kubelets et le control plane est pris en charge,
mais la version de la kubelet ne doit jamais dépasser la version de l'API server.
Par exemple, les kubelets exécutant la version 1.7.0 devraient être entièrement compatibles avec un API server en 1.8.0,
mais pas l'inverse.

Pour plus d'informations sur l'installation de `kubectl`, voir [Installation et configuration de kubectl](/fr/docs/tasks/tools/install-kubectl/).

{{< warning >}}
Ces instructions excluent tous les packages Kubernetes de toutes les mises à niveau du système d'exploitation.
C’est parce que kubeadm et Kubernetes ont besoin d'une
[attention particulière lors de la mise à niveau](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/).
{{</ warning >}}
@ -131,125 +179,142 @@ Pour plus d'informations sur les compatibilités de version, voir:
|
|||
{{< tabs name="k8s_install" >}}
|
||||
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
|
||||
```bash
|
||||
apt-get update && apt-get install -y apt-transport-https curl
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
|
||||
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
|
||||
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
|
||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
|
||||
deb https://apt.kubernetes.io/ kubernetes-xenial main
|
||||
EOF
|
||||
apt-get update
|
||||
apt-get install -y kubelet kubeadm kubectl
|
||||
apt-mark hold kubelet kubeadm kubectl
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y kubelet kubeadm kubectl
|
||||
sudo apt-mark hold kubelet kubeadm kubectl
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL or Fedora" %}}
|
||||
```bash
|
||||
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
|
||||
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
|
||||
[kubernetes]
|
||||
name=Kubernetes
|
||||
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
|
||||
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
repo_gpgcheck=1
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
exclude=kube*
|
||||
exclude=kubelet kubeadm kubectl
|
||||
EOF
|
||||
|
||||
# Mettre SELinux en mode permissif (le désactiver efficacement)
|
||||
setenforce 0
|
||||
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
|
||||
sudo setenforce 0
|
||||
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
|
||||
|
||||
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
|
||||
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
|
||||
|
||||
systemctl enable --now kubelet
|
||||
sudo systemctl enable --now kubelet
|
||||
```
|
||||
|
||||
**Note:**
|
||||
|
||||
- Mettre SELinux en mode permissif en lançant `setenforce 0` et `sed ... `le désactive efficacement.
|
||||
C'est nécessaire pour permettre aux conteneurs d'accéder au système de fichiers hôte, qui
|
||||
est nécessaire par exemple pour les réseaux de pod.
|
||||
Vous devez le faire jusqu'à ce que le support de SELinux soit amélioré dans la kubelet.
|
||||
- Certains utilisateurs de RHEL / CentOS 7 ont signalé des problèmes de routage incorrect
|
||||
du trafic en raison du contournement d'iptables. Vous devez vous assurer que
|
||||
`net.bridge.bridge-nf-call-iptables` est configuré à 1 dans votre config `sysctl` par exemple:
|
||||
C'est nécessaire pour permettre aux conteneurs d'accéder au système de fichiers hôte, qui est nécessaire par exemple pour les réseaux de pod.
|
||||
Vous devez le faire jusqu'à ce que le support de SELinux soit amélioré dans Kubelet.
|
||||
|
||||
- Vous pouvez laisser SELinux activé si vous savez comment le configurer, mais il peut nécessiter des paramètres qui ne sont pas pris en charge par kubeadm.
|
||||
|
||||
```bash
|
||||
cat <<EOF > /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
EOF
|
||||
sysctl --system
|
||||
```
|
||||
- Assurez-vous que le module `br_netfilter` est chargé avant cette étape. Cela peut être fait en exécutant `lsmod | grep br_netfilter`. Pour le charger explicitement, lancez `modprobe br_netfilter`.
|
||||
{{% /tab %}}
|
||||
{{% tab name="Container Linux" %}}
|
||||
Installez les plugins CNI (requis pour la plupart des réseaux de pod):
|
||||
{{% tab name="Fedora CoreOS ou Flatcar Container Linux" %}}
|
||||
Installez les plugins CNI (requis pour la plupart des réseaux de pods) :
|
||||
|
||||
```bash
|
||||
CNI_VERSION="v0.6.0"
|
||||
mkdir -p /opt/cni/bin
|
||||
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
|
||||
CNI_VERSION="v0.8.2"
|
||||
sudo mkdir -p /opt/cni/bin
|
||||
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
|
||||
```
|
||||
|
||||
Installez crictl (obligatoire pour kubeadm / Kubelet Container Runtime Interface (CRI))
|
||||
Définissez le répertoire pour télécharger les fichiers de commande
|
||||
|
||||
{{< note >}}
|
||||
La variable DOWNLOAD_DIR doit être définie sur un répertoire accessible en écriture.
|
||||
Si vous exécutez Flatcar Container Linux, définissez DOWNLOAD_DIR=/opt/bin
|
||||
{{< /note >}}
|
||||
|
||||
```bash
|
||||
CRICTL_VERSION="v1.11.1"
|
||||
mkdir -p /opt/bin
|
||||
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
|
||||
DOWNLOAD_DIR=/usr/local/bin
|
||||
sudo mkdir -p $DOWNLOAD_DIR
|
||||
```
|
||||
|
||||
Installez `kubeadm`, `kubelet`, `kubectl` et ajouter un service systemd `kubelet`:
|
||||
Installez crictl (requis pour Kubeadm / Kubelet Container Runtime Interface (CRI))
|
||||
|
||||
```bash
|
||||
CRICTL_VERSION="v1.17.0"
|
||||
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
|
||||
```
|
||||
|
||||
Installez `kubeadm`,` kubelet`, `kubectl` et ajoutez un service systemd` kubelet`:
|
||||
|
||||
```bash
|
||||
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
|
||||
cd $DOWNLOAD_DIR
|
||||
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
|
||||
sudo chmod +x {kubeadm,kubelet,kubectl}
|
||||
|
||||
mkdir -p /opt/bin
|
||||
cd /opt/bin
|
||||
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
|
||||
chmod +x {kubeadm,kubelet,kubectl}
|
||||
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
|
||||
mkdir -p /etc/systemd/system/kubelet.service.d
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
|
||||
RELEASE_VERSION="v0.4.0"
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
|
||||
sudo mkdir -p /etc/systemd/system/kubelet.service.d
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
|
||||
```
|
||||
|
||||
Activez et démarrez la `kubelet`:
|
||||
Activez et démarrez `kubelet` :
|
||||
|
||||
```bash
|
||||
systemctl enable --now kubelet
|
||||
sudo systemctl enable --now kubelet
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
La distribution Linux Flatcar Container monte le répertoire `/usr` comme un système de fichiers en lecture seule.
|
||||
Avant de démarrer votre cluster, vous devez effectuer des étapes supplémentaires pour configurer un répertoire accessible en écriture.
|
||||
Consultez le [Guide de dépannage de Kubeadm](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/) pour savoir comment configurer un répertoire accessible en écriture.
|
||||
{{< /note >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
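Whichever tab applies to your distribution, a quick sanity check before continuing can save debugging later. A minimal sketch, assuming the steps above completed (the `apt-mark` line only applies to Debian-family systems):

```bash
# Confirm the binaries are installed and on the PATH.
kubeadm version
kubelet --version
kubectl version --client

# Debian/Ubuntu only: the three packages should be listed as held.
apt-mark showhold
```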

The kubelet is now restarting every few seconds, as it waits in a crash loop for kubeadm to tell it what to do.

## Configure the cgroup driver used by the kubelet on a Master node

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
and set it in the `/var/lib/kubelet/config.yaml` file at runtime.

If you are using a different CRI, you have to pass your `cgroupDriver` value to `kubeadm init`, like so:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: <value>
```

For more details, please read [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
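As a concrete illustration of that flow, the sketch below writes a kubeadm configuration file that embeds the `KubeletConfiguration` shown above and passes it to `kubeadm init`. The `kubeadm-config.yaml` file name and the `systemd` driver value are illustrative assumptions, not prescribed names:

```bash
# Hypothetical example: embed the KubeletConfiguration in a kubeadm config file.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

sudo kubeadm init --config kubeadm-config.yaml
```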

Please note that you **only** have to do this if the cgroup driver of your CRI
is not `cgroupfs`, because that is already the default value in the kubelet.

{{< note >}}
Since the `--cgroup-driver` flag has been deprecated by the kubelet, if you have it in `/var/lib/kubelet/kubeadm-flags.env`
or `/etc/default/kubelet` (`/etc/sysconfig/kubelet` for RPMs), please remove it and use the KubeletConfiguration instead
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}

Restarting the kubelet is required:

```bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Automatic detection of the cgroup driver for other container runtimes
like CRI-O and containerd is a work in progress.


## Troubleshooting

If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/fr/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
@ -258,5 +323,3 @@ If you are running into difficulties with kubeadm, please consult our [troubleshooting docs]

* [Using kubeadm to create a cluster](/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
@ -89,7 +89,7 @@ spec:
```

As with all other Kubernetes resources, an Ingress needs the `apiVersion`, `kind`, and `metadata` fields.
For general information about working with configuration files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/id/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/id/docs/concepts/cluster-administration/manage-deployment/).
An Ingress frequently uses annotations to configure some options depending on the Ingress controller being used, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
Different [Ingress controllers](/id/docs/concepts/services-networking/ingress-controllers) support different annotations. Make sure you first review the documentation
@ -442,7 +442,7 @@ Events:
  Normal   ADD     45s   loadbalancer-controller  default/test
```

You can also modify an Ingress by running `kubectl replace -f` on the Ingress
configuration file you want to change.
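For example, a typical edit-and-replace loop might look like the following sketch, where `ingress.yaml` is an illustrative file name and `test` is the Ingress from the events above:

```bash
# Edit the manifest locally, then replace the live object with it.
kubectl replace -f ingress.yaml

# Confirm the change was applied.
kubectl describe ingress test
```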

## Failing across availability zones
@ -289,8 +289,8 @@ or `/etc/default/kubelet` (`/etc/sysconfig/kubelet` for RPMs), please remove it and
You must restart the kubelet:

```bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Automatic detection of the cgroup driver for other container runtimes
@ -0,0 +1,16 @@
---
title: Kubernetes Blog
linkTitle: Blog
menu:
  main:
    title: "Blog"
    weight: 40
    post: >
      <p>Read the latest news about Kubernetes and containers in general, and get technical how-tos hot off the presses.</p>
---
{{< comment >}}

For information about contributing to the blog, see
https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post

{{< /comment >}}
@ -0,0 +1,104 @@
---
layout: blog
title: "Don't Panic: Kubernetes and Docker"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
---

**Authors:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas

Kubernetes is [deprecating
Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)
as a container runtime after v1.20.

**You do not need to panic. It's not as dramatic as it sounds.**

To summarize: Docker as an underlying runtime is being deprecated in favor of runtimes that use the
[Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) created for Kubernetes.
Docker-produced images will continue to work in your cluster with all runtimes, as they always have.

If you're an end user of Kubernetes, not a whole lot will be changing for you.
This doesn't mean the death of Docker, and it doesn't mean you can't, or shouldn't, use Docker as a development tool anymore. Docker is still a useful tool for building containers, and the images that result from running `docker build` can still run in your Kubernetes cluster.

If you're using a managed Kubernetes service like GKE, EKS, or AKS (which [defaults to containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)), you will need to make sure your worker nodes are using a supported container runtime before Docker support is removed in a future version of Kubernetes. If your nodes have been customized, you may need to update them based on your environment and runtime requirements. Please work with your service provider to ensure proper upgrade testing and planning.

If you're rolling your own clusters, you will also need to make changes to avoid your clusters breaking. At v1.20, you will get a deprecation warning for Docker. When Docker runtime support is removed in a future release of Kubernetes (currently planned for the 1.22 release in late 2021), it will no longer be supported and you will need to switch to one of the other compliant container runtimes, such as containerd or CRI-O. Just make sure that the runtime you choose supports the Docker daemon configurations you currently use (e.g. logging).

## So why the confusion, and what is everyone freaking out about?

We're talking about two different environments here, and that's creating the confusion. Inside of your Kubernetes cluster, there's a thing called a container runtime that's responsible for pulling and running your container images. Docker is a popular choice for that runtime (other common options include containerd and CRI-O), but Docker was not designed to be embedded inside Kubernetes, and that causes a problem.

You see, the thing we call "Docker" isn't actually one thing: it's an entire tech stack, and one part of it is "containerd," which is a high-level container runtime by itself. Docker is cool and useful because it includes a lot of UX enhancements that make it really easy for humans to interact with while we're doing development work, but those UX enhancements aren't necessary for Kubernetes, because Kubernetes isn't a human.

As a result of this human-friendly abstraction layer, your Kubernetes cluster has to use another tool called Dockershim to get at what it really needs, which is containerd. That's not great, because it gives us an extra thing that has to be maintained and that can possibly break. What's actually happening here is that Dockershim is being removed from the kubelet as early as the v1.23 release, which removes support for Docker as a container runtime as a result. You might be thinking to yourself: if containerd is included in the Docker stack, why does Kubernetes need the Dockershim at all?

Docker isn't compliant with CRI, the [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/).
If it were, we wouldn't need the shim. But it's not the end of the world, and you don't need to panic: you just need to change your container runtime from Docker to another supported container runtime.

One thing to note: if you are relying on the underlying Docker socket
(`/var/run/docker.sock`) as part of a workflow within your cluster today, moving to a different
runtime will break your ability to use it. This pattern is often called Docker in Docker. There are lots of options out there for this specific use case, including
[kaniko](https://github.com/GoogleContainerTools/kaniko),
[img](https://github.com/genuinetools/img), and
[buildah](https://github.com/containers/buildah).

## What does this change mean for developers, though? Do we still write Dockerfiles? Do we still build things with Docker?

This change addresses a different environment than the one most folks use to interact with Docker. The Docker installation you're using in development is unrelated to the Docker runtime inside your Kubernetes cluster. It's confusing, we know. As a developer, Docker is still useful to you in all the ways it was before this change was announced. The image that Docker produces isn't really a Docker-specific image: it's an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
Any OCI-compliant image, regardless of the tool you used to build it, will look the same to Kubernetes. Both [containerd](https://containerd.io/) and
[CRI-O](https://cri-o.io/) know how to pull and run those images. This is why we have a standard for what containers should look like.

So, this change is coming. It's going to cause issues for some, but it isn't catastrophic, and generally it's a good thing. Depending on how you interact with Kubernetes, this could mean nothing to you, or it could mean a bit of work. In the long run, it's going to make things easier. If this is still confusing to you, that's okay: there's a lot going on here, Kubernetes has a lot of moving parts, and nobody is an expert in 100% of it. We encourage any and all questions regardless of experience level or complexity! Our goal is to make sure everyone is educated as much as possible on the upcoming changes. `<3` We hope this has answered most of your questions and soothed some anxieties!

Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
@ -327,6 +327,26 @@ The kubelet creates and updates the `NodeStatus` and Lease object
For more information, see
[Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/).

## Graceful node shutdown

{{< feature-state state="alpha" for_k8s_version="v1.20" >}}

If you have enabled the `GracefulNodeShutdown` [feature gate](/ko/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet detects the node system shutdown and terminates the pods running on the node.
The kubelet ensures that pods follow the normal [pod termination process](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) while the node is shutting down.

When the `GracefulNodeShutdown` feature gate is enabled, the kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown by a given duration. During a shutdown, the kubelet terminates pods in two phases:

1. Terminate the regular pods running on the node.
2. Terminate the [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.

The graceful node shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
* `ShutdownGracePeriod`:
  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
* `ShutdownGracePeriodCriticalPods`:
  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This should be less than `ShutdownGracePeriod`.

For example, if `ShutdownGracePeriod=30s` and `ShutdownGracePeriodCriticalPods=10s`, the kubelet will delay the node shutdown by 30 seconds. During the shutdown, the first 20 (30-10) seconds are reserved for gracefully terminating normal pods, and the last 10 seconds are reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
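A rough sketch of how those two options would look in practice, using the illustrative values from the example above; this assumes the kubelet reads its configuration from `/var/lib/kubelet/config.yaml` and that these alpha fields are available in your kubelet version:

```bash
# Append the graceful-shutdown settings (assumes they are not already present),
# then restart the kubelet to pick them up.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
EOF
sudo systemctl restart kubelet
```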


## {{% heading "whatsnext" %}}
@ -335,3 +355,4 @@ The kubelet creates and updates the `NodeStatus` and Lease object
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
  section of the architecture design document.
* Read about [taints and tolerations](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/).
@ -17,7 +17,7 @@ weight: 70

## Image collection

Kubernetes manages the lifecycle of all images through its imageManager, with the cooperation of cadvisor.

The policy for garbage collecting images takes two factors into consideration:
@ -36,26 +36,26 @@ Containers that are not managed by the kubelet are not subject to container garbage collection.

## User configuration

You can adjust the following thresholds to tune image garbage collection with the kubelet flags below:

1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
   Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
   to free. Default is 80%.

You can also customize the garbage collection policy through the following kubelet flags:

1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
   garbage collected. Default is 0 minutes, which means every finished container will be garbage collected immediately.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
   per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
   Default is -1, which means there is no global limit.

Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least one dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
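As a rough sketch, these knobs end up on the kubelet command line as flags; the spellings are the ones documented above with a `--` prefix, and the values below are only illustrative:

```bash
# Illustrative kubelet invocation tuning image and container garbage collection.
kubelet \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=80 \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=240
```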
@ -82,5 +82,3 @@ Containers that are not managed by the kubelet are not subject to container garbage collection.

See [Configuring Out of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
@ -55,11 +55,12 @@ weight: 50
highly compatible. If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.

Kubernetes IP addresses exist at the `Pod` scope: containers within a `Pod`
share their network namespaces, including their IP address and MAC address.
This means that containers within a `Pod` can all reach each other's ports on
`localhost`. This also means that containers within a `Pod` must coordinate port
usage, but this is no different from processes in a VM. This is called the
"IP-per-pod" model.

How this is implemented is a detail of the particular container runtime in use.
@ -128,6 +128,25 @@ cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
```

### kube-scheduler metrics

{{< feature-state for_k8s_version="v1.20" state="alpha" >}}

The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current or historical scheduling limits, quickly identify workloads that cannot be scheduled due to lack of resources, and compare actual usage to the pod's request.

The kube-scheduler identifies the resource [requests and limits](/ko/docs/concepts/configuration/manage-resources-containers/) configured for each pod; when either a request or a limit is non-zero, the kube-scheduler reports a metrics timeseries. The timeseries is labelled by:
- namespace
- pod name
- the node where the pod is scheduled, or an empty string if not yet scheduled
- priority
- the assigned scheduler for that pod
- the name of the resource (for example, `cpu`)
- the unit of the resource, if known (for example, `cores`)

Once a pod reaches completion (it has a `restartPolicy` of `Never` or `OnFailure` and is in the `Succeeded` or `Failed` pod phase, or has been deleted and all containers have terminated), the series is no longer reported since the scheduler is now free to schedule other pods to run. The two metrics are called `kube_pod_resource_request` and `kube_pod_resource_limit`.

The metrics are exposed at the HTTP endpoint `/metrics/resources` and require the same
authorization as the scheduler's `/metrics` endpoint. You must use the `--show-hidden-metrics-for-version=1.20` flag to expose these alpha-stability metrics.
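A minimal sketch of scraping that endpoint, assuming the scheduler is reachable on its default secure port 10259 and that `$TOKEN` holds a bearer token authorized to read the metrics (both are assumptions; adjust for your cluster):

```bash
# Query the scheduler's resource metrics endpoint and show a few series.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  https://127.0.0.1:10259/metrics/resources | grep '^kube_pod_resource_' | head
```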


## {{% heading "whatsnext" %}}
@ -600,6 +600,10 @@ spec:
    example.com/foo: 1
```

## PID limiting

Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given pod can consume. See [PID Limiting](/docs/concepts/policy/pid-limiting/) for more information.
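As a rough sketch of the node-level knob involved, the kubelet of this era exposes a `--pod-max-pids` flag; the value below is purely illustrative:

```bash
# Illustrative: cap every pod on this node at 1024 process IDs.
kubelet --pod-max-pids=1024
```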

## Troubleshooting

### My pods are pending with event message failedScheduling
@ -134,7 +134,7 @@ data:
See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) documentation
for more detailed information on how service accounts work.
You can also check the `automountServiceAccountToken` field and the
`serviceAccountName` field of
[`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
for information on referencing service accounts from pods.
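For instance, a Pod that references a service account and opts out of automatic token mounting might look like the following sketch (the pod, container, and service account names are illustrative):

```bash
# Hypothetical manifest combining serviceAccountName and automountServiceAccountToken.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  containers:
  - name: main
    image: nginx
EOF
```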
@ -152,7 +152,7 @@ data:
make sure it contains a `.dockercfg` key whose value is the content of the
base64-encoded `~/.dockercfg` file.

The `kubernetes.io/dockerconfigjson` type is designed for storing a serialized
JSON that follows the same format rules as the `~/.docker/config.json` file,
which is a new format for `~/.dockercfg`.
When using this Secret type, the `data` field of the Secret object must contain a `.dockerconfigjson` key
@ -347,22 +347,21 @@ data:
  usage-bootstrap-signing: dHJ1ZQ==
```

A bootstrap type Secret has the following keys specified under `data`:

- `token-id`: A random 6 character string as the token identifier. Required.
- `token-secret`: A random 16 character string as the actual token secret. Required.
- `description`: A human-readable string that describes what the token is used for. Optional.
- `expiration`: An absolute UTC time using RFC3339 specifying when the token
  should expire. Optional.
- `usage-bootstrap-<usage>`: A boolean flag indicating additional usage for
  the bootstrap token.
- `auth-extra-groups`: A comma-separated list of group names that will be
  authenticated as in addition to the `system:bootstrappers` group.

The above YAML may look confusing because the values are all base64 encoded
strings. In fact, you can create an identical Secret using the following YAML:

```yaml
apiVersion: v1
@ -724,6 +723,11 @@ echo $SECRET_PASSWORD
1f2d1e2e67df
```

#### Environment variables are not updated after a Secret update

If a container already consumes a Secret in an environment variable, a Secret update will not be seen by the container unless it is restarted.
There are third party solutions for triggering restarts when secrets change.
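One common workaround is to force a rolling restart of the consuming workload after updating the Secret. A sketch, assuming the Secret is consumed by a Deployment named `mydeploy` (an illustrative name):

```bash
# Trigger a rolling restart so new pods pick up the updated Secret values.
kubectl rollout restart deployment/mydeploy
kubectl rollout status deployment/mydeploy
```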

## Immutable Secrets {#secret-immutable}

{{< feature-state for_k8s_version="v1.19" state="beta" >}}
@ -6,7 +6,7 @@ weight: 20

<!-- overview -->

{{< feature-state for_k8s_version="v1.20" state="stable" >}}

This page describes the RuntimeClass resource and runtime selection mechanism.
@ -63,7 +63,7 @@ weight: 20
(`handler`) as its only two significant fields. The object definition looks like this:

```yaml
apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: myclass  # The name the RuntimeClass will be referenced by
@ -203,7 +203,7 @@ The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`
`/var/lib/kubelet/pod-resources` must be mounted as a
{{< glossary_tooltip text="volume" term_id="volume" >}}.

Support for the "PodResources service" requires the `KubeletPodResources` [feature gate](/ko/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.

## Device plugin integration with the Topology Manager
@ -254,5 +254,3 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.
* Learn about [advertising extended resources](/ko/docs/tasks/administer-cluster/extended-resource-node/) on a node
* Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
@ -75,19 +75,10 @@ implements an alternative serialization format based on Protobuf. For more information
about this format, see the IDL (interface definition language) file for each schema
located in the Go packages that define the API objects.

## Persistence

Kubernetes stores the serialized state of objects by writing them into
{{< glossary_tooltip term_id="etcd" >}}.

## API groups and versioning
@ -105,29 +96,44 @@ the API provides a clear, consistent view of system resources and behavior
implements [API groups](/ko/docs/reference/using-api/#api-그룹) that can be enabled or disabled.

API resources are distinguished by their API group, resource type, namespace
(for namespaced resources), and name. The API server handles the conversion between
API versions transparently: all the different versions are actually representations
of the same persisted data. The API server may serve the same underlying data
through multiple API versions.

For example, suppose there are two API versions, `v1` and `v1beta1`, for the same
resource. If you originally created an object using the `v1beta1` version of its
API, you can later read, update, or delete that object using either the `v1beta1`
or the `v1` API version.

## API changes

Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has designed the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.

In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).

Kubernetes makes a strong commitment to maintain compatibility for official Kubernetes APIs
once they reach general availability (GA), typically at API version `v1`. Additionally,
Kubernetes keeps compatibility even for _beta_ API versions wherever feasible:
if you adopt a beta API you can continue to interact with your cluster using that API,
even after the feature goes stable.

{{< note >}}
Although Kubernetes also aims to maintain compatibility for _alpha_ API versions, in some
circumstances this is not possible. If you use any alpha API versions, check the release notes
for Kubernetes when upgrading your cluster, in case the API did change.
{{< /note >}}

Refer to the [API versions reference](/ko/docs/reference/using-api/api-overview/#api-버전-규칙)
for more details on the API version level definitions.



## API Extension

The Kubernetes API can be extended in one of two ways:
@ -145,3 +151,5 @@ Refer to the API versions reference for more details
describes how the cluster manages authentication and authorization for API access.
- Read the [API reference](/docs/reference/kubernetes-api/) to
  learn about API endpoints, resource types, and samples.
- Learn about compatible changes and how to change the API from
  [API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
@ -138,10 +138,11 @@ partition
!partition
```

* The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
* The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
* The third example selects all resources including a label with the `partition` key; no values are checked.
* The fourth example selects all resources without a label with the `partition` key; no values are checked.

Similarly, the comma separator acts as an _AND_ operator. So filtering resources with a `partition` key (no matter the value) and with an `environment` different from `qa` can be achieved with `partition,environment notin (qa)`.
The _set-based_ label selector generally treats `environment=production` as equivalent to `environment in (production)`; similarly, `!=` is treated as equivalent to `notin`.
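For instance, these selectors can be exercised directly from the command line (a sketch; pods are just one example of a selectable resource):

```bash
# Set-based selector: resources with a partition key (any value)
# AND an environment other than qa.
kubectl get pods -l 'partition,environment notin (qa)'

# Equality-based and set-based forms that select the same objects.
kubectl get pods -l environment=production
kubectl get pods -l 'environment in (production)'
```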
@ -1,7 +1,7 @@
---
title: Pod Security Policies
content_type: concept
weight: 30
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Resource Quotas
content_type: concept
weight: 20
---

<!-- overview -->
@ -154,6 +154,49 @@ spec:

The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource requests, RequiredDuringScheduling affinity expressions, etc.), the scheduler computes a sum by iterating through the elements of this field and adding the "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred.

#### Node affinity per scheduling profile

{{< feature-state for_k8s_version="v1.20" state="beta" >}}

When configuring multiple [scheduling profiles](/ko/docs/reference/scheduling/config/#여러-프로파일), you can associate
a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes.
To do so, add an `addedAffinity` to the args of the
[`NodeAffinity` plugin](/ko/docs/reference/scheduling/config/#스케줄링-플러그인-1) in the [scheduler configuration](/ko/docs/reference/scheduling/config/). For example:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
  - schedulerName: foo-scheduler
    pluginConfig:
      - name: NodeAffinity
        args:
          addedAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: scheduler-profile
                  operator: In
                  values:
                  - foo
```

The `addedAffinity` is applied to all pods that set `.spec.schedulerName` to `foo-scheduler`, in addition to the
NodeAffinity specified in the PodSpec.
That is, in order to match the pod, a node needs to satisfy both the `addedAffinity` and the pod's `.spec.NodeAffinity`.

Since the `addedAffinity` is not visible to end users, its behavior might be unexpected to them. We recommend
using node labels that have a clear correlation with the profile's scheduler name.

{{< note >}}
The DaemonSet controller, which [creates pods for DaemonSets](/ko/docs/concepts/workloads/controllers/daemonset/#기본-스케줄러로-스케줄),
is not aware of scheduling profiles.
For this reason, it is recommended to keep a scheduler profile, such as the `default-scheduler`, without any `addedAffinity`. The DaemonSet's pod template should then use that scheduler name.
Otherwise, some pods created by the DaemonSet controller might remain unschedulable.
{{< /note >}}

### Inter-pod affinity and anti-affinity

Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled to *based on the labels of pods already running on the node*, rather than based on the labels on nodes
@ -386,3 +429,5 @@ spec:
Once a pod is assigned to a node, the kubelet runs the pod and allocates node-local resources.
The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level
resource allocation decisions.
@ -44,7 +44,7 @@ _Pod overhead_ are the resources consumed, on top of the container resource requests and limits, by the pod
```yaml
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-fc
handler: kata-fc