Merge pull request #1 from kubernetes/master

merge k/website into tomkivlin/website
This commit is contained in:
Tom Kivlin 2021-01-19 17:17:26 +00:00 committed by GitHub
commit 8044b0dcf7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
507 changed files with 15843 additions and 10453 deletions

View File

@ -0,0 +1,15 @@
---
name: Scheduled Netlify site build
on:
  schedule: # Build twice daily: shortly after midnight and noon (UTC)
    # Offset is to be nice to the build service
    - cron: '4 0,12 * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger build on Netlify
        env:
          TOKEN: ${{ secrets.NETLIFY_BUILD_HOOK_KEY }}
        run: >-
          curl -s -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d "{}" "https://api.netlify.com/build_hooks/${TOKEN}"
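If you ever need to check a build hook by hand, you can call it the same way the workflow does. This is a minimal sketch; the hook key below is a placeholder, and the real value lives in the site's Netlify build hook settings (exposed to the workflow as the `NETLIFY_BUILD_HOOK_KEY` secret):

```bash
# Manually trigger a Netlify build (the hook key here is a placeholder, not a real secret)
TOKEN="0123456789abcdef"
curl -s -H "Accept: application/json" -H "Content-Type: application/json" \
  -X POST -d "{}" "https://api.netlify.com/build_hooks/${TOKEN}"
```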

View File

@ -58,7 +58,7 @@ docker-serve:
@echo -e "$(CCRED)**** The use of docker-serve is deprecated. Use container-serve instead. ****$(CCEND)"
$(MAKE) container-serve
container-image:
container-image: ## Build a container image for the preview of the website
$(CONTAINER_ENGINE) build . \
--network=host \
--tag $(CONTAINER_IMAGE) \
@ -67,7 +67,7 @@ container-image:
container-build: module-check
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify"
container-serve: module-check
container-serve: module-check ## Boot the development server using a container. Run `make container-image` before this.
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
test-examples:
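As a quick usage sketch (assuming Docker or another container engine is installed and the Docsy submodule has been pulled in), the two documented targets above are typically run back to back:

```bash
# Build the preview image once, then serve the site on http://localhost:1313
make container-image
make container-serve
```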

View File

@ -7,12 +7,9 @@ aliases:
- mrbobbytables
sig-docs-blog-reviewers: # Reviewers for blog content
- castrojo
- cody-clark
- kbarnard10
- mrbobbytables
- onlydole
- parispittman
- vonguard
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
@ -30,6 +27,7 @@ aliases:
- kbarnard10
- kbhawkey
- onlydole
- reylejano
- savitharaghunathan
- sftim
- steveperry-53
@ -98,6 +96,7 @@ aliases:
- irvifa
sig-docs-id-reviews: # PR reviews for Indonesian content
- girikuncoro
- habibrosyad
- irvifa
- wahyuoi
- phanama
@ -138,6 +137,7 @@ aliases:
- seokho-son
- ysyukr
- pjhwa
- yoonian
sig-docs-leads: # Website chairs and tech leads
- irvifa
- jimangel
@ -214,3 +214,12 @@ aliases:
- idvoretskyi
- MaxymVlasov
- Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- cblecker
- derekwaynecarr
- dims
- liggitt
- mrbobbytables
- nikhita
- parispittman

View File

@ -14,9 +14,9 @@ Once your pull request has been created, a Kubernetes reviewer will take over
For more information about contributing to the Kubernetes documentation, see:
* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Staging your documentation changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Staging your documentation changes](https://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
## `README.md`'s Localizing Kubernetes Documentation
@ -65,7 +65,7 @@ This starts the local Hugo server on port 1313. Open your browser
## Community, discussion, contribution, and support
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
Learn how to engage with the Kubernetes community on the [community page](https://kubernetes.io/community/).
You can reach the maintainers of this project at:

View File

@ -17,9 +17,9 @@ The reviewers will do their best to provide all the necessary information
For more information about contributing to the Kubernetes documentation, see:
* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Viewing your changes locally](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Viewing your changes locally](https://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
## Running the kubernetes.io website locally with Docker

View File

@ -2,38 +2,117 @@
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository contains all the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for your efforts!
This repository contains all the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
# Using this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
## Prerequisites
To use this repository, you need the following installed locally:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/)
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following command:
```
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
## Running the website using a container
To build the site in a container, run the following commands to build the container image and run it:
```
make container-image
make container-serve
```
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
Refer to the [official Hugo documentation](https://gohugo.io/getting-started/installing/) to install Hugo. Make sure you install the correct Hugo version, which is set by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
After installing Hugo, to run the website, run the following in a terminal:
To build and test the site locally, run:
```bash
git clone https://github.com/kubernetes/website.git
cd website
hugo server --buildFuture
# install dependencies
npm ci
make serve
```
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
These commands will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Community, discussion, contribution, and support
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
For technical reasons, Hugo ships in two sets of binaries. The current Kubernetes website runs only on the **Hugo Extended** version. On the [releases page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended` in the output.
You can reach the maintainers of this project at:
### Troubleshooting macOS for too many open files
- [Slack channel](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
If you run `make serve` on macOS and receive the following error:
## Contributing to the docs
```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
Click the **Fork** button in the upper-right corner to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
Try checking the current limit for open files:
Once your pull request is created, a Kubernetes reviewer will give you feedback on it. You, as the author of the pull request, **are responsible for updating your pull request after it has been reviewed by a Kubernetes reviewer.**
`launchctl limit maxfiles`
It is also possible that more than one Kubernetes reviewer will leave comments, or that you will get feedback from a Kubernetes reviewer other than the one originally assigned. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
Then run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
```shell
#!/bin/sh
# The original gist links are commented out in favor of my adapted ones.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This works for macOS Catalina as well as macOS Mojave.
# Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can reach the maintainers of this project at:
- [Slack channel](https://kubernetes.slack.com/messages/sig-docs) ([get an invite for this Slack](https://slack.k8s.io/))
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs
Click the **Fork** button in the upper-right corner to create a copy of this repository for your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request (PR) to let us know about it.
Once you submit a pull request, a Kubernetes reviewer will give you feedback on it. You, as the author of the pull request, **are responsible for updating your PR after it has been reviewed by a Kubernetes reviewer.**
It is also possible that more than one Kubernetes reviewer will leave comments. You might even get feedback from a reviewer other than the one originally assigned. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see the links below:
@ -42,21 +121,22 @@ hugo server --buildFuture
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
## `README.md` in other languages
# `README.md` in other languages
| Other languages | Other languages |
|-------------------------------|-------------------------------|
| [English](README.md) | [French](README-fr.md) |
| [Korean](README-ko.md) | [German](README-de.md) |
| [Portuguese](README-pt.md) | [Hindi](README-hi.md) |
| [Spanish](README-es.md) | [Indonesian](README-id.md) |
| [Chinese](README-zh.md) | [Japanese](README-ja.md) |
| [Vietnamese](README-vi.md) | [Italian](README-it.md) |
| [Polish](README-pl.md) | [Ukrainian](README-uk.md) |
| [English](README.md) | [German](README-de.md) |
| [Vietnamese](README-vi.md) | [Polish](README-pl.md) |
| [Indonesian](README-id.md) | [Portuguese](README-pt.md) |
| [Spanish](README-es.md) | [Ukrainian](README-uk.md) |
| [Italian](README-it.md) | [French](README-fr.md) |
| [Chinese](README-zh.md) | [Hindi](README-hi.md) |
| [Korean](README-ko.md) | [Japanese](README-ja.md) |
### Code of conduct
# Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/ru.md).
## Thank you!
# Thank you!
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

View File

@ -18,7 +18,8 @@
```bash
git clone https://github.com/kubernetes/website.git
cd website
hugo server --buildFuture
git submodule update --init --recursive --depth 1
make serve
```
<!-- This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. -->

View File

@ -228,8 +228,8 @@ For more information about contributing to the Kubernetes documentation, see:
For more information about contributing to the Kubernetes documentation, see:
* [Contributing to the Kubernetes documentation](https://kubernetes.io/docs/contribute/)
* [Page content types](http://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
# Chinese localization

View File

@ -4,6 +4,9 @@
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
+ [Contributing to the docs](#contributing-to-the-docs)
+ [Localization ReadMes](#localization-readmemds)
# Using this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.

View File

@ -578,3 +578,64 @@ body.td-documentation {
  color: black;
  text-decoration: none !important;
}
@media print {
  /* Do not print announcements */
  #announcement, section#announcement, #fp-announcement, section#fp-announcement {
    display: none;
  }
}
#announcement, #fp-announcement {
  > * {
    color: inherit;
    background: inherit;
  }
  a {
    color: inherit;
    border-bottom: 1px solid #fff;
  }
  a:hover {
    color: inherit;
    border-bottom: none;
  }
}
#announcement {
  padding-top: 105px;
  padding-bottom: 25px;
}
.header-hero {
  padding-top: 40px;
}
/* Extra announcement height only for landscape viewports */
@media (min-aspect-ratio: 8/9) {
  #fp-announcement {
    min-height: 25vh;
  }
}
#fp-announcement aside {
  padding-top: 115px;
  padding-bottom: 25px;
}
.announcement {
  .content {
    margin-bottom: 0px;
  }
  > p {
    .gridPage #announcement .content p,
    .announcement > h4,
    .announcement > h3 {
      color: #ffffff;
    }
  }
}

View File

@ -13,7 +13,7 @@ disableBrowserError = true
disableKinds = ["taxonomy", "taxonomyTerm"]
ignoreFiles = [ "^OWNERS$", "README[-]+[a-z]*\\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
ignoreFiles = [ "(?:^|/)OWNERS$", "README[-]+[a-z]*\\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
timeout = 3000
@ -154,11 +154,6 @@ githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"
# GitHub repository link for editing a page and opening issues.
github_repo = "https://github.com/kubernetes/website"
# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true
announcement_bg = "#000000" # choose a dark color; text is white
#Searching
k8s_search = true

View File

@ -8,7 +8,7 @@ cid: community
<main>
<div class="content">
<h3>Ensuring that Kubernetes works well everywhere and for everyone.</h3>
<p>Connect with the Kubernetes community in our <a href="http://slack.k8s.io/">Slack channel</a>, <a href="https://discuss.kubernetes.io/">discussion forum</a>, or join the <a href="https://groups.google.com/forum/#!forum/kubernetes-dev"> kubernetes-dev Google group</a>. A weekly community meeting takes place via video conference to discuss the state of affairs; follow
<p>Connect with the Kubernetes community in our <a href="http://slack.k8s.io/">Slack channel</a>, <a href="https://discuss.kubernetes.io/">discussion forum</a>, or join the <a href="https://groups.google.com/g/kubernetes-dev"> kubernetes-dev Google group</a>. A weekly community meeting takes place via video conference to discuss the state of affairs; follow
<a href="https://github.com/kubernetes/community/blob/master/events/community-meeting.md">these instructions</a> for information on how to participate.</p>
<p>You can also connect with Kubernetes all over the world through our
<a href="https://www.meetup.com/topics/kubernetes/">Kubernetes Meetup Community</a> and the

View File

@ -23,7 +23,7 @@ This code of conduct applies both within project spaces and in public
Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, please contact a CNCF project maintainer or our mediator, Mishi Choudhary <mishi@linux.com>.
This code of conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/
This code of conduct is adapted from the Contributor Covenant (https://contributor-covenant.org), version 1.2.0, available at https://contributor-covenant.org/version/1/2/0/
### CNCF Events Code of Conduct

View File

@ -26,7 +26,7 @@ On the other hand, CNI is more philosophically aligned with Kubernetes. It's far
Additionally, it's trivial to wrap a CNI plugin and produce a more customized CNI plugin — it can be done with a simple shell script. CNM is much more complex in this regard. This makes CNI an attractive option for rapid development and iteration. Early prototypes have proven that it's possible to eject almost 100% of the currently hard-coded network logic in kubelet into a plugin.
We investigated [writing a "bridge" CNM driver](https://groups.google.com/forum/#!topic/kubernetes-sig-network/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes.
We investigated [writing a "bridge" CNM driver](https://groups.google.com/g/kubernetes-sig-network/c/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes.
Unfortunately, Docker drivers are hard to map to other control planes like Kubernetes. Specifically, drivers are not told the name of the network to which a container is being attached — just an ID that Docker allocates internally. This makes it hard for a driver to map back to any concept of network that exists in another system.
@ -34,6 +34,6 @@ This and other issues have been brought up to Docker developers by network vendo
For all of these reasons we have chosen to invest in CNI as the Kubernetes plugin model. There will be some unfortunate side-effects of this. Most of them are relatively minor (for example, `docker inspect` will not show an IP address), but some are significant. In particular, containers started by `docker run` might not be able to communicate with containers started by Kubernetes, and network integrators will have to provide CNI drivers if they want to fully integrate with Kubernetes. On the other hand, Kubernetes will get simpler and more flexible, and a lot of the ugliness of early bootstrapping (such as configuring Docker to use our bridge) will go away.
As we proceed down this path, we'll certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).
As we proceed down this path, we'll certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/g/kubernetes-sig-network).
Tim Hockin, Software Engineer, Google

View File

@ -56,13 +56,13 @@ Cri-containerd uses containerd to manage the full container lifecycle and all co
Let's use an example to demonstrate how cri-containerd works for the case when Kubelet creates a single-container pod:
1. 1.Kubelet calls cri-containerd, via the CRI runtime service API, to create a pod;
2. 2.cri-containerd uses containerd to create and start a special [pause container](https://www.ianlewis.org/en/almighty-pause-container) (the _sandbox container_) and put that container inside the pod's cgroups and namespace (steps omitted for brevity);
3. 3.cri-containerd configures the pod's network namespace using CNI;
4. 4.Kubelet subsequently calls cri-containerd, via the CRI image service API, to pull the application container image;
5. 5.cri-containerd further uses containerd to pull the image if the image is not present on the node;
6. 6.Kubelet then calls cri-containerd, via the CRI runtime service API, to create and start the application container inside the pod using the pulled container image;
7. 7.cri-containerd finally calls containerd to create the application container, put it inside the pod's cgroups and namespace, then to start the pod's new application container.
1. Kubelet calls cri-containerd, via the CRI runtime service API, to create a pod;
2. cri-containerd uses containerd to create and start a special [pause container](https://www.ianlewis.org/en/almighty-pause-container) (the _sandbox container_) and put that container inside the pod's cgroups and namespace (steps omitted for brevity);
3. cri-containerd configures the pod's network namespace using CNI;
4. Kubelet subsequently calls cri-containerd, via the CRI image service API, to pull the application container image;
5. cri-containerd further uses containerd to pull the image if the image is not present on the node;
6. Kubelet then calls cri-containerd, via the CRI runtime service API, to create and start the application container inside the pod using the pulled container image;
7. cri-containerd finally calls containerd to create the application container, put it inside the pod's cgroups and namespace, then to start the pod's new application container.
After these steps, a pod and its corresponding application container are created and running.

(Binary files not shown: 7 images added, sized 13 KiB, 30 KiB, 28 KiB, 19 KiB, 18 KiB, 19 KiB, and 20 KiB.)

View File

@ -0,0 +1,96 @@
---
layout: blog
title: "A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications"
date: 2020-12-21
slug: writing-crl-scheduler
---
**Author**: Chris Seto (Cockroach Labs)
As long as you're willing to follow the rules, deploying on Kubernetes and air travel can be quite pleasant. More often than not, things will "just work". However, if one is interested in travelling with an alligator that must remain alive or scaling a database that must remain available, the situation is likely to become a bit more complicated. It may even be easier to build one's own plane or database for that matter. Travelling with reptiles aside, scaling a highly available stateful system is no trivial task.
Scaling any system has two main components:
1. Adding or removing infrastructure that the system will run on, and
2. Ensuring that the system knows how to handle additional instances of itself being added and removed.
Most stateless systems, web servers for example, are created without the need to be aware of peers. Stateful systems, which includes databases like CockroachDB, have to coordinate with their peer instances and shuffle around data. As luck would have it, CockroachDB handles data redistribution and replication. The tricky part is being able to tolerate failures during these operations by ensuring that data and instances are distributed across many failure domains (availability zones).
One of Kubernetes' responsibilities is to place "resources" (e.g., a disk or container) into the cluster and satisfy the constraints they request. For example: "I must be in availability zone _A_" (see [Running in multiple zones](/docs/setup/best-practices/multiple-zones/#nodes-are-labeled)), or "I can't be placed onto the same node as this other Pod" (see [Affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)).
As an addition to those constraints, Kubernetes offers [Statefulsets](/docs/concepts/workloads/controllers/statefulset/) that provide identity to Pods as well as persistent storage that "follows" these identified pods. Identity in a StatefulSet is handled by an increasing integer at the end of a pod's name. It's important to note that this integer must always be contiguous: in a StatefulSet, if pods 1 and 3 exist then pod 2 must also exist.
Under the hood, CockroachCloud deploys each region of CockroachDB as a StatefulSet in its own Kubernetes cluster - see [Orchestrate CockroachDB in a Single Kubernetes Cluster](https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html).
In this article, I'll be looking at an individual region, one StatefulSet and one Kubernetes cluster which is distributed across at least three availability zones.
A three-node CockroachCloud cluster would look something like this:
![3-node, multi-zone cockroachdb cluster](image01.png)
When adding additional resources to the cluster we also distribute them across zones. For the speediest user experience, we add all Kubernetes nodes at the same time and then scale up the StatefulSet.
![illustration of phases: adding Kubernetes nodes to the multi-zone cockroachdb cluster](image02.png)
Note that anti-affinities are satisfied no matter the order in which pods are assigned to Kubernetes nodes. In the example, pods 0, 1 and 2 were assigned to zones A, B, and C respectively, but pods 3 and 4 were assigned in a different order, to zones B and A respectively. The anti-affinity is still satisfied because the pods are still placed in different zones.
To remove resources from a cluster, we perform these operations in reverse order.
We first scale down the StatefulSet and then remove from the cluster any nodes lacking a CockroachDB pod.
![illustration of phases: scaling down pods in a multi-zone cockroachdb cluster in Kubernetes](image03.png)
Now, remember that pods in a StatefulSet of size _n_ must have ids in the range `[0,n)`. When scaling down a StatefulSet by _m_, Kubernetes removes _m_ pods, starting from the highest ordinals and moving towards the lowest, [the reverse in which they were added](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees).
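As a concrete illustration of that ordering guarantee (the StatefulSet name here is illustrative), scaling a 6-replica StatefulSet down to 3 removes pods 5, 4, and 3, in that order:

```bash
# Pods with the highest ordinals are removed first
kubectl scale statefulset cockroachdb --replicas=3
```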
Consider the cluster topology below:
![illustration: cockroachdb cluster: 6 nodes distributed across 3 availability zones](image04.png)
As ordinals 5 through 3 are removed from this cluster, the StatefulSet continues to have a presence across all 3 availability zones.
![illustration: removing 3 nodes from a 6-node, 3-zone cockroachdb cluster](image05.png)
However, Kubernetes' scheduler doesn't _guarantee_ the placement above as we expected at first.
Our combined knowledge of the following is what led to this misconception.
* Kubernetes' ability to [automatically spread Pods across zone](/docs/setup/best-practices/multiple-zones/#pods-are-spread-across-zones)
* The behavior of StatefulSets: with _n_ replicas, when Pods are being deployed, they are created sequentially, in order from `{0..n-1}`. See [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) for more details.
Consider the following topology:
![illustration: 6-node cockroachdb cluster distributed across 3 availability zones](image06.png)
These pods were created in order and they are spread across all availability zones in the cluster. When ordinals 5 through 3 are terminated, this cluster will lose its presence in zone C!
![illustration: terminating 3 nodes in 6-node cluster spread across 3 availability zones, where 2/2 nodes in the same availability zone are terminated, knocking out that AZ](image07.png)
Worse yet, our automation, at the time, would remove Nodes A-2, B-2, and C-2, leaving CRDB-1 in an unscheduled state, since persistent volumes are only available in the zone they were initially created in.
To correct the latter issue, we now employ a "hunt and peck" approach to removing machines from a cluster. Rather than blindly removing Kubernetes nodes from the cluster, only nodes without a CockroachDB pod would be removed. The much more daunting task was to wrangle the Kubernetes scheduler.
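A rough sketch of that "hunt and peck" check, assuming the CockroachDB pods carry an `app=cockroachdb` label (the label and file path are illustrative, not our exact automation):

```bash
# List nodes that are NOT running a CockroachDB pod; only these are candidates for removal
kubectl get pods -l app=cockroachdb \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u > /tmp/crdb-nodes
kubectl get nodes -o name | sed 's|^node/||' | grep -vxFf /tmp/crdb-nodes
```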
## A session of brainstorming left us with 3 options:
### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints
While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE.
Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around.
### 2. Deploy a statefulset _per zone_.
Rather than having one StatefulSet distributed across all availability zones, a single StatefulSet with node affinities per zone would allow manual control over our zonal topology.
Our team had considered this as an option in the past which made it particularly appealing.
Ultimately, we decided to forego this option as it would have required a massive overhaul to our codebase and performing the migration on existing customer clusters would have been an equally large undertaking.
### 3. Write a custom Kubernetes scheduler.
Thanks to an example from [Kelsey Hightower](https://github.com/kelseyhightower/scheduler) and a blog post from [Banzai Cloud](https://banzaicloud.com/blog/k8s-custom-scheduler/), we decided to dive in head first and write our own [custom Kubernetes scheduler](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/).
Once our proof-of-concept was deployed and running, we quickly discovered that the Kubernetes scheduler is also responsible for mapping persistent volumes to the Pods that it schedules.
The output of [`kubectl get events`](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/#verifying-that-the-pods-were-scheduled-using-the-desired-schedulers) had led us to believe there was another system at play.
In our journey to find the component responsible for storage claim mapping, we discovered the [kube-scheduler plugin system](/docs/concepts/scheduling-eviction/scheduling-framework/). Our next POC was a `Filter` plugin that determined the appropriate availability zone by pod ordinal, and it worked flawlessly!
Our [custom scheduler plugin](https://github.com/cockroachlabs/crl-scheduler) is open source and runs in all of our CockroachCloud clusters.
Having control over how our StatefulSet pods are being scheduled has let us scale out with confidence.
We may look into retiring our plugin once pod topology spread constraints are available in GKE and EKS, but the maintenance overhead has been surprisingly low.
Better still: the plugin's implementation is orthogonal to our business logic. Deploying it, or retiring it for that matter, is as simple as changing the `schedulerName` field in our StatefulSet definitions.
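Switching a workload onto (or off of) the custom scheduler really is just that one field. A hedged sketch with illustrative names, using a merge patch:

```bash
# Point an existing StatefulSet's pod template at the custom scheduler (names are placeholders)
kubectl patch statefulset cockroachdb --type merge \
  -p '{"spec":{"template":{"spec":{"schedulerName":"crl-scheduler"}}}}'
```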
---
_[Chris Seto](https://twitter.com/_ostriches) is a software engineer at Cockroach Labs and works on their Kubernetes automation for [CockroachCloud](https://cockroachlabs.cloud), CockroachDB._

View File

@ -55,7 +55,7 @@ All your existing images will still work exactly the same.
### What about private images?
Also yes. All CRI runtimes support the same pull secrets configuration used in
Yes. All CRI runtimes support the same pull secrets configuration used in
Kubernetes, either via the PodSpec or ServiceAccount.
@ -82,7 +82,7 @@ usability of other container runtimes. As an example, OpenShift 4.x has been
using the [CRI-O] runtime in production since June 2019.
For other examples and references you can look at the adopters of containerd and
cri-o, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
CRI-O, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
@ -110,7 +110,7 @@ provide an end-to-end standard for managing containers.
That's a complex question and it depends on a lot of factors. If Docker is
working for you, moving to containerd should be a relatively easy swap and
has have strictly better performance and less overhead. However we encourage you
will have strictly better performance and less overhead. However, we encourage you
to explore all the options from the [CNCF landscape] in case another would be an
even better fit for your environment.
@ -129,7 +129,7 @@ common things to consider when migrating are:
- Kubectl plugins that require docker CLI or the control socket
- Kubernetes tools that require direct access to Docker (e.g. kube-imagepuller)
- Configuration of functionality like `registry-mirrors` and insecure registries
- Other support scripts or daemons that expect docker to be available and are run
- Other support scripts or daemons that expect Docker to be available and are run
outside of Kubernetes (e.g. monitoring or security agents)
- GPUs or special hardware and how they integrate with your runtime and Kubernetes
@ -141,13 +141,14 @@ runtime where possible.
Another thing to look out for is that anything expecting to run for system maintenance,
or nested inside a container when building images, will no longer work. For the
former, you can use the [`crictl`][cr] tool as a drop-in replacement (see [mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) and for the
latter you can use newer container build options like [img], [buildah], or
[kaniko] that don't require Docker.
latter you can use newer container build options like [img], [buildah],
[kaniko], or [buildkit-cli-for-kubectl] that don't require Docker.
[cr]: https://github.com/kubernetes-sigs/cri-tools
[img]: https://github.com/genuinetools/img
[buildah]: https://github.com/containers/buildah
[kaniko]: https://github.com/GoogleContainerTools/kaniko
[buildkit-cli-for-kubectl]: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl
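For the docker CLI to `crictl` mapping mentioned above, day-to-day commands translate fairly directly. A short sketch (`crictl` talks to the CRI socket on the node, so it is typically run there, usually with root privileges):

```bash
# Rough docker-to-crictl equivalents
crictl ps                          # docker ps
crictl images                      # docker images
crictl logs <container-id>         # docker logs <container-id>
crictl exec -it <container-id> sh  # docker exec -it <container-id> sh
```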
For containerd, you can start with their [documentation] to see what configuration
options are available as you migrate things over.

View File

@ -13,8 +13,8 @@ as a container runtime after v1.20.
**You do not need to panic. It's not as dramatic as it sounds.**
tl;dr Docker as an underlying runtime is being deprecated in favor of runtimes
that use the [Container Runtime Interface(CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
TL;DR Docker as an underlying runtime is being deprecated in favor of runtimes
that use the [Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
created for Kubernetes. Docker-produced images will continue to work in your
cluster with all runtimes, as they always have.
@ -48,7 +48,7 @@ is a popular choice for that runtime (other common options include containerd
and CRI-O), but Docker was not designed to be embedded inside Kubernetes, and
that causes a problem.
You see, the thing we call “Docker” isn't actually one thing -- it's an entire
You see, the thing we call “Docker” isn't actually one thing&mdash;it's an entire
tech stack, and one part of it is a thing called “containerd,” which is a
high-level container runtime by itself. Docker is cool and useful because it has
a lot of UX enhancements that make it really easy for humans to interact with
@ -66,11 +66,11 @@ does Kubernetes need the Dockershim?
Docker isn't compliant with CRI, the [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/).
If it were, we wouldn't need the shim, and this wouldn't be a thing. But it's
not the end of the world, and you don't need to panic -- you just need to change
not the end of the world, and you don't need to panic&mdash;you just need to change
your container runtime from Docker to another supported container runtime.
One thing to note: If you are relying on the underlying docker socket
(/var/run/docker.sock) as part of a workflow within your cluster today, moving
(`/var/run/docker.sock`) as part of a workflow within your cluster today, moving
to a different runtime will break your ability to use it. This pattern is often
called Docker in Docker. There are lots of options out there for this specific
use case including things like
@ -82,10 +82,10 @@ use case including things like
This change addresses a different environment than most folks use to interact
with Docker. The Docker installation you're using in development is unrelated to
the Docker runtime inside your Kubernetes cluster. It's confusing, I know. As a
developer, Docker is still useful to you in all the ways it was before this
the Docker runtime inside your Kubernetes cluster. It's confusing, we understand.
As a developer, Docker is still useful to you in all the ways it was before this
change was announced. The image that Docker produces isn't really a
Docker-specific image -- it's an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
Docker-specific image&mdash;it's an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
Any OCI-compliant image, regardless of the tool you use to build it, will look
the same to Kubernetes. Both [containerd](https://containerd.io/) and
[CRI-O](https://cri-o.io/) know how to pull those images and run them. This is
@ -95,10 +95,10 @@ So, this change is coming. It's going to cause issues for some, but it isn't
catastrophic, and generally it's a good thing. Depending on how you interact
with Kubernetes, this could mean nothing to you, or it could mean a bit of work.
In the long run, it's going to make things easier. If this is still confusing
for you, that's okay -- there's a lot going on here, Kubernetes has a lot of
for you, that's okay&mdash;there's a lot going on here; Kubernetes has a lot of
moving parts, and nobody is an expert in 100% of it. We encourage any and all
questions regardless of experience level or complexity! Our goal is to make sure
everyone is educated as much as possible on the upcoming changes. `<3` We hope
this has answered most of your questions and soothed some anxieties!
everyone is educated as much as possible on the upcoming changes. We hope
this has answered most of your questions and soothed some anxieties! ❤️
Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).

View File

@ -52,10 +52,13 @@ Currently more than [50 CSI drivers](https://kubernetes-csi.github.io/docs/drive
As of the publishing of this blog, the following participants from the [Kubernetes Data Protection Working Group](https://github.com/kubernetes/community/tree/master/wg-data-protection) are building products or have already built products using Kubernetes volume snapshots.
- [Dell-EMC: PowerProtect](https://www.delltechnologies.com/en-us/data-protection/powerprotect-data-manager.htm)
- Druva
- [Druva](https://www.druva.com/)
- [Kasten K10](https://www.kasten.io/)
- Pure Storage (Pure Service Orchestrator)
- Red Hat OpenShift Container Storage
- [NetApp: Project Astra](https://cloud.netapp.com/project-astra)
- [Portworx (PX-Backup)](https://portworx.com/products/px-backup/)
- [Pure Storage (Pure Service Orchestrator)](https://github.com/purestorage/pso-csi)
- [Red Hat OpenShift Container Storage](https://www.redhat.com/en/technologies/cloud-computing/openshift-container-storage)
- [Robin Cloud Native Storage](https://robin.io/storage/)
- [TrilioVault for Kubernetes](https://docs.trilio.io/kubernetes/)
- [Velero plugin for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi)
@ -198,7 +201,7 @@ There are many more people who have helped to move the snapshot feature from bet
- [Grant Griffiths](https://github.com/ggriffiths)
- [Humble Devassy Chirammal](https://github.com/humblec)
- [Jan Šafránek](https://github.com/jsafrane)
- [Jiawei Wang](https://github.com/jiawei0277)
- [Jiawei Wang](https://github.com/Jiawei0227)
- [Jing Xu](https://github.com/jingxu97)
- [Jordan Liggitt](https://github.com/liggitt)
- [Kartik Sharma](https://github.com/Kartik494)
@ -209,7 +212,7 @@ There are many more people who have helped to move the snapshot feature from bet
- [Prafull Ladha](https://github.com/prafull01)
- [Prateek Pandey](https://github.com/prateekpandey14)
- [Raunak Shah](https://github.com/RaunakShah)
- [Saad Ali](https://github.com/saadali)
- [Saad Ali](https://github.com/saad-ali)
- [Saikat Roychowdhury](https://github.com/saikat-royc)
- [Tim Hockin](https://github.com/thockin)
- [Xiangqian Yu](https://github.com/yuxiangqian)

View File

@ -0,0 +1,53 @@
---
layout: blog
title: 'Kubernetes 1.20: Pod Impersonation and Short-lived Volumes in CSI Drivers'
date: 2020-12-18
slug: kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi
---
**Author**: Shihang Zhang (Google)
Typically when a [CSI](https://github.com/container-storage-interface/spec/blob/baa71a34651e5ee6cb983b39c03097d7aa384278/spec.md) driver mounts credentials such as secrets and certificates, it has to authenticate against storage providers to access the credentials. However, access to those credentials is controlled on the basis of the pods' identities rather than the CSI driver's identity. CSI drivers, therefore, need some way to retrieve a pod's service account token.
Currently there are two suboptimal approaches to achieve this, either by granting CSI drivers the permission to use TokenRequest API or by reading tokens directly from the host filesystem.
Both of them exhibit the following drawbacks:
- Violating the principle of least privilege
- Every CSI driver needs to re-implement the logic of getting the pod's service account token
The second approach is more problematic due to:
- The audience of the token defaults to the kube-apiserver
- The token is not guaranteed to be available (e.g. `AutomountServiceAccountToken=false`)
- The approach does not work for CSI drivers that run as a different (non-root) user from the pods. See [file permission section for service account token](https://github.com/kubernetes/enhancements/blob/f40c24a5da09390bd521be535b38a4dbab09380c/keps/sig-storage/20180515-svcacct-token-volumes.md#file-permission)
- The token might be a legacy Kubernetes service account token, which doesn't expire if `BoundServiceAccountTokenVolume=false`
Kubernetes 1.20 introduces an alpha feature, `CSIServiceAccountToken`, to improve the security posture. The new feature allows CSI drivers to receive pods' [bound service account tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md).
This feature also provides a knob to re-publish volumes so that short-lived volumes can be refreshed.
## Pod Impersonation
### Using GCP APIs
Using [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), a Kubernetes service account can authenticate as a Google service account when accessing Google Cloud APIs. If a CSI driver needs to access GCP APIs on behalf of the pods that it is mounting volumes for, it can use the pod's service account token to [exchange for GCP tokens](https://cloud.google.com/iam/docs/reference/sts/rest). The pod's service account token is plumbed through the volume context in `NodePublishVolume` RPC calls when the feature `CSIServiceAccountToken` is enabled. For example: accessing [Google Secret Manager](https://cloud.google.com/secret-manager/) via a [secret store CSI driver](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp).
### Using Vault
If users configure [Kubernetes as an auth method](https://www.vaultproject.io/docs/auth/kubernetes), Vault uses the `TokenReview` API to validate the Kubernetes service account token. For CSI drivers using Vault as resources provider, they need to present the pod's service account to Vault. For example, [secrets store CSI driver](https://github.com/hashicorp/secrets-store-csi-driver-provider-vault) and [cert manager CSI driver](https://github.com/jetstack/cert-manager-csi).
## Short-lived Volumes
To keep short-lived volumes such as certificates effective, CSI drivers can specify `RequiresRepublish=true` in their `CSIDriver` object to have the kubelet periodically call `NodePublishVolume` on mounted volumes. These republishes allow CSI drivers to ensure that the volume content is up-to-date.
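A minimal sketch of what that looks like in a `CSIDriver` object, assuming the alpha `CSIServiceAccountToken` feature gate is enabled on a v1.20 cluster; the driver name and audience below are placeholders:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mydriver.example.com   # placeholder driver name
spec:
  tokenRequests:
    - audience: "vault"        # placeholder audience
  requiresRepublish: true
EOF
```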
## Next steps
This feature is alpha and projected to move to beta in 1.21. See more in the following KEP and CSI documentation:
- [KEP-1855: Service Account Token for CSI Driver](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1855-csi-driver-service-account-token/README.md)
- [Token Requests](https://kubernetes-csi.github.io/docs/token-requests.html)
Your feedback is always welcome!
- SIG-Auth [meets regularly](https://github.com/kubernetes/community/tree/master/sig-auth#meetings) and can be reached via [Slack and the mailing list](https://github.com/kubernetes/community/tree/master/sig-auth#contact)
- SIG-Storage [meets regularly](https://github.com/kubernetes/community/tree/master/sig-storage#meetings) and can be reached via [Slack and the mailing list](https://github.com/kubernetes/community/tree/master/sig-storage#contact).

View File

@ -0,0 +1,59 @@
---
layout: blog
title: 'Kubernetes 1.20: Granular Control of Volume Permission Changes'
date: 2020-12-14
slug: kubernetes-release-1.20-fsGroupChangePolicy-fsGroupPolicy
---
**Authors**: Hemant Kumar, Red Hat & Christian Huffman, Red Hat
Kubernetes 1.20 brings two important beta features, allowing Kubernetes admins and users alike to have more adequate control over how volume permissions are applied when a volume is mounted inside a Pod.
### Allow users to skip recursive permission changes on mount
Traditionally if your pod is running as a non-root user ([which you should](https://twitter.com/thockin/status/1333892204490735617)), you must specify an `fsGroup` inside the pod's security context so that the volume can be readable and writable by the Pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
But one side-effect of setting `fsGroup` is that, each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume - with a few exceptions noted below. This happens even if group ownership of the volume already matches the requested `fsGroup`, and can be pretty expensive for larger volumes with lots of small files, which causes pod startup to take a long time. This scenario has been a [known problem](https://github.com/kubernetes/kubernetes/issues/69699) for a while, and in Kubernetes 1.20 we are providing knobs to opt-out of recursive permission changes if the volume already has the correct permissions.
When configuring a pod's security context, set `fsGroupChangePolicy` to "OnRootMismatch" so that if the root of the volume already has the correct permissions, the recursive permission change can be skipped. Kubernetes ensures that permissions of the top-level directory are changed last the first time it applies permissions.
```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
```
You can learn more about this in [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
### Allow CSI Drivers to declare support for fsGroup based permissions
Although the previous section implied that Kubernetes _always_ recursively changes permissions of a volume if a Pod has a `fsGroup`, this is not strictly true. For certain multi-writer volume types, such as NFS or Gluster, the cluster doesn't perform recursive permission changes even if the pod has a `fsGroup`. Other volume types may not even support `chown()`/`chmod()`, which rely on Unix-style permission control primitives.
So how do we know when to apply recursive permission changes and when we shouldn't? For in-tree storage drivers, this was relatively simple. For [CSI](https://kubernetes-csi.github.io/docs/introduction.html#introduction) drivers that could span a multitude of platforms and storage types, this problem can be a bigger challenge.
Previously, whenever a CSI volume was mounted to a Pod, Kubernetes would attempt to automatically determine if the permissions and ownership should be modified. These methods were imprecise and could cause issues as we already mentioned, depending on the storage type.
The CSIDriver custom resource now has a `.spec.fsGroupPolicy` field, allowing storage drivers to explicitly opt in or out of these recursive modifications. By having the CSI driver specify a policy for the backing volumes, Kubernetes can avoid needless modification attempts. This optimization helps to reduce volume mount time and also cuts down on reported errors about modifications that would never succeed.
#### CSIDriver FSGroupPolicy API
Three FSGroupPolicy values are available as of Kubernetes 1.20, with more planned for future releases.
- **ReadWriteOnceWithFSType** - This is the default policy, applied if no `fsGroupPolicy` is defined; this preserves the behavior from previous Kubernetes releases. Each volume is examined at mount time to determine if permissions should be recursively applied.
- **File** - Always attempt to apply permission modifications, regardless of the filesystem type or PersistentVolumeClaim's access mode.
- **None** - Never apply permission modifications.
#### How do I use it?
The only configuration needed is defining `fsGroupPolicy` inside of the `.spec` for a CSIDriver. Once that element is defined, any subsequently mounted volumes will automatically use the defined policy. There's no additional deployment required!
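For example, a minimal sketch of a `CSIDriver` object that always applies permission modifications (the driver name is a placeholder):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mydriver.example.com   # placeholder driver name
spec:
  fsGroupPolicy: File
EOF
```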
#### What's next?
Depending on feedback and adoption, the Kubernetes team plans to push these implementations to GA in either 1.21 or 1.22.
### How can I learn more?
This feature is explained in more detail in Kubernetes project documentation: [CSI Driver fsGroup Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html) and [Configure volume permission and ownership change policy for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods).
### How do I get involved?
The [Kubernetes Slack channel #csi](https://kubernetes.slack.com/messages/csi) and any of the [standard SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact) are great mediums to reach out to the SIG Storage and the CSI team.
If you're interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage). We're rapidly growing and always welcome new contributors.

View File

@ -0,0 +1,134 @@
---
layout: blog
title: 'Third Party Device Metrics Reaches GA'
date: 2020-12-16
slug: third-party-device-metrics-reaches-ga
---
**Authors:** Renaud Gaubert (NVIDIA), David Ashpole (Google), and Pramod Ramarao (NVIDIA)
With Kubernetes 1.20, infrastructure teams who manage large scale Kubernetes clusters are seeing the graduation of two exciting and long awaited features:
* The Pod Resources API (introduced in 1.13) is finally graduating to GA. This allows Kubernetes plugins to obtain information about the node's resource usage and assignment; for example: which pod/container consumes which device.
* The `DisableAcceleratorMetrics` feature (introduced in 1.19) is graduating to beta and will be enabled by default. This removes device metrics reported by the kubelet in favor of the new plugin architecture.
Many of the features related to fundamental device support (device discovery, plugin, and monitoring) are reaching a strong level of stability.
Kubernetes users should see these features as stepping stones to enable more complex use cases (networking, scheduling, storage, etc.)!
One such example is Non Uniform Memory Access (NUMA) placement where, when selecting a device, an application typically wants to ensure that data transfer between CPU Memory and Device Memory is as fast as possible. In some cases, incorrect NUMA placement can nullify the benefit of offloading compute to an external device.
If these are topics of interest to you, consider joining the [Kubernetes Node Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-node) (SIG) for all topics related to the Kubernetes node, the COD (container orchestrated device) workgroup for topics related to runtimes, or the resource management forum for topics related to resource management!
## The Pod Resources API - Why does it need to exist?
Kubernetes is a vendor neutral platform. If we want it to support device monitoring, adding vendor-specific code in the Kubernetes code base is not an ideal solution. Ultimately, devices are a domain where deep expertise is needed and the best people to add and maintain code in that area are the device vendors themselves.
The Pod Resources API was built as a solution to this issue. Each vendor can build and maintain their own out-of-tree monitoring plugin. This monitoring plugin, often deployed as a separate pod within a cluster, can then associate the metrics a device emits with the pod that's using it.
For example, use the NVIDIA GPU dcgm-exporter to scrape metrics in Prometheus format:
```
$ curl -sL http://127.0.0.1:8080/metrics
# HELP DCGM_FI_DEV_SM_CLOCK SM clock frequency (in MHz).
# TYPE DCGM_FI_DEV_SM_CLOCK gauge
# HELP DCGM_FI_DEV_MEM_CLOCK Memory clock frequency (in MHz).
# TYPE DCGM_FI_DEV_MEM_CLOCK gauge
# HELP DCGM_FI_DEV_MEMORY_TEMP Memory temperature (in C).
# TYPE DCGM_FI_DEV_MEMORY_TEMP gauge
...
DCGM_FI_DEV_SM_CLOCK{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 139
DCGM_FI_DEV_MEM_CLOCK{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 405
DCGM_FI_DEV_MEMORY_TEMP{gpu="0", UUID="GPU-604ac76c-d9cf-fef3-62e9-d92044ab6e52",container="foo",namespace="bar",pod="baz"} 9223372036854775794
```
Each agent is expected to adhere to the node monitoring guidelines. In other words, plugins are expected to generate metrics in Prometheus format, and new metrics should not depend directly on the Kubernetes code base.
This allows consumers of the metrics to use a compatible monitoring pipeline to collect and analyze metrics from a variety of agents, even if they are maintained by different vendors.
![Device metrics flowchart](/images/blog/2020-12-16-third-party-device-metrics-hits-ga/metrics-chart.png)
## Disabling the NVIDIA GPU metrics - Warning {#nvidia-gpu-metrics-deprecated}
With the graduation of the plugin monitoring system, Kubernetes is deprecating the NVIDIA GPU metrics that are being reported by the kubelet.
With the [DisableAcceleratorMetrics](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics) feature being enabled by default in Kubernetes 1.20, NVIDIA GPUs are no longer special citizens in Kubernetes. This is a good thing in the spirit of being vendor-neutral, and enables the most suited people to maintain their plugin on their own release schedule!
Users will now need to either install the [NVIDIA DCGM exporter](https://github.com/NVIDIA/gpu-monitoring-tools) or use [bindings](https://github.com/nvidia/go-nvml) to gather more accurate and complete metrics about NVIDIA GPUs. This deprecation means that you can no longer rely on metrics that were reported by the kubelet, such as `container_accelerator_duty_cycle` or `container_accelerator_memory_used_bytes`, which were used to gather NVIDIA GPU memory utilization.
This means that users who used to rely on the NVIDIA GPU metrics reported by the kubelet will need to update their references and deploy the NVIDIA plugin. The metrics previously reported by Kubernetes map to the following dcgm-exporter metrics:
| Kubernetes Metrics | NVIDIA dcgm-exporter metric |
| ------------------------------------------ | ------------------------------------------- |
| `container_accelerator_duty_cycle` | `DCGM_FI_DEV_GPU_UTIL` |
| `container_accelerator_memory_used_bytes` | `DCGM_FI_DEV_FB_USED` |
| `container_accelerator_memory_total_bytes` | `DCGM_FI_DEV_FB_FREE + DCGM_FI_DEV_FB_USED` |
You might also be interested in other metrics such as `DCGM_FI_DEV_GPU_TEMP` (the GPU temperature) or `DCGM_FI_DEV_POWER_USAGE` (the power usage). The [default set](https://github.com/NVIDIA/gpu-monitoring-tools/blob/d5c9bb55b4d1529ca07068b7f81e690921ce2b59/etc/dcgm-exporter/default-counters.csv) is available in NVIDIA's [Data Center GPU Manager documentation](https://docs.nvidia.com/datacenter/dcgm/latest/dcgm-api/group__dcgmFieldIdentifiers.html).
Note that for this release you can still set the `DisableAcceleratorMetrics` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to _false_, effectively re-enabling the ability for the kubelet to report NVIDIA GPU metrics.
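If you manage the kubelet through a configuration file, a minimal sketch of re-enabling the old behavior could look like the following (the feature gate can also be set with the kubelet's `--feature-gates` command-line flag):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Re-enable the deprecated kubelet-reported accelerator metrics
  DisableAcceleratorMetrics: false
```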
Paired with the graduation of the Pod Resources API, these tools can be used to generate GPU telemetry [that can be used in visualization dashboards](https://grafana.com/grafana/dashboards/12239). Below is an example:
![Grafana visualization of device metrics](/images/blog/2020-12-16-third-party-device-metrics-hits-ga/grafana.png)
## The Pod Resources API - What can I go on to do with this?
As soon as this interface was introduced, many vendors started using it for widely different use cases! To list a few examples:
One example is the [kuryr-kubernetes](https://github.com/openstack/kuryr-kubernetes) CNI plugin working in tandem with the [intel-sriov-device-plugin](https://github.com/intel/sriov-network-device-plugin). This allows the CNI plugin to know which SR-IOV Virtual Functions (VFs) the kubelet allocated, and to use that information to correctly set up the container network namespace and use a device with the appropriate NUMA node. We also expect this interface to be used to track the allocated and available resources, with information about the NUMA topology of the worker node.
Another use-case is GPU telemetry, where GPU metrics can be associated with the containers and pods that the GPU is assigned to. One such example is the NVIDIA `dcgm-exporter`, but others can be easily built in the same paradigm.
The Pod Resources API is a simple gRPC service that informs clients about the pods the kubelet knows about, the devices the kubelet has assigned to those pods, and the CPUs assigned to them. This information is obtained from the internal state of the kubelet's Device Manager and CPU Manager, respectively.
Below is an example of the API, and of how a Go client could use that information in a few lines:
```protobuf
service PodResourcesLister {
rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
// Kubernetes 1.21
rpc Watch(WatchPodResourcesRequest) returns (stream WatchPodResourcesResponse) {}
}
```
```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
	// Assumed import path for the v1 Pod Resources API (Kubernetes v1.20+);
	// adjust it to match the Kubernetes version you build against.
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

const connectionTimeout = 10 * time.Second // arbitrary example value

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), connectionTimeout)
	defer cancel()
	// The kubelet serves the Pod Resources API on a local unix socket.
	socket := "/var/lib/kubelet/pod-resources/kubelet.sock"
	conn, err := grpc.DialContext(ctx, socket, grpc.WithInsecure(), grpc.WithBlock(),
		grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
			return net.DialTimeout("unix", addr, timeout)
		}),
	)
	if err != nil {
		panic(err)
	}
	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", resp)
}
```
Finally, note that you can watch the number of requests made to the Pod Resources endpoint by watching the new kubelet metric called `pod_resources_endpoint_requests_total` on the kubelet's `/metrics` endpoint.
## Is device monitoring suitable for production? Can I extend it? Can I contribute?
Yes! This feature, released in 1.13 almost 2 years ago, has seen broad adoption, is already used by several managed cloud services, and with its graduation to GA in Kubernetes 1.20 is production ready!
If you are a device vendor, you can start using it today! If you just want to monitor the devices in your cluster, go get the latest version of your monitoring plugin!
If you feel passionate about this area, join the Kubernetes community, help improve the API, or contribute device monitoring plugins!
## Acknowledgements
We thank the members of the community who have contributed to this feature or given feedback, including members of WG-Resource-Management, SIG-Node, and the Resource management forum!


View File

@ -13,7 +13,7 @@ new_case_study_styles: true
heading_background: /images/case-studies/appdirect/banner1.jpg
heading_title_logo: /images/appdirect_logo.png
subheading: >
AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetess
AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetes
case_study_details:
- Company: AppDirect
- Location: San Francisco, California

View File

@ -37,8 +37,8 @@ when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior in Kubernetes may be reported by contacting the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. For other projects, please contact a CNCF project maintainer or our mediator, Mishi Choudhary <mishi@linux.com>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
(https://contributor-covenant.org), version 1.2.0, available at
https://contributor-covenant.org/version/1/2/0/
### CNCF Events Code of Conduct

View File

@ -242,7 +242,7 @@ checks the state of each node every `--node-monitor-period` seconds.
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
There are two forms of heartbeats: updates of `NodeStatus` and the
[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
[Lease object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io).
Each Node has an associated Lease object in the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Lease is a lightweight resource, which improves the performance

View File

@ -130,11 +130,11 @@ Finally, add the same parameters into the API server start parameters.
Note that you may need to adapt the sample commands based on the hardware
architecture and cfssl version you are using.
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -o cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -o cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64 -o cfssl-certinfo
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
1. Create a directory to hold the artifacts and initialize cfssl:

View File

@ -527,5 +527,5 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
For background information on design details for API priority and fairness, see
the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
You can make suggestions and feature requests via [SIG API
Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
or the feature's [slack channel](http://kubernetes.slack.com/messages/api-priority-and-fairness).

View File

@ -1,13 +1,13 @@
---
reviewers:
title: Configuring kubelet Garbage Collection
title: Garbage collection for container images
content_type: concept
weight: 70
---
<!-- overview -->
Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.

View File

@ -114,7 +114,7 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen
### Azure CNI for Kubernetes
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node.
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
### Big Cloud Fabric from Big Switch Networks

View File

@ -50,39 +50,41 @@ rules:
## Metric lifecycle
Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deletion
Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric
Alpha metrics have no stability guarantees; as such they can be modified or deleted at any time.
Alpha metrics have no stability guarantees. These metrics can be modified or deleted at any time.
Stable metrics can be guaranteed to not change; Specifically, stability means:
Stable metrics are guaranteed to not change. This means:
* A stable metric without a deprecated signature will not be deleted or renamed
* A stable metric's type will not be modified
* the metric itself will not be deleted (or renamed)
* the type of metric will not be modified
Deprecated metrics are slated for deletion, but are still available for use.
These metrics include an annotation about the version in which they became deprecated.
Deprecated metric signal that the metric will eventually be deleted; to find which version, you need to check annotation, which includes from which kubernetes version that metric will be considered deprecated.
For example:
Before deprecation:
* Before deprecation
```
# HELP some_counter this counts things
# TYPE some_counter counter
some_counter 0
```
```
# HELP some_counter this counts things
# TYPE some_counter counter
some_counter 0
```
After deprecation:
* After deprecation
```
# HELP some_counter (Deprecated since 1.15.0) this counts things
# TYPE some_counter counter
some_counter 0
```
```
# HELP some_counter (Deprecated since 1.15.0) this counts things
# TYPE some_counter counter
some_counter 0
```
Once a metric is hidden then by default the metrics is not published for scraping. To use a hidden metric, you need to override the configuration for the relevant cluster component.
Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the [Show hidden metrics](#show-hidden-metrics) section.
Once a metric is deleted, the metric is not published. You cannot change this using an override.
Deleted metrics are no longer published and cannot be used.
## Show Hidden Metrics
## Show hidden metrics
As described above, admins can enable hidden metrics through a command-line flag on a specific binary. This intends to be used as an escape hatch for admins if they missed the migration of the metrics deprecated in the last release.
@ -154,5 +156,4 @@ endpoint on the scheduler. You must use the `--show-hidden-metrics-for-version=1
## {{% heading "whatsnext" %}}
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)

View File

@ -40,7 +40,7 @@ separate database or file service.
A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
that lets you store configuration for other objects to use. Unlike most
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accepts key-value pairs as their values. Both the `data`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data.

View File

@ -396,7 +396,7 @@ The kubelet supports different ways to measure Pod storage use:
{{< tabs name="resource-emphemeralstorage-measurement" >}}
{{% tab name="Periodic scanning" %}}
The kubelet performs regular, schedules checks that scan each
The kubelet performs regular, scheduled checks that scan each
`emptyDir` volume, container log directory, and writeable container layer.
The scan measures how much space is used.

View File

@ -69,6 +69,8 @@ A Service can be made to span multiple Deployments by omitting release-specific
A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.
- Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/) for common use cases. These standardized labels enrich the metadata in a way that allows tools, including `kubectl` and [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard), to work in an interoperable way.
- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
## Container Images

View File

@ -24,6 +24,16 @@ a password, a token, or a key. Such information might otherwise be put in a
Pod specification or in an image. Users can create Secrets and the system
also creates some Secrets.
{{< caution >}}
Kubernetes Secrets are, by default, stored as unencrypted base64-encoded
strings. By default they can be retrieved - as plain text - by anyone with API
access, or anyone with access to Kubernetes' underlying data store, etcd. In
order to safely use Secrets, we recommend you (at a minimum):
1. [Enable Encryption at Rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
2. [Enable RBAC rules that restrict reading and writing the Secret](https://kubernetes.io/docs/reference/access-authn-authz/authorization/). Be aware that secrets can be obtained implicitly by anyone with the permission to create a Pod.
{{< /caution >}}
<!-- body -->
## Overview of Secrets
@ -271,6 +281,13 @@ However, using the builtin Secret type helps unify the formats of your credentia
and the API server does verify if the required keys are provided in a Secret
configuration.
{{< caution >}}
SSH private keys do not establish trusted communication between an SSH client and
host server on their own. A secondary means of establishing trust is needed to
mitigate "man in the middle" attacks, such as a `known_hosts` file added to a
ConfigMap.
{{< /caution >}}
### TLS secrets
Kubernetes provides a builtin Secret type `kubernetes.io/tls` for to storing
@ -351,7 +368,7 @@ data:
A bootstrap type Secret has the following keys specified under `data`:
- `token_id`: A random 6 character string as the token identifier. Required.
- `token-id`: A random 6 character string as the token identifier. Required.
- `token-secret`: A random 16 character string as the actual token secret. Required.
- `description`: A human-readable string that describes what the token is
used for. Optional.
@ -769,7 +786,7 @@ these pods.
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry
password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field.
#### Manually specifying an imagePullSecret
@ -788,7 +805,6 @@ See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-contai
Manually created secrets (for example, one containing a token for accessing a GitHub account)
can be automatically attached to pods based on their service account.
See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process.
## Details

View File

@ -37,10 +37,10 @@ but with different settings.
Ensure the RuntimeClass feature gate is enabled (it is by default). See [Feature
Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling
feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _and_ kubelets.
feature gates. The `RuntimeClass` feature gate must be enabled on API server _and_ kubelets.
1. Configure the CRI implementation on nodes (runtime dependent)
2. Create the corresponding RuntimeClass resources
1. Configure the CRI implementation on nodes (runtime dependent).
2. Create the corresponding RuntimeClass resources.
### 1. Configure the CRI implementation on nodes
@ -51,7 +51,7 @@ CRI implementation for how to configure.
{{< note >}}
RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means
that all nodes are configured the same way with respect to container runtimes). To support
heterogenous node configurations, see [Scheduling](#scheduling) below.
heterogeneous node configurations, see [Scheduling](#scheduling) below.
{{< /note >}}
The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The
@ -98,7 +98,7 @@ spec:
# ...
```
This will instruct the Kubelet to use the named RuntimeClass to run this pod. If the named
This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named
RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the
`Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a
corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) for an
@ -144,7 +144,7 @@ See CRI-O's [config documentation](https://raw.githubusercontent.com/cri-o/cri-o
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
As of Kubernetes v1.16, RuntimeClass includes support for heterogenous clusters through its
As of Kubernetes v1.16, RuntimeClass includes support for heterogeneous clusters through its
`scheduling` fields. Through the use of these fields, you can ensure that pods running with this
RuntimeClass are scheduled to nodes that support it. To use the scheduling support, you must have
the [RuntimeClass admission controller](/docs/reference/access-authn-authz/admission-controllers/#runtimeclass)

View File

@ -201,7 +201,7 @@ Monitoring agents for device plugin resources can be deployed as a daemon, or as
The canonical directory `/var/lib/kubelet/pod-resources` requires privileged access, so monitoring
agents must run in a privileged security context. If a device monitoring agent is running as a
DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
{{< glossary_tooltip term_id="volume" >}} in the plugin's
{{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.

View File

@ -159,7 +159,7 @@ This option is provided to the network-plugin; currently **only kubenet supports
## Usage Summary
* `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`).
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge`, `lo` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
## {{% heading "whatsnext" %}}

View File

@ -103,7 +103,7 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}
If there isn't an Operator in the ecosystem that implements the behavior you
want, you can code your own. In [What's next](#whats-next) you'll find a few
want, you can code your own. In [What's next](#what-s-next) you'll find a few
links to libraries and tools you can use to write your own cloud native
Operator.

View File

@ -19,7 +19,7 @@ is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).
Most operations can be performed through the

View File

@ -59,8 +59,8 @@ metadata:
## Applications And Instances Of Applications
An application can be installed one or more times into a Kubernetes cluster and,
in some cases, the same namespace. For example, wordpress can be installed more
than once where different websites are different installations of wordpress.
in some cases, the same namespace. For example, WordPress can be installed more
than once where different websites are different installations of WordPress.
The name of an application and the instance name are recorded separately. For
example, WordPress has a `app.kubernetes.io/name` of `wordpress` while it has
@ -168,6 +168,6 @@ metadata:
...
```
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and Wordpress, the broader application, are included.
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and WordPress, the broader application, are included.

View File

@ -216,12 +216,17 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
name: pause
name: pause
spec:
containers:
- name: pause
- name: pause
image: k8s.gcr.io/pause
EOF
```
The output is similar to this:
```
Error from server (Forbidden): error when creating "STDIN": pods "pause" is forbidden: unable to validate against any pod security policy: []
```
@ -264,12 +269,17 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
name: pause
name: pause
spec:
containers:
- name: pause
- name: pause
image: k8s.gcr.io/pause
EOF
```
The output is similar to this
```
pod "pause" created
```
@ -281,14 +291,19 @@ kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
name: privileged
name: privileged
spec:
containers:
- name: pause
- name: pause
image: k8s.gcr.io/pause
securityContext:
privileged: true
EOF
```
The output is similar to this:
```
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
```

View File

@ -246,7 +246,7 @@ as at least one already-running pod that has a label with key "security" and val
on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
rule says that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with
rule says that the pod should not be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2". See the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`

View File

@ -30,7 +30,7 @@ time according to the overhead associated with the Pod's
[RuntimeClass](/docs/concepts/containers/runtime-class/).
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing
resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing
the Pod cgroup, and when carrying out Pod eviction ranking.
## Enabling Pod Overhead {#set-up}
@ -194,4 +194,4 @@ from source in the meantime.
* [RuntimeClass](/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)

View File

@ -82,7 +82,7 @@ requested the score value must be reversed as follows.
```yaml
shape:
- utilization: 0
score: 100
score: 10
- utilization: 100
score: 0
```

View File

@ -26,6 +26,8 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io)-based ingress
controller.
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller.
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load-balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.

View File

@ -35,6 +35,8 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
For a network flow between two pods to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the traffic. If either the egress policy on the source, or the ingress policy on the destination denies the traffic, the traffic will be denied.
## The NetworkPolicy resource {#networkpolicy-resource}
See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource.

View File

@ -134,7 +134,7 @@ For example:
* You want to point your Service to a Service in a different
{{< glossary_tooltip term_id="namespace" >}} or on another cluster.
* You are migrating a workload to Kubernetes. While evaluating the approach,
you run only a proportion of your backends in Kubernetes.
you run only a portion of your backends in Kubernetes.
In any of these scenarios you can define a Service _without_ a Pod selector.
For example:
@ -238,7 +238,7 @@ There are a few reasons for using proxying for Services:
### User space proxy mode {#proxy-mode-userspace}
In this mode, kube-proxy watches the Kubernetes master for the addition and
In this mode, kube-proxy watches the Kubernetes control plane for the addition and
removal of Service and Endpoint objects. For each Service it opens a
port (randomly chosen) on the local node. Any connections to this "proxy port"
are proxied to one of the Service's backend Pods (as reported via

View File

@ -231,7 +231,7 @@ the following types of volumes:
* Azure Disk
* Portworx
* FlexVolumes
* CSI
* {{< glossary_tooltip text="CSI" term_id="csi" >}}
You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
@ -311,7 +311,7 @@ If expanding underlying storage fails, the cluster administrator can manually re
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
* [`azureDisk`](/docs/concepts/sotrage/volumes/#azuredisk) - Azure Disk
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
@ -735,7 +735,7 @@ Only statically provisioned volumes are supported for alpha release. Administrat
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
Volume snapshots only support the out-of-tree CSI volume plugins. For details, see [Volume Snapshots](/docs/concepts/storage/volume-snapshots/).
In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ] (https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md).
In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md).
### Create a PersistentVolumeClaim from a Volume Snapshot {#create-persistent-volume-claim-from-volume-snapshot}

View File

@ -34,7 +34,7 @@ text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and
## API
There are two API extensions for this feature:
- [CSIStorageCapacity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csistoragecapacity-v1alpha1-storage-k8s-io) objects:
- CSIStorageCapacity objects:
these get produced by a CSI driver in the namespace
where the driver is installed. Each object contains capacity
information for one storage class and defines which nodes have

View File

@ -72,7 +72,7 @@ used for provisioning VolumeSnapshots. This field must be specified.
### DeletionPolicy
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot can either be `Retain` or `Delete`. This field must be specified.
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can either be `Retain` or `Delete`. This field must be specified.
If the deletionPolicy is `Delete`, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`, then both the underlying snapshot and VolumeSnapshotContent remain.

View File

@ -8,50 +8,74 @@ no_list: true
{{< glossary_definition term_id="workload" length="short" >}}
Whether your workload is a single component or several that work together, on Kubernetes you run
it inside a set of [Pods](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}}
on your cluster.
it inside a set of [_pods_](/docs/concepts/workloads/pods).
In Kubernetes, a `Pod` represents a set of running
{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then
a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that
Pod is running means that all the Pods on that node fail. Kubernetes treats that level
of failure as final: you would need to create a new Pod even if the node later recovers.
Kubernetes pods have a [defined lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).
For example, once a pod is running in your cluster then a critical fault on the
{{< glossary_tooltip text="node" term_id="node" >}} where that pod is running means that
all the pods on that node fail. Kubernetes treats that level of failure as final: you
would need to create a new `Pod` to recover, even if the node later becomes healthy.
However, to make life considerably easier, you don't need to manage each Pod directly.
Instead, you can use _workload resources_ that manage a set of Pods on your behalf.
However, to make life considerably easier, you don't need to manage each `Pod` directly.
Instead, you can use _workload resources_ that manage a set of pods on your behalf.
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
that make sure the right number of the right kind of Pod are running, to match the state
that make sure the right number of the right kind of pod are running, to match the state
you specified.
Those workload resources include:
Kubernetes provides several built-in workload resources:
* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
(replacing the legacy resource {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}});
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/);
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for running Pods that provide
node-local facilities, such as a storage driver or network plugin;
* [Job](/docs/concepts/workloads/controllers/job/) and
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/)
for tasks that run to completion.
* [`Deployment`](/docs/concepts/workloads/controllers/deployment/) and [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/)
(replacing the legacy resource
{{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}).
`Deployment` is a good fit for managing a stateless application workload on your cluster,
where any `Pod` in the `Deployment` is interchangeable and can be replaced if needed.
* [`StatefulSet`](/docs/concepts/workloads/controllers/statefulset/) lets you
run one or more related Pods that do track state somehow. For example, if your workload
records data persistently, you can run a `StatefulSet` that matches each `Pod` with a
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/). Your code, running in the
`Pods` for that `StatefulSet`, can replicate data to other `Pods` in the same `StatefulSet`
to improve overall resilience.
* [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) defines `Pods` that provide
node-local facilities. These might be fundamental to the operation of your cluster, such
as a networking helper tool, or be part of an
{{< glossary_tooltip text="add-on" term_id="addons" >}}.
Every time you add a node to your cluster that matches the specification in a `DaemonSet`,
the control plane schedules a `Pod` for that `DaemonSet` onto the new node.
* [`Job`](/docs/concepts/workloads/controllers/job/) and
[`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/)
define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas
`CronJobs` recur according to a schedule.
There are also two supporting concepts that you might find relevant:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
In the wider Kubernetes ecosystem, you can find third-party workload resources that provide
additional behaviors. Using a
[custom resource definition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
you can add in a third-party workload resource if you want a specific behavior that's not part
of Kubernetes' core. For example, if you wanted to run a group of `Pods` for your application but
stop work unless _all_ the Pods are available (perhaps for some high-throughput distributed task),
then you can implement or install an extension that does provide that feature.
## {{% heading "whatsnext" %}}
As well as reading about each resource, you can learn about specific tasks that relate to them:
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
* [Run a stateless application using a `Deployment`](/docs/tasks/run-application/run-stateless-application-deployment/)
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
* [Run Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
* [Run automated tasks with a `CronJob`](/docs/tasks/job/automated-tasks-with-cron-jobs/)
To learn about Kubernetes' mechanisms for separating code from configuration,
visit [Configuration](/docs/concepts/configuration/).
There are two supporting concepts that provide backgrounds about how Kubernetes manages pods
for applications:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
Once your application is running, you might want to make it available on the internet as
a [Service](/docs/concepts/services-networking/service/) or, for web application only,
using an [Ingress](/docs/concepts/services-networking/ingress).
a [`Service`](/docs/concepts/services-networking/service/) or, for web application only,
using an [`Ingress`](/docs/concepts/services-networking/ingress).
You can also visit [Configuration](/docs/concepts/configuration/) to learn about Kubernetes'
mechanisms for separating code from configuration.

View File

@ -49,6 +49,37 @@ This example CronJob manifest prints the current time and a hello message every
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
takes you through this example in more detail).
### Cron schedule syntax
```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
```
| Entry | Description | Equivalent to |
| ------------- | ------------- |------------- |
| @yearly (or @annually) | Run once a year at midnight of 1 January | 0 0 1 1 * |
| @monthly | Run once a month at midnight of the first day of the month | 0 0 1 * * |
| @weekly | Run once a week at midnight on Sunday morning | 0 0 * * 0 |
| @daily (or @midnight) | Run once a day at midnight | 0 0 * * * |
| @hourly | Run once an hour at the beginning of the hour | 0 * * * * |
For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:
`0 0 13 * 5`
To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
## CronJob limitations {#cron-job-limitations}
A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there

View File

@ -150,14 +150,17 @@ curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-rep
```
kubectl also supports cascading deletion.
To delete dependents automatically using kubectl, set `--cascade` to true. To
orphan dependents, set `--cascade` to false. The default value for `--cascade`
is true.
To delete dependents in the foreground using kubectl, set `--cascade=foreground`. To
orphan dependents, set `--cascade=orphan`.
The default behavior is to delete the dependents in the background which is the
behavior when `--cascade` is omitted or explicitly set to `background`.
Here's an example that orphans the dependents of a ReplicaSet:
```shell
kubectl delete replicaset my-repset --cascade=false
kubectl delete replicaset my-repset --cascade=orphan
```
### Additional note on Deployments

View File

@ -38,6 +38,7 @@ You can run the example with this command:
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml
```
The output is similar to this:
```
job.batch/pi created
```
@ -47,6 +48,7 @@ Check on the status of the Job with `kubectl`:
```shell
kubectl describe jobs/pi
```
The output is similar to this:
```
Name: pi
Namespace: default
@ -91,6 +93,7 @@ To list all the Pods that belong to a Job in a machine readable form, you can us
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```
The output is similar to this:
```
pi-5rwd7
```
@ -398,10 +401,11 @@ Therefore, you delete Job `old` but _leave its pods
running_, using `kubectl delete jobs/old --cascade=false`.
Before deleting it, you make a note of what selector it uses:
```
```shell
kubectl get job old -o yaml
```
```
The output is similar to this:
```yaml
kind: Job
metadata:
name: old
@ -420,7 +424,7 @@ they are controlled by Job `new` as well.
You need to specify `manualSelector: true` in the new Job since you are not using
the selector that the system normally generates for you automatically.
```
```yaml
kind: Job
metadata:
name: new

View File

@ -283,7 +283,7 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
### Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
```shell

View File

@ -54,6 +54,7 @@ Run the example job by downloading the example file and then running this comman
```shell
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
```
The output is similar to this:
```
replicationcontroller/nginx created
```
@ -63,6 +64,7 @@ Check on the status of the ReplicationController using this command:
```shell
kubectl describe replicationcontrollers/nginx
```
The output is similar to this:
```
Name: nginx
Namespace: default
@ -101,6 +103,7 @@ To list all the pods that belong to the ReplicationController in a machine reada
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
```
The output is similar to this:
```
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```

View File

@ -15,8 +15,7 @@ card:
_Pods_ are the smallest deployable units of computing that you can create and manage in Kubernetes.
A _Pod_ (as in a pod of whales or pea pod) is a group of one or more
{{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage/network resources, and a specification
for how to run the containers. A Pod's contents are always co-located and
{{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and
co-scheduled, and run in a shared context. A Pod models an
application-specific "logical host": it contains one or more application
containers which are relatively tightly coupled.
@ -191,6 +190,35 @@ details are abstracted away. That abstraction and separation of concerns simplif
system semantics, and makes it feasible to extend the cluster's behavior without
changing existing code.
## Pod update and replacement
As mentioned in the previous section, when the Pod template for a workload
resource is changed, the controller creates new Pods based on the updated
template instead of updating or patching the existing Pods.
Kubernetes doesn't prevent you from managing Pods directly. It is possible to
update some fields of a running Pod, in place. However, Pod update operations
like
[`patch`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#patch-pod-v1-core), and
[`replace`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replace-pod-v1-core)
have some limitations:
- Most of the metadata about a Pod is immutable. For example, you cannot
change the `namespace`, `name`, `uid`, or `creationTimestamp` fields;
the `generation` field is unique. It only accepts updates that increment the
field's current value.
- If the `metadata.deletionTimestamp` is set, no new entry can be added to the
`metadata.finalizers` list.
- Pod updates may not change fields other than `spec.containers[*].image`,
`spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or
`spec.tolerations`. For `spec.tolerations`, you can only add new entries.
- When updating the `spec.activeDeadlineSeconds` field, two types of updates
are allowed:
1. setting the unassigned field to a positive number;
1. updating the field from a positive number to a smaller, non-negative
number.
## Resource sharing and communication
Pods enable data sharing and communication among their constituent
@ -266,9 +294,10 @@ but cannot be controlled from there.
object definition describes the object in detail.
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container.
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}, you can read about the prior art, including:
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}), you can read about the prior art, including:
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).

View File

@ -49,7 +49,7 @@ Cluster administrator actions include:
- [Draining a node](/docs/tasks/administer-cluster/safely-drain-node/) for repair or upgrade.
- Draining a node from a cluster to scale the cluster down (learn about
[Cluster Autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaler)
[Cluster Autoscaling](https://github.com/kubernetes/autoscaler/#readme)
).
- Removing a pod from a node to permit something else to fit on that node.

View File

@ -49,9 +49,9 @@ as documented in [Resources](#resources).
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
`startupProbe` because they must run to completion before the Pod can be ready.
If you specify multiple init containers for a Pod, Kubelet runs each init
If you specify multiple init containers for a Pod, kubelet runs each init
container sequentially. Each init container must succeed before the next can run.
When all of the init containers have run to completion, Kubelet initializes
When all of the init containers have run to completion, kubelet initializes
the application containers for the Pod and runs them as usual.
## Using init containers
@ -133,6 +133,7 @@ You can start this Pod by running:
```shell
kubectl apply -f myapp.yaml
```
The output is similar to this:
```
pod/myapp-pod created
```
@ -141,6 +142,7 @@ And check on its status with:
```shell
kubectl get -f myapp.yaml
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
@ -150,6 +152,7 @@ or for more details:
```shell
kubectl describe -f myapp.yaml
```
The output is similar to this:
```
Name: myapp-pod
Namespace: default
@ -224,6 +227,7 @@ To create the `mydb` and `myservice` services:
```shell
kubectl apply -f services.yaml
```
The output is similar to this:
```
service/myservice created
service/mydb created
@ -235,6 +239,7 @@ Pod moves into the Running state:
```shell
kubectl get -f myapp.yaml
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 9m
@ -257,7 +262,7 @@ if the Pod `restartPolicy` is set to Always, the init containers use
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initialized` set to true.
is in the `Pending` state but should have a condition `Initialized` set to false.
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
must execute again.
@ -319,11 +324,9 @@ reasons:
## {{% heading "whatsnext" %}}
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)

View File

@ -85,6 +85,13 @@ Value | Description
`Failed` | All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
`Unknown` | For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.
{{< note >}}
When a Pod is being deleted, it is shown as `Terminating` by some kubectl commands.
This `Terminating` status is not one of the Pod phases.
A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
You can use the flag `--force` to [terminate a Pod by force](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced).
{{< /note >}}
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
@ -325,7 +332,7 @@ a time longer than the liveness interval would allow.
If your container usually starts in more than
`initialDelaySeconds + failureThreshold × periodSeconds`, you should specify a
startup probe that checks the same endpoint as the liveness probe. The default for
`periodSeconds` is 30s. You should then set its `failureThreshold` high enough to
`periodSeconds` is 10s. You should then set its `failureThreshold` high enough to
allow the container to start, without changing the default values of the liveness
probe. This helps to protect against deadlocks.
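As a sketch (the container name, image, and endpoint are assumptions, not taken from this page), a startup probe that reuses the liveness endpoint and tolerates up to 30 × 10s = 300s of startup time could look like this:
```yaml
spec:
  containers:
    - name: slow-app                              # assumed container name
      image: registry.example.com/slow-app:1.0    # assumed image
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
      startupProbe:
        httpGet:
          path: /healthz                          # same endpoint as the liveness probe
          port: 8080
        failureThreshold: 30                      # 30 × periodSeconds (10s) = 300s to finish starting
        periodSeconds: 10
```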

View File

@ -66,7 +66,7 @@ Instead of manually applying labels, you can also reuse the [well-known labels](
The API field `pod.spec.topologySpreadConstraints` is defined as below:
```
```yaml
apiVersion: v1
kind: Pod
metadata:

View File

@ -95,14 +95,16 @@ deadlines.
### Open a placeholder PR
1. Open a pull request against the
1. Open a **draft** pull request against the
`dev-{{< skew nextMinorVersion >}}` branch in the `kubernetes/website` repository, with a small
commit that you will amend later.
commit that you will amend later. To create a draft pull request, use the
Create Pull Request drop-down and select **Create Draft Pull Request**,
then click **Draft Pull Request**.
2. Edit the pull request description to include links to [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
PR(s) and [kubernetes/enhancements](https://github.com/kubernetes/enhancements) issue(s).
3. Use the Prow command `/milestone {{< skew nextMinorVersion >}}` to
assign the PR to the relevant milestone. This alerts the docs person managing
this release that the feature docs are coming.
3. Leave a comment on the related [kubernetes/enhancements](https://github.com/kubernetes/enhancements)
issue with a link to the PR to notify the docs person managing this release that
the feature docs are coming and should be tracked for the release.
If your feature does not need
any documentation changes, make sure the sig-release team knows this, by
@ -112,7 +114,9 @@ milestone.
### PR ready for review
When ready, populate your placeholder PR with feature documentation.
When ready, populate your placeholder PR with feature documentation and change
the state of the PR from draft to **ready for review**. To mark a pull request
as ready for review, navigate to the merge box and click **Ready for review**.
Do your best to describe your feature and how to use it. If you need help structuring your documentation, ask in the `#sig-docs` Slack channel.
@ -120,6 +124,13 @@ When you complete your content, the documentation person assigned to your featur
To ensure technical accuracy, the content may also require a technical review from corresponding SIG(s).
Use their suggestions to get the content to a release ready state.
If your feature is an Alpha or Beta feature and is behind a feature gate,
make sure you add it to the [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
table as part of your pull request. For new feature gates, a description of
the feature gate is also required. If your feature is GA'ed or deprecated,
make sure to move it from that table to the [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)
table, keeping its Alpha and Beta history intact.
If your feature needs documentation and the first draft
content is not received, the feature may be removed from the milestone.
@ -129,9 +140,3 @@ If your PR has not yet been merged into the `dev-{{< skew nextMinorVersion >}}`
docs person managing the release to get it in by the deadline. If your feature needs
documentation and the docs are not ready, the feature may be removed from the
milestone.
If your feature is an Alpha feature and is behind a feature gate, make sure you
add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
as part of your pull request. If your feature is moving out of Alpha, make sure to
remove it from that table.

View File

@ -1,33 +1,30 @@
---
approvers:
- chenopis
title: Custom Hugo Shortcodes
content_type: concept
---
<!-- overview -->
This page explains the custom Hugo shortcodes that can be used in Kubernetes markdown documentation.
This page explains the custom Hugo shortcodes that can be used in Kubernetes Markdown documentation.
Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content-management/shortcodes).
<!-- body -->
## Feature state
In a markdown page (`.md` file) on this site, you can add a shortcode to display version and state of the documented feature.
In a Markdown page (`.md` file) on this site, you can add a shortcode to display version and state of the documented feature.
### Feature state demo
Below is a demo of the feature state snippet, which displays the feature as stable in Kubernetes version 1.10.
Below is a demo of the feature state snippet, which displays the feature as stable in the latest Kubernetes version.
```
{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
{{</* feature-state state="stable" */>}}
```
Renders to:
{{< feature-state for_k8s_version="v1.10" state="stable" >}}
{{< feature-state state="stable" >}}
The valid values for `state` are:
@ -38,62 +35,22 @@ The valid values for `state` are:
### Feature state code
The displayed Kubernetes version defaults to that of the page or the site. This can be changed by passing the <code>for_k8s_version</code> shortcode parameter.
The displayed Kubernetes version defaults to that of the page or the site. You can change the
feature state version by passing the `for_k8s_version` shortcode parameter. For example:
```
{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
{{</* feature-state for_k8s_version="v1.10" state="beta" */>}}
```
Renders to:
{{< feature-state for_k8s_version="v1.10" state="stable" >}}
#### Alpha feature
```
{{</* feature-state state="alpha" */>}}
```
Renders to:
{{< feature-state state="alpha" >}}
#### Beta feature
```
{{</* feature-state state="beta" */>}}
```
Renders to:
{{< feature-state state="beta" >}}
#### Stable feature
```
{{</* feature-state state="stable" */>}}
```
Renders to:
{{< feature-state state="stable" >}}
#### Deprecated feature
```
{{</* feature-state state="deprecated" */>}}
```
Renders to:
{{< feature-state state="deprecated" >}}
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
## Glossary
There are two glossary tooltips.
There are two glossary shortcodes: `glossary_tooltip` and `glossary_definition`.
You can reference glossary terms with an inclusion that automatically updates and replaces content with the relevant links from [our glossary](/docs/reference/glossary/). When the term is moused-over by someone
using the online documentation, the glossary entry displays a tooltip.
You can reference glossary terms with an inclusion that automatically updates and replaces content with the relevant links from [our glossary](/docs/reference/glossary/). When the glossary term is moused-over, the glossary entry displays a tooltip. The glossary term also displays as a link.
As well as inclusions with tooltips, you can reuse the definitions from the glossary in
page content.
@ -102,7 +59,7 @@ The raw data for glossary terms is stored at [https://github.com/kubernetes/webs
### Glossary demo
For example, the following include within the markdown renders to {{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
For example, the following include within the Markdown renders to {{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
```
{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
@ -113,13 +70,16 @@ Here's a short glossary definition:
```
{{</* glossary_definition prepend="A cluster is" term_id="cluster" length="short" */>}}
```
which renders as:
{{< glossary_definition prepend="A cluster is" term_id="cluster" length="short" >}}
You can also include a full definition:
```
{{</* glossary_definition term_id="cluster" length="all" */>}}
```
which renders as:
{{< glossary_definition term_id="cluster" length="all" >}}
@ -255,7 +215,63 @@ Renders to:
{{< tab name="JSON File" include="podtemplate.json" />}}
{{< /tabs >}}
## Version strings
To generate a version string for inclusion in the documentation, you can choose from
several version shortcodes. Each version shortcode displays a version string derived from
the value of a version parameter found in the site configuration file, `config.toml`.
The two most commonly used version parameters are `latest` and `version`.
### `{{</* param "version" */>}}`
The `{{</* param "version" */>}}` shortcode generates the value of the current version of
the Kubernetes documentation from the `version` site parameter. The `param` shortcode accepts the name of one site parameter, in this case: `version`.
{{< note >}}
In previously released documentation, `latest` and `version` parameter values are not equivalent.
After a new version is released, `latest` is incremented and the value of `version` for the documentation set remains unchanged. For example, a previously released version of the documentation displays `version` as
`v1.19` and `latest` as `v1.20`.
{{< /note >}}
Renders to:
{{< param "version" >}}
### `{{</* latest-version */>}}`
The `{{</* latest-version */>}}` shortcode returns the value of the `latest` site parameter.
The `latest` site parameter is updated when a new version of the documentation is released.
This parameter does not always match the value of `version` in a documentation set.
Renders to:
{{< latest-version >}}
### `{{</* latest-semver */>}}`
The `{{</* latest-semver */>}}` shortcode generates the value of `latest` without the "v" prefix.
Renders to:
{{< latest-semver >}}
### `{{</* version-check */>}}`
The `{{</* version-check */>}}` shortcode checks if the `min-kubernetes-server-version`
page parameter is present and then uses this value to compare to `version`.
Renders to:
{{< version-check >}}
### `{{</* latest-release-notes */>}}`
The `{{</* latest-release-notes */>}}` shortcode generates a version string from `latest` and removes
the "v" prefix. The shortcode prints a new URL for the release note CHANGELOG page with the modified version string.
Renders to:
{{< latest-release-notes >}}
## {{% heading "whatsnext" %}}
@ -264,4 +280,3 @@ Renders to:
* Learn about [page content types](/docs/contribute/style/page-content-types/).
* Learn about [opening a pull request](/docs/contribute/new-content/open-a-pr/).
* Learn about [advanced contributing](/docs/contribute/advanced/).

View File

@ -143,7 +143,7 @@ Do | Don't
:--| :-----
Set the value of the `replicas` field in the configuration file. | Set the value of the "replicas" field in the configuration file.
The value of the `exec` field is an ExecAction object. | The value of the "exec" field is an ExecAction object.
Run the process as a Daemonset in the `kube-system` namespace. | Run the process as a Daemonset in the kube-system namespace.
Run the process as a DaemonSet in the `kube-system` namespace. | Run the process as a DaemonSet in the kube-system namespace.
{{< /table >}}
### Use code style for Kubernetes command tool and component names

View File

@ -18,7 +18,7 @@ This section of the Kubernetes documentation contains references.
## API Reference
* [Kubernetes API Reference {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
* [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes.
## API Client Libraries
@ -54,4 +54,3 @@ An archive of the design docs for Kubernetes functionality. Good starting points
[Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and
[Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).

View File

@ -669,13 +669,6 @@ allowVolumeExpansion: true
For more information about persistent volume claims, see [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
### PodPreset {#podpreset}
This admission controller injects a pod with the fields specified in a matching PodPreset.
See also [PodPreset concept](/docs/concepts/workloads/pods/podpreset/) and
[Inject Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset)
for more information.
### PodSecurityPolicy {#podsecuritypolicy}
This admission controller acts on creation and modification of the pod and determines if it should be admitted
@ -792,25 +785,8 @@ versions 1.9 and later).
## Is there a recommended set of admission controllers to use?
Yes. For Kubernetes version 1.10 and later, the recommended admission controllers are enabled by default (shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)), so you do not need to explicitly specify them. You can enable additional admission controllers beyond the default set using the `--enable-admission-plugins` flag (**order doesn't matter**).
Yes. The recommended admission controllers are enabled by default (shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)), so you do not need to explicitly specify them. You can enable additional admission controllers beyond the default set using the `--enable-admission-plugins` flag (**order doesn't matter**).
{{< note >}}
`--admission-control` was deprecated in 1.10 and replaced with `--enable-admission-plugins`.
{{< /note >}}
For Kubernetes 1.9 and earlier, we recommend running the following set of admission controllers using the `--admission-control` flag (**order matters**).
* v1.9
```shell
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
```
* It's worth reiterating that in 1.9, these happen in a mutating phase
and a validating phase, and that for example `ResourceQuota` runs in the validating
phase, and therefore is the last admission controller to run.
`MutatingAdmissionWebhook` appears before it in this list, because it runs
in the mutating phase.
For earlier versions, there was no concept of validating versus mutating and the
admission controllers ran in the exact order specified.

View File

@ -86,8 +86,9 @@ Because ClusterRoles are cluster-scoped, you can also use them to grant access t
* cluster-scoped resources (like {{< glossary_tooltip text="nodes" term_id="node" >}})
* non-resource endpoints (like `/healthz`)
* namespaced resources (like Pods), across all namespaces
For example: you can use a ClusterRole to allow a particular user to run
`kubectl get pods --all-namespaces`.
`kubectl get pods --all-namespaces`
Here is an example of a ClusterRole that can be used to grant read access to
{{< glossary_tooltip text="secrets" term_id="secret" >}} in any particular namespace,
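or across all namespaces, depending on how it is bound. A minimal sketch of such a ClusterRole (the object name is illustrative) is:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader        # illustrative name
rules:
- apiGroups: [""]            # "" indicates the core API group
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
```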
@ -514,7 +515,7 @@ subjects:
namespace: kube-system
```
For all service accounts in the "qa" namespace:
For all service accounts in the "qa" group in any namespace:
```yaml
subjects:
@ -522,6 +523,15 @@ subjects:
name: system:serviceaccounts:qa
apiGroup: rbac.authorization.k8s.io
```
For all service accounts in the "dev" group in the "development" namespace:
```yaml
subjects:
- kind: Group
name: system:serviceaccounts:dev
apiGroup: rbac.authorization.k8s.io
namespace: development
```
For all service accounts in any namespace:
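Following the pattern of the preceding examples, a sketch of the subject entry references the `system:serviceaccounts` group:
```yaml
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
```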

View File

@ -166,7 +166,8 @@ different Kubernetes components.
| `StorageVersionHash` | `true` | Beta | 1.15 | |
| `Sysctls` | `true` | Beta | 1.11 | |
| `TTLAfterFinished` | `false` | Alpha | 1.12 | |
| `TopologyManager` | `false` | Alpha | 1.16 | |
| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
| `TopologyManager` | `true` | Beta | 1.18 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |

View File

@ -73,13 +73,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of `system:anonymous`, and a group name of `system:unauthenticated`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--application-metrics-count-limit int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max number of application metrics to store (per container) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns,it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--authentication-token-webhook</td>
</tr>
@ -122,13 +115,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the file containing Azure container registry configuration information.</td>
</tr>
<tr>
<td colspan="2">--boot-id-file string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `/proc/sys/kernel/random/boot_id`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of files to check for `boot-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--bootstrap-kubeconfig string</td>
</tr>
@ -234,13 +220,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">The Kubelet will load its initial configuration from this file. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Omit this flag to use the built-in default configuration values. Command-line flags override configuration from this file.</td>
</tr>
<tr>
<td colspan="2">--container-hints string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `/etc/cadvisor/container_hints.json`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">location of the container hints file. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--container-log-max-files int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td>
</tr>
@ -269,12 +248,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] The endpoint of remote runtime service. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Examples: `unix:///var/run/dockershim.sock`, `npipe:////./pipe/dockershim`.</td>
</tr>
<tr>
<td colspan="2">--containerd string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `/run/containerd/containerd.sock`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The `containerd` endpoint. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--contention-profiling</td>
@ -311,13 +284,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; CPU Manager reconciliation period. Examples: `10s`, or `1m`. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--docker string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `unix:///var/run/docker.sock`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The `docker` endpoint. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-endpoint string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `unix:///var/run/docker.sock`</td>
</tr>
@ -325,55 +291,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use this for the `docker` endpoint to communicate with. This docker-specific flag only works when container-runtime is set to `docker`.</td>
</tr>
<tr>
<td colspan="2">--docker-env-metadata-whitelist string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">a comma-separated list of environment variable keys that needs to be collected for docker containers (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Only report docker containers in addition to root stats (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-root string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `/var/lib/docker`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: docker root is read from docker info (this is a fallback).</td>
</tr>
<tr>
<td colspan="2">--docker-tls</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">use TLS to connect to docker (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-tls-ca string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `ca.pem`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">path to trusted CA. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-tls-cert string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `cert.pem`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">path to client certificate. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--docker-tls-key string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `key.pem`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Path to private key. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--dynamic-config-dir string</td>
</tr>
@ -402,13 +319,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--enable-load-reader</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Whether to enable CPU load reader (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--enable-server&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `true`</td>
</tr>
@ -438,24 +348,10 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--event-storage-age-limit string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `default=0`</td>
<td colspan="2">--eviction-hard mapStringString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is a duration. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--event-storage-event-limit string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `default=0`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is an integer. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--eviction-hard mapStringString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. On a Linux node, the default value also includes `nodefs.inodesFree<5%`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -528,6 +424,13 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. This flag will be removed in 1.23. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--experimental-log-sanitization bool</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--experimental-mounter-path string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `mount`</td>
</tr>
@ -548,8 +451,9 @@ kubelet [flags]
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `key=value` pairs that describe feature gates for alpha/experimental features. Options are:<br/>
APIListChunking=true|false (BETA - default=true)<br/>
APIPriorityAndFairness=true|false (ALPHA - default=false)<br/>
APIPriorityAndFairness=true|false (BETA - default=true)<br/>
APIResponseCompression=true|false (BETA - default=true)<br/>
APIServerIdentity=true|false (ALPHA - default=false)<br/>
AllAlpha=true|false (ALPHA - default=false)<br/>
AllBeta=true|false (BETA - default=false)<br/>
AllowInsecureBackendProxy=true|false (BETA - default=true)<br/>
@ -573,31 +477,40 @@ CSIMigrationOpenStack=true|false (BETA - default=false)<br/>
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)<br/>
CSIMigrationvSphere=true|false (BETA - default=false)<br/>
CSIMigrationvSphereComplete=true|false (BETA - default=false)<br/>
CSIServiceAccountToken=true|false (ALPHA - default=false)<br/>
CSIStorageCapacity=true|false (ALPHA - default=false)<br/>
CSIVolumeFSGroupPolicy=true|false (ALPHA - default=false)<br/>
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)<br/>
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)<br/>
ConfigurableFSGroupPolicy=true|false (BETA - default=true)<br/>
CronJobControllerV2=true|false (ALPHA - default=false)<br/>
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
DefaultPodTopologySpread=true|false (ALPHA - default=false)<br/>
DefaultPodTopologySpread=true|false (BETA - default=true)<br/>
DevicePlugins=true|false (BETA - default=true)<br/>
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)<br/>
DownwardAPIHugePages=true|false (ALPHA - default=false)<br/>
DynamicKubeletConfig=true|false (BETA - default=true)<br/>
EfficientWatchResumption=true|false (ALPHA - default=false)<br/>
EndpointSlice=true|false (BETA - default=true)<br/>
EndpointSliceNodeName=true|false (ALPHA - default=false)<br/>
EndpointSliceProxying=true|false (BETA - default=true)<br/>
EndpointSliceTerminatingCondition=true|false (ALPHA - default=false)<br/>
EphemeralContainers=true|false (ALPHA - default=false)<br/>
ExpandCSIVolumes=true|false (BETA - default=true)<br/>
ExpandInUsePersistentVolumes=true|false (BETA - default=true)<br/>
ExpandPersistentVolumes=true|false (BETA - default=true)<br/>
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>
GenericEphemeralVolume=true|false (ALPHA - default=false)<br/>
GracefulNodeShutdown=true|false (ALPHA - default=false)<br/>
HPAContainerMetrics=true|false (ALPHA - default=false)<br/>
HPAScaleToZero=true|false (ALPHA - default=false)<br/>
HugePageStorageMediumSize=true|false (BETA - default=true)<br/>
HyperVContainer=true|false (ALPHA - default=false)<br/>
IPv6DualStack=true|false (ALPHA - default=false)<br/>
ImmutableEphemeralVolumes=true|false (BETA - default=true)<br/>
KubeletCredentialProviders=true|false (ALPHA - default=false)<br/>
KubeletPodResources=true|false (BETA - default=true)<br/>
LegacyNodeRoleBehavior=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
MixedProtocolLBService=true|false (ALPHA - default=false)<br/>
NodeDisruptionExclusion=true|false (BETA - default=true)<br/>
NonPreemptingPriority=true|false (BETA - default=true)<br/>
PodDisruptionBudget=true|false (BETA - default=true)<br/>
@ -605,31 +518,26 @@ PodOverhead=true|false (BETA - default=true)<br/>
ProcMountType=true|false (ALPHA - default=false)<br/>
QOSReserved=true|false (ALPHA - default=false)<br/>
RemainingItemCount=true|false (BETA - default=true)<br/>
RemoveSelfLink=true|false (ALPHA - default=false)<br/>
RemoveSelfLink=true|false (BETA - default=true)<br/>
RootCAConfigMap=true|false (BETA - default=true)<br/>
RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
RunAsGroup=true|false (BETA - default=true)<br/>
RuntimeClass=true|false (BETA - default=true)<br/>
SCTPSupport=true|false (BETA - default=true)<br/>
SelectorIndex=true|false (BETA - default=true)<br/>
ServerSideApply=true|false (BETA - default=true)<br/>
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)<br/>
ServiceAppProtocol=true|false (BETA - default=true)<br/>
ServiceAccountIssuerDiscovery=true|false (BETA - default=true)<br/>
ServiceLBNodePortControl=true|false (ALPHA - default=false)<br/>
ServiceNodeExclusion=true|false (BETA - default=true)<br/>
ServiceTopology=true|false (ALPHA - default=false)<br/>
SetHostnameAsFQDN=true|false (ALPHA - default=false)<br/>
SetHostnameAsFQDN=true|false (BETA - default=true)<br/>
SizeMemoryBackedVolumes=true|false (ALPHA - default=false)<br/>
StorageVersionAPI=true|false (ALPHA - default=false)<br/>
StorageVersionHash=true|false (BETA - default=true)<br/>
SupportNodePidsLimit=true|false (BETA - default=true)<br/>
SupportPodPidsLimit=true|false (BETA - default=true)<br/>
Sysctls=true|false (BETA - default=true)<br/>
TTLAfterFinished=true|false (ALPHA - default=false)<br/>
TokenRequest=true|false (BETA - default=true)<br/>
TokenRequestProjection=true|false (BETA - default=true)<br/>
TopologyManager=true|false (BETA - default=true)<br/>
ValidateProxyRedirects=true|false (BETA - default=true)<br/>
VolumeSnapshotDataSource=true|false (BETA - default=true)<br/>
WarningHeaders=true|false (BETA - default=true)<br/>
WinDSR=true|false (ALPHA - default=false)<br/>
WinOverlay=true|false (ALPHA - default=false)<br/>
WinOverlay=true|false (BETA - default=true)<br/>
WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
@ -641,13 +549,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--global-housekeeping-interval duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `1m0s`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Interval between global housekeepings. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--hairpin-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `promiscuous-bridge`</td>
</tr>
@ -697,6 +598,20 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--image-credential-provider-bin-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the directory where credential provider plugin binaries are located.</td>
</tr>
<tr>
<td colspan="2">--image-credential-provider-config string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The path to the credential provider plugin config file.</td>
</tr>
<tr>
<td colspan="2">--image-gc-high-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 85</td>
</tr>
@ -757,7 +672,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"> Burst to use while talking with kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Burst to use while talking with kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -778,7 +693,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--kube-reserved mapStringString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: &lt;None&gt;</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for kubernetes system components. Currently `cpu`, `memory` and local `ephemeral-storage` for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'`) pairs that describe resources reserved for kubernetes system components. Currently `cpu`, `memory` and local `ephemeral-storage` for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -816,13 +731,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">When logging hits line `<file>:<N>`, emit a stack trace.</td>
</tr>
<tr>
<td colspan="2">--log-cadvisor-usage</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Whether to log the usage of the cAdvisor container (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--log-dir string</td>
</tr>
@ -855,7 +763,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `text`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Sets the log format. Permitted formats: `text`, `json`.\nNon-default formats don't honor these flags: `--add-dir-header`, `--alsologtostderr`, `--log-backtrace-at`, `--log_dir`, `--log-file`, `--log-file-max-size`, `--logtostderr`, `--skip_headers`, `--skip_log_headers`, `--stderrthreshold`, `--log-flush-frequency`.\nNon-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Sets the log format. Permitted formats: `text`, `json`.\nNon-default formats don't honor these flags: `--add-dir-header`, `--alsologtostderr`, `--log-backtrace-at`, `--log-dir`, `--log-file`, `--log-file-max-size`, `--logtostderr`, `--skip_headers`, `--skip_log_headers`, `--stderrthreshold`, `--log-flush-frequency`.\nNon-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -865,13 +773,6 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">log to standard error instead of files.</td>
</tr>
<tr>
<td colspan="2">--machine-id-file string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `/etc/machine-id,/var/lib/dbus/machine-id`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of files to check for `machine-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--make-iptables-util-chains&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `true`</td>
</tr>
@ -990,6 +891,14 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Traffic to IPs outside this range will use IP masquerade. Set to `0.0.0.0/0` to never masquerade. (DEPRECATED: will be removed in a future version)</td>
</tr>
<tr>
<td colspan="2">--one-output</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, only write logs to their native severity level (instead of also writing to each lower severity level).</td>
</tr>
<tr>
<td colspan="2">--oom-score-adj int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -999</td>
</tr>
@ -1082,7 +991,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
</tr>
<tr>
<td colspan="2">--register-node</td>
<td colspan="2">--register-node&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `true`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the API server. If `--kubeconfig` is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with. Default to `true`.</td>
@ -1096,7 +1005,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
</tr>
<tr>
<td colspan="2">--register-with-taints []api.Taint</td>
<td colspan="2">--register-with-taints mapStringString</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the given list of taints (comma separated `<key>=<value>:<effect>`). No-op if `--register-node` is `false`.</td>
@ -1202,61 +1111,12 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
</tr>
<tr>
<td colspan="2">--stderrthreshold severity&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2</td>
<td colspan="2">--stderrthreshold int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">logs at or above this threshold go to stderr.</td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `1m0s`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `cadvisor`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `localhost:8086`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database `host:port`. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `root`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database password. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `stats`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Table name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `root`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Database username. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)</td>
</tr>
<tr>
<td colspan="2">--streaming-connection-idle-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `4h0m0s`</td>
</tr>
@ -1282,7 +1142,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--system-reserved mapStringString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \<none\></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for non-kubernetes components. Currently only `cpu` and `memory` are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'`) pairs that describe resources reserved for non-kubernetes components. Currently only `cpu` and `memory` are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -1331,6 +1191,13 @@ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_R
<td></td><td style="line-height: 130%; word-wrap: break-word;">Topology Manager policy to use. Possible values: `none`, `best-effort`, `restricted`, `single-numa-node`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--topology-manager-scope string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `container`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Scope to which topology hints are applied. The Topology Manager collects hints from Hint Providers and applies them to the defined scope to ensure pod admission. Possible values: `container` (default), `pod`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">-v, --v Level</td>
</tr>

View File

@ -1,18 +0,0 @@
---
title: PodPreset
id: podpreset
date: 2018-04-12
full_link:
short_description: >
An API object that injects information such as secrets, volume mounts, and environment variables into pods at creation time.
aka:
tags:
- operation
---
An API object that injects information such as secrets, volume mounts, and environment variables into {{< glossary_tooltip text="Pods" term_id="pod" >}} at creation time.
<!--more-->
This object chooses the Pods to inject information into using standard selectors. This allows the podspec definitions to be nonspecific, decoupling the podspec from environment specific configuration.

View File

@ -194,6 +194,9 @@ kubectl get pods --show-labels
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# Output decoded secrets without external tools
kubectl get secret ${secret_name} -o go-template='{{range $k,$v := .data}}{{$k}}={{$v|base64decode}}{{"\n"}}{{end}}'
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
@ -314,6 +317,7 @@ kubectl exec my-pod -- ls / # Run command in existing po
kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)
kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```
## Interacting with Nodes and cluster
@ -391,6 +395,7 @@ Verbosity | Description
`--v=2` | Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
`--v=3` | Extended information about changes.
`--v=4` | Debug level verbosity.
`--v=5` | Trace level verbosity.
`--v=6` | Display requested resources.
`--v=7` | Display HTTP request headers.
`--v=8` | Display HTTP request contents.

View File

@ -37,23 +37,22 @@ All `kubectl run` generators are deprecated. See the Kubernetes v1.17 documentat
#### Generators
You can generate the following resources with a kubectl command, `kubectl create --dry-run=client -o yaml`:
```
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole.
configmap Create a configmap from a local file, directory or literal value.
cronjob Create a cronjob with the specified name.
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name.
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole.
secret Create a secret using specified subcommand.
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name.
```
* `clusterrole`: Create a ClusterRole.
* `clusterrolebinding`: Create a ClusterRoleBinding for a particular ClusterRole.
* `configmap`: Create a ConfigMap from a local file, directory or literal value.
* `cronjob`: Create a CronJob with the specified name.
* `deployment`: Create a Deployment with the specified name.
* `job`: Create a Job with the specified name.
* `namespace`: Create a Namespace with the specified name.
* `poddisruptionbudget`: Create a PodDisruptionBudget with the specified name.
* `priorityclass`: Create a PriorityClass with the specified name.
* `quota`: Create a Quota with the specified name.
* `role`: Create a Role with single rule.
* `rolebinding`: Create a RoleBinding for a particular Role or ClusterRole.
* `secret`: Create a Secret using specified subcommand.
* `service`: Create a Service using specified subcommand.
* `serviceaccount`: Create a ServiceAccount with the specified name.
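For example (a sketch; the Deployment name `web` and the `nginx` image are arbitrary choices), running `kubectl create deployment web --image=nginx --dry-run=client -o yaml` prints a manifest along these lines, which you can save and edit before applying:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```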
### `kubectl apply`

View File

@ -37,16 +37,11 @@ kubectl:
# start the pod running nginx
kubectl create deployment --image=nginx nginx-app
```
```shell
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```
```
deployment.apps/nginx-app created
```
```
```shell
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```

View File

@ -108,7 +108,7 @@ extension points:
- `SelectorSpread`: Favors spreading across nodes for Pods that belong to
{{< glossary_tooltip text="Services" term_id="service" >}},
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}.
Extension points: `PreScore`, `Score`.
- `ImageLocality`: Favors nodes that already have the container images that the
Pod runs.
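As a sketch only (the `v1beta1` configuration version is an assumption and may differ by release), a scheduler configuration could adjust these score plugins like so:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          - name: SelectorSpread      # turn off spreading by Service/ReplicaSet/StatefulSet
        enabled:
          - name: ImageLocality       # keep image-locality scoring, with a higher weight
            weight: 2
```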

View File

@ -74,6 +74,7 @@ their authors, not the Kubernetes team.
| Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) |
| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) |
| Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) |
| Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) |
| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) |
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) |

View File

@ -71,7 +71,7 @@ the appliers, results in a conflict. Shared field owners may give up ownership
of a field by removing it from their configuration.
Field management is stored in a `managedFields` field that is part of an object's
[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta).
[`metadata`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta).
A simple example of an object created by Server Side Apply could look like this:

View File

@ -125,6 +125,62 @@ sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Ubuntu 18.04/20.04" %}}
```shell
# (Install containerd)
sudo apt-get update && sudo apt-get install -y containerd
```
```shell
# Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Debian 9+" %}}
```shell
# (Install containerd)
## Set up the repository
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```
```shell
## Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
```shell
## Add Docker apt repository.
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
```
```shell
## Install containerd
sudo apt-get update && sudo apt-get install -y containerd.io
```
```shell
# Set default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
@ -154,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io
```shell
## Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
@ -212,12 +268,18 @@ Use the following commands to install CRI-O on your system:
{{< note >}}
The CRI-O major and minor versions must match the Kubernetes major and minor versions.
For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o).
For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes).
{{< /note >}}
Install and configure prerequisites:
```shell
# Create the .conf file to load the modules at bootup
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
@ -244,9 +306,9 @@ to the appropriate value from the following table:
<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
<br />
Then run
@ -280,9 +342,9 @@ To install on the following operating systems, set the environment variable `OS`
<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
<br />
Then run
@ -314,9 +376,9 @@ To install on the following operating systems, set the environment variable `OS`
<br />
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
You can pin your installation to a specific release.
To install version 1.18.3, set `VERSION=1.18:1.18.3`.
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
<br />
Then run
@ -337,7 +399,7 @@ sudo zypper install cri-o
{{% tab name="Fedora" %}}
Set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, `VERSION=1.18`.
For instance, if you want to install CRI-O 1.20, `VERSION=1.20`.
You can find available versions with:
```shell
@ -361,10 +423,26 @@ sudo systemctl daemon-reload
sudo systemctl start crio
```
Refer to the [CRI-O installation guide](https://github.com/kubernetes-sigs/cri-o#getting-started)
Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)
for more information.
#### cgroup driver
CRI-O uses the systemd cgroup driver by default. To switch to the `cgroupfs`
cgroup driver, either edit `/etc/crio/crio.conf` or place a drop-in
configuration in `/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
```toml
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
```
Please also note the changed `conmon_cgroup`, which has to be set to the value
`pod` when using CRI-O with `cgroupfs`. It is generally necessary to keep the
cgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O
in sync.
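For instance, a minimal sketch that writes the drop-in shown above and restarts CRI-O (adjust the values so they match the cgroup driver your kubelet is configured to use):

```shell
# Write the drop-in configuration and restart CRI-O to pick it up
cat <<EOF | sudo tee /etc/crio/crio.conf.d/02-cgroup-manager.conf
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
EOF
sudo systemctl restart crio
```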
### Docker
@ -407,6 +485,11 @@ sudo apt-get update && sudo apt-get install -y \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```
```shell
## Create /etc/docker
sudo mkdir /etc/docker
```
```shell
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
@ -498,4 +581,3 @@ sudo systemctl enable docker
Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
for more information.

View File

@ -21,14 +21,14 @@ For information how to create a cluster with kubeadm once you have performed thi
* One or more machines running one of:
- Ubuntu 16.04+
- Debian 9+
- CentOS 7
- Red Hat Enterprise Linux (RHEL) 7
- CentOS 7+
- Red Hat Enterprise Linux (RHEL) 7+
- Fedora 25+
- HypriotOS v1.0.1+
- Flatcar Container Linux (tested with 2512.3.0)
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
* 2 CPUs or more
* Full network connectivity between all machines in the cluster (public or private network is fine)
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
* 2 CPUs or more.
* Full network connectivity between all machines in the cluster (public or private network is fine).
* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
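For example, a minimal sketch that disables swap for the current boot (to make this persistent you would also remove or comment out any swap entries in `/etc/fstab`):

```shell
# Turn off all swap devices and files immediately
sudo swapoff -a
```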
@ -59,6 +59,10 @@ Make sure that the `br_netfilter` module is loaded. This can be done by running
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
@ -76,7 +80,7 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext
|----------|-----------|------------|-------------------------|---------------------------|
| TCP | Inbound | 6443* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
@ -84,7 +88,7 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|-------------|-----------------------|-------------------------|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
† Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
@ -160,7 +164,7 @@ need to ensure they match the version of the Kubernetes control plane you want
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
kubelet and the control plane is supported, but the kubelet version may never exceed the API
server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
@ -299,7 +303,7 @@ Please mind, that you **only** have to do that if the cgroup driver of your CRI
is not `cgroupfs`, because that is the default value in the kubelet already.
{{< note >}}
Since `--cgroup-driver` flag has been deprecated by kubelet, if you have that in `/var/lib/kubelet/kubeadm-flags.env`
Since `--cgroup-driver` flag has been deprecated by the kubelet, if you have that in `/var/lib/kubelet/kubeadm-flags.env`
or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it and use the KubeletConfiguration instead
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}
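As a quick, non-authoritative check that the setting lives in the KubeletConfiguration rather than in a deprecated flag, you could run something like:

```shell
# Should show a cgroupDriver entry in the KubeletConfiguration
grep cgroupDriver /var/lib/kubelet/config.yaml

# Should no longer contain the deprecated --cgroup-driver flag
grep -- "--cgroup-driver" /var/lib/kubelet/kubeadm-flags.env
```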

View File

@ -102,8 +102,7 @@ This may be caused by a number of problems. The most common are:
1. Install Docker again following instructions
[here](/docs/setup/production-environment/container-runtimes/#docker).
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to [Configure cgroup driver used by kubelet on control-plane node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)
- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
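For example, a rough sketch of that investigation (container names and IDs will differ on your cluster):

```shell
# List control plane containers, including ones that have already exited
docker ps -a | grep kube

# Inspect the logs of a suspect container (substitute a real container ID)
docker logs <container-id>
```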

View File

@ -140,6 +140,11 @@ Pre-requisites:
Optionally upgrade `kubelet` instances to **{{< skew latestVersion >}}** (or they can be left at **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}**)
{{< note >}}
Before performing a minor version `kubelet` upgrade, [drain](/docs/tasks/administer-cluster/safely-drain-node/) pods from that node.
In-place minor version `kubelet` upgrades are not supported.
{{</ note >}}
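For example (replace `<node-name>` with the node whose kubelet you are about to upgrade):

```shell
# Drain the node before the kubelet upgrade, then make it schedulable again afterwards
kubectl drain <node-name> --ignore-daemonsets
# ... upgrade the kubelet on that node ...
kubectl uncordon <node-name>
```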
{{< warning >}}
Running a cluster with `kubelet` instances that are persistently two minor versions behind `kube-apiserver` is not recommended:

View File

@ -160,7 +160,7 @@ for the pathnames of the certificate files. You need to change these to the actu
of certificate files in your environment.
Sometimes you may want to use Base64-encoded data embedded here instead of separate
certificate files; in that case you need add the suffix `-data` to the keys, for example,
certificate files; in that case you need to add the suffix `-data` to the keys, for example,
`certificate-authority-data`, `client-certificate-data`, `client-key-data`.
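For example, assuming a PEM file named `ca.crt` (the file name is illustrative), you could produce the value for `certificate-authority-data` with:

```shell
# Base64-encode the certificate as a single line for embedding in the kubeconfig
base64 ca.crt | tr -d '\n'
```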
Each context is a triple (cluster, user, namespace). For example, the

View File

@ -1,22 +1,24 @@
---
title: Connect a Front End to a Back End Using a Service
title: Connect a Frontend to a Backend Using Services
content_type: tutorial
weight: 70
---
<!-- overview -->
This task shows how to create a frontend and a backend
microservice. The backend microservice is a hello greeter. The
frontend and backend are connected using a Kubernetes
{{< glossary_tooltip term_id="service" >}} object.
This task shows how to create a _frontend_ and a _backend_ microservice. The backend
microservice is a hello greeter. The frontend exposes the backend using nginx and a
Kubernetes {{< glossary_tooltip term_id="service" >}} object.
## {{% heading "objectives" %}}
* Create and run a microservice using a {{< glossary_tooltip term_id="deployment" >}} object.
* Route traffic to the backend using a frontend.
* Use a Service object to connect the frontend application to the
backend application.
* Create and run a sample `hello` backend microservice using a
{{< glossary_tooltip term_id="deployment" >}} object.
* Use a Service object to send traffic to the backend microservice's multiple replicas.
* Create and run a `nginx` frontend microservice, also using a Deployment object.
* Configure the frontend microservice to send traffic to the backend microservice.
* Use a Service object of `type=LoadBalancer` to expose the frontend microservice
outside the cluster.
## {{% heading "prerequisites" %}}
@ -34,24 +36,24 @@ require a supported environment. If your environment does not support this, you
The backend is a simple hello greeter microservice. Here is the configuration
file for the backend Deployment:
{{< codenew file="service/access/hello.yaml" >}}
{{< codenew file="service/access/backend-deployment.yaml" >}}
Create the backend Deployment:
```shell
kubectl apply -f https://k8s.io/examples/service/access/hello.yaml
kubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml
```
View information about the backend Deployment:
```shell
kubectl describe deployment hello
kubectl describe deployment backend
```
The output is similar to this:
```
Name: hello
Name: backend
Namespace: default
CreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700
Labels: app=hello
@ -59,7 +61,7 @@ Labels: app=hello
track=stable
Annotations: deployment.kubernetes.io/revision=1
Selector: app=hello,tier=backend,track=stable
Replicas: 7 desired | 7 updated | 7 total | 7 available | 0 unavailable
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
@ -80,14 +82,14 @@ Conditions:
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: hello-3621623197 (7/7 replicas created)
NewReplicaSet: hello-3621623197 (3/3 replicas created)
Events:
...
```
## Creating the backend Service object
## Creating the `hello` Service object
The key to connecting a frontend to a backend is the backend
The key to sending requests from a frontend to a backend is the backend
Service. A Service creates a persistent IP address and DNS name entry
so that the backend microservice can always be reached. A Service uses
{{< glossary_tooltip text="selectors" term_id="selector" >}} to find
@ -95,42 +97,51 @@ the Pods that it routes traffic to.
First, explore the Service configuration file:
{{< codenew file="service/access/hello-service.yaml" >}}
{{< codenew file="service/access/backend-service.yaml" >}}
In the configuration file, you can see that the Service routes traffic to Pods
that have the labels `app: hello` and `tier: backend`.
In the configuration file, you can see that the Service, named `hello`, routes
traffic to Pods that have the labels `app: hello` and `tier: backend`.
Create the `hello` Service:
Create the backend Service:
```shell
kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
kubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml
```
At this point, you have a backend Deployment running, and you have a
Service that can route traffic to it.
At this point, you have a `backend` Deployment running three replicas of your `hello`
application, and you have a Service that can route traffic to them. However, this
service is neither available nor resolvable outside the cluster.
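You can verify this, for example, by listing the backend Pods and the `hello` Service (the labels and Service name below come from the manifests used in this task); the Service has a ClusterIP but no external IP:

```shell
# The backend replicas selected by the Service
kubectl get pods -l app=hello,tier=backend

# The cluster-internal Service that fronts them
kubectl get service hello
```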
## Creating the frontend
Now that you have your backend, you can create a frontend that connects to the backend.
The frontend connects to the backend worker Pods by using the DNS name
given to the backend Service. The DNS name is "hello", which is the value
of the `name` field in the preceding Service configuration file.
Now that you have your backend running, you can create a frontend that is accessible
outside the cluster and that connects to the backend by proxying requests to it.
The Pods in the frontend Deployment run an nginx image that is configured
to find the hello backend Service. Here is the nginx configuration file:
The frontend sends requests to the backend worker Pods by using the DNS name
given to the backend Service. The DNS name is `hello`, which is the value
of the `name` field in the `examples/service/access/backend-service.yaml`
configuration file.
{{< codenew file="service/access/frontend.conf" >}}
The Pods in the frontend Deployment run an nginx image that is configured
to proxy requests to the `hello` backend Service. Here is the nginx configuration file:
Similar to the backend, the frontend has a Deployment and a Service. The
configuration for the Service has `type: LoadBalancer`, which means that
the Service uses the default load balancer of your cloud provider.
{{< codenew file="service/access/frontend-nginx.conf" >}}
{{< codenew file="service/access/frontend.yaml" >}}
Similar to the backend, the frontend has a Deployment and a Service. An important
difference to notice between the backend and frontend services is that the
configuration for the frontend Service has `type: LoadBalancer`, which means that
the Service uses a load balancer provisioned by your cloud provider and will be
accessible from outside the cluster.
{{< codenew file="service/access/frontend-service.yaml" >}}
{{< codenew file="service/access/frontend-deployment.yaml" >}}
Create the frontend Deployment and Service:
```shell
kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
kubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml
kubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml
```
The output verifies that both resources were created:
@ -178,7 +189,7 @@ cluster.
## Send traffic through the frontend
The frontend and backends are now connected. You can hit the endpoint
The frontend and backend are now connected. You can hit the endpoint
by using the curl command on the external IP of your frontend Service.
```shell
@ -196,17 +207,17 @@ The output shows the message generated by the backend:
To delete the Services, enter this command:
```shell
kubectl delete services frontend hello
kubectl delete services frontend backend
```
To delete the Deployments, the ReplicaSets and the Pods that are running the backend and frontend applications, enter this command:
```shell
kubectl delete deployment frontend hello
kubectl delete deployment frontend backend
```
## {{% heading "whatsnext" %}}
* Learn more about [Services](/docs/concepts/services-networking/service/)
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/)
* Learn more about [DNS for Service and Pods](/docs/concepts/services-networking/dns-pod-service/)

View File

@ -158,10 +158,16 @@ for database debugging.
Any of the above commands works. The output is similar to this:
```
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379
Forwarding from 127.0.0.1:7000 -> 6379
Forwarding from [::1]:7000 -> 6379
```
{{< note >}}
`kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal.
{{< /note >}}
2. Start the Redis command line interface:
```shell
@ -180,7 +186,23 @@ for database debugging.
PONG
```
### Optionally let _kubectl_ choose the local port {#let-kubectl-choose-local-port}
If you don't need a specific local port, you can let `kubectl` choose and allocate
the local port, relieving you of having to manage local port conflicts, with
the slightly simpler syntax:
```shell
kubectl port-forward deployment/redis-master :6379
```
The `kubectl` tool finds a local port number that is not in use (avoiding low port numbers,
because these might be used by other applications). The output is similar to:
```
Forwarding from 127.0.0.1:62162 -> 6379
Forwarding from [::1]:62162 -> 6379
```
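You can then connect through whichever port `kubectl` picked, for example (using the `62162` shown above; yours will differ):

```shell
# Talk to Redis through the locally chosen port
redis-cli -p 62162 ping
```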
<!-- discussion -->
@ -203,4 +225,3 @@ The support for UDP protocol is tracked in
## {{% heading "whatsnext" %}}
Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward).

View File

@ -30,7 +30,7 @@ The Kubernetes project provides skeleton cloud-controller-manager code with Go i
To build an out-of-tree cloud-controller-manager for your cloud:
1. Create a go package with an implementation that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go).
2. Use [`main.go` in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go) from Kubernetes core as a template for your `main.go`. As mentioned above, the only difference should be the cloud package that will be imported.
2. Use [`main.go` in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/main.go) from Kubernetes core as a template for your `main.go`. As mentioned above, the only difference should be the cloud package that will be imported.
3. Import your cloud package in `main.go`, ensure your package has an `init` block to run [`cloudprovider.RegisterCloudProvider`](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go).
Many cloud providers publish their controller manager code as open source. If you are creating

View File

@ -231,6 +231,14 @@ without compromising the minimum required capacity for running your workloads.
{{% /tab %}}
{{< /tabs >}}
### Call "kubeadm upgrade"
- For worker nodes this upgrades the local kubelet configuration:
```shell
sudo kubeadm upgrade node
```
### Drain the node
- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
@ -240,14 +248,6 @@ without compromising the minimum required capacity for running your workloads.
kubectl drain <node-to-drain> --ignore-daemonsets
```
### Call "kubeadm upgrade"
- For worker nodes this upgrades the local kubelet configuration:
```shell
sudo kubeadm upgrade node
```
### Upgrade kubelet and kubectl
- Upgrade the kubelet and kubectl:

View File

@ -12,7 +12,6 @@ The following resources are used in the demonstration: [ResourceQuota](/docs/con
and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
## {{% heading "prerequisites" %}}
@ -41,7 +40,7 @@ the values set by the admin.
In this example, a PVC requesting 10Gi of storage would be rejected because it exceeds the 2Gi max.
```
```yaml
apiVersion: v1
kind: LimitRange
metadata:
@ -67,7 +66,7 @@ In this example, a 6th PVC in the namespace would be rejected because it exceeds
the 5Gi maximum quota. When combined with the 2Gi max limit above, the namespace cannot
have 3 PVCs where each requests 2Gi: that would be 6Gi requested for a namespace capped at 5Gi.
```
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
@ -78,8 +77,6 @@ spec:
requests.storage: "5Gi"
```
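As a sketch (the file names and namespace are hypothetical), you could apply both objects to a namespace and then confirm the enforced ceilings:

```shell
# Apply the LimitRange and ResourceQuota shown above to a namespace
kubectl apply -f storagelimits.yaml -f storagequota.yaml --namespace=quota-example

# Review quota usage and the per-claim limits being enforced
kubectl describe quota --namespace=quota-example
kubectl describe limitrange --namespace=quota-example
```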
<!-- discussion -->
## Summary
@ -87,7 +84,3 @@ spec:
A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage
consumed by a namespace through claim counts and cumulative storage capacity. This allows a cluster-admin to plan their
cluster's storage budget without risk of any one project going over their allotment.

Some files were not shown because too many files have changed in this diff.