Merge branch 'main' into patch-1

Paszymaja, 2022-07-27 14:00:51 +02:00, committed by GitHub
commit 7deb7e78cd
1994 changed files with 81288 additions and 29428 deletions

View File

@ -1,4 +1,4 @@
<!-- 🛈
<!--
Hello!
@ -9,7 +9,7 @@
PLEASE title the FIRST commit appropriately, so that if you squash all
your commits into one, the combined commit message makes sense.
For overall help on editing and submitting pull requests, visit:
https://kubernetes.io/docs/contribute/start/#improve-existing-content
https://kubernetes.io/docs/contribute/suggest-improvements/
Use the default base branch, “main”, if you're documenting existing
features in the English localization.

View File

@ -9,7 +9,7 @@ These are just guidelines, not rules. Use your best judgment, and feel free to p
### Code of Conduct
Kubernetes follows the [Cloud Native Computing Foundation (CNCF) Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the
Kubernetes follows the [Cloud Native Computing Foundation (CNCF) Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the
[Kubernetes Code of Conduct Committee](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct) <conduct@kubernetes.io>.
### Documentation and Site Decisions

View File

@ -4,7 +4,7 @@
# change is that the Hugo version is now an overridable argument rather than a fixed
# environment variable.
FROM golang:1.16-alpine
FROM golang:1.18-alpine
LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>"
@ -24,7 +24,7 @@ RUN mkdir $HOME/src && \
cd "hugo-${HUGO_VERSION}" && \
go install --tags extended
FROM golang:1.16-alpine
FROM golang:1.18-alpine
RUN apk add --no-cache \
runuser \

View File

@ -9,7 +9,7 @@ CONTAINER_ENGINE ?= docker
IMAGE_REGISTRY ?= gcr.io/k8s-staging-sig-docs
IMAGE_VERSION=$(shell scripts/hash-files.sh Dockerfile Makefile | cut -c 1-12)
CONTAINER_IMAGE = $(IMAGE_REGISTRY)/k8s-website-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_RUN = $(CONTAINER_ENGINE) run --rm --interactive --tty --volume $(CURDIR):/src
CONTAINER_RUN = "$(CONTAINER_ENGINE)" run --rm --interactive --tty --volume "$(CURDIR):/src"
CCRED=\033[0;31m
CCEND=\033[0m
@ -77,8 +77,9 @@ container-push: container-image ## Push container image for the preview of the w
container-build: module-check
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify --environment development"
container-serve: module-check ## Boot the development server using container. Run `make container-image` before this.
$(CONTAINER_RUN) --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
# no build lock to allow for read-only mounts
container-serve: module-check ## Boot the development server using container.
$(CONTAINER_RUN) --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir --noBuildLock
test-examples:
scripts/test_examples.sh install
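Because `CONTAINER_ENGINE` is assigned with `?=` above, it can be overridden per invocation. A minimal usage sketch, assuming Podman is installed as a Docker-compatible engine:

```bash
# Hedged usage sketch: build the image, then serve the site with Podman
# instead of the default docker binary
make container-image CONTAINER_ENGINE=podman
make container-serve CONTAINER_ENGINE=podman
```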
@ -94,7 +95,7 @@ docker-internal-linkcheck:
container-internal-linkcheck: link-checker-image-pull
$(CONTAINER_RUN) $(CONTAINER_IMAGE) hugo --config config.toml,linkcheck-config.toml --buildFuture --environment test
$(CONTAINER_ENGINE) run --mount type=bind,source=$(CURDIR),target=/test --rm wjdp/htmltest htmltest
$(CONTAINER_ENGINE) run --mount "type=bind,source=$(CURDIR),target=/test" --rm wjdp/htmltest htmltest
clean-api-reference: ## Clean all directories in API reference directory, preserve _index.md
rm -rf content/en/docs/reference/kubernetes-api/*/

View File

@ -3,6 +3,7 @@ aliases:
- onlydole
- mrbobbytables
- sftim
- nate-double-u
sig-docs-blog-reviewers: # Reviewers for blog content
- mrbobbytables
- onlydole
@ -199,6 +200,7 @@ aliases:
- devlware
- jhonmike
- rikatz
- stormqueen1990
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem

View File

@ -4,6 +4,9 @@
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad you want to contribute!
- [Contributing to the docs](#contributing-to-the-docs)
- [Localized Kubernetes documentation `README.md`s](#localization-readmemds)
# Using this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
@ -40,6 +43,8 @@ make container-image
make container-serve
```
If you see errors, it probably means that the Hugo container did not have enough computing resources available. To solve it, increase the amount of CPU and memory usage allowed for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) / [Windows](https://docs.docker.com/docker-for-windows/#resources)).
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
@ -56,7 +61,45 @@ make serve
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
```
2. Update the Swagger specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, update the `toc.yaml` and `fields.yaml` files to reflect the changes of the new release.
4. Next, build the pages:
```bash
make api-reference
```
Test the result locally by building and serving the site using a container image:
```bash
make container-image
make container-serve
```
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to view the API reference.
5. When all API changes are reflected in the configuration files `toc.yaml` and `fields.yaml`, open a pull request for the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and check for the word `extended`.
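A quick way to run that check, assuming `hugo` is on your `PATH`:

```bash
# Prints something like "hugo v0.101.0 ... +extended" for the extended build;
# no output means you have the standard (non-extended) binary
hugo version | grep -i extended
```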
@ -97,17 +140,17 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
This works for Catalina as well as Mojave macOS.
# Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/)
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs
# Contributing to the docs {#contributing-to-the-docs}
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
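A minimal sketch of that flow on the command line; the repository URL and branch name below are illustrative placeholders:

```bash
# Clone your fork (replace <your-username> with your GitHub account)
git clone https://github.com/<your-username>/website.git
cd website

# Work on a topic branch, commit, and push back to the fork
git checkout -b my-docs-change
git add .
git commit -m "Describe your change"
git push origin my-docs-change
# then open a pull request from the fork on GitHub
```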
@ -124,7 +167,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
# Localizing the Kubernetes documentation `README.md`s
### New contributor ambassadors
If you need help at any point when contributing, the [New Contributor Ambassadors](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) are a good point of contact. These are SIG Docs approvers whose responsibilities include mentoring new contributors and helping them through their first pull requests. The best place to reach them is the [Kubernetes Slack](https://slack.k8s.io/). Current new contributor ambassadors for SIG Docs:
| Name | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |
# Localizing the Kubernetes documentation `README.md`s {#localization-readmemds}
## Korean
@ -135,6 +186,7 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
* 손석호 ([GitHub - @seokho-son](https://github.com/seokho-son))
* [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-ko)
# Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/ko.md).

View File

@ -65,7 +65,7 @@ The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/doc
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
```bash
# pull in the Docsy submodule
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
@ -80,7 +80,7 @@ To build the site in a container, run the following to build the container image
To build the site in a container, run the following to build the container image and run it:
```bash
make container-image
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
@ -113,7 +113,7 @@ Hugo Extended version.
To build and test the site locally, run:
```bash
# install dependencies
# install dependencies
npm ci
make serve
```
@ -257,6 +257,51 @@ This works for Catalina as well as Mojave macOS.
-->
This works for Catalina as well as Mojave macOS.
### Troubleshooting access timeouts in some regions when running `make container-image`
The symptom looks like this:
```shell
langs/language.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
langs/language.go:24:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:21:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:22:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
hugolib/integrationtest_builder.go:29:2: golang.org/x/tools@v0.1.11: Get "https://proxy.golang.org/golang.org/x/tools/@v/v0.1.11.zip": dial tcp 142.251.42.241:443: i/o timeout
deploy/google.go:24:2: google.golang.org/api@v0.76.0: Get "https://proxy.golang.org/google.golang.org/api/@v/v0.76.0.zip": dial tcp 142.251.43.17:443: i/o timeout
parser/metadecoders/decoder.go:32:2: gopkg.in/yaml.v2@v2.4.0: Get "https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.4.0.zip": dial tcp 142.251.42.241:443: i/o timeout
The command '/bin/sh -c mkdir $HOME/src && cd $HOME/src && curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz && cd "hugo-${HUGO_VERS ION}" && go install --tags extended' returned a non-zero code: 1
make: *** [Makefile:69container-image] error 1
```
Modify the `Dockerfile` to add a network proxy for it. The changes are as follows:
```dockerfile
...
FROM golang:1.18-alpine
LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>"
# line to add (1)
ENV GO111MODULE=on
# line to add (2)
ENV GOPROXY=https://proxy.golang.org,direct
RUN apk add --no-cache \
curl \
gcc \
g++ \
musl-dev \
build-base \
libc6-compat
ARG HUGO_VERSION
...
```
将 "https://proxy.golang.org" 替换为本地可以使用的代理地址。
**注意:** 此部分仅适用于中国大陆
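For example, a one-line rewrite with GNU sed; `goproxy.cn` here is only an assumed mirror, so substitute whatever proxy is reachable for you:

```bash
# Hedged sketch: point the Dockerfile at an alternative Go module proxy
sed -i 's|https://proxy.golang.org|https://goproxy.cn|' Dockerfile
```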
<!--
## Get involved with SIG Docs

View File

@ -36,10 +36,10 @@ git submodule update --init --recursive --depth 1
## Running the website using a container
To build the site in a container, run the following to build the container image and run it:
To build the site in a container, run the following:
```bash
make container-image
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
@ -189,7 +189,7 @@ If you need help at any point when contributing, the [New Contributor Ambassador
## Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
## Thank you

View File

@ -302,22 +302,24 @@ languageName = "English"
weight = 1
languagedirection = "ltr"
[languages.zh]
[languages.zh-cn]
title = "Kubernetes"
description = "生产级别的容器编排系统"
languageName = "中文 Chinese"
languageName = "中文 (Chinese)"
languageNameLatinScript = "Chinese"
weight = 2
contentDir = "content/zh"
contentDir = "content/zh-cn"
languagedirection = "ltr"
[languages.zh.params]
[languages.zh-cn.params]
time_format_blog = "2006.01.02"
language_alternatives = ["en"]
[languages.ko]
title = "Kubernetes"
description = "운영 수준의 컨테이너 오케스트레이션"
languageName = "한국어 Korean"
languageName = "한국어 (Korean)"
languageNameLatinScript = "Korean"
weight = 3
contentDir = "content/ko"
languagedirection = "ltr"
@ -329,7 +331,8 @@ language_alternatives = ["en"]
[languages.ja]
title = "Kubernetes"
description = "プロダクショングレードのコンテナ管理基盤"
languageName = "日本語 Japanese"
languageName = "日本語 (Japanese)"
languageNameLatinScript = "Japanese"
weight = 4
contentDir = "content/ja"
languagedirection = "ltr"
@ -341,7 +344,8 @@ language_alternatives = ["en"]
[languages.fr]
title = "Kubernetes"
description = "Solution professionnelle dorchestration de conteneurs"
languageName = "Français"
languageName = "Français (French)"
languageNameLatinScript = "Français"
weight = 5
contentDir = "content/fr"
languagedirection = "ltr"
@ -354,7 +358,8 @@ language_alternatives = ["en"]
[languages.it]
title = "Kubernetes"
description = "Orchestrazione di Container in produzione"
languageName = "Italiano"
languageName = "Italiano (Italian)"
languageNameLatinScript = "Italiano"
weight = 6
contentDir = "content/it"
languagedirection = "ltr"
@ -367,7 +372,8 @@ language_alternatives = ["en"]
[languages.no]
title = "Kubernetes"
description = "Production-Grade Container Orchestration"
languageName = "Norsk"
languageName = "Norsk (Norwegian)"
languageNameLatinScript = "Norsk"
weight = 7
contentDir = "content/no"
languagedirection = "ltr"
@ -380,7 +386,8 @@ language_alternatives = ["en"]
[languages.de]
title = "Kubernetes"
description = "Produktionsreife Container-Orchestrierung"
languageName = "Deutsch"
languageName = "Deutsch (German)"
languageNameLatinScript = "Deutsch"
weight = 8
contentDir = "content/de"
languagedirection = "ltr"
@ -393,7 +400,8 @@ language_alternatives = ["en"]
[languages.es]
title = "Kubernetes"
description = "Orquestación de contenedores para producción"
languageName = "Español"
languageName = "Español (Spanish)"
languageNameLatinScript = "Español"
weight = 9
contentDir = "content/es"
languagedirection = "ltr"
@ -406,8 +414,10 @@ language_alternatives = ["en"]
[languages.pt-br]
title = "Kubernetes"
description = "Orquestração de contêineres em nível de produção"
languageName = "Português"
languageName = "Português (Portuguese)"
languageNameLatinScript = "Português"
weight = 10
contentDir = "content/pt-br"
languagedirection = "ltr"
@ -419,7 +429,8 @@ language_alternatives = ["en"]
[languages.id]
title = "Kubernetes"
description = "Orkestrasi Kontainer dengan Skala Produksi"
languageName = "Bahasa Indonesia"
languageName ="Bahasa Indonesia"
languageNameLatinScript = "Bahasa Indonesia"
weight = 11
contentDir = "content/id"
languagedirection = "ltr"
@ -432,7 +443,8 @@ language_alternatives = ["en"]
[languages.hi]
title = "Kubernetes"
description = "Production-Grade Container Orchestration"
languageName = "Hindi"
languageName = "हिन्दी (Hindi)"
languageNameLatinScript = "Hindi"
weight = 12
contentDir = "content/hi"
languagedirection = "ltr"
@ -444,7 +456,8 @@ language_alternatives = ["en"]
[languages.vi]
title = "Kubernetes"
description = "Giải pháp điều phối container trong môi trường production"
languageName = "Tiếng Việt"
languageName = "Tiếng Việt (Vietnamese)"
languageNameLatinScript = "Tiếng Việt"
contentDir = "content/vi"
weight = 13
languagedirection = "ltr"
@ -452,7 +465,8 @@ languagedirection = "ltr"
[languages.ru]
title = "Kubernetes"
description = "Первоклассная оркестрация контейнеров"
languageName = "Русский"
languageName = "Русский (Russian)"
languageNameLatinScript = "Russian"
weight = 14
contentDir = "content/ru"
languagedirection = "ltr"
@ -465,7 +479,8 @@ language_alternatives = ["en"]
[languages.pl]
title = "Kubernetes"
description = "Produkcyjny system zarządzania kontenerami"
languageName = "Polski"
languageName = "Polski (Polish)"
languageNameLatinScript = "Polski"
weight = 15
contentDir = "content/pl"
languagedirection = "ltr"
@ -478,7 +493,8 @@ language_alternatives = ["en"]
[languages.uk]
title = "Kubernetes"
description = "Довершена система оркестрації контейнерів"
languageName = "Українська"
languageName = "Українська (Ukrainian)"
languageNameLatinScript = "Ukrainian"
weight = 16
contentDir = "content/uk"
languagedirection = "ltr"

View File

@ -21,15 +21,15 @@ The add-ons in each category are sorted alphabetically - the order of the listing implies no preference.
* [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking and network security with Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the right one for your use case. This includes non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policies for hosts, pods, and (if you use Istio & Envoy) applications at the service mesh layer.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico to provide networking and network policy.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico to provide networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. Both routing and overlay/encapsulation modes are supported. Cilium can also run on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases, as well as a rich policy framework. The Contiv project is fully [open source](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestrators such as Kubernetes, OpenShift, OpenStack, and Mesos, and provide isolation modes for virtual machines, containers (or pods), and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that supports multiple networks in Kubernetes.
* Multus is a multi-plugin for multiple network support, to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a multi-plugin for multiple network support, to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes pods and non-Kubernetes environments, including visibility and security monitoring.
* [Romana](https://github.com/romana/romana) is a layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Details on installing it as a kubeadm add-on are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, works on both sides of a network partition, and does not require an external database.

View File

@ -16,7 +16,7 @@ The `image` property of a container supports the same syntax as the
## Updating images
The default policy for pulling images is `IfNotPresent`, which causes the kubelet to skip images that are already present on a node.
The default policy for pulling images is `IfNotPresent`, meaning that an image is only downloaded if it is not yet available locally.
If you would instead like an image to always be force-pulled, you can do the following:
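For illustration, the usual approach is setting `imagePullPolicy: Always` in the pod spec; an equivalent hedged sketch with `kubectl run`, where the pod name and image tag are placeholders:

```bash
# Start a pod whose image is re-pulled every time the container starts
kubectl run nginx --image=nginx:1.23 --image-pull-policy=Always
```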

View File

@ -54,14 +54,14 @@ that are available to developers and users. Users can write their own
controllers using their [own APIs](/docs/concepts/api-extension/custom-resources/) that can be targeted by a
general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).
This [design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
This [design](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
## What Kubernetes is not
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level
rather than at the hardware level, it provides some generally applicable features common to PaaS offerings,
such as deployment, scaling, load balancing, logging, and monitoring.
However, Kubernetes is not monolithic, and these default solutions are optional and extensible in a modular way.
However, Kubernetes is not monolithic, and these default solutions are optional and extensible in a modular way.
Kubernetes provides the building blocks for building developer platforms, but preserves
user choice and flexibility where it matters.
@ -79,7 +79,7 @@ Kubernetes:
cluster storage systems (e.g. Ceph) as built-in services. Such components can
run on Kubernetes and/or be accessed by applications running on Kubernetes via
portable mechanisms such as the Open Service Broker.
* Does not provide a configuration language or system (e.g. [jsonnet](https://github.com/google/jsonnet)).
* Does not provide a configuration language or system (e.g. [jsonnet](https://github.com/google/jsonnet)).
It provides a declarative API that can be targeted by arbitrary forms of declarative specifications.
* Does not provide any comprehensive machine configuration, maintenance, management, or self-healing systems.
@ -135,17 +135,17 @@ Summary of container benefits:
* **Dev and Ops separation of concerns**:
Create application container images at build/release time rather than deployment time,
thereby decoupling applications from infrastructure.
* **Observability**
* **Observability**:
Not only surfaces OS-level information and metrics,
but also application health and other signals.
* **Environmental consistency across development, testing, and production**:
Runs the same on a laptop as it does in the cloud.
* **Cloud and OS distribution portability**:
* **Cloud and OS distribution portability**:
Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
* **Application-centric management**:
Raises the level of abstraction from running an OS on virtual hardware
to running an application on an OS using logical resources.
* **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**:
* **Loosely coupled, distributed, elastic, liberated [microservices](https://martinfowler.com/articles/microservices.html)**:
Applications are broken into smaller, independent pieces and can be deployed and managed
dynamically -- not a monolithic stack running on one big single-purpose machine.
* **Resource isolation**:

View File

@ -56,6 +56,6 @@ Officially supported client libraries:
## Design documentation
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).

View File

@ -424,7 +424,7 @@ export no_proxy=$no_proxy,$(minikube ip)
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [kubeadm](https://github.com/kubernetes/kubeadm) to bring up a Kubernetes cluster.
For more information about Minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
For more information about Minikube, see the [proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/local-cluster-ux.md).
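A minimal usage sketch; the `--driver` flag (older releases call it `--vm-driver`) selects the libmachine backend, and `virtualbox` is just one assumed option among several:

```bash
# Bring up a local single-node cluster via the VirtualBox driver
minikube start --driver=virtualbox
```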
## Additional links

View File

@ -11,7 +11,7 @@ weight: 90
<!-- overview -->
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md), on metrics provided by the application). Note that horizontal pod autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/design-proposals-archive/instrumentation/custom-metrics-api.md), on metrics provided by the application). Note that horizontal pod autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
@ -46,7 +46,7 @@ Using metrics from Heapster has been deprecated since Kubernetes version 1.11.
See [Support for metrics APIs](#unterstützung-der-metrik-apis) for more details.
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) via the scale subresource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details about the scale subresource can be found [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) via the scale subresource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details about the scale subresource can be found [here](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
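As a concrete sketch, the `kubectl autoscale` command creates such an autoscaler against a scalable controller; the deployment name and targets below are illustrative:

```bash
# Keep the php-apache Deployment between 1 and 10 replicas,
# targeting 50% average CPU utilization
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```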
### Algorithm details
@ -90,7 +90,7 @@ The current stable version, which only includes support for automatic CPU scaling,
The beta version, which includes support for scaling on memory and custom metrics, can be found in `autoscaling/v2beta2`. The fields newly introduced in `autoscaling/v2beta2` are preserved as annotations when working with `autoscaling/v1`.
More details about the API object can be found at [HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
More details about the API object can be found at [HorizontalPodAutoscaler Object](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for the Horizontal Pod Autoscaler in kubectl
@ -166,7 +166,7 @@ By default, the HorizontalPodAutoscaler controller retrieves metrics from a series
## {{% heading "whatsnext" %}}
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
* Using the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).

View File

@ -16,7 +16,7 @@ It groups containers that make up an application into logical units for easy man
{{% blocks/feature image="scalable" %}}
#### Planet Scale
Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.
Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team.
{{% /blocks/feature %}}
@ -43,12 +43,12 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2023" button id="desktopKCButton">Attend KubeCon Europe on April 17-21, 2023</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -1,9 +1,13 @@
---
title: Welcome to the Kubernetes Blog!
title: Welcome to the Kubernetes Blog!
date: 2015-03-20
slug: welcome-to-kubernetes-blog
url: /blog/2015/03/Welcome-To-Kubernetes-Blog
evergreen: true
---
**Author:** Kit Merker (Google)
Welcome to the new Kubernetes Blog. Follow this blog to learn about the Kubernetes Open Source project. We plan to post release notes, how-to articles, events, and maybe even some off-topic fun here from time to time.
@ -25,6 +29,3 @@ To start things off, here's a roundup of recent Kubernetes posts from other site
Happy cloud computing!
- Kit Merker - Product Manager, Google Cloud Platform

View File

@ -1,8 +1,9 @@
---
title: " Kubernetes Release: 0.16.0 "
title: "Kubernetes Release: 0.16.0"
date: 2015-05-11
slug: kubernetes-release-0160
url: /blog/2015/05/Kubernetes-Release-0160
evergreen: true
---
Release Notes:

View File

@ -1,8 +1,9 @@
---
title: " Kubernetes Release: 0.17.0 "
title: "Kubernetes Release: 0.17.0"
date: 2015-05-15
slug: kubernetes-release-0170
url: /blog/2015/05/Kubernetes-Release-0170
evergreen: true
---
Release Notes:

View File

@ -1,9 +1,10 @@
---
title: " Announcing the First Kubernetes Enterprise Training Course "
title: "Announcing the First Kubernetes Enterprise Training Course"
date: 2015-07-08
slug: announcing-first-kubernetes-enterprise
url: /blog/2015/07/Announcing-First-Kubernetes-Enterprise
---
At Google we rely on Linux application containers to run our core infrastructure. Everything from Search to Gmail runs in containers. &nbsp;In fact, we like containers so much that even our Google Compute Engine VMs run in containers! &nbsp;Because containers are critical to our business, we have been working with the community on many of the basic container technologies (from cgroups to Docker's LibContainer) and even decided to build the next generation of Google's container scheduling technology, Kubernetes, in the open.

View File

@ -1,9 +1,13 @@
---
title: " Kubernetes 1.1 Performance upgrades, improved tooling and a growing community "
title: "Kubernetes 1.1 Performance upgrades, improved tooling and a growing community"
date: 2015-11-09
slug: kubernetes-1-1-performance-upgrades-improved-tooling-and-a-growing-community
url: /blog/2015/11/Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community
evergreen: true
---
**Author:** David Aronchick (Google)
Since the Kubernetes 1.0 release in July, we've seen tremendous adoption by companies building distributed systems to manage their container clusters. We've also been humbled by the rapid growth of the community who help make Kubernetes better every day. We have seen commercial offerings such as Tectonic by CoreOS and RedHat Atomic Host emerge to deliver deployment and support of Kubernetes. And a growing ecosystem has added Kubernetes support including tool vendors such as Sysdig and Project Calico.
With the help of hundreds of contributors, we're proud to announce the availability of Kubernetes 1.1, which offers major performance upgrades, improved tooling, and new features that make applications even easier to build and deploy.
@ -50,4 +54,3 @@ As we mentioned above, we would love your help:
But, most of all, just let us know how you are transforming your business using Kubernetes, and how we can help you do it even faster. Thank you for your support!
&nbsp;- David Aronchick, Senior Product Manager for Kubernetes and Google Container Engine

View File

@ -1,37 +1,41 @@
---
title: " KubeCon EU 2016: Kubernetes Community in London "
title: "KubeCon EU 2016: Kubernetes Community in London"
date: 2016-02-24
slug: kubecon-eu-2016-kubernetes-community-in
url: /blog/2016/02/Kubecon-Eu-2016-Kubernetes-Community-In
evergreen: true
---
KubeCon EU 2016 is the inaugural [European Kubernetes](http://kubernetes.io/) community conference that follows on the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for[Kubernetes](http://kubernetes.io/) enthusiasts, production users and the surrounding ecosystem.
**Author:** Sarah Novotny (Google)
KubeCon EU 2016 is the inaugural European Kubernetes community conference that follows on the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for [Kubernetes](/) enthusiasts, production users and the surrounding ecosystem.
Come join us in London and hang out with hundreds from the Kubernetes community and experience a wide variety of deep technical expert talks and use cases.
Don't miss these great speaker sessions at the conference:
* “Kubernetes Hardware Hacks: Exploring the Kubernetes API Through Knobs, Faders, and Sliders” by Ian Lewis and Brian Dorsey, Developer Advocate, Google -* [http://sched.co/6Bl3](http://sched.co/6Bl3)
* “Kubernetes Hardware Hacks: Exploring the Kubernetes API Through Knobs, Faders, and Sliders” by Ian Lewis and Brian Dorsey, Developer Advocate, Google [https://sched.co/6Bl3](http://sched.co/6Bl3)
* “rktnetes: what's new with container runtimes and Kubernetes” by Jonathan Boulle, Developer and Team Lead at CoreOS -* [http://sched.co/6BY7](http://sched.co/6BY7)
* “rktnetes: what's new with container runtimes and Kubernetes” by Jonathan Boulle, Developer and Team Lead at CoreOS [https://sched.co/6BY7](http://sched.co/6BY7)
* “Kubernetes Documentation: Contributing, fixing issues, collecting bounties” by John Mulhausen, Lead Technical Writer, Google -* [http://sched.co/6BUP](http://sched.co/6BUP)&nbsp;
* “[What is OpenStack's role in a Kubernetes world?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” By Thierry Carrez, Director of Engineering, OpenStack Foundation -* http://sched.co/6BYC
* “A Practical Guide to Container Scheduling” by Mandy Waite, Developer Advocate, Google -* [http://sched.co/6BZa](http://sched.co/6BZa)
* “Kubernetes Documentation: Contributing, fixing issues, collecting bounties” by John Mulhausen, Lead Technical Writer, Google [https://sched.co/6BUP](http://sched.co/6BUP)&nbsp;
* “[Kubernetes in Production in The New York Times newsroom](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” Eric Lewis, Web Developer, New York Times -* [http://sched.co/67f2](http://sched.co/67f2)
* “[Creating an Advanced Load Balancing Solution for Kubernetes with NGINX](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Andrew Hutchings, Technical Product Manager, NGINX -* http://sched.co/6Bc9
* And many more http://kubeconeurope2016.sched.org/
* “[What is OpenStack's role in a Kubernetes world?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” By Thierry Carrez, Director of Engineering, OpenStack Foundation [https://sched.co/6BYC](http://sched.co/6BYC)
* “A Practical Guide to Container Scheduling” by Mandy Waite, Developer Advocate, Google [https://sched.co/6BZa](http://sched.co/6BZa)
* “[Kubernetes in Production in The New York Times newsroom](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” Eric Lewis, Web Developer, New York Times [https://sched.co/67f2](http://sched.co/67f2)
* “[Creating an Advanced Load Balancing Solution for Kubernetes with NGINX](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Andrew Hutchings, Technical Product Manager, NGINX [https://sched.co/6Bc9](https://sched.co/6Bc9)
…and many more https://kubeconeurope2016.sched.org/
Get your KubeCon EU [tickets here](https://ti.to/kubecon/kubecon-eu-2016).
~Get your KubeCon EU [tickets here](https://ti.to/kubecon/kubecon-eu-2016)~.
Venue Location: CodeNode, 10 South Pl, London, United Kingdom
Accommodations: [hotels](https://skillsmatter.com/contact-us#hotels)
Website: [kubecon.io](https://www.kubecon.io/)
Twitter: [@KubeConio](https://twitter.com/kubeconio) #KubeCon
Google is a proud Diamond sponsor of KubeCon EU 2016. Come to London next month, March 10th & 11th, and visit booth #13 to learn all about Kubernetes, Google Container Engine (GKE) and Google Cloud Platform!
_KubeCon is organized by KubeAcademy, LLC, a community-driven group of developers focused on the education of developers and the promotion of Kubernetes._
-* Sarah Novotny, Kubernetes Community Manager, Google

View File

@ -1,17 +1,20 @@
---
title: " Kubernetes 1.2: Even more performance upgrades, plus easier application deployment and management "
title: "Kubernetes 1.2: Even more performance upgrades, plus easier application deployment and management"
date: 2016-03-17
slug: kubernetes-1.2-even-more-performance-upgrades-plus-easier-application-deployment-and-management
url: /blog/2016/03/Kubernetes-1-2-Even-More-Performance-Upgrades-Plus-Easier-Application-Deployment-And-Management
evergreen: true
---
Today we released Kubernetes 1.2. This release represents significant improvements for large organizations building distributed systems. Now with over 680 unique contributors to the project, this release represents our largest yet.
**Author:** David Aronchick (Google)
Today the Kubernetes project released Kubernetes 1.2. This release represents significant improvements for large organizations building distributed systems. Now with over 680 unique contributors to the project, this release represents our largest yet.
From the beginning, our mission has been to make building distributed systems easy and accessible for all. With the Kubernetes 1.2 release we've made strides towards our goal by increasing scale, decreasing latency and overall simplifying the way applications are deployed and managed. Now, developers at organizations of all sizes can build production scale apps more easily than ever before.&nbsp;
### What's new:&nbsp;
## What's new
- **Significant scale improvements**. Increased cluster scale by 400% to 1,000 nodes and 30,000 containers per cluster.
- **Simplified application deployment and management**.&nbsp;
- **Simplified application deployment and management**.
- Dynamic Configuration (via the ConfigMap API) enables applications to pull their configuration when they run rather than packaging it in at build time.&nbsp;
- Turnkey Deployments (via the Beta Deployment API) let you declare your application and Kubernetes will do the rest. It handles versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability and rollback.&nbsp;
@ -28,15 +31,15 @@ From the beginning, our mission has been to make building distributed systems ea
- **And many more**. For a complete list of updates, see the [release notes on github](https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0).&nbsp;
#### Community&nbsp;
## Community
All these improvements would not be possible without our enthusiastic and global community. The momentum is astounding. We're seeing over 400 pull requests per week, a 50% increase since the previous 1.1 release. There are meetups and conferences discussing Kubernetes nearly every day, on top of the 85 Kubernetes related [meetup groups](http://www.meetup.com/topics/kubernetes/) around the world. We've also seen significant participation in the community in the form of Special Interest Groups, with 18 active SIGs that cover topics from AWS and OpenStack to big data and scalability, to get involved [join or start a new SIG](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)). Lastly, we're proud that Kubernetes is the first project to be accepted to the Cloud Native Computing Foundation (CNCF), read more about the announcement [here](https://cncf.io/news/announcement/2016/03/cloud-native-computing-foundation-accepts-kubernetes-first-hosted-projec-0).&nbsp;
All these improvements would not be possible without our enthusiastic and global community. The momentum is astounding. We're seeing over 400 pull requests per week, a 50% increase since the previous 1.1 release. There are meetups and conferences discussing Kubernetes nearly every day, on top of the 85 Kubernetes related [meetup groups](http://www.meetup.com/topics/kubernetes/) around the world. We've also seen significant participation in the community in the form of Special Interest Groups, with 18 active SIGs that cover topics from AWS and OpenStack to big data and scalability, to get involved [join or start a new SIG](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)). Lastly, we're proud that Kubernetes is the first project to be accepted to the Cloud Native Computing Foundation (CNCF), read more about the announcement [here](https://cncf.io/news/announcement/2016/03/cloud-native-computing-foundation-accepts-kubernetes-first-hosted-projec-0).
#### Documentation&nbsp;
## Documentation
With Kubernetes 1.2 comes a relaunch of our website at [kubernetes.io](http://kubernetes.io/). We've slimmed down the docs contribution process so that all you have to do is fork/clone and send a PR. And the site works the same whether you're staging it on your laptop, on github.io, or viewing it in production. It's a pure GitHub Pages project; no scripts, no plugins.&nbsp;
With Kubernetes 1.2 comes a relaunch of our website at [kubernetes.io](http://kubernetes.io/). We've slimmed down the docs contribution process so that all you have to do is fork/clone and send a PR. And the site works the same whether you're staging it on your laptop, on github.io, or viewing it in production. It's a pure GitHub Pages project; no scripts, no plugins.
@ -48,7 +51,7 @@ To entice you even further to contribute, we're also announcing our new bounty
#### Roadmap&nbsp;
## Roadmap
All of our work is done in the open, to learn the latest about the project [join the weekly community meeting](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) or [watch a recorded hangout](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ). In keeping with our major release schedule of every three to four months, here are just a few items that are in development for [next release and beyond](https://github.com/kubernetes/kubernetes/wiki/Release-1.3):&nbsp;
@ -64,7 +67,7 @@ Kubernetes 1.2 is available for download at [get.k8s.io](http://get.k8s.io/) and
#### Connect&nbsp;
## Connect
We'd love to hear from you and see you participate in this growing community:&nbsp;
@ -73,8 +76,7 @@ We'd love to hear from you and see you participate in this growing community:&
- &nbsp;Connect with the community on [Slack](http://slack.kubernetes.io/)&nbsp;
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates&nbsp;
Thank you for your support!&nbsp;
Thank you for your support!
&nbsp;-&nbsp;_David Aronchick, Senior Product Manager for Kubernetes, Google_

View File

@ -131,7 +131,7 @@ In this example, the **tenant-a** namespace would get policy **pol1*
Today, [Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) and [Calico](http://projectcalico.org/) support network policies applied to namespaces and pods. Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 recently at KubeCon. You can watch their presentations here: [Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([slides](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)), [Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([slides](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).&nbsp;
Today, Romana, OpenShift, OpenContrail and Calico support network policies applied to namespaces and pods. Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 recently at KubeCon. You can watch their presentations here: [Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([slides](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)), [Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([slides](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).&nbsp;

View File

@ -1,9 +1,13 @@
---
title: " Kubernetes 1.3: Bridging Cloud Native and Enterprise Workloads "
title: "Kubernetes 1.3: Bridging Cloud Native and Enterprise Workloads"
date: 2016-07-06
slug: kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads
url: /blog/2016/07/Kubernetes-1-3-Bridging-Cloud-Native-And-Enterprise-Workloads
evergreen: true
---
**Author:** Aparna Sinha, Google
Nearly two years ago, when we officially kicked off the Kubernetes project, we wanted to simplify distributed systems management and provide the core technology required to everyone. The community's response to this effort has blown us away. Today, thousands of customers, partners and developers are running clusters in production using Kubernetes and have joined the cloud native revolution.&nbsp;
Thanks to the help of over 800 contributors, we are pleased to announce today the availability of Kubernetes 1.3, our most robust and feature-rich release to date.
@ -14,7 +18,7 @@ Product highlights in Kubernetes 1.3 include the ability to bridge services acro
**What's new:**
## What's new
- **Increased scale and automation** - Customers want to scale their services up and down automatically in response to application demand. In 1.3 we have made it easier to autoscale clusters up and down while doubling the maximum number of nodes per cluster. Customers no longer need to think about cluster size, and can allow the underlying cluster to respond to demand.
@ -31,13 +35,13 @@ Product highlights in Kubernetes 1.3 include the ability to bridge services acro
- **Updated Kubernetes dashboard UI** - Customers can now use the Kubernetes open source dashboard for the majority of interactions with their clusters, rather than having to use the CLI. The updated UI lets users control, edit and create all workload resources (including Deployments and PetSets).
- And many more. For a complete list of updates, see the [_release notes on GitHub_](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.0).
**Community**
## Community
We could not have achieved this milestone without the tireless effort of countless people that are part of the Kubernetes community. We have [19 different Special Interest Groups](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig), and over 100 meetups around the world. Kubernetes is a community project, built in the open, and it truly would not be possible without the over 233 person-years of effort the community has put in to date. Woot!
**Availability**
## Availability
Kubernetes 1.3 is available for download at [get.k8s.io](http://get.k8s.io/)&nbsp;and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try our [Hello World app](/docs/hellonode/).
@ -47,7 +51,7 @@ To learn the latest about the project, we encourage everyone to [join the weekly
**Connect**
## Connect
We'd love to hear from you and see you participate in this growing community:
@ -58,8 +62,4 @@ We'd love to hear from you and see you participate in this growing community:
Thank you for your support!&nbsp;
-- Aparna Sinha, Product Manager, Google
Thank you for your support!

View File

@ -65,7 +65,7 @@ Network policies are an exciting feature, which the Kubernetes community has wor
There are only a few policy-capable networking backends available for Kubernetes today: [Romana](http://romana.io/), [Calico](http://projectcalico.org/), and [Canal](https://github.com/tigera/canal); with [Weave](http://www.weave.works/) indicating support in the near future. Red Hat's OpenShift includes network policy features as well.
There are only a few policy-capable networking backends available for Kubernetes today: Romana, [Calico](http://projectcalico.org/), and [Canal](https://github.com/tigera/canal); with [Weave](http://www.weave.works/) indicating support in the near future. Red Hat's OpenShift includes network policy features as well.
@ -100,7 +100,7 @@ This is because during a typical network performance benchmark, there's no app
- Hardware: Two servers with Intel Core i5-5250U CPUs (2 core, 2 threads per core) running at 1.60GHz, 16GB RAM and 512GB SSD. NIC: Intel Ethernet Connection I218-V (rev 03)
- Ubuntu 14.04.5
- Kubernetes 1.3 for data collection (verified samples on v1.4.0-beta.5)
- [Romana v0.9.3.1](https://github.com/romana/romana)
- Romana v0.9.3.1
- Client and server load test [software](https://github.com/paninetworks/testing-tools)
For the tests we had a client pod send 2,000 HTTP requests to a server pod. HTTP requests were sent by the client pod at a rate that ensured that neither the server nor network ever saturated. We also made sure each request started a new TCP session by disabling persistent connections (i.e. HTTP [keep-alive](https://en.wikipedia.org/wiki/HTTP_persistent_connection)). We ran each test with different response sizes and measured the average request duration time (how long does it take to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.
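The authors used their own load-testing tools (linked above); as a rough, hedged approximation, an ApacheBench one-liner reproduces the shape of the test, since without the `-k` flag every request opens a new TCP connection:

```bash
# Send 2,000 requests with keep-alive disabled (ab's default behavior);
# <server-pod-ip> is a placeholder for the server pod's address
ab -n 2000 http://<server-pod-ip>:8080/
```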
@ -189,4 +189,4 @@ These tests were performed using Romana as the backend policy provider and other
If you wish to try it for yourself, we invite you to check out [Romana](http://romana.io/). In our [GitHub repo](https://github.com/romana/romana) you can find an easy to use installer, which works with AWS, Vagrant VMs or any other servers. You can use it to quickly get you started with a Romana powered Kubernetes or OpenStack cluster.
If you wish to try it for yourself, we invite you to check out Romana. In our GitHub repo you can find an easy to use installer, which works with AWS, Vagrant VMs or any other servers. You can use it to quickly get you started with a Romana powered Kubernetes or OpenStack cluster.

View File

@ -9,7 +9,7 @@ url: /blog/2017/08/High-Performance-Networking-With-Ec2
One of the most popular platforms for running Kubernetes is Amazon Web Services Elastic Compute Cloud (AWS EC2). With more than a decade of experience delivering IaaS, and expanding over time to include a rich set of services with easy to consume APIs, EC2 has captured developer mindshare and loyalty worldwide.
When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of [Romana v2.0](http://romana.io/), a network and security automation solution for Cloud Native applications, includes features that address some well known network issues when running Kubernetes in EC2.
When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of Romana v2.0, a network and security automation solution for Cloud Native applications, includes features that address some well known network issues when running Kubernetes in EC2.
## Traditional VPC Networking Performance Roadblocks
@ -40,7 +40,7 @@ Whether you were interested in advanced networking for traffic isolation or runn
The way to avoid running out of VPC routes is to use them sparingly by making them forward pod traffic for multiple instances. From a networking perspective, what that means is that the VPC route needs to forward to a router, which can then forward traffic on to the final destination instance.
[Romana](http://romana.io/) is a CNI network provider that configures routes on the host to forward pod network traffic without an overlay. Since inter-node routes are installed on hosts, no VPC routes are necessary at all. However, when the VPC is split into subnets for an HA deployment across zones, VPC routes are necessary.
Romana is a CNI network provider that configures routes on the host to forward pod network traffic without an overlay. Since inter-node routes are installed on hosts, no VPC routes are necessary at all. However, when the VPC is split into subnets for an HA deployment across zones, VPC routes are necessary.
Fortunately, inter-node routes on hosts allow them to act as network routers, forwarding traffic inbound from another zone just as they would traffic from local pods. This makes any Kubernetes node configured by Romana able to accept inbound pod traffic from other zones and forward it to the proper destination node on the subnet.
@ -73,8 +73,5 @@ When using Romana v2.0, native VPC networking is now available for clusters of a
![](https://archive.org/download/hpc-ec2-vpc-2/hpc-ec2-vpc-2.png)
The preview release of Romana v2.0 is available [here](http://romana.io/preview). We welcome comments and feedback so we can make EC2 deployments of Kubernetes as fast and reliable as possible.
-- _Juergen Brendel and Chris Marino, co-founders of Pani Networks, sponsor of the Romana project_

View File

@ -3,8 +3,10 @@ title: " Kubernetes 1.8: Security, Workloads and Feature Depth "
date: 2017-09-29
slug: kubernetes-18-security-workloads-and
url: /blog/2017/09/Kubernetes-18-Security-Workloads-And
evergreen: true
---
_Editor's note: today's post is by Aparna Sinha, Group Product Manager, Kubernetes, Google; Ihor Dvoretskyi, Developer Advocate, CNCF; Jaice Singer DuMars, Kubernetes Ambassador, Microsoft; and Caleb Miles, Technical Program Manager, CoreOS on the latest release of Kubernetes 1.8._
**Authors:** Kubernetes v1.8 release team
We're pleased to announce the delivery of Kubernetes 1.8, our third release this year. Kubernetes 1.8 represents a snapshot of many exciting enhancements and refinements underway. In addition to functional improvements, we're increasing project-wide focus on maturing [process](https://github.com/kubernetes/sig-release), formalizing [architecture](https://github.com/kubernetes/community/tree/master/sig-architecture), and strengthening Kubernetes' [governance model](https://github.com/kubernetes/community/tree/master/community/elections/2017). The evolution of mature processes clearly signals that sustainability is a driving concern, and helps to ensure that Kubernetes is a viable and thriving project far into the future.
@ -50,7 +52,7 @@ The [Release team](https://github.com/kubernetes/features/blob/master/release-1.
As the Kubernetes community has grown, our release process has become an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code, creating a more vibrant ecosystem.
## User Highlights
## User highlights
According to [Redmonk](http://redmonk.com/fryan/2017/09/10/cloud-native-technologies-in-the-fortune-100/), 54 percent of Fortune 100 companies are running Kubernetes in some form, with adoption coming from every sector across the world. Recent user stories from the community include:
@ -91,3 +93,6 @@ The simplest way to get involved with Kubernetes is by joining one of the many [
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
- Chat with the community on [Slack](http://slack.k8s.io/).
- [Share your Kubernetes story.](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
_Editor's note: this announcement was authored by Aparna Sinha (Google), Ihor Dvoretskyi (CNCF), Jaice Singer DuMars (Microsoft), and Caleb Miles (CoreOS)._

View File

@ -1,9 +1,13 @@
---
title: " Kubernetes 1.9: Apps Workloads GA and Expanded Ecosystem "
title: "Kubernetes 1.9: Apps Workloads GA and Expanded Ecosystem"
date: 2017-12-15
slug: kubernetes-19-workloads-expanded-ecosystem
url: /blog/2017/12/Kubernetes-19-Workloads-Expanded-Ecosystem
evergreen: true
---
**Authors:** Kubernetes v1.9 release team
We're pleased to announce the delivery of Kubernetes 1.9, our fourth and final release this year.
Today's release continues the evolution of an increasingly rich feature set, more robust stability, and even greater community contributions. As the fourth release of the year, it gives us an opportunity to look back at the progress made in key areas. Particularly notable is the advancement of the Apps Workloads API to stable. This removes any reservations potential adopters might have had about the functional stability required to run mission-critical workloads. Another big milestone is the beta release of Windows support, which opens the door for many Windows-specific applications and workloads to run in Kubernetes, significantly expanding the implementation scenarios and enterprise readiness of Kubernetes.
@ -87,7 +91,7 @@ For recorded sessions from the largest Kubernetes gathering, [KubeCon + CloudNat
## Webinar
Join members of the Kubernetes 1.9 release team on **January 9th from 10am-11am PT** to learn about the major features in this release as they demo some of the highlights in the areas of Windows and Docker support, storage, admission control, and the workloads API.&nbsp;[Register here](https://zoom.us/webinar/register/WN_oVjQMwyzQFOmWsfVzDsa2A).
Join members of the Kubernetes 1.9 release team on **January 9th from 10am-11am PT** to learn about the major features in this release as they demo some of the highlights in the areas of Windows and Docker support, storage, admission control, and the workloads API.&nbsp;[~Register here~](https://zoom.us/webinar/register/WN_oVjQMwyzQFOmWsfVzDsa2A).
## Get involved:

View File

@ -1,13 +1,9 @@
---
title: 'Kubernetes 1.10: Stabilizing Storage, Security, and Networking '
author: kbarnard
tags:
title: 'Kubernetes 1.10: Stabilizing Storage, Security, and Networking'
date: 2018-03-26
modified_time: '2018-03-27T11:01:39.569-07:00'
blogger_id: tag:blogger.com,1999:blog-112706738355446097.post-6519705795358457586
blogger_orig_url: https://kubernetes.io/blog/2018/03/26/kubernetes-1.10-stabilizing-storage-security-networking/
slug: kubernetes-1.10-stabilizing-storage-security-networking
date: 2018-03-26
evergreen: true
---
***Editor's note: today's post is by the [1.10 Release

View File

@ -3,6 +3,7 @@ layout: blog
title: 'Kubernetes 1.11: In-Cluster Load Balancing and CoreDNS Plugin Graduate to General Availability'
date: 2018-06-27
slug: kubernetes-1.11-release-announcement
evergreen: true
---
**Author**: Kubernetes 1.11 [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.11/release_team.md)

View File

@ -2,6 +2,7 @@
layout: blog
title: 'Kubernetes 1.12: Kubelet TLS Bootstrap and Azure Virtual Machine Scale Sets (VMSS) Move to General Availability'
date: 2018-09-27
evergreen: true
---
**Author**: The 1.12 [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.12/release_team.md)

View File

@ -1,13 +1,12 @@
---
layout: "Blog"
layout: blog
title: "Kubernetes 2018 North American Contributor Summit"
date: 2018-10-16
date: 2018-10-16
evergreen: true
---
**Authors:**
[Bob Killen][bob] (University of Michigan)
[Sahdev Zala][sahdev] (IBM),
[Ihor Dvoretskyi][ihor] (CNCF)
[Bob Killen][bob] (University of Michigan), [Sahdev Zala][sahdev] (IBM), [Ihor Dvoretskyi][ihor] (CNCF)
The 2018 North American Kubernetes Contributor Summit to be hosted right before
@ -22,32 +21,34 @@ Unlike previous Contributor Summits, the event now spans two-days with a more
relaxed hallway track and general Contributor get-together to be hosted from
5-8pm on Sunday December 9th at the [Garage Lounge and Gaming Hall][garage], just
a short walk away from the Convention Center. There, contributors can enjoy
billiards, bowling, trivia and more; accompanied by a variety of food and drink.
billiards, bowling, trivia and more; accompanied by a variety of food and drink.
Things pick up the following day, Monday the 10th, with three separate tracks:
### New Contributor Workshop:
### New contributor workshop
A half-day workshop aimed at getting new and first-time contributors onboarded
and comfortable with working within the Kubernetes Community. Staying for the
duration is required; this is not a workshop you can drop into.
### Current Contributor Track:
### Current contributor track
Reserved for those who are actively engaged with the development of the
project; the Current Contributor Track includes Talks, Workshops, Birds of a
Feather, Unconferences, Steering Committee Sessions, and more! Keep an eye on
the [schedule in GitHub][schedule] as content is frequently being updated.
### Docs Sprint:
SIG-Docs will have a curated list of issues and challenges to be tackled closer
### Docs sprint
SIG Docs will have a curated list of issues and challenges to be tackled closer
to the event date.
## To Register:
## How To Register {#to-register}
To register for the Contributor Summit, see the [Registration section of the
Event Details in GitHub][register]. Please note that registrations are being
reviewed. If you select the “Current Contributor Track” and are not an active
contributor, you will be asked to attend the New Contributor Workshop, or asked
to be put on a waitlist. With thousands of contributors and only 300 spots, we
need to make sure the right folks are in the room.
need to make sure the right folks are in the room.
If you have any questions or concerns, please don't hesitate to reach out to
the Contributor Summit Events Team at community@kubernetes.io.

View File

@ -41,7 +41,7 @@ These repo labels let reviewers filter for PRs and issues by language. For examp
### Team review
L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/master/content/en/OWNERS) in the top subfolder for English content.
L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/main/content/en/OWNERS) in the top subfolder for English content.
Adding `OWNERS` files to subdirectories lets localization teams review and approve changes without requiring a rubber-stamp approval from reviewers who may lack fluency.
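As an illustration, such an `OWNERS` file might look like the sketch below; the GitHub team names and language code are hypothetical placeholders, so check a real localization subfolder for the actual values:

```yaml
# content/xx/OWNERS -- hypothetical localization subfolder
reviewers:
- sig-docs-xx-reviews   # GitHub team that can review PRs for this language
approvers:
- sig-docs-xx-owners    # GitHub team that can approve PRs for this language
labels:
- language/xx           # label applied automatically to matching PRs
```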

View File

@ -3,6 +3,7 @@ layout: blog
title: 'Kubernetes 1.13: Simplified Cluster Management with Kubeadm, Container Storage Interface (CSI), and CoreDNS as Default DNS are Now Generally Available'
date: 2018-12-03
slug: kubernetes-1-13-release-announcement
evergreen: true
---
**Author**: The 1.13 [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.13/release_team.md)

View File

@ -2,6 +2,7 @@
layout: blog
title: Production-Ready Kubernetes Cluster Creation with kubeadm
date: 2018-12-04
evergreen: true
---
**Authors**: Lucas Käldström (CNCF Ambassador) and Luc Perkins (CNCF Developer Advocate)

View File

@ -1,13 +1,13 @@
---
title: A Look Back and What's in Store for Kubernetes Contributor Summits
date: 2019-03-20
layout: blog
evergreen: true
---
**Authors:**
Paris Pittman (Google), Jonas Rosland (VMware)
**tl;dr** - [click here] for Barcelona Contributor Summit information.
{{<figure width="600" src="/images/blog/2019-03-14-A-Look-Back-And-Whats-In-Store-For-Kubernetes-Contributor-Summits/celebrationsig.jpg" caption="Seattle Contributor Summit">}}
As our contributing community grows in great numbers, with more than 16,000 contributors this year across 150+ GitHub repositories, it's important to provide face-to-face connections for our large distributed teams to have opportunities for collaboration and learning. In [Contributor Experience], our methodology with planning events is a lot like our documentation; we build from personas -- interests, skills, and motivators to name a few. This way we ensure there is valuable content and learning for everyone.
@ -28,13 +28,14 @@ We build the contributor summits around you:
These personas, combined with ample feedback from previous events, produce the overall experience that welcomed over 600 contributors in Copenhagen (May), Shanghai (November), and Seattle (December) in 2018. Seattle's event alone drew 300+ contributors, equal to Shanghai and Copenhagen combined, for the 6th contributor event in Kubernetes history. In true Kubernetes fashion, we expect another record-breaking year of attendance. We've pre-ordered 900+ [contributor patches], a tradition, and we are looking forward to giving them to you!
With that said...
With that said…
**Save the Dates:**
Barcelona: May 19th (evening) and 20th (all day)
Shanghai: June 24th (all day)
San Diego: November 18th, 19th, and activities in KubeCon/CloudNativeCon week
In an effort of continual improvement, here's what to expect from us this year:
In an effort of continual improvement, here's what to expect from us this year:
* Large new contributor workshops and contributor socials at all three events expected to break previous attendance records
* A multiple track event in San Diego for all contributor types including workshops, birds of a feather, lightning talks and more
@ -42,7 +43,8 @@ In an effort of continual improvement, here's what to expect from us this year:
* [An event website]!
* Follow along with updates: kubernetes-dev@googlegroups.com is our main communication hub as always; however, we will also share updates here on the blog, at our [Thursday Kubernetes Community Meeting], on [twitter], in SIG meetings, on the event site, on discuss.kubernetes.io, and in #contributor-summit on Slack.
* Opportunities to get involved: We still have 2019 roles available!
Reach out to Contributor Experience via community@kubernetes.io, stop by a Wednesday SIG update meeting, or catch us on Slack (#sig-contribex).
Reach out to Contributor Experience via community@kubernetes.io, stop by a Wednesday SIG update meeting, or catch us on Slack (#sig-contribex).
{{<figure width="600" src="/images/blog/2019-03-14-A-Look-Back-And-Whats-In-Store-For-Kubernetes-Contributor-Summits/unconference.jpg" caption="Unconference voting">}}
@ -51,11 +53,11 @@ Reach out to Contributor Experience via community@kubernetes.io, stop by a Wedne
Our 2018 crew 🥁
Jorge Castro, Paris Pittman, Bob Killen, Jeff Sica, Megan Lehn, Guinevere Saenger, Josh Berkus, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Lindsey Tulloch, Zach Corleissen, Tim Pepper, Ihor Dvoretskyi, Nancy Mohamed, Chris Short, Mario Loria, Jason DeTiberus, Sahdev Zala, Mithra Raja
And an introduction to our 2019 crew (a thanks in advance ;) )...
Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles, Guinevere Saenger, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Rui Chen, Tim Pepper, Ihor Dvoretskyi, Dawn Foster
And an introduction to our 2019 crew (a thanks in advance ;) )
Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles, Guinevere Saenger, Noah Abrahams, Yang Li, Xiangpeng Zhao, Puja Abbassi, Rui Chen, Tim Pepper, Ihor Dvoretskyi, Dawn Foster
## Relive Seattle Contributor Summit
## Relive Seattle Contributor Summit
📈 80% growth rate since the Austin 2017 December event
@ -81,15 +83,11 @@ Jonas Rosland, Josh Berkus, Paris Pittman, Jorge Castro, Bob Killen, Deb Giles,
📸 Pictures (special thanks to [rdodev])
Garage Pic
Reg Desk
{{<figure width="600" src="/images/blog/2019-03-14-A-Look-Back-And-Whats-In-Store-For-Kubernetes-Contributor-Summits/grouppicseatle.JPG" caption="Some of the group in Seattle">}}
“I love Contrib Summit! The intros and deep dives during KubeCon were a great extension of Contrib Summit. Y'all did an excellent job in the morning to level set expectations and prime everyone.” -- julianv
“great work! really useful and fun!” - coffeepac
[click here]: https://events.linuxfoundation.org/events/contributor-summit-europe-2019/
[Contributor Experience]: https://github.com/kubernetes/community/tree/master/sig-contributor-experience
[Subproject OWNERs]: https://github.com/kubernetes/community/blob/master/community-membership.md
[Chair or Tech Lead]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md

View File

@ -2,6 +2,7 @@
title: 'Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA'
date: 2019-03-25
slug: kubernetes-1-14-release-announcement
evergreen: true
---
**Authors:** The 1.14 [Release Team](https://bit.ly/k8s114-team)

View File

@ -8,7 +8,7 @@ date: 2019-04-26
Last year we optimized the Kubernetes website for [hosting multilingual content](/blog/2018/11/08/kubernetes-docs-updates-international-edition/). Contributors responded by adding multiple new localizations: as of April 2019, Kubernetes docs are partially available in nine different languages, with six added in 2019 alone. You can see a list of available languages in the language selector at the top of each page.
By _partially available_, I mean that localizations are ongoing projects. They range from mostly complete ([Chinese docs for 1.12](https://v1-12.docs.kubernetes.io/zh/)) to brand new (1.14 docs in [Portuguese](https://kubernetes.io/pt/)). If you're interested in helping an existing localization, read on!
By _partially available_, I mean that localizations are ongoing projects. They range from mostly complete ([Chinese docs for 1.12](https://v1-12.docs.kubernetes.io/zh-cn/)) to brand new (1.14 docs in [Portuguese](https://kubernetes.io/pt/)). If you're interested in helping an existing localization, read on!
## What is a localization?

View File

@ -2,6 +2,7 @@
title: "Join us for the 2019 KubeCon Diversity Lunch & Hack"
date: 2019-05-02
slug: kubecon-diversity-lunch-and-hack
evergreen: false
---
**Authors:** Kiran Oliver, Podcast Producer, The New Stack
@ -36,4 +37,4 @@ To make this all possible, we need you. Yes, you, to register. As much as we lov
We look forward to seeing you!
_Special thanks to [Leah Petersen](https://www.linkedin.com/in/leahstunts/), [Sarah Conway](https://www.linkedin.com/in/sarah-conway-6166151/) and [Paris Pittman](https://www.linkedin.com/in/parispittman/) for their help in editing this post._
_Special thanks to [Leah Petersen](https://www.linkedin.com/in/leahstunts/), [Sarah Conway](https://www.linkedin.com/in/sarah-conway-6166151/) and [Paris Pittman](https://www.linkedin.com/in/parispittman/) for their help in editing this post._

View File

@ -3,6 +3,7 @@ layout: blog
title: "Kubernetes 1.15: Extensibility and Continuous Improvement"
date: 2019-06-19
slug: kubernetes-1-15-release-announcement
evergreen: true
---
**Authors:** The 1.15 [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.15/release_team.md)

View File

@ -3,19 +3,18 @@ layout: blog
title: "Contributor Summit San Diego Registration Open!"
date: 2019-09-24
slug: san-diego-contributor-summit
evergreen: true
---
**Authors: Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware)**
**Authors:** Paris Pittman (Google), Jeffrey Sica (Red Hat), Jonas Rosland (VMware)
[Contributor Summit San Diego 2019 Event Page]
Registration is now open and in record time, we've hit capacity for the
*new contributor workshop* session of the event! Waitlist is now available.
In record time, we've hit capacity for the *new contributor workshop* session of
the event!
**Sunday, November 17**
Evening Contributor Celebration:
[QuartYard]*
[QuartYard]
Address: 1301 Market Street, San Diego, CA 92101
Time: 6:00PM - 9:00PM
@ -68,7 +67,7 @@ Check out past blogs on [persona building around our events] and the [Barcelona
![Group Picture in 2018](/images/blog/2019-09-24-san-diego-contributor-summit/IMG_2588.JPG)
*=QuartYard has a huge stage! Want to perform something in front of your contributor peers? Reach out to us! community@kubernetes.io
=QuartYard has a huge stage! Want to perform something in front of your contributor peers? Reach out to us! community@kubernetes.io

View File

@ -5,13 +5,7 @@ date: 2019-10-10
slug: contributor-summit-san-diego-schedule
---
Authors: Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware)
tl;dr A week ago we announced that [registration is open][reg] for the contributor
summit , and we're now live with [the full Contributor Summit schedule!][schedule]
Grab your spot while tickets are still available. There is currently a waitlist
for new contributor workshop. ([Register here!][reg])
**Authors:** Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware)
There are many great sessions planned for the Contributor Summit, spread across
five rooms of current contributor content in addition to the new contributor
@ -32,7 +26,7 @@ While the schedule contains difficult decisions in every timeslot, we've picked
a few below to give you a taste of what you'll hear, see, and participate in at
the summit:
* **[Vision]**: SIG-Architecture will be sharing their vision of where we're going
* **[Vision]**: SIG Architecture will be sharing their vision of where we're going
with Kubernetes development for the next year and beyond.
* **[Security]**: Tim Allclair and CJ Cullen will present on the current state of
Kubernetes security. In another security talk, Vallery Lancey will lead a
@ -47,7 +41,7 @@ the summit:
one, or at least pass one.
* **[End Users]**: Several end users from the CNCF partner ecosystem, invited by
Cheryl Hung, will hold a Q&A with contributors to strengthen our feedback loop.
* **[Docs]**: As always, SIG-Docs will run a three-hour contributing-to-documentation
* **[Docs]**: As always, SIG Docs will run a three-hour contributing-to-documentation
workshop.
We're also giving out awards to contributors who distinguished themselves in 2019,

View File

@ -5,11 +5,28 @@ date: 2020-02-18
slug: Contributor-Summit-Amsterdam-Schedule-Announced
---
**Authors:** Jeffrey Sica (Red Hat), Amanda Katona (VMware)
**Authors:** Jeffrey Sica (Red Hat), Amanda Katona (VMware)
tl;dr [Registration is open](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/) and the [schedule is live](https://kcseu2020.sched.com/) so register now and we'll see you in Amsterdam!
![Contributor Summit](/images/blog/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced/contribsummit.jpg)
## Kubernetes Contributor Summit
Hello everyone and Happy 2020! It's hard to believe that KubeCon EU 2020 is less than six weeks away, and with that another contributor summit! This year we have the pleasure of being in Amsterdam in early spring, so be sure to pack some warmer clothing. This summit looks to be exciting with a lot of fantastic community-driven content. We received **26** submissions from the CFP. From that, the events team selected **12** sessions. Each of the sessions falls into one of four categories:
* Community
* Contributor Improvement
* Sustainability
* In-depth Technical
On top of the presentations, there will be a dedicated Docs Sprint as well as the New Contributor Workshop 101 and 201 Sessions. All told, we will have five separate rooms of content throughout the day on Monday. Please **[see the full schedule](https://kcseu2020.sched.com/)** to see what sessions you'd be interested in. We hope between the content provided and the inevitable hallway track, everyone has a fun and enriching experience.
Speaking of fun, the social Sunday night should be a blast! We're hosting this summit's social close to the conference center, at [ZuidPool](https://www.zuid-pool.nl/en/). There will be games, bingo, and unconference sign-up throughout the evening. It should be a relaxed way to kick off the week.
[~Registration is open~](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/)! Space is limited so it's always a good idea to register early.
If you have any questions, reach out to the [Amsterdam Team](https://github.com/kubernetes/community/tree/master/events/2020/03-contributor-summit#team) on Slack in the [#contributor-summit](https://kubernetes.slack.com/archives/C7J893413) channel.
Hope to see you there!
## Kubernetes Contributor Summit schedule
**Sunday, March 29, 2020**
@ -25,21 +42,3 @@ tl;dr [Registration is open](https://events.linuxfoundation.org/kubernetes-contr
- Address: [Europaplein 24, 1078 GZ Amsterdam, Netherlands](https://www.google.com/search?q=kubecon+amsterdam+2020&oq=kubecon+amste&aqs=chrome.0.35i39j69i57j0l4j69i61l2.3957j1j4&sourceid=chrome&ie=UTF-8&ibp=htl;events&rciv=evn&sa=X&ved=2ahUKEwiZoLvQ0dvnAhVST6wKHScBBZ8Q5bwDMAB6BAgSEAE#)
- Time: 09:00 - 17:00 (Breakfast at 08:00)
![Contributor Summit](/images/blog/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced/contribsummit.jpg)
Hello everyone and Happy 2020! It's hard to believe that KubeCon EU 2020 is less than six weeks away, and with that another contributor summit! This year we have the pleasure of being in Amsterdam in early spring, so be sure to pack some warmer clothing. This summit looks to be exciting with a lot of fantastic community-driven content. We received **26** submissions from the CFP. From that, the events team selected **12** sessions. Each of the sessions falls into one of four categories:
* Community
* Contributor Improvement
* Sustainability
* In-depth Technical
On top of the presentations, there will be a dedicated Docs Sprint as well as the New Contributor Workshop 101 and 201 Sessions. All told, we will have five separate rooms of content throughout the day on Monday. Please **[see the full schedule](https://kcseu2020.sched.com/)** to see what sessions you'd be interested in. We hope between the content provided and the inevitable hallway track, everyone has a fun and enriching experience.
Speaking of fun, the social Sunday night should be a blast! We're hosting this summit's social close to the conference center, at [ZuidPool](https://www.zuid-pool.nl/en/). There will be games, bingo, and unconference sign-up throughout the evening. It should be a relaxed way to kick off the week.
[Registration is open](https://events.linuxfoundation.org/kubernetes-contributor-summit-europe/)! Space is limited so it's always a good idea to register early.
If you have any questions, reach out to the [Amsterdam Team](https://github.com/kubernetes/community/tree/master/events/2020/03-contributor-summit#team) on Slack in the [#contributor-summit](https://kubernetes.slack.com/archives/C7J893413) channel.
Hope to see you there!

View File

@ -3,13 +3,14 @@ layout: blog
title: Contributor Summit Amsterdam Postponed
date: 2020-03-04
slug: Contributor-Summit-Delayed
evergreen: true
---
**Authors:** Dawn Foster (VMware), Jorge Castro (VMware)
**Authors:** Dawn Foster (VMware), Jorge Castro (VMware)
The CNCF has announced that [KubeCon + CloudNativeCon EU has been delayed](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/attend/novel-coronavirus-update/) until July/August of 2020. As a result the Contributor Summit planning team is weighing options for how to proceed. Here's the current plan:
- There will be an in-person Contributor Summit as planned when KubeCon + CloudNativeCon is rescheduled.
- We are looking at options for having additional virtual contributor activities in the meantime.
- We are looking at options for having additional virtual contributor activities in the meantime.
We will communicate via this blog and the usual communications channels on the final plan. Please bear with us as we adapt when we get more information. Thank you for being patient as the team pivots to bring you a great Contributor Summit!

View File

@ -67,7 +67,7 @@ Let's see an example of a cluster to understand this API.
As the feature name "PodTopologySpread" implies, the basic usage of this feature
is to spread your workload in an absolutely even manner (maxSkew=1), or a
relatively even manner (maxSkew>=2). See the [official
document](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
document](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
for more details.
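As a minimal sketch of the absolutely even case, the following Pod spec spreads replicas across zones with maxSkew=1 (the `app` label and container image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: demo            # placeholder label
spec:
  topologySpreadConstraints:
  - maxSkew: 1           # absolutely even spread across zones
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # placeholder image
```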
In addition to this basic usage, there are some advanced usage examples that

View File

@ -60,7 +60,7 @@ so we added two administrator-facing tools to help track use of deprecated APIs
Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint,
an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process.
This metric has labels for the API `group`, `version`, `resource`, and `subresource`,
and a `removed_version` label that indicates the Kubernetes release in which the API will no longer be served.
and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
@ -169,7 +169,7 @@ You can also find that information through the following Prometheus query,
which returns information about requests made to deprecated APIs which will be removed in v1.22:
```promql
apiserver_requested_deprecated_apis{removed_version="1.22"} * on(group,version,resource,subresource)
apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource)
group_right() apiserver_request_total
```

View File

@ -70,7 +70,7 @@ To correct the latter issue, we now employ a "hunt and peck" approach to removin
### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints
While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE.
Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
Furthermore, [pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) were still a beta feature in 1.18 which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around.
### 2. Deploy a statefulset _per zone_.

View File

@ -30,9 +30,9 @@ This led to design principles that allow the Gateway API to improve upon Ingress
The Gateway API introduces a few new resource types:
- **[GatewayClasses](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.GatewayClass)** are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
- **[Gateways](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.Gateway)** are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.
- **Routes** are not a single resource, but represent many different protocol-specific Route resources. The [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.HTTPRoute) has matching, filtering, and routing rules that get applied to Gateways that can process HTTP and HTTPS traffic. Similarly, there are [TCPRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.TCPRoute), [UDPRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.UDPRoute), and [TLSRoutes](https://gateway-api.sigs.k8s.io/v1alpha1/references/spec/#networking.x-k8s.io/v1alpha1.TLSRoute) which also have protocol-specific semantics. This model also allows the Gateway API to incrementally expand its protocol support in the future.
- **[GatewayClasses](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gatewayclass)** are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
- **[Gateways](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway)** are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.
- **Routes** are not a single resource, but represent many different protocol-specific Route resources. The [HTTPRoute](https://gateway-api.sigs.k8s.io/concepts/api-overview/#httproute) has matching, filtering, and routing rules that get applied to Gateways that can process HTTP and HTTPS traffic. Similarly, there are [TCPRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#tcproute-and-udproute), [UDPRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#tcproute-and-udproute), and [TLSRoutes](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) which also have protocol-specific semantics. This model also allows the Gateway API to incrementally expand its protocol support in the future.
![The resources of the Gateway API](gateway-api-resources.png)
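To make the model concrete, here is a rough sketch of how a GatewayClass and a Gateway fit together, assuming the `networking.x-k8s.io/v1alpha1` API group used by these early releases (names and labels are hypothetical, and several fields were renamed in later API versions):

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: example-lb
spec:
  # The controller (data-plane implementation) that manages Gateways of
  # this class; renamed to controllerName in later API versions.
  controller: example.com/gateway-controller
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-lb
  listeners:
  - protocol: HTTP
    port: 80
    # v1alpha1 Gateways selected Routes by label; later versions inverted
    # this so that Routes reference their parent Gateways instead.
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          app: example
```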

View File

@ -42,7 +42,7 @@ samplingRatePerMillion: 10000
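For context, `samplingRatePerMillion` above is a field of the kube-apiserver tracing configuration file; a minimal sketch of that file, assuming the `v1alpha1` configuration API available at the time, might look like:

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# OTLP gRPC endpoint of an OpenTelemetry collector (assumed default port)
endpoint: localhost:4317
# Sample roughly 1% of requests
samplingRatePerMillion: 10000
```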
### Enabling Etcd Tracing
Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it.
Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it. Required etcd version is [v3.5+](https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing).
### Example Trace: List Nodes

View File

@ -1,9 +1,9 @@
---
layout: blog
title: "Meet Our Contributors - APAC (India region)"
date: 2022-01-10T12:00:00+0000
date: 2022-01-10
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
---
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
@ -19,7 +19,7 @@ Welcome to the first episode of the APAC edition of the "Meet Our Contributors"
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.
💫 *Let's get started, so without further ado…*
💫 *Let's get started, so without further ado…*
## [Arsh Sharma](https://github.com/RinkiyaKeDad)
Arsh helps newcomers plan their early contributions sustainably.
Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/). He also served as a Communications role shadow during the 1.22 release cycle.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions, and fosters respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer.
@ -103,4 +103,3 @@ If you have any recommendations/suggestions for who we should interview next, pl
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋

View File

@ -86,7 +86,7 @@ in Kubernetes 1.24.
If you're running Kubernetes v1.24 or later, see [Can I still use Docker Engine as my container runtime?](#can-i-still-use-docker-engine-as-my-container-runtime).
(Remember, you can switch away from the dockershim if you're using any supported Kubernetes release; from release v1.24, you
**must** switch as Kubernetes no longer incluides the dockershim).
**must** switch as Kubernetes no longer includes the dockershim).
[kubelet]: /docs/reference/command-line-tools-reference/kubelet/

View File

@ -1,7 +1,7 @@
---
layout: blog
title: "Meet Our Contributors - APAC (Aus-NZ region)"
date: 2022-03-16T12:00:00+0000
date: 2022-03-16
slug: meet-our-contributors-au-nz-ep-02
canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/
---
@ -60,19 +60,13 @@ Nick Young works at VMware as a technical lead for Contour, a CNCF ingress contr
His contribution path was notable in that he began working on major areas of the Kubernetes project early on, skewing his trajectory.
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
> _Just being active and contributing will get you a long way. Once you've been active for a while, you'll find that you're able to answer questions, which will mean you're asked questions, and before you know it you are an expert._
---
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. Your suggestions would be much appreciated. We're thrilled to have additional folks assisting us in reaching out to even more wonderful individuals of the community.
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋

View File

@ -69,7 +69,7 @@ been deprecated. These removals have been superseded by newer, stable/generally
* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=).
* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations) for more information.
* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://git.k8s.io/design-proposals-archive/storage/csi-migration.md#background-and-motivations) for more information.
* [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/).
* [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initiates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information.
* [The `master` label is no longer present on kubeadm control plane nodes](https://github.com/kubernetes/kubernetes/pull/107533). For new clusters, the label 'node-role.kubernetes.io/master' will no longer be added to control plane nodes, only the label 'node-role.kubernetes.io/control-plane' will be added. For more information, refer to [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint).

View File

@ -1,6 +1,6 @@
---
layout: blog
title: "Storage Capacity Tracking reaches GA in Kubernetes 1.24"
title: "Kubernetes 1.24: Storage Capacity Tracking Now Generally Available"
date: 2022-05-06
slug: storage-capacity-ga
---

View File

@ -0,0 +1,178 @@
---
layout: blog
title: Kubernetes Gateway API Graduates to Beta
date: 2022-07-13
slug: gateway-api-graduates-to-beta
canonicalUrl: https://gateway-api.sigs.k8s.io/blog/2022/graduating-to-beta/
---
**Authors:** Shane Utt (Kong), Rob Scott (Google), Nick Young (VMware), Jeff Apple (HashiCorp)
We are excited to announce the v0.5.0 release of Gateway API. For the first
time, several of our most important Gateway API resources are graduating to
beta. Additionally, we are starting a new initiative to explore how Gateway API
can be used for mesh and introducing new experimental concepts such as URL
rewrites. We'll cover all of this and more below.
## What is Gateway API?
Gateway API is a collection of resources centered around [Gateway][gw] resources
(which represent the underlying network gateways / proxy servers) to enable
robust Kubernetes service networking through expressive, extensible and
role-oriented interfaces that are implemented by many vendors and have broad
industry support.
Originally conceived as a successor to the well known [Ingress][ing] API, the
benefits of Gateway API include (but are not limited to) explicit support for
many commonly used networking protocols (e.g. `HTTP`, `TLS`, `TCP`, `UDP`) as
well as tightly integrated support for Transport Layer Security (TLS). The
`Gateway` resource in particular enables implementations to manage the lifecycle
of network gateways as a Kubernetes API.
If you're an end-user interested in some of the benefits of Gateway API, we
invite you to jump in and find an implementation that suits you. At the time of
this release there are over a dozen [implementations][impl] for popular API
gateways and service meshes, and guides are available to start exploring quickly.
[gw]:https://gateway-api.sigs.k8s.io/api-types/gateway/
[ing]:https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Getting started
Gateway API is an official Kubernetes API like
[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
Gateway API represents a superset of Ingress functionality, enabling more
advanced concepts. Similar to Ingress, there is no default implementation of
Gateway API built into Kubernetes. Instead, there are many different
[implementations][impl] available, providing significant choice in terms of underlying
technologies while providing a consistent and portable experience.
Take a look at the [API concepts documentation][concepts] and check out some of
the [Guides][guides] to start familiarizing yourself with the APIs and how they
work. When you're ready for a practical application, open the [implementations
page][impl] and select an implementation that belongs to an existing technology
you may already be familiar with or the one your cluster provider uses as a
default (if applicable). Gateway API is a [Custom Resource Definition
(CRD)][crd] based API so you'll need to [install the CRDs][install-crds] onto a
cluster to use the API.
If you're specifically interested in helping to contribute to Gateway API, we
would love to have you! Please feel free to [open a new issue][issue] on the
repository, or join in the [discussions][disc]. Also check out the [community
page][community] which includes links to the Slack channel and community meetings.
[crd]:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/
[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds
[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose
[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions
[community]:https://gateway-api.sigs.k8s.io/contributing/community/
## Release highlights
### Graduation to beta
The `v0.5.0` release is particularly historic because it marks the graduation
of some of the key APIs to a beta API version (`v1beta1`):
- [GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/)
- [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/)
- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/)
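As a taste of the now-beta surface, here is a minimal sketch of an HTTPRoute written against `v1beta1`; the gateway, hostname, and backend Service names are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: checkout-route
spec:
  parentRefs:
  - name: example-gateway      # Gateway this route attaches to (hypothetical)
  hostnames:
  - "shop.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /checkout
    backendRefs:
    - name: checkout-svc       # backing Service (hypothetical)
      port: 8080
```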
This achievement was marked by the completion of several graduation criteria:
- API has been [widely implemented][impl].
- Conformance tests provide basic coverage for all resources and have multiple implementations passing tests.
- Most of the API surface is actively being used.
- Kubernetes SIG Network API reviewers have approved graduation to beta.
For more information on Gateway API versioning, refer to the [official
documentation](https://gateway-api.sigs.k8s.io/concepts/versioning/). To see
what's in store for future releases check out the [next steps](#next-steps)
section.
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Release channels
This release introduces the `experimental` and `standard` [release channels][ch],
which balance maintaining stability with enabling experimentation and
iterative development.
The `standard` release channel includes:
- resources that have graduated to beta
- fields that have graduated to standard (no longer considered experimental)
The `experimental` release channel includes everything in the `standard` release
channel, plus:
- `alpha` API resources
- fields that are considered experimental and have not graduated to `standard` channel
Release channels are used internally to enable iterative development with
quick turnaround, and externally to indicate feature stability to implementors
and end-users.
For this release we've added the following experimental features:
- [Routes can attach to Gateways by specifying port numbers](https://gateway-api.sigs.k8s.io/geps/gep-957/)
- [URL rewrites and path redirects](https://gateway-api.sigs.k8s.io/geps/gep-726/)
[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard
### Other improvements
For an exhaustive list of changes included in the `v0.5.0` release, please see
the [v0.5.0 release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0).
## Gateway API for service mesh: the GAMMA Initiative
Some service mesh projects have [already implemented support for the Gateway
API](https://gateway-api.sigs.k8s.io/implementations/). Significant overlap
between the Service Mesh Interface (SMI) APIs and the Gateway API has [inspired
discussion in the SMI
community](https://github.com/servicemeshinterface/smi-spec/issues/249) about
possible integration.
We are pleased to announce that the service mesh community, including
representatives from Cilium Service Mesh, Consul, Istio, Kuma, Linkerd, NGINX
Service Mesh and Open Service Mesh, is coming together to form the [GAMMA
Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/), a dedicated
workstream within the Gateway API subproject focused on Gateway API for Mesh
Management and Administration.
This group will deliver [enhancement
proposals](https://gateway-api.sigs.k8s.io/v1beta1/contributing/gep/) consisting
of resources, additions, and modifications to the Gateway API specification for
mesh and mesh-adjacent use-cases.
This work has begun with [an exploration of using Gateway API for
service-to-service
traffic](https://docs.google.com/document/d/1T_DtMQoq2tccLAtJTpo3c0ohjm25vRS35MsestSL9QU/edit#heading=h.jt37re3yi6k5)
and will continue with enhancement in areas such as authentication and
authorization policy.
## Next steps
As we continue to mature the API for production use cases, here are some of the highlights of what we'll be working on for the next Gateway API releases:
- [GRPCRoute][gep1016] for [gRPC][grpc] traffic routing
- [Route delegation][pr1085]
- Layer 4 API maturity: Graduating [TCPRoute][tcpr], [UDPRoute][udpr] and
[TLSRoute][tlsr] to beta
- [GAMMA Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/) - Gateway API for Service Mesh
If there's something on this list you want to get involved in, or there's
something not on this list that you want to advocate for to get on the roadmap,
please join us in the #sig-network-gateway-api channel on Kubernetes Slack or our weekly [community calls](https://gateway-api.sigs.k8s.io/contributing/community/#meetings).
[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md
[grpc]:https://grpc.io/
[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085
[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go
[udpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/udproute_types.go
[tlsr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tlsroute_types.go
[community]:https://gateway-api.sigs.k8s.io/contributing/community/

View File

@ -8,9 +8,9 @@ community_styles_migrated: true
<div class="community-section" id="cncf-code-of-conduct-intro">
<p>
Kubernetes follows the
<a href="https://github.com/cncf/foundation/blob/master/code-of-conduct.md">CNCF Code of Conduct</a>.
<a href="https://github.com/cncf/foundation/blob/main/code-of-conduct.md">CNCF Code of Conduct</a>.
The text of the CNCF CoC is replicated below, as of
<a href="https://github.com/cncf/foundation/blob/214585e24aab747fb85c2ea44fbf4a2442e30de6/code-of-conduct.md">commit 214585e</a>.
<a href="https://github.com/cncf/foundation/blob/71b12a2f8b4589788ef2d69b351a3d035c68d927/code-of-conduct.md">commit 71b12a2</a>.
If you notice that this is out of date, please
<a href="https://github.com/kubernetes/website/issues/new">file an issue</a>.
</p>

View File

@ -1,45 +1,72 @@
<!-- Do not edit this file directly. Get the latest from
https://github.com/cncf/foundation/blob/master/code-of-conduct.md -->
## CNCF Community Code of Conduct v1.0
https://github.com/cncf/foundation/blob/main/code-of-conduct.md -->
## CNCF Community Code of Conduct v1.1
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of fostering
As contributors and maintainers in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
We are committed to making participation in the CNCF community a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
## Scope
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.
### CNCF Events
CNCF events, or events run by the Linux Foundation with professional events staff, are governed by the Linux Foundation [Events Code of Conduct](https://events.linuxfoundation.org/code-of-conduct/) available on the event page. This is designed to be used in conjunction with the CNCF Code of Conduct.
## Our Standards
Examples of behavior that contributes to a positive environment include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing other's private information, such as physical or electronic addresses,
without explicit permission
* Other unethical or unprofessional conduct.
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct.
By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect
of managing this project.
Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
## Reporting
For incidents occurring in the Kubernetes community, contact the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. You can expect a response within three business days.
For other projects, please contact the CNCF staff via <conduct@cncf.io>. You can expect a response within three business days.
In matters that require an outside mediator, CNCF has retained Mishi Choudhary (mishi@linux.com). Use of an outside mediator can be requested when reporting or used at CNCF staff's discretion. In general, contacting <conduct@cncf.io> directly is preferred.
## Enforcement
The Kubernetes project's [Code of Conduct Committee](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct) enforces code of conduct issues. For all other projects, the CNCF enforces code of conduct issues.
Both bodies try to resolve incidents without punishment, but may remove people from the project or CNCF communities at their discretion.
## Acknowledgements
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 2.0 available at
http://contributor-covenant.org/version/2/0/code_of_conduct/

View File

@ -3,26 +3,26 @@
# Kubernetes Community Values
Kubernetes Community culture contributes substantially to the project's success. The following values have evolved over time, pushing our project and peers toward constant improvement.
## Distribution is better than centralization
The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstones of our world-wide community.
## Community over product or company
We are here as a community first. Our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem, providing an excellent experience for our users. Individuals gain status through work. Companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.
## Automation over process
Large projects have a lot of hard yet less exciting work. We value time spent automating repetitive work more highly than toil. Where work cannot be automated, our culture recognizes and rewards all types of contributions while recognizing that heroism is not sustainable.
## Inclusive is better than exclusive
Broadly successful and useful technologies require different perspectives and skill sets, which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community members earn leadership through effort, scope, quality, quantity, and duration of contributions. Our community respects the time and effort put into a discussion, regardless of where a contributor is on their growth path.
## Evolution is better than stagnation
Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship, and respect are the foundations of Kubernetes culture. Kubernetes community leaders have a duty to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.
**"Culture eats strategy for breakfast." --Peter Drucker**

View File

@ -9,7 +9,15 @@ community_styles_migrated: true
sitemap:
priority: 0.1
---
<div class="community-section" id="values-legacy">
<p>
This page is a replicated version of
<a href="https://www.kubernetes.dev/community/values/">Kubernetes Community Values</a>, as of
<a href="https://github.com/kubernetes/community/blob/5c642749a030c5f7b2363a6bb9bad00a56a92161/values.md">commit 5c64274</a>.
If you notice that this is out of date, please
<a href="https://github.com/kubernetes/website/issues/new">file an issue</a>.
</p>
{{< include "/static/community-values.md" >}}
</div>

View File

@ -2,7 +2,7 @@
reviewers:
- dchen1107
- liggitt
title: Communication between Nodes and the Control Plane
content_type: concept
weight: 20
aliases:
@ -11,62 +11,109 @@ aliases:
<!-- overview -->
This document catalogs the communication paths between the API server and the Kubernetes cluster.
The intent is to allow users to customize their installation to harden the network configuration
such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud
provider).
<!-- body -->
## Node to Control Plane
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run)
terminates at the API server. None of the other control plane components are designed to expose
remote services. The API server is configured to listen for remote connections on a secure HTTPS
port (typically 443) with one or more forms of client
[authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be
enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests)
or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can
connect securely to the API server along with valid client credentials. A good approach is that the
client credentials provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
for automated provisioning of kubelet client certificates.
Pods that wish to connect to the API server can do so securely by leveraging a service account so
that Kubernetes will automatically inject the public root certificate and a valid bearer token
into the pod when it is instantiated.
The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is
redirected (via `kube-proxy`) to the HTTPS endpoint on the API server.
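For example, here is a minimal sketch of a Pod that relies on those injected credentials; the Pod name and image are hypothetical, and nothing beyond the defaults is required:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-client             # hypothetical name
spec:
  serviceAccountName: default  # the namespace's default service account
  containers:
  - name: client
    image: busybox:1.36        # hypothetical image choice
    command: ["sleep", "3600"]
    # The kubelet automatically mounts the cluster root certificate and a
    # bearer token under /var/run/secrets/kubernetes.io/serviceaccount/.
```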
The control plane components also communicate with the API server over the secure port.
As a result, the default operating mode for connections from the nodes and pods running on the
nodes to the control plane is secured by default and can run over untrusted and/or public
networks.
## Control plane to node
There are two primary communication paths from the control plane (the API server) to the nodes.
The first is from the API server to the kubelet process which runs on each node in the cluster.
The second is from the API server to any node, pod, or service through the API server's _proxy_
functionality.
### API server to kubelet
The connections from the API server to the kubelet are used for:
* Fetching logs for pods.
* Attaching (usually through `kubectl`) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default, the API server does not
verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle
attacks and **unsafe** to run over untrusted and/or public networks.
To verify this connection, use the `--kubelet-certificate-authority` flag to provide the API
server with a root certificate bundle to use to verify the kubelet's serving certificate.
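For example, on clusters where the API server runs as a static Pod, the flag might be set in its manifest like this (a sketch; the CA bundle path and manifest layout are assumptions):
```yaml
# Excerpt from a kube-apiserver static Pod manifest (illustrative only).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # assumed path to the CA bundle that signed the kubelet serving certificates
    - --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt
```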
If that is not possible, use [SSH tunneling](#ssh-tunnels) between the API server and kubelet if
required to avoid connecting over an
untrusted or public network.
Finally, [Kubelet authentication and/or authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
should be enabled to secure the kubelet API.
### API server to nodes, pods, and services
The connections from the API server to a node, pod, or service default to plain HTTP connections
and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS
connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will
not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So
while the connection will be encrypted, it will not provide any guarantees of integrity. These
connections **are not currently safe** to run over untrusted or public networks.
### SSH tunnels
Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this
configuration, the API server initiates an SSH tunnel to each node in the cluster (connecting to
the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or
service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are
running.
{{< note >}}
SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you
are doing. The [Konnectivity service](#konnectivity-service) is a replacement for this
communication channel.
{{< /note >}}
### Konnectivity service
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the
control plane to cluster communication. The Konnectivity service consists of two parts: the
Konnectivity server in the control plane network and the Konnectivity agents in the nodes network.
The Konnectivity agents initiate connections to the Konnectivity server and maintain the network
connections.
After enabling the Konnectivity service, all control plane to nodes traffic goes through these
connections.
Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set
up the Konnectivity service in your cluster.
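As part of that setup, the API server is pointed at the Konnectivity server through an egress selector configuration. A minimal sketch (the UDS socket path follows the setup task, but treat it as an assumption for your cluster):
```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# "cluster" covers traffic from the API server to nodes, pods, and services.
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```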

View File

@ -458,7 +458,7 @@ Message: Pod was terminated in response to imminent node shutdown.
{{< feature-state state="alpha" for_k8s_version="v1.24" >}}
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
kubelet or because of a user error, i.e., the ShutdownGracePeriod and
ShutdownGracePeriodCriticalPods are not configured properly. Please refer to above
@ -479,29 +479,24 @@ these pods will be stuck in terminating status on the shutdown node forever.
To mitigate the above situation, a user can manually add the taint
`node.kubernetes.io/out-of-service` with either `NoExecute` or `NoSchedule` effect to
a Node marking it out-of-service.
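For example, the taint can be expressed on the Node object like this (a sketch; the node name and taint value are illustrative):
```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1            # hypothetical node name
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown     # illustrative value
    effect: NoExecute
```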
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
detach operations for the pods terminating on the node will happen immediately. This allows the
Pods on the out-of-service node to recover quickly on a different node.
During a non-graceful shutdown, Pods are terminated in two phases:
1. Force delete the Pods that do not have matching `out-of-service` tolerations.
2. Immediately perform detach volume operation for such pods.
{{< note >}}
- Before adding the taint `node.kubernetes.io/out-of-service`, it should be verified
that the node is already in shutdown or power off state (not in the middle of
restarting).
- The user is required to manually remove the out-of-service taint after the pods are
moved to a new node and the user has checked that the shutdown node has been
recovered since the user was the one who originally added the taint.
{{< /note >}}
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}
@ -654,7 +649,7 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).

View File

@ -11,31 +11,37 @@ no_list: true
---
<!-- overview -->
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
<!-- body -->
## Planning a cluster
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure
Kubernetes clusters. The solutions listed in this article are called *distros*.
{{< note >}}
Not all distros are actively maintained. Choose distros which have been tested with a recent
version of Kubernetes.
{{< /note >}}
Before choosing a guide, here are some considerations:
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability,
multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly
support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which
[networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**?
If the latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
## Managing a cluster
@ -45,29 +51,43 @@ Before choosing a guide, here are some considerations:
## Securing a cluster
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to
generate certificates using different tool chains.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes
the environment for Kubelet managed containers on a Kubernetes node.
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes
how Kubernetes implements access control for its own API.
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in
Kubernetes, including the various authentication options.
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from
authentication, and controls how HTTP calls are handled.
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
explains plug-ins which intercept requests to the Kubernetes API server after authentication
and authorization.
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
describes to an administrator how to use the `sysctl` command-line tool to set kernel
parameters.
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes'
audit logs.
### Securing the kubelet
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
## Optional Cluster Services
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve
a DNS name directly to a Kubernetes service.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/)
explains how logging in Kubernetes works and how to implement it.

View File

@ -18,17 +18,17 @@ This page lists some of the available add-ons and links to their respective inst
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and security services for Kubernetes, leveraging Open vSwitch as the networking data plane.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN based CNI controller plugin to provide cloud native based Service function chaining (SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

View File

@ -174,7 +174,7 @@ to balance progress between request flows.
The queuing configuration allows tuning the fair queuing algorithm for a
priority level. Details of the algorithm can be read in the
[enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness), but in short:
* Increasing `queues` reduces the rate of collisions between different flows, at
the cost of increased memory usage. A value of 1 here effectively disables the
fair-queuing logic, but still allows requests to be queued.
@ -471,11 +471,15 @@ poorly-behaved workloads that may be harming system health.
requests, broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `request_kind` (which takes on
the values `mutating` and `readOnly`). The observations are made
periodically at a high rate. Each observed value is a ratio,
between 0 and 1, of a number of requests divided by the
corresponding limit on the number of requests (queue length limit
for waiting and concurrency limit for executing).
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests (divided by the corresponding limit to get a ratio in the
range 0 to 1) broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `request_kind` (which takes on
the values `mutating` and `readOnly`); the label `mark` takes on
values `high` and `low`. The water marks are accumulated over
@ -502,11 +506,15 @@ poorly-behaved workloads that may be harming system health.
values `waiting` and `executing`) and `priority_level`. Each
histogram gets observations taken periodically, up through the last
activity of the relevant sort. The observations are made at a high
rate. Each observed value is a ratio, between 0 and 1, of a number
of requests divided by the corresponding limit on the number of
requests (queue length limit for waiting and concurrency limit for
executing).
* `apiserver_flowcontrol_priority_level_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests (divided by the corresponding limit to get a ratio in the
range 0 to 1) broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `priority_level`; the label
`mark` takes on values `high` and `low`. The water marks are
accumulated over windows bounded by the times when an observation
@ -514,6 +522,31 @@ poorly-behaved workloads that may be harming system health.
`apiserver_flowcontrol_priority_level_request_count_samples`. These
water marks show the range of values that occurred between samples.
* `apiserver_flowcontrol_priority_level_seat_count_samples` is a
histogram vector of observations of the utilization of a priority
level's concurrency limit, broken down by `priority_level`. This
utilization is the fraction (number of seats occupied) /
(concurrency limit). This metric considers all stages of execution
(both normal and the extra delay at the end of a write to cover for
the corresponding notification work) of all requests except WATCHes;
for those it considers only the initial stage that delivers
notifications of pre-existing objects. Each histogram in the vector
is also labeled with `phase: executing` (there is no seat limit for
the waiting phase). Each histogram gets observations taken
periodically, up through the last activity of the relevant sort.
The observations are made at a high rate.
* `apiserver_flowcontrol_priority_level_seat_count_watermarks` is a
histogram vector of high or low water marks of the utilization of a
priority level's concurrency limit, broken down by `priority_level`
and `mark` (which takes on values `high` and `low`). Each histogram
in the vector is also labeled with `phase: executing` (there is no
seat limit for the waiting phase). The water marks are accumulated
over windows bounded by the times when an observation was added to
`apiserver_flowcontrol_priority_level_seat_count_samples`. These
water marks show the range of values that occurred between samples.
* `apiserver_flowcontrol_request_queue_length_after_enqueue` is a
histogram vector of queue lengths for the queues, broken down by
the labels `priority_level` and `flow_schema`, as sampled by the
@ -556,6 +589,22 @@ poorly-behaved workloads that may be harming system health.
and `priority_level` (indicating the one to which the request was
assigned).
* `apiserver_flowcontrol_watch_count_samples` is a histogram vector of
the number of active WATCH requests relevant to a given write,
broken down by `flow_schema` and `priority_level`.
* `apiserver_flowcontrol_work_estimated_seats` is a histogram vector
of the number of estimated seats (maximum of initial and final stage
of execution) associated with requests, broken down by `flow_schema`
and `priority_level`.
* `apiserver_flowcontrol_request_dispatch_no_accommodation_total` is a
counter vector of the number of events that in principle could have led
to a request being dispatched but did not, due to lack of available
concurrency, broken down by `flow_schema` and `priority_level`. The
relevant sorts of events are arrival of a request and completion of
a request.
### Debug endpoints
When you enable the API Priority and Fairness feature, the `kube-apiserver`

View File

@ -203,4 +203,4 @@ to run, and in both cases, the network provides one IP address per pod - as is s
The early design of the networking model and its rationale, and some future
plans are described in more detail in the
[networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md).

View File

@ -332,7 +332,7 @@ container of a Pod can specify either or both of the following:
Limits and requests for `ephemeral-storage` are measured in byte quantities.
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following quantities all represent roughly the same value:
- `128974848`
@ -340,6 +340,10 @@ Mi, Ki. For example, the following quantities all represent roughly the same val
- `129M`
- `123Mi`
Pay attention to the case of the suffixes. If you request `400m` of ephemeral-storage, this is a request
for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
or 400 megabytes (`400M`).
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
a limit of 8GiB of local ephemeral storage.
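A sketch of such a manifest, matching the description above (the Pod name and images are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend   # hypothetical name
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4              # hypothetical image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6   # hypothetical image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
```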
@ -799,7 +803,7 @@ memory limit (and possibly request) for that container.
* Get hands-on experience [assigning Memory resources to containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
* Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)

View File

@ -18,7 +18,7 @@ It does not mean that there is a file named `kubeconfig`.
{{< /note >}}
{{< warning >}}
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
{{< /warning>}}
@ -53,7 +53,7 @@ clusters and namespaces.
A *context* element in a kubeconfig file is used to group access parameters
under a convenient name. Each context has three parameters: cluster, namespace, and user.
By default, the `kubectl` command-line tool uses parameters from
the *current context* to communicate with the cluster.
To choose the current context:
```
kubectl config use-context
```
@ -150,16 +150,16 @@ are stored absolutely.
## Proxy
You can configure `kubectl` to use a proxy per cluster using `proxy-url` in your kubeconfig file, like this:
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    proxy-url: http://proxy.example.org:3128
    server: https://k8s.example.org/k8s/clusters/c-xxyyzz
  name: development
users:
@ -168,7 +168,6 @@ users:
contexts:
- context:
  name: development
```

View File

@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
## Using Labels
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
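For example, a Service that selects the frontend Pods of `MyApp` across releases might look like this (a sketch; the Service name and ports are hypothetical):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend   # hypothetical name
spec:
  selector:              # note: no release-specific labels here
    app.kubernetes.io/name: MyApp
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376     # hypothetical container port
```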

View File

@ -116,7 +116,7 @@ Runtime handlers are configured through containerd's configuration at
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
See containerd's [config documentation](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
for more details.
#### {{< glossary_tooltip term_id="cri-o" >}}

View File

@ -17,26 +17,29 @@ no_list: true
<!-- overview -->
Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or
submit patches to the Kubernetes project code.
This guide describes the options for customizing a Kubernetes cluster. It is aimed at
{{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to understand
how to adapt their Kubernetes cluster to the needs of their work environment. Developers who are
prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or
Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also
find it useful as an introduction to what extension points and patterns exist, and their
trade-offs and limitations.
<!-- body -->
## Overview
Customization approaches can be broadly divided into *configuration*, which only involves changing
flags, local configuration files, or API resources; and *extensions*, which involve running
additional programs or services. This document is primarily about extensions.
## Configuration
*Configuration files* and *flags* are documented in the Reference section of the online
documentation, under each binary:
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
@ -44,9 +47,22 @@ Customization approaches can be broadly divided into *configuration*, which only
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a
distribution with managed installation. When they are changeable, they are usually only changeable
by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and
setting them may require restarting processes. For those reasons, they should be used only when
there are no other options.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/),
[NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control
([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs.
APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations.
They are declarative and use the same conventions as other Kubernetes resources like pods,
so new cluster configuration can be repeatable and be managed the same way as applications.
And, where they are stable, they enjoy a
[defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs.
For these reasons, they are preferred over *configuration files* and *flags* where suitable.
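For instance, a ResourceQuota is an ordinary declarative object (a sketch; the namespace and values are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: team-a      # hypothetical namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```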
## Extensions
@ -70,10 +86,9 @@ There is a specific pattern for writing client programs that work well with
Kubernetes called the *Controller* pattern. Controllers typically read an
object's `.spec`, possibly do things, and then update the object's `.status`.
A controller is a client of Kubernetes. When Kubernetes is the client and
calls out to a remote service, it is called a *Webhook*. The remote service
is called a *Webhook Backend*. Like Controllers, Webhooks do add a point of
failure.
A controller is a client of Kubernetes. When Kubernetes is the client and calls out to a remote
service, it is called a *Webhook*. The remote service is called a *Webhook Backend*. Like
Controllers, Webhooks do add a point of failure.
In the webhook model, Kubernetes makes a network request to a remote service.
In the *Binary Plugin* model, Kubernetes executes a binary (program).
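In the webhook model, the remote service is registered with the API server declaratively. As a rough sketch for admission control (the names and URL are hypothetical):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy
webhooks:
- name: example-policy.example.com               # hypothetical webhook name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    url: https://webhook.example.com/validate    # hypothetical backend
  admissionReviewVersions: ["v1"]
  sideEffects: None
```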
@ -95,15 +110,35 @@ This diagram shows the extension points in a Kubernetes system.
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
![Extension Points](/docs/concepts/extend-kubernetes/extension-points.png)
1. Users often interact with the Kubernetes API using `kubectl`.
[Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary.
They only affect the individual user's local environment, and so cannot enforce site-wide policies.
1. The API server handles all requests. Several types of extension points in the API server allow
authenticating requests, or blocking them based on their content, editing content, and handling
deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
1. The API server serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are
defined by the Kubernetes project and can't be changed. You can also add resources that you
define, or that other projects have defined, called *Custom Resources*, as explained in the
[Custom Resources](#user-defined-types) section. Custom Resources are often used with API access
extensions.
1. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend
scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
1. Much of the behavior of Kubernetes is implemented by programs called Controllers which are
clients of the API server. Controllers are often used in conjunction with Custom Resources.
1. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on
the cluster network. [Network Plugins](#network-plugins) allow for different implementations of
pod networking.
1. The kubelet also mounts and unmounts volumes for containers. New types of storage can be
supported via [Storage Plugins](#storage-plugins).
If you are unsure where to start, this flowchart can help. Note that some solutions may involve
several types of extensions.
<!-- image source drawing: https://docs.google.com/drawings/d/1sdviU6lDz4BpnzJNHfNpQrqI9F19QZ07KnhnxVrp2yg/edit -->
![Flowchart for Extension](/docs/concepts/extend-kubernetes/flowchart.png)
@ -112,60 +147,86 @@ If you are unsure where to start, this flowchart can help. Note that some soluti
### User-Defined Types
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application
configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such
as `kubectl`.
Do not use a Custom Resource as data storage for application, user, or monitoring data.
For more about Custom Resources, see the
[Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
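For illustration, here is a minimal sketch of a CustomResourceDefinition; the `stable.example.com`
group and `CronTab` kind are placeholder names, not anything defined by Kubernetes itself:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```

Once such a CRD is applied, objects of the new kind can be created, listed, and deleted with
`kubectl`, just like built-in resources.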
### Combining New APIs with Automation
The combination of a custom resource API and a control loop is called the
[Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage
specific, usually stateful, applications. These custom APIs and control loops can also be used to
control other resources, such as storage or policies.
### Changing Built-in Resources
When you extend the Kubernetes API by adding custom resources, the added resources always fall
into new API groups. You cannot replace or change existing API groups.
Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API
Access Extensions do.
### API Access Extensions
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then
subject to various types of Admission Control. See
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/)
for more on this flow.
Each of these steps offers extension points.
Kubernetes supports several built-in authentication methods. It can also sit behind an
authenticating proxy, and it can send a token from an Authorization header to a remote service for
verification (a webhook). All of these methods are covered in the
[Authentication documentation](/docs/reference/access-authn-authz/authentication/).
### Authentication
[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates
in all requests to a username for the client making the request.
Kubernetes provides several built-in authentication methods, and an
[Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
method if those don't meet your needs.
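As a sketch of how the webhook method is wired up: the API server is started with
`--authentication-token-webhook-config-file=<path>`, and the referenced file uses the
kubeconfig format to describe the remote service. The hostname and certificate paths
below are placeholders:

```yaml
# kubeconfig-format file describing the remote token-review service
apiVersion: v1
kind: Config
clusters:
- name: token-reviewer
  cluster:
    # CA used to validate the remote service
    certificate-authority: /path/to/webhook-ca.pem
    # URL that receives TokenReview requests
    server: https://authn.example.com/authenticate
users:
- name: kube-apiserver
  user:
    # credentials the API server presents to the webhook
    client-certificate: /path/to/apiserver-client.pem
    client-key: /path/to/apiserver-client-key.pem
contexts:
- name: webhook
  context:
    cluster: token-reviewer
    user: kube-apiserver
current-context: webhook
```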
### Authorization
[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific
users can read, write, and do other operations on API resources. It works at the level of whole
resources -- it doesn't discriminate based on arbitrary object fields. If the built-in
authorization options don't meet your needs, [Authorization webhook](/docs/reference/access-authn-authz/webhook/)
allows calling out to user-provided code to make an authorization decision.
### Dynamic Admission Control
After a request is authorized, if it is a write operation, it also goes through
[Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps.
In addition to the built-in steps, there are several extensions:
* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)
restricts what images can be run in containers.
* To make arbitrary admission control decisions, a general
[Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
can be used. Admission Webhooks can reject creations or updates; a sketch of one such
configuration follows this list.
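As a rough sketch, a validating webhook is registered through a
`ValidatingWebhookConfiguration` object; the webhook name, namespace, and Service below
are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com
webhooks:
- name: pod-policy.example.com
  rules:
  # call the webhook whenever a Pod is created
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: example-namespace
      name: example-service
    caBundle: <base64-encoded CA certificate>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 5
```

The API server sends an `AdmissionReview` to the referenced service and allows or rejects
the request based on the response.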
## Infrastructure Extensions
### Storage Plugins
[Flex Volumes](https://git.k8s.io/design-proposals-archive/storage/flexvolume-deployment.md)
allow users to mount volume types without built-in support by having the kubelet call a binary
plugin to mount the volume.
FlexVolume is deprecated since Kubernetes v1.23. The out-of-tree CSI driver is the recommended way
to write volume drivers in Kubernetes. See
[Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors)
for more information.
### Device Plugins
@ -173,7 +234,6 @@ Device plugins allow a node to discover new Node resources (in addition to the
built-in ones like CPU and memory) via a
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
### Network Plugins
Different networking fabrics can be supported via node-level
@ -191,7 +251,7 @@ This is a significant undertaking, and almost all Kubernetes users find they
do not need to modify the scheduler.
The scheduler also supports a
[webhook](https://git.k8s.io/design-proposals-archive/scheduling/scheduler_extender.md)
that permits a webhook backend (scheduler extension) to filter and prioritize
the nodes chosen for a pod.
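Assuming you configure kube-scheduler through its component config API, registering an
extender might look roughly like the sketch below; the URL and verb paths are placeholders
for whatever your extender backend serves:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
extenders:
- urlPrefix: https://scheduler-extender.example.com/
  filterVerb: filter          # POSTed to <urlPrefix>/filter to filter candidate nodes
  prioritizeVerb: prioritize  # POSTed to <urlPrefix>/prioritize to score nodes
  weight: 1
  enableHTTPS: true
```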

View File

@ -8,7 +8,7 @@ weight: 20
<!-- overview -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
Kubernetes provides a [device plugin framework](https://git.k8s.io/design-proposals-archive/resource-management/device-plugin.md)
that you can use to advertise system hardware resources to the
{{< glossary_tooltip term_id="kubelet" >}}.

View File

@ -113,6 +113,7 @@ Operator.
* [Charmed Operator Framework](https://juju.is/)
* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
* [kube-rs](https://kube.rs/) (Rust)
* [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)

View File

@ -1,237 +0,0 @@
---
title: Service Catalog
reviewers:
- chenopis
content_type: concept
weight: 40
---
<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}
A service broker, as defined by the [Open service broker API spec](https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md), is an endpoint for a set of managed services offered and maintained by a third party, which could be a cloud provider such as AWS, GCP, or Azure.
Some examples of managed services are Microsoft Azure Cloud Queue, Amazon Simple Queue Service, and Google Cloud Pub/Sub, but they can be any software offering that can be used by an application.
Using Service Catalog, a {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}} can browse the list of managed services offered by a service broker, provision an instance of a managed service, and bind with it to make it available to an application in the Kubernetes cluster.
<!-- body -->
## Example use case
An {{< glossary_tooltip text="application developer" term_id="application-developer" >}} wants to use message queuing as part of their application running in a Kubernetes cluster.
However, they do not want to deal with the overhead of setting such a service up and administering it themselves.
Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.
A cluster operator can set up Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
The application can access the message queue as a service.
## Architecture
Service Catalog uses the [Open service broker API](https://github.com/openservicebrokerapi/servicebroker) to communicate with service brokers, acting as an intermediary for the Kubernetes API Server to negotiate the initial provisioning and retrieve the credentials necessary for the application to use a managed service.
It is implemented using a [CRDs-based](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) architecture.
<br>
![Service Catalog Architecture](/images/docs/service-catalog-architecture.svg)
### API Resources
Service Catalog installs the `servicecatalog.k8s.io` API and provides the following Kubernetes resources:
* `ClusterServiceBroker`: An in-cluster representation of a service broker, encapsulating its server connection details.
These are created and managed by cluster operators who wish to use that broker server to make new types of managed services available within their cluster.
* `ClusterServiceClass`: A managed service offered by a particular service broker.
When a new `ClusterServiceBroker` resource is added to the cluster, the Service Catalog controller connects to the service broker to obtain a list of available managed services. It then creates a new `ClusterServiceClass` resource corresponding to each managed service.
* `ClusterServicePlan`: A specific offering of a managed service. For example, a managed service may have different plans available, such as a free tier or paid tier, or it may have different configuration options, such as using SSD storage or having more resources. Similar to `ClusterServiceClass`, when a new `ClusterServiceBroker` is added to the cluster, Service Catalog creates a new `ClusterServicePlan` resource corresponding to each Service Plan available for each managed service.
* `ServiceInstance`: A provisioned instance of a `ClusterServiceClass`.
These are created by cluster operators to make a specific instance of a managed service available for use by one or more in-cluster applications.
When a new `ServiceInstance` resource is created, the Service Catalog controller connects to the appropriate service broker and instructs it to provision the service instance.
* `ServiceBinding`: Access credentials to a `ServiceInstance`.
These are created by cluster operators who want their applications to make use of a `ServiceInstance`.
Upon creation, the Service Catalog controller creates a Kubernetes `Secret` containing connection details and credentials for the Service Instance, which can be mounted into Pods.
### Authentication
Service Catalog supports these methods of authentication:
* Basic (username/password)
* [OAuth 2.0 Bearer Token](https://tools.ietf.org/html/rfc6750)
## Usage
A cluster operator can use Service Catalog API Resources to provision managed services and make them available within a Kubernetes cluster. The steps involved are:
1. Listing the managed services and Service Plans available from a service broker.
1. Provisioning a new instance of the managed service.
1. Binding to the managed service, which returns the connection credentials.
1. Mapping the connection credentials into the application.
### Listing managed services and Service Plans
First, a cluster operator must create a `ClusterServiceBroker` resource within the `servicecatalog.k8s.io` group. This resource contains the URL and connection details necessary to access a service broker endpoint.
This is an example of a `ClusterServiceBroker` resource:
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: cloud-broker
spec:
  # Points to the endpoint of a service broker. (This example is not a working URL.)
  url: https://servicebroker.somecloudprovider.com/v1alpha1/projects/service-catalog/brokers/default
  #####
  # Additional values can be added here, which may be used to communicate
  # with the service broker, such as bearer token info or a caBundle for TLS.
  #####
```
The following is a sequence diagram illustrating the steps involved in listing managed services and Plans available from a service broker:
![List Services](/images/docs/service-catalog-list.svg)
1. Once the `ClusterServiceBroker` resource is added to Service Catalog, it triggers a call to the external service broker for a list of available services.
1. The service broker returns a list of available managed services and a list of Service Plans, which are cached locally as `ClusterServiceClass` and `ClusterServicePlan` resources respectively.
1. A cluster operator can then get the list of available managed services using the following command:

   ```
   kubectl get clusterserviceclasses -o=custom-columns=SERVICE\ NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName
   ```

   It should output a list of service names with a format similar to:

   ```
   SERVICE NAME                           EXTERNAL NAME
   4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468   cloud-provider-service
   ...                                    ...
   ```

   They can also view the Service Plans available using the following command:

   ```
   kubectl get clusterserviceplans -o=custom-columns=PLAN\ NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName
   ```

   It should output a list of plan names with a format similar to:

   ```
   PLAN NAME                              EXTERNAL NAME
   86064792-7ea2-467b-af93-ac9694d96d52   service-plan-name
   ...                                    ...
   ```
### Provisioning a new instance
A cluster operator can initiate the provisioning of a new instance by creating a `ServiceInstance` resource.
This is an example of a `ServiceInstance` resource:
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: cloud-queue-instance
  namespace: cloud-apps
spec:
  # References one of the previously returned services
  clusterServiceClassExternalName: cloud-provider-service
  clusterServicePlanExternalName: service-plan-name
  #####
  # Additional parameters can be added here,
  # which may be used by the service broker.
  #####
```
The following sequence diagram illustrates the steps involved in provisioning a new instance of a managed service:
![Provision a Service](/images/docs/service-catalog-provision.svg)
1. When the `ServiceInstance` resource is created, Service Catalog initiates a call to the external service broker to provision an instance of the service.
1. The service broker creates a new instance of the managed service and returns an HTTP response.
1. A cluster operator can then check the status of the instance to see if it is ready.
### Binding to a managed service
After a new instance has been provisioned, a cluster operator must bind to the managed service to get the connection credentials and service account details necessary for the application to use the service. This is done by creating a `ServiceBinding` resource.
The following is an example of a `ServiceBinding` resource:
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: cloud-queue-binding
  namespace: cloud-apps
spec:
  instanceRef:
    name: cloud-queue-instance
  #####
  # Additional information can be added here, such as a secretName or
  # service account parameters, which may be used by the service broker.
  #####
```
The following sequence diagram illustrates the steps involved in binding to a managed service instance:
![Bind to a managed service](/images/docs/service-catalog-bind.svg)
1. After the `ServiceBinding` is created, Service Catalog makes a call to the external service broker requesting the information necessary to bind with the service instance.
1. The service broker enables the application permissions/roles for the appropriate service account.
1. The service broker returns the information necessary to connect and access the managed service instance. This is provider- and service-specific, so the information returned may differ between Service Providers and their managed services.
### Mapping the connection credentials
After binding, the final step involves mapping the connection credentials and service-specific information into the application.
These pieces of information are stored in secrets that the application in the cluster can access and use to connect directly with the managed service.
<br>
![Map connection credentials](/images/docs/service-catalog-map.svg)
#### Pod configuration file
One method to perform this mapping is to use a declarative Pod configuration.
The following example describes how to map service account credentials into the application. A key called `sa-key` is stored in a volume named `provider-cloud-key`, and the application mounts this volume at `/var/secrets/provider/key.json`. The environment variable `PROVIDER_APPLICATION_CREDENTIALS` is mapped from the value of the mounted file.
```yaml
...
spec:
  volumes:
  - name: provider-cloud-key
    secret:
      secretName: sa-key
  containers:
  ...
    volumeMounts:
    - name: provider-cloud-key
      mountPath: /var/secrets/provider
    env:
    - name: PROVIDER_APPLICATION_CREDENTIALS
      value: "/var/secrets/provider/key.json"
```
The following example describes how to map secret values into application environment variables. In this example, the messaging queue topic name is mapped from a secret named `provider-queue-credentials` with a key named `topic` to the environment variable `TOPIC`.
```yaml
...
env:
- name: "TOPIC"
valueFrom:
secretKeyRef:
name: provider-queue-credentials
key: topic
```
## {{% heading "whatsnext" %}}
* If you are familiar with {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}, [install Service Catalog using Helm](/docs/tasks/service-catalog/install-service-catalog-using-helm/) into your Kubernetes cluster. Alternatively, you can [install Service Catalog using the SC tool](/docs/tasks/service-catalog/install-service-catalog-using-sc/).
* View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers).
* Explore the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) project.

View File

@ -76,7 +76,7 @@ request headers as follows:
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://git.k8s.io/design-proposals-archive/api-machinery/protobuf.md) design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.

View File

@ -8,21 +8,29 @@ card:
---
<!-- overview -->
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can
express them in `.yaml` format.
<!-- body -->
## Understanding Kubernetes objects {#kubernetes-objects}
*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these
entities to represent the state of your cluster. Specifically, they can describe:
* What containerized applications are running (and on which nodes)
* The resources available to those applications
* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's *desired state*.
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system
will constantly work to ensure that object exists. By creating an object, you're effectively
telling the Kubernetes system what you want your cluster's workload to look like; this is your
cluster's *desired state*.
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the
[Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line
interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use
the Kubernetes API directly in your own programs using one of the
[Client Libraries](/docs/reference/using-api/client-libraries/).
### Object Spec and Status
@ -48,11 +56,17 @@ the status to match your spec. If any of those instances should fail
between spec and status by making a correction--in this case, starting
a replacement instance.
For more information on the object spec, status, and metadata, see the
[Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
### Describing a Kubernetes object
When you create an object in Kubernetes, you must provide the object spec that describes its
desired state, as well as some basic information about the object (such as a name). When you use
the Kubernetes API to create the object (either directly or via `kubectl`), that API request must
include that information as JSON in the request body. **Most often, you provide the information to
`kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API
request.
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:
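The referenced manifest isn't reproduced in this view; a minimal Deployment along those
lines might look like the following sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells the Deployment to run 2 Pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```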
@ -81,7 +95,9 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to
* `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace`
* `spec` - What state you desire for the object
The precise format of the object `spec` is different for every Kubernetes object, and contains
nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/kubernetes-api/)
can help you find the spec format for all of the objects you can create using Kubernetes.
For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
for the Pod API reference.
@ -103,5 +119,3 @@ detail the structure of that `.status` field, and its content for each different
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts.

View File

@ -100,4 +100,4 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667.
## {{% heading "whatsnext" %}}
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document.

View File

@ -169,9 +169,9 @@ Disadvantages compared to imperative object configuration:
## {{% heading "whatsnext" %}}
- [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [Declarative Management of Kubernetes Objects Using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)

View File

@ -53,7 +53,7 @@ Neither contention nor changes to a LimitRange will affect already created resou
## {{% heading "whatsnext" %}}
Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for more information.
For examples on using limits, see:

View File

@ -697,7 +697,7 @@ and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)

View File

@ -23,6 +23,7 @@ of terminating one or more Pods on Nodes.
* [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
* [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
* [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
* [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework)
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)

View File

@ -11,24 +11,27 @@ weight: 20
<!-- overview -->
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
{{< glossary_tooltip text="node(s)" term_id="node" >}}.
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Often, you do not need to set any such constraints; the
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
or to co-locate Pods from two different services that communicate a lot into the same availability zone.
<!-- body -->
You can use any of the following methods to choose where Kubernetes schedules
specific Pods:
* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels) (see the sketch after this list)
* [Affinity and anti-affinity](#affinity-and-anti-affinity)
* [nodeName](#nodename) field
* [Pod topology spread constraints](#pod-topology-spread-constraints)
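As a minimal sketch of the first method, the Pod below asks to run only on nodes carrying
a `disktype: ssd` label (the label key and value are assumptions about how your nodes are
labelled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # only schedulable onto nodes labelled disktype=ssd
  nodeSelector:
    disktype: ssd
```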
## Node labels {#built-in-node-labels}
@ -124,8 +127,8 @@ For example, consider the following Pod spec:
In this example, the following rules apply:
* The node *must* have a label with the key `kubernetes.io/os` and
the value `linux`.
* The node *must* have a label with the key `topology.kubernetes.io/zone` and
the value of that label *must* be either `antarctica-east1` or `antarctica-west1`.
* The node *preferably* has a label with the key `another-node-label-key` and
the value `another-node-label-value`.
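The manifest itself isn't reproduced in this view; a `nodeAffinity` stanza expressing
those three rules might look like this sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # hard requirement: Linux nodes only
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
          # hard requirement: one of the two zones
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      preferredDuringSchedulingIgnoredDuringExecution:
      # soft preference, weighted into the node's score
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: nginx
```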
@ -170,7 +173,7 @@ For example, consider the following Pod spec:
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
If there are two possible nodes that match the
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
`label-1:key-1` label and another with the `label-2:key-2` label, the scheduler
considers the `weight` of each node and adds the weight to the other scores for
that node, and schedules the Pod onto the node with the highest final score.
@ -303,7 +306,7 @@ the Pod onto a node that is in the same zone as one or more Pods with the label
same zone currently running Pods with the `Security=S2` Pod label.
To get yourself more familiar with the examples of Pod affinity and anti-affinity,
refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
`operator` field for Pod affinity and anti-affinity.
@ -337,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
rules allow you to configure that a set of workloads should
be co-located in the same defined topology; for example, preferring to place two related
Pods onto the same node.
For example: imagine a three-node cluster. You use the cluster to run a web application
and also an in-memory cache (such as Redis). For this example, also assume that latency between
the web application and the memory cache should be as low as is practical. You could use inter-pod
affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
with the `app=store` label on a single node. This creates each cache in a
separate node.
@ -378,10 +383,10 @@ spec:
image: redis:3.2-alpine
```
The following example Deployment for the web servers creates replicas with the label `app=web-store`.
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
multiple `app=web-store` servers on a single node.
```yaml
apiVersion: apps/v1
@ -430,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes.
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |
The overall effect is that each cache instance is likely to be accessed by a single client that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
You might have other reasons to use Pod anti-affinity.
See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high
availability, using the same technique as this example.
@ -468,11 +477,21 @@ spec:
The above Pod will only run on the node `kube-01`.
## Pod topology spread constraints
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other
topology domains that you define. You might do this to improve performance, expected availability, or
overall utilization.
Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to learn more about how these work.
## {{% heading "whatsnext" %}}
* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
* Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
resource allocation decisions.
* Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).

View File

@ -83,7 +83,7 @@ of the scheduler:
## {{% heading "whatsnext" %}}
* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
* Read the [kube-scheduler config (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference
* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)

View File

@ -66,8 +66,8 @@ the signal.
The value for `memory.available` is derived from the cgroupfs instead of tools
like `free -m`. This is important because `free -m` does not work in a
container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
feature, out of resource decisions
are made local to the end user Pod part of the cgroup hierarchy as well as the
root node. This [script](/examples/admin/resource/memory-available.sh)
reproduces the same set of steps that the kubelet performs to calculate
@ -85,10 +85,15 @@ The kubelet supports the following filesystem partitions:
Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet
does not support other configurations.
Some kubelet garbage collection features are deprecated in favor of eviction:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
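As a sketch of the replacement mechanism, the eviction signals above can be configured
either as kubelet flags or in the kubelet configuration file; the threshold values here
are illustrative only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# hard thresholds: evict as soon as a signal crosses the line
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
# minimum amount to reclaim once eviction is triggered
evictionMinimumReclaim:
  nodefs.available: "500Mi"
```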
### Eviction thresholds
@ -211,7 +216,7 @@ the kubelet frees up disk space in the following order:
If the kubelet's attempts to reclaim node-level resources don't bring the eviction
signal below the threshold, the kubelet begins to evict end-user pods.
The kubelet uses the following parameters to determine the pod eviction order:
1. Whether the pod's resource usage exceeds requests
1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
@ -314,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo
{{<note>}}
The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have
`system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}.
{{</note>}}
If the kubelet can't reclaim memory before a node experiences OOM, the
@ -396,7 +401,7 @@ counted as `active_file`. If enough of these kernel block buffers are on the
active LRU list, the kubelet is liable to observe this as high resource use and
taint the node as experiencing memory pressure - triggering pod eviction.
For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need

View File

@ -15,15 +15,15 @@ is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attrac
a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching
taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also
[evaluates other parameters](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
as part of its function.
Taints and tolerations work together to ensure that pods are not scheduled
onto inappropriate nodes. One or more taints are applied to a node; this
marks that the node should not accept any pods that do not tolerate the taints.
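For instance (a sketch with a made-up key and value), after tainting a node with
`kubectl taint nodes node1 example-key=example-value:NoSchedule`, only Pods that carry a
matching toleration can be scheduled there:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # tolerates the example-key=example-value:NoSchedule taint
  tolerations:
  - key: "example-key"
    operator: "Equal"
    value: "example-value"
    effect: "NoSchedule"
```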
<!-- body -->
## Concepts
@ -267,7 +267,8 @@ This ensures that DaemonSet pods are never evicted due to these problems.
## Taint Nodes by Condition
The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
automatically creates taints with a `NoSchedule` effect for
[node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions).
The scheduler checks taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
@ -298,7 +299,7 @@ arbitrary tolerations to DaemonSets.
## {{% heading "whatsnext" %}}
* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
and how you can configure it
* Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@ -0,0 +1,570 @@
---
title: Pod Topology Spread Constraints
content_type: concept
weight: 40
---
<!-- overview -->
You can use _topology spread constraints_ to control how
{{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster
among failure-domains such as regions, zones, nodes, and other user-defined topology
domains. This can help to achieve high availability as well as efficient resource
utilization.
You can set [cluster-level constraints](#cluster-level-default-constraints) as a default,
or configure topology spread constraints for individual workloads.
<!-- body -->
## Motivation
Imagine that you have a cluster of up to twenty nodes, and you want to run a
{{< glossary_tooltip text="workload" term_id="workload" >}}
that automatically scales how many replicas it uses. There could be as few as
two Pods or as many as fifteen.
When there are only two Pods, you'd prefer not to have both of those Pods run on the
same node: you would run the risk that a single node failure takes your workload
offline.
In addition to this basic usage, there are some advanced usage examples that
enable your workloads to benefit from high availability and cluster utilization.
As you scale up and run more Pods, a different concern becomes important. Imagine
that you have three nodes running five Pods each. The nodes have enough capacity
to run that many replicas; however, the clients that interact with this workload
are split across three different datacenters (or infrastructure zones). Now you
have less concern about a single node failure, but you notice that latency is
higher than you'd like, and you are paying for network costs associated with
sending network traffic between the different zones.
You decide that under normal operation you'd prefer to have a similar number of replicas
[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
and you'd like the cluster to self-heal in the case that there is a problem.
Pod topology spread constraints offer you a declarative way to configure that.
## `topologySpreadConstraints` field
The Pod API includes a field, `spec.topologySpreadConstraints`. Here is an example:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
  - maxSkew: <integer>
    minDomains: <integer> # optional; alpha since v1.24
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
  ### other Pod fields go here
```
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
### Spread constraint definition
You can define one or multiple `topologySpreadConstraints` entries to instruct the
kube-scheduler how to place each incoming Pod in relation to the existing Pods across
your cluster. Those fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed. You must
specify this field and the number must be greater than zero. Its semantics differ
according to the value of `whenUnsatisfiable`:
- if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the
maximum permitted difference between the number of matching pods in the target
topology and the _global minimum_
(the minimum number of pods that match the label selector in a topology domain).
For example, if you have 3 zones with 2, 4 and 5 matching pods respectively,
then the global minimum is 2 and `maxSkew` is compared relative to that number.
- if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher
precedence to topologies that would help reduce the skew.
- **minDomains** indicates a minimum number of eligible domains. This field is optional.
A domain is a particular instance of a topology. An eligible domain is a domain whose
nodes match the node selector.
{{< note >}}
The `minDomains` field is an alpha field added in 1.24. You have to enable the
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in order to use it.
{{< /note >}}
- The value of `minDomains` must be greater than 0, when specified.
You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`.
- When the number of eligible domains with matching topology keys is less than `minDomains`,
Pod topology spread treats the global minimum as 0, and then the calculation of `skew` is performed.
The global minimum is the minimum number of matching Pods in an eligible domain,
or zero if the number of eligible domains is less than `minDomains`.
- When the number of eligible domains with matching topology keys equals or is greater than
`minDomains`, this value has no effect on scheduling.
- If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1.
- **topologyKey** is the key of [node labels](#node-labels). If two Nodes are labelled
with this key and have identical values for that label, the scheduler treats both
Nodes as being in the same topology. The scheduler tries to place a balanced number
of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods
that match this label selector are counted to determine the
number of Pods in their corresponding topology domain.
See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
for more details.
When a Pod defines more than one `topologySpreadConstraint`, those constraints are
combined using a logical AND operation: the kube-scheduler looks for a node for the incoming Pod
that satisfies all the configured constraints.
### Node labels
Topology spread constraints rely on node labels to identify the topology
domain(s) that each {{< glossary_tooltip text="node" term_id="node" >}} is in.
For example, a node might have labels:
```yaml
region: us-east-1
zone: us-east-1a
```
{{< note >}}
For brevity, this example doesn't use the
[well-known](/docs/reference/labels-annotations-taints/) label keys
`topology.kubernetes.io/zone` and `topology.kubernetes.io/region`. However,
those registered label keys are nonetheless recommended rather than the private
(unqualified) label keys `region` and `zone` that are used here.
You can't make a reliable assumption about the meaning of a private label key
between different contexts.
{{< /note >}}
Suppose you have a 4-node cluster with the following labels:
```
NAME STATUS ROLES AGE VERSION LABELS
node1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA
node2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA
node3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB
node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
```
Then the cluster is logically viewed as below:
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
## Consistency
You should set the same Pod topology spread constraints on all pods in a group.
Usually, if you are using a workload controller such as a Deployment, the pod template
takes care of this for you. If you mix different spread constraints then Kubernetes
follows the API definition of the field; however, the behavior is more likely to become
confusing and troubleshooting is less straightforward.
You need a mechanism to ensure that all the nodes in a topology domain (such as a
cloud provider region) are labelled consistently.
To avoid you needing to manually label nodes, most clusters automatically
populate well-known labels such as `kubernetes.io/hostname`. Check whether
your cluster supports this.
## Topology spread constraint examples
### Example: one topology spread constraint {#example-one-topologyspreadconstraint}
Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you want an incoming Pod to be evenly spread with existing Pods across zones, you
can use a manifest similar to:
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.
If the scheduler placed this incoming Pod into zone `A`, the distribution of Pods would
become `[3, 1]`. That means the actual skew is then 2 (calculated as `3 - 1`), which
violates `maxSkew: 1`. To satisfy the constraints and context for this example, the
incoming Pod can only be placed onto a node in zone `B`:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
OR
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can tweak the Pod spec to meet various kinds of requirements:
- Change `maxSkew` to a bigger value - such as `2` - so that the incoming Pod can
be placed into zone `A` as well.
- Change `topologyKey` to `node` so as to distribute the Pods evenly across nodes
instead of zones. In the above example, if `maxSkew` remains `1`, the incoming
Pod can only be placed onto the node `node4`.
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway`
  to ensure that the incoming Pod is always schedulable (assuming other scheduling
  requirements are satisfied). However, the scheduler prefers to place it into the
  topology domain that has fewer matching Pods. (Be aware that this preference is
  jointly normalized with other internal scheduling priorities, such as resource
  usage ratio; see the sketch after this list.)
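As a minimal sketch, the `ScheduleAnyway` variant of the constraint from this example differs only in the `whenUnsatisfiable` field:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: ScheduleAnyway # best effort: the Pod is scheduled even if the skew would exceed maxSkew
  labelSelector:
    matchLabels:
      foo: bar
```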
### Example: multiple topology spread constraints {#example-multiple-topologyspreadconstraints}
This builds upon the previous example. Suppose you have a 4-node cluster where 3
existing Pods labeled `foo: bar` are located on node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can combine two topology spread constraints to control the spread of Pods both
by node and by zone:
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
In this case, to match the first constraint, the incoming Pod can only be placed onto
nodes in zone `B`; to match the second constraint, it can only be
scheduled to the node `node4`. The scheduler only considers options that satisfy all
defined constraints, so the only valid placement is onto node `node4`.
### Example: conflicting topology spread constraints {#example-conflicting-topologyspreadconstraints}
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you were to apply
[`two-constraints.yaml`](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/topology-spread-constraints/two-constraints.yaml)
(the manifest from the previous example)
to **this** cluster, you would see that the Pod `mypod` stays in the `Pending` state.
This happens because, to satisfy the first constraint, the Pod `mypod` can only
be placed into zone `B`, while to satisfy the second constraint, it
can only be scheduled to node `node2`. The intersection of the two constraints returns
an empty set, and the scheduler cannot place the Pod.
To overcome this situation, you can either increase the value of `maxSkew` or modify
one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. Depending on
circumstances, you might also decide to delete an existing Pod manually - for example,
if you are troubleshooting why a bug-fix rollout is not making progress.
#### Interaction with node affinity and node selectors
If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined,
the scheduler excludes non-matching nodes from the skew calculations.
### Example: topology spread constraints with node affinity {#example-topologyspreadconstraints-with-nodeaffinity}
Suppose you have a 5-node cluster ranging across zones A to C:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
and you know that zone `C` must be excluded. In this case, you can compose a manifest
as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
Similarly, Kubernetes also respects `spec.nodeSelector`.
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
## Implicit conventions
There are some implicit conventions worth noting here:
- Only Pods in the same namespace as the incoming Pod can be matching candidates.
- The scheduler bypasses any nodes that don't have the label specified by
  `topologySpreadConstraints[*].topologyKey`. This implies that:
  1. any Pods located on those bypassed nodes do not impact the `maxSkew` calculation - in the
     conflicting constraints example above, suppose the node `node1` does not have a `zone` label;
     the 2 Pods on it are then disregarded, hence the incoming Pod is scheduled into zone `A`.
  2. the incoming Pod has no chance of being scheduled onto such nodes -
     in the above example, suppose a node `node5` has the **mistyped** label `zone-typo: zoneC`
     (and no `zone` label set). After node `node5` joins the cluster, it is bypassed and
     Pods for this workload aren't scheduled there.
- Be aware of what will happen if the incoming Pod's
`topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the
above example, if you remove the incoming Pod's labels, it can still be placed onto
nodes in zone `B`, since the constraints are still satisfied. However, after that
placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
having 2 Pods labeled as `foo: bar`, and zone `B` having 1 Pod labeled as
`foo: bar`. If this is not what you expect, update the workload's
`topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.
## Cluster-level default constraints
It is possible to set default topology spread constraints for a cluster. Default
topology spread constraints are applied to a Pod if, and only if:
- It doesn't define any constraints in its `.spec.topologySpreadConstraints`.
- It belongs to a Service, ReplicaSet, StatefulSet or ReplicationController.
Default constraints can be set as part of the `PodTopologySpread` plugin
arguments in a [scheduling profile](/docs/reference/scheduling/config/#profiles).
The constraints are specified with the same [API above](#api), except that
`labelSelector` must be empty. The selectors are calculated from the Services,
ReplicaSets, StatefulSets or ReplicationControllers that the Pod belongs to.
An example configuration might look like the following:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
pluginConfig:
- name: PodTopologySpread
args:
defaultConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
defaultingType: List
```
{{< note >}}
The [`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
is disabled by default. The Kubernetes project recommends using `PodTopologySpread`
to achieve similar behavior.
{{< /note >}}
### Built-in default constraints {#internal-default-constraints}
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
If you don't configure any cluster-level default constraints for pod topology spreading,
then kube-scheduler acts as if you specified the following default topology constraints:
```yaml
defaultConstraints:
- maxSkew: 3
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
- maxSkew: 5
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
```
Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
is disabled by default.
{{< note >}}
The `PodTopologySpread` plugin does not score the nodes that don't have
the topology keys specified in the spreading constraints. This might result
in a different default behavior compared to the legacy `SelectorSpread` plugin when
using the default topology constraints.
If your nodes are not expected to have **both** `kubernetes.io/hostname` and
`topology.kubernetes.io/zone` labels set, define your own constraints
instead of using the Kubernetes defaults.
{{< /note >}}
If you don't want to use the default Pod spreading constraints for your cluster,
you can disable those defaults by setting `defaultingType` to `List` and leaving
`defaultConstraints` empty in the `PodTopologySpread` plugin configuration:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
pluginConfig:
- name: PodTopologySpread
args:
defaultConstraints: []
defaultingType: List
```
## Comparison with podAffinity and podAntiAffinity {#comparison-with-podaffinity-podantiaffinity}
In Kubernetes, [inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
control how Pods are scheduled in relation to one another - either more packed
or more scattered.
`podAffinity`
: attracts Pods; you can try to pack any number of Pods into qualifying
topology domain(s)
`podAntiAffinity`
: repels Pods. If you set this to `requiredDuringSchedulingIgnoredDuringExecution` mode then
only a single Pod can be scheduled into a single topology domain; if you choose
`preferredDuringSchedulingIgnoredDuringExecution` then you lose the ability to enforce the
constraint.
For finer control, you can specify topology spread constraints to distribute
Pods across different topology domains - to achieve either high availability or
cost saving. This can also help with rolling updates of workloads and with scaling
out replicas smoothly.
For more context, see the
[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
section of the enhancement proposal about Pod topology spread constraints.
## Known limitations
- There's no guarantee that the constraints remain satisfied when Pods are removed. For
  example, scaling down a Deployment may result in an imbalanced Pod distribution.
  You can use a tool such as the [Descheduler](https://github.com/kubernetes-sigs/descheduler)
  to rebalance the Pod distribution.
- Pods matched on tainted nodes are respected.
See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
- The scheduler doesn't have prior knowledge of all the zones or other topology
domains that a cluster has. They are determined from the existing nodes in the
cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
because, in this case, those topology domains won't be considered until there is
at least one node in them.
You can work around this by using a cluster autoscaling tool that is aware of
Pod topology spread constraints and is also aware of the overall set of topology
domains.
## {{% heading "whatsnext" %}}
- The blog article [Introducing PodTopologySpread](/blog/2020/05/introducing-podtopologyspread/)
explains `maxSkew` in some detail, as well as covering some advanced usage examples.
- Read the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of
the API reference for Pod.

View File

@ -4,6 +4,7 @@ reviewers:
- lavalamp
title: Controlling Access to the Kubernetes API
content_type: concept
weight: 50
---
<!-- overview -->
@ -22,10 +23,11 @@ following diagram:
## Transport security
In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
By default, the Kubernetes API server listens on port 6443 on the first non-localhost network interface, protected by TLS. In a typical production Kubernetes cluster, the API serves on port 443. The port can be changed with the `--secure-port` flag, and the listening IP address with the `--bind-address` flag.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA.
to a generally recognized CA. The certificate and corresponding private key can be set by using the `--tls-cert-file` and `--tls-private-key-file` flags.
If your cluster uses a private certificate authority, you need a copy of that CA
certificate configured into your `~/.kube/config` on the client, so that you can
@ -136,34 +138,6 @@ The cluster audits the activities generated by users, by applications that use t
For more information, see [Auditing](/docs/tasks/debug/debug-cluster/audit/).
## API server ports and IPs
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default, the Kubernetes API server serves HTTP on 2 ports:
1. `localhost` port:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access
2. “Secure port”:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authorization modules run.
## {{% heading "whatsnext" %}}
Read more documentation on authentication, authorization and API access control:

View File

@ -0,0 +1,271 @@
---
title: Multi-tenancy
content_type: concept
weight: 70
---
<!-- overview -->
This page provides an overview of available configuration options and best practices for cluster multi-tenancy.
Sharing clusters saves costs and simplifies administration. However, sharing clusters also presents challenges such as security, fairness, and managing _noisy neighbors_.
Clusters can be shared in many ways. In some cases, different applications may run in the same cluster. In other cases, multiple instances of the same application may run in the same cluster, one for each end user. All these types of sharing are frequently described using the umbrella term _multi-tenancy_.
While Kubernetes does not have first-class concepts of end users or tenants, it provides several features to help manage different tenancy requirements. These are discussed below.
<!-- body -->
## Use cases
The first step to determining how to share your cluster is understanding your use case, so you can evaluate the patterns and tools available. In general, multi-tenancy in Kubernetes clusters falls into two broad categories, though many variations and hybrids are also possible.
### Multiple teams
A common form of multi-tenancy is to share a cluster between multiple teams within an organization, each of whom may operate one or more workloads. These workloads frequently need to communicate with each other, and with other workloads located on the same or different clusters.
In this scenario, members of the teams often have direct access to Kubernetes resources via tools such as `kubectl`, or indirect access through GitOps controllers or other types of release automation tools. There is often some level of trust between members of different teams, but Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly share clusters.
### Multiple customers
The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor running multiple instances of a workload for customers. This business model is so strongly associated with this deployment style that many people call it "SaaS tenancy". However, a better term might be "multi-customer tenancy", since SaaS vendors may also use other deployment models, and this deployment model can also be used outside of SaaS.
In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from their perspective and is only used by the vendor to manage the workloads. Cost optimization is frequently a critical concern, and Kubernetes policies are used to ensure that the workloads are strongly isolated from each other.
## Terminology
### Tenants
When discussing multi-tenancy in Kubernetes, there is no single definition for a "tenant". Rather, the definition of a tenant will vary depending on whether multi-team or multi-customer tenancy is being discussed.
In multi-team usage, a tenant is typically a team, which usually deploys a small number of workloads that scales with the complexity of the service. However, the definition of "team" may itself be fuzzy, as teams may be organized into higher-level divisions or subdivided into smaller teams.
By contrast, if each team deploys dedicated workloads for each new client, they are using a multi-customer model of tenancy. In this case, a "tenant" is simply a group of users who share a single workload. This may be as large as an entire company, or as small as a single team at that company.
In many cases, the same organization may use both definitions of "tenants" in different contexts. For example, a platform team may offer shared services such as security tools and databases to multiple internal “customers” and a SaaS vendor may also have multiple teams sharing a development cluster. Finally, hybrid architectures are also possible, such as a SaaS provider using a combination of per-customer workloads for sensitive data, combined with multi-tenant shared services.
{{< figure src="/images/docs/multi-tenancy.png" title="A cluster showing coexisting tenancy models" class="diagram-large" >}}
### Isolation
There are several ways to design and build multi-tenant solutions with Kubernetes. Each of these methods comes with its own set of tradeoffs that impact the isolation level, implementation effort, operational complexity, and cost of service.
A Kubernetes cluster consists of a control plane which runs Kubernetes software, and a data plane consisting of worker nodes where tenant workloads are executed as pods. Tenant isolation can be applied in both the control plane and the data plane based on organizational requirements.
The level of isolation offered is sometimes described using terms like “hard” multi-tenancy, which implies strong isolation, and “soft” multi-tenancy, which implies weaker isolation. In particular, "hard" multi-tenancy is often used to describe cases where the tenants do not trust each other, often from security and resource sharing perspectives (e.g. guarding against attacks such as data exfiltration or DoS). Since data planes typically have much larger attack surfaces, "hard" multi-tenancy often requires extra attention to isolating the data-plane, though control plane isolation also remains critical.
However, the terms "hard" and "soft" can often be confusing, as there is no single definition that will apply to all users. Rather, "hardness" or "softness" is better understood as a broad spectrum, with many different techniques that can be used to maintain different types of isolation in your clusters, based on your requirements.
In more extreme cases, it may be easier or necessary to forgo any cluster-level sharing at all and assign each tenant their dedicated cluster, possibly even running on dedicated hardware if VMs are not considered an adequate security boundary. This may be easier with managed Kubernetes clusters, where the overhead of creating and operating clusters is at least somewhat taken on by a cloud provider. The benefit of stronger tenant isolation must be evaluated against the cost and complexity of managing multiple clusters. The [Multi-cluster SIG](https://git.k8s.io/community/sig-multicluster/README.md) is responsible for addressing these types of use cases.
The remainder of this page focuses on isolation techniques used for shared Kubernetes clusters. However, even if you are considering dedicated clusters, it may be valuable to review these recommendations, as it will give you the flexibility to shift to shared clusters in the future if your needs or capabilities change.
## Control plane isolation
Control plane isolation ensures that different tenants cannot access or affect each others' Kubernetes API resources.
### Namespaces
In Kubernetes, a {{< glossary_tooltip text="Namespace" term_id="namespace" >}} provides a mechanism for isolating groups of API resources within a single cluster. This isolation has two key dimensions:
1. Object names within a namespace can overlap with names in other namespaces, similar to files in folders. This allows tenants to name their resources without having to consider what other tenants are doing.
2. Many Kubernetes security policies are scoped to namespaces. For example, RBAC Roles and Network Policies are namespace-scoped resources. Using RBAC, Users and Service Accounts can be restricted to a namespace.
In a multi-tenant environment, a Namespace helps segment a tenant's workload into a logical and distinct management unit. In fact, a common practice is to isolate every workload in its own namespace, even if multiple workloads are operated by the same tenant. This ensures that each workload has its own identity and can be configured with an appropriate security policy.
The namespace isolation model requires configuration of several other Kubernetes resources, networking plugins, and adherence to security best practices to properly isolate tenant workloads. These considerations are discussed below.
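For example, a workload's namespace might carry a tenant label that later policies (such as network policies) can select on; the namespace name and label key shown here are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-billing        # one workload, one namespace
  labels:
    tenant: tenant-a            # illustrative label for tenant-scoped policies
```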
### Access controls
The most important type of isolation for the control plane is authorization. If teams or their workloads can access or modify each others' API resources, they can change or disable all other types of policies, thereby negating any protection those policies may offer. As a result, it is critical to ensure that each tenant has the appropriate access to only the namespaces they need, and no more. This is known as the "Principle of Least Privilege."
Role-based access control (RBAC) is commonly used to enforce authorization in the Kubernetes control plane, for both users and workloads (service accounts). [Roles](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [role bindings](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) are Kubernetes objects that are used at a namespace level to enforce access control in your application; similar objects exist for authorizing access to cluster-level objects, though these are less useful for multi-tenant clusters.
In a multi-team environment, RBAC must be used to restrict tenants' access to the appropriate namespaces, and ensure that cluster-wide resources can only be accessed or modified by privileged users such as cluster administrators.
If a policy ends up granting a user more permissions than they need, this is likely a signal that the namespace containing the affected resources should be refactored into finer-grained namespaces. Namespace management tools may simplify the management of these finer-grained namespaces by applying common RBAC policies to different namespaces, while still allowing fine-grained policies where necessary.
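As a sketch, a tenant team could be confined to its namespace with a Role and RoleBinding similar to the following (the namespace and group names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-developer
  namespace: tenant-a           # permissions apply only within this namespace
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-developers
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-devs           # illustrative group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-developer
  apiGroup: rbac.authorization.k8s.io
```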
### Quotas
Kubernetes workloads consume node resources, like CPU and memory. In a multi-tenant environment, you can use
[Resource Quotas](/docs/concepts/policy/resource-quotas/) to manage resource usage of tenant workloads.
For the multiple teams use case, where tenants have access to the Kubernetes API, you can use resource quotas
to limit the number of API resources (for example: the number of Pods, or the number of ConfigMaps)
that a tenant can create. Limits on object count ensure fairness and aim to avoid _noisy neighbor_ issues from
affecting other tenants that share a control plane.
Resource quotas are namespaced objects. By mapping tenants to namespaces, cluster admins can use quotas to ensure that a tenant cannot monopolize a cluster's resources or overwhelm its control plane. Namespace management tools simplify the administration of quotas. In addition, while Kubernetes quotas only apply within a single namespace, some namespace management tools allow groups of namespaces to share quotas, giving administrators far more flexibility with less effort than built-in quotas.
Quotas prevent a single tenant from consuming more than their allocated share of resources, hence minimizing the “noisy neighbor” issue, where one tenant negatively impacts the performance of other tenants' workloads.
When you apply a quota to a namespace, Kubernetes requires you to also specify resource requests and limits for each container. Limits are the upper bound for the amount of resources that a container can consume. Containers that attempt to consume resources that exceed the configured limits will either be throttled or killed, based on the resource type. When resource requests are set lower than limits, each container is guaranteed the requested amount but there may still be some potential for impact across workloads.
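A minimal per-tenant quota sketch covering both compute resources and object counts (all values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"           # aggregate CPU requests across the namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"                  # object-count limits protect the control plane
    configmaps: "100"
```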
Quotas cannot protect against all kinds of resource sharing, such as network traffic. Node isolation (described below) may be a better solution for this problem.
## Data Plane Isolation
Data plane isolation ensures that pods and workloads for different tenants are sufficiently isolated.
### Network isolation
By default, all pods in a Kubernetes cluster are allowed to communicate with each other, and all network traffic is unencrypted. This can lead to security vulnerabilities where traffic is accidentally or maliciously sent to an unintended destination, or is intercepted by a workload on a compromised node.
Pod-to-pod communication can be controlled using [Network Policies](/docs/concepts/services-networking/network-policies/), which restrict communication between pods using namespace labels or IP address ranges. In a multi-tenant environment where strict network isolation between tenants is required, it is recommended to start with a default policy that denies communication between pods, along with another rule that allows all pods to query the DNS server for name resolution. With such a default policy in place, you can begin adding more permissive rules that allow for communication within a namespace. This scheme can be further refined as required. Note that this only applies to pods within a single control plane; pods that belong to different virtual control planes cannot talk to each other via Kubernetes networking.
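A sketch of such a default-deny posture plus a DNS exception, assuming the automatic `kubernetes.io/metadata.name` namespace label and the common (but not guaranteed) `k8s-app: kube-dns` label on the cluster DNS pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```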
Namespace management tools may simplify the creation of default or common network policies. In addition, some of these tools allow you to enforce a consistent set of namespace labels across your cluster, ensuring that they are a trusted basis for your policies.
{{< warning >}}
Network policies require a [CNI plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored.
{{< /warning >}}
More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. These higher-level policies can make it easier to manage namespace-based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users.
### Storage isolation
Kubernetes offers several types of volumes that can be used as persistent storage for workloads. For security and data-isolation, [dynamic volume provisioning](/docs/concepts/storage/dynamic-provisioning/) is recommended and volume types that use node resources should be avoided.
[StorageClasses](/docs/concepts/storage/storage-classes/) allow you to describe custom "classes" of storage offered by your cluster, based on quality-of-service levels, backup policies, or custom policies determined by the cluster administrators.
Pods can request storage using a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/). A PersistentVolumeClaim is a namespaced resource, which enables isolating portions of the storage system and dedicating it to tenants within the shared Kubernetes cluster. However, it is important to note that a PersistentVolume is a cluster-wide resource and has a lifecycle independent of workloads and namespaces.
For example, you can configure a separate StorageClass for each tenant and use this to strengthen isolation.
If a StorageClass is shared, you should set a [reclaim policy of `Delete`](/docs/concepts/storage/storage-classes/#reclaim-policy)
to ensure that a PersistentVolume cannot be reused across different namespaces.
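For instance, a per-tenant class might look like this sketch (the provisioner is illustrative; use the CSI driver your cluster actually offers):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-a-standard
provisioner: ebs.csi.aws.com    # illustrative CSI driver
reclaimPolicy: Delete           # the volume is deleted with its claim, never reused
volumeBindingMode: WaitForFirstConsumer
```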
### Sandboxing containers
{{% thirdparty-content %}}
Kubernetes pods are composed of one or more containers that execute on worker nodes. Containers utilize OS-level virtualization and hence offer a weaker isolation boundary than virtual machines that utilize hardware-based virtualization.
In a shared environment, unpatched vulnerabilities in the application and system layers can be exploited by attackers for container breakouts and remote code execution that allow access to host resources. In some applications, like a Content Management System (CMS), customers may be allowed the ability to upload and execute untrusted scripts or code. In either case, mechanisms to further isolate and protect workloads using strong isolation are desirable.
Sandboxing provides a way to isolate workloads running in a shared cluster. It typically involves running each pod in a separate execution environment such as a virtual machine or a userspace kernel. Sandboxing is often recommended when you are running untrusted code, where workloads are assumed to be malicious. Part of the reason this type of isolation is necessary is because containers are processes running on a shared kernel; they mount file systems like `/sys` and `/proc` from the underlying host, making them less secure than an application that runs on a virtual machine which has its own kernel. While controls such as seccomp, AppArmor, and SELinux can be used to strengthen the security of containers, it is hard to apply a universal set of rules to all workloads running in a shared cluster. Running workloads in a sandbox environment helps to insulate the host from container escapes, where an attacker exploits a vulnerability to gain access to the host system and all the processes/files running on that host.
Virtual machines and userspace kernels are 2 popular approaches to sandboxing. The following sandboxing implementations are available:
* [gVisor](https://gvisor.dev/) intercepts syscalls from containers and runs them through a userspace kernel, written in Go, with limited access to the underlying host.
* [Kata Containers](https://katacontainers.io/) is an OCI compliant runtime that allows you to run containers in a VM. The hardware virtualization available in Kata offers an added layer of security for containers running untrusted code.
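Sandboxed runtimes are typically selected through a RuntimeClass. A sketch assuming a gVisor (`runsc`) handler has already been configured in the nodes' container runtime:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                  # must match a handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor      # run this Pod's containers inside the sandbox
  containers:
  - name: app
    image: nginx                # illustrative image
```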
### Node Isolation
Node isolation is another technique that you can use to isolate tenant workloads from each other. With node isolation, a set of nodes is dedicated to running pods from a particular tenant and co-mingling of tenant pods is prohibited. This configuration reduces the noisy neighbor issue, as all pods running on a node will belong to a single tenant. The risk of information disclosure is slightly lower with node isolation because an attacker that manages to escape from a container will only have access to the containers and volumes mounted to that node.
Although workloads from different tenants are running on different nodes, it is important to be aware that the kubelet and (unless using virtual control planes) the API service are still shared services. A skilled attacker could use the permissions assigned to the kubelet or other pods running on the node to move laterally within the cluster and gain access to tenant workloads running on other nodes. If this is a major concern, consider implementing compensating controls such as seccomp, AppArmor or SELinux or explore using sandboxed containers or creating separate clusters for each tenant.
Node isolation is a little easier to reason about from a billing standpoint than sandboxing containers since you can charge back per node rather than per pod. It also has fewer compatibility and performance issues and may be easier to implement than sandboxing containers. For example, nodes for each tenant can be configured with taints so that only pods with the corresponding toleration can run on them. A mutating webhook could then be used to automatically add tolerations and node affinities to pods deployed into tenant namespaces so that they run on a specific set of nodes designated for that tenant.
Node isolation can be implemented using [pod node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet).
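As a sketch, a tenant's dedicated nodes could be tainted and labeled, with that tenant's pods carrying the matching toleration and node selector (a mutating webhook could inject these fields automatically; all names are illustrative):

```yaml
# Assumed node preparation, for example:
#   kubectl taint nodes <node> tenant=tenant-a:NoSchedule
#   kubectl label nodes <node> tenant=tenant-a
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app
  namespace: tenant-a
spec:
  nodeSelector:
    tenant: tenant-a            # only run on tenant-a's nodes
  tolerations:
  - key: "tenant"
    operator: "Equal"
    value: "tenant-a"
    effect: "NoSchedule"        # tolerate the taint that keeps other tenants off
  containers:
  - name: app
    image: nginx                # illustrative image
```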
## Additional Considerations
This section discusses other Kubernetes constructs and patterns that are relevant for multi-tenancy.
### API Priority and Fairness
[API priority and fairness](/docs/concepts/cluster-administration/flow-control/) is a Kubernetes feature that allows you to classify and prioritize requests made to the Kubernetes API. When an application calls the Kubernetes API, the API server evaluates the priority assigned to the request; requests with a higher priority are fulfilled before those with a lower priority. When contention is high, lower-priority requests can be queued until the server is less busy, or rejected.
Using API priority and fairness will not be very common in SaaS environments unless you are allowing customers to run applications that interface with the Kubernetes API, for example, a controller.
### Quality-of-Service (QoS) {#qos}
When you're running a SaaS application, you may want the ability to offer different Quality-of-Service (QoS) tiers of service to different tenants. For example, you may have a freemium service that comes with fewer performance guarantees and features, and a for-fee service tier with specific performance guarantees. Fortunately, there are several Kubernetes constructs that can help you accomplish this within a shared cluster, including network QoS, storage classes, and pod priority and preemption. The idea with each of these is to provide tenants with the quality of service that they paid for. Let's start by looking at networking QoS.
Typically, all pods on a node share a network interface. Without network QoS, some pods may consume an unfair share of the available bandwidth at the expense of other pods. The Kubernetes [bandwidth plugin](https://www.cni.dev/plugins/current/meta/bandwidth/) creates an [extended resource](/docs/concepts/configuration/manage-resources-containers/#extended-resources) for networking that allows you to use Kubernetes resources constructs, i.e. requests/limits, to apply rate limits to pods by using Linux tc queues. Be aware that the plugin is considered experimental as per the [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping) documentation and should be thoroughly tested before use in production environments.
For storage QoS, you will likely want to create different storage classes or profiles with different performance characteristics. Each storage profile can be associated with a different tier of service that is optimized for different workloads such as IO, redundancy, or throughput. Additional logic might be necessary to allow the tenant to associate the appropriate storage profile with their workload.
Finally, there's [pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/), where you can assign priority values to pods. When scheduling pods, the scheduler will try evicting pods with lower priority when there are insufficient resources to schedule pods that are assigned a higher priority. If you have a use case where tenants have different service tiers in a shared cluster, e.g. free and paid, you may want to give higher priority to certain tiers using this feature.
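A sketch of two service tiers expressed as PriorityClasses (the names and values are illustrative; higher values win during preemption); pods then reference a tier via `priorityClassName`:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tier-paid
value: 1000000                  # higher-priority tier
globalDefault: false
description: "Priority for paid-tier tenant workloads."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tier-free
value: 1000
globalDefault: false
description: "Priority for free-tier tenant workloads."
```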
### DNS
Kubernetes clusters include a Domain Name System (DNS) service to provide translations from names to IP addresses, for all Services and Pods. By default, the Kubernetes DNS service allows lookups across all namespaces in the cluster.
In multi-tenant environments where tenants can access pods and other Kubernetes resources, or where
stronger isolation is required, it may be necessary to prevent pods from looking up services in other
Namespaces.
You can restrict cross-namespace DNS lookups by configuring security rules for the DNS service.
For example, CoreDNS (the default DNS service for Kubernetes) can leverage Kubernetes metadata
to restrict queries to Pods and Services within a namespace. For more information, read an
[example](https://github.com/coredns/policy#kubernetes-metadata-multi-tenancy-policy) of configuring
this within the CoreDNS documentation.
When a [Virtual Control Plane per tenant](#virtual-control-plane-per-tenant) model is used, a DNS service must be configured per tenant or a multi-tenant DNS service must be used. Here is an example of a [customized version of CoreDNS](https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/tenant-dns.md) that supports multiple tenants.
### Operators
[Operators](/docs/concepts/extend-kubernetes/operator/) are Kubernetes controllers that manage applications. Operators can simplify the management of multiple instances of an application, like a database service, which makes them a common building block in the multi-consumer (SaaS) multi-tenancy use case.
Operators used in a multi-tenant environment should follow a stricter set of guidelines. Specifically, the Operator should:
* Support creating resources within different tenant namespaces, rather than just in the namespace in which the Operator is deployed.
* Ensure that the Pods are configured with resource requests and limits, to ensure scheduling and fairness.
* Support configuration of Pods for data-plane isolation techniques such as node isolation and sandboxed containers.
## Implementations
{{% thirdparty-content %}}
There are two primary ways to share a Kubernetes cluster for multi-tenancy: using Namespaces (i.e. a Namespace per tenant) or by virtualizing the control plane (i.e. Virtual control plane per tenant).
In both cases, data plane isolation, and management of additional considerations such as API Priority and Fairness, is also recommended.
Namespace isolation is well-supported by Kubernetes, has a negligible resource cost, and provides mechanisms to allow tenants to interact appropriately, such as by allowing service-to-service communication. However, it can be difficult to configure, and doesn't apply to Kubernetes resources that can't be namespaced, such as Custom Resource Definitions, Storage Classes, and Webhooks.
Control plane virtualization allows for isolation of non-namespaced resources at the cost of somewhat higher resource usage and more difficult cross-tenant sharing. It is a good option when namespace isolation is insufficient but dedicated clusters are undesirable, due to the high cost of maintaining them (especially on-prem) or due to their higher overhead and lack of resource sharing. However, even within a virtualized control plane, you will likely see benefits by using namespaces as well.
The two options are discussed in more detail in the following sections:
### Namespace per tenant
As previously mentioned, you should consider isolating each workload in its own namespace, even if you are using dedicated clusters or virtualized control planes. This ensures that each workload only has access to its own resources, such as ConfigMaps and Secrets, and allows you to tailor dedicated security policies for each workload. In addition, it is a best practice to give each namespace a name that is unique across your entire fleet (that is, even if namespaces are in separate clusters), as this gives you the flexibility to switch between dedicated and shared clusters in the future, or to use multi-cluster tooling such as service meshes.
Conversely, there are also advantages to assigning namespaces at the tenant level, not just the workload level, since there are often policies that apply to all workloads owned by a single tenant. However, this raises its own problems. Firstly, this makes it difficult or impossible to customize policies to individual workloads, and secondly, it may be challenging to come up with a single level of "tenancy" that should be given a namespace. For example, an organization may have divisions, teams, and subteams - which should be assigned a namespace?
To solve this, Kubernetes provides the [Hierarchical Namespace Controller (HNC)](https://github.com/kubernetes-sigs/hierarchical-namespaces), which allows you to organize your namespaces into hierarchies, and share certain policies and resources between them. It also helps you manage namespace labels, namespace lifecycles, and delegated management, and share resource quotas across related namespaces. These capabilities can be useful in both multi-team and multi-customer scenarios.
Other projects that provide similar capabilities and aid in managing namespaced resources are listed below:
#### Multi-team tenancy
* [Capsule](https://github.com/clastix/capsule)
* [Kiosk](https://github.com/loft-sh/kiosk)
#### Multi-customer tenancy
* [Kubeplus](https://github.com/cloud-ark/kubeplus)
#### Policy engines
Policy engines provide features to validate and generate tenant configurations:
* [Kyverno](https://kyverno.io/)
* [OPA/Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
### Virtual control plane per tenant
Another form of control-plane isolation is to use Kubernetes extensions to provide each tenant a virtual control-plane that enables segmentation of cluster-wide API resources. [Data plane isolation](#data-plane-isolation) techniques can be used with this model to securely manage worker nodes across tenants.
The virtual control plane based multi-tenancy model extends namespace-based multi-tenancy by providing each tenant with dedicated control plane components, and hence complete control over cluster-wide resources and add-on services. Worker nodes are shared across all tenants, and are managed by a Kubernetes cluster that is normally inaccessible to tenants. This cluster is often referred to as a _super-cluster_ (or sometimes as a _host-cluster_). Since a tenant's control plane is not directly associated with underlying compute resources, it is referred to as a _virtual control plane_.
A virtual control plane typically consists of the Kubernetes API server, the controller manager, and the etcd data store. It interacts with the super-cluster via a metadata synchronization controller, which coordinates changes across tenant control planes and the control plane of the super-cluster.
By using per-tenant dedicated control planes, most of the isolation problems due to sharing one API server among all tenants are solved. Examples include noisy neighbors in the control plane, uncontrollable blast radius of policy misconfigurations, and conflicts between cluster-scoped objects such as webhooks and CRDs. Hence, the virtual control plane model is particularly suitable for cases where each tenant requires access to a Kubernetes API server and expects full cluster manageability.
The improved isolation comes at the cost of running and maintaining an individual virtual control plane per tenant. In addition, per-tenant control planes do not solve isolation problems in the data plane, such as node-level noisy neighbors or security threats. These must still be addressed separately.
The Kubernetes [Cluster API - Nested (CAPN)](https://github.com/kubernetes-sigs/cluster-api-provider-nested/tree/main/virtualcluster) project provides an implementation of virtual control planes.
#### Other implementations
* [Kamaji](https://github.com/clastix/kamaji)
* [vcluster](https://github.com/loft-sh/vcluster)

View File

@ -37,8 +37,8 @@ To use this mechanism, your cluster must enforce Pod Security admission.
### Built-in Pod Security admission enforcement
In Kubernetes v{{< skew currentVersion >}}, the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is a beta feature and is enabled by default. You must have this feature gate enabled.
From Kubernetes v1.23, the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is a beta feature and is enabled by default.
This page is part of the documentation for Kubernetes v{{< skew currentVersion >}}.
If you are running a different version of Kubernetes, consult the documentation for that release.
### Alternative: installing the `PodSecurity` admission webhook {#webhook}
@ -102,7 +102,7 @@ For each mode, there are two labels that determine the policy used:
pod-security.kubernetes.io/<MODE>: <LEVEL>
# Optional: per-mode version label that can be used to pin the policy to the
# version that shipped with a given Kubernetes minor version (for example v{{< skew latestVersion >}}).
# version that shipped with a given Kubernetes minor version (for example v{{< skew currentVersion >}}).
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# VERSION must be a valid Kubernetes minor version, or `latest`.

View File

@ -214,6 +214,9 @@ controller selects policies according to the following criteria:
2. If the pod must be defaulted or mutated, the first PodSecurityPolicy
(ordered by name) to allow the pod is selected.
When a Pod is validated against a PodSecurityPolicy, [a `kubernetes.io/psp` annotation](/docs/reference/labels-annotations-taints/#kubernetes-io-psp)
is added to the Pod, with the name of the PodSecurityPolicy as the annotation value.
{{< note >}}
During update operations (during which mutations to pod specs are disallowed)
only non-mutating PodSecurityPolicies are used to validate the pod.
@ -245,8 +248,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
prevents the creation of privileged pods.
This is a policy that prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
@ -255,7 +257,7 @@ The name of a PodSecurityPolicy object must be a valid
And create it with kubectl:
```shell
kubectl-admin create -f example-psp.yaml
kubectl-admin create -f https://k8s.io/examples/policy/example-psp.yaml
```
Now, as the unprivileged user, try to create a simple pod:
@ -284,6 +286,11 @@ pod's service account nor `fake-user` have permission to use the new policy:
```shell
kubectl-user auth can-i use podsecuritypolicy/example
```
The output is similar to this:
```
no
```
@ -300,14 +307,27 @@ kubectl-admin create role psp:unprivileged \
--verb=use \
--resource=podsecuritypolicy \
--resource-name=example
role "psp:unprivileged" created
```
```
role "psp:unprivileged" created
```
```shell
kubectl-admin create rolebinding fake-user:psp:unprivileged \
--role=psp:unprivileged \
--serviceaccount=psp-example:fake-user
rolebinding "fake-user:psp:unprivileged" created
```
```
rolebinding "fake-user:psp:unprivileged" created
```
```shell
kubectl-user auth can-i use podsecuritypolicy/example
```
```
yes
```
@ -332,7 +352,20 @@ The output is similar to this
pod "pause" created
```
It works as expected! But any attempts to create a privileged pod should still
It works as expected! You can verify that the pod was validated against the
newly created PodSecurityPolicy:
```shell
kubectl-user get pod pause -o yaml | grep kubernetes.io/psp
```
The output is similar to this
```
kubernetes.io/psp: example
```
But any attempts to create a privileged pod should still
be denied:
```shell

View File

@ -57,7 +57,7 @@ fail validation.
<tr>
<td style="white-space: nowrap">HostProcess</td>
<td>
<p>Windows pods offer the ability to run <a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess containers</a> which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. HostProcess pods are an <strong>alpha</strong> feature as of Kubernetes <strong>v1.22</strong>.</p>
<p>Windows pods offer the ability to run <a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess containers</a> which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. {{< feature-state for_k8s_version="v1.23" state="beta" >}}</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.securityContext.windowsOptions.hostProcess</code></li>
@ -462,11 +462,11 @@ of individual policies are not defined here.
{{% thirdparty-content %}}
Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as:
- [Kubewarden](https://github.com/kubewarden)
- [Kyverno](https://kyverno.io/policies/pod-security/)
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
## FAQ
### Why isn't there a profile between privileged and baseline?
@ -493,9 +493,9 @@ built-in [Pod Security Admission Controller](/docs/concepts/security/pod-securit
### What profiles should I apply to my Windows Pods?
Windows in Kubernetes has some limitations and differentiators from standard Linux-based
workloads. Specifically, many of the Pod SecurityContext fields [have no effect on
Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As
such, no standardized Pod Security profiles currently exist.
workloads. Specifically, many of the Pod SecurityContext fields
[have no effect on Windows](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext).
As such, no standardized Pod Security profiles currently exist.
If you apply the restricted profile for a Windows pod, this **may** have an impact on the pod
at runtime. The restricted profile requires enforcing Linux-specific restrictions (such as seccomp
@ -504,7 +504,9 @@ these Linux-specific values, then the Windows pod should still work normally wit
profile. However, the lack of enforcement means that there is no additional restriction, for Pods
that use Windows containers, compared to the baseline profile.
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy. Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies, so any HostProcess pod should be considered privileged.
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy.
Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies,
so any HostProcess pod should be considered privileged.
### What about sandboxed Pods?
@ -518,3 +520,4 @@ kernel. This allows for workloads requiring heightened permissions to still be i
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single profile is recommended for all sandboxed workloads.

View File

@ -4,6 +4,7 @@ title: Role Based Access Control Good Practices
description: >
Principles and practices for good RBAC design for cluster operators.
content_type: concept
weight: 60
---
<!-- overview -->
@ -14,7 +15,8 @@ execute their roles. It is important to ensure that, when designing permissions
users, the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.
The good practices laid out here should be read in conjunction with the general [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
The good practices laid out here should be read in conjunction with the general
[RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
<!-- body -->
@ -33,7 +35,8 @@ some general rules that can be applied are :
not just to all object types that currently exist in the cluster, but also to all future object types
which are created in the future.
- Administrators should not use `cluster-admin` accounts except where specifically needed.
Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
Providing a low privileged account with
[impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
can avoid accidental modification of cluster resources.
- Avoid adding users to the `system:masters` group. Any user who is a member of this group
bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be
@ -43,15 +46,17 @@ some general rules that can be applied are :
### Minimize distribution of privileged tokens
Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions (for example, any of the rights listed under
[privilege escalation risks](#privilege-escalation-risks)).
Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions
(for example, any of the rights listed under [privilege escalation risks](#privilege-escalation-risks)).
In cases where a workload requires powerful permissions, consider the following practices:
- Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run
are necessary and are run with least privilege to limit the blast radius of container escapes.
- Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using
[Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/), [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) to ensure
pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
[Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/),
[NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or
[PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
to ensure pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard.
### Hardening
@ -106,7 +111,7 @@ with the ability to create suitably secure and isolated Pods, you should enforce
You can use [Pod Security admission](/docs/concepts/security/pod-security-admission/)
or other (third party) mechanisms to implement that enforcement.
You can also use the deprecated [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) mechanism
You can also use the deprecated [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) mechanism
to restrict users' abilities to create privileged Pods (N.B. PodSecurityPolicy is scheduled for removal
in version 1.25).
@ -116,7 +121,9 @@ Secrets they would not have through RBAC directly.
### Persistent volume creation
As noted in the [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host. Where access to persistent storage is required trusted administrators should create
As noted in the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/#volumes-and-file-systems)
documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host.
Where access to persistent storage is required, trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.
### Access to `proxy` subresource of Nodes
@ -171,8 +178,11 @@ objects to create a denial of service condition either based on the size or numb
specifically relevant in multi-tenant clusters if semi-trusted or untrusted users
are allowed limited access to a system.
One option for mitigation of this issue would be to use [resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota)
One option for mitigation of this issue would be to use
[resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota)
to limit the quantity of objects which can be created.
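For example, a sketch of an object-count quota in a hypothetical `team-a` namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: team-a             # hypothetical namespace
spec:
  hard:
    secrets: "20"
    configmaps: "20"
    services: "10"
    count/deployments.apps: "20"
```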
## {{% heading "whatsnext" %}}
* To learn more about RBAC, see the [RBAC documentation](/docs/reference/access-authn-authz/rbac/).

View File

@ -6,7 +6,7 @@ reviewers:
- perithompson
title: Security For Windows Nodes
content_type: concept
weight: 40
---
<!-- overview -->
@ -22,34 +22,41 @@ storage (as compared to using tmpfs / in-memory filesystems on Linux). As a clus
operator, you should take both of the following additional measures:
1. Use file ACLs to secure the Secrets' file location.
1. Apply volume-level encryption using
[BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server).
## Container users
[RunAsUsername](/docs/tasks/configure-pod-container/configure-runasusername)
can be specified for Windows Pods or containers to execute the container
processes as a specific user. This is roughly equivalent to
[RunAsUser](/docs/concepts/security/pod-security-policy/#users-and-groups).
Windows containers offer two default user accounts, ContainerUser and ContainerAdministrator.
The differences between these two user accounts are covered in
[When to use ContainerAdmin and ContainerUser user accounts](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts)
within Microsoft's _Secure Windows containers_ documentation.
Local users can be added to container images during the container build process.
{{< note >}}
* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) based images run as
`ContainerUser` by default
* [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) based images run as
`ContainerAdministrator` by default
{{< /note >}}
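For example, a minimal sketch of a Pod that forces its Windows container to run as the
low-privileged built-in account (the image tag is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-example
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"
  containers:
  - name: server
    image: mcr.microsoft.com/windows/servercore:ltsc2022  # illustrative tag
  nodeSelector:
    kubernetes.io/os: windows
```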
Windows containers can also run as Active Directory identities by utilizing
[Group Managed Service Accounts](/docs/tasks/configure-pod-container/configure-gmsa/).
## Pod-level security isolation
Linux-specific pod security context mechanisms (such as SELinux, AppArmor, Seccomp, or custom
POSIX capabilities) are not supported on Windows nodes.
Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext)
on Windows.
Instead, [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod)
can be used on Windows to perform many of the tasks performed by privileged containers on Linux.
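A minimal sketch of such a HostProcess Pod (the image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true  # HostProcess pods must share the host's network namespace
  containers:
  - name: shell
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022  # illustrative image
    command: ["powershell.exe", "-Command", "Get-Service"]
  nodeSelector:
    kubernetes.io/os: windows
```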

View File

@ -37,7 +37,7 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:
* Kubernetes 1.20 or later
For information about using dual-stack services with earlier
Kubernetes versions, refer to the documentation for that version
@ -95,7 +95,7 @@ set the `.spec.ipFamilyPolicy` field to one of the following values:
If you would like to define which IP family to use for single stack or define the order of IP
families for dual-stack, you can choose the address families by setting an optional field,
`.spec.ipFamilies`, on the Service.
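For example, a sketch of a Service that prefers dual-stack and lists IPv6 as the primary family:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
```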
{{< note >}}
The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a
@ -133,11 +133,11 @@ These examples demonstrate the behavior of various dual-stack Service configurat
address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned
IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from
`.spec.ClusterIPs`.
* For the `.spec.ClusterIP` field, the control plane records the IP address that is from the
same address family as the first service cluster IP range.
* On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list
one address.
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
behaves the same as `PreferDualStack`.
@ -174,7 +174,7 @@ dual-stack.)
kind: Service
metadata:
labels:
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: 10.0.197.123
@ -188,7 +188,7 @@ dual-stack.)
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: MyApp
type: ClusterIP
status:
loadBalancer: {}
@ -214,7 +214,7 @@ dual-stack.)
kind: Service
metadata:
labels:
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: None
@ -228,7 +228,7 @@ dual-stack.)
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: MyApp
```
#### Switching Services between single-stack and dual-stack

View File

@ -46,6 +46,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
is an [Istio](https://istio.io/) based ingress controller.
* The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* [Kusk Gateway](https://kusk.kubeshop.io/) is an OpenAPI-driven ingress controller based on [Envoy](https://www.envoyproxy.io).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.

View File

@ -258,7 +258,7 @@ standardized label to target a specific namespace.
## What you can't do with network policies (at least, not yet)
As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the
NetworkPolicy API, but you might be able to implement workarounds using Operating System
components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies
(Ingress controllers, Service Mesh implementations) or admission controllers. In case you are
new to network security in Kubernetes, it's worth noting that the following User Stories cannot
(yet) be implemented using the NetworkPolicy API.
- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
- Anything TLS related (use a service mesh or ingress controller for this).

View File

@ -43,7 +43,7 @@ metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@ -60,12 +60,6 @@ considered.
When the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`ServiceInternalTrafficPolicy` is enabled, `spec.internalTrafficPolicy` defaults to "Cluster".
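To make the field placement concrete, a minimal sketch of a Service that keeps internal
traffic node-local:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  internalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```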
## Constraints
* Service Internal Traffic Policy is not used when `externalTrafficPolicy` is set
to `Local` on a Service. It is possible to use both features in the same cluster
on different Services, just not on the same Service.
## {{% heading "whatsnext" %}}
* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints)

View File

@ -75,7 +75,7 @@ The name of a Service object must be a valid
[RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).
For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app.kubernetes.io/name=MyApp`:
```yaml
apiVersion: v1
@ -84,7 +84,7 @@ metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@ -92,7 +92,7 @@ spec:
```
This specification creates a new Service object named "my-service", which
targets TCP port 9376 on any Pod with the `app.kubernetes.io/name=MyApp` label.
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
which is used by the Service proxies
@ -126,7 +126,7 @@ spec:
ports:
- containerPort: 80
name: http-web-svc
---
apiVersion: v1
kind: Service
@ -144,9 +144,9 @@ spec:
This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
your Services. For example, you can change the port numbers that Pods expose
in the next version of your backend software, without breaking clients.
The default protocol for Services is TCP; you can also use any other
@ -159,7 +159,7 @@ Each port definition can have the same `protocol`, or a different one.
### Services without selectors
Services most commonly abstract access to Kubernetes Pods thanks to the selector,
but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
including ones that run outside the cluster. For example:
* You want to have an external database cluster in production, but in your
@ -204,7 +204,7 @@ subsets:
The name of the Endpoints object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
When you create an [Endpoints](/docs/reference/kubernetes-api/service-resources/endpoints-v1/)
object for a Service, you set the name of the new object to be the same as that
of the Service.
@ -222,10 +222,10 @@ In the example above, traffic is routed to the single endpoint defined in
the YAML: `192.0.2.42:9376` (TCP).
{{< note >}}
The Kubernetes API server does not allow proxying to endpoints that are not mapped to
pods. Actions such as `kubectl proxy <service-name>` where the service has no
selector will fail due to this constraint. This prevents the Kubernetes API server
from being used as a proxy to endpoints the caller may not be authorized to access.
{{< /note >}}
An ExternalName Service is a special case of Service that does not have
@ -289,7 +289,7 @@ There are a few reasons for using proxying for Services:
Later in this page you can read about how various kube-proxy implementations work. Overall,
you should note that, when running `kube-proxy`, kernel level rules may be
modified (for example, iptables rules might get created), which won't get cleaned up,
in some cases until you reboot. Thus, running kube-proxy is something that should
only be done by an administrator who understands the consequences of having a
low level, privileged network proxying service on a computer. Although the `kube-proxy`
@ -299,9 +299,14 @@ thus is only available to use as-is.
### Configuration
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy
effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
the standard kernel kube-proxy implementation will not work.
Likewise, if you have an operating system which doesn't support `netsh`,
it will not run in Windows userspace mode.
### User space proxy mode {#proxy-mode-userspace}
@ -418,7 +423,7 @@ metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
@ -492,7 +497,11 @@ variables and DNS.
### Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))
that are compatible with Docker Engine's
"_[legacy container links](https://docs.docker.com/network/links/)_" feature.
For example, the Service `redis-master`, which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment variables:
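```shell
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```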
@ -604,8 +613,10 @@ The default is `ClusterIP`.
to use the `ExternalName` type.
{{< /note >}}
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service.
Ingress is not a Service type, but it acts as the entry point for your cluster.
It lets you consolidate your routing rules into a single resource as it can expose multiple
services under the same IP address.
### Type NodePort {#type-nodeport}
@ -620,9 +631,14 @@ field of the
[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
to particular IP block(s).
This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`)
to specify IP address ranges that kube-proxy should consider as local to this node.
For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag,
kube-proxy only selects the loopback interface for NodePort Services.
The default for `--nodeport-addresses` is an empty list.
This means that kube-proxy should consider all available network interfaces for NodePort.
(That's also compatible with earlier Kubernetes releases).
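The same restriction can be expressed in the kube-proxy configuration file; a minimal sketch:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Equivalent to starting kube-proxy with --nodeport-addresses=127.0.0.0/8
nodePortAddresses:
- 127.0.0.0/8
```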
If you want a specific port number, you can specify a value in the `nodePort`
field. The control plane will either allocate you that port or report that
@ -650,7 +666,7 @@ metadata:
spec:
type: NodePort
selector:
app.kubernetes.io/name: MyApp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
@ -676,7 +692,7 @@ metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@ -689,7 +705,8 @@ status:
- ip: 192.0.2.127
```
Traffic from the external load balancer is directed at the backend Pods.
The cloud provider decides how it is load balanced.
Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
@ -704,7 +721,11 @@ to create a static type public IP address resource. This public IP address resou
be in the same resource group of the other automatically created resources of the cluster.
For example, `MC_myResourceGroup_myAKSCluster_eastus`.
Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the
`securityGroupName` in the cloud provider configuration file.
For information about troubleshooting `CreatingLoadBalancerFailed` permission issues, see
[Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip)
or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
{{< /note >}}
@ -744,13 +765,13 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all
`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
Once set, it cannot be changed.
The value of `spec.loadBalancerClass` must be a label-style identifier,
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
Unprefixed names are reserved for end-users.
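For example, a sketch of a Service that opts into a hypothetical custom implementation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-vip  # hypothetical class name
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
```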
@ -760,7 +781,8 @@ Unprefixed names are reserved for end-users.
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.
In a split-horizon DNS environment you would need two Services to be able to route both external
and internal traffic to your endpoints.
To set an internal load balancer, add one of the following annotations to your Service
depending on the cloud Service provider you're using.
@ -862,6 +884,17 @@ metadata:
[...]
```
{{% /tab %}}
{{% tab name="OCI" %}}
```yaml
[...]
metadata:
name: my-service
annotations:
service.beta.kubernetes.io/oci-load-balancer-internal: "true"
[...]
```
{{% /tab %}}
{{< /tabs >}}
@ -914,7 +947,9 @@ you can use the following annotations:
In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
From Kubernetes v1.9 onwards you can use
[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
```bash
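# One way to list the available policy names (assumes a configured aws CLI):
aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
```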
@ -970,14 +1005,17 @@ specifies the logical hierarchy you created for your Amazon S3 bucket.
```yaml
metadata:
  name: my-service
  annotations:
    # Specifies whether access logs are enabled for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    # The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
    # The name of the Amazon S3 bucket where the access logs are stored
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
    # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
```
#### Connection Draining on AWS
@ -986,7 +1024,8 @@ Connection draining for Classic ELBs can be managed with the annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
to the value of `"true"`. The annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
also be used to set maximum time, in seconds, to keep the existing connections open before
deregistering the instances.
```yaml
metadata:
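  name: my-service
  annotations:
    # Values below are illustrative; the draining timeout is in seconds
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
```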
@ -1004,50 +1043,56 @@ There are other annotations to manage Classic Elastic Load Balancers that are de
```yaml
metadata:
  name: my-service
  annotations:
    # The time, in seconds, that the connection is allowed to be idle (no data has been sent
    # over the connection) before it is closed by the load balancer
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"

    # Specifies whether cross-zone load balancing is enabled for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

    # A comma-separated list of key-value pairs which will be recorded as
    # additional tags in the ELB.
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"

    # The number of successive successful health checks required for a backend to
    # be considered healthy for traffic. Defaults to 2, must be between 2 and 10
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""

    # The number of unsuccessful health checks required for a backend to be
    # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"

    # The approximate interval, in seconds, between health checks of an
    # individual instance. Defaults to 10, must be between 5 and 300
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"

    # The amount of time, in seconds, during which no response means a failed
    # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
    # value. Defaults to 5, must be between 2 and 60
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"

    # A list of existing security groups to be configured on the ELB created. Unlike the annotation
    # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other
    # security groups previously assigned to the ELB and also overrides the creation
    # of a uniquely generated security group for this ELB.
    # The first security group ID on this list is used as a source to permit incoming traffic to
    # target worker nodes (service traffic and health checks).
    # If multiple ELBs are configured with the same security group ID, only a single permit line
    # will be added to the worker node security groups; that means if you delete any
    # of those ELBs it will remove the single permit line and block access for all ELBs that
    # shared the same security group ID. This can cause a cross-service outage if not used properly.
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"

    # A list of additional security groups to be added to the created ELB. This leaves the uniquely
    # generated security group in place, which ensures that every ELB has a unique security group ID
    # and a matching permit line to allow traffic to the target worker nodes
    # (service traffic and health checks).
    # Security groups defined here can be shared between services.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"

    # A comma separated list of key-value pairs which are used
    # to select the target nodes for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
```
#### Network Load Balancer support on AWS {#aws-nlb-support}
@ -1064,7 +1109,8 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet
```
{{< note >}}
NLB only works with certain instance classes; see the
[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
on Elastic Load Balancing for a list of supported instance types.
{{< /note >}}
@ -1171,7 +1217,8 @@ spec:
```
{{< note >}}
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address.
ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
is intended to specify a canonical DNS name. To hardcode an IP address, consider using
[headless Services](#headless-services).
{{< /note >}}
@ -1185,9 +1232,13 @@ can start its Pods, add appropriate selectors or endpoints, and change the
Service's `type`.
{{< warning >}}
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS.
If you use ExternalName then the hostname used by clients inside your cluster is different from
the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses.
HTTP requests will have a `Host:` header that the origin server does not recognize;
TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
{{< /warning >}}
{{< note >}}
@ -1212,7 +1263,7 @@ metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
@ -1346,12 +1397,15 @@ through a load-balancer, though in those cases the client IP does get altered.
#### IPVS
iptables operations slow down dramatically in large-scale clusters, for example with 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables.
So you can achieve performance consistency with a large number of Services from IPVS-based kube-proxy.
Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms
(least conns, locality, weighted, persistence).
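A minimal sketch of a kube-proxy configuration that switches to IPVS mode and selects a scheduler:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"  # least connections; leave empty for the default (round-robin)
```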
## API Object
Service is a top-level resource in the Kubernetes REST API. You can find more details
about the [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
## Supported protocols {#protocol-support}
@ -1377,7 +1431,8 @@ provider offering this facility. (Most do not).
##### Support for multihomed SCTP associations {#caveat-sctp-multihomed}
{{< warning >}}
The support of multihomed SCTP associations requires that the CNI plugin can support the
assignment of multiple interfaces and IP addresses to a Pod.
NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.
{{< /warning >}}

View File

@ -116,7 +116,7 @@ can enable this behavior by:
is enabled on the API server.
An administrator can mark a specific `StorageClass` as default by adding the
`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
When a default `StorageClass` exists in a cluster and a user creates a
`PersistentVolumeClaim` with `storageClassName` unspecified, the
`DefaultStorageClass` admission controller automatically adds the `storageClassName` field
pointing to the default storage class.
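For example, a sketch of a StorageClass marked as the default (the provisioner is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs  # illustrative provisioner
```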

View File

@ -76,8 +76,8 @@ is managed by kubelet, or injecting different data.
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
This feature requires the `CSIInlineVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to be enabled. It is enabled by default starting with Kubernetes 1.16.
{{< note >}}
CSI ephemeral volumes are only supported by a subset of CSI drivers.
@ -136,8 +136,11 @@ should not be exposed to users through the use of inline ephemeral volumes.
Cluster administrators who need to restrict the CSI drivers that are
allowed to be used as inline volumes within a Pod spec may do so by:
- Removing `Ephemeral` from `volumeLifecycleModes` in the CSIDriver spec, which prevents the
driver from being used as an inline ephemeral volume.
- Using an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
to restrict how this driver is used.
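For example, a sketch of a CSIDriver object that disallows inline use (the driver name is
hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: inline.csi.example.com  # hypothetical driver name
spec:
  volumeLifecycleModes:
  - Persistent  # `Ephemeral` omitted, so inline ephemeral use is rejected
```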
### Generic ephemeral volumes
@ -207,7 +210,7 @@ because then the scheduler is free to choose a suitable node for
the Pod. With immediate binding, the scheduler is forced to select a node that has
access to the volume once it is available.
In terms of [resource ownership](/docs/concepts/architecture/garbage-collection/#owners-dependents),
a Pod that has generic ephemeral storage is the owner of the PersistentVolumeClaim(s)
that provide that ephemeral storage. When the Pod is deleted,
the Kubernetes garbage collector deletes the PVC, which then usually
@ -252,10 +255,11 @@ Enabling the GenericEphemeralVolume feature allows users to create
PVCs indirectly if they can create Pods, even if they do not have
permission to create PVCs directly. Cluster administrators must be
aware of this. If this does not fit their security model, they should
use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
that rejects objects like Pods that have a generic ephemeral volume.
The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota)
still applies, so even if users are allowed to use this new mechanism, they cannot use
it to circumvent other policies.
## {{% heading "whatsnext" %}}
@ -266,11 +270,13 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
### CSI ephemeral volumes
- For more information on the design, see the
[Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md).
- For more information on further development of this feature, see the
[enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596).
### Generic ephemeral volumes
- For more information on the design, see the
[Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md).

Some files were not shown because too many files have changed in this diff.