Merge pull request #12 from kubernetes/master

merge from upstream
Yong Zhang 2021-03-28 10:00:54 +08:00 committed by GitHub
commit 08e6a4bd12
606 changed files with 17592 additions and 11178 deletions

View File

@@ -10,6 +10,7 @@ aliases:
- kbarnard10
- mrbobbytables
- onlydole
- sftim
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
@@ -175,14 +176,20 @@ aliases:
# zhangxiaoyu-zidif
sig-docs-pt-owners: # Admins for Portuguese content
- femrtnz
- jailton
- jcjesus
- devlware
- jhonmike
- rikatz
- yagonobre
sig-docs-pt-reviews: # PR reviews for Portuguese content
- femrtnz
- jailton
- jcjesus
- devlware
- jhonmike
- rikatz
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem
- ngtuna

View File

@@ -30,6 +30,17 @@ The recommended way to run a local copy of the kubernetes.io website
> If you prefer to run the website without using **Docker**, you can follow the instructions in the section [Running kubernetes.io locally with Hugo](#levantando-kubernetesio-en-local-con-hugo).
**`Note`: the following covers how to build a Docker image and start the server.**
The Kubernetes website uses the Docsy Hugo theme. If you have not done so already, install the **submodules** and the other development-tool dependencies by running the following `git` command:
```bash
# pull the repository's submodules
git submodule update --init --recursive --depth 1
```
If `git` reports a very large number of new changes in the project, the simplest way to fix it is to close and reopen the project in your editor. Submodules are detected automatically by `git`, but the plugins used by editors may have trouble loading them.
Once you have Docker [set up on your machine](https://www.docker.com/get-started), you can build the `kubernetes-hugo` Docker image locally by running the following command from the repository root:
```bash
@@ -73,4 +84,4 @@ Participation in the Kubernetes community is governed by the [Code of
Kubernetes is possible thanks to community participation, and documentation is vital to making the project accessible.
We greatly appreciate your contributions to our website and our documentation.

View File

@@ -144,6 +144,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
This solution works for both macOS Catalina and macOS Mojave.
### "Out of Memory" error
If you run the `make container-serve` command and it returns the following error:
```
make: *** [container-serve] Error 137
```
Check how much memory is available to the container runtime. In the case of Docker Desktop for macOS, open the "Preferences..." -> "Resources..." menu and try to make more memory available.
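If you want to see how much memory the Docker daemon currently has before raising it, a quick hedged check from a terminal (output format may vary with your Docker version):
```bash
# Total memory available to the Docker daemon / Desktop VM, in bytes.
docker info --format '{{.MemTotal}}'
# After allocating more memory in Preferences -> Resources, run the site build again.
make container-serve
```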
# Community, discussion, contribution, and support
Learn more about the Kubernetes SIG Docs community and its meetings on the [community page](http://kubernetes.io/community/).

View File

@@ -869,3 +869,22 @@ body.td-documentation {
display: none;
}
}
// nav-tabs and tab-content
.nav-tabs {
border-bottom: none !important;
}
.td-content .tab-content .highlight {
margin: 0;
}
.tab-pane {
border-radius: 0.25rem;
padding: 0 16px 16px;
border: 1px solid #dee2e6;
&:first-of-type.active {
border-top-left-radius: 0;
}
}

View File

@@ -91,7 +91,7 @@ blog = "/:section/:year/:month/:day/:slug/"
[outputs]
home = [ "HTML", "RSS", "HEADERS" ]
page = [ "HTML"]
section = [ "HTML", "print" ]
# Add a "text/netlify" media type for auto-generating the _headers file
[mediaTypes]
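This hunk adds a "print" output format to section pages. If you want to confirm how the merged configuration resolves locally, one hedged way to inspect it (assuming Hugo is installed and you run this from the repository root):
```bash
# Print the resolved site configuration and show the output-format settings.
hugo config | grep -i -A 5 "outputs"
```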

View File

@@ -9,7 +9,7 @@ content_type: concept
This section covers different options for setting up and running Kubernetes.
Different Kubernetes solutions come with different requirements: ease of maintenance, security, control, available resources, and the expertise needed to operate and manage a cluster. The following diagram shows the possible abstractions of a Kubernetes cluster and whether an abstraction is managed by you or by a provider.
You can deploy a Kubernetes cluster on a local machine, in the cloud, or in an on-premises data center, or choose a managed Kubernetes cluster. You can also use custom solutions across a wide range of cloud providers or bare-metal environments.

View File

@@ -20,21 +20,14 @@ For example, if we want to require scheduling on a node that is in the us-centra
```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["us-central1-a"]
```
@@ -44,21 +37,14 @@ Preferred rules mean that if nodes match the rules, they will be chosen first, a
```
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["us-central1-a"]
```
@@ -67,21 +53,14 @@ Node anti-affinity can be achieved by using negative operators. So for instance
```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: NotIn
          values: ["us-central1-a"]
```
@@ -99,7 +78,7 @@ The kubectl command allows you to set taints on nodes, for example:
```
kubectl taint nodes node1 key=value:NoSchedule
```
creates a taint that marks the node as unschedulable by any pods that do not have a toleration for a taint with key `key`, value `value`, and effect `NoSchedule`. (The other taint effects are `PreferNoSchedule`, which is the preferred version of `NoSchedule`, and `NoExecute`, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this:
@@ -107,15 +86,11 @@ creates a taint that marks the node as unschedulable by any pods that do not hav
```
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
```
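As a side note, the same `kubectl taint` syntax with a trailing dash removes a taint again; a small sketch using the example node above:
```bash
# Add the taint described above, then remove it.
kubectl taint nodes node1 key=value:NoSchedule
kubectl taint nodes node1 key=value:NoSchedule-
```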
@@ -138,21 +113,13 @@ Let's look at an example. Say you have front-ends in service S1, and they comm
```
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: service
          operator: In
          values: ["S1"]
      topologyKey: failure-domain.beta.kubernetes.io/zone
```
@@ -172,25 +139,15 @@ Here we have a Pod where we specify the schedulerName field:
```
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - name: nginx
    image: nginx:1.10
```
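One hedged way to confirm which scheduler actually handled a Pod like this is to look at the Pod's spec and its scheduling events (names match the example above):
```bash
# Show the scheduler requested by the Pod.
kubectl get pod nginx -o jsonpath='{.spec.schedulerName}'
# The "Scheduled" event in the Pod description records which scheduler bound it.
kubectl describe pod nginx | grep -i scheduled
```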

View File

@@ -176,7 +176,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti
[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
## Outage recovery

View File

@@ -17,7 +17,7 @@ Let's dive into the key features of this release:
## Simplified Kubernetes Cluster Management with kubeadm in GA
Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configuring the core Kubernetes components in a best-practice manner, providing a secure yet easy joining flow for new nodes, and supporting easy upgrades. What's notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
## Container Storage Interface (CSI) Goes GA

View File

@@ -66,6 +66,7 @@ Vagrant.configure("2") do |config|
end
end
end
end
```
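A missing `end` like the one fixed here is easy to catch before provisioning; a hedged check, assuming a reasonably recent Vagrant:
```bash
# Validate the Vagrantfile syntax and configuration without starting any VMs.
vagrant validate
```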
### Step 2: Create an Ansible playbook for Kubernetes master.

View File

@@ -114,7 +114,7 @@ will have strictly better performance and less overhead. However, we encourage y
to explore all the options from the [CNCF landscape] in case another would be an
even better fit for your environment.
[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category
### What should I look out for when changing CRI implementations?

[Three binary image files added, not shown: 181 KiB, 152 KiB, and 43 KiB.]

View File

@@ -0,0 +1,63 @@
---
layout: blog
title: "The Evolution of Kubernetes Dashboard"
date: 2021-03-09
slug: the-evolution-of-kubernetes-dashboard
---
Authors: Marcin Maciaszczyk, Kubermatic & Sebastian Florek, Kubermatic
In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it's due time to celebrate “our baby” with a short recap.
## How It All Began
The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit from using the UI is to be able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users that are new to the Kubernetes ecosystem.
The very [first commit](https://github.com/kubernetes/dashboard/commit/5861187fa807ac1cc2d9b2ac786afeced065076c) to the Kubernetes Dashboard was made by Filip Grządkowski from Google on 16th October 2015, just a few months after the initial commit to the Kubernetes repository. Our initial commits go back to November 2015 ([Sebastian committed on 16 November 2015](https://github.com/kubernetes/dashboard/commit/09e65b6bb08c49b926253de3621a73da05e400fd); [Marcin committed on 23 November 2015](https://github.com/kubernetes/dashboard/commit/1da4b1c25ef040818072c734f71333f9b4733f55)). Since that time, we've become regular contributors to the project. For the next two years, we worked closely with the Googlers, eventually becoming main project maintainers ourselves.
{{< figure src="first-ui.png" caption="The First Version of the User Interface" >}}
{{< figure src="along-the-way-ui.png" caption="Prototype of the New User Interface" >}}
{{< figure src="current-ui.png" caption="The Current User Interface" >}}
As you can see, the initial look and feel of the project were completely different from the current one. We have changed the design multiple times. The same has happened with the code itself.
## Growing Up - The Big Migration
At [the beginning of 2018](https://github.com/kubernetes/dashboard/pull/2727), we reached a point where AngularJS was getting closer to the end of its life, while the new Angular versions were published quite often. A lot of the libraries and the modules that we were using were following the trend. That forced us to spend a lot of the time rewriting the frontend part of the project to make it work with newer technologies.
The migration came with many benefits like being able to refactor a lot of the code, introduce design patterns, reduce code complexity, and benefit from the new modules. However, you can imagine that the scale of the migration was huge. Luckily, there were a number of contributions from the community helping us with the resource support, new Kubernetes version support, i18n, and much more. After many long days and nights, we finally released the [first beta version](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0-beta1) in July 2019, followed by the [2.0 release](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0) in April 2020 — our baby had grown up.
## Where Are We Standing in 2021?
Due to limited resources, unfortunately, we were not able to offer extensive support for many different Kubernetes versions. So, we've decided to always try and support the latest Kubernetes version available at the time of the Kubernetes Dashboard release. The latest release, [Dashboard v2.2.0](https://github.com/kubernetes/dashboard/releases/tag/v2.2.0), provides support for Kubernetes v1.20.
On top of that, we put a great deal of effort into [improving resource support](https://github.com/kubernetes/dashboard/issues/5232). Meanwhile, we offer support for most of the Kubernetes resources. Also, the Kubernetes Dashboard supports multiple languages: English, German, French, Japanese, Korean, Chinese (Traditional, Simplified, Traditional Hong Kong). Persian and Russian localizations are currently in progress. Moreover, we are working on support for 3rd party themes and the design of the app in general. As you can see, quite a lot of things are going on.
Luckily, we do have regular contributors with domain knowledge who are taking care of the project, updating the Helm charts, translations, Go modules, and more. But as always, there could be many more hands on deck. So if you are thinking about contributing to Kubernetes, keep us in mind ;)
## What's Next
The Kubernetes Dashboard has been growing and prospering for more than 5 years now. It provides the community with an intuitive Web UI, thereby decreasing the complexity of Kubernetes and increasing its accessibility to new community members. We are proud of what the project has achieved so far, but this is by far not the end. These are our priorities for the future:
* Keep providing support for the new Kubernetes versions
* Keep improving the support for the existing resources
* Keep working on auth system improvements
* [Rewrite the API to use gRPC and shared informers](https://github.com/kubernetes/dashboard/pull/5449): This will allow us to improve the performance of the application but, most importantly, to support live updates coming from the Kubernetes project. It is one of the most requested features from the community.
* Split the application into two containers, one with the UI and the second with the API running inside.
## The Kubernetes Dashboard in Numbers
* Initial commit made on October 16, 2015
* Over 100 million pulls from Dockerhub since the v2 release
* 8 supported languages and the next 2 in progress
* Over 3360 closed PRs
* Over 2260 closed issues
* 100% coverage of the supported core Kubernetes resources
* Over 9000 stars on GitHub
* Over 237 000 lines of code
## Join Us
As mentioned earlier, we are currently looking for more people to help us further develop and grow the project. We are open to contributions in multiple areas, i.e., [issues with help wanted label](https://github.com/kubernetes/dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22). Please feel free to reach out via GitHub or the #sig-ui channel in the [Kubernetes Slack](https://slack.k8s.io/).

View File

@@ -11,20 +11,20 @@ aliases:
<!-- overview -->
This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
<!-- body -->
## Node to Control Plane
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in the `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
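To make the two mechanisms above concrete, here is a hedged sketch of reaching the apiserver through the `kubernetes` Service using the injected service account credentials (the paths are the standard in-pod mount points):
```bash
# Inspect the virtual IP of the kubernetes Service.
kubectl get service kubernetes -n default

# From inside a pod: call the apiserver with the injected CA bundle and bearer token.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "${SA_DIR}/ca.crt" \
     -H "Authorization: Bearer $(cat ${SA_DIR}/token)" \
     https://kubernetes.default.svc/version
```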
The control plane components also communicate with the cluster apiserver over the secure port.
@@ -42,7 +42,7 @@ The connections from the apiserver to the kubelet are used for:
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks.
To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.
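A minimal sketch of what that looks like on the apiserver command line (the certificate paths are hypothetical):
```bash
# Verify the kubelet serving certificate against this CA bundle,
# and present a client certificate to the kubelet in return.
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```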
@@ -53,20 +53,20 @@ Finally, [Kubelet authentication and/or authorization](/docs/reference/command-l
### apiserver to nodes, pods, and services
The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted or public networks.
### SSH tunnels
Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running.
SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel.
### Konnectivity service
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections.
After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections.
Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster.

View File

@@ -17,7 +17,7 @@ and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}
Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have only one node.
The [components](/docs/concepts/overview/components/#node-components) on a node include the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a
@@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
1. The kubelet on a node self-registers to the control plane
2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
@@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest:
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the `metadata.name`
field of the Node. If the node is healthy (i.e. all necessary services are running),
then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
{{< note >}}
@@ -67,6 +67,16 @@ delete the Node object to stop that health checking.
The name of a Node object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
### Node name uniqueness
The [name](/docs/concepts/overview/working-with-objects/names#names) identifies a Node. Two Nodes
cannot have the same name at the same time. Kubernetes also assumes that a resource with the same
name is the same object. In case of a Node, it is implicitly assumed that an instance using the
same name will have the same state (e.g. network settings, root disk contents). This may lead to
inconsistencies if an instance was modified without changing its name. If the Node needs to be
replaced or updated significantly, the existing Node object needs to be removed from API server
first and re-added after the update.
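In practice, replacing a node under the same name usually means draining it, deleting the stale Node object, and letting the rebuilt machine's kubelet re-register; a hedged sketch with a hypothetical node name:
```bash
# Move workloads off the node and remove the old Node object.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node node-1
# After the machine is re-provisioned, the kubelet (with --register-node=true) re-registers it.
```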
### Self-registration of Nodes
When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
@@ -96,14 +106,14 @@ You can create and modify Node objects using
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
You can modify Node objects regardless of the setting of `--register-node`.
For example, you can set labels on an existing Node or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control
scheduling. For example, you can constrain a Pod to only be eligible to run on
a subset of the available nodes.
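A small hedged example of that pattern, with hypothetical label and pod names:
```bash
# Label a node, then constrain a pod to nodes carrying that label.
kubectl label nodes node-1 disktype=ssd
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF
```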
Marking a node as unschedulable prevents the scheduler from placing new pods onto
that Node but does not affect existing Pods on the Node. This is useful as a
preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
@@ -179,14 +189,14 @@ The node condition is represented as a JSON object. For example, the following s
]
```
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server and frees up their
names.
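To see the conditions the node controller is acting on, a couple of hedged read-only checks (the node name is hypothetical):
```bash
# Summarize node health, then query just the Ready condition.
kubectl describe node node-1 | grep -A 10 "Conditions:"
kubectl get node node-1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```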
The node lifecycle controller automatically creates
@@ -199,7 +209,7 @@ for more details.
### Capacity and Allocatable {#capacity}
Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a
@@ -225,18 +235,20 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with
the cloud provider's list of available machines. When running in a cloud
environment and whenever a node is unhealthy, the node controller asks the cloud
provider if the VM for that node is still available. If not, the node
controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
responsible for:
- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
  becomes unreachable, as the node controller stops receiving heartbeats for some
  reason such as the node being down.
- Evicting all the pods from the node using graceful termination if
  the node continues to be unreachable. The default timeouts are 40s to start
  reporting ConditionUnknown and 5m after that to start evicting pods.
The node controller checks the state of each node every `--node-monitor-period` seconds.
#### Heartbeats
@@ -252,13 +264,14 @@ of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
- The kubelet updates the `NodeStatus` either when there is change in status
  or if there has been no update for a configured interval. The default interval
  for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
  timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
  (the default update interval). Lease updates occur independently from the
  `NodeStatus` updates. If the Lease update fails, the kubelet retries with
  exponential backoff starting at 200 milliseconds and capped at 7 seconds.
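The Lease objects used for these heartbeats live in the `kube-node-lease` namespace; a quick hedged way to observe them (node name hypothetical):
```bash
# List node leases, then inspect one node's lease and its renewTime.
kubectl get leases -n kube-node-lease
kubectl get lease node-1 -n kube-node-lease -o yaml
```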
#### Reliability
@@ -269,23 +282,25 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
  (default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
  `--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
  (default 0.01) per second.
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then there is only one availability zone (i.e. the whole cluster).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
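As with the monitoring intervals, these thresholds are kube-controller-manager flags; a sketch using the defaults quoted in the text plus a commonly cited default for the normal eviction rate (again, verify against your version):
```bash
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50
```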
The node controller is also responsible for evicting pods running on nodes with
@@ -303,8 +318,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
Node objects track information about the Node's resource capacity: for example, the amount
of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
@@ -338,7 +353,7 @@ for more information.
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
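A hedged sketch of the corresponding KubeletConfiguration fields (field names as of the v1.20/v1.21 API; merge them into your existing kubelet config file rather than using this verbatim):
```bash
cat <<'EOF' > kubelet-graceful-shutdown-snippet.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
EOF
```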
@@ -359,4 +374,3 @@ For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).

View File

@@ -45,7 +45,7 @@ Before choosing a guide, here are some considerations:
## Securing a cluster
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.

View File

@@ -4,249 +4,6 @@ content_type: concept
weight: 20
---
<!-- overview -->
When using client certificate authentication, you can generate certificates manually through `easyrsa`, `openssl` or `cfssl`. To learn how to generate certificates for your cluster, see [Certificates](/docs/tasks/administer-cluster/certificates/).
<!-- body -->
### easyrsa
**easyrsa** can manually generate certificates for your cluster.
1. Download, unpack, and initialize the patched version of easyrsa3.
```bash
curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
```
1. Generate a new certificate authority (CA). `--batch` sets automatic mode;
`--req-cn` specifies the Common Name (CN) for the CA's new root certificate.
```bash
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
```
1. Generate server certificate and key.
The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
that is specified as the `--service-cluster-ip-range` argument for both the API server and
the controller manager component. The argument `--days` is used to set the number of days
after which the certificate expires.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```bash
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
"IP:${MASTER_CLUSTER_IP},"\
"DNS:kubernetes,"\
"DNS:kubernetes.default,"\
"DNS:kubernetes.default.svc,"\
"DNS:kubernetes.default.svc.cluster,"\
"DNS:kubernetes.default.svc.cluster.local" \
--days=10000 \
build-server-full server nopass
```
1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.
1. Fill in and add the following parameters into the API server start parameters:
```bash
--client-ca-file=/yourdirectory/ca.crt
--tls-cert-file=/yourdirectory/server.crt
--tls-private-key-file=/yourdirectory/server.key
```
### openssl
**openssl** can manually generate certificates for your cluster.
1. Generate a ca.key with 2048bit:
```bash
openssl genrsa -out ca.key 2048
```
1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time):
```bash
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
```
1. Generate a server.key with 2048bit:
```bash
openssl genrsa -out server.key 2048
```
1. Create a config file for generating a Certificate Signing Request (CSR).
Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
with real values before saving this to a file (e.g. `csr.conf`).
Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```ini
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = <country>
ST = <state>
L = <city>
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
```
1. Generate the certificate signing request based on the config file:
```bash
openssl req -new -key server.key -out server.csr -config csr.conf
```
1. Generate the server certificate using the ca.key, ca.crt and server.csr:
```bash
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
```
1. View the certificate:
```bash
openssl x509 -noout -text -in ./server.crt
```
Finally, add the same parameters into the API server start parameters.
### cfssl
**cfssl** is another tool for certificate generation.
1. Download, unpack and prepare the command line tools as shown below.
Note that you may need to adapt the sample commands based on the hardware
architecture and cfssl version you are using.
```bash
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
```
1. Create a directory to hold the artifacts and initialize cfssl:
```bash
mkdir cert
cd cert
../cfssl print-defaults config > config.json
../cfssl print-defaults csr > csr.json
```
1. Create a JSON config file for generating the CA file, for example, `ca-config.json`:
```json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
```
1. Create a JSON config file for CA certificate signing request (CSR), for example,
`ca-csr.json`. Be sure to replace the values marked with angle brackets with
real values you want to use.
```json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "<country>",
    "ST": "<state>",
    "L": "<city>",
    "O": "<organization>",
    "OU": "<organization unit>"
  }]
}
```
1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):
../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
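If you want to sanity-check the CA before using it to sign the server certificate, you can inspect `ca.pem` with openssl (an optional step; the file name comes from the command above):

```bash
# Optional: show the CA certificate's subject and validity period
openssl x509 -in ca.pem -noout -subject -dates
```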
1. Create a JSON config file for generating keys and certificates for the API
server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster
IP for the API server as described in the previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"<MASTER_IP>",
"<MASTER_CLUSTER_IP>",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "<country>",
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>"
}]
}
1. Generate the key and certificate for the API server, which are by default
saved into the files `server-key.pem` and `server.pem` respectively:
../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
--config=ca-config.json -profile=kubernetes \
server-csr.json | ../cfssljson -bare server
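As with the openssl flow, you can optionally verify the resulting server certificate before configuring the API server with it; for example:

```bash
# Optional: print the server certificate, including its subject alternative names
openssl x509 -noout -text -in server.pem
```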
## Distributing Self-Signed CA Certificate
A client node may refuse to recognize a self-signed CA certificate as valid.
For a non-production deployment, or for a deployment that runs behind a company
firewall, you can distribute a self-signed CA certificate to all clients and
refresh the local list of valid certificates.
On each client, perform the following operations:
```bash
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
```
```
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
```
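The two commands above apply to Debian- or Ubuntu-style clients. If your clients are Fedora-, CentOS-, or RHEL-based (an assumption about your environment, not part of the original steps), the rough equivalent is:

```bash
# RHEL-family clients manage the trust store with update-ca-trust
sudo cp ca.crt /etc/pki/ca-trust/source/anchors/kubernetes.crt
sudo update-ca-trust extract
```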
## Certificates API
You can use the `certificates.k8s.io` API to provision
x509 certificates to use for authentication as documented
[here](/docs/tasks/tls/managing-tls-in-a-cluster).
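As a minimal sketch of what that flow looks like from the command line (the CSR name `my-svc.my-namespace` is hypothetical; the linked task page describes the full procedure):

```bash
# List pending CertificateSigningRequests, then approve one by name
kubectl get csr
kubectl certificate approve my-svc.my-namespace
```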

View File

@ -59,7 +59,7 @@ kube-apiserver \
```
Alternatively, you can enable the v1alpha1 version of the API group
with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`.
The command-line flag `--enable-priority-and-fairness=false` will disable the
API Priority and Fairness feature, even if other flags have enabled it.
@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health.
histogram vector of queue lengths for the queues, broken down by
the labels `priority_level` and `flow_schema`, as sampled by the
enqueued requests. Each request that gets queued contributes one
sample to its histogram, reporting the length of the queue immediately
after the request was added. Note that this produces different
statistics than an unbiased survey would.
{{< note >}}

View File

@ -45,9 +45,9 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, run:
```shell
kubectl label pods -l app=nginx tier=fe
@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled
```
This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe".
To see the pods you labeled, run:
```shell
kubectl get pods -l app=nginx -L tier
@ -411,7 +411,7 @@ and
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
deployment.apps/my-nginx scaled
```
To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
```shell
kubectl edit deployment/my-nginx

View File

@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes:
- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
- is only used to reach services
1. A Proxy/Load-balancer in front of apiserver(s):

View File

@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data as base64-encoded strings.
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache

View File

@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources.
This is different from the `memory` and `cpu` resources.
{{< /note >}}
CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute
resources are measurable quantities that can be requested, allocated, and
consumed. They are distinct from
[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and
@ -554,7 +553,7 @@ extender.
### Consuming extended resources
Users can consume extended resources in Pod specs like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.

View File

@ -81,9 +81,9 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the
- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag changes value.
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`.
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
@ -96,7 +96,7 @@ You should avoid using the `:latest` tag when deploying containers in production
{{< /note >}}
{{< note >}}
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
{{< /note >}}
## Using kubectl

View File

@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s
```
The `DATA` column shows the number of data items stored in the Secret.
In this case, `0` means we have created an empty Secret.
### Service account token Secrets
@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A Secret can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
@ -718,7 +718,7 @@ spec:
#### Consuming Secret Values from environment variables
Inside a container that consumes a secret in the environment variables, the secret keys appear as
normal environment variables containing the base64 decoded values of the secret data.
This is the result of commands executed inside the container from the example above:

View File

@ -40,6 +40,7 @@ as are any environment variables specified statically in the Docker image.
### Cluster information
A list of all services that were running when a Container was created is available to that Container as environment variables.
This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services.
Those environment variables match the syntax of Docker links.
For a service named *foo* that maps to a Container named *bar*,

View File

@ -50,10 +50,11 @@ A more detailed description of the termination behavior can be found in
### Hook handler implementations
Containers can access a hook by implementing and registering a handler for that hook.
There are three types of hook handlers that can be implemented for Containers:
* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
Resources consumed by the command are counted against the Container.
* TCP - Opens a TCP connection against a specific port on the Container.
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
### Hook handler execution

View File

@ -49,16 +49,32 @@ Instead, specify a meaningful tag such as `v1.42.0`.
## Updating images
When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}},
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}, Pod, or other
object that includes a Pod template, then by default the pull policy of all
containers in that pod will be set to `IfNotPresent` if it is not explicitly
specified. This policy causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an
image if it already exists.
If you would like to always force a pull, you can do one of the following:
- set the `imagePullPolicy` of the container to `Always`.
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
  Kubernetes will set the policy to `Always`.
- omit the `imagePullPolicy` and the tag for the image to use.
- enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
{{< note >}}
The value of `imagePullPolicy` of the container is always set when the object is
first _created_, and is not updated if the image's tag later changes.
For example, if you create a Deployment with an image whose tag is _not_
`:latest`, and later update that Deployment's image to a `:latest` tag, the
`imagePullPolicy` field will _not_ change to `Always`. You must manually change
the pull policy of any object after its initial creation.
{{< /note >}}
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
## Multi-architecture images with image indexes
@ -119,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private regist
example, run these on your desktop/laptop:
1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC.
1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use.
1. Get a list of your nodes; for example:
- if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
- if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`

View File

@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
### Authorization
[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
### Dynamic Admission Control

View File

@ -28,9 +28,7 @@ The most common way to implement the APIService is to run an *extension API serv
Extension API servers should have low latency networking to and from the kube-apiserver.
Discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it.
## {{% heading "whatsnext" %}}

View File

@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi
## Custom controllers
On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:
Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.
CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.

View File

@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
## Network Plugin Requirements

View File

@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
### Authorization
[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
### Dynamic Admission Control

View File

@ -103,27 +103,28 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}
If there isn't an Operator in the ecosystem that implements the behavior you
want, you can code your own.
You can also implement an Operator (that is, a Controller) using any language / runtime
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
Following are a few libraries and tools you can use to write your own cloud native
Operator.
{{% thirdparty-content %}}
* [kubebuilder](https://book.kubebuilder.io/)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* [Metacontroller](https://metacontroller.app/) along with WebHooks that
you implement yourself
* [Operator Framework](https://operatorframework.io)
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
* [Publish](https://operatorhub.io/) your operator for other people to use
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article).
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators

View File

@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed
A cluster operator can set up Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
The application can access the message queue as a service.
## Architecture

View File

@ -51,11 +51,11 @@ the same machine, and do not run user containers on this machine. See
{{< glossary_definition term_id="kube-controller-manager" length="all" >}}
Some types of these controllers are:
* Node controller: Responsible for noticing and responding when nodes go down.
* Job controller: Watches for Job objects that represent one-off tasks, then creates
Pods to run those tasks to completion.
* Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
* Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

View File

@ -52,7 +52,10 @@ If the prefix is omitted, the label Key is presumed to be private to the user. A
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
Valid label value:
* must be 63 characters or less (cannot be empty),
* must begin and end with an alphanumeric character (`[a-z0-9A-Z]`),
* could contain dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
@ -98,7 +101,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
```
environment = production

View File

@ -24,6 +24,10 @@ For non-unique user-provided attributes, Kubernetes provides [labels](/docs/conc
{{< glossary_definition term_id="name" length="all" >}}
{{< note >}}
When an object represents a physical entity, such as a Node representing a physical host, and the host is re-created under the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to inconsistencies.
{{< /note >}}
Below are three types of commonly used name constraints for resources.
### DNS Subdomain Names
@ -86,4 +90,3 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667.
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document.

View File

@ -28,7 +28,7 @@ resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).
It is not necessary to use multiple namespaces to separate slightly different
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
resources within the same namespace.
@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace:
When you create a [Service](/docs/concepts/services-networking/service/),
it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container only uses `<service-name>`, it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).

View File

@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

View File

@ -58,7 +58,7 @@ Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} `--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
A resource quota is enforced in a particular namespace when there is a
@ -610,17 +610,28 @@ plugins:
values: ["cluster-services"]
```
Then, create a resource quota object in the `kube-system` namespace:
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
```shell
$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
```
resourcequota/pods-cluster-services created
```
In this case, a pod creation will be allowed if:
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
in the `kube-system` namespace, and it has passed the resource quota check.
A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.

View File

@ -5,24 +5,23 @@ reviewers:
- bsalamat
title: Assigning Pods to Nodes
content_type: concept
weight: 20
---
<!-- overview -->
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on a particular set of
{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
(e.g. spread your pods across nodes so as not to place the pod on a node with insufficient free resources, etc.)
but there are some circumstances where you may want to control which node the pod deploys to - for example to ensure
that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
services that communicate a lot into the same availability zone.
<!-- body -->
## nodeSelector
@ -120,12 +119,12 @@ pod is eligible to be scheduled on, based on labels on the node.
There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
met, the pod continues to run on the node. In the future we plan to offer
`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
@ -261,7 +260,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`

View File

@ -5,7 +5,7 @@ reviewers:
- ahg-g
title: Resource Bin Packing for Extended Resources
content_type: concept
weight: 30
---
<!-- overview -->

View File

@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.
{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your

View File

@ -183,7 +183,7 @@ the three things:
{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}

View File

@ -28,7 +28,7 @@ a private certificate authority (CA), or based on a public key infrastructure li
to a generally recognized CA.
If your cluster uses a private certificate authority, you need a copy of that CA
certificate configured into your `~/.kube/config` on the client, so that you can
trust the connection and be confident it was not intercepted.
Your client can present a TLS client certificate at this stage.
@ -43,7 +43,7 @@ Authenticators are described in more detail in
[Authentication](/docs/reference/access-authn-authz/authentication/).
The input to the authentication step is the entire HTTP request; however, it typically
examines the headers and/or client certificate.
Authentication modules include client certificates, password, and plain tokens,
bootstrap tokens, and JSON Web Tokens (used for service accounts).
@ -135,7 +135,7 @@ for the corresponding API object, and then written to the object store (shown as
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default, the Kubernetes API server serves HTTP on 2 ports:
1. `localhost` port:

View File

@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation
## Code
@ -152,3 +153,4 @@ Learn about related Kubernetes security topics:
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Runtime class](/docs/concepts/containers/runtime-class)

View File

@ -32,7 +32,7 @@ should range from highly restricted to highly flexible:
- **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This
policy allows for known privilege escalations.
- **_Baseline_** - Minimally restrictive policy while preventing known privilege
escalations. Allows the default (minimally specified) Pod configuration.
- **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices.
@ -48,9 +48,9 @@ mechanisms (such as gatekeeper), the privileged profile may be an absence of app
rather than an instantiated policy. In contrast, for a deny-by-default mechanism (such as Pod
Security Policy) the privileged policy should enable all controls (disable all restrictions).
### Baseline
The Baseline policy is aimed at ease of adoption for common containerized workloads while
preventing known privilege escalations. This policy is targeted at application operators and
developers of non-critical applications. The following listed controls should be
enforced/disallowed:
@ -115,7 +115,9 @@ enforced/disallowed:
<tr>
<td>AppArmor <em>(optional)</em></td>
<td>
On supported hosts, the 'runtime/default' AppArmor profile is applied by default.
The baseline policy should prevent overriding or disabling the default AppArmor
profile, or restrict overrides to an allowed set of profiles.<br>
<br><b>Restricted Fields:</b><br>
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
<br><b>Allowed Values:</b> 'runtime/default', undefined<br>
@ -175,7 +177,7 @@ well as lower-trust users.The following listed controls should be enforced/disal
<td><strong>Policy</strong></td>
</tr>
<tr>
<td colspan="2"><em>Everything from the baseline profile.</em></td>
</tr>
<tr>
<td>Volume Types</td>
@ -275,7 +277,7 @@ of individual policies are not defined here.
## FAQ
### Why isn't there a profile between privileged and baseline?
The three profiles defined here have a clear linear progression from most secure (restricted) to least
secure (privileged), and cover a broad set of workloads. Privileges required above the baseline

View File

@ -387,7 +387,7 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
<h1>Welcome to nginx!</h1> <h1>Welcome to nginx!</h1>
``` ```
Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
```shell ```shell
kubectl edit svc my-nginx kubectl edit svc my-nginx

View File

@ -7,8 +7,8 @@ content_type: concept
weight: 20 weight: 20
--- ---
<!-- overview --> <!-- overview -->
This page provides an overview of DNS support by Kubernetes. Kubernetes creates DNS records for services and pods. You can contact
services with consistent DNS names instead of IP addresses.
<!-- body --> <!-- body -->
@ -18,19 +18,47 @@ Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures
the kubelets to tell individual containers to use the DNS Service's IP to the kubelets to tell individual containers to use the DNS Service's IP to
resolve DNS names. resolve DNS names.
### What things get DNS names?
Every Service defined in the cluster (including the DNS server itself) is Every Service defined in the cluster (including the DNS server itself) is
assigned a DNS name. By default, a client Pod's DNS search list will assigned a DNS name. By default, a client Pod's DNS search list includes the
include the Pod's own namespace and the cluster's default domain. This is best Pod's own namespace and the cluster's default domain.
illustrated by example:
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running ### Namespaces of Services
in namespace `bar` can look up this service by simply doing a DNS query for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
The following sections detail the supported record types and layout that is A DNS query may return different results based on the namespace of the pod making
it. DNS queries that don't specify a namespace are limited to the pod's
namespace. Access services in other namespaces by specifying the namespace in the DNS query.
For example, consider a pod in a `test` namespace. A `data` service is in
the `prod` namespace.
A query for `data` returns no results, because it uses the pod's `test` namespace.
A query for `data.prod` returns the intended result, because it specifies the
namespace.
DNS queries may be expanded using the pod's `/etc/resolv.conf`. Kubelet
sets this file for each pod. For example, a query for just `data` may be
expanded to `data.test.cluster.local`. The values of the `search` option
are used to expand queries. To learn more about DNS queries, see
[the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)
```
nameserver 10.32.0.10
search <namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
In summary, a pod in the _test_ namespace can successfully resolve either
`data.prod` or `data.prod.cluster.local`.
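As a concrete sketch, the `/etc/resolv.conf` that the kubelet writes for a Pod in the `test` namespace would look like the following (the nameserver address is an assumption; it depends on your cluster):

```
nameserver 10.32.0.10
search test.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

With this search list, an unqualified query for `data` is expanded within the Pod's own `test` namespace, while `data.prod` resolves to the Service in the `prod` namespace.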
### DNS Records
What objects get DNS records?
1. Services
2. Pods
The following sections detail the supported DNS record types and layout that is
supported. Any other layout or names or queries that happen to work are supported. Any other layout or names or queries that happen to work are
considered implementation details and are subject to change without warning. considered implementation details and are subject to change without warning.
For more up-to-date specification, see For more up-to-date specification, see

View File

@ -163,7 +163,7 @@ status:
loadBalancer: {} loadBalancer: {}
``` ```
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`. 1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}

View File

@ -49,6 +49,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy. * [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an * The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy. ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for * [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](https://www.haproxy.org/#desc). [HAProxy](https://www.haproxy.org/#desc).

View File

@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
{{< codenew file="service/networking/test-ingress.yaml" >}} {{< codenew file="service/networking/test-ingress.yaml" >}}
If you create it using `kubectl apply -f` you should be able to view the state If you create it using `kubectl apply -f` you should be able to view the state
of the Ingress you just added: of the Ingress you added:
```bash ```bash
kubectl get ingress test-ingress kubectl get ingress test-ingress

View File

@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is
no backend for the Service on a matching Node, then the second label will be no backend for the Service on a matching Node, then the second label will be
considered, and so forth, until no labels remain. considered, and so forth, until no labels remain.
If no match is found, the traffic will be rejected, just as if there were no If no match is found, the traffic will be rejected, as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no have no backends that match the topology of the client, the service has no
@ -87,7 +87,7 @@ traffic as follows.
* Service topology is not compatible with `externalTrafficPolicy=Local`, and * Service topology is not compatible with `externalTrafficPolicy=Local`, and
therefore a Service cannot use both of these features. It is possible to use therefore a Service cannot use both of these features. It is possible to use
both features in the same cluster on different Services, just not on the same both features in the same cluster on different Services, only not on the same
Service. Service.
* Valid topology keys are currently limited to `kubernetes.io/hostname`, * Valid topology keys are currently limited to `kubernetes.io/hostname`,

View File

@ -74,8 +74,8 @@ a new instance.
The name of a Service object must be a valid The name of a Service object must be a valid
[DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names). [DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names).
For example, suppose you have a set of Pods that each listen on TCP port 9376 For example, suppose you have a set of Pods where each listens on TCP port 9376
and carry a label `app=MyApp`: and contains a label `app=MyApp`:
```yaml ```yaml
apiVersion: v1 apiVersion: v1
@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find it by simply doing a name lookup for `my-service` should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work). (`my-service.my-ns` would also work).
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
@ -463,7 +463,7 @@ selectors defined:
For headless Services that define selectors, the endpoints controller creates For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return `Endpoints` records in the API, and modifies the DNS configuration to return
records (addresses) that point directly to the `Pods` backing the `Service`. A records (IP addresses) that point directly to the `Pods` backing the `Service`.
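For illustration, a headless Service is declared by setting `clusterIP: None`; this minimal sketch (the name, selector, and ports are assumptions) gets DNS records that point directly at the backing Pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # marks the Service as headless
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```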
### Without selectors ### Without selectors
@ -527,7 +527,7 @@ for NodePort use.
Using a NodePort gives you the freedom to set up your own load balancing solution, Using a NodePort gives you the freedom to set up your own load balancing solution,
to configure environments that are not fully supported by Kubernetes, or even to configure environments that are not fully supported by Kubernetes, or even
to just expose one or more nodes' IPs directly. to expose one or more nodes' IPs directly.
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort` Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).) and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
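As a sketch (the names and port numbers are assumptions), a NodePort Service that pins a specific node port looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # hypothetical name
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80          # reachable as .spec.clusterIP:spec.ports[*].port
      targetPort: 9376
      nodePort: 30007   # reachable as <NodeIP>:spec.ports[*].nodePort
```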
@ -785,8 +785,7 @@ you can use the following annotations:
``` ```
In the above example, if the Service contained three ports, `80`, `443`, and In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just `8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool: To see which policies are available for use, you can use the `aws` command line tool:
@ -958,7 +957,8 @@ groups are modified with the following IP rules:
| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | | Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
|------|----------|---------|------------|---------------------| |------|----------|---------|------------|---------------------|
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> | | Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> | | Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> | | MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
@ -1107,7 +1107,7 @@ but the current API requires it.
## Virtual IP implementation {#the-gory-details-of-virtual-ips} ## Virtual IP implementation {#the-gory-details-of-virtual-ips}
The previous information should be sufficient for many people who just want to The previous information should be sufficient for many people who want to
use Services. However, there is a lot going on behind the scenes that may be use Services. However, there is a lot going on behind the scenes that may be
worth understanding. worth understanding.
@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend. The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of This means that Service owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing. of which Pods they are actually accessing.
#### iptables #### iptables

View File

@ -80,7 +80,7 @@ parameters:
Users request dynamically provisioned storage by including a storage class in Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation `volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
is deprecated since v1.6. Users now can and should instead use the is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of `storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)). administrator (see [below](#enabling-dynamic-provisioning)).
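For example, a claim requesting a hypothetical `fast` class (the name and size are assumptions) sets the field like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast    # must match the name of a StorageClass set up by the administrator
  resources:
    requests:
      storage: 30Gi
```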

View File

@ -135,8 +135,9 @@ As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/pol
This feature requires the `GenericEphemeralVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be This feature requires the `GenericEphemeralVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be
enabled. Because this is an alpha feature, it is disabled by default. enabled. Because this is an alpha feature, it is disabled by default.
Generic ephemeral volumes are similar to `emptyDir` volumes, just more Generic ephemeral volumes are similar to `emptyDir` volumes, except more
flexible: flexible:
- Storage can be local or network-attached. - Storage can be local or network-attached.
- Volumes can have a fixed size that Pods are not able to exceed. - Volumes can have a fixed size that Pods are not able to exceed.
- Volumes may have some initial data, depending on the driver and - Volumes may have some initial data, depending on the driver and

View File

@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro
A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)). A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)).
While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.
See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume * VsphereVolume
* iSCSI * iSCSI
Mount options are not validated, so mount will simply fail if one is invalid. Mount options are not validated. If a mount option is invalid, the mount fails.
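As a sketch (the NFS server, path, and options are assumptions), mount options are listed on the PersistentVolume itself:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs                    # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1                 # not validated; an invalid option makes the mount fail
  nfs:
    server: nfs-server.example.com
    path: /exports/data
```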
In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however, of the `mountOptions` attribute. This annotation is still working; however,

View File

@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters
of a class when first creating StorageClass objects, and the objects cannot of a class when first creating StorageClass objects, and the objects cannot
be updated once they are created. be updated once they are created.
Administrators can specify a default StorageClass just for PVCs that don't Administrators can specify a default StorageClass only for PVCs that don't
request any particular class to bind to: see the request any particular class to bind to: see the
[PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) [PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
for details. for details.
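A default StorageClass is marked with an annotation; a minimal sketch (the name, provisioner, and parameters are assumptions for illustration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # PVCs without a storageClassName bind via this class
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```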
@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.
If the volume plugin does not support mount options but mount options are If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either specified, provisioning will fail. Mount options are not validated on either
the class or PV, so mount of the PV will simply fail if one is invalid. the class or PV. If a mount option is invalid, the PV mount fails.
### Volume Binding Mode ### Volume Binding Mode
@ -569,7 +569,7 @@ parameters:
`"http(s)://api-server:7860"` `"http(s)://api-server:7860"`
* `registry`: Quobyte registry to use to mount the volume. You can specify the * `registry`: Quobyte registry to use to mount the volume. You can specify the
registry as ``<host>:<port>`` pair or if you want to specify multiple registry as ``<host>:<port>`` pair or if you want to specify multiple
registries you just have to put a comma between them e.q. registries, put a comma between them.
``<host1>:<port>,<host2>:<port>,<host3>:<port>``. ``<host1>:<port>,<host2>:<port>,<host3>:<port>``.
The host can be an IP address or if you have a working DNS you can also The host can be an IP address or if you have a working DNS you can also
provide the DNS names. provide the DNS names.

View File

@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume. A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use). The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
Users need to be aware of the following when using this feature: Users need to be aware of the following when using this feature:
@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature:
## Provisioning ## Provisioning
Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.
```yaml ```yaml
apiVersion: v1 apiVersion: v1

View File

@ -34,10 +34,11 @@ Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod"
can use any number of volume types simultaneously. can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers the lifetime of a pod. Consequently, a volume outlives any containers
that run within the pod, and data is preserved across container restarts. When a that run within the pod, and data is preserved across container restarts. When a pod
pod ceases to exist, the volume is destroyed. ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.
At its core, a volume is just a directory, possibly with some data in it, which At its core, a volume is a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular medium that backs it, and the contents of it are determined by the particular
volume type used. volume type used.
@ -106,6 +107,8 @@ spec:
fsType: ext4 fsType: ext4
``` ```
If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
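For instance, mounting the first partition of the volume would look like this fragment of a Pod spec (the volume ID is a placeholder):

```yaml
volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: "<volume-id>"   # placeholder for the EBS volume ID
      fsType: ext4
      partition: 1              # mount the first partition instead of the whole device
```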
#### AWS EBS CSI migration #### AWS EBS CSI migration
{{< feature-state for_k8s_version="v1.17" state="beta" >}} {{< feature-state for_k8s_version="v1.17" state="beta" >}}
@ -929,7 +932,7 @@ GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte
### rbd ### rbd
An `rbd` volume allows a An `rbd` volume allows a
[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/) (RBD) volume to mount into your [Rados Block Device](https://docs.ceph.com/en/latest/rbd/) (RBD) volume to mount into your
Pod. Unlike `emptyDir`, which is erased when a pod is removed, the contents of Pod. Unlike `emptyDir`, which is erased when a pod is removed, the contents of
an `rbd` volume are preserved and the volume is unmounted. This an `rbd` volume are preserved and the volume is unmounted. This
means that a RBD volume can be pre-populated with data, and that data can means that a RBD volume can be pre-populated with data, and that data can

View File

@ -47,14 +47,14 @@ In this example:
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. * A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field. * The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage. * The `.spec.selector` field defines how the Deployment finds which Pods to manage.
In this case, you simply select a label that is defined in the Pod template (`app: nginx`). In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible, However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule. as long as the Pod template itself satisfies the rule.
{{< note >}} {{< note >}}
The `.spec.selector.matchLabels` field is a map of {key,value} pairs. The `.spec.selector.matchLabels` field is a map of {key,value} pairs.
A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`, A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`,
whose key field is "key" the operator is "In", and the values array contains only "value". whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value".
All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match. All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}} {{< /note >}}
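To illustrate the equivalence described in the note, the following two selector sketches match the same Pods:

```yaml
selector:
  matchLabels:
    app: nginx
---
selector:
  matchExpressions:
    - key: app        # the map key becomes `key`
      operator: In    # the operator is "In"
      values:
        - nginx       # the map value is the only entry in `values`
```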
@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
```shell ```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
``` ```
or simply use the following command:
or use the following command:
```shell ```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
``` ```
The output is similar to this: The output is similar to:
``` ```
deployment.apps/nginx-deployment image updated deployment.apps/nginx-deployment image updated
``` ```
@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
kubectl edit deployment.v1.apps/nginx-deployment kubectl edit deployment.v1.apps/nginx-deployment
``` ```
The output is similar to this: The output is similar to:
``` ```
deployment.apps/nginx-deployment edited deployment.apps/nginx-deployment edited
``` ```
@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
``` ```
The output is similar to this: The output is similar to this:
``` ```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated... Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
``` ```
or or
``` ```
deployment "nginx-deployment" successfully rolled out deployment "nginx-deployment" successfully rolled out
``` ```
@ -212,10 +218,11 @@ Get more details on your updated Deployment:
* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`. * After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this: The output is similar to this:
```
NAME READY UP-TO-DATE AVAILABLE AGE ```ini
nginx-deployment 3/3 3 3 36s NAME READY UP-TO-DATE AVAILABLE AGE
``` nginx-deployment 3/3 3 3 36s
```
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it * Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
@ -701,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m
You can pause a Deployment before triggering one or more updates and then resume it. This allows you to You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
* For example, with a Deployment that was just created: * For example, with a Deployment that was created:
Get the Deployment details: Get the Deployment details:
```shell ```shell
kubectl get deploy kubectl get deploy

View File

@ -99,7 +99,7 @@ pi-5rwd7
``` ```
Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
that just gets the name from each Pod in the returned list. with the name from each Pod in the returned list.
View the standard output of one of the pods: View the standard output of one of the pods:

View File

@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods
## Writing a ReplicaSet manifest ## Writing a ReplicaSet manifest
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
For ReplicaSets, the kind is always just ReplicaSet. For ReplicaSets, the `kind` is always a ReplicaSet.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
Refer to the first lines of the `frontend.yaml` example for guidance. Refer to the first lines of the `frontend.yaml` example for guidance.
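A minimal sketch of those first lines (the name and labels follow the `frontend.yaml` example):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
```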
@ -237,7 +237,7 @@ The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-temp
required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`. required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.
For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) field, For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default. `.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.
### Pod Selector ### Pod Selector

View File

@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m
Here, the selector is the same as the selector for the ReplicationController (seen in the Here, the selector is the same as the selector for the ReplicationController (seen in the
`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
specifies an expression that just gets the name from each pod in the returned list. specifies an expression with the name from each pod in the returned list.
## Writing a ReplicationController Spec ## Writing a ReplicationController Spec
@ -180,16 +179,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted. command is interrupted, it can be restarted.
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController). 0, wait for pod deletions, then delete the ReplicationController).
### Deleting just a ReplicationController ### Deleting only a ReplicationController
You can delete a ReplicationController without affecting any of its pods. You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
When using the REST API or go client library, simply delete the ReplicationController object. When using the REST API or Go client library, you can delete the ReplicationController object.
Once the original is deleted, you can create a new ReplicationController to replace it. As long Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
@ -240,7 +239,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic
## Responsibilities of the ReplicationController ## Responsibilities of the ReplicationController
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)). The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).

View File

@ -75,7 +75,7 @@ Here are some ways to mitigate involuntary disruptions:
and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
- For even higher availability when running replicated applications, - For even higher availability when running replicated applications,
spread applications across racks (using spread applications across racks (using
[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) [anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity))
or across zones (if using a or across zones (if using a
[multi-zone cluster](/docs/setup/multiple-zones).) [multi-zone cluster](/docs/setup/multiple-zones).)
@ -104,7 +104,7 @@ ensure that the number of replicas serving load never falls below a certain
percentage of the total. percentage of the total.
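Such a floor can be expressed as a PodDisruptionBudget; a sketch, assuming an application labelled `app: my-app` and the `policy/v1beta1` API available in this release:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb              # hypothetical name
spec:
  minAvailable: "90%"           # keep at least 90% of matching Pods up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
```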
Cluster managers and hosting providers should use tools which Cluster managers and hosting providers should use tools which
respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#eviction-api)
instead of directly deleting pods or deployments. instead of directly deleting pods or deployments.
For example, the `kubectl drain` subcommand lets you mark a node as going out of For example, the `kubectl drain` subcommand lets you mark a node as going out of

View File

@ -103,7 +103,7 @@ the ephemeral container to add as an `EphemeralContainers` list:
"apiVersion": "v1", "apiVersion": "v1",
"kind": "EphemeralContainers", "kind": "EphemeralContainers",
"metadata": { "metadata": {
"name": "example-pod" "name": "example-pod"
}, },
"ephemeralContainers": [{ "ephemeralContainers": [{
"command": [ "command": [

View File

@ -38,8 +38,7 @@ If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that no
are [scheduled for deletion](#pod-garbage-collection) after a timeout period. are [scheduled for deletion](#pod-garbage-collection) after a timeout period.
Pods do not, by themselves, self-heal. If a Pod is scheduled to a Pods do not, by themselves, self-heal. If a Pod is scheduled to a
{{< glossary_tooltip text="node" term_id="node" >}} that then fails, {{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise, a Pod won't
or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't
survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
higher-level abstraction, called a higher-level abstraction, called a
{{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of {{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of
@ -313,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe. is different from the liveness probe.
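As a sketch (the paths and port are assumptions), a container can declare distinct endpoints for the two probes:

```yaml
readinessProbe:
  httpGet:
    path: /ready      # endpoint checked only for readiness
    port: 8080
livenessProbe:
  httpGet:
    path: /healthz    # separate endpoint checked for liveness
    port: 8080
```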
{{< note >}} {{< note >}}
If you just want to be able to drain requests when the Pod is deleted, you do not If you want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself necessarily need a readiness probe; on deletion, the Pod automatically puts itself
into an unready state regardless of whether the readiness probe exists. into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the containers in the Pod The Pod remains in the unready state while it waits for the containers in the Pod

View File

@ -61,7 +61,7 @@ Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content
For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs. For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs.
Members of `@kubernetes/website-maintainers` can create new development branches to coordinate translation efforts. Members of `@kubernetes/website-maintainers` can create new localization branches to coordinate translation efforts.
Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs.
@ -205,14 +205,20 @@ To ensure accuracy in grammar and meaning, members of your localization team sho
### Source files ### Source files
Localizations must be based on the English files from the most recent release, {{< latest-version >}}. Localizations must be based on the English files from a specific release targeted by the localization team.
Each localization team can decide which release to target, which is referred to as the _target version_ below.
To find source files for the most recent release: To find source files for your target version:
1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website. 1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
2. Select the `release-1.X` branch for the most recent version. 2. Select a branch for your target version from the following table:
Target version | Branch
-----|-----
Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})
Latest version | [`master`](https://github.com/kubernetes/website/tree/master)
Previous version | `release-*.**`
The latest version is {{< latest-version >}}, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}). The `master` branch holds content for the current release `{{< latest-version >}}`. The release team will create the `{{< release-branch >}}` branch shortly before the next release: v{{< skew nextMinorVersion >}}.
### Site strings in i18n ### Site strings in i18n
@ -239,11 +245,11 @@ Some language teams have their own language-specific style guide and glossary. F
## Branching strategy ## Branching strategy
Because localization projects are highly collaborative efforts, we encourage teams to work in shared development branches. Because localization projects are highly collaborative efforts, we encourage teams to work in shared localization branches.
To collaborate on a development branch: To collaborate on a localization branch:
1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a development branch from a source branch on https://github.com/kubernetes/website. 1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a localization branch from a source branch on https://github.com/kubernetes/website.
Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository. Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository.
@ -251,25 +257,31 @@ To collaborate on a development branch:
`dev-<source version>-<language code>.<team milestone>` `dev-<source version>-<language code>.<team milestone>`
For example, an approver on a German localization team opens the development branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12. For example, an approver on a German localization team opens the localization branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12.
2. Individual contributors open feature branches based on the development branch. 2. Individual contributors open feature branches based on the localization branch.
For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`. For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`.
3. Approvers review and merge feature branches into the development branch. 3. Approvers review and merge feature branches into the localization branch.
4. Periodically, an approver merges the development branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request. 4. Periodically, an approver merges the localization branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request.
Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German development branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc. Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
Teams must merge localized content into the same release branch from which the content was sourced. For example, a development branch sourced from {{< release-branch >}} must be based on {{< release-branch >}}. Teams must merge localized content into the same branch from which the content was sourced.
An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch. For example:
- a localization branch sourced from `master` must be merged into `master`.
- a localization branch sourced from `release-1.19` must be merged into `release-1.19`.
At the beginning of every team milestone, it's helpful to open an issue [comparing upstream changes](https://github.com/kubernetes/website/blob/master/scripts/upstream_changes.py) between the previous development branch and the current development branch. {{< note >}}
If your localization branch was created from the `master` branch but was not merged into `master` before the new release branch `{{< release-branch >}}` was created, merge it into both `master` and the new release branch `{{< release-branch >}}`. To merge your localization branch into the new release branch `{{< release-branch >}}`, you need to switch the upstream branch of your localization branch to `{{< release-branch >}}`.
{{< /note >}}
While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required. At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required.
For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo). For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo).
@ -290,5 +302,3 @@ Once a localization meets requirements for workflow and minimum output, SIG docs
- Enable language selection on the website - Enable language selection on the website
- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).

View File

@ -39,8 +39,8 @@ Anyone can write a blog post and submit it for review.
- Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft. - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft.
- Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog. - Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog.
- Blog posts should be original content - Blog posts should be original content
- The official blog is not for repurposing existing content from a third party as new content. - The official blog is not for repurposing existing content from a third party as new content.
- The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog does allow commercial use of the content for commercial purposes, just not the other way around. - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around.
- Blog posts should aim to be future proof - Blog posts should aim to be future proof
- Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader. - Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader.
- It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post. - It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post.

View File

@ -77,9 +77,8 @@ merged. Keep the following in mind:
Alpha features. Alpha features.
- It's hard to test (and therefore to document) a feature that hasn't been merged, - It's hard to test (and therefore to document) a feature that hasn't been merged,
or is at least considered feature-complete in its PR. or is at least considered feature-complete in its PR.
- Determining whether a feature needs documentation is a manual process and - Determining whether a feature needs documentation is a manual process. Even if
just because a feature is not marked as needing docs doesn't mean it doesn't a feature is not marked as needing docs, you may need to document the feature.
need them.
## For developers or other SIG members ## For developers or other SIG members

View File

@ -52,7 +52,7 @@ Members can:
{{< note >}} {{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding Using `/lgtm` triggers automation. If you want to provide non-binding
approval, simply commenting "LGTM" works too! approval, commenting "LGTM" works too!
{{< /note >}} {{< /note >}}
- Use the `/hold` comment to block merging for a pull request - Use the `/hold` comment to block merging for a pull request

View File

@ -17,8 +17,6 @@ Changes to the style guide are made by SIG Docs as a group. To propose a change
or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the
discussion. discussion.
<!-- body --> <!-- body -->
{{< note >}} {{< note >}}
@ -48,12 +46,11 @@ When you refer specifically to interacting with an API object, use [UpperCamelCa
When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization). When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence. You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.
Don't split the API object name into separate words. For example, use Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List.
PodTemplateList, not Pod Template List.
The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects. The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code).
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}} {{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't Do | Don't
@ -65,17 +62,18 @@ Every ConfigMap object is part of a namespace. | Every configMap object is part
For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API. For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}} {{< /table >}}
### Use angle brackets for placeholders ### Use angle brackets for placeholders
Use angle brackets for placeholders. Tell the reader what a placeholder Use angle brackets for placeholders. Tell the reader what a placeholder
represents. represents, for example:
1. Display information about a pod: Display information about a pod:
kubectl describe pod <pod-name> -n <namespace> ```shell
kubectl describe pod <pod-name> -n <namespace>
```
If the namespace of the pod is `default`, you can omit the '-n' parameter. If the namespace of the pod is `default`, you can omit the '-n' parameter.
### Use bold for user interface elements ### Use bold for user interface elements
@ -189,7 +187,6 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`. Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}} {{< /table >}}
## Code snippet formatting ## Code snippet formatting
### Don't include the command prompt ### Don't include the command prompt
@ -200,17 +197,20 @@ Do | Don't
kubectl get pods | $ kubectl get pods kubectl get pods | $ kubectl get pods
{{< /table >}} {{< /table >}}
### Separate commands from output ### Separate commands from output
Verify that the pod is running on your chosen node: Verify that the pod is running on your chosen node:
kubectl get pods --output=wide ```shell
kubectl get pods --output=wide
```
The output is similar to this: The output is similar to this:
NAME READY STATUS RESTARTS AGE IP NODE ```console
nginx 1/1 Running 0 13s 10.200.0.4 worker0 NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
```
### Versioning Kubernetes examples ### Versioning Kubernetes examples
@ -263,17 +263,17 @@ Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create d
2. Use the following syntax to apply a style: 2. Use the following syntax to apply a style:
``` ```none
{{</* note */>}} {{</* note */>}}
No need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.) No need to include a prefix; the shortcode automatically provides one. (Note:, Caution:, etc.)
{{</* /note */>}} {{</* /note */>}}
``` ```
The output is: The output is:
{{< note >}} {{< note >}}
The prefix you choose is the same text for the tag. The prefix you choose is the same text for the tag.
{{< /note >}} {{< /note >}}
### Note ### Note
@ -403,7 +403,7 @@ The output is:
1. Prepare the batter, and pour into springform pan. 1. Prepare the batter, and pour into springform pan.
{{< note >}}Grease the pan for best results.{{< /note >}} {{< note >}}Grease the pan for best results.{{< /note >}}
1. Bake for 20-25 minutes or until set. 1. Bake for 20-25 minutes or until set.
@ -417,13 +417,14 @@ Shortcodes inside include statements will break the build. You must insert them
{{</* /note */>}} {{</* /note */>}}
``` ```
## Markdown elements ## Markdown elements
### Line breaks ### Line breaks
Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level (or the title) without any preceding paragraphs or texts. A two line spacing helps visualize the overall structure of content in a code editor better. Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level (or the title) without any preceding paragraphs or texts. A two line spacing helps visualize the overall structure of content in a code editor better.
### Headings ### Headings
People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices; they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest. People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices; they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest.
{{< table caption = "Do and Don't - Headings" >}} {{< table caption = "Do and Don't - Headings" >}}
@ -453,24 +454,24 @@ Write hyperlinks that give you context for the content they link to. For example
Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}` Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}`
{{< /table >}} {{< /table >}}
### Lists ### Lists
Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list. Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list.
Website navigation links can also be marked up as list items; after all they are nothing but a group of related links. Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.
- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences. - End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.
{{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}} {{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}}
- Use the number one (`1.`) for ordered lists. - Use the number one (`1.`) for ordered lists.
- Use (`+`), (`*`), or (`-`) for unordered lists. - Use (`+`), (`*`), or (`-`) for unordered lists.
- Leave a blank line after each list. - Leave a blank line after each list.
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅). - Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab. - List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
### Tables ### Tables
@ -490,7 +491,6 @@ Do | Don't
This command starts a proxy. | This command will start a proxy. This command starts a proxy. | This command will start a proxy.
{{< /table >}} {{< /table >}}
Exception: Use future or past tense if it is required to convey the correct Exception: Use future or past tense if it is required to convey the correct
meaning. meaning.
@ -503,7 +503,6 @@ You can explore the API using a browser. | The API can be explored using a brows
The YAML file specifies the replica count. | The replica count is specified in the YAML file. The YAML file specifies the replica count. | The replica count is specified in the YAML file.
{{< /table >}} {{< /table >}}
Exception: Use passive voice if active voice leads to an awkward construction. Exception: Use passive voice if active voice leads to an awkward construction.
### Use simple and direct language ### Use simple and direct language
@ -527,7 +526,6 @@ You can create a Deployment by ... | We'll create a Deployment by ...
In the preceding output, you can see... | In the preceding output, we can see ... In the preceding output, you can see... | In the preceding output, we can see ...
{{< /table >}} {{< /table >}}
### Avoid Latin phrases ### Avoid Latin phrases
Prefer English terms over Latin abbreviations. Prefer English terms over Latin abbreviations.
@ -539,7 +537,6 @@ For example, ... | e.g., ...
That is, ...| i.e., ... That is, ...| i.e., ...
{{< /table >}} {{< /table >}}
Exception: Use "etc." for et cetera. Exception: Use "etc." for et cetera.
## Patterns to avoid ## Patterns to avoid
@ -557,7 +554,6 @@ Kubernetes provides a new feature for ... | We provide a new feature ...
This page teaches you how to use pods. | In this page, we are going to learn about pods. This page teaches you how to use pods. | In this page, we are going to learn about pods.
{{< /table >}} {{< /table >}}
### Avoid jargon and idioms ### Avoid jargon and idioms
Some readers speak English as a second language. Avoid jargon and idioms to help them understand better. Some readers speak English as a second language. Avoid jargon and idioms to help them understand better.
@ -569,13 +565,16 @@ Internally, ... | Under the hood, ...
Create a new cluster. | Turn up a new cluster. Create a new cluster. | Turn up a new cluster.
{{< /table >}} {{< /table >}}
### Avoid statements about the future ### Avoid statements about the future
Avoid making promises or giving hints about the future. If you need to talk about Avoid making promises or giving hints about the future. If you need to talk about
an alpha feature, put the text under a heading that identifies it as alpha an alpha feature, put the text under a heading that identifies it as alpha
information. information.
An exception to this rule is documentation about announced deprecations
targeting removal in future versions. One example of documentation like this
is the [Deprecated API migration guide](/docs/reference/using-api/deprecation-guide/).
### Avoid statements that will soon be out of date ### Avoid statements that will soon be out of date
Avoid words like "currently" and "new." A feature that is new today might not be Avoid words like "currently" and "new." A feature that is new today might not be
@ -588,6 +587,18 @@ In version 1.4, ... | In the current version, ...
The Federation feature provides ... | The new Federation feature provides ... The Federation feature provides ... | The new Federation feature provides ...
{{< /table >}} {{< /table >}}
### Avoid words that assume a specific level of understanding
Avoid words such as "just", "simply", "easy", "easily", or "simple". These words do not add value.
{{< table caption = "Do and Don't - Avoid insensitive words" >}}
Do | Don't
:--| :-----
Include one command in ... | Include just one command in ...
Run the container ... | Simply run the container ...
You can remove ... | You can easily remove ...
These steps ... | These simple steps ...
{{< /table >}}
## {{% heading "whatsnext" %}} ## {{% heading "whatsnext" %}}

View File

@ -6,8 +6,10 @@ linkTitle: "Reference"
main_menu: true main_menu: true
weight: 70 weight: 70
content_type: concept content_type: concept
no_list: true
--- ---
<!-- overview --> <!-- overview -->
This section of the Kubernetes documentation contains references. This section of the Kubernetes documentation contains references.
@ -18,11 +20,17 @@ This section of the Kubernetes documentation contains references.
## API Reference ## API Reference
* [Glossary](/docs/reference/glossary/) - a comprehensive, standardized list of Kubernetes terminology
* [Kubernetes API Reference](/docs/reference/kubernetes-api/) * [Kubernetes API Reference](/docs/reference/kubernetes-api/)
* [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) * [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
* [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes. * [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes.
* [API access control](/docs/reference/access-authn-authz/) - details on how Kubernetes controls API access
* [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/)
## API Client Libraries ## Officially supported client libraries
To call the Kubernetes API from a programming language, you can use To call the Kubernetes API from a programming language, you can use
[client libraries](/docs/reference/using-api/client-libraries/). Officially supported [client libraries](/docs/reference/using-api/client-libraries/). Officially supported
@ -32,22 +40,28 @@ client libraries:
- [Kubernetes Python client library](https://github.com/kubernetes-client/python) - [Kubernetes Python client library](https://github.com/kubernetes-client/python)
- [Kubernetes Java client library](https://github.com/kubernetes-client/java) - [Kubernetes Java client library](https://github.com/kubernetes-client/java)
- [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript) - [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript)
- [Kubernetes Dotnet client library](https://github.com/kubernetes-client/csharp)
- [Kubernetes Haskell Client library](https://github.com/kubernetes-client/haskell)
## CLI Reference ## CLI
* [kubectl](/docs/reference/kubectl/overview/) - Main CLI tool for running commands and managing Kubernetes clusters. * [kubectl](/docs/reference/kubectl/overview/) - Main CLI tool for running commands and managing Kubernetes clusters.
* [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl. * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl.
* [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster. * [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.
## Components Reference ## Components
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy. * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers. * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
* [kube-scheduler Policies](/docs/reference/scheduling/policies)
* [kube-scheduler Profiles](/docs/reference/scheduling/config#profiles) ## Scheduling
* [Scheduler Policies](/docs/reference/scheduling/policies)
* [Scheduler Profiles](/docs/reference/scheduling/config#profiles)
## Design Docs ## Design Docs

View File

@ -1,6 +1,6 @@
--- ---
title: API Access Control title: API Access Control
weight: 20 weight: 15
no_list: true no_list: true
--- ---

View File

@ -19,7 +19,7 @@ Attribute-based access control (ABAC) defines an access control paradigm whereby
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup. To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup.
The file format is [one JSON object per line](https://jsonlines.org/). There The file format is [one JSON object per line](https://jsonlines.org/). There
should be no enclosing list or map, just one map per line. should be no enclosing list or map, only one map per line.
Each line is a "policy object", where each such object is a map with the following Each line is a "policy object", where each such object is a map with the following
properties: properties:
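As a rough sketch of what such lines can look like (the user names, scopes, and file path below are made-up illustrations, not part of the original example), a policy file could be written like this:

```shell
# hypothetical policies; one JSON policy object per line, no enclosing list or map
cat <<'EOF' > /etc/kubernetes/abac-policy.jsonl
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
EOF
```

The file is then passed to the API server with `--authorization-policy-file` as described above.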

View File

@ -110,7 +110,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are is enabled, images are always pulled prior to starting containers, which means valid credentials are
required. required.
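As a hedged sketch of how an admission controller like this is switched on (the flag is the standard admission plugin flag; the plugin list shown is illustrative, not a recommendation):

```shell
# illustrative only: add AlwaysPullImages to whatever admission plugins you already enable
# (remaining kube-apiserver flags omitted)
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,AlwaysPullImages
```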
@ -176,7 +176,7 @@ The default value for `default-not-ready-toleration-seconds` and `default-unreac
This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container. This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container.
This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec). This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.18. The DenyExecOnPrivileged admission plugin is deprecated.
Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods
@ -190,7 +190,7 @@ This admission controller will deny exec and attach commands to pods that run wi
allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
have access to the host PID namespace. have access to the host PID namespace.
The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.18. The DenyEscalatingExec admission plugin is deprecated.
Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods
@ -565,6 +565,8 @@ Starting from 1.11, this admission controller is disabled by default.
### PodNodeSelector {#podnodeselector} ### PodNodeSelector {#podnodeselector}
{{< feature-state for_k8s_version="v1.5" state="alpha" >}}
This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration. This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.
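For example (a sketch: the namespace name and label are hypothetical; the annotation key is the one this plugin reads), a per-namespace default node selector can be set with an annotation:

```shell
# hypothetical namespace and selector read by the PodNodeSelector admission controller
kubectl annotate namespace team-a \
  scheduler.alpha.kubernetes.io/node-selector="env=production"
```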
#### Configuration File Format #### Configuration File Format
@ -675,6 +677,8 @@ for more information.
### PodTolerationRestriction {#podtolerationrestriction} ### PodTolerationRestriction {#podtolerationrestriction}
{{< feature-state for_k8s_version="v1.7" state="alpha" >}}
The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace. The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace.
It rejects the pod request if there is a conflict. It rejects the pod request if there is a conflict.
It then merges the tolerations annotated on the namespace into the tolerations of the pod. It then merges the tolerations annotated on the namespace into the tolerations of the pod.

View File

@ -99,7 +99,7 @@ openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app
This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2". This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2".
See [Managing Certificates](/docs/concepts/cluster-administration/certificates/) for how to generate a client cert. See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for how to generate a client cert.
### Static Token File ### Static Token File
@ -205,8 +205,10 @@ spec:
``` ```
Service account bearer tokens are perfectly valid to use outside the cluster and Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long standing jobs that wish to talk to the can be used to create identities for long standing jobs that wish to talk to the
Kubernetes API. To manually create a service account, simply use the `kubectl Kubernetes API. To manually create a service account, simply use the `kubectl`
create serviceaccount (NAME)` command. This creates a service account in the create serviceaccount (NAME)` command. This creates a service account in the
current namespace and an associated secret. current namespace and an associated secret.
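A minimal sketch of that manual flow (the account name is a placeholder):

```shell
# create a service account in the current namespace
kubectl create serviceaccount build-robot

# inspect it; the associated token Secret appears under "secrets"
kubectl get serviceaccount build-robot --output yaml
```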
@ -320,12 +322,13 @@ sequenceDiagram
8. Once authorized the API server returns a response to `kubectl` 8. Once authorized the API server returns a response to `kubectl`
9. `kubectl` provides feedback to the user 9. `kubectl` provides feedback to the user
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges: "phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first. 1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked; like a certificate, it should be short-lived (only a few minutes), so having to get a new token every few minutes can be very annoying. 2. The `id_token` can't be revoked; like a certificate, it should be short-lived (only a few minutes), so having to get a new token every few minutes can be very annoying.
3. To authenticate to the Kubernetes dashboard, you must the `kubectl proxy` command or a reverse proxy that injects the `id_token`. 3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
#### Configuring the API Server #### Configuring the API Server
@ -420,12 +423,12 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc name: oidc
``` ```
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option ##### Option 2 - Use the `--token` Option
The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option: The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
```bash ```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
@ -455,7 +458,7 @@ clusters:
- name: name-of-remote-authn-service - name: name-of-remote-authn-service
cluster: cluster:
certificate-authority: /path/to/ca.pem # CA for verifying the remote service. certificate-authority: /path/to/ca.pem # CA for verifying the remote service.
server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'. server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.
# users refers to the API server's webhook configuration. # users refers to the API server's webhook configuration.
users: users:
@ -731,7 +734,7 @@ to the impersonated user info.
The following HTTP headers can be used to perform an impersonation request: The following HTTP headers can be used to perform an impersonation request:
* `Impersonate-User`: The username to act as. * `Impersonate-User`: The username to act as.
* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User" * `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User".
* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` should be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1). * `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` should be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
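As a sketch of these headers in use: `kubectl` exposes them through its `--as` and `--as-group` flags (the user and group below are hypothetical):

```shell
# perform one request while impersonating a user and one of that user's groups
kubectl get pods --namespace default --as=jane.doe@example.com --as-group=developers
```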
{{< note >}} {{< note >}}

View File

@ -138,7 +138,7 @@ no
exposes the API server authorization to external services. Other resources in exposes the API server authorization to external services. Other resources in
this group include: this group include:
* `SubjectAccessReview` - Access review for any user, not just the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. * `SubjectAccessReview` - Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs.
* `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace. * `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace.
* `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions. * `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions.
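As an illustration (the user, verb, resource, and namespace are placeholders), a `SubjectAccessReview` can be submitted directly and the decision read back from its status:

```shell
# ask the API server whether user "jane" may get pods in the default namespace
kubectl create -f - -o yaml <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane
  resourceAttributes:
    verb: get
    resource: pods
    namespace: default
EOF
```

The API server fills in `status.allowed` (and, when denied, a reason) in the object it returns.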

View File

@ -167,7 +167,7 @@ data:
users: [] users: []
``` ```
The `kubeconfig` member of the ConfigMap is a config file with just the cluster The `kubeconfig` member of the ConfigMap is a config file with only the cluster
information filled out. The key thing being communicated here is the information filled out. The key thing being communicated here is the
`certificate-authority-data`. This may be expanded in the future. `certificate-authority-data`. This may be expanded in the future.
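To look at that shape yourself (assuming the conventional `cluster-info` ConfigMap in the `kube-public` namespace), something like the following works:

```shell
kubectl --namespace kube-public get configmap cluster-info --output yaml
```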

View File

@ -196,8 +196,8 @@ O is the group that this user will belong to. You can refer to
[RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups. [RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups.
```shell ```shell
openssl genrsa -out john.key 2048 openssl genrsa -out myuser.key 2048
openssl req -new -key john.key -out john.csr openssl req -new -key myuser.key -out myuser.csr
``` ```
### Create CertificateSigningRequest ### Create CertificateSigningRequest
@ -209,7 +209,7 @@ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1 apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest kind: CertificateSigningRequest
metadata: metadata:
name: john name: myuser
spec: spec:
groups: groups:
- system:authenticated - system:authenticated
@ -224,7 +224,7 @@ Some points to note:
- `usages` has to be '`client auth`' - `usages` has to be '`client auth`'
- `request` is the base64 encoded value of the CSR file content. - `request` is the base64 encoded value of the CSR file content.
You can get the content using this command: ```cat john.csr | base64 | tr -d "\n"``` You can get the content using this command: ```cat myuser.csr | base64 | tr -d "\n"```
### Approve certificate signing request ### Approve certificate signing request
@ -239,7 +239,7 @@ kubectl get csr
Approve the CSR: Approve the CSR:
```shell ```shell
kubectl certificate approve john kubectl certificate approve myuser
``` ```
### Get the certificate ### Get the certificate
@ -247,11 +247,17 @@ kubectl certificate approve john
Retrieve the certificate from the CSR: Retrieve the certificate from the CSR:
```shell ```shell
kubectl get csr/john -o yaml kubectl get csr/myuser -o yaml
``` ```
The certificate value is in Base64-encoded format under `status.certificate`. The certificate value is in Base64-encoded format under `status.certificate`.
Export the issued certificate from the CertificateSigningRequest.
```
kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt
```
### Create Role and RoleBinding ### Create Role and RoleBinding
With the certificate created, it is time to define the Role and RoleBinding for With the certificate created, it is time to define the Role and RoleBinding for
@ -266,31 +272,30 @@ kubectl create role developer --verb=create --verb=get --verb=list --verb=update
This is a sample command to create a RoleBinding for this new user: This is a sample command to create a RoleBinding for this new user:
```shell ```shell
kubectl create rolebinding developer-binding-john --role=developer --user=john kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser
``` ```
### Add to kubeconfig ### Add to kubeconfig
The last step is to add this user into the kubeconfig file. The last step is to add this user into the kubeconfig file.
This example assumes the key and certificate files are located at "/home/vagrant/work/".
First, you need to add new credentials: First, you need to add new credentials:
``` ```
kubectl config set-credentials john --client-key=/home/vagrant/work/john.key --client-certificate=/home/vagrant/work/john.crt --embed-certs=true kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true
``` ```
Then, you need to add the context: Then, you need to add the context:
``` ```
kubectl config set-context john --cluster=kubernetes --user=john kubectl config set-context myuser --cluster=kubernetes --user=myuser
``` ```
To test it, change the context to `john`: To test it, change the context to `myuser`:
``` ```
kubectl config use-context john kubectl config use-context myuser
``` ```
## Approval or rejection {#approval-rejection} ## Approval or rejection {#approval-rejection}
@ -363,7 +368,7 @@ status:
It's usual to set `status.conditions.reason` to a machine-friendly reason It's usual to set `status.conditions.reason` to a machine-friendly reason
code using TitleCase; this is a convention but you can set it to anything code using TitleCase; this is a convention but you can set it to anything
you like. If you want to add a note just for human consumption, use the you like. If you want to add a note for human consumption, use the
`status.conditions.message` field. `status.conditions.message` field.
## Signing ## Signing
@ -438,4 +443,3 @@ status:
* View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go) * View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1 * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986) * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)

View File

@ -219,7 +219,7 @@ the role that is granted to those subjects.
1. A binding to a different role is a fundamentally different binding. 1. A binding to a different role is a fundamentally different binding.
Requiring a binding to be deleted/recreated in order to change the `roleRef` Requiring a binding to be deleted/recreated in order to change the `roleRef`
ensures the full list of subjects in the binding is intended to be granted ensures the full list of subjects in the binding is intended to be granted
the new role (as opposed to enabling accidentally modifying just the roleRef the new role (as opposed to enabling or accidentally modifying only the roleRef
without verifying all of the existing subjects should be given the new role's without verifying all of the existing subjects should be given the new role's
permissions). permissions).
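In practice, one hedged way to handle that delete-and-recreate step is `kubectl auth reconcile`, which recreates a binding when its `roleRef` differs from the manifest (the file name here is a placeholder):

```shell
# recreate bindings whose roleRef changed instead of trying to patch them in place
kubectl auth reconcile -f my-rbac-rules.yaml
```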
@ -333,7 +333,7 @@ as a cluster administrator, include rules for custom resources, such as those se
or aggregated API servers, to extend the default roles. or aggregated API servers, to extend the default roles.
For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource
named CronTab, whereas the "view" role can perform just read actions on CronTab resources. named CronTab, whereas the "view" role can perform only read actions on CronTab resources.
You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server.
```yaml ```yaml

View File

@ -1,4 +1,4 @@
--- ---
title: Command line tools reference title: Component tools
weight: 60 weight: 60
--- ---

View File

@ -242,6 +242,7 @@ different Kubernetes components.
| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | | `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | | `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | | `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - |
| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - |
| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | | `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 |
| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | | `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | | `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |
@ -351,7 +352,7 @@ different Kubernetes components.
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | | `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | | `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | - | | `VolumeScheduling` | `true` | GA | 1.13 | - |
| `VolumeSubpath` | `true` | GA | 1.13 | - | | `VolumeSubpath` | `true` | GA | 1.10 | - |
| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | | `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 |
| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 | | `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 |
| `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | | `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - |
@ -634,8 +635,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials. - `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet - `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi). to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See - `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md) [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details. for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and - `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
@ -728,7 +729,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
[ServiceTopology](/docs/concepts/services-networking/service-topology/) [ServiceTopology](/docs/concepts/services-networking/service-topology/)
for more details. for more details.
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes. - `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
See [volumes](docs/concepts/storage/volumes) for more details. See [volumes](/docs/concepts/storage/volumes) for more details.
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain - `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
Name(FQDN) as the hostname of a pod. See Name(FQDN) as the hostname of a pod. See
[Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field). [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
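Feature gates are toggled per component through the `--feature-gates` flag (or the equivalent field in a component configuration file); as a sketch, with gates and values chosen purely for illustration:

```shell
# illustrative only: pick gates that exist for your component and Kubernetes version
kube-apiserver --feature-gates=SetHostnameAsFQDN=true,SizeMemoryBackedVolumes=true
kubelet --feature-gates=KubeletPodResources=true,LegacyNodeRoleBehavior=false
```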

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -185,9 +185,9 @@ systemd unit file perhaps) to enable the token file. See docs
further details. further details.
### Authorize kubelet to create CSR ### Authorize kubelet to create CSR
Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and just these) permissions, `system:node-bootstrapper`. Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`.
To do this, you just need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`.
``` ```
# enable bootstrapping nodes to create CSR # enable bootstrapping nodes to create CSR
@ -345,7 +345,7 @@ The important elements to note are:
* `token`: the token to use * `token`: the token to use
The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token.
As stated earlier, _any_ valid authentication method can be used, not just tokens. As stated earlier, _any_ valid authentication method can be used, not only tokens.
Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file: Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file:
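A sketch of that generation step, with placeholder values rather than the ones from the example file:

```shell
kubectl config --kubeconfig=bootstrap-kubeconfig set-cluster bootstrap \
  --server=https://<api-server-address>:6443 \
  --certificate-authority=/path/to/ca.crt --embed-certs=true
kubectl config --kubeconfig=bootstrap-kubeconfig set-credentials kubelet-bootstrap \
  --token=<bootstrap-token>
kubectl config --kubeconfig=bootstrap-kubeconfig set-context bootstrap \
  --user=kubelet-bootstrap --cluster=bootstrap
kubectl config --kubeconfig=bootstrap-kubeconfig use-context bootstrap
```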

View File

@ -302,7 +302,7 @@ kubelet [flags]
<td colspan="2">--enable-cadvisor-json-endpoints&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `false`</td> <td colspan="2">--enable-cadvisor-json-endpoints&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `false`</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enable cAdvisor json `/spec` and `/stats/*` endpoints. (DEPRECATED: will be removed in a future version)</td> <td></td><td style="line-height: 130%; word-wrap: break-word;">Enable cAdvisor json `/spec` and `/stats/*` endpoints. This flag has no effect on the /stats/summary endpoint. (DEPRECATED: will be removed in a future version)</td>
</tr> </tr>
<tr> <tr>
@ -917,7 +917,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)<br/>
<td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `k8s.gcr.io/pause:3.2`</td> <td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `k8s.gcr.io/pause:3.2`</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The image whose network/IPC namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to `docker`.</td> <td></td><td style="line-height: 130%; word-wrap: break-word;"> Specified image will not be pruned by the image garbage collector. When container-runtime is set to `docker`, all containers in each pod will use the network/ipc namespaces from this image. Other CRI implementations have their own configuration to set this image.</td>
</tr> </tr>
<tr> <tr>

View File

@ -14,7 +14,7 @@ tags:
A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component
that embeds cloud-specific control logic. The cloud controller manager lets you link your that embeds cloud-specific control logic. The cloud controller manager lets you link your
cluster into your cloud provider's API, and separates out the components that interact cluster into your cloud provider's API, and separates out the components that interact
with that cloud platform from components that just interact with your cluster. with that cloud platform from components that only interact with your cluster.
<!--more--> <!--more-->

View File

@ -17,6 +17,6 @@ tags:
Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.<br> Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.<br>
{{< note >}} {{< note >}}
Cluster operators are different from the [Operator pattern](https://coreos.com/operators) that extends the Kubernetes API. Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API.
{{< /note >}} {{< /note >}}

View File

@ -2,7 +2,7 @@
approvers: approvers:
- chenopis - chenopis
- abiogenesis-now - abiogenesis-now
title: Standardized Glossary title: Glossary
layout: glossary layout: glossary
noedit: true noedit: true
default_active_tag: fundamental default_active_tag: fundamental

View File

@ -1,4 +1,4 @@
--- ---
title: Kubernetes Issues and Security title: Kubernetes Issues and Security
weight: 10 weight: 40
--- ---

View File

@ -1,5 +1,5 @@
--- ---
title: "kubectl CLI" title: "kubectl"
weight: 60 weight: 60
--- ---

View File

@ -320,6 +320,18 @@ kubectl top pod POD_NAME --containers # Show metrics for a given p
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory' kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
``` ```
## Interacting with Deployments and Services
```bash
kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)
kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend
kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port>
kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)
```
## Interacting with Nodes and cluster ## Interacting with Nodes and cluster
```bash ```bash
@ -348,7 +360,7 @@ Other operations for exploring API resources:
```bash ```bash
kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (just the resource name) kubectl api-resources -o name # All resources with simple output (only the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
@ -375,6 +387,9 @@ Examples using `-o=custom-columns`:
# All images running in a cluster # All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
# All images running in namespace: default, grouped by Pod
kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
# All images excluding "k8s.gcr.io/coredns:1.6.2" # All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'

View File

@ -7,7 +7,7 @@ reviewers:
--- ---
<!-- overview --> <!-- overview -->
You can use the Kubernetes command line tool kubectl to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the docker commands and the kubectl commands. The following sections show a docker sub-command and describe the equivalent kubectl command. You can use the Kubernetes command line tool `kubectl` to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent `kubectl` command.
<!-- body --> <!-- body -->

View File

@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the
This overview covers `kubectl` syntax, describes the command operations, and provides common examples. This overview covers `kubectl` syntax, describes the command operations, and provides common examples.
For details about each command, including all the supported flags and subcommands, see the For details about each command, including all the supported flags and subcommands, see the
[kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation. [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation.
For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/). For installation instructions see [installing kubectl](/docs/tasks/tools/).
<!-- body --> <!-- body -->
@ -69,7 +69,7 @@ for example `create`, `get`, `describe`, `delete`.
Flags that you specify from the command line override default values and any corresponding environment variables. Flags that you specify from the command line override default values and any corresponding environment variables.
{{< /caution >}} {{< /caution >}}
If you need help, just run `kubectl help` from the terminal window. If you need help, run `kubectl help` from the terminal window.
## Operations ## Operations

View File

@ -1,4 +0,0 @@
---
title: "Policies Resources"
weight: 6
---

Some files were not shown because too many files have changed in this diff.