diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 978be7ca33..01894721d1 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -92,15 +92,21 @@ aliases:
- daminisatya
- mittalyashu
sig-docs-id-owners: # Admins for Indonesian content
- - girikuncoro
- - irvifa
- sig-docs-id-reviews: # PR reviews for Indonesian content
+ - ariscahyadi
+ - danninov
- girikuncoro
- habibrosyad
- irvifa
- - wahyuoi
- phanama
+ - wahyuoi
+ sig-docs-id-reviews: # PR reviews for Indonesian content
+ - ariscahyadi
- danninov
+ - girikuncoro
+ - habibrosyad
+ - irvifa
+ - phanama
+ - wahyuoi
sig-docs-it-owners: # Admins for Italian content
- fabriziopandini
- Fale
diff --git a/README-pt.md b/README-pt.md
index 8367b0247c..0992f6c045 100644
--- a/README-pt.md
+++ b/README-pt.md
@@ -2,11 +2,11 @@
[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
-Bem vindos! Este repositório abriga todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
+Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
# Utilizando este repositório
-Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executa-ló em um container runtime. É altamente recomendável usar um container runtime, pois garante a consistência na implantação do website real.
+Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executá-lo em um container runtime. É altamente recomendável utilizar um container runtime, pois garante a consistência na implantação do website real.
## Pré-requisitos
@@ -40,7 +40,7 @@ make container-image
make container-serve
```
-Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, Hugo atualiza o website e força a atualização do navegador.
+Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador.
## Executando o website localmente utilizando o Hugo
@@ -56,10 +56,57 @@ make serve
Isso iniciará localmente o Hugo na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força uma atualização no navegador.
+## Construindo a página de referência da API
+
+A página de referência da API localizada em `content/en/docs/reference/kubernetes-api` é construída a partir da especificação do Swagger utilizando https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.
+
+Siga os passos abaixo para atualizar a página de referência para uma nova versão do Kubernetes:
+
+OBS: substitua o "v1.20" nos exemplos a seguir pela versão a ser atualizada.
+
+1. Obter o submódulo `kubernetes-resources-reference`:
+
+```
+git submodule update --init --recursive --depth 1
+```
+
+2. Criar a nova versão da API no submódulo e adicionar à especificação do Swagger:
+
+```
+mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
+curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
+```
+
+3. Copiar o sumário e os campos de configuração para a nova versão a partir da versão anterior:
+
+```
+cp api-ref-generator/gen-resourcesdocs/api/v1.19/toc.yaml api-ref-generator/gen-resourcesdocs/api/v1.20/
+cp api-ref-generator/gen-resourcesdocs/api/v1.19/fields.yaml api-ref-generator/gen-resourcesdocs/api/v1.20/
+```
+
+4. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.
+
+5. Em seguida, gerar as páginas:
+
+```
+make api-reference
+```
+
+Você pode validar o resultado localmente gerando e disponibilizando o site a partir da imagem do container:
+
+```
+make container-image
+make container-serve
+```
+
+Abra o seu navegador em http://localhost:1313/docs/reference/kubernetes-api/ para visualizar a página de referência da API.
+
+6. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência de API.
+
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
-Por motivos técnicos, o Hugo é disponibilizado em dois conjuntos de binários. O website atual funciona apenas na versão **Hugo Extended**. Na [página de releases](https://github.com/gohugoio/hugo/releases) procure por arquivos com `extended` no nome. Para confirmar, execute `hugo version` e procure pela palavra `extended`.
+Por motivos técnicos, o Hugo é disponibilizado em dois conjuntos de binários. O website atual funciona apenas na versão **Hugo Extended**. Na [página de releases](https://github.com/gohugoio/hugo/releases) procure por arquivos com `extended` no nome. Para confirmar, execute `hugo version` e procure pela palavra `extended`.
### Troubleshooting macOS for too many open files
@@ -110,9 +157,9 @@ Você também pode entrar em contato com os mantenedores deste projeto em:
Você pode clicar no botão **Fork** na área superior direita da tela para criar uma cópia desse repositório na sua conta do GitHub. Esta cópia é chamada de *fork*. Faça as alterações desejadas no seu fork e, quando estiver pronto para enviar as alterações para nós, vá até o fork e crie um novo **pull request** para nos informar sobre isso.
-Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para atender ao feedback que foi fornecido a você pelo revisor do Kubernetes.**
+Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para atender ao feedback que foi fornecido a você pelo revisor do Kubernetes.**
-Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback ou você pode acabar obtendo feedback de um outro revisor do Kubernetes diferente daquele originalmente designado para lhe fornecer o feedback.
+Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback ou você pode acabar obtendo feedback de um outro revisor do Kubernetes diferente daquele originalmente designado para lhe fornecer o feedback.
Além disso, em alguns casos, um de seus revisores pode solicitar uma revisão técnica de um [revisor técnico do Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) quando necessário. Os revisores farão o melhor para fornecer feedbacks em tempo hábil, mas o tempo de resposta pode variar de acordo com as circunstâncias.
@@ -134,4 +181,4 @@ A participação na comunidade Kubernetes é regida pelo [Código de Conduta da
# Obrigado!
-O Kubernetes conta com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
+O Kubernetes prospera com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
\ No newline at end of file
diff --git a/README.md b/README.md
index 44dcc7a7ca..8ec876eef2 100644
--- a/README.md
+++ b/README.md
@@ -100,6 +100,8 @@ make container-image
make container-serve
```
+In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.
+
6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
## Troubleshooting
diff --git a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
index 8b31d1df0b..247748b6f0 100644
--- a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
+++ b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
@@ -66,6 +66,7 @@ Vagrant.configure("2") do |config|
end
end
end
+end
```
### Step 2: Create an Ansible playbook for Kubernetes master.
diff --git a/content/en/community/_index.html b/content/en/community/_index.html
index e1ebb9e9cb..ad9cab5d94 100644
--- a/content/en/community/_index.html
+++ b/content/en/community/_index.html
@@ -19,6 +19,7 @@ cid: community
diff --git a/content/en/community/static/community-values.md b/content/en/community/static/community-values.md
new file mode 100644
index 0000000000..f6469a3e61
--- /dev/null
+++ b/content/en/community/static/community-values.md
@@ -0,0 +1,28 @@
+
+
+# Kubernetes Community Values
+
+Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values that have evolved over the years in our community, pushing our project and peers toward constant improvement.
+
+## Distribution is better than centralization
+
+The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstones of our world-wide community.
+
+## Community over product or company
+
+We are here as a community first; our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work; companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.
+
+## Automation over process
+
+Large projects have a lot of less exciting yet hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.
+
+## Inclusive is better than exclusive
+
+Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path.
+
+## Evolution is better than stagnation
+
+Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty of leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.
+
+**"Culture eats strategy for breakfast." --Peter Drucker**
diff --git a/content/en/community/values.md b/content/en/community/values.md
new file mode 100644
index 0000000000..4ae1fe30b6
--- /dev/null
+++ b/content/en/community/values.md
@@ -0,0 +1,13 @@
+---
+title: Community
+layout: basic
+cid: community
+css: /css/community.css
+---
+
+
+
+
+{{< include "/static/community-values.md" >}}
+
+
diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md
index d9814a887a..fa31fbd35f 100644
--- a/content/en/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md
@@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
-It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
+It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
@@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
-For example, if you want to label all your nginx pods as frontend tier, simply run:
+For example, if you want to label all your nginx pods as frontend tier, run:
```shell
kubectl label pods -l app=nginx tier=fe
@@ -411,7 +411,7 @@ and
## Disruptive updates
-In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
+In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
@@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
deployment.apps/my-nginx scaled
```
-To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
+To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
```shell
kubectl edit deployment/my-nginx
diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md
index d0195e6bd9..f0f0f439f9 100644
--- a/content/en/docs/concepts/configuration/configmap.md
+++ b/content/en/docs/concepts/configuration/configmap.md
@@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
+A ConfigMap can be propagated by watch (default), by TTL-based cache, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index ab627ce056..9d14423816 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A Secret can be either propagated by watch (default), ttl-based, or simply redirecting
+A Secret can be propagated by watch (default), by TTL-based cache, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
index b315ba6f59..96569f9518 100644
--- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -36,10 +36,13 @@ No parameters are passed to the handler.
`PreStop`
-This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
-It is blocking, meaning it is synchronous,
-so it must complete before the signal to stop the container can be sent.
-No parameters are passed to the handler.
+This hook is called immediately before a container is terminated due to an API request or management
+event such as a liveness/startup probe failure, preemption, or resource contention. A call
+to the `PreStop` hook fails if the container is already in a terminated or completed state and the
+hook must complete before the TERM signal to stop the container can be sent. The Pod's termination
+grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of
+the handler, the container will eventually terminate within the Pod's termination grace period. No
+parameters are passed to the handler.
A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
@@ -65,19 +68,15 @@ the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.
-`PreStop` hooks are not executed asynchronously from the signal
-to stop the Container; the hook must complete its execution before
-the signal can be sent.
-If a `PreStop` hook hangs during execution,
-the Pod's phase will be `Terminating` and remain there until the Pod is
-killed after its `terminationGracePeriodSeconds` expires.
-This grace period applies to the total time it takes for both
-the `PreStop` hook to execute and for the Container to stop normally.
-If, for example, `terminationGracePeriodSeconds` is 60, and the hook
-takes 55 seconds to complete, and the Container takes 10 seconds to stop
-normally after receiving the signal, then the Container will be killed
-before it can stop normally, since `terminationGracePeriodSeconds` is
-less than the total time (55+10) it takes for these two things to happen.
+`PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must
+complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during
+execution, the Pod's phase will be `Terminating` and remain there until the Pod is killed after its
+`terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for
+both the `PreStop` hook to execute and for the Container to stop normally. If, for example,
+`terminationGracePeriodSeconds` is 60, and the hook takes 55 seconds to complete, and the Container
+takes 10 seconds to stop normally after receiving the signal, then the Container will be killed
+before it can stop normally, since `terminationGracePeriodSeconds` is less than the total time
+(55+10) it takes for these two things to happen.
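+
+As a rough sketch of that arithmetic (the Pod name, image, and timings below are invented for
+illustration), a Pod whose `PreStop` hook consumes most of the grace period looks like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prestop-demo              # hypothetical name
+spec:
+  terminationGracePeriodSeconds: 60
+  containers:
+  - name: app
+    image: nginx:1.16.1
+    lifecycle:
+      preStop:
+        exec:
+          # Sleeping 55s here leaves only ~5s of the 60s grace period
+          # for the container to stop after the TERM signal is sent,
+          # so a shutdown that needs 10s would be force-killed.
+          command: ["/bin/sh", "-c", "sleep 55"]
+```
+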
If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
index dcfef3f6b6..5457dc9204 100644
--- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
+++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
@@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi
## Custom controllers
-On their own, custom resources simply let you store and retrieve structured data.
+On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
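+
+For orientation, a minimal sketch of a CustomResourceDefinition (the `example.com` group and
+`CronTab` kind are placeholders, not a real API):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  # name must match <plural>.<group>
+  name: crontabs.example.com
+spec:
+  group: example.com
+  scope: Namespaced
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+```
+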
@@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:
Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
-Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
+Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.
CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index 7b53fa326f..0ec8bf81b1 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
-* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
+* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
## Network Plugin Requirements
diff --git a/content/en/docs/concepts/extend-kubernetes/service-catalog.md b/content/en/docs/concepts/extend-kubernetes/service-catalog.md
index 3aa9675788..af0271d9ab 100644
--- a/content/en/docs/concepts/extend-kubernetes/service-catalog.md
+++ b/content/en/docs/concepts/extend-kubernetes/service-catalog.md
@@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed
A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
-The application can simply use it as a service.
+The application can access the message queue as a service.
## Architecture
diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md
index 2feec43801..7ff6f267a0 100644
--- a/content/en/docs/concepts/overview/working-with-objects/labels.md
+++ b/content/en/docs/concepts/overview/working-with-objects/labels.md
@@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
-Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
+Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
```
environment = production
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md
index 17f30906bf..f355a8f539 100644
--- a/content/en/docs/concepts/policy/pod-security-policy.md
+++ b/content/en/docs/concepts/policy/pod-security-policy.md
@@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
-simply prevents the creation of privileged pods.
+prevents the creation of privileged pods.
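+
+A minimal sketch of such a policy (field values here are illustrative, not a hardened profile):
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: example                   # hypothetical name
+spec:
+  privileged: false               # disallow privileged pods
+  # The rest fills in the required fields with permissive defaults.
+  seLinux:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  runAsUser:
+    rule: RunAsAny
+  fsGroup:
+    rule: RunAsAny
+  volumes:
+  - '*'
+```
+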
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md
index 0edb1be338..d5c94d99cf 100644
--- a/content/en/docs/concepts/policy/resource-quotas.md
+++ b/content/en/docs/concepts/policy/resource-quotas.md
@@ -610,17 +610,28 @@ plugins:
values: ["cluster-services"]
```
-Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.
-For example:
+Then, create a resource quota object in the `kube-system` namespace:
-```yaml
- scopeSelector:
- matchExpressions:
- - scopeName: PriorityClass
- operator: In
- values: ["cluster-services"]
+{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
+```
+resourcequota/pods-cluster-services created
+```
+
+In this case, a Pod creation is allowed if:
+
+1. the Pod's `priorityClassName` is not specified.
+1. the Pod's `priorityClassName` is set to a value other than `cluster-services`.
+1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
+ in the `kube-system` namespace, and it has passed the resource quota check.
+
+A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
+and it is to be created in a namespace other than `kube-system`.
+
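+As a sketch (the Pod name and image are placeholders), a Pod that satisfies the third
+condition above:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cluster-services-pod      # hypothetical name
+  namespace: kube-system          # the only namespace allowed for this priority class
+spec:
+  priorityClassName: cluster-services
+  containers:
+  - name: app
+    image: nginx                  # placeholder image; still subject to the quota check
+```
+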
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index abe4f4b9eb..b6b2bd79a8 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
-3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
+3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
index 932e076dfc..7936f9dedc 100644
--- a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
+++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
@@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.
{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
-checks all the nodes, simply because there are not enough feasible nodes to stop
+checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
diff --git a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
index ae32f840fd..06ed901c2a 100644
--- a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
+++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
@@ -183,7 +183,7 @@ the three things:
{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md
index fe9129c109..b23a07c79a 100644
--- a/content/en/docs/concepts/security/overview.md
+++ b/content/en/docs/concepts/security/overview.md
@@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
+Use container runtimes with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
## Code
@@ -152,3 +153,4 @@ Learn about related Kubernetes security topics:
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
+* [Runtime class](/docs/concepts/containers/runtime-class)
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index 93474f24fa..f02aede0e1 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -25,9 +25,9 @@ assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:
-Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
-in namespace `bar` can look up this service by simply doing a DNS query for
-`foo`. A Pod running in namespace `quux` can look up this service by doing a
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
+in namespace `bar` can look up this service with a DNS query for
+`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
The following sections detail the supported record types and layout that is
diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md
index d8b900847f..20cbcb5f33 100644
--- a/content/en/docs/concepts/services-networking/dual-stack.md
+++ b/content/en/docs/concepts/services-networking/dual-stack.md
@@ -163,7 +163,7 @@ status:
loadBalancer: {}
```
-1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`.
+1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 348c1616ce..bcbf7d7f75 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
-should be able to find it by simply doing a name lookup for `my-service`
+should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
@@ -463,7 +463,7 @@ selectors defined:
For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return
-records (addresses) that point directly to the `Pods` backing the `Service`.
+A records (IP addresses) that point directly to the `Pods` backing the `Service`.
### Without selectors
@@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of
-collision. Clients can simply connect to an IP and port, without being aware
+collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.
#### iptables
diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md
index 3423c575db..ef46b7f99a 100644
--- a/content/en/docs/concepts/storage/persistent-volumes.md
+++ b/content/en/docs/concepts/storage/persistent-volumes.md
@@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume
* iSCSI
-Mount options are not validated, so mount will simply fail if one is invalid.
+Mount options are not validated. If a mount option is invalid, the mount fails.
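+
+For context, a sketch of where `mountOptions` sits on a PersistentVolume (the NFS server
+address and the options themselves are illustrative):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv0001                    # hypothetical name
+spec:
+  capacity:
+    storage: 5Gi
+  accessModes:
+    - ReadWriteOnce
+  mountOptions:
+    - hard
+    - nfsvers=4.1                 # a typo here only surfaces when the mount is attempted
+  nfs:
+    path: /tmp
+    server: 172.17.0.2
+```
+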
In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however,
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md
index e6846c7ea4..6834977d70 100644
--- a/content/en/docs/concepts/storage/storage-classes.md
+++ b/content/en/docs/concepts/storage/storage-classes.md
@@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.
If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
-the class or PV, so mount of the PV will simply fail if one is invalid.
+the class or PV. If a mount option is invalid, the PV mount fails.
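+
+A sketch of a class carrying mount options (the provisioner name and options are illustrative):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: example-nfs               # hypothetical name
+provisioner: example.com/external-nfs
+mountOptions:
+  - hard
+  - nfsvers=4.1                   # invalid options fail only at PV mount time
+```
+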
### Volume Binding Mode
diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md
index ac8d16041d..8210df661c 100644
--- a/content/en/docs/concepts/storage/volume-pvc-datasource.md
+++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md
@@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
-The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
+The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
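+
+A sketch of such a request (names and sizes are placeholders; the source PVC must already
+exist and be bound):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: clone-of-pvc-1            # hypothetical name
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: csi-storageclass    # must support cloning
+  resources:
+    requests:
+      storage: 5Gi                # must be at least the size of the source PVC
+  dataSource:
+    kind: PersistentVolumeClaim
+    name: pvc-1                   # the existing, bound source PVC
+```
+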
Users need to be aware of the following when using this feature:
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index f8475d6284..8d84a519c0 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -106,6 +106,8 @@ spec:
fsType: ext4
```
+If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
+
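+A sketch, extending the volume above (the volume ID is a placeholder):
+
+```yaml
+volumes:
+  - name: test-volume
+    awsElasticBlockStore:
+      volumeID: "<volume-id>"     # placeholder
+      fsType: ext4
+      partition: 1                # mount the first partition of the EBS volume
+```
+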
#### AWS EBS CSI migration
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index 481c6f5017..3624135a20 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -90,6 +90,11 @@ If `startingDeadlineSeconds` is set to a large value or left unset (the default)
and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
at least once.
+{{< caution >}}
+If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks for missed schedules only every 10 seconds.
+{{< /caution >}}
+
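+A sketch showing where the field lives (the schedule and name are placeholders):
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: example-cron              # hypothetical name
+spec:
+  schedule: "*/1 * * * *"
+  startingDeadlineSeconds: 200    # keep this at 10 seconds or more
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+          - name: job
+            image: busybox
+            command: ["date"]
+```
+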
For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error
````
@@ -128,4 +133,3 @@ documents the format of CronJob `schedule` fields.
For instructions on creating and working with cron jobs, and for an example of CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).
-
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index d8e7646aed..9ba746c2ad 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -47,7 +47,7 @@ In this example:
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
- In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
+ In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
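+
+  A minimal sketch of that selector stanza, matching the Pod template label above:
+
+  ```yaml
+  selector:
+    matchLabels:
+      app: nginx
+  ```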
@@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
```
- or simply use the following command:
-
+
+ or use the following command:
+
```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
```
- The output is similar to this:
+ The output is similar to:
+
```
deployment.apps/nginx-deployment image updated
```
@@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
kubectl edit deployment.v1.apps/nginx-deployment
```
- The output is similar to this:
+ The output is similar to:
+
```
deployment.apps/nginx-deployment edited
```
@@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
```
The output is similar to this:
+
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```
+
or
+
```
deployment "nginx-deployment" successfully rolled out
```
@@ -212,10 +218,11 @@ Get more details on your updated Deployment:
* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this:
- ```
- NAME READY UP-TO-DATE AVAILABLE AGE
- nginx-deployment 3/3 3 3 36s
- ```
+
+ ```ini
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 36s
+ ```
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index a6427bedb3..23d87f81fd 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.
-When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
+When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).
-### Deleting just a ReplicationController
+### Deleting only a ReplicationController
You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
-When using the REST API or go client library, simply delete the ReplicationController object.
+When using the REST API or Go client library, you can delete the ReplicationController object.
Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
@@ -240,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic
## Responsibilities of the ReplicationController
-The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+The ReplicationController ensures that the desired number of pods matching its label selector are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
index 49894937e3..011fa4adf4 100644
--- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
+++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -103,7 +103,7 @@ the ephemeral container to add as an `EphemeralContainers` list:
"apiVersion": "v1",
"kind": "EphemeralContainers",
"metadata": {
- "name": "example-pod"
+ "name": "example-pod"
},
"ephemeralContainers": [{
"command": [
diff --git a/content/en/docs/contribute/participate/roles-and-responsibilities.md b/content/en/docs/contribute/participate/roles-and-responsibilities.md
index 8ebe7a1303..4e8632ac0b 100644
--- a/content/en/docs/contribute/participate/roles-and-responsibilities.md
+++ b/content/en/docs/contribute/participate/roles-and-responsibilities.md
@@ -52,7 +52,7 @@ Members can:
{{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding
- approval, simply commenting "LGTM" works too!
+ approval, commenting "LGTM" works too!
{{< /note >}}
- Use the `/hold` comment to block merging for a pull request
diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md
index b4864dbabf..0047799fb0 100644
--- a/content/en/docs/contribute/style/style-guide.md
+++ b/content/en/docs/contribute/style/style-guide.md
@@ -44,22 +44,25 @@ The English-language documentation uses U.S. English spelling and grammar.
### Use upper camel case for API objects
-When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
+
+When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+
+You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
-Refer to API objects without saying "object," unless omitting "object"
-leads to an awkward construction.
+The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects.
-{{< table caption = "Do and Don't - API objects" >}}
+{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
:--| :-----
-The pod has two containers. | The Pod has two containers.
-The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
-A PodList is a list of pods. | A Pod List is a list of pods.
-The two ContainerPorts ... | The two ContainerPort objects ...
-The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
+The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
+A PodList object is a list of pods. | A Pod List object is a list of pods.
+The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
+Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
+For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}}
@@ -113,12 +116,12 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting
-### Use code style for inline code, commands, and API objects
+### Use code style for inline code, commands, and API objects {#code-style-inline-code}
For inline code in an HTML document, use the `` tag. In a Markdown
document, use the backtick (`` ` ``).
-{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
+{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index 632229315a..f972a0184e 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -116,7 +116,7 @@ Rejects all requests. AlwaysDeny is DEPRECATED as it has no real meaning.
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
-node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
+node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index 8c2c4fa520..13517002f0 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -206,7 +206,7 @@ spec:
Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long standing jobs that wish to talk to the
-Kubernetes API. To manually create a service account, simply use the `kubectl
+Kubernetes API. To manually create a service account, use the `kubectl
create serviceaccount (NAME)` command. This creates a service account in the
current namespace and an associated secret.
@@ -420,12 +420,12 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
-Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
+Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option
-The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option:
+The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index bc5f066d73..f84005b567 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -635,8 +635,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
-- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See
- [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)
+- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
+ [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md
index 302ae94d8b..d91497f8f2 100644
--- a/content/en/docs/reference/using-api/server-side-apply.md
+++ b/content/en/docs/reference/using-api/server-side-apply.md
@@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16
## Introduction
-Server Side Apply helps users and controllers manage their resources via
-declarative configurations. It allows them to create and/or modify their
+Server Side Apply helps users and controllers manage their resources through
+declarative configurations. Clients can create and modify their
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
-declaratively, simply by sending their fully specified intent.
+declaratively by sending their fully specified intent.
A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
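To make "fully specified intent" concrete, here is a sketch of a partial object an applier might send; the kind and field names are illustrative, and `kubectl apply --server-side` is one way to submit it:

```yaml
# intent.yaml -- only the fields this applier has an opinion about;
# submit with: kubectl apply --server-side -f intent.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  labels:
    owner: example-controller
data:
  logLevel: debug
```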
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index 59725188d8..5aa4ac894e 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -420,7 +420,7 @@ Start CRI-O:
```shell
sudo systemctl daemon-reload
-sudo systemctl start crio
+sudo systemctl enable crio --now
```
Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)
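The change from `start` to `enable --now` also enables CRI-O at boot; a quick sanity check using standard systemd commands (not taken from this page):

```shell
sudo systemctl is-enabled crio   # expect "enabled"
sudo systemctl is-active crio    # expect "active"
```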
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 4d932e3e05..6516a18825 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -434,7 +434,7 @@ Now remove the node:
kubectl delete node <node name>
```
-If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
+If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
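Starting over typically also means clearing the node's previous kubeadm state first; a hedged sketch, with host, token, and hash left as placeholders:

```bash
# Illustrative: wipe local kubeadm state, then rejoin as a worker
sudo kubeadm reset
sudo kubeadm join <control-plane-host>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```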
### Clean up the control plane
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 90f80db6ca..394820324d 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -308,13 +308,6 @@ or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it a
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}
-Restarting the kubelet is required:
-
-```bash
-sudo systemctl daemon-reload
-sudo systemctl restart kubelet
-```
-
The automatic detection of cgroup driver for other container runtimes
like CRI-O and containerd is work in progress.
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 03ff264816..014df012c7 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -547,7 +547,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"
- There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:
+ There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch `start.ps1`, or to relaunch flanneld manually as follows:
```powershell
PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "<Windows_Worker_Hostname>")
diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
index 6c9c05cc90..9133a2de1b 100644
--- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
+++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
@@ -23,7 +23,7 @@ Windows applications constitute a large portion of the services and applications
## Before you begin
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
-* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
+* Creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers.
## Getting Started: Deploying a Windows container
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md
index 7f74320118..400a54ffb2 100644
--- a/content/en/docs/tasks/access-application-cluster/access-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md
@@ -280,7 +280,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
#### Manually constructing apiserver proxy URLs
-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
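A worked instance of the pattern, reusing the `elasticsearch-logging` service and cluster address from the example above (the query string is illustrative):

```shell
# GET through the apiserver proxy to a service endpoint with a parameter
curl https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy
```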
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
index ffe200b118..99fd3596b3 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
@@ -215,7 +215,7 @@ for i in ret.items:
#### Java client
-* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute :
+To install the [Java Client](https://github.com/kubernetes-client/java), run:
```shell
# Clone java library
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
index c318a3df35..f6ba4e4fc0 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
@@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
#### Manually constructing apiserver proxy URLs
-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
index 9c08a2a4ad..0c7c3c3ca1 100644
--- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
+++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
@@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca
you can either change the default StorageClass or disable it completely to avoid
dynamic provisioning of storage.
-Simply deleting the default StorageClass may not work, as it may be re-created
+Deleting the default StorageClass may not work, as it may be re-created
automatically by the addon manager running in your cluster. Please consult the docs for your installation
for details about addon manager and how to disable individual addons.
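Rather than deleting the default StorageClass, the usual alternative is to remove its default annotation; a sketch, assuming a StorageClass named `standard`:

```shell
# Mark the StorageClass as non-default instead of deleting it
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```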
diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
index 2824cce642..40987152e8 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -201,6 +201,9 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```
+When `kubectl` creates a ConfigMap from inputs that are not UTF-8, the tool puts these into the `binaryData` field of the ConfigMap rather than in `data`. Both text and binary data sources can be combined in one ConfigMap.
+If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap <name> -o jsonpath='{.binaryData}'`.
+
Use the option `--from-env-file` to create a ConfigMap from an env-file, for example:
```shell
@@ -687,4 +690,3 @@ data:
* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).
-
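The `binaryData` note added in the hunk above can be exercised end to end; a sketch, with a hypothetical binary input file:

```shell
# Non-UTF-8 input lands under binaryData rather than data
kubectl create configmap binary-demo --from-file=./logo.png
kubectl get configmap binary-demo -o jsonpath='{.binaryData}'
```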
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index ca3d0b2966..d96a5c8270 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is
usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (for example, `default`).
-
-
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## Use the Default Service Account to access the API server.
@@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced
You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
-To use a non-default service account, simply set the `spec.serviceAccountName`
+To use a non-default service account, set the `spec.serviceAccountName`
field of a pod to the name of the service account you wish to use.
The service account has to exist at the time the pod is created, or it will be rejected.
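A minimal manifest showing the field in context; the pod, account, and image names are illustrative, not taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  serviceAccountName: build-robot  # must already exist in this namespace
  containers:
  - name: app
    image: nginx
```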
diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
index 4fadbb3f42..dc4fba3480 100644
--- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
+++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
@@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
-
-
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## Install Kompose
@@ -35,13 +29,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re
```sh
# Linux
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
# macOS
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-darwin-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
# Windows
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-windows-amd64.exe -o kompose.exe
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
@@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
-
{{% /tab %}}
{{% tab name="Build from source" %}}
@@ -87,8 +80,8 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
```bash
brew install kompose
-
```
+
{{% /tab %}}
{{< /tabs >}}
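Whichever installation tab is used, the result can be verified with kompose's own version subcommand:

```sh
kompose version
```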
@@ -97,111 +90,117 @@ brew install kompose
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
you need is an existing `docker-compose.yml` file.
-1. Go to the directory containing your `docker-compose.yml` file. If you don't
- have one, test using this one.
+1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
- ```yaml
- version: "2"
+ ```yaml
+ version: "2"
- services:
+ services:
- redis-master:
- image: k8s.gcr.io/redis:e2e
- ports:
- - "6379"
+ redis-master:
+ image: k8s.gcr.io/redis:e2e
+ ports:
+ - "6379"
- redis-slave:
- image: gcr.io/google_samples/gb-redisslave:v3
- ports:
- - "6379"
- environment:
- - GET_HOSTS_FROM=dns
+ redis-slave:
+ image: gcr.io/google_samples/gb-redisslave:v3
+ ports:
+ - "6379"
+ environment:
+ - GET_HOSTS_FROM=dns
- frontend:
- image: gcr.io/google-samples/gb-frontend:v4
- ports:
- - "80:80"
- environment:
- - GET_HOSTS_FROM=dns
- labels:
- kompose.service.type: LoadBalancer
- ```
+ frontend:
+ image: gcr.io/google-samples/gb-frontend:v4
+ ports:
+ - "80:80"
+ environment:
+ - GET_HOSTS_FROM=dns
+ labels:
+ kompose.service.type: LoadBalancer
+ ```
-2. Run the `kompose up` command to deploy to Kubernetes directly, or skip to
- the next step instead to generate a file to use with `kubectl`.
+2. To convert the `docker-compose.yml` file to files that you can use with
+ `kubectl`, run `kompose convert` and then `kubectl apply -f
-
-