diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 2c90bb82a4..c31b04d5b0 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -176,14 +176,20 @@ aliases:
# zhangxiaoyu-zidif
sig-docs-pt-owners: # Admins for Portuguese content
- femrtnz
+ - jailton
- jcjesus
- devlware
- jhonmike
+ - rikatz
+ - yagonobre
sig-docs-pt-reviews: # PR reviews for Portuguese content
- femrtnz
+ - jailton
- jcjesus
- devlware
- jhonmike
+ - rikatz
+ - yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem
- ngtuna
diff --git a/config.toml b/config.toml
index d77c315331..04329284d8 100644
--- a/config.toml
+++ b/config.toml
@@ -91,7 +91,7 @@ blog = "/:section/:year/:month/:day/:slug/"
[outputs]
home = [ "HTML", "RSS", "HEADERS" ]
page = [ "HTML"]
-section = [ "HTML"]
+section = [ "HTML", "print" ]
# Add a "text/netlify" media type for auto-generating the _headers file
[mediaTypes]
diff --git a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
index 024506a2de..a28196d568 100644
--- a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
+++ b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
@@ -176,7 +176,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti
[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
-Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
+Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
## Outage recovery
diff --git a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md
index 247bfa2c8d..8aba0dc232 100644
--- a/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md
+++ b/content/en/blog/_posts/2018-12-03-kubernetes-1-13-release-announcement.md
@@ -17,7 +17,7 @@ Let’s dive into the key features of this release:
## Simplified Kubernetes Cluster Management with kubeadm in GA
-Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware and configuring the core Kubernetes components in a best-practice-manner to providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level system and this release is a significant step in that direction.
+Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configuring the core Kubernetes components in a best-practice manner, providing a secure yet easy joining flow for new nodes, and supporting easy upgrades. What’s notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
## Container Storage Interface (CSI) Goes GA
diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png
new file mode 100644
index 0000000000..e83656a624
Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/along-the-way-ui.png differ
diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png
new file mode 100644
index 0000000000..7d96058165
Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/current-ui.png differ
diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png
new file mode 100644
index 0000000000..ba7fec5408
Binary files /dev/null and b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/first-ui.png differ
diff --git a/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md
new file mode 100644
index 0000000000..345394f809
--- /dev/null
+++ b/content/en/blog/_posts/2021-03-09-The-Evolution-of-Kubernetes-Dashboard/index.md
@@ -0,0 +1,63 @@
+---
+layout: blog
+title: "The Evolution of Kubernetes Dashboard"
+date: 2021-03-09
+slug: the-evolution-of-kubernetes-dashboard
+---
+
+Authors: Marcin Maciaszczyk, Kubermatic & Sebastian Florek, Kubermatic
+
+In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it’s high time to celebrate “our baby” with a short recap.
+
+## How It All Began
+
+The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit of using the UI is being able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users who are new to the Kubernetes ecosystem.
+
+The very [first commit](https://github.com/kubernetes/dashboard/commit/5861187fa807ac1cc2d9b2ac786afeced065076c) to the Kubernetes Dashboard was made by Filip Grządkowski from Google on 16th October 2015 – just a few months after the initial commit to the Kubernetes repository. Our initial commits go back to November 2015 ([Sebastian committed on 16 November 2015](https://github.com/kubernetes/dashboard/commit/09e65b6bb08c49b926253de3621a73da05e400fd); [Marcin committed on 23 November 2015](https://github.com/kubernetes/dashboard/commit/1da4b1c25ef040818072c734f71333f9b4733f55)). Since that time, we’ve become regular contributors to the project. For the next two years, we worked closely with the Googlers, eventually becoming main project maintainers ourselves.
+
+{{< figure src="first-ui.png" caption="The First Version of the User Interface" >}}
+
+{{< figure src="along-the-way-ui.png" caption="Prototype of the New User Interface" >}}
+
+{{< figure src="current-ui.png" caption="The Current User Interface" >}}
+
+As you can see, the initial look and feel of the project were completely different from the current one. We have changed the design multiple times. The same has happened with the code itself.
+
+## Growing Up - The Big Migration
+
+At [the beginning of 2018](https://github.com/kubernetes/dashboard/pull/2727), we reached a point where AngularJS was getting closer to the end of its life, while new Angular versions were published quite often. Many of the libraries and modules that we were using were following the same trend. That forced us to spend a lot of time rewriting the frontend part of the project to make it work with newer technologies.
+
+The migration came with many benefits like being able to refactor a lot of the code, introduce design patterns, reduce code complexity, and benefit from the new modules. However, you can imagine that the scale of the migration was huge. Luckily, there were a number of contributions from the community helping us with the resource support, new Kubernetes version support, i18n, and much more. After many long days and nights, we finally released the [first beta version](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0-beta1) in July 2019, followed by the [2.0 release](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0) in April 2020 — our baby had grown up.
+
+## Where Are We Standing in 2021?
+
+Due to limited resources, unfortunately, we were not able to offer extensive support for many different Kubernetes versions. So, we’ve decided to always try and support the latest Kubernetes version available at the time of the Kubernetes Dashboard release. The latest release, [Dashboard v2.2.0](https://github.com/kubernetes/dashboard/releases/tag/v2.2.0), provides support for Kubernetes v1.20.
+
+On top of that, we put a great deal of effort into [improving resource support](https://github.com/kubernetes/dashboard/issues/5232). Today, we offer support for most of the Kubernetes resources. Also, the Kubernetes Dashboard supports multiple languages: English, German, French, Japanese, Korean, and Chinese (Traditional, Simplified, Traditional Hong Kong). Persian and Russian localizations are currently in progress. Moreover, we are working on support for third-party themes and the design of the app in general. As you can see, quite a lot of things are going on.
+
+Luckily, we do have regular contributors with domain knowledge who are taking care of the project, updating the Helm charts, translations, Go modules, and more. But as always, there could be many more hands on deck. So if you are thinking about contributing to Kubernetes, keep us in mind ;)
+
+## What’s Next
+
+The Kubernetes Dashboard has been growing and prospering for more than 5 years now. It provides the community with an intuitive Web UI, thereby decreasing the complexity of Kubernetes and increasing its accessibility to new community members. We are proud of what the project has achieved so far, but this is far from the end. These are our priorities for the future:
+
+* Keep providing support for the new Kubernetes versions
+* Keep improving the support for the existing resources
+* Keep working on auth system improvements
+* [Rewrite the API to use gRPC and shared informers](https://github.com/kubernetes/dashboard/pull/5449): This will allow us to improve the performance of the application but, most importantly, to support live updates coming from the Kubernetes project. It is one of the most requested features from the community.
+* Split the application into two containers, one running the UI and the other running the API.
+
+## The Kubernetes Dashboard in Numbers
+
+* Initial commit made on October 16, 2015
+* Over 100 million pulls from Docker Hub since the v2 release
+* 8 supported languages and the next 2 in progress
+* Over 3360 closed PRs
+* Over 2260 closed issues
+* 100% coverage of the supported core Kubernetes resources
+* Over 9000 stars on GitHub
+* Over 237,000 lines of code
+
+## Join Us
+
+As mentioned earlier, we are currently looking for more people to help us further develop and grow the project. We are open to contributions in multiple areas, e.g., [issues with the help wanted label](https://github.com/kubernetes/dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22). Please feel free to reach out via GitHub or the #sig-ui channel in the [Kubernetes Slack](https://slack.k8s.io/).
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index de08e48d8c..823dddf710 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
1. The kubelet on a node self-registers to the control plane
-2. You, or another human user, manually add a Node object
+2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
@@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest:
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the `metadata.name`
-field of the Node. If the node is healthy (if all necessary services are running),
-it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
+field of the Node. If the node is healthy (i.e. all necessary services are running),
+then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
{{< note >}}
@@ -96,14 +96,14 @@ You can create and modify Node objects using
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
You can modify Node objects regardless of the setting of `--register-node`.
-For example, you can set labels on an existing Node, or mark it unschedulable.
+For example, you can set labels on an existing Node or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control
scheduling. For example, you can constrain a Pod to only be eligible to run on
a subset of the available nodes.
Marking a node as unschedulable prevents the scheduler from placing new pods onto
-that Node, but does not affect existing Pods on the Node. This is useful as a
+that Node but does not affect existing Pods on the Node. This is useful as a
preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
@@ -179,14 +179,14 @@ The node condition is represented as a JSON object. For example, the following s
]
```
-If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
+If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
-may need to delete the node object by hand. Deleting the node object from Kubernetes causes
-all the Pod objects running on the node to be deleted from the API server, and frees up their
+may need to delete the node object by hand. Deleting the node object from Kubernetes causes
+all the Pod objects running on the node to be deleted from the API server and frees up their
names.
The node lifecycle controller automatically creates
@@ -199,7 +199,7 @@ for more details.
### Capacity and Allocatable {#capacity}
-Describes the resources available on the node: CPU, memory and the maximum
+Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a
@@ -225,18 +225,19 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with
the cloud provider's list of available machines. When running in a cloud
-environment, whenever a node is unhealthy, the node controller asks the cloud
+environment and whenever a node is unhealthy, the node controller asks the cloud
provider if the VM for that node is still available. If not, the node
controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
-responsible for updating the NodeReady condition of NodeStatus to
-ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
-receiving heartbeats for some reason, for example due to the node being down), and then later evicting
-all the pods from the node (using graceful termination) if the node continues
-to be unreachable. (The default timeouts are 40s to start reporting
-ConditionUnknown and 5m after that to start evicting pods.) The node controller
-checks the state of each node every `--node-monitor-period` seconds.
+responsible for:
+- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
+  becomes unreachable (that is, when the node controller stops receiving heartbeats
+  for some reason, for example because the node is down).
+- Evicting all the pods from the node using graceful termination if
+ the node continues to be unreachable. The default timeouts are 40s to start
+ reporting ConditionUnknown and 5m after that to start evicting pods.
+The node controller checks the state of each node every `--node-monitor-period` seconds.
#### Heartbeats
@@ -252,13 +253,14 @@ of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
-- The kubelet updates the `NodeStatus` either when there is change in status,
+- The kubelet updates the `NodeStatus` either when there is a change in status
or if there has been no update for a configured interval. The default interval
- for `NodeStatus` updates is 5 minutes (much longer than the 40 second default
- timeout for unreachable nodes).
+ for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
+ timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
- `NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.
+ `NodeStatus` updates. If the Lease update fails, the kubelet retries with
+ exponential backoff starting at 200 milliseconds and capped at 7 seconds.
#### Reliability
@@ -269,23 +271,24 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
-the same time. If the fraction of unhealthy nodes is at least
-`--unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced:
-if the cluster is small (i.e. has less than or equal to
-`--large-cluster-size-threshold` nodes - default 50) then evictions are
-stopped, otherwise the eviction rate is reduced to
-`--secondary-node-eviction-rate` (default 0.01) per second. The reason these
-policies are implemented per availability zone is because one availability zone
-might become partitioned from the master while the others remain connected. If
-your cluster does not span multiple cloud provider availability zones, then
-there is only one availability zone (the whole cluster).
+the same time:
+- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
+ (default 0.55), then the eviction rate is reduced.
+- If the cluster is small (i.e. has less than or equal to
+ `--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
+- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
+ (default 0.01) per second.
+The reason these policies are implemented per availability zone is that one
+availability zone might become partitioned from the master while the others remain
+connected. If your cluster does not span multiple cloud provider availability zones,
+then there is only one availability zone (i.e. the whole cluster).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
-Therefore, if all nodes in a zone are unhealthy then the node controller evicts at
+Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
-case, the node controller assumes that there's some problem with master
+case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
The node controller is also responsible for evicting pods running on nodes with
@@ -303,8 +306,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
-Node objects track information about the Node's resource capacity (for example: the amount
-of memory available, and the number of CPUs).
+Node objects track information about the Node's resource capacity: for example, the amount
+of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
@@ -338,7 +341,7 @@ for more information.
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
-When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases:
+When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
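+
+A minimal sketch (the durations are illustrative) of enabling this behavior through the kubelet configuration file; `shutdownGracePeriodCriticalPods` must be shorter than `shutdownGracePeriod`:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+shutdownGracePeriod: "30s"              # total time the node shutdown is delayed
+shutdownGracePeriodCriticalPods: "10s"  # portion reserved for critical pods (phase 2)
+```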
diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md
index d8c0a0615c..e75fdea4a5 100644
--- a/content/en/docs/concepts/cluster-administration/logging.md
+++ b/content/en/docs/concepts/cluster-administration/logging.md
@@ -83,12 +83,15 @@ As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
+When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
+sends this information to the CRI container runtime, and the runtime writes the container logs to the given location. The two kubelet flags `--container-log-max-size` and `--container-log-max-files` can be used to configure the maximum size of each log file and the maximum number of files allowed for each container, respectively.
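+
+As a minimal sketch (assuming the kubelet is configured through a configuration file rather than command line flags), the equivalent settings look like this:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# Rotate a container's log file once it reaches 10Mi in size
+containerLogMaxSize: "10Mi"
+# Keep at most 5 log files per container
+containerLogMaxFiles: 5
+```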
+
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.
{{< note >}}
-If an external system has performed the rotation,
+If an external system has performed the rotation or a CRI container runtime is used,
only the contents of the latest log file will be available through
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
the rotation and there are two files: one file that is 10MB in size and a second file that is empty.
diff --git a/content/en/docs/concepts/cluster-administration/system-metrics.md b/content/en/docs/concepts/cluster-administration/system-metrics.md
index 3c7e137ded..073e0c7236 100644
--- a/content/en/docs/concepts/cluster-administration/system-metrics.md
+++ b/content/en/docs/concepts/cluster-administration/system-metrics.md
@@ -134,7 +134,7 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
### kube-scheduler metrics
-{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.21" state="beta" >}}
The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current or historical scheduling limits, quickly identify workloads that cannot schedule due to lack of resources, and compare actual usage to the pod's request.
diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md
index f0f0f439f9..2434c42437 100644
--- a/content/en/docs/concepts/configuration/configmap.md
+++ b/content/en/docs/concepts/configuration/configmap.md
@@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
-contain binary data.
+contain binary data as base64-encoded strings.
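+
+As an illustrative sketch (the names and values here are hypothetical), a ConfigMap can mix both fields; each `binaryData` value must be the base64 encoding of the raw bytes:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-config        # hypothetical name
+data:
+  greeting: "hello"           # plain UTF-8 string
+binaryData:
+  icon: iVBORw0KGgo=          # base64-encoded binary payload
+```
+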
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md
index fce8a0c7a8..25cfb2e7f1 100644
--- a/content/en/docs/concepts/configuration/overview.md
+++ b/content/en/docs/concepts/configuration/overview.md
@@ -81,9 +81,9 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the
- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
-- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied.
+- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag later changes.
-- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `IfNotPresent` is applied.
+- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`; see the sketch after this list.
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
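+
+A minimal sketch (image name and tag are illustrative) of setting the policy explicitly, which sidesteps the defaulting rules above:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-pod                       # hypothetical name
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/app:v1.42.0
+    imagePullPolicy: IfNotPresent         # set explicitly; later tag changes won't alter it
+```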
@@ -96,7 +96,7 @@ You should avoid using the `:latest` tag when deploying containers in production
{{< /note >}}
{{< note >}}
-The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
+The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
{{< /note >}}
## Using kubectl
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index 99698668c4..1166d4106a 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -49,16 +49,32 @@ Instead, specify a meaningful tag such as `v1.42.0`.
## Updating images
-The default pull policy is `IfNotPresent` which causes the
-{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip
-pulling an image if it already exists. If you would like to always force a pull,
-you can do one of the following:
+When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}},
+{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}, Pod, or other
+object that includes a Pod template, then by default the pull policy of all
+containers in that pod will be set to `IfNotPresent` if it is not explicitly
+specified. This policy causes the
+{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an
+image if it already exists.
+
+If you would like to always force a pull, you can do one of the following:
- set the `imagePullPolicy` of the container to `Always`.
-- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use.
+- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
+ Kubernetes will set the policy to `Always`.
- omit the `imagePullPolicy` and the tag for the image to use.
- enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
+{{< note >}}
+The value of `imagePullPolicy` of the container is always set when the object is
+first _created_, and is not updated if the image's tag later changes.
+
+For example, if you create a Deployment with an image whose tag is _not_
+`:latest`, and later update that Deployment's image to a `:latest` tag, the
+`imagePullPolicy` field will _not_ change to `Always`. You must manually change
+the pull policy of any object after its initial creation.
+{{< /note >}}
+
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
## Multi-architecture images with image indexes
diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md
index 64b9ed3eb8..323200ec3a 100644
--- a/content/en/docs/concepts/extend-kubernetes/operator.md
+++ b/content/en/docs/concepts/extend-kubernetes/operator.md
@@ -103,26 +103,27 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}
If there isn't an Operator in the ecosystem that implements the behavior you
-want, you can code your own. In [What's next](#what-s-next) you'll find a few
-links to libraries and tools you can use to write your own cloud native
-Operator.
+want, you can code your own.
You also implement an Operator (that is, a Controller) using any language / runtime
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
+The following are a few libraries and tools you can use to write your own cloud native
+Operator.
+{{% thirdparty-content %}}
+
+* [kubebuilder](https://book.kubebuilder.io/)
+* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
+* [Metacontroller](https://metacontroller.app/) along with WebHooks that
+ you implement yourself
+* [Operator Framework](https://operatorframework.io)
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
-* Use existing tools to write your own operator, eg:
- * using [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
- * using [kubebuilder](https://book.kubebuilder.io/)
- * using [Metacontroller](https://metacontroller.app/) along with WebHooks that
- you implement yourself
- * using the [Operator Framework](https://operatorframework.io)
* [Publish](https://operatorhub.io/) your operator for other people to use
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article).
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md
index 034458b8e9..1d0e9d4ecd 100644
--- a/content/en/docs/concepts/policy/resource-quotas.md
+++ b/content/en/docs/concepts/policy/resource-quotas.md
@@ -124,6 +124,10 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
+{{< note >}}
+When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas. Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
+{{< /note >}}
+
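+A hedged sketch (namespace and values are illustrative) of a quota that covers ephemeral storage, which container logs consume under a CRI runtime:
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: ephemeral-storage-quota     # hypothetical name
+  namespace: example                # hypothetical namespace
+spec:
+  hard:
+    requests.ephemeral-storage: 2Gi # sum of requests across all pods in the namespace
+    limits.ephemeral-storage: 4Gi   # sum of limits across all pods in the namespace
+```
+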
## Object Count Quota
You can set quota for the total number of certain resources of all standard,
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 8b13ed3b45..41c4d0ceca 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -5,7 +5,7 @@ reviewers:
- bsalamat
title: Assigning Pods to Nodes
content_type: concept
-weight: 50
+weight: 20
---
diff --git a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
index a327f1de24..94bfaa1280 100644
--- a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
+++ b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
@@ -5,7 +5,7 @@ reviewers:
- ahg-g
title: Resource Bin Packing for Extended Resources
content_type: concept
-weight: 50
+weight: 30
---
diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md
index bedd431dc9..63263fb370 100644
--- a/content/en/docs/concepts/storage/dynamic-provisioning.md
+++ b/content/en/docs/concepts/storage/dynamic-provisioning.md
@@ -80,7 +80,7 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
-is deprecated since v1.6. Users now can and should instead use the
+has been deprecated since v1.9. Users can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
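+
+For illustration (the class name is hypothetical and must match a StorageClass that the administrator created), a claim requesting dynamically provisioned storage looks like this:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: example-claim        # hypothetical name
+spec:
+  accessModes:
+  - ReadWriteOnce
+  storageClassName: fast     # must name an existing StorageClass
+  resources:
+    requests:
+      storage: 8Gi
+```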
diff --git a/content/en/docs/concepts/storage/storage-capacity.md b/content/en/docs/concepts/storage/storage-capacity.md
index d5993d4f59..13ae8ab722 100644
--- a/content/en/docs/concepts/storage/storage-capacity.md
+++ b/content/en/docs/concepts/storage/storage-capacity.md
@@ -17,6 +17,6 @@ which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.
-{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.21" state="beta" >}}
This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to schedule Pods onto nodes
@@ -103,34 +104,10 @@ to handle this automatically.
## Enabling storage capacity tracking
-Storage capacity tracking is an *alpha feature* and only enabled when
-the `CSIStorageCapacity` [feature
-gate](/docs/reference/command-line-tools-reference/feature-gates/) and
-the `storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details on
-that, see the `--feature-gates` and `--runtime-config` [kube-apiserver
-parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
-
-A quick check
-whether a Kubernetes cluster supports the feature is to list
-CSIStorageCapacity objects with:
-```shell
-kubectl get csistoragecapacities --all-namespaces
-```
-
-If your cluster supports CSIStorageCapacity, the response is either a list of CSIStorageCapacity objects or:
-```
-No resources found
-```
-
-If not supported, this error is printed instead:
-```
-error: the server doesn't have a resource type "csistoragecapacities"
-```
-
-In addition to enabling the feature in the cluster, a CSI
-driver also has to
-support it. Please refer to the driver's documentation for
-details.
+Storage capacity tracking is a beta feature and has been enabled by default in
+Kubernetes clusters since Kubernetes 1.21. In addition to having the
+feature enabled in the cluster, a CSI driver also has to support
+it. Please refer to the driver's documentation for details.
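+
+For reference, a CSIStorageCapacity object (normally created by the CSI driver's deployment, not by users) looks roughly like this sketch; all names and values are illustrative:
+
+```yaml
+apiVersion: storage.k8s.io/v1beta1
+kind: CSIStorageCapacity
+metadata:
+  name: example-capacity              # hypothetical; real objects get generated names
+  namespace: default
+storageClassName: fast                # hypothetical StorageClass
+capacity: 100Gi                       # capacity reported by the driver
+nodeTopology:
+  matchLabels:
+    topology.example.com/zone: zone-1 # illustrative topology label
+```
+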
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 8d84a519c0..28abe9ee8a 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -34,8 +34,9 @@ Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod"
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
-that run within the pod, and data is preserved across container restarts. When a
-pod ceases to exist, the volume is destroyed.
+that run within the pod, and data is preserved across container restarts. When a pod
+ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
+destroy persistent volumes.
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
@@ -152,14 +153,16 @@ For more details, see the [`azureFile` volume plugin](https://github.com/kuberne
#### azureFile CSI migration
-{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.21" state="beta" >}}
The `CSIMigration` feature for `azureFile`, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `file.csi.azure.com` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Azure File CSI
Driver](https://github.com/kubernetes-sigs/azurefile-csi-driver)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationAzureFile`
-alpha features must be enabled.
+[feature gates](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled.
+
+The Azure File CSI driver does not support using the same volume with different fsgroups. If Azure File CSI migration is enabled, using the same volume with different fsgroups won't be supported at all.
### cephfs
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index 9ba746c2ad..0380889414 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -54,7 +54,7 @@ In this example:
{{< note >}}
The `.spec.selector.matchLabels` field is a map of {key,value} pairs.
A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`,
- whose key field is "key" the operator is "In", and the values array contains only "value".
+ whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value".
All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}}
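+
+For example (the label key and value are illustrative), the following two selectors are equivalent:
+
+```yaml
+selector:
+  matchLabels:
+    app: nginx
+```
+
+```yaml
+selector:
+  matchExpressions:
+  - key: app             # the map key becomes `key`
+    operator: In         # the operator is In
+    values: ["nginx"]    # the map value is the only element of `values`
+```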
diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md
index 2c99a704d1..d5f505473a 100644
--- a/content/en/docs/concepts/workloads/controllers/job.md
+++ b/content/en/docs/concepts/workloads/controllers/job.md
@@ -145,8 +145,8 @@ There are three main types of task suitable to run as a Job:
- the Job is complete as soon as its Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
- specify a non-zero positive value for `.spec.completions`.
- - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
- - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
+ - the Job represents the overall task, and is complete when there are `.spec.completions` successful Pods.
+ - when using `.spec.completionMode="Indexed"`, each Pod gets a different index in the range 0 to `.spec.completions-1`.
1. Parallel Jobs with a *work queue*:
- do not specify `.spec.completions`, default to `.spec.parallelism`.
- the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
@@ -166,7 +166,6 @@ a non-negative integer.
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
-
#### Controlling parallelism
The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
@@ -185,6 +184,33 @@ parallelism, for a variety of reasons:
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.
+### Completion mode
+
+{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+
+{{< note >}}
+To be able to create Indexed Jobs, make sure to enable the `IndexedJob`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
+and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
+{{< /note >}}
+
+Jobs with _fixed completion count_ - that is, jobs that have non-null
+`.spec.completions` - can have a completion mode that is specified in `.spec.completionMode`:
+
+- `NonIndexed` (default): the Job is considered complete when there have been
+  `.spec.completions` successfully completed Pods. In other words, each Pod
+  completion is equivalent to any other. Note that Jobs that have null
+ `.spec.completions` are implicitly `NonIndexed`.
+- `Indexed`: the Pods of a Job get an associated completion index from 0 to
+ `.spec.completions-1`, available in the annotation `batch.kubernetes.io/job-completion-index`.
+ The Job is considered complete when there is one successfully completed Pod
+ for each index. For more information about how to use this mode, see
+ [Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
+ Note that, although rare, more than one Pod could be started for the same
+ index, but only one of them will count towards the completion count.
+
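+A minimal sketch (name, image, and command are illustrative) of an Indexed Job; each Pod reads its completion index from the Pod annotation through the downward API:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: indexed-example               # hypothetical name
+spec:
+  completions: 5
+  parallelism: 2
+  completionMode: Indexed
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: worker
+        image: busybox                # illustrative image
+        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
+        env:
+        - name: JOB_COMPLETION_INDEX
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
+```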
+
## Handling Pod and container failures
A container in a Pod may fail for a number of reasons, such as because the process in it exited with
@@ -348,12 +374,12 @@ The tradeoffs are:
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.
-| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
-| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
-| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
-| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
-| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
-| Single Job with Static Work Assignment | ✓ | | ✓ | |
+| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? |
+| ----------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|
+| [Queue with Pod Per Work Item] | ✓ | | sometimes |
+| [Queue with Variable Pod Count] | ✓ | ✓ | |
+| [Indexed Job with Static Work Assignment] | ✓ | | ✓ |
+| [Job Template Expansion] | | | ✓ |
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that
@@ -364,13 +390,17 @@ are different ways to arrange for pods to work on different things.
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.
-| Pattern | `.spec.completions` | `.spec.parallelism` |
-| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
-| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
-| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
-| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
-| Single Job with Static Work Assignment | W | any |
+| Pattern | `.spec.completions` | `.spec.parallelism` |
+| ----------------------------------------- |:-------------------:|:--------------------:|
+| [Queue with Pod Per Work Item] | W | any |
+| [Queue with Variable Pod Count] | null | any |
+| [Indexed Job with Static Work Assignment] | W | any |
+| [Job Template Expansion] | 1 | should be 1 |
+[Queue with Pod Per Work Item]: /docs/tasks/job/coarse-parallel-processing-work-queue/
+[Queue with Variable Pod Count]: /docs/tasks/job/fine-parallel-processing-work-queue/
+[Indexed Job with Static Work Assignment]: /docs/tasks/job/indexed-parallel-processing-static/
+[Job Template Expansion]: /docs/tasks/job/parallel-processing-expansion/
## Advanced usage
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index 415c5a57b5..6e620ba907 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -62,8 +62,6 @@ different Kubernetes components.
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
-| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
-| `CRIContainerLogRotation` | `true` | Beta| 1.11 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
@@ -74,7 +72,8 @@ different Kubernetes components.
| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | |
| `CSIMigrationAzureDiskComplete` | `false` | Alpha | 1.17 | |
-| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | |
+| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.19 |
+| `CSIMigrationAzureFile` | `false` | Beta | 1.21 | |
| `CSIMigrationAzureFileComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigrationGCE` | `false` | Beta | 1.17 | |
@@ -85,7 +84,8 @@ different Kubernetes components.
| `CSIMigrationvSphere` | `false` | Beta | 1.19 | |
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | |
| `CSIServiceAccountToken` | `false` | Alpha | 1.20 | |
-| `CSIStorageCapacity` | `false` | Alpha | 1.19 | |
+| `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 |
+| `CSIStorageCapacity` | `true` | Beta | 1.21 | |
| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | 1.19 |
| `CSIVolumeFSGroupPolicy` | `true` | Beta | 1.20 | |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 |
@@ -199,6 +199,9 @@ different Kubernetes components.
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 |
| `BlockVolume` | `true` | Beta | 1.13 | 1.17 |
| `BlockVolume` | `true` | GA | 1.18 | - |
+| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
+| `CRIContainerLogRotation` | `true` | Beta | 1.11 | 1.20 |
+| `CRIContainerLogRotation` | `true` | GA | 1.21 | - |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 |
| `CSIBlockVolume` | `true` | GA | 1.18 | - |
@@ -260,6 +263,7 @@ different Kubernetes components.
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | 1.20 |
| `ImmutableEphemeralVolumes` | `true` | GA | 1.21 | |
+| `IndexedJob` | `false` | Alpha | 1.21 | |
| `Initializers` | `false` | Alpha | 1.7 | 1.13 |
| `Initializers` | - | Deprecated | 1.14 | - |
| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 |
@@ -449,7 +453,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
-- `CRIContainerLogRotation`: Enable container log rotation for cri container runtime.
+- `CRIContainerLogRotation`: Enable container log rotation for the CRI container runtime. The default max size of a log file is 10MB and the
+ default max number of log files allowed for a container is 5. These values can be configured in the kubelet config.
+ See the [logging at node level](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) documentation for more details.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage.
See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
documentation for more details.
@@ -629,10 +635,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `HyperVContainer`: Enable
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
for Windows containers.
-- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
- support for IPv6.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
immutable for better safety and performance.
+- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
+ controller to manage Pod completions per completion index.
+- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
+ support for IPv6.
- `KubeletConfigFile` (*deprecated*): Enable loading kubelet configuration from
a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
@@ -737,7 +745,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
[ServiceTopology](/docs/concepts/services-networking/service-topology/)
for more details.
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
- See [volumes](docs/concepts/storage/volumes) for more details.
+ See [volumes](/docs/concepts/storage/volumes) for more details.
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
Name(FQDN) as the hostname of a pod. See
[Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
index 73ef70c81b..edf25ad835 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
@@ -2,8 +2,21 @@
title: kube-apiserver
content_type: tool-reference
weight: 30
+auto_generated: true
---
+
+
+
## {{% heading "synopsis" %}}
@@ -29,1099 +42,1099 @@ kube-apiserver [flags]
--add-dir-header
If true, adds the file directory to the header of the log messages

--admission-control-config-file string
File with admission control configuration.

---advertise-address ip
+--advertise-address string
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.

--allow-privileged
If true, allow privileged containers. [default=false]

--alsologtostderr
log to standard error as well as files

--anonymous-auth Default: true
Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.

---api-audiences stringSlice
+--api-audiences strings
Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.

--apiserver-count int Default: 1
The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)

--audit-log-batch-buffer-size int Default: 10000
The size of the buffer to store events before batching and writing. Only used in batch mode.

--audit-log-batch-max-size int Default: 1
The maximum size of a batch. Only used in batch mode.

--audit-log-batch-max-wait duration
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.

--audit-log-batch-throttle-burst int
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.

--audit-log-batch-throttle-enable
Whether batching throttling is enabled. Only used in batch mode.

---audit-log-batch-throttle-qps float32
+--audit-log-batch-throttle-qps float
Maximum average number of batches per second. Only used in batch mode.

--audit-log-compress
If set, the rotated log files will be compressed using gzip.

--audit-log-format string Default: "json"
Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.

--audit-log-maxage int
The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.

--audit-log-maxbackup int
The maximum number of old audit log files to retain.

--audit-log-maxsize int
The maximum size in megabytes of the audit log file before it gets rotated.

--audit-log-mode string Default: "blocking"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.

--audit-log-path string
If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.

--audit-log-truncate-enabled
Whether event and batch truncating is enabled.

--audit-log-truncate-max-batch-size int Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.

--audit-log-truncate-max-event-size int Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
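
The --audit-log-* flags above combine into a file-backed audit configuration. A minimal sketch as an argument-list fragment for a kube-apiserver static Pod manifest; the path and retention values are illustrative assumptions, not defaults:

```yaml
# Sketch: file-backed audit logging with rotation (illustrative values).
- --audit-log-path=/var/log/kubernetes/audit.log   # '-' would mean standard out
- --audit-log-format=json
- --audit-log-maxsize=100     # rotate after 100 MB
- --audit-log-maxbackup=5     # keep at most 5 rotated files
- --audit-log-maxage=30       # and at most 30 days of history
- --audit-log-compress        # gzip rotated files
```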
--audit-webhook-initial-backoff duration
The amount of time to wait before retrying the first failed request.

--audit-webhook-mode string Default: "batch"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.

--audit-webhook-truncate-enabled
Whether event and batch truncating is enabled.

--audit-webhook-truncate-max-batch-size int Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.

--audit-webhook-truncate-max-event-size int Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.

--authentication-token-webhook-cache-ttl duration
The duration to cache responses from the webhook token authenticator.

--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.

--authorization-webhook-cache-unauthorized-ttl duration
The duration to cache 'unauthorized' responses from the webhook authorizer.

--authorization-webhook-config-file string
File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.

--authorization-webhook-version string
The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.

--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.

---bind-address ip Default: 0.0.0.0
+--bind-address string Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.

--cert-dir string Default: "/var/run/kubernetes"
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.

--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.

--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.

--cloud-provider string
The provider for cloud services. Empty string for no provider.

--cloud-provider-gce-l7lb-src-cidrs
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks

--contention-profiling
Enable lock contention profiling, if profiling is enabled

---cors-allowed-origins stringSlice
+--cors-allowed-origins strings
List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.

--default-not-ready-toleration-seconds int Default: 300
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.

--default-unreachable-toleration-seconds int Default: 300
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.

--default-watch-cache-size int Default: 100
Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.

--delete-collection-workers int Default: 1
Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.

---disable-admission-plugins stringSlice
+--disable-admission-plugins strings
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

--egress-selector-config-file string
File with apiserver egress selector configuration.

---enable-admission-plugins stringSlice
+--enable-admission-plugins strings
-admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

--enable-aggregator-routing
Turns on aggregator routing requests to endpoints IP rather than cluster IP.

--enable-bootstrap-token-auth
Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.

--enable-garbage-collector Default: true
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.

--enable-priority-and-fairness Default: true
If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness

--encryption-provider-config string
The file containing configuration for encryption providers to be used for storing secrets in etcd
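
--encryption-provider-config points at an EncryptionConfiguration file. A minimal sketch, assuming an AES-CBC key for Secrets with a plaintext fallback for reading pre-existing data; the key name and placeholder value are illustrative:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                         # illustrative key name
              secret: <base64-encoded 32-byte key>
      - identity: {}                             # read unencrypted data written earlier
```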
--etcd-db-metric-poll-interval duration
The interval of requests to poll etcd and update metric. 0 disables the metric collection

--etcd-healthcheck-timeout duration Default: 2s
The timeout to use when checking etcd health.

--etcd-keyfile string
SSL key file used to secure etcd communication.

--etcd-prefix string Default: "/registry"
The prefix to prepend to all resource paths in etcd.

---etcd-servers stringSlice
+--etcd-servers strings
List of etcd servers to connect with (scheme://ip:port), comma separated.

---etcd-servers-overrides stringSlice
+--etcd-servers-overrides strings
Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.

--event-ttl duration Default: 1h0m0s
Amount of time to retain events.

--experimental-logging-sanitization
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.

--external-hostname string
The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).

--goaway-chance float
To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.

-h, --help
help for kube-apiserver

--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.

--identity-lease-duration-seconds int Default: 3600
The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

--identity-lease-renew-interval-seconds int Default: 10
The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)

--kubelet-certificate-authority string
Path to a cert file for the certificate authority.

--kubelet-preferred-address-types strings
List of the preferred NodeAddressTypes to use for kubelet connections.

--kubelet-timeout duration Default: 5s
Timeout for kubelet operations.

--kubernetes-service-node-port int
If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.

--livez-grace-period duration
This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.

---log-backtrace-at traceLocation Default: :0
+--log-backtrace-at <a string in the form 'file:N'> Default: :0
when logging hits line file:N, emit a stack trace

--log-dir string
If non-empty, write log files in this directory

--log-file string
If non-empty, use this log file

--log-file-max-size uint Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

--log-flush-frequency duration Default: 5s
Maximum number of seconds between log flushes

--logging-format string Default: "text"
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.

--master-service-namespace string
DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.

--max-connection-bytes-per-sec int
If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.

--max-mutating-requests-inflight int Default: 200
The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.

--max-requests-inflight int Default: 400
The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.

--min-request-timeout int Default: 1800
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.

--oidc-ca-file string
If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.

--oidc-client-id string
The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.

--oidc-groups-claim string
If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.

--oidc-groups-prefix string
If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.

--oidc-issuer-url string
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).

--oidc-required-claim
A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.

---oidc-signing-algs stringSlice Default: [RS256]
+--oidc-signing-algs strings Default: "RS256"
Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.

--oidc-username-claim string Default: "sub"
The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.

--oidc-username-prefix string
If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
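
Taken together, the --oidc-* flags wire the API server to an external identity provider. A minimal sketch as an argument-list fragment; the issuer URL, client ID, claims, and prefixes are illustrative assumptions:

```yaml
# Sketch: OIDC authentication against a hypothetical issuer.
- --oidc-issuer-url=https://issuer.example.com   # only https is accepted
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- --oidc-username-prefix=oidc:                   # avoid clashes with other authenticators
- --oidc-groups-claim=groups
- --oidc-groups-prefix=oidc:
```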
--one-output
If true, only write logs to their native severity level (vs also writing to each lower severity level)

--permit-port-sharing
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]

--profiling Default: true
Enable profiling via web interface host:port/debug/pprof/

--proxy-client-cert-file string
Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.

--proxy-client-key-file string
Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.

--request-timeout duration Default: 1m0s
An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.

---requestheader-allowed-names stringSlice
+--requestheader-allowed-names strings
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.

--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.

---requestheader-extra-headers-prefix stringSlice
+--requestheader-extra-headers-prefix strings
List of request header prefixes to inspect. X-Remote-Extra- is suggested.

---requestheader-group-headers stringSlice
+--requestheader-group-headers strings
List of request headers to inspect for groups. X-Remote-Group is suggested.

---requestheader-username-headers stringSlice
+--requestheader-username-headers strings
List of request headers to inspect for usernames. X-Remote-User is common.

--runtime-config
A set of key=value pairs that enable or disable built-in APIs. Supported options are: v1=true|false for the core API group <group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true) api/all=true|false controls all API versions api/ga=true|false controls all API versions of the form v[0-9]+ api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+ api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+ api/legacy is deprecated, and will be removed in a future version

--secure-port int Default: 6443
The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.

--service-account-extend-token-expiration
Turns on projected service account expiration extension during token generation, which helps safe transition from legacy token to bound service account token feature. If this flag is enabled, admission injected tokens would be extended up to 1 year to prevent unexpected failure during transition, ignoring value of service-account-max-token-expiration.

--service-account-issuer string
Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.

--service-account-jwks-uri string
Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.

---service-account-key-file stringArray
+--service-account-key-file strings
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided

--service-account-lookup Default: true
If true, validate ServiceAccount tokens exist in etcd as part of authentication.

--service-account-max-token-expiration duration
The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.

--service-account-signing-key-file string
Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key.
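
The service-account token flags typically point at a single signing key pair. A minimal sketch as an argument-list fragment, using kubeadm-style paths (illustrative, not required):

```yaml
# Sketch: bound service account tokens with one signing key pair.
- --service-account-issuer=https://kubernetes.default.svc
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key   # signs issued tokens
- --service-account-key-file=/etc/kubernetes/pki/sa.pub           # verifies them
```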
--service-cluster-ip-range string
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods.

--service-node-port-range <a string in the form 'N1-N2'> Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.

--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--shutdown-delay-duration duration
Time to delay the termination. During that time the server keeps serving requests normally. The endpoints /healthz and /livez will return success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow load balancer to stop sending traffic to this server.

--skip-headers
If true, avoid header prefixes in the log messages

--skip-log-headers
If true, avoid headers when opening log files

---stderrthreshold severity Default: 2
+--stderrthreshold int Default: 2
logs at or above this threshold go to stderr

--storage-backend string
The storage backend for persistence. Options: 'etcd3' (default).

--storage-media-type string
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.

--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.

---tls-cipher-suites stringSlice
+--tls-cipher-suites strings
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.

--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13

--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.

---tls-sni-cert-key namedCertKey Default: []
+--tls-sni-cert-key string
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".

--token-auth-file string
If set, the file that will be used to secure the secure port of the API server via token authentication.

--v, --v Level
+-v, --v int
number for the log level verbosity

--version version[=true]
Print version information and quit

---vmodule moduleSpec
+--vmodule <comma-separated 'pattern=N' settings>
comma-separated list of pattern=N settings for file-filtered logging

--watch-cache Default: true
Enable watch caching in the apiserver

---watch-cache-sizes stringSlice
+--watch-cache-sizes strings
Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
+
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
--authentication-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
-
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
The duration to cache responses from the webhook token authenticator.
+
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
-
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
+
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
The duration to cache 'unauthorized' responses from the webhook authorizer.
+
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
+
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
-
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
+
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
-
The path to the cloud provider configuration file. Empty string for no configuration file.
+
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
-
The provider for cloud services. Empty string for no provider.
+
The provider for cloud services. Empty string for no provider.
--cluster-cidr string
-
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
+
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
--cluster-name string Default: "kubernetes"
-
The instance prefix for the cluster.
+
The instance prefix for the cluster.
--cluster-signing-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
+
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
--cluster-signing-duration duration Default: 8760h0m0s
-
The length of duration signed certificates will be given.
+
The length of duration signed certificates will be given.
--cluster-signing-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
--cluster-signing-kube-apiserver-client-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kube-apiserver-client-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-client-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-client-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-serving-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-serving-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-legacy-unknown-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-legacy-unknown-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
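As the descriptions above state, the per-signer --cluster-signing-*-file flags are mutually exclusive with the catch-all pair. A sketch of both styles (file paths are illustrative):

```shell
# Style 1: one CA issues every cluster-scoped certificate.
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key

# Style 2: a dedicated CA per signer; --cluster-signing-{cert,key}-file
# must then be left unset.
kube-controller-manager \
  --cluster-signing-kubelet-serving-cert-file=/etc/kubernetes/pki/kubelet-serving-ca.crt \
  --cluster-signing-kubelet-serving-key-file=/etc/kubernetes/pki/kubelet-serving-ca.key
```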
--concurrent-deployment-syncs int32 Default: 5
-
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
+
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
--concurrent-endpoint-syncs int32 Default: 5
-
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
+
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
--concurrent-gc-syncs int32 Default: 20
-
The number of garbage collector workers that are allowed to sync concurrently.
+
The number of garbage collector workers that are allowed to sync concurrently.
--concurrent-namespace-syncs int32 Default: 10
-
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
+
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
--concurrent-replicaset-syncs int32 Default: 5
-
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
+
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--concurrent-service-endpoint-syncs int32 Default: 5
-
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
+
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
--concurrent-service-syncs int32 Default: 1
-
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
+
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
--concurrent-serviceaccount-token-syncs int32 Default: 5
-
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
+
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
--concurrent-statefulset-syncs int32 Default: 5
-
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
+
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
--concurrent-ttl-after-finished-syncs int32 Default: 5
-
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
+
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
--concurrent_rc_syncs int32 Default: 5
-
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
+
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
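Each --concurrent-*-syncs flag above trades controller responsiveness against CPU, network, and API server load. An illustrative tuning for a large cluster (the values are examples, not recommendations):

```shell
kube-controller-manager \
  --concurrent-deployment-syncs=10 \
  --concurrent-namespace-syncs=20 \
  --concurrent-gc-syncs=30
```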
--configure-cloud-routes Default: true
-
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
+
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
--contention-profiling
-
Enable lock contention profiling, if profiling is enabled
+
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
-
Interval between starting controller managers.
+
Interval between starting controller managers.
-
--controllers stringSlice Default: [*]
+
--controllers strings Default: "*"
-
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished Disabled-by-default controllers: bootstrapsigner, tokencleaner
+
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished Disabled-by-default controllers: bootstrapsigner, tokencleaner
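For example, following the grammar in the description (a hypothetical selection, not a recommendation):

```shell
# Keep all on-by-default controllers, additionally enable the
# disabled-by-default bootstrapsigner, and disable the ttl controller.
kube-controller-manager --controllers='*,bootstrapsigner,-ttl'
```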
--disable-attach-detach-reconcile-sync
-
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
+
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
--enable-dynamic-provisioning Default: true
-
Enable dynamic provisioning for environments that support it.
+
Enable dynamic provisioning for environments that support it.
--enable-garbage-collector Default: true
-
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
+
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
--enable-hostpath-provisioner
-
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
+
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
--enable-taint-manager Default: true
-
WARNING: Beta feature. If set to true enables NoExecute Taints and will evict all not-tolerating Pod running on Nodes tainted with this kind of Taints.
+
WARNING: Beta feature. If set to true enables NoExecute Taints and will evict all not-tolerating Pod running on Nodes tainted with this kind of Taints.
--endpoint-updates-batch-period duration
-
The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
--endpointslice-updates-batch-period duration
-
The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
--experimental-logging-sanitization
-
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
+
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
--external-cloud-volume-plugin string
-
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
+
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
--kube-api-qps float32 Default: 20
-
QPS to use while talking with kubernetes apiserver.
+
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
-
Path to kubeconfig file with authorization and master location information.
+
Path to kubeconfig file with authorization and master location information.
--large-cluster-size-threshold int32 Default: 50
-
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
+
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
--leader-elect Default: true
-
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
+
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration Default: 15s
-
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
+
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration Default: 10s
-
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock string Default: "leases"
-
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
+
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
--leader-elect-resource-namespace string Default: "kube-system"
-
The namespace of resource object that is used for locking during leader election.
+
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration Default: 2s
-
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
+
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
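Putting the leader-election flags together for a replicated deployment; the values shown are illustrative and satisfy the stated constraint that the renew deadline not exceed the lease duration:

```shell
kube-controller-manager \
  --leader-elect=true \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s \
  --leader-elect-resource-lock=leases
```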
-
--log-backtrace-at traceLocation Default: :0
+
--log-backtrace-at <a string in the form 'file:N'> Default: :0
-
when logging hits line file:N, emit a stack trace
+
when logging hits line file:N, emit a stack trace
--log-dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory
--log-file string
-
If non-empty, use this log file
+
If non-empty, use this log file
--log-file-max-size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration Default: 5s
-
Maximum number of seconds between log flushes
+
Maximum number of seconds between log flushes
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
+
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
--logtostderr Default: true
-
log to standard error instead of files
+
log to standard error instead of files
--master string
-
The address of the Kubernetes API server (overrides any value in kubeconfig).
+
The address of the Kubernetes API server (overrides any value in kubeconfig).
--max-endpoints-per-slice int32 Default: 100
-
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
+
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
--min-resync-period duration Default: 12h0m0s
-
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
+
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
--mirroring-concurrent-service-endpoint-syncs int32 Default: 5
-
The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
+
The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
--mirroring-endpointslice-updates-batch-period duration
-
The length of EndpointSlice updates batching period for EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of EndpointSlice updates batching period for EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
--mirroring-max-endpoints-per-subset int32 Default: 100
-
The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
+
The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
--namespace-sync-period duration Default: 5m0s
-
The period for syncing namespace life-cycle updates
+
The period for syncing namespace life-cycle updates
--node-cidr-mask-size int32
-
Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.
+
Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
-
Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.
+
Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.
--node-cidr-mask-size-ipv6 int32
-
Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.
+
Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.
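A dual-stack sketch combining the mask-size flags with the allocation flags they depend on (the CIDR ranges are illustrative):

```shell
# Each node receives a /24 from the IPv4 range and a /64 from the IPv6 range.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16,fd00:10:244::/56 \
  --node-cidr-mask-size-ipv4=24 \
  --node-cidr-mask-size-ipv6=64
```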
-
--node-eviction-rate float32 Default: 0.1
+
--node-eviction-rate float Default: 0.1
-
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
+
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
--node-monitor-grace-period duration Default: 40s
-
Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
+
Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
--node-monitor-period duration Default: 5s
-
The period for syncing NodeStatus in NodeController.
+
The period for syncing NodeStatus in NodeController.
--pv-recycler-minimum-timeout-hostpath int32 Default: 60
-
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
+
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-pod-template-filepath-hostpath string
-
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
+
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-pod-template-filepath-nfs string
-
The file path to a pod definition used as a template for NFS persistent volume recycling
+
The file path to a pod definition used as a template for NFS persistent volume recycling
--pv-recycler-timeout-increment-hostpath int32 Default: 30
-
the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
+
the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
--pvclaimbinder-sync-period duration Default: 15s
-
The period for syncing persistent volumes and persistent volume claims
+
The period for syncing persistent volumes and persistent volume claims
-
--requestheader-allowed-names stringSlice
+
--requestheader-allowed-names strings
-
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
-
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--secondary-node-eviction-rate float32 Default: 0.01
-
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
+
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
--secure-port int Default: 10257
-
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
+
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--service-account-private-key-file string
-
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
+
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
--service-cluster-ip-range string
-
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
+
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
-
If true, avoid header prefixes in the log messages
+
If true, avoid header prefixes in the log messages
--terminated-pod-gc-threshold int32 Default: 12500
-
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
+
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
--tls-cert-file string
-
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
-
--tls-cipher-suites stringSlice
+
--tls-cipher-suites strings
-
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string
-
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
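A hardening sketch using only suite names from the preferred list above:

```shell
kube-controller-manager \
  --tls-min-version=VersionTLS12 \
  --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
```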
--tls-private-key-file string
-
File containing the default x509 private key matching --tls-cert-file.
+
File containing the default x509 private key matching --tls-cert-file.
-
--tls-sni-cert-key namedCertKey Default: []
+
--tls-sni-cert-key string
-
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
+
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
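Expanding the examples from the description, with the flag repeated once per key/certificate pair (file names and domains are illustrative):

```shell
kube-controller-manager \
  --tls-sni-cert-key=example.crt,example.key \
  --tls-sni-cert-key='foo.crt,foo.key:*.foo.com,foo.com'
```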
-
--unhealthy-zone-threshold float32 Default: 0.55
+
--unhealthy-zone-threshold float Default: 0.55
-
Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.
+
Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.
--use-service-account-credentials
-
If true, use individual service account credentials for each controller.
+
If true, use individual service account credentials for each controller.
-
-v, --v Level
+
-v, --v int
-
number for the log level verbosity
+
number for the log level verbosity
--version version[=true]
-
Print version information and quit
+
Print version information and quit
-
--vmodule moduleSpec
+
--vmodule <comma-separated 'pattern=N' settings>
-
comma-separated list of pattern=N settings for file-filtered logging
+
comma-separated list of pattern=N settings for file-filtered logging
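For instance, combining -v with --vmodule (the file pattern is hypothetical):

```shell
# Verbosity 2 everywhere, but verbosity 5 in files matching gc_controller*.
kube-controller-manager -v=2 --vmodule='gc_controller*=5'
```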
--volume-host-allow-local-loopback Default: true
-
If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist
+
If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist
-
--volume-host-cidr-denylist stringSlice
+
--volume-host-cidr-denylist strings
-
A comma-separated list of CIDR ranges to avoid from volume plugins.
+
A comma-separated list of CIDR ranges to avoid from volume plugins.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
+
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
--bind-address-hard-fail
-
If true kube-proxy will treat failure to bind to a port as fatal and exit
+
If true kube-proxy will treat failure to bind to a port as fatal and exit
--cleanup
-
If true cleanup iptables and ipvs rules and exit.
+
If true cleanup iptables and ipvs rules and exit.
--cluster-cidr string
-
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
+
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
--config string
-
The path to the configuration file.
+
The path to the configuration file.
--config-sync-period duration Default: 15m0s
-
How often configuration from the apiserver is refreshed. Must be greater than 0.
+
How often configuration from the apiserver is refreshed. Must be greater than 0.
--conntrack-max-per-core int32 Default: 32768
-
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
+
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
--conntrack-min int32 Default: 131072
-
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
+
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
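A worked example of how these two limits are commonly understood to combine (assuming the effective table size is the larger of the per-core product and the floor):

```shell
# 32768 entries/core x 8 cores = 262144; the floor is 131072,
# so the effective maximum would be 262144 on an 8-core node.
kube-proxy --conntrack-max-per-core=32768 --conntrack-min=131072
```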
--healthz-bind-address ipport Default: 0.0.0.0:10256
-
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
+
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
-h, --help
-
help for kube-proxy
+
help for kube-proxy
--hostname-override string
-
If non-empty, will use this string as identification instead of the actual hostname.
+
If non-empty, will use this string as identification instead of the actual hostname.
--iptables-masquerade-bit int32 Default: 14
-
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
+
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
--iptables-min-sync-period duration Default: 1s
-
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--iptables-sync-period duration Default: 30s
-
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
-
--ipvs-exclude-cidrs stringSlice
+
--ipvs-exclude-cidrs strings
-
A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules.
+
A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules.
--ipvs-min-sync-period duration
-
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--ipvs-scheduler string
-
The ipvs scheduler type when proxy mode is ipvs
+
The ipvs scheduler type when proxy mode is ipvs
--ipvs-strict-arp
-
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
+
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
--ipvs-sync-period duration Default: 30s
-
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-tcp-timeout duration
-
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-tcpfin-timeout duration
-
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-udp-timeout duration
-
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
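A sketch of an IPVS-mode deployment using the flags above ('rr', round-robin, is one IPVS scheduler name; the values are illustrative):

```shell
kube-proxy \
  --proxy-mode=ipvs \
  --ipvs-scheduler=rr \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=30s \
  --ipvs-strict-arp
```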
--kube-api-burst int32 Default: 10
-
Burst to use while talking with kubernetes apiserver
+
Burst to use while talking with kubernetes apiserver
--metrics-bind-address ipport Default: 127.0.0.1:10249
-
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
+
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
-
--nodeport-addresses stringSlice
+
--nodeport-addresses strings
-
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
+
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
--oom-score-adj int32 Default: -999
-
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
+
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
--profiling
-
If true enables profiling via web interface on /debug/pprof handler.
+
If true enables profiling via web interface on /debug/pprof handler.
--proxy-mode ProxyMode
-
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' or 'kernelspace' (windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
+
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' or 'kernelspace' (windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
--proxy-port-range port-range
-
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
+
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--udp-timeout duration Default: 250ms
-
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
+
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
--version version[=true]
-
Print version information and quit
+
Print version information and quit
--write-config-to string
-
If set, write the default configuration values to this file and exit.
+
If set, write the default configuration values to this file and exit.
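--write-config-to pairs naturally with --config: dump the defaults once, edit the file, then run from it (the path is illustrative):

```shell
kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
kube-proxy --config=/tmp/kube-proxy-defaults.yaml
```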
--add-dir-header
-
If true, adds the file directory to the header of the log messages
+
If true, adds the file directory to the header of the log messages
--address string Default: "0.0.0.0"
-
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
+
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
--algorithm-provider string
-
DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider
+
DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider
--alsologtostderr
-
log to standard error as well as files
+
log to standard error as well as files
--authentication-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
-
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
--authentication-tolerate-lookup-failure
-
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths strings
-
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
+
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-unauthorized-ttl duration Default: 30s
-
The duration to cache 'unauthorized' responses from the webhook authorizer.
+
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
+
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
-
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
+
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
-
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--config string
-
The path to the configuration file. The following flags can overwrite fields in this file: --address --port --use-legacy-policy-config --policy-configmap --policy-config-file --algorithm-provider
+
The path to the configuration file. The following flags can overwrite fields in this file: --address --port --use-legacy-policy-config --policy-configmap --policy-config-file --algorithm-provider
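For example (the path is illustrative; --port is one of the flags listed above as able to override fields in the file):

```shell
kube-scheduler --config=/etc/kubernetes/scheduler-config.yaml --port=0
```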
--contention-profiling Default: true
-
DEPRECATED: enable lock contention profiling, if profiling is enabled
+
DEPRECATED: enable lock contention profiling, if profiling is enabled
--experimental-logging-sanitization
-
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
+
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
--hard-pod-affinity-symmetric-weight int32 Default: 1
-
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100.This option was moved to the policy configuration file
+
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100. This option was moved to the policy configuration file
-h, --help
-
help for kube-scheduler
+
help for kube-scheduler
--http2-max-streams-per-connection int
-
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
+
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32 Default: 100
-
DEPRECATED: burst to use while talking with kubernetes apiserver
+
DEPRECATED: burst to use while talking with kubernetes apiserver
--kube-api-content-type string Default: "application/vnd.kubernetes.protobuf"
-
DEPRECATED: content type of requests sent to apiserver.
+
DEPRECATED: content type of requests sent to apiserver.
-
--kube-api-qps float32 Default: 50
+
--kube-api-qps float Default: 50
-
DEPRECATED: QPS to use while talking with kubernetes apiserver
+
DEPRECATED: QPS to use while talking with kubernetes apiserver
--kubeconfig string
-
DEPRECATED: path to kubeconfig file with authorization and master location information.
+
DEPRECATED: path to kubeconfig file with authorization and master location information.
--leader-elect Default: true
-
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
+
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration Default: 15s
-
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
+
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration Default: 10s
-
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock string Default: "leases"
-
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
+
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
--leader-elect-resource-namespace string Default: "kube-system"
-
The namespace of resource object that is used for locking during leader election.
+
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration Default: 2s
-
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
+
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
--lock-object-namespace string Default: "kube-system"
-
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
+
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
-
--log-backtrace-at traceLocation Default: :0
+
--log-backtrace-at <a string in the form 'file:N'> Default: :0
-
when logging hits line file:N, emit a stack trace
+
when logging hits line file:N, emit a stack trace
--log-dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory
--log-file string
-
If non-empty, use this log file
+
If non-empty, use this log file
--log-file-max-size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration Default: 5s
-
Maximum number of seconds between log flushes
+
Maximum number of seconds between log flushes
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
+
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
--logtostderr Default: true
-
log to standard error instead of files
+
log to standard error instead of files
--master string
-
The address of the Kubernetes API server (overrides any value in kubeconfig)
+
The address of the Kubernetes API server (overrides any value in kubeconfig)
--one-output
-
If true, only write logs to their native severity level (vs also writing to each lower severity level
+
If true, only write logs to their native severity level (vs also writing to each lower severity level)
--permit-port-sharing
-
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
--policy-config-file string
-
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true. Note: The scheduler will fail if this is combined with Plugin configs
--policy-configmap string
-
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'. Note: The scheduler will fail if this is combined with Plugin configs
--policy-configmap-namespace string     Default: "kube-system"
-
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty. Note: The scheduler will fail if this is combined with Plugin configs
--port int Default: 10251
-
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
+
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
--profiling Default: true
-
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
+
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
-
--requestheader-allowed-names stringSlice
+
--requestheader-allowed-names strings
-
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
-
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--scheduler-name string     Default: "default-scheduler"
-
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
+
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
--secure-port int Default: 10259
-
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
+
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
-
If true, avoid header prefixes in the log messages
+
If true, avoid header prefixes in the log messages
--skip-log-headers
-
If true, avoid headers when opening log files
+
If true, avoid headers when opening log files
-
--stderrthreshold severity Default: 2
+
--stderrthreshold int Default: 2
-
logs at or above this threshold go to stderr
+
logs at or above this threshold go to stderr
--tls-cert-file string
-
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
-
--tls-cipher-suites stringSlice
+
--tls-cipher-suites strings
-
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string
-
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
-
File containing the default x509 private key matching --tls-cert-file.
+
File containing the default x509 private key matching --tls-cert-file.
-
--tls-sni-cert-key namedCertKey Default: []
+
--tls-sni-cert-key string
-
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
+
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--use-legacy-policy-config
-
DEPRECATED: when set to true, scheduler will ignore policy ConfigMap and uses policy config file. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: when set to true, scheduler will ignore policy ConfigMap and uses policy config file. Note: The scheduler will fail if this is combined with Plugin configs
-
-v, --v Level
+
-v, --v int
-
number for the log level verbosity
+
number for the log level verbosity
--version version[=true]
-
Print version information and quit
+
Print version information and quit
-
--vmodule moduleSpec
+
--vmodule <comma-separated 'pattern=N' settings>
-
comma-separated list of pattern=N settings for file-filtered logging
+
comma-separated list of pattern=N settings for file-filtered logging
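
For example, an illustrative invocation that raises verbosity to 5 only for source files matching `scheduler*` (other required flags omitted):

```shell
kube-scheduler --vmodule='scheduler*=5'
```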
--write-config-to string
-
If set, write the configuration values to this file and exit.
+
If set, write the configuration values to this file and exit.
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md
--- a/content/en/docs/reference/command-line-tools-reference/kubelet.md
+++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md
--container-log-max-files int32     Default: `5`
-
<Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be ≥ 2. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
+
Set the maximum number of container log files that can be present for a container. The number must be ≥ 2. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
--container-log-max-size string Default: `10Mi`
-
<Warning: Beta feature> Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
+
Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
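
Because both flags above are deprecated in favor of the kubelet config file, the equivalent settings belong there instead; a minimal sketch (the config file path is illustrative, field names per the KubeletConfiguration type):

```shell
# Append log-rotation settings to the kubelet config file, then restart the kubelet.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
containerLogMaxFiles: 5
containerLogMaxSize: "10Mi"
EOF
sudo systemctl restart kubelet
```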
@@ -298,13 +298,6 @@ kubelet [flags]
The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The `DynamicKubeletConfig` feature gate must be enabled to pass this flag; this gate currently defaults to `true` because the feature is beta.
-
-
--enable-cadvisor-json-endpoints Default: `false`
-
-
-
Enable cAdvisor json `/spec` and `/stats/*` endpoints. This flag has no effect on the /stats/summary endpoint. (DEPRECATED: will be removed in a future version)
-
-
--enable-controller-attach-detach Default: `true`
@@ -462,7 +455,6 @@ AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
-CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md
index a9f1550659..dbad6f5cf2 100644
--- a/content/en/docs/reference/kubectl/overview.md
+++ b/content/en/docs/reference/kubectl/overview.md
@@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the
This overview covers `kubectl` syntax, describes the command operations, and provides common examples.
For details about each command, including all the supported flags and subcommands, see the
[kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation.
-For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/).
+For installation instructions see [installing kubectl](/docs/tasks/tools/).
diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md
index 2c2a4c001c..41f84c6540 100644
--- a/content/en/docs/reference/labels-annotations-taints.md
+++ b/content/en/docs/reference/labels-annotations-taints.md
@@ -198,6 +198,15 @@ The kubelet can set this annotation on a Node to denote its configured IPv4 addr
When kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.
+## batch.kubernetes.io/job-completion-index
+
+Example: `batch.kubernetes.io/job-completion-index: "3"`
+
+Used on: Pod
+
+The Job controller in the kube-controller-manager sets this annotation for Pods
+created with Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
+
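+One way to see which index the control plane assigned to each Pod (the Job name `indexed-job` is illustrative):
+
+```shell
+kubectl get pods -l job-name=indexed-job \
+  -o custom-columns='NAME:.metadata.name,INDEX:.metadata.annotations.batch\.kubernetes\.io/job-completion-index'
+```
+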
## kubectl.kubernetes.io/default-container
Example: `kubectl.kubernetes.io/default-container: "front-end-app"`
diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md
index 7754d7cb7d..8d5cc24208 100644
--- a/content/en/docs/reference/scheduling/config.md
+++ b/content/en/docs/reference/scheduling/config.md
@@ -181,8 +181,6 @@ that are not enabled by default:
- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
the allocated resources.
Extension points: `Score`.
-- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
- Extension points: `PreScore`, `Score`.
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
for the node.
Extension points: `Filter`.
diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md
index a065462baf..1648cc4e9e 100644
--- a/content/en/docs/setup/best-practices/certificates.md
+++ b/content/en/docs/setup/best-practices/certificates.md
@@ -9,7 +9,7 @@ weight: 40
Kubernetes requires PKI certificates for authentication over TLS.
-If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
+If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
This page explains the certificates that your cluster requires.
@@ -74,7 +74,7 @@ Required certificates:
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
-[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)
+[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
@@ -100,7 +100,7 @@ For kubeadm users only:
### Certificate paths
-Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)).
+Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)).
Paths should be specified using the given argument regardless of location.
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md
index 107ee2d0f7..8f51a3bd06 100644
--- a/content/en/docs/setup/best-practices/multiple-zones.md
+++ b/content/en/docs/setup/best-practices/multiple-zones.md
@@ -59,7 +59,7 @@ When nodes start up, the kubelet on each node automatically adds
{{< glossary_tooltip text="labels" term_id="label" >}} to the Node object
that represents that specific kubelet in the Kubernetes API.
These labels can include
-[zone information](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone).
+[zone information](/docs/reference/labels-annotations-taints/#topologykubernetesiozone).
If your cluster spans multiple zones or regions, you can use node labels
in conjunction with
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index e59b497302..ba6827d833 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -63,7 +63,7 @@ configuration, or reinstall it using automation.
### containerd
-This section contains the necessary steps to use `containerd` as CRI runtime.
+This section contains the necessary steps to use containerd as a CRI runtime.
Use the following commands to install Containerd on your system:
@@ -92,170 +92,62 @@ sudo sysctl --system
Install containerd:
{{< tabs name="tab-cri-containerd-installation" >}}
-{{% tab name="Ubuntu 16.04" %}}
+{{% tab name="Linux" %}}
-```shell
-# (Install containerd)
-## Set up the repository
-### Install packages to allow apt to use a repository over HTTPS
-sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
-```
+1. Install the `containerd.io` package from the official Docker repositories. Instructions for setting up the Docker repository for your respective Linux distribution and installing the `containerd.io` package can be found at [Install Docker Engine](https://docs.docker.com/engine/install/#server).
-```shell
-## Add Docker's official GPG key
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
-```
+2. Configure containerd:
-```shell
-## Add Docker apt repository.
-sudo add-apt-repository \
- "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
- $(lsb_release -cs) \
- stable"
-```
+ ```shell
+ sudo mkdir -p /etc/containerd
+ containerd config default | sudo tee /etc/containerd/config.toml
+ ```
-```shell
-## Install containerd
-sudo apt-get update && sudo apt-get install -y containerd.io
-```
+3. Restart containerd:
-```shell
-# Configure containerd
-sudo mkdir -p /etc/containerd
-containerd config default | sudo tee /etc/containerd/config.toml
-```
+ ```shell
+ sudo systemctl restart containerd
+ ```
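+
+4. Optionally, verify that containerd is running (the `ctr` client ships with containerd):
+
+   ```shell
+   sudo systemctl status containerd --no-pager
+   sudo ctr version
+   ```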
-```shell
-# Restart containerd
-sudo systemctl restart containerd
-```
-{{% /tab %}}
-{{% tab name="Ubuntu 18.04/20.04" %}}
-
-```shell
-# (Install containerd)
-sudo apt-get update && sudo apt-get install -y containerd
-```
-
-```shell
-# Configure containerd
-sudo mkdir -p /etc/containerd
-containerd config default | sudo tee /etc/containerd/config.toml
-```
-
-```shell
-# Restart containerd
-sudo systemctl restart containerd
-```
-{{% /tab %}}
-{{% tab name="Debian 9+" %}}
-
-```shell
-# (Install containerd)
-## Set up the repository
-### Install packages to allow apt to use a repository over HTTPS
-sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
-```
-
-```shell
-## Add Docker's official GPG key
-curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
-```
-
-```shell
-## Add Docker apt repository.
-sudo add-apt-repository \
- "deb [arch=amd64] https://download.docker.com/linux/debian \
- $(lsb_release -cs) \
- stable"
-```
-
-```shell
-## Install containerd
-sudo apt-get update && sudo apt-get install -y containerd.io
-```
-
-```shell
-# Set default containerd configuration
-sudo mkdir -p /etc/containerd
-containerd config default | sudo tee /etc/containerd/config.toml
-```
-
-```shell
-# Restart containerd
-sudo systemctl restart containerd
-```
-{{% /tab %}}
-{{% tab name="CentOS/RHEL 7.4+" %}}
-
-```shell
-# (Install containerd)
-## Set up the repository
-### Install required packages
-sudo yum install -y yum-utils device-mapper-persistent-data lvm2
-```
-
-```shell
-## Add docker repository
-sudo yum-config-manager \
- --add-repo \
- https://download.docker.com/linux/centos/docker-ce.repo
-```
-
-```shell
-## Install containerd
-sudo yum update -y && sudo yum install -y containerd.io
-```
-
-```shell
-## Configure containerd
-sudo mkdir -p /etc/containerd
-containerd config default | sudo tee /etc/containerd/config.toml
-```
-
-```shell
-# Restart containerd
-sudo systemctl restart containerd
-```
{{% /tab %}}
{{% tab name="Windows (PowerShell)" %}}
-
Start a PowerShell session, set `$Version` to the desired version (for example: `$Version=1.4.3`), and then run the following commands:
-
-```powershell
-# (Install containerd)
-# Download containerd
-curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
-tar.exe xvf .\containerd-windows-amd64.tar.gz
-```
+1. Download containerd:
-```powershell
-# Extract and configure
-Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
-cd $Env:ProgramFiles\containerd\
-.\containerd.exe config default | Out-File config.toml -Encoding ascii
+ ```powershell
+ curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
+ tar.exe xvf .\containerd-windows-amd64.tar.gz
+ ```
-# Review the configuration. Depending on setup you may want to adjust:
-# - the sandbox_image (Kubernetes pause image)
-# - cni bin_dir and conf_dir locations
-Get-Content config.toml
+2. Extract and configure:
-# (Optional - but highly recommended) Exclude containerd form Windows Defender Scans
-Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
-```
+ ```powershell
+ Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
+ cd $Env:ProgramFiles\containerd\
+ .\containerd.exe config default | Out-File config.toml -Encoding ascii
-```powershell
-# Start containerd
-.\containerd.exe --register-service
-Start-Service containerd
-```
+ # Review the configuration. Depending on setup you may want to adjust:
+ # - the sandbox_image (Kubernetes pause image)
+ # - cni bin_dir and conf_dir locations
+ Get-Content config.toml
+
+ # (Optional - but highly recommended) Exclude containerd from Windows Defender Scans
+ Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
+ ```
+
+3. Start containerd:
+
+ ```powershell
+ .\containerd.exe --register-service
+ Start-Service containerd
+ ```
{{% /tab %}}
{{< /tabs >}}
-#### systemd {#containerd-systemd}
+#### Using the `systemd` cgroup driver {#containerd-systemd}
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
@@ -266,6 +158,12 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
SystemdCgroup = true
```
+If you apply this change, make sure to restart containerd again:
+
+```shell
+sudo systemctl restart containerd
+```
+
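+A quick way to confirm that the setting took effect (assuming the default configuration path):
+
+```shell
+sudo containerd config dump | grep SystemdCgroup
+```
+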
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node).
@@ -455,138 +353,38 @@ in sync.
### Docker
-On each of your nodes, install Docker CE.
+1. On each of your nodes, install Docker for your Linux distribution per [Install Docker Engine](https://docs.docker.com/engine/install/#server). You can find the latest validated version of Docker in this [dependencies](https://git.k8s.io/kubernetes/build/dependencies.yaml) file.
-The Kubernetes release notes list which versions of Docker are compatible
-with that version of Kubernetes.
+2. Configure the Docker daemon, in particular to use systemd for the management of the container’s cgroups.
-Use the following commands to install Docker on your system:
+ ```shell
+ sudo mkdir /etc/docker
+   cat <<EOF | sudo tee /etc/docker/daemon.json
+   {
+     "exec-opts": ["native.cgroupdriver=systemd"],
+     "log-driver": "json-file",
+     "log-opts": {
+       "max-size": "100m"
+     },
+     "storage-driver": "overlay2"
+   }
+   EOF
+   ```
+
-{{< tabs name="tab-cri-docker-installation" >}}
-{{% tab name="Ubuntu 16.04+" %}}
+ {{< note >}}
+ `overlay2` is the preferred storage driver for systems running Linux kernel version 4.0 or higher, or RHEL or CentOS using version 3.10.0-514 and above.
+ {{< /note >}}
-```shell
-# (Install Docker CE)
-## Set up the repository:
-### Install packages to allow apt to use a repository over HTTPS
-sudo apt-get update && sudo apt-get install -y \
- apt-transport-https ca-certificates curl software-properties-common gnupg2
-```
+3. Restart Docker and enable on boot:
-```shell
-# Add Docker's official GPG key:
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
-```
+ ```shell
+ sudo systemctl enable docker
+ sudo systemctl daemon-reload
+ sudo systemctl restart docker
+ ```
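+
+4. Optionally, confirm that Docker now reports the systemd cgroup driver (the output should include `Cgroup Driver: systemd`):
+
+   ```shell
+   sudo docker info | grep -i "cgroup driver"
+   ```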
-```shell
-# Add the Docker apt repository:
-sudo add-apt-repository \
- "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
- $(lsb_release -cs) \
- stable"
-```
-
-```shell
-# Install Docker CE
-sudo apt-get update && sudo apt-get install -y \
- containerd.io=1.2.13-2 \
- docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
- docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
-```
-
-```shell
-## Create /etc/docker
-sudo mkdir /etc/docker
-```
-
-```shell
-# Set up the Docker daemon
-cat <<EOF | sudo tee /etc/docker/daemon.json
-{
-  "exec-opts": ["native.cgroupdriver=systemd"],
-  "log-driver": "json-file",
-  "log-opts": {
-    "max-size": "100m"
-  },
-  "storage-driver": "overlay2"
-}
-EOF
-```
-{{< /tabs >}}
-
-If you want the `docker` service to start on boot, run the following command:
-
-```shell
-sudo systemctl enable docker
-```
-
-Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
-for more information.
+{{< note >}}
+For more information refer to
+ - [Configure the Docker daemon](https://docs.docker.com/config/daemon/)
+ - [Control Docker with systemd](https://docs.docker.com/config/daemon/systemd/)
+{{< /note >}}
diff --git a/content/en/docs/setup/production-environment/tools/kops.md b/content/en/docs/setup/production-environment/tools/kops.md
index 13a2474600..4afab697e4 100644
--- a/content/en/docs/setup/production-environment/tools/kops.md
+++ b/content/en/docs/setup/production-environment/tools/kops.md
@@ -23,7 +23,7 @@ kops is an automated provisioning system:
## {{% heading "prerequisites" %}}
-* You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed.
+* You must have [kubectl](/docs/tasks/tools/) installed.
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 6516a18825..a9b37e8167 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -137,7 +137,7 @@ is not supported by kubeadm.
### More information
-For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
+For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/).
To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 3f6e991eac..4f9d9c6ce7 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -160,7 +160,7 @@ kubelet and the control plane is supported, but the kubelet version may never ex
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
-For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
+For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/).
{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.
@@ -175,16 +175,34 @@ For more information on version skews, see:
{{< tabs name="k8s_install" >}}
{{% tab name="Debian-based distributions" %}}
-```bash
-sudo apt-get update && sudo apt-get install -y apt-transport-https curl
-curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
-deb https://apt.kubernetes.io/ kubernetes-xenial main
-EOF
-sudo apt-get update
-sudo apt-get install -y kubelet kubeadm kubectl
-sudo apt-mark hold kubelet kubeadm kubectl
-```
+1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
+
+   ```shell
+   sudo apt-get update
+   sudo apt-get install -y apt-transport-https ca-certificates curl
+   ```
+
+2. Download the Google Cloud public signing key:
+
+   ```shell
+   sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
+   ```
+
+3. Add the Kubernetes `apt` repository:
+
+   ```shell
+   echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
+   ```
+
+4. Update `apt` package index, install kubelet, kubeadm and kubectl, and pin their version:
+
+   ```shell
+   sudo apt-get update
+   sudo apt-get install -y kubelet kubeadm kubectl
+   sudo apt-mark hold kubelet kubeadm kubectl
+   ```
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 57f867d0db..5c33b0a94b 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -221,7 +221,7 @@ On Windows, you can use the following settings to configure Services and load ba
#### IPv4/IPv6 dual-stack
-You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
+You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
{{< note >}}
On Windows, using IPv6 with Kubernetes require Windows Server, version 2004 (kernel version 10.0.19041.610) or later.
@@ -237,7 +237,7 @@ Overlay (VXLAN) networks on Windows do not support dual-stack networking today.
Windows is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes.
-#### Compute {compute-limitations}
+#### Compute {#compute-limitations}
##### Resource management and process isolation
@@ -297,7 +297,7 @@ As a result, the following storage functionality is not supported on Windows nod
* NFS based storage/volume support
* Expanding the mounted volume (resizefs)
-#### Networking {networking-limitations}
+#### Networking {#networking-limitations}
Windows Container Networking differs in some important ways from Linux networking. The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) contains additional details and background.
diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md
index adbdb7c48e..54146007a0 100644
--- a/content/en/docs/setup/release/notes.md
+++ b/content/en/docs/setup/release/notes.md
@@ -1906,7 +1906,7 @@ filename | sha512 hash
- Promote SupportNodePidsLimit to GA to provide node to pod pid isolation
Promote SupportPodPidsLimit to GA to provide ability to limit pids per pod ([#94140](https://github.com/kubernetes/kubernetes/pull/94140), [@derekwaynecarr](https://github.com/derekwaynecarr)) [SIG Node and Testing]
- Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
-- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
+- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](/docs/reference/using-api/server-side-apply/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Set CSIMigrationvSphere feature gates to beta.
Users should enable CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workload from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to vSphere CSI Driver.
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
index 191ec0c2fe..0275cadabf 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
@@ -192,7 +192,7 @@ func main() {
}
```
-If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](#accessing-the-api-from-within-a-pod).
+If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod).
#### Python client
diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
index 1e2cc422e4..6e9dc302c4 100644
--- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
@@ -34,7 +34,7 @@ If your cluster was deployed using the `kubeadm` tool, refer to
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
-[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
+[install the latest version of `kubectl`](/docs/tasks/tools/).
### Manual deployments
@@ -52,7 +52,7 @@ You should manually update the control plane following this sequence:
- cloud controller manager, if you use one
At this point you should
-[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
+[install the latest version of `kubectl`](/docs/tasks/tools/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index d9d8a5929e..56a6c25e9a 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -170,36 +170,7 @@ controllerManager:
### Create certificate signing requests (CSR)
-You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`.
-
-If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved.
-Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command.
-The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
-
-```shell
-sudo kubeadm certs renew apiserver --use-api &
-```
-The output is similar to this:
-```
-[1] 2890
-[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created
-```
-
-### Approve certificate signing requests (CSR)
-
-If you set up an external signer, certificate signing requests (CSRs) are automatically approved.
-
-Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. e.g.
-
-```shell
-kubectl certificate approve kubeadm-cert-kube-apiserver-ld526
-```
-The output is similar to this:
-```shell
-certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved
-```
-
-You can view a list of pending certificates with `kubectl get csr`.
+See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA
diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
index b0e272afa0..96f55c3950 100644
--- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
+++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
@@ -202,4 +202,7 @@ verify that the pods were scheduled by the desired schedulers.
```shell
kubectl get events
```
+You can also use a [custom scheduler configuration](/docs/reference/scheduling/config/#multiple-profiles)
+or a custom container image for the cluster's main scheduler by modifying its static pod manifest
+on the relevant control plane nodes.
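+
+A minimal sketch of such a configuration with two profiles (the file path and the `no-scoring-scheduler` profile name are illustrative):
+
+```shell
+cat <<EOF | sudo tee /etc/kubernetes/my-scheduler-config.yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+profiles:
+  - schedulerName: default-scheduler
+  - schedulerName: no-scoring-scheduler
+    plugins:
+      preScore:
+        disabled:
+          - name: '*'
+      score:
+        disabled:
+          - name: '*'
+EOF
+```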
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
index de9ab8181c..62251eb222 100644
--- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
+++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
@@ -1129,8 +1129,6 @@ resources that have the scale subresource enabled.
### Categories
-{{< feature-state state="beta" for_k8s_version="v1.10" >}}
-
Categories is a list of grouped resources the custom resource belongs to (e.g. `all`).
You can use `kubectl get <category-name>` to list the resources belonging to the category.
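
For example, if a custom resource belongs to the `all` category, it shows up alongside the built-in workload resources:

```shell
kubectl get all
```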
diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
index ce7da0b453..951848b1b4 100644
--- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
+++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
@@ -2,7 +2,7 @@
title: Coarse Parallel Processing Using a Work Queue
min-kubernetes-server-version: v1.8
content_type: task
-weight: 30
+weight: 20
---
diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
index 7f3c30121e..b4cb4e641f 100644
--- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
+++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
@@ -2,7 +2,7 @@
title: Fine Parallel Processing Using a Work Queue
content_type: task
min-kubernetes-server-version: v1.8
-weight: 40
+weight: 30
---
diff --git a/content/en/docs/tasks/job/indexed-parallel-processing-static.md b/content/en/docs/tasks/job/indexed-parallel-processing-static.md
new file mode 100644
index 0000000000..57d771ef14
--- /dev/null
+++ b/content/en/docs/tasks/job/indexed-parallel-processing-static.md
@@ -0,0 +1,176 @@
+---
+title: Indexed Job for Parallel Processing with Static Work Assignment
+content_type: task
+min-kubernetes-server-version: v1.21
+weight: 30
+---
+
+{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+
+
+
+
+In this example, you will run a Kubernetes Job that uses multiple parallel
+worker processes.
+Each worker is a different container running in its own Pod. The Pods have an
+_index number_ that the control plane sets automatically, which allows each Pod
+to identify which part of the overall task to work on.
+
+The pod index is available in the {{< glossary_tooltip text="annotation" term_id="annotation" >}}
+`batch.kubernetes.io/job-completion-index` as a string representing its
+decimal value. In order for the containerized task process to obtain this index,
+you can publish the value of the annotation using the [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api)
+mechanism.
+For convenience, the control plane automatically sets the downward API to
+expose the index in the `JOB_COMPLETION_INDEX` environment variable.
+
+Here is an overview of the steps in this example:
+
+1. **Create an image that can read the pod index**. You might modify the worker
+ program or add a script wrapper.
+2. **Start an Indexed Job**. The downward API allows you to pass the annotation
+ as an environment variable or file to the container.
+
+## {{% heading "prerequisites" %}}
+
+Be familiar with the basic,
+non-parallel use of [Job](/docs/concepts/workloads/controllers/job/).
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+To be able to create Indexed Jobs, make sure to enable the `IndexedJob`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
+and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
+
+
+
+## Choose an approach
+
+To access the work item from the worker program, you have a few options:
+
+1. Read the `JOB_COMPLETION_INDEX` environment variable. The Job
+ {{< glossary_tooltip text="controller" term_id="controller" >}}
+ automatically links this variable to the annotation containing the completion
+ index.
+1. Read a file that contains the completion index.
+1. Assuming that you can't modify the program, you can wrap it with a script
+ that reads the index using any of the methods above and converts it into
+ something that the program can use as input.
+
+For this example, imagine that you chose option 3 and you want to run the
+[rev](https://man7.org/linux/man-pages/man1/rev.1.html) utility. This
+program accepts a file as an argument and prints its content reversed.
+
+```shell
+rev data.txt
+```
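+
+For instance, if `data.txt` contained the single word `hello`, that run would print `olleh`:
+
+```shell
+echo 'hello' > data.txt
+rev data.txt    # prints: olleh
+```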
+
+For this example, you'll use the `rev` tool from the
+[`busybox`](https://hub.docker.com/_/busybox) container image.
+
+## Define an Indexed Job
+
+Here is a job definition. You'll need to edit the container image to match your
+preferred registry.
+
+{{< codenew language="yaml" file="application/job/indexed-job.yaml" >}}
+
+In the example above, you use the built-in `JOB_COMPLETION_INDEX` environment
+variable set by the Job controller for all containers. An [init container](/docs/concepts/workloads/pods/init-containers/)
+maps the index to a static value and writes it to a file that is shared with the
+container running the worker through an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
+Optionally, you can [define your own environment variable through the downward
+API](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
+to publish the index to containers. You can also choose to load a list of values
+from a [ConfigMap as an environment variable or file](/docs/tasks/configure-pod-container/configure-pod-configmap/).
+
+Alternatively, you can directly [use the downward API to pass the annotation
+value as a volume file](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#store-pod-fields),
+as shown in the following example:
+
+{{< codenew language="yaml" file="application/job/indexed-job-vol.yaml" >}}
+
+## Running the Job
+
+Now run the Job:
+
+```shell
+kubectl apply -f ./indexed-job.yaml
+```
+
+Wait a bit, then check on the job:
+
+```shell
+kubectl describe jobs/indexed-job
+```
+
+The output is similar to:
+
+```
+Name: indexed-job
+Namespace: default
+Selector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
+Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
+ job-name=indexed-job
+Annotations:       <none>
+Parallelism: 3
+Completions: 5
+Start Time: Thu, 11 Mar 2021 15:47:34 +0000
+Pods Statuses: 2 Running / 3 Succeeded / 0 Failed
+Completed Indexes: 0-2
+Pod Template:
+ Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
+ job-name=indexed-job
+ Init Containers:
+ input:
+ Image: docker.io/library/bash
+    Port:       <none>
+    Host Port:  <none>
+ Command:
+ bash
+ -c
+ items=(foo bar baz qux xyz)
+ echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
+
+    Environment:  <none>
+ Mounts:
+ /input from input (rw)
+ Containers:
+ worker:
+ Image: docker.io/library/busybox
+    Port:       <none>
+    Host Port:  <none>
+ Command:
+ rev
+ /input/data.txt
+    Environment:  <none>
+ Mounts:
+ /input from input (rw)
+ Volumes:
+ input:
+ Type: EmptyDir (a temporary directory that shares a pod's lifetime)
+ Medium:
+    SizeLimit:  <unset>
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj
+ Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h
+ Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz
+ Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5
+ Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj
+```
+
+In this example, the Job runs with a custom value for each index. You can
+inspect the output of the pods:
+
+```shell
+kubectl logs indexed-job-fdhq5 # Change this to match the name of a Pod in your cluster.
+```
+
+The output is similar to:
+
+```
+xuq
+```
\ No newline at end of file
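+
+To inspect every worker's output at once, you can loop over the Job's Pods (the `job-name` label is set by the Job controller, as shown in the `describe` output above):
+
+```shell
+for p in $(kubectl get pods -l job-name=indexed-job -o name); do
+  kubectl logs "$p"
+done
+```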
diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md
index 8f5994929e..fdd309cafb 100644
--- a/content/en/docs/tasks/job/parallel-processing-expansion.md
+++ b/content/en/docs/tasks/job/parallel-processing-expansion.md
@@ -2,7 +2,7 @@
title: Parallel Processing using Expansions
content_type: task
min-kubernetes-server-version: v1.8
-weight: 20
+weight: 50
---
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
index 75c9d56a83..643b57cc3b 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
@@ -16,7 +16,7 @@ preview of what changes `apply` will make.
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
index a51b5664ba..8e0670a89f 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
@@ -12,7 +12,7 @@ explains how those commands are organized and how to use them to manage live obj
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
index 2b97ed271c..87cc423da7 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
@@ -13,7 +13,7 @@ This document explains how to define and manage objects using configuration file
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
index 7c59052ffa..3ea3c50e8d 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
@@ -29,7 +29,7 @@ kubectl apply -k
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
index d558a271ad..b75493aae1 100644
--- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
+++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
@@ -19,7 +19,7 @@ Up to date information on this process can be found at the
* You must have a Kubernetes cluster with cluster DNS enabled.
* If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
* If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
-* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
+* [Install and setup kubectl](/docs/tasks/tools/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
* Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
* If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm.
diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
index 52a55457a2..0789997309 100644
--- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
+++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
@@ -23,7 +23,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
-* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
+* [Install and setup kubectl](/docs/tasks/tools/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* The kubectl user must be bound to the *cluster-admin* role for it to install Service Catalog. To ensure that this is true, run the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-name>
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
index 4f4a1771fe..9854540649 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
@@ -80,10 +80,10 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
-- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
+- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, that's why the latter two methods work.
{{< /note >}}
-In any case, after reloading your shell, kubectl completion should be working.
\ No newline at end of file
+In any case, after reloading your shell, kubectl completion should be working.
diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md
index 12a8d641d8..243dbf4e0d 100644
--- a/content/en/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/en/docs/tasks/tools/install-kubectl-linux.md
@@ -100,15 +100,38 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
### Install using native package management
{{< tabs name="kubectl_install" >}}
-{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
-sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
-curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
-sudo apt-get update
-sudo apt-get install -y kubectl
-{{< /tab >}}
+{{% tab name="Debian-based distributions" %}}
-{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat <<EOF > /etc/yum.repos.d/kubernetes.repo
+1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
+
+ ```shell
+ sudo apt-get update
+ sudo apt-get install -y apt-transport-https ca-certificates curl
+ ```
+
+2. Download the Google Cloud public signing key:
+
+ ```shell
+ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
+ ```
+
+3. Add the Kubernetes `apt` repository:
+
+ ```shell
+ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
+ ```
+
+4. Update `apt` package index with the new repository and install kubectl:
+
+ ```shell
+ sudo apt-get update
+ sudo apt-get install -y kubectl
+ ```
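+
+As an optional check, confirm the installed client version (the output varies with the version you installed):
+
+```shell
+kubectl version --client
+```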
+
+{{% /tab %}}
+
+{{< tab name="Red Hat-based distributions" codelang="bash" >}}
+cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
diff --git a/content/en/docs/tasks/tools/install-kubectl-macos.md b/content/en/docs/tasks/tools/install-kubectl-macos.md
index 605745c630..b4fa864985 100644
--- a/content/en/docs/tasks/tools/install-kubectl-macos.md
+++ b/content/en/docs/tasks/tools/install-kubectl-macos.md
@@ -23,7 +23,7 @@ The following methods exist for installing kubectl on macOS:
- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
-- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk)
+- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk)
### Install kubectl binary with curl on macOS
@@ -157,4 +157,4 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
## {{% heading "whatsnext" %}}
-{{< include "included/kubectl-whats-next.md" >}}
\ No newline at end of file
+{{< include "included/kubectl-whats-next.md" >}}
diff --git a/content/en/docs/tutorials/clusters/seccomp.md b/content/en/docs/tutorials/clusters/seccomp.md
index adb3d9c500..376c349f72 100644
--- a/content/en/docs/tutorials/clusters/seccomp.md
+++ b/content/en/docs/tutorials/clusters/seccomp.md
@@ -37,7 +37,7 @@ profiles that give only the necessary privileges to your container processes.
In order to complete all steps in this tutorial, you must install
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and
-[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples
+[kubectl](/docs/tasks/tools/). This tutorial will show examples
with both alpha (pre-v1.19) and generally available seccomp functionality, so
make sure that your cluster is [configured
correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)
diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
index 7555a58201..b29b352aca 100644
--- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
+++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
@@ -15,10 +15,8 @@ This page provides a real world example of how to configure Redis using a Config
## {{% heading "objectives" %}}
-* Create a `kustomization.yaml` file containing:
- * a ConfigMap generator
- * a Pod resource config using the ConfigMap
-* Apply the directory by running `kubectl apply -k ./`
+* Create a ConfigMap with Redis configuration values
+* Create a Redis Pod that mounts and uses the created ConfigMap
* Verify that the configuration was correctly applied.
@@ -38,82 +36,218 @@ This page provides a real world example of how to configure Redis using a Config
## Real World Example: Configuring Redis using a ConfigMap
-You can follow the steps below to configure a Redis cache using data stored in a ConfigMap.
+Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
-First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file:
-
-{{< codenew file="pods/config/redis-config" >}}
+First create a ConfigMap with an empty configuration block:
```shell
-curl -OL https://k8s.io/examples/pods/config/redis-config
-
-cat <<EOF >./kustomization.yaml
-configMapGenerator:
-- name: example-redis-config
- files:
- - redis-config
+cat <<EOF >./example-redis-config.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: example-redis-config
+data:
+ redis-config: ""
EOF
```
-Add the pod resource config to the `kustomization.yaml`:
+Apply the ConfigMap created above, along with a Redis pod manifest:
+
+```shell
+kubectl apply -f example-redis-config.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+```
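+
+Optionally, wait for the Pod to report ready before moving on:
+
+```shell
+kubectl wait --for=condition=Ready pod/redis --timeout=90s
+```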
+
+Examine the contents of the Redis pod manifest and note the following:
+
+* A volume named `config` is created by `spec.volumes[1]`
+* The `key` and `path` under `spec.volumes[1].items[0]` exposes the `redis-config` key from the
+ `example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
+* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.
+
+This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config`
+ConfigMap above as `/redis-master/redis.conf` inside the Pod.
{{< codenew file="pods/config/redis-pod.yaml" >}}
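+
+Once the Pod is running, you can optionally confirm this wiring by printing the mounted file; at this stage it is empty, matching the blank `redis-config` key:
+
+```shell
+kubectl exec redis -- cat /redis-master/redis.conf
+```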
-```shell
-curl -OL https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+Examine the created objects:
-cat <<EOF >>./kustomization.yaml
-resources:
-- redis-pod.yaml
-EOF
+```shell
+kubectl get pod/redis configmap/example-redis-config
```
-Apply the kustomization directory to create both the ConfigMap and Pod objects:
+You should see the following output:
```shell
-kubectl apply -k .
-```
-
-Examine the created objects by
-```shell
-> kubectl get -k .
-NAME DATA AGE
-configmap/example-redis-config-dgh9dg555m 1 52s
-
NAME READY STATUS RESTARTS AGE
-pod/redis 1/1 Running 0 52s
+pod/redis 1/1 Running 0 8s
+
+NAME DATA AGE
+configmap/example-redis-config 1 14s
```
-In the example, the config volume is mounted at `/redis-master`.
-It uses `path` to add the `redis-config` key to a file named `redis.conf`.
-The file path for the redis config, therefore, is `/redis-master/redis.conf`.
-This is where the image will look for the config file for the redis master.
+Recall that we left the `redis-config` key in the `example-redis-config` ConfigMap blank:
-Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that
-the configuration was correctly applied:
+```shell
+kubectl describe configmap/example-redis-config
+```
+
+You should see an empty `redis-config` key:
+
+```shell
+Name: example-redis-config
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+redis-config:
+```
+
+Use `kubectl exec` to enter the pod and run the `redis-cli` tool to check the current configuration:
```shell
kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It should show the default value of 0:
+
+```shell
+1) "maxmemory"
+2) "0"
+```
+
+Similarly, check `maxmemory-policy`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It should also show its default value of `noeviction`:
+
+```shell
+1) "maxmemory-policy"
+2) "noeviction"
+```
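+
+As a non-interactive alternative, the same parameters can be queried in one command, for example:
+
+```shell
+kubectl exec redis -- redis-cli CONFIG GET maxmemory-policy
+```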
+
+Now let's add some configuration values to the `example-redis-config` ConfigMap:
+
+{{< codenew file="pods/config/example-redis-config.yaml" >}}
+
+Apply the updated ConfigMap:
+
+```shell
+kubectl apply -f example-redis-config.yaml
+```
+
+Confirm that the ConfigMap was updated:
+
+```shell
+kubectl describe configmap/example-redis-config
+```
+
+You should see the configuration values we just added:
+
+```shell
+Name: example-redis-config
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+redis-config:
+----
+maxmemory 2mb
+maxmemory-policy allkeys-lru
+```
+
+Check the Redis Pod again using `redis-cli` via `kubectl exec` to see if the configuration was applied:
+
+```shell
+kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It remains at the default value of 0:
+
+```shell
+1) "maxmemory"
+2) "0"
+```
+
+Similarly, `maxmemory-policy` remains at the `noeviction` default setting:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It returns:
+
+```shell
+1) "maxmemory-policy"
+2) "noeviction"
+```
+
+The configuration values have not changed because the Pod needs to be restarted to pick up updated
+values from associated ConfigMaps. Let's delete and recreate the Pod:
+
+```shell
+kubectl delete pod redis
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+```
+
+Now re-check the configuration values one last time:
+
+```shell
+kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It should now return the updated value of 2097152 (2mb expressed in bytes):
+
+```shell
1) "maxmemory"
2) "2097152"
+```
+
+Similarly, `maxmemory-policy` has also been updated:
+
+```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It now reflects the desired value of `allkeys-lru`:
+
+```shell
1) "maxmemory-policy"
2) "allkeys-lru"
```
-Delete the created pod:
+Clean up your work by deleting the created resources:
+
```shell
-kubectl delete pod redis
+kubectl delete pod/redis configmap/example-redis-config
```
-
-
## {{% heading "whatsnext" %}}
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
-
-
-
-
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
index 1d8a069984..d7687bc7b1 100644
--- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -37,7 +37,7 @@ weight: 10
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
-ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
+ExternalName - Maps the Service to the contents of the externalName field (e.g. `foo.bar.example.com`), by returning a CNAME record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of kube-dns, or CoreDNS version 0.0.8 or higher.
Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another reason there may be no selector is that you are strictly using type: ExternalName.
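
For reference, a minimal ExternalName Service matching the wording above might look like this (the Service name is illustrative):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: foo.bar.example.com
EOF
```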
diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
index 8368d24132..5b01913859 100644
--- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -11,7 +11,7 @@ external IP address.
## {{% heading "prerequisites" %}}
-* Install [kubectl](/docs/tasks/tools/install-kubectl/).
+* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index 0a84483716..36772253f6 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -104,7 +104,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
```shell
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP     1m
- mongo        ClusterIP   10.0.0.151   <none>        6379/TCP    8s
+ mongo        ClusterIP   10.0.0.151   <none>        27017/TCP   8s
```
{{< note >}}
diff --git a/content/en/examples/application/job/indexed-job-vol.yaml b/content/en/examples/application/job/indexed-job-vol.yaml
new file mode 100644
index 0000000000..ed40e1cc44
--- /dev/null
+++ b/content/en/examples/application/job/indexed-job-vol.yaml
@@ -0,0 +1,27 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: 'indexed-job'
+spec:
+ completions: 5
+ parallelism: 3
+ completionMode: Indexed
+ template:
+ spec:
+ restartPolicy: Never
+ containers:
+ - name: 'worker'
+ image: 'docker.io/library/busybox'
+ command:
+ - "rev"
+ - "/input/data.txt"
+ volumeMounts:
+ - mountPath: /input
+ name: input
+ volumes:
+ - name: input
+ downwardAPI:
+ items:
+ - path: "data.txt"
+ fieldRef:
+ fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
\ No newline at end of file
diff --git a/content/en/examples/application/job/indexed-job.yaml b/content/en/examples/application/job/indexed-job.yaml
new file mode 100644
index 0000000000..5b80d35264
--- /dev/null
+++ b/content/en/examples/application/job/indexed-job.yaml
@@ -0,0 +1,35 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: 'indexed-job'
+spec:
+ completions: 5
+ parallelism: 3
+ completionMode: Indexed
+ template:
+ spec:
+ restartPolicy: Never
+ initContainers:
+ - name: 'input'
+ image: 'docker.io/library/bash'
+ command:
+ - "bash"
+ - "-c"
+ - |
+ items=(foo bar baz qux xyz)
+ echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
+ volumeMounts:
+ - mountPath: /input
+ name: input
+ containers:
+ - name: 'worker'
+ image: 'docker.io/library/busybox'
+ command:
+ - "rev"
+ - "/input/data.txt"
+ volumeMounts:
+ - mountPath: /input
+ name: input
+ volumes:
+ - name: input
+ emptyDir: {}
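
A minimal usage sketch for this Indexed Job example, assuming the manifest above is saved locally as `indexed-job.yaml` (the file name is illustrative):

```shell
kubectl apply -f indexed-job.yaml
# the Job controller adds the job-name label to every Pod it creates
kubectl get pods -l job-name=indexed-job
# each worker prints the reversed contents of its per-index data.txt
kubectl logs -l job-name=indexed-job --tail=1
```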
diff --git a/content/en/examples/pods/config/example-redis-config.yaml b/content/en/examples/pods/config/example-redis-config.yaml
new file mode 100644
index 0000000000..5b093b1213
--- /dev/null
+++ b/content/en/examples/pods/config/example-redis-config.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: example-redis-config
+data:
+ redis-config: |
+ maxmemory 2mb
+ maxmemory-policy allkeys-lru
diff --git a/content/es/docs/concepts/containers/runtime-class.md b/content/es/docs/concepts/containers/runtime-class.md
new file mode 100644
index 0000000000..8c53cde87d
--- /dev/null
+++ b/content/es/docs/concepts/containers/runtime-class.md
@@ -0,0 +1,205 @@
+---
+reviewers:
+title: RuntimeClass
+content_type: concept
+weight: 20
+---
+
+
+
+{{< feature-state for_k8s_version="v1.20" state="stable" >}}
+
+Esta página describe el recurso RuntimeClass y el mecanismo de selección del
+motor de ejecución.
+
+RuntimeClass es una característica que permite seleccionar la configuración del
+motor de ejecución para los contenedores. La configuración del motor de ejecución para
+los contenedores se utiliza para ejecutar los contenedores de un Pod.
+
+
+
+
+
+
+## Motivación
+
+Se puede seleccionar un RuntimeClass diferente entre diferentes Pods para
+proporcionar equilibrio entre rendimiento y seguridad. Por ejemplo, si parte de
+la carga de trabajo requiere un alto nivel de garantía de seguridad, se podrían
+planificar esos Pods para ejecutarse en un motor de ejecución que use
+virtualización de hardware. Así se beneficiaría con un mayor aislamiento del motor
+de ejecución alternativo, con el coste de alguna sobrecarga adicional.
+
+También se puede utilizar el RuntimeClass para ejecutar distintos Pods con el
+mismo motor de ejecución pero con distintos parámetros.
+
+## Configuración
+
+1. Configurar la implementación del CRI en los nodos (depende del motor de
+ ejecución)
+2. Crear los recursos RuntimeClass correspondientes.
+
+### 1. Configurar la implementación del CRI en los nodos
+
+Las configuraciones disponibles al utilizar RuntimeClass dependen de la
+implementación de la Interfaz del Motor de Ejecución de Contenedores (CRI). Véase
+la sección [Configuración del CRI](#configuración-del-cri) para más
+información sobre cómo configurar la implementación del CRI.
+
+{{< note >}}
+RuntimeClass por defecto asume una configuración de nodos homogénea para todo el
+clúster (lo que significa que todos los nodos están configurados de la misma
+forma para el motor de ejecución de los contenedores). Para soportar configuraciones
+heterogéneas de nodos, véase [Planificación](#planificación) más abajo.
+{{< /note >}}
+
+Las configuraciones tienen un nombre de `handler` (manipulador) correspondiente, referenciado
+por la RuntimeClass. El `handler` debe ser una etiqueta DNS 1123 válida
+(alfanumérico + carácter `-`).
+
+### 2. Crear los recursos RuntimeClass correspondientes.
+
+Cada configuración establecida en el paso 1 tiene un nombre de `handler`, que
+identifica a dicha configuración. Para cada `handler`, hay que crear un objeto
+RuntimeClass correspondiente.
+
+Actualmente el recurso RuntimeClass sólo tiene dos campos significativos: el
+nombre del RuntimeClass (`metadata.name`) y el `handler`. La
+definición del objeto se parece a ésta:
+
+```yaml
+apiVersion: node.k8s.io/v1 # La RuntimeClass se define en el grupo node.k8s.io
+kind: RuntimeClass
+metadata:
+ name: myclass # Nombre por el que se referenciará la RuntimeClass
+ # no contiene espacio de nombres
+handler: myconfiguration # El nombre de la configuración CRI correspondiente
+```
+
+El nombre de un objeto RuntimeClass debe ser un [nombre de subdominio
+DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+válido.
+
+{{< note >}}
+Se recomienda que las operaciones de escritura de la RuntimeClass
+(creación/modificación/parcheo/eliminación) se restrinjan al administrador del
+clúster. Habitualmente es el valor por defecto. Véase [Visión general de la
+Autorización](/docs/reference/access-authn-authz/authorization/) para más
+detalles.
+{{< /note >}}
+
+## Uso
+
+Una vez configuradas las RuntimeClasses para el clúster, utilizarlas es muy
+sencillo. Solo hay que especificar un `runtimeClassName` en la especificación del Pod.
+Por ejemplo:
+
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ runtimeClassName: myclass
+ # ...
+```
+
+Así se informa a Kubelet del nombre de la RuntimeClass a utilizar para
+este pod. Si dicha RuntimeClass no existe, o el CRI no puede ejecutar el
+`handler` correspondiente, el pod entrará en la
+[fase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) final `Failed`.
+Se puede buscar por el correspondiente
+[evento](/docs/tasks/debug-application-cluster/debug-application-introspection/)
+con el mensaje de error.
+
+Si no se especifica ninguna `runtimeClassName`, se usará el RuntimeHandler por
+defecto, lo que equivale al comportamiento cuando la opción RuntimeClass está
+deshabilitada.
+
+### Configuración del CRI
+
+Para más detalles sobre cómo configurar los motores de ejecución del CRI, véase
+[instalación del CRI](/docs/setup/production-environment/container-runtimes/).
+
+#### dockershim
+
+El CRI dockershim incorporado por Kubernetes no soporta manejadores del motor de
+ejecución.
+
+#### {{< glossary_tooltip term_id="containerd" >}}
+
+Los `handlers` del motor de ejecución se configuran mediante la configuración
+de containerd en `/etc/containerd/config.toml`. Los `handlers` válidos se
+configuran en la sección de motores de ejecución:
+
+```
+[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
+```
+
+Véase la configuración de containerd para más detalles:
+https://github.com/containerd/cri/blob/master/docs/config.md
+
+#### {{< glossary_tooltip term_id="cri-o" >}}
+
+Los `handlers` del motor de ejecución se configuran a través de la
+configuración del CRI-O en `/etc/crio/crio.conf`. Los manejadores válidos se
+configuran en la [tabla
+crio.runtime](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table)
+
+```
+[crio.runtime.runtimes.${HANDLER_NAME}]
+ runtime_path = "${PATH_TO_BINARY}"
+```
+
+Véase la [documentación de la
+configuración](https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md)
+de CRI-O para más detalles.
+
+## Planificación
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+Especificando el campo `scheduling` en una RuntimeClass se pueden establecer
+restricciones para asegurar que los Pods ejecutándose con dicha RuntimeClass se
+planifican en los nodos que la soportan.
+
+Para asegurar que los pods sean asignados en nodos que soportan una RuntimeClass
+determinada, ese conjunto de nodos debe tener una etiqueta común que se
+selecciona en el campo `runtimeclass.scheduling.nodeSelector`. El nodeSelector
+de la RuntimeClass se combina con el nodeSelector del pod durante la admisión,
+haciéndose efectiva la intersección del conjunto de nodos seleccionados por
+ambos. Si hay conflicto, el pod se rechazará.
+
+Si los nodos soportados se marcan para evitar que los pods con otra RuntimeClass
+se ejecuten en el nodo, se pueden añadir `tolerations` al RuntimeClass. Igual
+que con el `nodeSelector`, las tolerancias se mezclan con las tolerancias del
+pod durante la admisión, haciéndose efectiva la unión del conjunto de nodos
+tolerados por ambos.
+
+Para saber más sobre configurar el selector de nodos y las tolerancias, véase
+[Asignando Pods a Nodos](/docs/concepts/scheduling-eviction/assign-pod-node/).
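+
+Como boceto ilustrativo (la etiqueta `runtime` y los valores son suposiciones, no parte de esta página), una RuntimeClass con restricciones de planificación podría declararse así:
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+  name: myclass
+handler: myconfiguration
+scheduling:
+  nodeSelector:
+    runtime: myconfiguration
+  tolerations:
+  - key: runtime
+    operator: Equal
+    value: myconfiguration
+    effect: NoSchedule
+EOF
+```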
+
+### Sobrecarga del Pod
+
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+Se pueden especificar recursos de _sobrecarga_ adicional que se asocian a los
+Pods que estén ejecutándose. Declarar la sobrecarga permite al clúster (incluido
+el planificador) contabilizarlo al tomar decisiones sobre los Pods y los
+recursos. Para utilizar la sobrecarga de pods, se debe haber habilitado la
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+PodOverhead (lo está por defecto).
+
+La sobrecarga de pods se define en la RuntimeClass a través del campo
+`overhead`. Con este campo se puede especificar la sobrecarga de los pods en
+ejecución que utilizan esta RuntimeClass para asegurar que estas sobrecargas se
+cuentan en Kubernetes.
+
+## {{% heading "whatsnext" %}}
+
+
+- [Diseño de RuntimeClass](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
+- [Diseño de programación de RuntimeClass](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
+- Leer sobre el concepto de [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
+- [Diseño de capacidad de PodOverhead](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
diff --git a/content/es/docs/concepts/policy/_index.md b/content/es/docs/concepts/policy/_index.md
index 3182c94f7d..d0f16fc4ad 100755
--- a/content/es/docs/concepts/policy/_index.md
+++ b/content/es/docs/concepts/policy/_index.md
@@ -1,4 +1,8 @@
---
-title: "Políticas"
+title: Políticas
weight: 90
----
\ No newline at end of file
+description: >
+ Políticas configurables que se aplican a grupos de recursos.
+---
+
+La sección de Políticas describe las diferentes políticas configurables que se aplican a grupos de recursos:
diff --git a/content/es/docs/concepts/policy/limit-range.md b/content/es/docs/concepts/policy/limit-range.md
new file mode 100644
index 0000000000..22d4c74f51
--- /dev/null
+++ b/content/es/docs/concepts/policy/limit-range.md
@@ -0,0 +1,70 @@
+---
+reviewers:
+- raelga
+title: Rangos de límites (Limit Ranges)
+description: >
+ Aplica límites de recursos a un Namespace para restringir y garantizar la asignación y consumo de recursos informáticos.
+content_type: concept
+weight: 10
+---
+
+
+
+### Contexto
+
+Por defecto, los contenedores se ejecutan sin restricciones sobre los [recursos informáticos disponibles en un clúster de Kubernetes](/docs/concepts/configuration/manage-resources-containers/).
+Si el {{< glossary_tooltip text="Nodo" term_id="node" >}} dispone de los recursos informáticos, un {{< glossary_tooltip text="Pod" term_id="pod" >}} o sus {{< glossary_tooltip text="Contenedores" term_id="container" >}} tienen permitido consumir por encima de la cuota solicitada si no superan el límite establecido en su especificación.
+Existe la preocupación de que un Pod o Contenedor pueda monopolizar todos los recursos disponibles.
+
+### Utilidad
+
+Aplicando restricciones de asignación de recursos, los administradores de clústeres se aseguran del cumplimiento del consumo de recursos por espacio de nombre ({{< glossary_tooltip text="Namespace" term_id="namespace" >}}).
+
+Un **{{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}** es la política que permite:
+
+- Imponer restricciones de requisitos de recursos a {{< glossary_tooltip text="Pods" term_id="pod" >}} o {{< glossary_tooltip text="Contenedores" term_id="container" >}} por Namespace.
+- Imponer las limitaciones de recursos mínimas/máximas para Pods o Contenedores dentro de un Namespace.
+- Especificar requisitos y límites de recursos predeterminados para Pods o Contenedores de un Namespace.
+- Imponer una relación de proporción entre los requisitos y el límite de un recurso.
+- Imponer el cumplimiento de las demandas de almacenamiento mínimo/máximo para {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}}.
+
+### Habilitar el LimitRange
+
+La compatibilidad con LimitRange está habilitada por defecto en Kubernetes desde la versión 1.10.
+
+Para que un LimitRange se active en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}} en particular, el LimitRange debe definirse con el Namespace, o aplicarse a éste.
+
+El nombre de recurso de un objeto LimitRange debe ser un
+[nombre de subdominio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
+
+### Aplicando LimitRanges
+
+- El administrador crea un LimitRange en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.
+- Los usuarios crean recursos como {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Contenedores" term_id="container" >}} o {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}} en el Namespace.
+- El controlador de admisión `LimitRanger` aplicará valores predeterminados y límites, para todos los Pods o Contenedores que no establezcan requisitos de recursos informáticos. Y realizará un seguimiento del uso para garantizar que no excedan el mínimo, el máximo, y la proporción de ningún LimitRange definido en el Namespace.
+- Si al crear o actualizar un recurso del ejemplo (Pods, Contenedores, {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}}) se viola una restricción al LimitRange, la solicitud al servidor API fallará con un código de estado HTTP "403 FORBIDDEN" y un mensaje que explica la restricción que se ha violado.
+- En caso de que se active un LimitRange para recursos de cómputo como `cpu` y `memory`, los usuarios deberán especificar los requisitos y/o límites de recursos para dichos valores. De lo contrario, el sistema puede rechazar la creación del Pod.
+- Las validaciones de LimitRange ocurren solo en la etapa de Admisión de Pod, no en Pods que ya se han iniciado (Running {{< glossary_tooltip text="Pods" term_id="pod" >}}).
+
+Algunos ejemplos de políticas que se pueden crear utilizando rangos de límites son:
+
+- En un clúster de 2 nodos con una capacidad de 8 GiB de RAM y 16 núcleos, podría restringirse los {{< glossary_tooltip text="Pods" term_id="pod" >}} en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}} a requerir `100m` de CPU con un límite máximo de `500m` para CPU y requerir `200Mi` de memoria con un límite máximo de `600Mi` de memoria.
+- Definir el valor por defecto de límite y requisitos de CPU a `150m` y el valor por defecto de requisito de memoria a `300Mi` {{< glossary_tooltip text="Contenedores" term_id="container" >}} que se iniciaron sin requisitos de CPU y memoria en sus especificaciones.
+
+En el caso de que los límites totales del {{< glossary_tooltip text="Namespace" term_id="namespace" >}} sean menores que la suma de los límites de los {{< glossary_tooltip text="Pods" term_id="pod" >}},
+puede haber contención por los recursos. En este caso, los contenedores o Pods no serán creados.
+
+Ni la contención ni los cambios en un LimitRange afectarán a los recursos ya creados.
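+
+Como boceto mínimo (el nombre y los valores son ilustrativos), un LimitRange que impone mínimos y máximos de CPU por Contenedor podría aplicarse así:
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: cpu-min-max
+spec:
+  limits:
+  - type: Container
+    min:
+      cpu: 100m
+    max:
+      cpu: 500m
+EOF
+```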
+
+## {{% heading "whatsnext" %}}
+
+Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) para más información.
+
+Los siguientes ejemplos utilizan límites y están pendientes de su traducción:
+
+- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
+- [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
+- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
+- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
+- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
+- [a detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
diff --git a/content/es/docs/reference/glossary/limitrange.md b/content/es/docs/reference/glossary/limitrange.md
new file mode 100755
index 0000000000..686ed9c342
--- /dev/null
+++ b/content/es/docs/reference/glossary/limitrange.md
@@ -0,0 +1,23 @@
+---
+title: LimitRange
+id: limitrange
+date: 2019-04-15
+full_link: /docs/concepts/policy/limit-range/
+short_description: >
+ Proporciona restricciones para limitar el consumo de recursos por Contenedores o Pods en un espacio de nombres
+
+aka:
+tags:
+ - core-object
+ - fundamental
+ - architecture
+related:
+ - pod
+ - container
+---
+
+Proporciona restricciones para limitar el consumo de recursos por {{< glossary_tooltip text="Contenedores" term_id="container" >}} o {{< glossary_tooltip text="Pods" term_id="pod" >}} en un espacio de nombres ({{< glossary_tooltip text="Namespace" term_id="namespace" >}})
+
+
+
+LimitRange limita la cantidad de objetos que se pueden crear por tipo, así como la cantidad de recursos informáticos que pueden ser requeridos/consumidos por {{< glossary_tooltip text="Pods" term_id="pod" >}} o {{< glossary_tooltip text="Contenedores" term_id="container" >}} individuales en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.
diff --git a/content/fr/docs/setup/_index.md b/content/fr/docs/setup/_index.md
index 37161dbc0c..5424c448a7 100644
--- a/content/fr/docs/setup/_index.md
+++ b/content/fr/docs/setup/_index.md
@@ -35,7 +35,7 @@ Vous devriez choisir une solution locale si vous souhaitez :
* Essayer ou commencer à apprendre Kubernetes
* Développer et réaliser des tests sur des clusters locaux
-Choisissez une [solution locale] (/fr/docs/setup/pick-right-solution/#solutions-locales).
+Choisissez une [solution locale](/fr/docs/setup/pick-right-solution/#solutions-locales).
## Solutions hébergées
@@ -49,7 +49,7 @@ Vous devriez choisir une solution hébergée si vous :
* N'avez pas d'équipe de Site Reliability Engineering (SRE) dédiée, mais que vous souhaitez une haute disponibilité.
* Vous n'avez pas les ressources pour héberger et surveiller vos clusters
-Choisissez une [solution hébergée] (/fr/docs/setup/pick-right-solution/#solutions-hebergées).
+Choisissez une [solution hébergée](/fr/docs/setup/pick-right-solution/#solutions-hebergées).
## Solutions cloud clés en main
@@ -63,7 +63,7 @@ Vous devriez choisir une solution cloud clés en main si vous :
* Voulez plus de contrôle sur vos clusters que ne le permettent les solutions hébergées
* Voulez réaliser vous même un plus grand nombre d'operations
-Choisissez une [solution clé en main] (/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
+Choisissez une [solution clé en main](/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
## Solutions clés en main sur site
@@ -76,7 +76,7 @@ Vous devriez choisir une solution de cloud clé en main sur site si vous :
* Disposez d'une équipe SRE dédiée
* Avez les ressources pour héberger et surveiller vos clusters
-Choisissez une [solution clé en main sur site] (/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
+Choisissez une [solution clé en main sur site](/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
## Solutions personnalisées
@@ -84,11 +84,11 @@ Les solutions personnalisées vous offrent le maximum de liberté sur vos cluste
d'expertise. Ces solutions vont du bare-metal aux fournisseurs de cloud sur
différents systèmes d'exploitation.
-Choisissez une [solution personnalisée] (/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
+Choisissez une [solution personnalisée](/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
## {{% heading "whatsnext" %}}
-Allez à [Choisir la bonne solution] (/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.
+Allez à [Choisir la bonne solution](/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.
diff --git a/content/ja/docs/concepts/cluster-administration/manage-deployment.md b/content/ja/docs/concepts/cluster-administration/manage-deployment.md
index 90f96547d5..cb9c7c0fc3 100644
--- a/content/ja/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/ja/docs/concepts/cluster-administration/manage-deployment.md
@@ -237,7 +237,7 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m
image: gb-frontend:v3
```
-そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。
+そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。
```yaml
name: frontend-canary
diff --git a/content/ja/docs/concepts/overview/what-is-kubernetes.md b/content/ja/docs/concepts/overview/what-is-kubernetes.md
index 3ca8fa78fe..dab17c9b1b 100644
--- a/content/ja/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/ja/docs/concepts/overview/what-is-kubernetes.md
@@ -17,7 +17,7 @@ card:
Kubernetesは、宣言的な構成管理と自動化を促進し、コンテナ化されたワークロードやサービスを管理するための、ポータブルで拡張性のあるオープンソースのプラットフォームです。Kubernetesは巨大で急速に成長しているエコシステムを備えており、それらのサービス、サポート、ツールは幅広い形で利用可能です。
-Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。
+Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。
## 過去を振り返ってみると
diff --git a/content/ja/docs/concepts/policy/resource-quotas.md b/content/ja/docs/concepts/policy/resource-quotas.md
index e7368a8f94..7b00056fcf 100644
--- a/content/ja/docs/concepts/policy/resource-quotas.md
+++ b/content/ja/docs/concepts/policy/resource-quotas.md
@@ -22,7 +22,7 @@ weight: 10
- 異なる名前空間で異なるチームが存在するとき。現時点ではこれは自主的なものですが、将来的にはACLsを介してリソースクォータの設定を強制するように計画されています。
- 管理者は各名前空間で1つの`ResourceQuota`を作成します。
- ユーザーが名前空間内でリソース(Pod、Serviceなど)を作成し、クォータシステムが`ResourceQuota`によって定義されたハードリソースリミットを超えないことを保証するために、リソースの使用量をトラッキングします。
-- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。
+- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。
- `cpu`や`memory`といったコンピューターリソースに対するクォータが名前空間内で有効になっているとき、ユーザーはそれらの値に対する`requests`や`limits`を設定する必要があります。設定しないとクォータシステムがPodの作成を拒否します。 ヒント: コンピュートリソースの要求を設定しないPodに対してデフォルト値を強制するために、`LimitRanger`アドミッションコントローラーを使用してください。この問題を解決する例は[walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)で参照できます。
`ResourceQuota`のオブジェクト名は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります.
diff --git a/content/ja/docs/concepts/security/overview.md b/content/ja/docs/concepts/security/overview.md
index b50a4ea1a5..0157b28f78 100644
--- a/content/ja/docs/concepts/security/overview.md
+++ b/content/ja/docs/concepts/security/overview.md
@@ -77,7 +77,7 @@ Kubernetesを保護する為には2つの懸念事項があります。
### クラスター内のコンポーネント(アプリケーション) {#cluster-applications}
-アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。
+アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。
ワークロードセキュリティに関する懸念事項 | 推奨事項 |
diff --git a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md
index 0fa45de94b..db69d83fcd 100644
--- a/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md
+++ b/content/ja/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -42,7 +42,7 @@ weight: 80
エフェメラルコンテナを利用する場合には、他のコンテナ内のプロセスにアクセスできるように、[プロセス名前空間の共有](/ja/docs/tasks/configure-pod-container/share-process-namespace/)を有効にすると便利です。
-エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)を参照してください。
+エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)を参照してください。
## Ephemeral containers API
diff --git a/content/ja/docs/contribute/review/reviewing-prs.md b/content/ja/docs/contribute/review/reviewing-prs.md
new file mode 100644
index 0000000000..6659d46354
--- /dev/null
+++ b/content/ja/docs/contribute/review/reviewing-prs.md
@@ -0,0 +1,86 @@
+---
+title: プルリクエストのレビュー
+content_type: concept
+main_menu: true
+weight: 10
+---
+
+
+
+ドキュメントのプルリクエストは誰でもレビューすることができます。Kubernetesのwebsiteリポジトリで[pull requests](https://github.com/kubernetes/website/pulls)のセクションに移動し、open状態のプルリクエストを確認してください。
+
+ドキュメントのプルリクエストのレビューは、Kubernetesコミュニティに自分を知ってもらうためのよい方法の1つです。コードベースについて学んだり、他のコントリビューターとの信頼関係を築く助けともなるはずです。
+
+レビューを行う前には、以下のことを理解しておくとよいでしょう。
+
+- [コンテンツガイド](/docs/contribute/style/content-guide/)と[スタイルガイド](/docs/contribute/style/style-guide/)を読んで、有益なコメントを残せるようにする。
+- Kubernetesのドキュメントコミュニティにおける[役割と責任](/docs/contribute/participate/roles-and-responsibilities/)の違いを理解する。
+
+
+
+## はじめる前に
+
+レビューを始める前に、以下のことを心に留めてください。
+
+- [CNCFの行動規範](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)を読み、いかなる時にも行動規範にしたがって行動するようにする。
+- 礼儀正しく、思いやりを持ち、助け合う気持ちを持つ。
+- 変更点だけでなく、PRのポジティブな側面についてもコメントする。
+- 相手の気持ちに共感して、自分のレビューが相手にどのように受け取られるのかをよく意識する。
+- 相手の善意を前提として、疑問点を明確にする質問をする。
+- 経験を積んだコントリビューターの場合、コンテンツに大幅な変更が必要な新規のコントリビューターとペアを組んで作業に取り組むことを考える。
+
+## レビューのプロセス
+
+一般に、コンテンツや文体に対するプルリクエストは、英語でレビューを行います。
+
+1. [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls)に移動します。Kubernetesのウェブサイトとドキュメントに対するopen状態のプルリクエスト一覧が表示されます。
+
+2. open状態のPRに、以下に示すラベルを1つ以上使って絞り込みます。
+
+ - `cncf-cla: yes` (推奨): CLAにサインしていないコントリビューターが提出したPRはマージできません。詳しい情報は、[CLAの署名](/docs/contribute/new-content/overview/#sign-the-cla)を読んでください。
+ - `language/en` (推奨): 英語のPRだけに絞り込みます。
+ - `size/`: 特定の大きさのPRだけに絞り込みます。レビューを始めたばかりの人は、小さなPRから始めてください。
+
+ さらに、PRがwork in progressとしてマークされていないことも確認してください。`work in progress`ラベルの付いたPRは、まだレビューの準備ができていない状態です。
+
+3. レビューするPRを選んだら、以下のことを行い、変更点について理解します。
+ - PRの説明を読み、行われた変更について理解し、関連するissueがあればそれも読みます。
+ - 他のレビュアのコメントがあれば読みます。
+ - **Files changed**タブをクリックし、変更されたファイルと行を確認します。
+ - **Conversation**タブの下にあるPRのbuild checkセクションまでスクロールし、**deploy/netlify**の行の**Details**リンクをクリックして、Netlifyのプレビュービルドで変更点をプレビューします。
+
+4. **Files changed**タブに移動してレビューを始めます。
+ 1. コメントしたい場合は行の横の`+`マークをクリックします。
+ 2. その行に関するコメントを書き、**Add single comment**(1つのコメントだけを残したい場合)または**Start a review**(複数のコメントを行いたい場合)のいずれかをクリックします。
+ 3. コメントをすべて書いたら、ページ上部の**Review changes**をクリックします。ここでは、レビューの要約を追加できます(コントリビューターにポジティブなコメントも書きましょう!)。必要に応じて、PRを承認したり、コメントしたり、変更をリクエストします。新しいコントリビューターの場合は**Comment**だけが行えます。
+
+## レビューのチェックリスト
+
+レビューするときは、最初に以下の点を確認してみてください。
+
+### 言語と文法
+
+- 言語や文法に明らかな間違いはないですか? もっとよい言い方はないですか?
+- もっと簡単な単語に置き換えられる複雑な単語や古い単語はありませんか?
+- 使われている単語や専門用語や言い回しで差別的ではない別の言葉に置き換えられるものはありませんか?
+- 言葉選びや大文字の使い方は[style guide](/docs/contribute/style/style-guide/)に従っていますか?
+- もっと短くしたり単純な文に書き換えられる長い文はありませんか?
+- 箇条書きやテーブルでもっとわかりやすく表現できる長いパラグラフはありませんか?
+
+### コンテンツ
+
+- 同様のコンテンツがKubernetesのサイト上のどこかに存在しませんか?
+- コンテンツが外部サイト、特定のベンダー、オープンソースではないドキュメントなどに過剰にリンクを張っていませんか?
+
+### ウェブサイト
+
+- PRはページ名、slug/alias、アンカーリンクの変更や削除をしていますか? その場合、このPRの変更の結果、リンク切れは発生しませんか? ページ名を変更してslugはそのままにするなど、他の選択肢はありませんか?
+- PRは新しいページを作成するものですか? その場合、次の点に注意してください。
+ - ページは正しい[page content type](/docs/contribute/style/page-content-types/)と関係するHugoのshortcodeを使用していますか?
+ - セクションの横のナビゲーション(または全体)にページは正しく表示されますか?
+ - ページは[Docs Home](/docs/home/)に一覧されますか?
+- Netlifyのプレビューで変更は確認できますか? 特にリスト、コードブロック、テーブル、備考、画像などに注意してください。
+
+### その他
+
+PRに関して誤字や空白などの小さな問題を指摘する場合は、コメントの前に`nit:`と書いてください。こうすることで、PRの作者は問題が深刻なものではないことが分かります。
diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
index e1a23cadfd..6f1ed4558e 100644
--- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
+++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
@@ -96,7 +96,7 @@ spec:
* ネットワークを介したノードとPod間通信、LinuxマスターからのPod IPのポート80に向けて`curl`して、ウェブサーバーの応答をチェックします
* docker execまたはkubectl execを使用したPod間通信、Pod間(および複数のWindowsノードがある場合はホスト間)へのpingします
* ServiceからPodへの通信、Linuxマスターおよび個々のPodからの仮想Service IP(`kubectl get services`で表示される)に`curl`します
- * サービスディスカバリ、Kuberntesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
+ * サービスディスカバリ、Kubernetesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
* インバウンド接続、Linuxマスターまたはクラスター外のマシンからNodePortに`curl`します
* アウトバウンド接続、kubectl execを使用したPod内からの外部IPに`curl`します
diff --git a/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
index b2ebc16d18..d5f6b72296 100644
--- a/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
+++ b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
@@ -31,9 +31,9 @@ card:
## クラスター、ユーザー、コンテキストを設定する
-例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。
+例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。
-`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください:
+`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください:
```shell
apiVersion: v1
@@ -61,7 +61,7 @@ contexts:
設定ファイルには、クラスター、ユーザー、コンテキストの情報が含まれています。上記の`config-demo`設定ファイルには、二つのクラスター、二人のユーザー、三つのコンテキストの情報が含まれています。
-`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください:
+`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください:
```shell
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
@@ -89,7 +89,7 @@ kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=develo
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
```
-追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。
+追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。
```shell
kubectl config --kubeconfig=config-demo view
diff --git a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
index 563ce2478e..be26708099 100644
--- a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
+++ b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -134,28 +134,12 @@ weight: 100
1. 以下の内容で`example-ingress.yaml`を作成します。
- ```yaml
- apiVersion: networking.k8s.io/v1beta1
- kind: Ingress
- metadata:
- name: example-ingress
- annotations:
- nginx.ingress.kubernetes.io/rewrite-target: /$1
- spec:
- rules:
- - host: hello-world.info
- http:
- paths:
- - path: /
- backend:
- serviceName: web
- servicePort: 8080
- ```
+ {{< codenew file="service/networking/example-ingress.yaml" >}}
1. 次のコマンドを実行して、Ingressリソースを作成します。
```shell
- kubectl apply -f example-ingress.yaml
+ kubectl apply -f https://kubernetes.io/examples/service/networking/example-ingress.yaml
```
出力は次のようになります。
@@ -175,8 +159,8 @@ weight: 100
{{< /note >}}
```shell
- NAME HOSTS ADDRESS PORTS AGE
- example-ingress hello-world.info 172.17.0.15 80 38s
+ NAME CLASS HOSTS ADDRESS PORTS AGE
+ example-ingress hello-world.info 172.17.0.15 80 38s
```
1. 次の行を`/etc/hosts`ファイルの最後に書きます。
@@ -241,9 +225,12 @@ weight: 100
```yaml
- path: /v2
+ pathType: Prefix
backend:
- serviceName: web2
- servicePort: 8080
+ service:
+ name: web2
+ port:
+ number: 8080
```
1. 次のコマンドで変更を適用します。
@@ -300,6 +287,3 @@ weight: 100
* [Ingress](/ja/docs/concepts/services-networking/ingress/)についてさらに学ぶ。
* [Ingressコントローラー](/ja/docs/concepts/services-networking/ingress-controllers/)についてさらに学ぶ。
* [Service](/ja/docs/concepts/services-networking/service/)についてさらに学ぶ。
-
-
-
diff --git a/content/ja/docs/tasks/configmap-secret/_index.md b/content/ja/docs/tasks/configmap-secret/_index.md
new file mode 100755
index 0000000000..18a8018ce5
--- /dev/null
+++ b/content/ja/docs/tasks/configmap-secret/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Secretの管理"
+weight: 28
+description: Secretを使用した機密設定データの管理
+---
+
diff --git a/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
new file mode 100644
index 0000000000..fb8c89c1e3
--- /dev/null
+++ b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
@@ -0,0 +1,146 @@
+---
+title: kubectlを使用してSecretを管理する
+content_type: task
+weight: 10
+description: kubectlコマンドラインを使用してSecretを作成する
+---
+
+
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}}
+
+
+
+## Secretを作成する
+
+`Secret`はデータベースにアクセスするためにPodが必要とするユーザー資格情報を含めることができます。
+たとえば、データベース接続文字列はユーザー名とパスワードで構成されます。
+ユーザー名はローカルマシンの`./username.txt`に、パスワードは`./password.txt`に保存します。
+
+```shell
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+上記の2つのコマンドの`-n`フラグは、生成されたファイルにテキスト末尾の余分な改行文字が含まれないようにします。
+`kubectl`がファイルを読み取り、内容をbase64文字列にエンコードすると、余分な改行文字もエンコードされるため、これは重要です。
+
+`kubectl create secret`コマンドはこれらのファイルをSecretにパッケージ化し、APIサーバー上にオブジェクトを作成します。
+
+```shell
+kubectl create secret generic db-user-pass \
+ --from-file=./username.txt \
+ --from-file=./password.txt
+```
+
+出力は次のようになります:
+
+```
+secret/db-user-pass created
+```
+
+ファイル名がデフォルトのキー名になります。オプションで`--from-file=[key=]source`を使用してキー名を設定できます。たとえば:
+
+```shell
+kubectl create secret generic db-user-pass \
+ --from-file=username=./username.txt \
+ --from-file=password=./password.txt
+```
+
+`--from-file`に指定したファイルに含まれるパスワードの特殊文字をエスケープする必要はありません。
+
+また、`--from-literal=<key>=<source>`タグを使用してSecretデータを提供することもできます。
+このタグは、複数のキーと値のペアを提供するために複数回指定することができます。
+`$`、`\`、`*`、`=`、`!`などの特殊文字は[シェル](https://en.wikipedia.org/wiki/Shell_(computing))によって解釈されるため、エスケープを必要とすることに注意してください。
+ほとんどのシェルでは、パスワードをエスケープする最も簡単な方法は、シングルクォート(`'`)で囲むことです。
+たとえば、実際のパスワードが`S!B\*d$zDsb=`の場合、次のようにコマンドを実行します:
+
+```shell
+kubectl create secret generic dev-db-secret \
+ --from-literal=username=devuser \
+ --from-literal=password='S!B\*d$zDsb='
+```
+
+## Secretを検証する
+
+Secretが作成されたことを確認できます:
+
+```shell
+kubectl get secrets
+```
+
+出力は次のようになります:
+
+```
+NAME TYPE DATA AGE
+db-user-pass Opaque 2 51s
+```
+
+`Secret`の説明を参照できます:
+
+```shell
+kubectl describe secrets/db-user-pass
+```
+
+出力は次のようになります:
+
+```
+Name: db-user-pass
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Type: Opaque
+
+Data
+====
+password: 12 bytes
+username: 5 bytes
+```
+
+`kubectl get`と`kubectl describe`コマンドはデフォルトでは`Secret`の内容を表示しません。
+これは、`Secret`が不用意に他人にさらされたり、ターミナルログに保存されたりしないようにするためです。
+
+## Secretをデコードする {#decoding-secret}
+
+先ほど作成したSecretの内容を見るには、以下のコマンドを実行します:
+
+```shell
+kubectl get secret db-user-pass -o jsonpath='{.data}'
+```
+
+出力は次のようになります:
+
+```json
+{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="}
+```
+
+`password.txt`のデータをデコードします:
+
+```shell
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+
+出力は次のようになります:
+
+```
+1f2d1e2e67df
+```
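+
+補足として、jsonpathとbase64を組み合わせて1コマンドでデコードすることもできます(キー名`password.txt`は上記の例に合わせたものです):
+
+```shell
+kubectl get secret db-user-pass -o jsonpath="{.data['password\.txt']}" | base64 --decode
+```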
+
+## クリーンアップ
+
+作成したSecretを削除するには次のコマンドを実行します:
+
+```shell
+kubectl delete secret db-user-pass
+```
+
+
+
+## {{% heading "whatsnext" %}}
+
+- [Secretのコンセプト](/ja/docs/concepts/configuration/secret/)を読む
+- [設定ファイルを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を知る
+- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を知る
diff --git a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
index 83f7b52c85..ac20713a8d 100644
--- a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
+++ b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
@@ -33,7 +33,7 @@ kubectl delete pods
上記がグレースフルターミネーションにつながるためには、`pod.Spec.TerminationGracePeriodSeconds`に0を指定しては**いけません**。`pod.Spec.TerminationGracePeriodSeconds`を0秒に設定することは安全ではなく、StatefulSet Podには強くお勧めできません。グレースフル削除は安全で、kubeletがapiserverから名前を削除する前にPodが[適切にシャットダウンする](/ja/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)ことを保証します。
-Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/docs/concepts/architecture/nodes/#node-condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
+Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/ja/docs/concepts/architecture/nodes/#condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
* (ユーザーまたは[Node Controller](/ja/docs/concepts/architecture/nodes/)によって)Nodeオブジェクトが削除されます。
* 応答していないNodeのkubeletが応答を開始し、Podを終了してapiserverからエントリーを削除します。
@@ -76,4 +76,3 @@ StatefulSet Podの強制削除は、常に慎重に、関連するリスクを
[StatefulSetのデバッグ](/docs/tasks/debug-application-cluster/debug-stateful-set/)の詳細
-
diff --git a/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
new file mode 100644
index 0000000000..445742e1d6
--- /dev/null
+++ b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -0,0 +1,403 @@
+---
+title: Horizontal Pod Autoscalerウォークスルー
+content_type: task
+weight: 100
+---
+
+
+
+Horizontal Pod Autoscalerは、Deployment、ReplicaSetまたはStatefulSetといったレプリケーションコントローラ内のPodの数を、観測されたCPU使用率(もしくはベータサポートの、アプリケーションによって提供されるその他のメトリクス)に基づいて自動的にスケールさせます。
+
+このドキュメントはphp-apacheサーバーに対しHorizontal Pod Autoscalerを有効化するという例に沿ってウォークスルーで説明していきます。Horizontal Pod Autoscalerの動作についてのより詳細な情報を知りたい場合は、[Horizontal Pod Autoscalerユーザーガイド](/docs/tasks/run-application/horizontal-pod-autoscale/)をご覧ください。
+
+## {{% heading "prerequisites" %}}
+
+この例ではバージョン1.2以上の動作するKubernetesクラスターおよびkubectlが必要です。
+[Metrics API](https://github.com/kubernetes/metrics)を介してメトリクスを提供するために、[Metrics server](https://github.com/kubernetes-sigs/metrics-server)によるモニタリングがクラスター内にデプロイされている必要があります。
+Horizontal Pod Autoscalerはメトリクスを収集するためにこのAPIを利用します。metrics-serverをデプロイする方法を知りたい場合は[metrics-server ドキュメント](https://github.com/kubernetes-sigs/metrics-server#deployment)をご覧ください。
+
+Horizontal Pod Autoscalerで複数のリソースメトリクスを利用するためには、バージョン1.6以上のKubernetesクラスターおよびkubectlが必要です。カスタムメトリクスを使えるようにするためには、あなたのクラスターがカスタムメトリクスAPIを提供するAPIサーバーと通信できる必要があります。
+最後に、Kubernetesオブジェクトと関係のないメトリクスを使うにはバージョン1.10以上のKubernetesクラスターおよびkubectlが必要で、さらにあなたのクラスターが外部メトリクスAPIを提供するAPIサーバーと通信できる必要があります。
+詳細については[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics)をご覧ください。
+
+
+
+## php-apacheの起動と公開
+
+Horizontal Pod Autoscalerのデモンストレーションのために、php-apacheイメージをもとにしたカスタムのDockerイメージを使います。
+このDockerfileは下記のようになっています。
+
+```dockerfile
+FROM php:5-apache
+COPY index.php /var/www/html/index.php
+RUN chmod a+rx index.php
+```
+
+これはCPU負荷の高い演算を行うindex.phpを定義しています。
+
+```php
+<?php
+  $x = 0.0001;
+  for ($i = 0; $i <= 1000000; $i++) {
+    $x += sqrt($x);
+  }
+  echo "OK!";
+?>
+```
+
+まず最初に、イメージを動かすDeploymentを起動し、Serviceとして公開しましょう。
+下記の設定を使います。
+
+{{< codenew file="application/php-apache.yaml" >}}
+
+以下のコマンドを実行してください。
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
+```
+
+```
+deployment.apps/php-apache created
+service/php-apache created
+```
+
+## Horizontal Pod Autoscalerを作成する
+
+サーバーが起動したら、[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale)を使ってautoscalerを作成しましょう。以下のコマンドで、最初のステップで作成したphp-apache deploymentによって制御されるPodレプリカ数を1から10の間に維持するHorizontal Pod Autoscalerを作成します。
+簡単に言うと、HPAは(Deploymentを通じて)レプリカ数を増減させ、すべてのPodにおける平均CPU使用率を50%(それぞれのPodは`kubectl run`で200 milli-coresを要求しているため、平均CPU使用率100 milli-coresを意味します)に保とうとします。
+このアルゴリズムについての詳細は[こちら](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)をご覧ください。
+
+```shell
+kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
+```
+
+```
+horizontalpodautoscaler.autoscaling/php-apache autoscaled
+```
+
+以下を実行して現在のAutoscalerの状況を確認できます。
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
+```
+
+現在はサーバーにリクエストを送っていないため、CPU使用率が0%になっていることに注意してください(`TARGET`カラムは対応するDeploymentによって制御される全てのPodの平均値を示しています。)。
+
+## 負荷の増加
+
+Autoscalerがどのように負荷の増加に反応するか見てみましょう。
+コンテナを作成し、クエリの無限ループをphp-apacheサーバーに送ってみます(これは別のターミナルで実行してください)。
+
+```shell
+kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
+```
+
+数分以内に、下記を実行することでCPU負荷が高まっていることを確認できます。
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
+```
+
+ここでは、CPU使用率はrequestの305%にまで高まっています。
+結果として、Deploymentはレプリカ数7にリサイズされました。
+
+```shell
+kubectl get deployment php-apache
+```
+
+```
+NAME READY UP-TO-DATE AVAILABLE AGE
+php-apache 7/7 7 7 19m
+```
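+
+補足として、スケーリング判断の経緯はHPAのイベントからも確認できます:
+
+```shell
+kubectl describe hpa php-apache
+```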
+
+{{< note >}}
+レプリカ数が安定するまでは数分かかることがあります。負荷量は何らかの方法で制御されているわけではないので、最終的なレプリカ数はこの例とは異なる場合があります。
+{{< /note >}}
+
+## 負荷の停止
+
+ユーザー負荷を止めてこの例を終わらせましょう。
+
+私たちが`busybox`イメージを使って作成したコンテナ内のターミナルで、`<Ctrl> + C`を入力して負荷生成を終了させます。
+
+そして結果の状態を確認します(数分後)。
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
+```
+
+```shell
+kubectl get deployment php-apache
+```
+
+```
+NAME READY UP-TO-DATE AVAILABLE AGE
+php-apache 1/1 1 1 27m
+```
+
+ここでCPU使用率は0に下がり、HPAによってオートスケールされたレプリカ数は1に戻ります。
+
+{{< note >}}
+レプリカのオートスケールには数分かかることがあります。
+{{< /note >}}
+
+
+
+## 複数のメトリクスやカスタムメトリクスを基にオートスケーリングする
+
+You can introduce additional metrics to use when autoscaling the `php-apache` Deployment by making use of the `autoscaling/v2beta2` API version.
+
+First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form:
+
+```shell
+kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
+```
+
+Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML like this:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+ name: php-apache
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: php-apache
+ minReplicas: 1
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 50
+status:
+ observedGeneration: 1
+ lastScaleTime:
+ currentReplicas: 1
+ desiredReplicas: 1
+ currentMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ current:
+ averageUtilization: 0
+ averageValue: 0
+```
+
+Notice that the `targetCPUUtilizationPercentage` field has been replaced with an array called `metrics`.
+The CPU utilization metric is a *resource metric*, since it is represented as a percentage of a resource specified on Pod containers. You can specify resource metrics other than CPU; by default, the only other supported resource metric is memory. These resources do not change names from cluster to cluster, and should always be available as long as the `metrics.k8s.io` API is available.
+
+You can also specify resource metrics as direct values, instead of as percentages of the requested value, by using a `target.type` of `AverageValue` instead of `Utilization` and setting the corresponding `target.averageValue` field instead of `target.averageUtilization`.
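+
+As a sketch, such a metric block might look like the following; the `500m` target is only an illustration, not a recommended value.
+
+```yaml
+type: Resource
+resource:
+  name: cpu
+  target:
+    type: AverageValue
+    averageValue: 500m  # scale on raw CPU usage instead of a percentage of requests
+```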
+
+There are two other types of metrics, both of which are considered *custom metrics*: Pod metrics and Object metrics. These metrics may have names that are cluster specific, and require a more advanced cluster monitoring setup.
+
+The first of these alternative metric types is *Pod metrics*. These metrics describe Pods, are averaged together across Pods, and are compared with a target value to determine the replica count.
+They work much like resource metrics, except that they *only* support a `target` type of `AverageValue`.
+
+Pod metrics are specified using a metric block like this:
+
+```yaml
+type: Pods
+pods:
+ metric:
+ name: packets-per-second
+ target:
+ type: AverageValue
+ averageValue: 1k
+```
+
+The second alternative metric type is *Object metrics*. Instead of describing Pods, these metrics describe a different object in the same namespace. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support `target` types of both `Value` and `AverageValue`. With `Value`, the target is compared directly to the returned metric from the API. With `AverageValue`, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target. The following example is the YAML representation of the `requests-per-second` metric:
+
+```yaml
+type: Object
+object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ target:
+ type: Value
+ value: 2k
+```
+
+If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn.
+The HorizontalPodAutoscaler calculates a proposed replica count for each metric and then adopts the highest one; for instance, if the CPU metric suggests 3 replicas but the packets-per-second metric suggests 5, the Deployment is scaled to 5.
+
+For example, if you had your monitoring system collecting metrics about network traffic, you could update the definition above using `kubectl edit` to look like this:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+ name: php-apache
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: php-apache
+ minReplicas: 1
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 50
+ - type: Pods
+ pods:
+ metric:
+ name: packets-per-second
+ target:
+ type: AverageValue
+ averageValue: 1k
+ - type: Object
+ object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ target:
+ type: Value
+ value: 10k
+status:
+ observedGeneration: 1
+ lastScaleTime:
+ currentReplicas: 1
+ desiredReplicas: 1
+ currentMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ current:
+ averageUtilization: 0
+ averageValue: 0
+ - type: Object
+ object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ current:
+ value: 10k
+```
+
+Then, your HorizontalPodAutoscaler would attempt to ensure that each Pod consumes roughly 50% of its requested CPU, serves 1000 packets per second, and that all Pods behind the main-route
+Ingress serve a total of 10000 requests per second.
+
+### Autoscaling on more specific metrics
+
+Many metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called _labels_. For all non-resource metric types (Pod, Object, and the External type described below), you can specify an additional label selector that is passed to your metrics pipeline. For instance, if you collect an `http_requests` metric with a `verb` label, you can specify the following metric block to scale only on GET requests:
+
+```yaml
+type: Object
+object:
+ metric:
+ name: http_requests
+ selector: {matchLabels: {verb: GET}}
+```
+
+This selector uses the same syntax as full Kubernetes label selectors. If the name and selector match multiple series, the monitoring pipeline determines how to collapse them into a single value. The selector is additive: it cannot select metrics that describe objects that are **not** the target object (the target Pods in the case of the `Pods` type, and the described object in the case of the `Object` type).
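+
+Combined with a target, a complete version of the block above might look like the following sketch; it reuses the `main-route` Ingress from the earlier example as the described object, and the `10k` target is only an illustration.
+
+```yaml
+type: Object
+object:
+  metric:
+    name: http_requests
+    selector: {matchLabels: {verb: GET}}  # only count GET requests
+  describedObject:
+    apiVersion: networking.k8s.io/v1beta1
+    kind: Ingress
+    name: main-route
+  target:
+    type: Value
+    value: 10k
+```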
+
+### Autoscaling on metrics not related to Kubernetes objects
+
+Applications running on Kubernetes may need to autoscale based on metrics that have no obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with *external metrics*.
+
+Using external metrics requires some knowledge of your monitoring system; the setup is similar to the one required when using custom metrics. External metrics let you autoscale your cluster based on any metric available in your monitoring system. As above, provide a `metric` block with a `name` and `selector`, and use the `External` metric type instead of `Object`.
+If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler.
+External metrics support both the `Value` and `AverageValue` target types, which work exactly as they do for the `Object` type.
+
+For example, if your application processes tasks from a hosted queue service, you could add the following section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks:
+
+```yaml
+- type: External
+ external:
+ metric:
+ name: queue_messages_ready
+      selector: {matchLabels: {queue: "worker_tasks"}}
+ target:
+ type: AverageValue
+ averageValue: 30
+```
+
+When possible, it's preferable to use custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.
+
+## Appendix: Horizontal Pod Autoscaler status conditions
+
+When using the `autoscaling/v2beta2` form of the HorizontalPodAutoscaler, you can see *status conditions* that Kubernetes sets on the HorizontalPodAutoscaler. These status conditions indicate whether the HorizontalPodAutoscaler is able to scale, and whether it is currently restricted in any way.
+
+The conditions appear in the `status.conditions` field. To see the conditions affecting a HorizontalPodAutoscaler, we can use `kubectl describe hpa`:
+
+```shell
+kubectl describe hpa cm-test
+```
+
+```
+Name:                         cm-test
+Namespace:                    prom
+Labels:                       <none>
+Annotations:                  <none>
+CreationTimestamp:            Fri, 16 Jun 2017 18:09:22 +0000
+Reference:                    ReplicationController/cm-test
+Metrics:                      ( current / target )
+  "http_requests" on pods:    66m / 500m
+Min replicas:                 1
+Max replicas:                 4
+ReplicationController pods:   1 current / 1 desired
+Conditions:
+  Type            Status  Reason              Message
+  ----            ------  ------              -------
+  AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
+  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_requests
+  ScalingLimited  False   DesiredWithinRange  the desired replica count is within the acceptable range
+Events:
+```
+
+For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, `AbleToScale`, indicates whether the HPA is able to fetch and update scales, as well as whether any backoff-related conditions would prevent scaling. The second, `ScalingActive`, indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired scales; when it is `False`, it generally indicates problems with fetching metrics. Finally, the last condition, `ScalingLimited`, indicates that the desired scale was capped by the maximum or minimum of the HorizontalPodAutoscaler. This is an indication that you may wish to raise or lower the minimum or maximum replica count constraints on your HorizontalPodAutoscaler.
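+
+In the raw HorizontalPodAutoscaler object, the same information appears under `status.conditions`, roughly like the sketch below; the exact reasons, messages, and timestamps will vary.
+
+```yaml
+status:
+  conditions:
+  - type: AbleToScale
+    status: "True"
+    reason: ReadyForNewScale
+  - type: ScalingActive
+    status: "True"
+    reason: ValidMetricFound
+  - type: ScalingLimited
+    status: "False"
+    reason: DesiredWithinRange
+```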
+
+## Appendix: Quantities
+
+All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known as a {{< glossary_tooltip term_id="quantity" text="quantity">}}. For example, the quantity `10500m` would be written as `10.5` in decimal notation. The metrics APIs return whole numbers without a suffix when possible, and generally return quantities in milli-units otherwise. This means you might see your metric value fluctuate between `1` and `1500m`, or between `1` and `1.5` when written in decimal notation.
+
+## Appendix: Other possible scenarios
+
+### Creating the autoscaler declaratively
+
+Instead of using the `kubectl autoscale` command to create a HorizontalPodAutoscaler imperatively, we can use the following file to create it declaratively:
+
+{{< codenew file="application/hpa/php-apache.yaml" >}}
+
+We will create the autoscaler by executing the following command:
+
+```shell
+kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
+```
+
+```
+horizontalpodautoscaler.autoscaling/php-apache created
+```
diff --git a/content/ja/docs/tutorials/services/source-ip.md b/content/ja/docs/tutorials/services/source-ip.md
index 05a54152a3..17317bbeae 100644
--- a/content/ja/docs/tutorials/services/source-ip.md
+++ b/content/ja/docs/tutorials/services/source-ip.md
@@ -272,7 +272,7 @@ graph TD;
## Source IP for Services with `Type=LoadBalancer`
-Packets sent to Services with [`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer) are not source NAT'd by default, because all schedulable Kubernetes Nodes in the `Ready` state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies them to a node that *does* have an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
+Packets sent to Services with [`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer) are source NAT'd by default, because all schedulable Kubernetes Nodes in the `Ready` state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies them to a node that *does* have an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
You can test this by exposing the source-ip-app through a load balancer:
diff --git a/content/ja/examples/application/php-apache.yaml b/content/ja/examples/application/php-apache.yaml
new file mode 100644
index 0000000000..e8e1b5aeb4
--- /dev/null
+++ b/content/ja/examples/application/php-apache.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: php-apache
+spec:
+ selector:
+ matchLabels:
+ run: php-apache
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ run: php-apache
+ spec:
+ containers:
+ - name: php-apache
+ image: k8s.gcr.io/hpa-example
+ ports:
+ - containerPort: 80
+ resources:
+ limits:
+ cpu: 500m
+ requests:
+ cpu: 200m
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: php-apache
+ labels:
+ run: php-apache
+spec:
+ ports:
+ - port: 80
+ selector:
+ run: php-apache
diff --git a/content/ja/examples/service/networking/example-ingress.yaml b/content/ja/examples/service/networking/example-ingress.yaml
new file mode 100644
index 0000000000..b309d13275
--- /dev/null
+++ b/content/ja/examples/service/networking/example-ingress.yaml
@@ -0,0 +1,18 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: example-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /$1
+spec:
+ rules:
+ - host: hello-world.info
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: web
+ port:
+ number: 8080
\ No newline at end of file
diff --git a/content/pt/_index.html b/content/pt/_index.html
index 9721bcdd37..628047e85c 100644
--- a/content/pt/_index.html
+++ b/content/pt/_index.html
@@ -47,7 +47,7 @@ Kubernetes is Open Source, which gives you the freedom to use it in your
- Kubernetes coordinates a highly available cluster of computers connected to work as a single unit.
- The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
- To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
+ Kubernetes coordinates a cluster with high availability of computers connected to work as a single unit.
+ The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to the individual machines.
+ To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be packaged into containers. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.
Kubernetes is an open-source platform and is production-ready.
A Kubernetes cluster consists of two types of resources:
- The Master coordinates the cluster
- The Nodes are the workers that run applications
+ The Control Plane coordinates the cluster
+ The Nodes are the worker machines that run applications
@@ -75,22 +75,22 @@ weight: 10
- The Master is responsible for managing the cluster. The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
- A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes Master. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
+ The Control Plane is responsible for managing the cluster. The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
+ A node is a VM or a physical computer that serves as a processing node in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes Control Plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
- Masters manage the cluster and the nodes that are used to host the running applications.
+ Control Planes manage the cluster and the nodes that are used to host the running applications.
- When you deploy applications on Kubernetes, you tell the Master to start the application containers. The Master schedules the containers to run on the cluster's nodes. The nodes communicate with the Master using the Kubernetes API, which the Master exposes. End users can also use the Kubernetes API directly to interact with the cluster.
+ When you deploy applications on Kubernetes, you tell the Control Plane to start the application containers. The Control Plane schedules the containers to run on the cluster's nodes. The nodes communicate with the Control Plane using the Kubernetes API, which the Control Plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.
- A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.
+ A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube command line interface (CLI) provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.
Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!
+
+
+
diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html
new file mode 100644
index 0000000000..548892d678
--- /dev/null
+++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -0,0 +1,103 @@
+---
+title: Using a Service to Expose Your App
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Objectives
+
+
+Learn about a Service in Kubernetes
+
+Understand how labels and the LabelSelector object relate to a Service
+
+Expose an application outside a Kubernetes cluster using a Service
+
+
+
+
+
+Overview of Kubernetes Services
+
+
+Kubernetes Pods are mortal. Pods in fact have a lifecycle. When a worker node dies, the Pods running on the node are also lost. A ReplicaSet might then dynamically drive the cluster back to the desired state by creating new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are interchangeable; the front-end system should not care about backend replicas or even whether a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.
+
+
+A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector (see below for why you might want a Service without including a selector in the spec).
+
+
+Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
+
+
+ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
+
+NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
+
+LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
+
+ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
+Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is that you are strictly using type: ExternalName.
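+
+As an illustration, a minimal NodePort Service might look like the sketch below; the name, label, and ports are hypothetical and should match your own Deployment.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: kubernetes-bootcamp
+spec:
+  type: NodePort               # expose the Service on a port of every node
+  selector:
+    app: kubernetes-bootcamp   # route traffic to Pods carrying this label
+  ports:
+  - port: 8080                 # port of the Service inside the cluster
+    targetPort: 8080           # port the container accepts traffic on
+```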
+
+
+
+
+Summary
+
+
+Expose Pods to external traffic
+
+Load balance traffic across multiple Pods
+
+Use labels
+
+
+
+
+A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.
+
+
+
+
+
+
+
+
+Services and Labels
+
+
+
+
+
+
+A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.
+
+Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:
+
+
+Designate objects for development, test, and production
+
+Embed version tags
+
+Classify an object using tags
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Labels can be attached to objects at creation time or later on. They can be modified at any time. Let's expose your application now using a Service and apply some labels.
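+
+For example, attaching a label to a running Pod afterwards is a single command; the Pod name below is hypothetical, so substitute one from `kubectl get pods`.
+
+```shell
+# add the label version=v1 to an existing Pod
+kubectl label pod kubernetes-bootcamp-765bf4c7b4-example version=v1
+```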
+
+
+
diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md
index 236c03f910..6f04ea1194 100644
--- a/content/zh/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/zh/docs/concepts/services-networking/connect-applications-service.md
@@ -462,7 +462,7 @@ nginxsecret kubernetes.io/tls 2 1m
-Now modify your nginx replicas to start an HTTPS server using the certificate in the secret, and the Servcie, to expose both ports (80 and 443):
+Now modify your nginx replicas to start an HTTPS server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
diff --git a/content/zh/docs/concepts/storage/dynamic-provisioning.md b/content/zh/docs/concepts/storage/dynamic-provisioning.md
index ae6d82ec4e..14b72ac157 100644
--- a/content/zh/docs/concepts/storage/dynamic-provisioning.md
+++ b/content/zh/docs/concepts/storage/dynamic-provisioning.md
@@ -112,13 +112,13 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
-is deprecated since v1.6. Users now can and should instead use the
+is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
-->
Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`.
-Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation. However, this annotation has been deprecated since v1.6.
+Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation. However, this annotation has been deprecated since v1.9.
Users now can and should instead use the `storageClassName` field of the `PersistentVolumeClaim` object.
The value of this field must match the name of a `StorageClass` configured by the administrator (see [below](#enabling-dynamic-provisioning)).
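
As a sketch, a claim requesting a (hypothetical) `fast` StorageClass through this field looks like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast   # must match a StorageClass created by the administrator
  resources:
    requests:
      storage: 30Gi
```
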
diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh/docs/concepts/storage/volumes.md
index 4c49adb4b2..5711035c23 100644
--- a/content/zh/docs/concepts/storage/volumes.md
+++ b/content/zh/docs/concepts/storage/volumes.md
@@ -56,13 +56,14 @@ can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
that run within the pod, and data is preserved across container restarts. When a
-pod ceases to exist, the volume is destroyed.
+pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
+destroy persistent volumes.
-->
Kubernetes supports many types of volumes.
A {{< glossary_tooltip term_id="pod" text="Pod" >}} can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod.
Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts.
-When a pod ceases to exist, the volume also ceases to exist.
+When a pod ceases to exist, ephemeral volumes cease to exist as well; persistent volumes, however, live on.
+If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
+
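+As a sketch, a Pod volume using this field might look like the following; the volume ID is hypothetical.
+
+```yaml
+volumes:
+  - name: test-volume
+    awsElasticBlockStore:
+      volumeID: "vol-0123456789abcdef0"  # hypothetical EBS volume ID
+      fsType: ext4
+      partition: 1                       # mount the first partition of the volume
+```
+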
@@ -355,14 +361,14 @@ spec:
When the `CSIMigration` feature for Cinder is enabled, all plugin operations are redirected from the existing in-tree plugin to the
`cinder.csi.openstack.org` Container Storage Interface (CSI) driver.
-In order to use this feature, the [Openstack Cinder CSI driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) must be installed on the cluster,
+In order to use this feature, the [OpenStack Cinder CSI driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) must be installed on the cluster,
and the `CSIMigration` and `CSIMigrationOpenStack` beta features must be enabled.
### configMap
@@ -2087,7 +2093,7 @@ persistent volume:
-->
- `volumeHandle`: A string value that uniquely identifies the volume.
  This value must correspond to the value returned in the `volume_id` field of the `CreateVolumeResponse` by the CSI driver;
-  the interface is defined in the [CSI spec](https://github.com/container-storageinterface/spec/blob/master/spec.md#createvolume).
+  the interface is defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume).
  This value is used as the `volume_id` parameter in all calls to the CSI volume driver that reference this CSI volume.
-The Job `pi-with-ttl` becomes a candidate target for automatic deletion 100 seconds after it finishes.
+The Job `pi-with-ttl` becomes an object eligible for automatic deletion 100 seconds after it finishes.
If the field is set to `0`, the Job becomes eligible to be automatically deleted immediately after it finishes.
If the field is unset, the Job won't be cleaned up by the TTL controller after it finishes.
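
For context, the `pi-with-ttl` Job referenced here is defined along these lines (a sketch reconstructed from the upstream example):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100  # clean the Job up 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```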
diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet.md b/content/zh/docs/reference/command-line-tools-reference/kubelet.md
index 317a3a581e..6d7d2e757a 100644
--- a/content/zh/docs/reference/command-line-tools-reference/kubelet.md
+++ b/content/zh/docs/reference/command-line-tools-reference/kubelet.md
@@ -584,19 +584,6 @@ The kubelet uses this directory to store downloaded configuration and to track its health.
-