diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index de08e48d8c..823dddf710 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
1. The kubelet on a node self-registers to the control plane
-2. You, or another human user, manually add a Node object
+2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
@@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest:
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the `metadata.name`
-field of the Node. If the node is healthy (if all necessary services are running),
-it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
+field of the Node. If the node is healthy (that is, if all necessary services are running),
+then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
{{< note >}}
@@ -96,14 +96,14 @@ You can create and modify Node objects using
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
You can modify Node objects regardless of the setting of `--register-node`.
-For example, you can set labels on an existing Node, or mark it unschedulable.
+For example, you can set labels on an existing Node or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control
scheduling. For example, you can constrain a Pod to only be eligible to run on
a subset of the available nodes.
Marking a node as unschedulable prevents the scheduler from placing new pods onto
-that Node, but does not affect existing Pods on the Node. This is useful as a
+that Node but does not affect existing Pods on the Node. This is useful as a
preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
@@ -179,14 +179,14 @@ The node condition is represented as a JSON object. For example, the following s
]
```
-If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
+If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
-may need to delete the node object by hand. Deleting the node object from Kubernetes causes
-all the Pod objects running on the node to be deleted from the API server, and frees up their
+may need to delete the node object by hand. Deleting the node object from Kubernetes causes
+all the Pod objects running on the node to be deleted from the API server and frees up their
names.
The node lifecycle controller automatically creates
@@ -199,7 +199,7 @@ for more details.
### Capacity and Allocatable {#capacity}
-Describes the resources available on the node: CPU, memory and the maximum
+Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a
@@ -225,18 +225,19 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with
the cloud provider's list of available machines. When running in a cloud
-environment, whenever a node is unhealthy, the node controller asks the cloud
+environment, whenever a node is unhealthy, the node controller asks the cloud
provider if the VM for that node is still available. If not, the node
controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
-responsible for updating the NodeReady condition of NodeStatus to
-ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
-receiving heartbeats for some reason, for example due to the node being down), and then later evicting
-all the pods from the node (using graceful termination) if the node continues
-to be unreachable. (The default timeouts are 40s to start reporting
-ConditionUnknown and 5m after that to start evicting pods.) The node controller
-checks the state of each node every `--node-monitor-period` seconds.
+responsible for:
+- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
+  becomes unreachable, because the node controller stops receiving heartbeats for
+  some reason (for example, because the node is down).
+- Evicting all the pods from the node using graceful termination if
+ the node continues to be unreachable. The default timeouts are 40s to start
+ reporting ConditionUnknown and 5m after that to start evicting pods.
+The node controller checks the state of each node every `--node-monitor-period` seconds.
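As an illustrative sketch (not Kubernetes source code; the function and variable names are invented, and the 40s and 5m values mirror the defaults stated above), the interaction of the two timeouts can be written as:

```python
# Illustrative sketch of the node controller's default timeouts described above.
# Names are invented; only the 40s / 5m values come from the documentation.

UNKNOWN_AFTER_SECONDS = 40          # start reporting ConditionUnknown
EVICT_AFTER_MORE_SECONDS = 5 * 60   # start evicting pods 5m after that

def node_controller_action(seconds_since_last_heartbeat: int) -> str:
    """What the node controller would be doing for a node that stopped
    sending heartbeats the given number of seconds ago."""
    if seconds_since_last_heartbeat < UNKNOWN_AFTER_SECONDS:
        return "Ready"               # still within the grace window
    if seconds_since_last_heartbeat < UNKNOWN_AFTER_SECONDS + EVICT_AFTER_MORE_SECONDS:
        return "ConditionUnknown"    # unreachable, not yet evicting
    return "evicting"                # graceful termination of pods

print(node_controller_action(30))    # Ready
print(node_controller_action(120))   # ConditionUnknown
print(node_controller_action(400))   # evicting
```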
#### Heartbeats
@@ -252,13 +253,14 @@ of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
-- The kubelet updates the `NodeStatus` either when there is change in status,
+- The kubelet updates the `NodeStatus` either when there is a change in status
or if there has been no update for a configured interval. The default interval
- for `NodeStatus` updates is 5 minutes (much longer than the 40 second default
- timeout for unreachable nodes).
+ for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
+ timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
- `NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.
+ `NodeStatus` updates. If the Lease update fails, the kubelet retries with
+ exponential backoff starting at 200 milliseconds and capped at 7 seconds.
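The retry schedule described above can be sketched as a small illustrative function (not kubelet source; the function name is invented, and only the 200ms start and 7s cap come from the text):

```python
# Illustrative sketch of the Lease-update retry backoff described above:
# exponential, starting at 200 milliseconds and capped at 7 seconds.

def lease_retry_backoffs(attempts: int) -> list[float]:
    """Backoff delays (in seconds) before each retry of a failed Lease update."""
    delays = []
    delay = 0.2                       # start at 200 ms
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, 7.0)   # double each time, capped at 7 s
    return delays

print(lease_retry_backoffs(7))
# [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0]
```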
#### Reliability
@@ -269,23 +271,24 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
-the same time. If the fraction of unhealthy nodes is at least
-`--unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced:
-if the cluster is small (i.e. has less than or equal to
-`--large-cluster-size-threshold` nodes - default 50) then evictions are
-stopped, otherwise the eviction rate is reduced to
-`--secondary-node-eviction-rate` (default 0.01) per second. The reason these
-policies are implemented per availability zone is because one availability zone
-might become partitioned from the master while the others remain connected. If
-your cluster does not span multiple cloud provider availability zones, then
-there is only one availability zone (the whole cluster).
+the same time:
+- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
+  (default 0.55), then the eviction rate is reduced:
+  - If the cluster is small (that is, has less than or equal to
+    `--large-cluster-size-threshold` nodes; default 50), then evictions are stopped.
+  - Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
+    (default 0.01) per second.
+The reason these policies are implemented per availability zone is because one
+availability zone might become partitioned from the master while the others remain
+connected. If your cluster does not span multiple cloud provider availability zones,
+then there is only one availability zone (that is, the whole cluster).
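The per-zone policy can be sketched as an illustrative function (not Kubernetes source; the flag names are real kube-controller-manager flags, but the function is invented, and the 0.1/s value for `--node-eviction-rate` is its default as I understand it):

```python
# Illustrative sketch of the per-zone eviction policy described above.
# Flag names are real; the function and the exact 0.1 default are assumptions.

UNHEALTHY_ZONE_THRESHOLD = 0.55       # --unhealthy-zone-threshold
LARGE_CLUSTER_SIZE_THRESHOLD = 50     # --large-cluster-size-threshold
NODE_EVICTION_RATE = 0.1              # --node-eviction-rate (assumed default)
SECONDARY_NODE_EVICTION_RATE = 0.01   # --secondary-node-eviction-rate

def zone_eviction_rate(total_nodes: int, unhealthy_nodes: int) -> float:
    """Evictions per second the node controller would allow in one zone."""
    if unhealthy_nodes / total_nodes >= UNHEALTHY_ZONE_THRESHOLD:
        if total_nodes <= LARGE_CLUSTER_SIZE_THRESHOLD:
            return 0.0                        # small cluster: stop evictions
        return SECONDARY_NODE_EVICTION_RATE   # large cluster: slow down
    return NODE_EVICTION_RATE                 # healthy enough: normal rate

print(zone_eviction_rate(10, 6))    # small zone, 60% unhealthy -> 0.0
print(zone_eviction_rate(100, 60))  # large zone, 60% unhealthy -> 0.01
```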
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
-Therefore, if all nodes in a zone are unhealthy then the node controller evicts at
+Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
-case, the node controller assumes that there's some problem with master
+case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
The node controller is also responsible for evicting pods running on nodes with
@@ -303,8 +306,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
-Node objects track information about the Node's resource capacity (for example: the amount
-of memory available, and the number of CPUs).
+Node objects track information about the Node's resource capacity: for example, the amount
+of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
@@ -338,7 +341,7 @@ for more information.
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
-When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases:
+When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown by a given duration. During a shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
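The two-phase ordering can be sketched as a small illustrative function (not kubelet source; the pod representation and names are invented):

```python
# Illustrative sketch of the two-phase shutdown ordering described above:
# regular pods are terminated first, then critical pods.

def shutdown_order(pods: list[dict]) -> list[str]:
    """Pod names in the order kubelet would terminate them."""
    regular = [p["name"] for p in pods if not p["critical"]]
    critical = [p["name"] for p in pods if p["critical"]]
    return regular + critical

pods = [
    {"name": "web", "critical": False},
    {"name": "dns", "critical": True},
    {"name": "batch", "critical": False},
]
print(shutdown_order(pods))  # ['web', 'batch', 'dns']
```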
diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md
index f0f0f439f9..2434c42437 100644
--- a/content/en/docs/concepts/configuration/configmap.md
+++ b/content/en/docs/concepts/configuration/configmap.md
@@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
-contain binary data.
+contain binary data as base64-encoded strings.
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
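As an illustrative sketch (the name, keys, and values are invented), a single ConfigMap can combine both fields; `binaryData` values must be base64-encoded:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config        # illustrative name
data:
  player_initial_lives: "3"   # UTF-8 text goes in data
binaryData:
  icon: iVBORw0KGgo=          # arbitrary bytes, base64-encoded
```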
diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md
index bedd431dc9..63263fb370 100644
--- a/content/en/docs/concepts/storage/dynamic-provisioning.md
+++ b/content/en/docs/concepts/storage/dynamic-provisioning.md
@@ -80,7 +80,7 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
-is deprecated since v1.6. Users now can and should instead use the
+is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
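A minimal illustrative claim (the claim name, class name, and size are invented; `fast` stands in for a `StorageClass` the administrator has created) might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim      # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast   # must match an existing StorageClass name
  resources:
    requests:
      storage: 8Gi
```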
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index fff6fd9ae6..28abe9ee8a 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -34,8 +34,9 @@ Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod"
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
-that run within the pod, and data is preserved across container restarts. When a
-pod ceases to exist, the volume is destroyed.
+that run within the pod, and data is preserved across container restarts. When a pod
+ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
+destroy persistent volumes.
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index 9ba746c2ad..0380889414 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -54,7 +54,7 @@ In this example:
{{< note >}}
The `.spec.selector.matchLabels` field is a map of {key,value} pairs.
A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`,
- whose key field is "key" the operator is "In", and the values array contains only "value".
+ whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value".
All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}}
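As a sketch of the equivalence described in the note (the label key and value are invented), these two selectors match the same Pods:

```yaml
selector:
  matchLabels:
    app: nginx
---
selector:
  matchExpressions:
    - key: app
      operator: In
      values:
        - nginx
```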
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
index 73ef70c81b..edf25ad835 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
@@ -2,8 +2,21 @@
title: kube-apiserver
content_type: tool-reference
weight: 30
+auto_generated: true
---
+
+
+
## {{% heading "synopsis" %}}
@@ -29,1099 +42,1099 @@ kube-apiserver [flags]
--add-dir-header
-
If true, adds the file directory to the header of the log messages
+
If true, adds the file directory to the header of the log messages
--admission-control-config-file string
-
File with admission control configuration.
+
File with admission control configuration.
-
--advertise-address ip
+
--advertise-address string
-
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
+
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
--allow-privileged
-
If true, allow privileged containers. [default=false]
+
If true, allow privileged containers. [default=false]
--alsologtostderr
-
log to standard error as well as files
+
log to standard error as well as files
--anonymous-auth Default: true
-
Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.
+
Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.
-
--api-audiences stringSlice
+
--api-audiences strings
-
Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
+
Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
--apiserver-count int Default: 1
-
The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)
+
The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)
--audit-log-batch-buffer-size int Default: 10000
-
The size of the buffer to store events before batching and writing. Only used in batch mode.
+
The size of the buffer to store events before batching and writing. Only used in batch mode.
--audit-log-batch-max-size int Default: 1
-
The maximum size of a batch. Only used in batch mode.
+
The maximum size of a batch. Only used in batch mode.
--audit-log-batch-max-wait duration
-
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-log-batch-throttle-burst int
-
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-log-batch-throttle-enable
-
Whether batching throttling is enabled. Only used in batch mode.
+
Whether batching throttling is enabled. Only used in batch mode.
-
--audit-log-batch-throttle-qps float32
+
--audit-log-batch-throttle-qps float
-
Maximum average number of batches per second. Only used in batch mode.
+
Maximum average number of batches per second. Only used in batch mode.
--audit-log-compress
-
If set, the rotated log files will be compressed using gzip.
+
If set, the rotated log files will be compressed using gzip.
--audit-log-format string Default: "json"
-
Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.
+
Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.
--audit-log-maxage int
-
The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+
The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
--audit-log-maxbackup int
-
The maximum number of old audit log files to retain.
+
The maximum number of old audit log files to retain.
--audit-log-maxsize int
-
The maximum size in megabytes of the audit log file before it gets rotated.
+
The maximum size in megabytes of the audit log file before it gets rotated.
--audit-log-mode string Default: "blocking"
-
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
+
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-log-path string
-
If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+
If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
--audit-log-truncate-enabled
-
Whether event and batch truncating is enabled.
+
Whether event and batch truncating is enabled.
--audit-log-truncate-max-batch-size int Default: 10485760
-
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
+
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-log-truncate-max-event-size int Default: 102400
-
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
+
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
The amount of time to wait before retrying the first failed request.
+
The amount of time to wait before retrying the first failed request.
--audit-webhook-mode string Default: "batch"
-
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
+
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-webhook-truncate-enabled
-
Whether event and batch truncating is enabled.
+
Whether event and batch truncating is enabled.
--audit-webhook-truncate-max-batch-size int Default: 10485760
-
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
+
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-webhook-truncate-max-event-size int Default: 102400
-
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
+
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
The duration to cache responses from the webhook token authenticator.
+
The duration to cache responses from the webhook token authenticator.
--authentication-token-webhook-config-file string
-
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
+
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
The duration to cache 'unauthorized' responses from the webhook authorizer.
+
The duration to cache 'unauthorized' responses from the webhook authorizer.
--authorization-webhook-config-file string
-
File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
+
File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.
+
The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
+
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string Default: "/var/run/kubernetes"
-
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
+
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
-
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
-
The path to the cloud provider configuration file. Empty string for no configuration file.
+
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
-
The provider for cloud services. Empty string for no provider.
+
The provider for cloud services. Empty string for no provider.
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
+
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--contention-profiling
-
Enable lock contention profiling, if profiling is enabled
+
Enable lock contention profiling, if profiling is enabled
-
--cors-allowed-origins stringSlice
+
--cors-allowed-origins strings
-
List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
+
List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
--default-not-ready-toleration-seconds int Default: 300
-
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
+
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int Default: 300
-
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
+
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-watch-cache-size int Default: 100
-
Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.
+
Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.
--delete-collection-workers int Default: 1
-
Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.
+
Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.
-
--disable-admission-plugins stringSlice
+
--disable-admission-plugins strings
-
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--egress-selector-config-file string
-
File with apiserver egress selector configuration.
+
File with apiserver egress selector configuration.
-
--enable-admission-plugins stringSlice
+
--enable-admission-plugins strings
-
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--enable-aggregator-routing
-
Turns on aggregator routing requests to endpoints IP rather than cluster IP.
+
Turns on aggregator routing requests to endpoints IP rather than cluster IP.
--enable-bootstrap-token-auth
-
Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
+
Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
--enable-garbage-collector Default: true
-
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.
+
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.
--enable-priority-and-fairness Default: true
-
If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
+
If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
--encryption-provider-config string
-
The file containing configuration for encryption providers to be used for storing secrets in etcd
+
The file containing configuration for encryption providers to be used for storing secrets in etcd
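The file passed to `--encryption-provider-config` holds an `EncryptionConfiguration` object. A minimal sketch, assuming AES-CBC for new writes with plaintext fallback for reads; the key name and base64 secret are placeholders you would generate yourself:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:                # encrypt new writes with AES-CBC
          keys:
            - name: key1
              secret: <BASE64-ENCODED 32-BYTE KEY>  # e.g. head -c 32 /dev/urandom | base64
      - identity: {}           # fall back to reading unencrypted data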
The interval of requests to poll etcd and update metric. 0 disables the metric collection
+
The interval of requests to poll etcd and update metric. 0 disables the metric collection
--etcd-healthcheck-timeout duration Default: 2s
-
The timeout to use when checking etcd health.
+
The timeout to use when checking etcd health.
--etcd-keyfile string
-
SSL key file used to secure etcd communication.
+
SSL key file used to secure etcd communication.
--etcd-prefix string Default: "/registry"
-
The prefix to prepend to all resource paths in etcd.
+
The prefix to prepend to all resource paths in etcd.
-
--etcd-servers stringSlice
+
--etcd-servers strings
-
List of etcd servers to connect with (scheme://ip:port), comma separated.
+
List of etcd servers to connect with (scheme://ip:port), comma separated.
-
--etcd-servers-overrides stringSlice
+
--etcd-servers-overrides strings
-
Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
+
Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
--event-ttl duration Default: 1h0m0s
-
Amount of time to retain events.
+
Amount of time to retain events.
--experimental-logging-sanitization
-
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
+
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
--external-hostname string
-
The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).
+
The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).
To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
+
To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
-h, --help
-
help for kube-apiserver
+
help for kube-apiserver
--http2-max-streams-per-connection int
-
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
+
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--identity-lease-duration-seconds int Default: 3600
-
The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)
+
The duration of kube-apiserver lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)
--identity-lease-renew-interval-seconds int Default: 10
-
The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)
+
The interval of kube-apiserver renewing its lease in seconds, must be a positive number. (In use when the APIServerIdentity feature gate is enabled.)
--kubelet-certificate-authority string
-
Path to a cert file for the certificate authority.
+
Path to a cert file for the certificate authority.
List of the preferred NodeAddressTypes to use for kubelet connections.
+
List of the preferred NodeAddressTypes to use for kubelet connections.
--kubelet-timeout duration Default: 5s
-
Timeout for kubelet operations.
+
Timeout for kubelet operations.
--kubernetes-service-node-port int
-
If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
+
If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
--livez-grace-period duration
-
This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.
+
This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.
-
--log-backtrace-at traceLocation Default: :0
+
--log-backtrace-at <a string in the form 'file:N'> Default: :0
-
when logging hits line file:N, emit a stack trace
+
when logging hits line file:N, emit a stack trace
--log-dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory
--log-file string
-
If non-empty, use this log file
+
If non-empty, use this log file
--log-file-max-size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration Default: 5s
-
Maximum number of seconds between log flushes
+
Maximum number of seconds between log flushes
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
+
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.
+
DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.
--max-connection-bytes-per-sec int
-
If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
+
If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
--max-mutating-requests-inflight int Default: 200
-
The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
+
The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--max-requests-inflight int Default: 400
-
The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
+
The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--min-request-timeout int Default: 1800
-
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
+
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
--oidc-ca-file string
-
If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
+
If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
--oidc-client-id string
-
The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
+
The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
--oidc-groups-claim string
-
If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
+
If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
--oidc-groups-prefix string
-
If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
+
If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
--oidc-issuer-url string
-
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
+
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
+
A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
-
--oidc-signing-algs stringSlice Default: [RS256]
+
--oidc-signing-algs strings Default: "RS256"
-
Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.
+
Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.
--oidc-username-claim string Default: "sub"
-
The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.
+
The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.
--oidc-username-prefix string
-
If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
+
If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
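The `--oidc-*` flags above are usually set together. One common way to pass them is via kubeadm's `apiServer.extraArgs`; a hedged sketch in which the issuer URL and claim names are hypothetical values for illustration:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://accounts.example.com"  # hypothetical HTTPS issuer
    oidc-client-id: "kubernetes"
    oidc-username-claim: "email"
    oidc-username-prefix: "oidc:"                    # avoids clashes with other authn
    oidc-groups-claim: "groups"
    oidc-groups-prefix: "oidc:"
```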
--one-output
-
If true, only write logs to their native severity level (vs also writing to each lower severity level
+
If true, only write logs to their native severity level (vs also writing to each lower severity level)
--permit-port-sharing
-
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
--profiling Default: true
-
Enable profiling via web interface host:port/debug/pprof/
+
Enable profiling via web interface host:port/debug/pprof/
--proxy-client-cert-file string
-
Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
+
Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
--proxy-client-key-file string
-
Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
+
Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
--request-timeout duration Default: 1m0s
-
An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.
+
An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.
-
--requestheader-allowed-names stringSlice
+
--requestheader-allowed-names strings
-
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
-
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
-
--requestheader-extra-headers-prefix stringSlice
+
--requestheader-extra-headers-prefix strings
-
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
+
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
-
--requestheader-group-headers stringSlice
+
--requestheader-group-headers strings
-
List of request headers to inspect for groups. X-Remote-Group is suggested.
+
List of request headers to inspect for groups. X-Remote-Group is suggested.
-
--requestheader-username-headers stringSlice
+
--requestheader-username-headers strings
-
List of request headers to inspect for usernames. X-Remote-User is common.
+
List of request headers to inspect for usernames. X-Remote-User is common.
A set of key=value pairs that enable or disable built-in APIs. Supported options are: v1=true|false for the core API group <group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true) api/all=true|false controls all API versions api/ga=true|false controls all API versions of the form v[0-9]+ api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+ api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+ api/legacy is deprecated, and will be removed in a future version
+
A set of key=value pairs that enable or disable built-in APIs. Supported options are: v1=true|false for the core API group &lt;group&gt;/&lt;version&gt;=true|false for a specific API group and version (e.g. apps/v1=true) api/all=true|false controls all API versions api/ga=true|false controls all API versions of the form v[0-9]+ api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+ api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+ api/legacy is deprecated, and will be removed in a future version
--secure-port int Default: 6443
-
The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.
+
The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.
Turns on projected service account expiration extension during token generation, which helps safe transition from legacy token to bound service account token feature. If this flag is enabled, admission injected tokens would be extended up to 1 year to prevent unexpected failure during transition, ignoring value of service-account-max-token-expiration.
+
Turns on projected service account expiration extension during token generation, which helps safe transition from legacy token to bound service account token feature. If this flag is enabled, admission injected tokens would be extended up to 1 year to prevent unexpected failure during transition, ignoring value of service-account-max-token-expiration.
--service-account-issuer string
-
Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.
+
Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.
--service-account-jwks-uri string
-
Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery docand key set are served to relying parties from a URL other than the API server's external (as auto-detected or overridden with external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.
+
Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.
-
--service-account-key-file stringArray
+
--service-account-key-file strings
-
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
+
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
--service-account-lookup Default: true
-
If true, validate ServiceAccount tokens exist in etcd as part of authentication.
+
If true, validate ServiceAccount tokens exist in etcd as part of authentication.
--service-account-max-token-expiration duration
-
The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
+
The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
--service-account-signing-key-file string
-
Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key.
+
Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key.
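The service-account token flags are typically configured together in the kube-apiserver static Pod manifest. A sketch of the relevant arguments; the file paths follow kubeadm's default layout and are assumptions, not requirements:

```yaml
# excerpt from a kube-apiserver static Pod manifest
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --service-account-issuer=https://kubernetes.default.svc
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub      # verifies tokens
        - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  # signs tokens
```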
--service-cluster-ip-range string
-
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods.
+
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods.
--service-node-port-range <a string in the form 'N1-N2'> Default: 30000-32767
-
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
+
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
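A Service of type NodePort can request a specific port from this range via `spec.ports[].nodePort`; a minimal sketch with a hypothetical Service name and port numbers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # hypothetical name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # must fall inside --service-node-port-range
```

If `nodePort` is omitted, the API server picks a free port from the range automatically.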
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is &lt;major&gt;.&lt;minor&gt;, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--shutdown-delay-duration duration
-
Time to delay the termination. During that time the server keeps serving requests normally. The endpoints /healthz and /livez will return success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow load balancer to stop sending traffic to this server.
+
Time to delay the termination. During that time the server keeps serving requests normally. The endpoints /healthz and /livez will return success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow load balancer to stop sending traffic to this server.
--skip-headers
-
If true, avoid header prefixes in the log messages
+
If true, avoid header prefixes in the log messages
--skip-log-headers
-
If true, avoid headers when opening log files
+
If true, avoid headers when opening log files
-
--stderrthreshold severity Default: 2
+
--stderrthreshold int Default: 2
-
logs at or above this threshold go to stderr
+
logs at or above this threshold go to stderr
--storage-backend string
-
The storage backend for persistence. Options: 'etcd3' (default).
+
The storage backend for persistence. Options: 'etcd3' (default).
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.
+
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.
--tls-cert-file string
-
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
-
--tls-cipher-suites stringSlice
+
--tls-cipher-suites strings
-
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string
-
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
-
File containing the default x509 private key matching --tls-cert-file.
+
File containing the default x509 private key matching --tls-cert-file.
-
--tls-sni-cert-key namedCertKey Default: []
+
--tls-sni-cert-key string
-
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
+
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
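As a sketch of the syntax above (all certificate and key file names here are hypothetical), a server with one default certificate and two SNI pairs repeats the flag once per pair:

```shell
# Hypothetical file names; repeat --tls-sni-cert-key once per key/certificate pair.
args="--tls-cert-file=default.crt --tls-private-key-file=default.key"
# Serve foo.crt for SNI names matching *.foo.com or foo.com:
args="$args --tls-sni-cert-key=foo.crt,foo.key:*.foo.com,foo.com"
# A second pair with no domain patterns; served names are extracted from the certificate:
args="$args --tls-sni-cert-key=example.crt,example.key"
echo "kube-apiserver $args"
```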
--token-auth-file string
-
If set, the file that will be used to secure the secure port of the API server via token authentication.
+
If set, the file that will be used to authenticate requests to the API server's secure port via token authentication.
-
-v, --v Level
+
-v, --v int
-
number for the log level verbosity
+
Number for the log level verbosity
--version version[=true]
-
Print version information and quit
+
Print version information and quit
-
--vmodule moduleSpec
+
--vmodule <comma-separated 'pattern=N' settings>
-
comma-separated list of pattern=N settings for file-filtered logging
+
comma-separated list of pattern=N settings for file-filtered logging
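A minimal sketch of how a --vmodule value splits into its pattern=N settings (the pattern names below are illustrative, not taken from this page):

```shell
set -f  # disable globbing so a pattern such as 'gc*' stays literal
vmodule="gc*=3,node_lifecycle*=4"   # illustrative pattern=N settings
for spec in $(printf '%s' "$vmodule" | tr ',' ' '); do
  pattern=${spec%%=*}   # file-name pattern the setting applies to
  level=${spec#*=}      # verbosity level for files matching the pattern
  echo "files matching '$pattern' log at verbosity $level"
done
```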
--watch-cache Default: true
-
Enable watch caching in the apiserver
+
Enable watch caching in the apiserver
-
--watch-cache-sizes stringSlice
+
--watch-cache-sizes strings
-
Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
+
Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
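The resource[.group]#size format described above can be sketched as follows (the sizes are made-up values; the group suffix is omitted for legacy core v1 resources such as pods):

```shell
# Made-up cache sizes for illustration only.
sizes="pods#1000,apiservices.apiregistration.k8s.io#10"
for entry in $(printf '%s' "$sizes" | tr ',' ' '); do
  resource=${entry%%#*}   # lowercase plural resource, optionally with .group
  size=${entry##*#}       # numeric watch cache size
  echo "watch cache for $resource: $size"
done
```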
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
+
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
--authentication-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
-
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
The duration to cache responses from the webhook token authenticator.
+
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
-
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
+
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
The duration to cache 'unauthorized' responses from the webhook authorizer.
+
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
+
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
-
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
+
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
-
The path to the cloud provider configuration file. Empty string for no configuration file.
+
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
-
The provider for cloud services. Empty string for no provider.
+
The provider for cloud services. Empty string for no provider.
--cluster-cidr string
-
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
+
CIDR range for Pods in the cluster. Requires --allocate-node-cidrs to be true
--cluster-name string Default: "kubernetes"
-
The instance prefix for the cluster.
+
The instance prefix for the cluster.
--cluster-signing-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
+
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
The length of duration signed certificates will be given.
+
The length of duration signed certificates will be given.
--cluster-signing-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-client-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-client-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-kubelet-serving-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kubelet-serving signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-legacy-unknown-cert-file string
-
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--cluster-signing-legacy-unknown-key-file string
-
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
+
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/legacy-unknown signer. If specified, --cluster-signing-{cert,key}-file must not be set.
--concurrent-deployment-syncs int32 Default: 5
-
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
+
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
--concurrent-endpoint-syncs int32 Default: 5
-
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
+
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
--concurrent-gc-syncs int32 Default: 20
-
The number of garbage collector workers that are allowed to sync concurrently.
+
The number of garbage collector workers that are allowed to sync concurrently.
--concurrent-namespace-syncs int32 Default: 10
-
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
+
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
--concurrent-replicaset-syncs int32 Default: 5
-
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
+
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
+
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
--concurrent-service-syncs int32 Default: 1
-
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
+
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
+
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
--concurrent-statefulset-syncs int32 Default: 5
-
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
+
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
+
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
--concurrent_rc_syncs int32 Default: 5
-
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
+
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--configure-cloud-routes Default: true
-
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
+
Whether CIDRs allocated by --allocate-node-cidrs should be configured on the cloud provider.
--contention-profiling
-
Enable lock contention profiling, if profiling is enabled
+
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
-
Interval between starting controller managers.
+
Interval between starting controller managers.
-
--controllers stringSlice Default: [*]
+
--controllers strings Default: "*"
-
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished Disabled-by-default controllers: bootstrapsigner, tokencleaner
+
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished Disabled-by-default controllers: bootstrapsigner, tokencleaner
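A sketch of how the list syntax composes: '*' keeps the on-by-default set, a bare name adds a disabled-by-default controller, and a leading '-' disables one:

```shell
set -f  # keep the literal '*' token from being glob-expanded
controllers="*,bootstrapsigner,-ttl"   # enable defaults, add bootstrapsigner, drop ttl
for c in $(printf '%s' "$controllers" | tr ',' ' '); do
  case "$c" in
    '*') echo "enable all on-by-default controllers" ;;
    -*)  echo "disable ${c#-}" ;;
    *)   echo "enable $c" ;;
  esac
done
```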
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
+
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
--enable-dynamic-provisioning Default: true
-
Enable dynamic provisioning for environments that support it.
+
Enable dynamic provisioning for environments that support it.
--enable-garbage-collector Default: true
-
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
+
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
--enable-hostpath-provisioner
-
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
+
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
--enable-taint-manager Default: true
-
WARNING: Beta feature. If set to true enables NoExecute Taints and will evict all not-tolerating Pod running on Nodes tainted with this kind of Taints.
+
WARNING: Beta feature. If set to true, enables NoExecute taints and will evict all Pods that do not tolerate such taints from the Nodes tainted with them.
--endpoint-updates-batch-period duration
-
The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of the endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint updates. Larger number = higher endpoint programming latency, but fewer endpoint revisions generated
--endpointslice-updates-batch-period duration
-
The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of the endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint updates. Larger number = higher endpoint programming latency, but fewer endpoint revisions generated
--experimental-logging-sanitization
-
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
+
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
--external-cloud-volume-plugin string
-
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
+
The plugin to use when the cloud provider is set to external. Can be empty; should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in-tree cloud providers.
QPS to use while talking with kubernetes apiserver.
+
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
-
Path to kubeconfig file with authorization and master location information.
+
Path to kubeconfig file with authorization and master location information.
--large-cluster-size-threshold int32 Default: 50
-
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
+
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
--leader-elect Default: true
-
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
+
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
+
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
+
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
The namespace of resource object that is used for locking during leader election.
+
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration Default: 2s
-
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
+
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
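The three leader-election durations above are related: the renew deadline must not exceed the lease duration, and the retry period should be well below the renew deadline. An illustrative check (the retry-period default of 2s is from the flag above; 15s/10s are assumed typical lease/renew settings, not taken from this page):

```shell
# Illustrative leader-election timings, in seconds.
lease_duration=15   # assumed value
renew_deadline=10   # assumed value; must be <= lease_duration
retry_period=2      # default from the flag above
[ "$renew_deadline" -le "$lease_duration" ] && [ "$retry_period" -lt "$renew_deadline" ] \
  && echo "leader-election timings are consistent"
```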
-
--log-backtrace-at traceLocation Default: :0
+
--log-backtrace-at <a string in the form 'file:N'> Default: :0
-
when logging hits line file:N, emit a stack trace
+
when logging hits line file:N, emit a stack trace
--log-dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory
--log-file string
-
If non-empty, use this log file
+
If non-empty, use this log file
--log-file-max-size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration Default: 5s
-
Maximum number of seconds between log flushes
+
Maximum number of seconds between log flushes
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
+
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
--logtostderr Default: true
-
log to standard error instead of files
+
log to standard error instead of files
--master string
-
The address of the Kubernetes API server (overrides any value in kubeconfig).
+
The address of the Kubernetes API server (overrides any value in kubeconfig).
--max-endpoints-per-slice int32 Default: 100
-
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
+
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in fewer EndpointSlices, but larger resources. Defaults to 100.
--min-resync-period duration Default: 12h0m0s
-
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
+
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
+
The number of service endpoint syncing operations that will be done concurrently by the EndpointSliceMirroring controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
The length of EndpointSlice updates batching period for EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
+
The length of the EndpointSlice updates batching period for the EndpointSliceMirroring controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but fewer EndpointSlice revisions generated
The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.
+
The maximum number of endpoints that will be added to an EndpointSlice by the EndpointSliceMirroring controller. More endpoints per slice will result in fewer EndpointSlices, but larger resources. Defaults to 100.
--namespace-sync-period duration Default: 5m0s
-
The period for syncing namespace life-cycle updates
+
The period for syncing namespace life-cycle updates
--node-cidr-mask-size int32
-
Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.
+
Mask size for the node CIDR in the cluster. Default is 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
-
Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.
+
Mask size for the IPv4 node CIDR in a dual-stack cluster. Default is 24.
--node-cidr-mask-size-ipv6 int32
-
Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.
+
Mask size for the IPv6 node CIDR in a dual-stack cluster. Default is 64.
-
--node-eviction-rate float32 Default: 0.1
+
--node-eviction-rate float Default: 0.1
-
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
+
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
--node-monitor-grace-period duration Default: 40s
-
Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
+
Amount of time that a running Node may be unresponsive before it is marked unhealthy. Must be N times more than the kubelet's nodeStatusUpdateFrequency, where N is the number of retries allowed for the kubelet to post node status.
--node-monitor-period duration Default: 5s
-
The period for syncing NodeStatus in NodeController.
+
The period for syncing NodeStatus in NodeController.
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
+
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
+
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-pod-template-filepath-nfs string
-
The file path to a pod definition used as a template for NFS persistent volume recycling
+
The file path to a pod definition used as a template for NFS persistent volume recycling
the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
+
The increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
--pvclaimbinder-sync-period duration Default: 15s
-
The period for syncing persistent volumes and persistent volume claims
+
The period for syncing persistent volumes and persistent volume claims
-
--requestheader-allowed-names stringSlice
+
--requestheader-allowed-names strings
-
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
-
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
+
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
--secure-port int Default: 10257
-
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
+
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--service-account-private-key-file string
-
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
+
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
--service-cluster-ip-range string
-
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
+
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
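--show-hidden-metrics-for-version accepts only the previous minor release in <major>.<minor> form. A hedged sketch of that validation (the current version used here is an assumed example, and the helper name is illustrative):

```python
import re

def valid_show_hidden_version(value: str, current: str = "1.21") -> bool:
    # Must be exactly <major>.<minor> ...
    if not re.fullmatch(r"\d+\.\d+", value):
        return False
    major, minor = map(int, current.split("."))
    # ... and only the previous minor release is meaningful.
    return value == f"{major}.{minor - 1}"

print(valid_show_hidden_version("1.20"))   # True
print(valid_show_hidden_version("1.16"))   # False: too old
print(valid_show_hidden_version("v1.20"))  # False: wrong format
```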
--skip-headers
-
If true, avoid header prefixes in the log messages
+
If true, avoid header prefixes in the log messages
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
+
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
--tls-cert-file string
-
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
-
--tls-cipher-suites stringSlice
+
--tls-cipher-suites strings
-
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string
-
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
-
File containing the default x509 private key matching --tls-cert-file.
+
File containing the default x509 private key matching --tls-cert-file.
-
--tls-sni-cert-key namedCertKey Default: []
+
--tls-sni-cert-key string
-
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
+
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
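The --tls-sni-cert-key syntax above packs file paths and optional domain patterns into one string. A small parser sketch of the documented format (not the apiserver's actual implementation):

```python
def parse_sni_cert_key(value: str):
    # Format: "cert,key" optionally followed by ":pattern1,pattern2,...".
    files, _, patterns = value.partition(":")
    cert, key = files.split(",")
    return cert, key, patterns.split(",") if patterns else []

print(parse_sni_cert_key("example.crt,example.key"))
# ('example.crt', 'example.key', [])
print(parse_sni_cert_key("foo.crt,foo.key:*.foo.com,foo.com"))
# ('foo.crt', 'foo.key', ['*.foo.com', 'foo.com'])
```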
-
--unhealthy-zone-threshold float32 Default: 0.55
+
--unhealthy-zone-threshold float Default: 0.55
-
Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.
+
Fraction of Nodes in a zone that must be NotReady (minimum 3) for the zone to be treated as unhealthy.
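The eviction flags above (--node-eviction-rate, --secondary-node-eviction-rate, --unhealthy-zone-threshold, --large-cluster-size-threshold) interact per zone. A simplified sketch of how a rate might be chosen, using the documented defaults; the node lifecycle controller's real logic has more cases:

```python
def zone_eviction_rate(total_nodes: int, not_ready: int,
                       normal_rate: float = 0.1,
                       secondary_rate: float = 0.01,
                       unhealthy_threshold: float = 0.55,
                       large_cluster_threshold: int = 50) -> float:
    # A zone is unhealthy when at least 3 nodes (and at least the
    # threshold fraction of all nodes) are NotReady.
    unhealthy = (not_ready >= 3 and
                 not_ready / total_nodes >= unhealthy_threshold)
    if unhealthy:
        # The secondary rate applies, but is implicitly zeroed when the
        # cluster is smaller than the large-cluster threshold.
        return secondary_rate if total_nodes >= large_cluster_threshold else 0.0
    return normal_rate

print(zone_eviction_rate(100, 10))  # 0.1: zone healthy, normal rate
print(zone_eviction_rate(100, 60))  # 0.01: zone unhealthy, large cluster
print(zone_eviction_rate(10, 6))    # 0.0: zone unhealthy, small cluster
```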
--use-service-account-credentials
-
If true, use individual service account credentials for each controller.
+
If true, use individual service account credentials for each controller.
-
-v, --v Level
+
-v, --v int
-
number for the log level verbosity
+
number for the log level verbosity
--version version[=true]
-
Print version information and quit
+
Print version information and quit
-
--vmodule moduleSpec
+
--vmodule <comma-separated 'pattern=N' settings>
-
comma-separated list of pattern=N settings for file-filtered logging
+
comma-separated list of pattern=N settings for file-filtered logging
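--vmodule values are a comma-separated list of pattern=N pairs. A sketch parser for that documented format (illustrative only):

```python
def parse_vmodule(spec: str) -> dict:
    # "gc*=3,server=2" -> {"gc*": 3, "server": 2}
    levels = {}
    for item in spec.split(","):
        pattern, _, level = item.partition("=")
        levels[pattern] = int(level)
    return levels

print(parse_vmodule("gc*=3,server=2"))  # {'gc*': 3, 'server': 2}
```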
--volume-host-allow-local-loopback Default: true
-
If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist
+
If false, deny local loopback IPs in addition to any CIDR ranges in --volume-host-cidr-denylist
-
--volume-host-cidr-denylist stringSlice
+
--volume-host-cidr-denylist strings
-
A comma-separated list of CIDR ranges to avoid from volume plugins.
+
A comma-separated list of CIDR ranges to avoid from volume plugins.
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
+
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
--bind-address-hard-fail
-
If true kube-proxy will treat failure to bind to a port as fatal and exit
+
If true, kube-proxy will treat failure to bind to a port as fatal and exit
--cleanup
-
If true cleanup iptables and ipvs rules and exit.
+
If true, clean up iptables and IPVS rules and exit.
--cluster-cidr string
-
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
+
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
--config string
-
The path to the configuration file.
+
The path to the configuration file.
--config-sync-period duration Default: 15m0s
-
How often configuration from the apiserver is refreshed. Must be greater than 0.
+
How often configuration from the apiserver is refreshed. Must be greater than 0.
--conntrack-max-per-core int32 Default: 32768
-
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
+
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
--conntrack-min int32 Default: 131072
-
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
+
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
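--conntrack-max-per-core and --conntrack-min combine into a single effective limit: the per-core product, floored at the minimum. A sketch of the documented semantics (kube-proxy's actual code may differ in detail):

```python
def effective_conntrack_max(cores: int,
                            max_per_core: int = 32768,
                            conntrack_min: int = 131072) -> int:
    if max_per_core == 0:
        return 0  # 0 means: leave the system limit as-is
    return max(conntrack_min, cores * max_per_core)

print(effective_conntrack_max(2))  # 131072: 2*32768 is below the floor
print(effective_conntrack_max(8))  # 262144: 8*32768 exceeds the floor
```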
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
+
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
-h, --help
-
help for kube-proxy
+
help for kube-proxy
--hostname-override string
-
If non-empty, will use this string as identification instead of the actual hostname.
+
If non-empty, will use this string as identification instead of the actual hostname.
--iptables-masquerade-bit int32 Default: 14
-
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
+
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
--iptables-min-sync-period duration Default: 1s
-
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--iptables-sync-period duration Default: 30s
-
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
-
--ipvs-exclude-cidrs stringSlice
+
--ipvs-exclude-cidrs strings
-
A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules.
+
A comma-separated list of CIDRs which the ipvs proxier should not touch when cleaning up IPVS rules.
--ipvs-min-sync-period duration
-
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--ipvs-scheduler string
-
The ipvs scheduler type when proxy mode is ipvs
+
The ipvs scheduler type when proxy mode is ipvs
--ipvs-strict-arp
-
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
+
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
--ipvs-sync-period duration Default: 30s
-
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-tcp-timeout duration
-
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-tcpfin-timeout duration
-
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-udp-timeout duration
-
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
+
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--kube-api-burst int32 Default: 10
-
Burst to use while talking with kubernetes apiserver
+
Burst to use while talking with kubernetes apiserver
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
+
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
-
--nodeport-addresses stringSlice
+
--nodeport-addresses strings
-
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
+
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
--oom-score-adj int32 Default: -999
-
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
+
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
--profiling
-
If true enables profiling via web interface on /debug/pprof handler.
+
If true enables profiling via web interface on /debug/pprof handler.
--proxy-mode ProxyMode
-
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' or 'kernelspace' (windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
+
Which proxy mode to use: 'userspace' (older), 'iptables' (faster), 'ipvs', or 'kernelspace' (Windows). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected (regardless of how) but the system's kernel or iptables version is insufficient, this always falls back to the userspace proxy.
--proxy-port-range port-range
-
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
+
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
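--proxy-port-range accepts three spellings. A sketch parser for the documented forms (illustrative, not kube-proxy's parser):

```python
def parse_port_range(spec: str):
    # Empty, "0", or "0-0" means: choose ports randomly.
    if spec in ("", "0", "0-0"):
        return None
    if "-" in spec:                      # beginPort-endPort, inclusive
        begin, end = map(int, spec.split("-"))
        return begin, end
    if "+" in spec:                      # beginPort+offset, inclusive
        begin, offset = map(int, spec.split("+"))
        return begin, begin + offset
    port = int(spec)                     # single port
    return port, port

print(parse_port_range("30000-32000"))  # (30000, 32000)
print(parse_port_range("30000+100"))    # (30000, 30100)
print(parse_port_range("0-0"))          # None
```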
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--udp-timeout duration Default: 250ms
-
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
+
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
--version version[=true]
-
Print version information and quit
+
Print version information and quit
--write-config-to string
-
If set, write the default configuration values to this file and exit.
+
If set, write the default configuration values to this file and exit.
If true, adds the file directory to the header of the log messages
+
If true, adds the file directory to the header of the log messages
--address string Default: "0.0.0.0"
-
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
+
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
--algorithm-provider string
-
DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider
+
DEPRECATED: the scheduling algorithm provider to use, this sets the default plugins for component config profiles. Choose one of: ClusterAutoscalerProvider | DefaultProvider
--alsologtostderr
-
log to standard error as well as files
+
log to standard error as well as files
--authentication-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
-
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
+
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
-
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
+
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
The duration to cache 'unauthorized' responses from the webhook authorizer.
+
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
-
Path to the file containing Azure container registry configuration information.
+
Path to the file containing Azure container registry configuration information.
-
--bind-address ip Default: 0.0.0.0
+
--bind-address string Default: 0.0.0.0
-
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
+
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
-
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
+
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
-
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--config string
-
The path to the configuration file. The following flags can overwrite fields in this file: --address --port --use-legacy-policy-config --policy-configmap --policy-config-file --algorithm-provider
+
The path to the configuration file. The following flags can overwrite fields in this file: --address --port --use-legacy-policy-config --policy-configmap --policy-config-file --algorithm-provider
--contention-profiling Default: true
-
DEPRECATED: enable lock contention profiling, if profiling is enabled
+
DEPRECATED: enable lock contention profiling, if profiling is enabled
--experimental-logging-sanitization
-
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
+
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100.This option was moved to the policy configuration file
+
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of the implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100. This option was moved to the policy configuration file
-h, --help
-
help for kube-scheduler
+
help for kube-scheduler
--http2-max-streams-per-connection int
-
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
+
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32 Default: 100
-
DEPRECATED: burst to use while talking with kubernetes apiserver
+
DEPRECATED: burst to use while talking with kubernetes apiserver
DEPRECATED: content type of requests sent to apiserver.
+
DEPRECATED: content type of requests sent to apiserver.
-
--kube-api-qps float32 Default: 50
+
--kube-api-qps float Default: 50
-
DEPRECATED: QPS to use while talking with kubernetes apiserver
+
DEPRECATED: QPS to use while talking with kubernetes apiserver
--kubeconfig string
-
DEPRECATED: path to kubeconfig file with authorization and master location information.
+
DEPRECATED: path to kubeconfig file with authorization and master location information.
--leader-elect Default: true
-
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
+
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
+
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
+
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
+
The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'.
The namespace of resource object that is used for locking during leader election.
+
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration Default: 2s
-
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
+
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
+
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
-
--log-backtrace-at traceLocation Default: :0
+
--log-backtrace-at <a string in the form 'file:N'> Default: :0
-
when logging hits line file:N, emit a stack trace
+
when logging hits line file:N, emit a stack trace
--log-dir string
-
If non-empty, write log files in this directory
+
If non-empty, write log files in this directory
--log-file string
-
If non-empty, use this log file
+
If non-empty, use this log file
--log-file-max-size uint Default: 1800
-
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
+
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration Default: 5s
-
Maximum number of seconds between log flushes
+
Maximum number of seconds between log flushes
--logging-format string Default: "text"
-
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
+
Sets the log format. Permitted formats: "json", "text". Non-default formats don't honor these flags: --add_dir_header, --alsologtostderr, --log_backtrace_at, --log_dir, --log_file, --log_file_max_size, --logtostderr, --one_output, --skip_headers, --skip_log_headers, --stderrthreshold, --vmodule, --log-flush-frequency. Non-default choices are currently alpha and subject to change without warning.
--logtostderr Default: true
-
log to standard error instead of files
+
log to standard error instead of files
--master string
-
The address of the Kubernetes API server (overrides any value in kubeconfig)
+
The address of the Kubernetes API server (overrides any value in kubeconfig)
--one-output
-
If true, only write logs to their native severity level (vs also writing to each lower severity level
+
If true, only write logs to their native severity level (vs also writing to each lower severity level)
--permit-port-sharing
-
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+
If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
--policy-config-file string
-
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true. Note: The scheduler will fail if this is combined with Plugin configs
--policy-configmap string
-
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'. Note: The scheduler will fail if this is combined with Plugin configs
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty. Note: The scheduler will fail if this is combined with Plugin configs
--port int Default: 10251
-
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
+
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
--profiling Default: true
-
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
+
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
-
--requestheader-allowed-names stringSlice
+
--requestheader-allowed-names strings
-
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
-
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
+
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
--secure-port int Default: 10259
-
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
+
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--show-hidden-metrics-for-version string
-
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
+
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
-
If true, avoid header prefixes in the log messages
+
If true, avoid header prefixes in the log messages
--skip-log-headers
-
If true, avoid headers when opening log files
+
If true, avoid headers when opening log files
-
--stderrthreshold severity Default: 2
+
--stderrthreshold int Default: 2
-
logs at or above this threshold go to stderr
+
logs at or above this threshold go to stderr
--tls-cert-file string
-
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
-
--tls-cipher-suites stringSlice
+
--tls-cipher-suites strings
-
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string
-
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
-
File containing the default x509 private key matching --tls-cert-file.
+
File containing the default x509 private key matching --tls-cert-file.
-
--tls-sni-cert-key namedCertKey Default: []
+
--tls-sni-cert-key string
-
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
+
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--use-legacy-policy-config
-
DEPRECATED: when set to true, scheduler will ignore policy ConfigMap and uses policy config file. Note: The scheduler will fail if this is combined with Plugin configs
+
DEPRECATED: when set to true, scheduler will ignore policy ConfigMap and uses policy config file. Note: The scheduler will fail if this is combined with Plugin configs
-
-v, --v Level
+
-v, --v int
-
number for the log level verbosity
+
number for the log level verbosity
--version version[=true]
-
Print version information and quit
+
Print version information and quit
-
--vmodule moduleSpec
+
--vmodule <comma-separated 'pattern=N' settings>
-
comma-separated list of pattern=N settings for file-filtered logging
+
comma-separated list of pattern=N settings for file-filtered logging
--write-config-to string
-
If set, write the configuration values to this file and exit.
+
If set, write the configuration values to this file and exit.
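Several of the flags above are commonly combined. As a rough sketch (all values here are illustrative examples taken from the defaults in this reference, not recommendations, and the command is only written to a file rather than run), a kube-scheduler command line using the leader-election, serving, and logging flags might be assembled like this:

```shell
# Illustrative only: collect a candidate kube-scheduler command line in a
# scratch file for review. Values are hypothetical examples.
cat > /tmp/kube-scheduler-flags.txt <<'EOF'
kube-scheduler \
  --leader-elect=true \
  --leader-elect-retry-period=2s \
  --secure-port=10259 \
  --tls-min-version=VersionTLS12 \
  --logging-format=text \
  -v=2
EOF
# Count the long-form flags collected above.
grep -c -- '--' /tmp/kube-scheduler-flags.txt
# → 5
```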
diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md
index a9f1550659..dbad6f5cf2 100644
--- a/content/en/docs/reference/kubectl/overview.md
+++ b/content/en/docs/reference/kubectl/overview.md
@@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the
This overview covers `kubectl` syntax, describes the command operations, and provides common examples.
For details about each command, including all the supported flags and subcommands, see the
[kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation.
-For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/).
+For installation instructions see [installing kubectl](/docs/tasks/tools/).
diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md
index 7754d7cb7d..8d5cc24208 100644
--- a/content/en/docs/reference/scheduling/config.md
+++ b/content/en/docs/reference/scheduling/config.md
@@ -181,8 +181,6 @@ that are not enabled by default:
- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
the allocated resources.
Extension points: `Score`.
-- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
- Extension points: `PreScore`, `Score`.
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
for the node.
Extension points: `Filter`.
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index 15f3e3a9b6..ba6827d833 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -147,7 +147,7 @@ Start a Powershell session, set `$Version` to the desired version (ex: `$Version
{{% /tab %}}
{{< /tabs >}}
-#### systemd {#containerd-systemd}
+#### Using the `systemd` cgroup driver {#containerd-systemd}
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
@@ -158,6 +158,12 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
SystemdCgroup = true
```
+If you apply this change, make sure to restart containerd again:
+
+```shell
+sudo systemctl restart containerd
+```
+
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node).
@@ -347,7 +353,7 @@ in sync.
### Docker
-1. On each of your nodes, install the Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server)
+1. On each of your nodes, install Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server). You can find the latest validated version of Docker in this [dependencies](https://git.k8s.io/kubernetes/build/dependencies.yaml) file.
2. Configure the Docker daemon, in particular to use systemd for the management of the container’s cgroups.
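The containerd change above (setting `SystemdCgroup = true`) can be sketched as a non-destructive edit-and-verify sequence before restarting containerd. The path below is a scratch copy, not the real `/etc/containerd/config.toml`:

```shell
# Work on a scratch copy of the config rather than the live file.
cat > /tmp/containerd-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF
# Flip the cgroup driver setting to systemd.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.toml
# Verify the change before restarting containerd on a real node
# (sudo systemctl restart containerd).
grep 'SystemdCgroup' /tmp/containerd-config.toml
```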
diff --git a/content/en/docs/setup/production-environment/tools/kops.md b/content/en/docs/setup/production-environment/tools/kops.md
index 13a2474600..4afab697e4 100644
--- a/content/en/docs/setup/production-environment/tools/kops.md
+++ b/content/en/docs/setup/production-environment/tools/kops.md
@@ -23,7 +23,7 @@ kops is an automated provisioning system:
## {{% heading "prerequisites" %}}
-* You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed.
+* You must have [kubectl](/docs/tasks/tools/) installed.
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 7d74a862f9..4f9d9c6ce7 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -160,7 +160,7 @@ kubelet and the control plane is supported, but the kubelet version may never ex
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
-For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
+For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/).
{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 3998b48501..5c33b0a94b 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -221,7 +221,7 @@ On Windows, you can use the following settings to configure Services and load ba
#### IPv4/IPv6 dual-stack
-You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
+You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
{{< note >}}
On Windows, using IPv6 with Kubernetes requires Windows Server, version 2004 (kernel version 10.0.19041.610) or later.
diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md
index adbdb7c48e..54146007a0 100644
--- a/content/en/docs/setup/release/notes.md
+++ b/content/en/docs/setup/release/notes.md
@@ -1906,7 +1906,7 @@ filename | sha512 hash
- Promote SupportNodePidsLimit to GA to provide node to pod pid isolation
Promote SupportPodPidsLimit to GA to provide ability to limit pids per pod ([#94140](https://github.com/kubernetes/kubernetes/pull/94140), [@derekwaynecarr](https://github.com/derekwaynecarr)) [SIG Node and Testing]
- Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
-- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
+- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](/docs/reference/using-api/server-side-apply/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Set CSIMigrationvSphere feature gates to beta.
Users should enable CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workload from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to vSphere CSI Driver.
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
index 191ec0c2fe..0275cadabf 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
@@ -192,7 +192,7 @@ func main() {
}
```
-If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](#accessing-the-api-from-within-a-pod).
+If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod).
#### Python client
diff --git a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
index 1e2cc422e4..6e9dc302c4 100644
--- a/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/cluster-upgrade.md
@@ -34,7 +34,7 @@ If your cluster was deployed using the `kubeadm` tool, refer to
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
-[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
+[install the latest version of `kubectl`](/docs/tasks/tools/).
### Manual deployments
@@ -52,7 +52,7 @@ You should manually update the control plane following this sequence:
- cloud controller manager, if you use one
At this point you should
-[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
+[install the latest version of `kubectl`](/docs/tasks/tools/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index d9d8a5929e..56a6c25e9a 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -170,36 +170,7 @@ controllerManager:
### Create certificate signing requests (CSR)
-You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`.
-
-If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved.
-Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command.
-The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
-
-```shell
-sudo kubeadm certs renew apiserver --use-api &
-```
-The output is similar to this:
-```
-[1] 2890
-[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created
-```
-
-### Approve certificate signing requests (CSR)
-
-If you set up an external signer, certificate signing requests (CSRs) are automatically approved.
-
-Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. e.g.
-
-```shell
-kubectl certificate approve kubeadm-cert-kube-apiserver-ld526
-```
-The output is similar to this:
-```shell
-certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved
-```
-
-You can view a list of pending certificates with `kubectl get csr`.
+See [Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA
diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
index b0e272afa0..96f55c3950 100644
--- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
+++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
@@ -202,4 +202,7 @@ verify that the pods were scheduled by the desired schedulers.
```shell
kubectl get events
```
+You can also use a [custom scheduler configuration](/docs/reference/scheduling/config/#multiple-profiles)
+or a custom container image for the cluster's main scheduler by modifying its static pod manifest
+on the relevant control plane nodes.
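The custom scheduler configuration mentioned above can define multiple profiles served by one scheduler binary. A minimal sketch (the profile name `my-custom-scheduler` is a hypothetical example; the `apiVersion` shown assumes the v1beta1 schema) written and inspected locally:

```shell
# Write a minimal two-profile KubeSchedulerConfiguration (names are examples).
cat > /tmp/scheduler-config.yaml <<'EOF'
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: my-custom-scheduler
EOF
# Pods select a profile via spec.schedulerName; list the profiles defined here.
grep 'schedulerName' /tmp/scheduler-config.yaml
```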
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
index 75c9d56a83..643b57cc3b 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md
@@ -16,7 +16,7 @@ preview of what changes `apply` will make.
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
index a51b5664ba..8e0670a89f 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
@@ -12,7 +12,7 @@ explains how those commands are organized and how to use them to manage live obj
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
index 2b97ed271c..87cc423da7 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
@@ -13,7 +13,7 @@ This document explains how to define and manage objects using configuration file
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
index 7c59052ffa..3ea3c50e8d 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md
@@ -29,7 +29,7 @@ kubectl apply -k
## {{% heading "prerequisites" %}}
-Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
+Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
index d558a271ad..b75493aae1 100644
--- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
+++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
@@ -19,7 +19,7 @@ Up to date information on this process can be found at the
* You must have a Kubernetes cluster with cluster DNS enabled.
* If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
* If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
-* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
+* [Install and set up kubectl](/docs/tasks/tools/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
* Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
* If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm.
diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
index 52a55457a2..0789997309 100644
--- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
+++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md
@@ -23,7 +23,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
-* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
+* [Install and set up kubectl](/docs/tasks/tools/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* The kubectl user must be bound to the *cluster-admin* role for it to install Service Catalog. To ensure that this is true, run the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
index 4f4a1771fe..9854540649 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
@@ -80,10 +80,10 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
-- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
+- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory; that's why the latter two methods work.
{{< /note >}}
-In any case, after reloading your shell, kubectl completion should be working.
\ No newline at end of file
+In any case, after reloading your shell, kubectl completion should be working.
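The completion wiring described above can be checked without restarting the shell. A minimal sketch (assuming bash), using a stub in place of the real `__start_kubectl` function that the kubectl completion script normally defines, so the example is self-contained:

```shell
# Stub standing in for the real completion function defined by
# 'kubectl completion bash'; used here only to keep the example runnable.
__start_kubectl() { :; }
# Register completion for the 'k' alias, as in the doc above.
complete -F __start_kubectl k
# Inspect the registration.
complete -p k
```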
diff --git a/content/en/docs/tasks/tools/install-kubectl-macos.md b/content/en/docs/tasks/tools/install-kubectl-macos.md
index 605745c630..b4fa864985 100644
--- a/content/en/docs/tasks/tools/install-kubectl-macos.md
+++ b/content/en/docs/tasks/tools/install-kubectl-macos.md
@@ -23,7 +23,7 @@ The following methods exist for installing kubectl on macOS:
- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
-- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk)
+- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk)
### Install kubectl binary with curl on macOS
@@ -157,4 +157,4 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
## {{% heading "whatsnext" %}}
-{{< include "included/kubectl-whats-next.md" >}}
\ No newline at end of file
+{{< include "included/kubectl-whats-next.md" >}}
diff --git a/content/en/docs/tutorials/clusters/seccomp.md b/content/en/docs/tutorials/clusters/seccomp.md
index adb3d9c500..376c349f72 100644
--- a/content/en/docs/tutorials/clusters/seccomp.md
+++ b/content/en/docs/tutorials/clusters/seccomp.md
@@ -37,7 +37,7 @@ profiles that give only the necessary privileges to your container processes.
In order to complete all steps in this tutorial, you must install
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and
-[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples
+[kubectl](/docs/tasks/tools/). This tutorial will show examples
with both alpha (pre-v1.19) and generally available seccomp functionality, so
make sure that your cluster is [configured
correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)
diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
index 7555a58201..b29b352aca 100644
--- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
+++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md
@@ -15,10 +15,8 @@ This page provides a real world example of how to configure Redis using a Config
## {{% heading "objectives" %}}
-* Create a `kustomization.yaml` file containing:
- * a ConfigMap generator
- * a Pod resource config using the ConfigMap
-* Apply the directory by running `kubectl apply -k ./`
+* Create a ConfigMap with Redis configuration values
+* Create a Redis Pod that mounts and uses the created ConfigMap
* Verify that the configuration was correctly applied.
@@ -38,82 +36,218 @@ This page provides a real world example of how to configure Redis using a Config
## Real World Example: Configuring Redis using a ConfigMap
-You can follow the steps below to configure a Redis cache using data stored in a ConfigMap.
+Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
-First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file:
-
-{{< codenew file="pods/config/redis-config" >}}
+First create a ConfigMap with an empty configuration block:
```shell
-curl -OL https://k8s.io/examples/pods/config/redis-config
-
-cat <<EOF >./kustomization.yaml
-configMapGenerator:
-- name: example-redis-config
- files:
- - redis-config
+cat <<EOF >./example-redis-config.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: example-redis-config
+data:
+ redis-config: ""
EOF
```
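The `cat <<EOF >file` here-document pattern used above writes everything up to the closing `EOF` marker into the named file. A minimal, self-contained illustration (the `/tmp/demo-configmap.yaml` path is just for demonstration):

```shell
# Write a small manifest to a temporary file using a here-document
cat <<EOF >/tmp/demo-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
EOF

# Show what was written
cat /tmp/demo-configmap.yaml
```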
-Add the pod resource config to the `kustomization.yaml`:
+Apply the ConfigMap created above, along with a Redis pod manifest:
+
+```shell
+kubectl apply -f example-redis-config.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+```
+
+Examine the contents of the Redis pod manifest and note the following:
+
+* A volume named `config` is created by `spec.volumes[1]`
+* The `key` and `path` under `spec.volumes[1].items[0]` expose the `redis-config` key from the
+  `example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
+* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.
+
+This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config`
+ConfigMap above as `/redis-master/redis.conf` inside the Pod.
{{< codenew file="pods/config/redis-pod.yaml" >}}
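For readers of this diff who cannot render the `codenew` shortcode, the volume wiring described in the bullets above looks roughly like this (a sketch of the relevant fields only, not the canonical `redis-pod.yaml`):

```yaml
spec:
  containers:
  - name: redis
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master    # spec.containers[0].volumeMounts[1]
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config                  # spec.volumes[1]
    configMap:
      name: example-redis-config
      items:
      - key: redis-config         # key in the ConfigMap...
        path: redis.conf          # ...exposed as /redis-master/redis.conf
```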
-```shell
-curl -OL https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+Examine the created objects:
-cat <<EOF >>./kustomization.yaml
-resources:
-- redis-pod.yaml
-EOF
+```shell
+kubectl get pod/redis configmap/example-redis-config
```
-Apply the kustomization directory to create both the ConfigMap and Pod objects:
+You should see the following output:
```shell
-kubectl apply -k .
-```
-
-Examine the created objects by
-```shell
-> kubectl get -k .
-NAME DATA AGE
-configmap/example-redis-config-dgh9dg555m 1 52s
-
NAME READY STATUS RESTARTS AGE
-pod/redis 1/1 Running 0 52s
+pod/redis 1/1 Running 0 8s
+
+NAME DATA AGE
+configmap/example-redis-config 1 14s
```
-In the example, the config volume is mounted at `/redis-master`.
-It uses `path` to add the `redis-config` key to a file named `redis.conf`.
-The file path for the redis config, therefore, is `/redis-master/redis.conf`.
-This is where the image will look for the config file for the redis master.
+Recall that we left the `redis-config` key in the `example-redis-config` ConfigMap blank:
-Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that
-the configuration was correctly applied:
+```shell
+kubectl describe configmap/example-redis-config
+```
+
+You should see an empty `redis-config` key:
+
+```shell
+Name: example-redis-config
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+redis-config:
+```
+
+Use `kubectl exec` to enter the pod and run the `redis-cli` tool to check the current configuration:
```shell
kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It should show the default value of 0:
+
+```shell
+1) "maxmemory"
+2) "0"
+```
+
+Similarly, check `maxmemory-policy`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It should also yield its default value of `noeviction`:
+
+```shell
+1) "maxmemory-policy"
+2) "noeviction"
+```
+
+Now let's add some configuration values to the `example-redis-config` ConfigMap:
+
+{{< codenew file="pods/config/example-redis-config.yaml" >}}
+
+Apply the updated ConfigMap:
+
+```shell
+kubectl apply -f example-redis-config.yaml
+```
+
+Confirm that the ConfigMap was updated:
+
+```shell
+kubectl describe configmap/example-redis-config
+```
+
+You should see the configuration values we just added:
+
+```shell
+Name: example-redis-config
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+redis-config:
+----
+maxmemory 2mb
+maxmemory-policy allkeys-lru
+```
+
+Check the Redis Pod again using `redis-cli` via `kubectl exec` to see if the configuration was applied:
+
+```shell
+kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It remains at the default value of 0:
+
+```shell
+1) "maxmemory"
+2) "0"
+```
+
+Similarly, `maxmemory-policy` remains at the `noeviction` default setting:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It returns:
+
+```shell
+1) "maxmemory-policy"
+2) "noeviction"
+```
+
+The configuration values have not changed because the Pod needs to be restarted to pick up
+updated values from associated ConfigMaps. Let's delete and recreate the Pod:
+
+```shell
+kubectl delete pod redis
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
+```
+
+Now re-check the configuration values one last time:
+
+```shell
+kubectl exec -it redis -- redis-cli
+```
+
+Check `maxmemory`:
+
+```shell
+127.0.0.1:6379> CONFIG GET maxmemory
+```
+
+It should now return the updated value of 2097152:
+
+```shell
1) "maxmemory"
2) "2097152"
+```
+
+Similarly, `maxmemory-policy` has also been updated:
+
+```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
+```
+
+It now reflects the desired value of `allkeys-lru`:
+
+```shell
1) "maxmemory-policy"
2) "allkeys-lru"
```
-Delete the created pod:
+Clean up your work by deleting the created resources:
+
```shell
-kubectl delete pod redis
+kubectl delete pod/redis configmap/example-redis-config
```
-
-
## {{% heading "whatsnext" %}}
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
-
-
-
-
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
index 1d8a069984..d7687bc7b1 100644
--- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -37,7 +37,7 @@ weight: 10
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
-
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
+
ExternalName - Maps the Service to the contents of the externalName field (e.g. `foo.bar.example.com`), by returning a CNAME record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of kube-dns, or CoreDNS version 0.0.8 or higher.
Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another reason a Service may have no selector is that you are strictly using type: ExternalName.
diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
index 8368d24132..5b01913859 100644
--- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -11,7 +11,7 @@ external IP address.
## {{% heading "prerequisites" %}}
-* Install [kubectl](/docs/tasks/tools/install-kubectl/).
+* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index 0a84483716..36772253f6 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -104,7 +104,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 443/TCP 1m
- mongo ClusterIP 10.0.0.151 6379/TCP 8s
+ mongo ClusterIP 10.0.0.151 27017/TCP 8s
```
{{< note >}}
diff --git a/content/en/examples/pods/config/example-redis-config.yaml b/content/en/examples/pods/config/example-redis-config.yaml
new file mode 100644
index 0000000000..5b093b1213
--- /dev/null
+++ b/content/en/examples/pods/config/example-redis-config.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: example-redis-config
+data:
+ redis-config: |
+ maxmemory 2mb
+ maxmemory-policy allkeys-lru
diff --git a/content/es/docs/concepts/policy/_index.md b/content/es/docs/concepts/policy/_index.md
index d5ebfed4f5..d0f16fc4ad 100755
--- a/content/es/docs/concepts/policy/_index.md
+++ b/content/es/docs/concepts/policy/_index.md
@@ -1,6 +1,8 @@
---
title: Políticas
weight: 90
+description: >
+ Políticas configurables que se aplican a grupos de recursos.
---
La sección de Políticas describe las diferentes políticas configurables que se aplican a grupos de recursos:
diff --git a/content/es/docs/reference/glossary/limitrange.md b/content/es/docs/reference/glossary/limitrange.md
index 9311ddc557..686ed9c342 100755
--- a/content/es/docs/reference/glossary/limitrange.md
+++ b/content/es/docs/reference/glossary/limitrange.md
@@ -20,4 +20,4 @@ Proporciona restricciones para limitar el consumo de recursos por {{< glossary_t
-LimitRange limita la cantidad de objetos que se pueden crear por su tipo (vease {{< glossary_tooltip text="Workloads" term_id="workload" >}}), así como la cantidad de recursos informáticos que pueden ser requeridos/consumidos por {{< glossary_tooltip text="Pods" term_id="pod" >}} individuales en un espacio de nombres.
+LimitRange limita la cantidad de objetos que se pueden crear por tipo, así como la cantidad de recursos informáticos que pueden ser requeridos/consumidos por {{< glossary_tooltip text="Pods" term_id="pod" >}} o {{< glossary_tooltip text="Contenedores" term_id="container" >}} individuales en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.
diff --git a/content/fr/docs/setup/_index.md b/content/fr/docs/setup/_index.md
index 37161dbc0c..5424c448a7 100644
--- a/content/fr/docs/setup/_index.md
+++ b/content/fr/docs/setup/_index.md
@@ -35,7 +35,7 @@ Vous devriez choisir une solution locale si vous souhaitez :
* Essayer ou commencer à apprendre Kubernetes
* Développer et réaliser des tests sur des clusters locaux
-Choisissez une [solution locale] (/fr/docs/setup/pick-right-solution/#solutions-locales).
+Choisissez une [solution locale](/fr/docs/setup/pick-right-solution/#solutions-locales).
## Solutions hébergées
@@ -49,7 +49,7 @@ Vous devriez choisir une solution hébergée si vous :
* N'avez pas d'équipe de Site Reliability Engineering (SRE) dédiée, mais que vous souhaitez une haute disponibilité.
* Vous n'avez pas les ressources pour héberger et surveiller vos clusters
-Choisissez une [solution hébergée] (/fr/docs/setup/pick-right-solution/#solutions-hebergées).
+Choisissez une [solution hébergée](/fr/docs/setup/pick-right-solution/#solutions-hebergées).
## Solutions cloud clés en main
@@ -63,7 +63,7 @@ Vous devriez choisir une solution cloud clés en main si vous :
* Voulez plus de contrôle sur vos clusters que ne le permettent les solutions hébergées
* Voulez réaliser vous même un plus grand nombre d'operations
-Choisissez une [solution clé en main] (/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
+Choisissez une [solution clé en main](/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
## Solutions clés en main sur site
@@ -76,7 +76,7 @@ Vous devriez choisir une solution de cloud clé en main sur site si vous :
* Disposez d'une équipe SRE dédiée
* Avez les ressources pour héberger et surveiller vos clusters
-Choisissez une [solution clé en main sur site] (/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
+Choisissez une [solution clé en main sur site](/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
## Solutions personnalisées
@@ -84,11 +84,11 @@ Les solutions personnalisées vous offrent le maximum de liberté sur vos cluste
d'expertise. Ces solutions vont du bare-metal aux fournisseurs de cloud sur
différents systèmes d'exploitation.
-Choisissez une [solution personnalisée] (/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
+Choisissez une [solution personnalisée](/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
## {{% heading "whatsnext" %}}
-Allez à [Choisir la bonne solution] (/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.
+Allez à [Choisir la bonne solution](/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.
diff --git a/content/ja/docs/contribute/review/reviewing-prs.md b/content/ja/docs/contribute/review/reviewing-prs.md
new file mode 100644
index 0000000000..6659d46354
--- /dev/null
+++ b/content/ja/docs/contribute/review/reviewing-prs.md
@@ -0,0 +1,86 @@
+---
+title: プルリクエストのレビュー
+content_type: concept
+main_menu: true
+weight: 10
+---
+
+
+
+ドキュメントのプルリクエストは誰でもレビューすることができます。Kubernetesのwebsiteリポジトリで[pull requests](https://github.com/kubernetes/website/pulls)のセクションに移動し、open状態のプルリクエストを確認してください。
+
+ドキュメントのプルリクエストのレビューは、Kubernetesコミュニティに自分を知ってもらうためのよい方法の1つです。コードベースについて学んだり、他のコントリビューターとの信頼関係を築く助けともなるはずです。
+
+レビューを行う前には、以下のことを理解しておくとよいでしょう。
+
+- [コンテンツガイド](/docs/contribute/style/content-guide/)と[スタイルガイド](/docs/contribute/style/style-guide/)を読んで、有益なコメントを残せるようにする。
+- Kubernetesのドキュメントコミュニティにおける[役割と責任](/docs/contribute/participate/roles-and-responsibilities/)の違いを理解する。
+
+
+
+## はじめる前に
+
+レビューを始める前に、以下のことを心に留めてください。
+
+- [CNCFの行動規範](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)を読み、いかなる時にも行動規範にしたがって行動するようにする。
+- 礼儀正しく、思いやりを持ち、助け合う気持ちを持つ。
+- 変更点だけでなく、PRのポジティブな側面についてもコメントする。
+- 相手の気持ちに共感して、自分のレビューが相手にどのように受け取られるのかをよく意識する。
+- 相手の善意を前提として、疑問点を明確にする質問をする。
+- 経験を積んだコントリビューターの場合、コンテンツに大幅な変更が必要な新規のコントリビューターとペアを組んで作業に取り組むことを考える。
+
+## レビューのプロセス
+
+一般に、コンテンツや文体に対するプルリクエストは、英語でレビューを行います。
+
+1. [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls)に移動します。Kubernetesのウェブサイトとドキュメントに対するopen状態のプルリクエスト一覧が表示されます。
+
+2. open状態のPRに、以下に示すラベルを1つ以上使って絞り込みます。
+
+ - `cncf-cla: yes` (推奨): CLAにサインしていないコントリビューターが提出したPRはマージできません。詳しい情報は、[CLAの署名](/docs/contribute/new-content/overview/#sign-the-cla)を読んでください。
+ - `language/en` (推奨): 英語のPRだけに絞り込みます。
+ - `size/<size>`: 特定の大きさのPRだけに絞り込みます。レビューを始めたばかりの人は、小さなPRから始めてください。
+
+ さらに、PRがwork in progressとしてマークされていないことも確認してください。`work in progress`ラベルの付いたPRは、まだレビューの準備ができていない状態です。
+
+3. レビューするPRを選んだら、以下のことを行い、変更点について理解します。
+ - PRの説明を読み、行われた変更について理解し、関連するissueがあればそれも読みます。
+ - 他のレビュアのコメントがあれば読みます。
+ - **Files changed**タブをクリックし、変更されたファイルと行を確認します。
+ - **Conversation**タブの下にあるPRのbuild checkセクションまでスクロールし、**deploy/netlify**の行の**Details**リンクをクリックして、Netlifyのプレビュービルドで変更点をプレビューします。
+
+4. **Files changed**タブに移動してレビューを始めます。
+ 1. コメントしたい場合は行の横の`+`マークをクリックします。
+ 2. その行に関するコメントを書き、**Add single comment**(1つのコメントだけを残したい場合)または**Start a review**(複数のコメントを行いたい場合)のいずれかをクリックします。
+ 3. コメントをすべて書いたら、ページ上部の**Review changes**をクリックします。ここでは、レビューの要約を追加できます(コントリビューターにポジティブなコメントも書きましょう!)。必要に応じて、PRを承認したり、コメントしたり、変更をリクエストします。新しいコントリビューターの場合は**Comment**だけが行えます。
+
+## レビューのチェックリスト
+
+レビューするときは、最初に以下の点を確認してみてください。
+
+### 言語と文法
+
+- 言語や文法に明らかな間違いはないですか? もっとよい言い方はないですか?
+- もっと簡単な単語に置き換えられる複雑な単語や古い単語はありませんか?
+- 使われている単語や専門用語や言い回しで差別的ではない別の言葉に置き換えられるものはありませんか?
+- 言葉選びや大文字の使い方は[style guide](/docs/contribute/style/style-guide/)に従っていますか?
+- もっと短くしたり単純な文に書き換えられる長い文はありませんか?
+- 箇条書きやテーブルでもっとわかりやすく表現できる長いパラグラフはありませんか?
+
+### コンテンツ
+
+- 同様のコンテンツがKubernetesのサイト上のどこかに存在しませんか?
+- コンテンツが外部サイト、特定のベンダー、オープンソースではないドキュメントなどに過剰にリンクを張っていませんか?
+
+### ウェブサイト
+
+- PRはページ名、slug/alias、アンカーリンクの変更や削除をしていますか? その場合、このPRの変更の結果、リンク切れは発生しませんか? ページ名を変更してslugはそのままにするなど、他の選択肢はありませんか?
+- PRは新しいページを作成するものですか? その場合、次の点に注意してください。
+ - ページは正しい[page content type](/docs/contribute/style/page-content-types/)と関係するHugoのshortcodeを使用していますか?
+ - セクションの横のナビゲーション(または全体)にページは正しく表示されますか?
+ - ページは[Docs Home](/docs/home/)に一覧されますか?
+- Netlifyのプレビューで変更は確認できますか? 特にリスト、コードブロック、テーブル、備考、画像などに注意してください。
+
+### その他
+
+PRに関して誤字や空白などの小さな問題を指摘する場合は、コメントの前に`nit:`と書いてください。こうすることで、PRの作者は問題が深刻なものではないことが分かります。
diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
index e1a23cadfd..6f1ed4558e 100644
--- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
+++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md
@@ -96,7 +96,7 @@ spec:
* ネットワークを介したノードとPod間通信、LinuxマスターからのPod IPのポート80に向けて`curl`して、ウェブサーバーの応答をチェックします
* docker execまたはkubectl execを使用したPod間通信、Pod間(および複数のWindowsノードがある場合はホスト間)へのpingします
* ServiceからPodへの通信、Linuxマスターおよび個々のPodからの仮想Service IP(`kubectl get services`で表示される)に`curl`します
- * サービスディスカバリ、Kuberntesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
+ * サービスディスカバリ、Kubernetesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
* Inbound connectivity, `curl` the NodePort from the Linux master or machines outside of the cluster
* インバウンド接続、Linuxマスターまたはクラスター外のマシンからNodePortに`curl`します
* アウトバウンド接続、kubectl execを使用したPod内からの外部IPに`curl`します
diff --git a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
index 563ce2478e..be26708099 100644
--- a/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
+++ b/content/ja/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -134,28 +134,12 @@ weight: 100
1. 以下の内容で`example-ingress.yaml`を作成します。
- ```yaml
- apiVersion: networking.k8s.io/v1beta1
- kind: Ingress
- metadata:
- name: example-ingress
- annotations:
- nginx.ingress.kubernetes.io/rewrite-target: /$1
- spec:
- rules:
- - host: hello-world.info
- http:
- paths:
- - path: /
- backend:
- serviceName: web
- servicePort: 8080
- ```
+ {{< codenew file="service/networking/example-ingress.yaml" >}}
1. 次のコマンドを実行して、Ingressリソースを作成します。
```shell
- kubectl apply -f example-ingress.yaml
+ kubectl apply -f https://kubernetes.io/examples/service/networking/example-ingress.yaml
```
出力は次のようになります。
@@ -175,8 +159,8 @@ weight: 100
{{< /note >}}
```shell
- NAME HOSTS ADDRESS PORTS AGE
- example-ingress hello-world.info 172.17.0.15 80 38s
+ NAME CLASS HOSTS ADDRESS PORTS AGE
+ example-ingress hello-world.info 172.17.0.15 80 38s
```
1. 次の行を`/etc/hosts`ファイルの最後に書きます。
@@ -241,9 +225,12 @@ weight: 100
```yaml
- path: /v2
+ pathType: Prefix
backend:
- serviceName: web2
- servicePort: 8080
+ service:
+ name: web2
+ port:
+ number: 8080
```
1. 次のコマンドで変更を適用します。
@@ -300,6 +287,3 @@ weight: 100
* [Ingress](/ja/docs/concepts/services-networking/ingress/)についてさらに学ぶ。
* [Ingressコントローラー](/ja/docs/concepts/services-networking/ingress-controllers/)についてさらに学ぶ。
* [Service](/ja/docs/concepts/services-networking/service/)についてさらに学ぶ。
-
-
-
diff --git a/content/ja/docs/tasks/configmap-secret/_index.md b/content/ja/docs/tasks/configmap-secret/_index.md
new file mode 100755
index 0000000000..18a8018ce5
--- /dev/null
+++ b/content/ja/docs/tasks/configmap-secret/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Secretの管理"
+weight: 28
+description: Secretを使用した機密設定データの管理
+---
+
diff --git a/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
new file mode 100644
index 0000000000..fb8c89c1e3
--- /dev/null
+++ b/content/ja/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
@@ -0,0 +1,146 @@
+---
+title: kubectlを使用してSecretを管理する
+content_type: task
+weight: 10
+description: kubectlコマンドラインを使用してSecretを作成する
+---
+
+
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}}
+
+
+
+## Secretを作成する
+
+`Secret`はデータベースにアクセスするためにPodが必要とするユーザー資格情報を含めることができます。
+たとえば、データベース接続文字列はユーザー名とパスワードで構成されます。
+ユーザー名はローカルマシンの`./username.txt`に、パスワードは`./password.txt`に保存します。
+
+```shell
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+上記の2つのコマンドの`-n`フラグは、生成されたファイルにテキスト末尾の余分な改行文字が含まれないようにします。
+`kubectl`がファイルを読み取り、内容をbase64文字列にエンコードすると、余分な改行文字もエンコードされるため、これは重要です。
+
+`kubectl create secret`コマンドはこれらのファイルをSecretにパッケージ化し、APIサーバー上にオブジェクトを作成します。
+
+```shell
+kubectl create secret generic db-user-pass \
+ --from-file=./username.txt \
+ --from-file=./password.txt
+```
+
+出力は次のようになります:
+
+```
+secret/db-user-pass created
+```
+
+ファイル名がデフォルトのキー名になります。オプションで`--from-file=[key=]source`を使用してキー名を設定できます。たとえば:
+
+```shell
+kubectl create secret generic db-user-pass \
+ --from-file=username=./username.txt \
+ --from-file=password=./password.txt
+```
+
+`--from-file`に指定したファイルに含まれるパスワードの特殊文字をエスケープする必要はありません。
+
+また、`--from-literal=<key>=<value>`タグを使用してSecretデータを提供することもできます。
+このタグは、複数のキーと値のペアを提供するために複数回指定することができます。
+`$`、`\`、`*`、`=`、`!`などの特殊文字は[シェル](https://en.wikipedia.org/wiki/Shell_(computing))によって解釈されるため、エスケープを必要とすることに注意してください。
+ほとんどのシェルでは、パスワードをエスケープする最も簡単な方法は、シングルクォート(`'`)で囲むことです。
+たとえば、実際のパスワードが`S!B\*d$zDsb=`の場合、次のようにコマンドを実行します:
+
+```shell
+kubectl create secret generic dev-db-secret \
+ --from-literal=username=devuser \
+ --from-literal=password='S!B\*d$zDsb='
+```
+
+## Secretを検証する
+
+Secretが作成されたことを確認できます:
+
+```shell
+kubectl get secrets
+```
+
+出力は次のようになります:
+
+```
+NAME TYPE DATA AGE
+db-user-pass Opaque 2 51s
+```
+
+`Secret`の説明を参照できます:
+
+```shell
+kubectl describe secrets/db-user-pass
+```
+
+出力は次のようになります:
+
+```
+Name: db-user-pass
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Type: Opaque
+
+Data
+====
+password: 12 bytes
+username: 5 bytes
+```
+
+`kubectl get`と`kubectl describe`コマンドはデフォルトでは`Secret`の内容を表示しません。
+これは、`Secret`が不用意に他人にさらされたり、ターミナルログに保存されたりしないようにするためです。
+
+## Secretをデコードする {#decoding-secret}
+
+先ほど作成したSecretの内容を見るには、以下のコマンドを実行します:
+
+```shell
+kubectl get secret db-user-pass -o jsonpath='{.data}'
+```
+
+出力は次のようになります:
+
+```json
+{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="}
+```
+
+`password.txt`のデータをデコードします:
+
+```shell
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+
+出力は次のようになります:
+
+```
+1f2d1e2e67df
+```
+
+## クリーンアップ
+
+作成したSecretを削除するには次のコマンドを実行します:
+
+```shell
+kubectl delete secret db-user-pass
+```
+
+
+
+## {{% heading "whatsnext" %}}
+
+- [Secretのコンセプト](/ja/docs/concepts/configuration/secret/)を読む
+- [設定ファイルを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を知る
+- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を知る
diff --git a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
index 83f7b52c85..ac20713a8d 100644
--- a/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
+++ b/content/ja/docs/tasks/run-application/force-delete-stateful-set-pod.md
@@ -33,7 +33,7 @@ kubectl delete pods
上記がグレースフルターミネーションにつながるためには、`pod.Spec.TerminationGracePeriodSeconds`に0を指定しては**いけません**。`pod.Spec.TerminationGracePeriodSeconds`を0秒に設定することは安全ではなく、StatefulSet Podには強くお勧めできません。グレースフル削除は安全で、kubeletがapiserverから名前を削除する前にPodが[適切にシャットダウンする](/ja/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)ことを保証します。
-Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/docs/concepts/architecture/nodes/#node-condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
+Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/ja/docs/concepts/architecture/nodes/#condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
* (ユーザーまたは[Node Controller](/ja/docs/concepts/architecture/nodes/)によって)Nodeオブジェクトが削除されます。
* 応答していないNodeのkubeletが応答を開始し、Podを終了してapiserverからエントリーを削除します。
@@ -76,4 +76,3 @@ StatefulSet Podの強制削除は、常に慎重に、関連するリスクを
[StatefulSetのデバッグ](/docs/tasks/debug-application-cluster/debug-stateful-set/)の詳細
-
diff --git a/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
new file mode 100644
index 0000000000..445742e1d6
--- /dev/null
+++ b/content/ja/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -0,0 +1,403 @@
+---
+title: Horizontal Pod Autoscalerウォークスルー
+content_type: task
+weight: 100
+---
+
+
+
+Horizontal Pod Autoscalerは、Deployment、ReplicaSetまたはStatefulSetといったレプリケーションコントローラ内のPodの数を、観測されたCPU使用率(もしくはベータサポートの、アプリケーションによって提供されるその他のメトリクス)に基づいて自動的にスケールさせます。
+
+このドキュメントはphp-apacheサーバーに対しHorizontal Pod Autoscalerを有効化するという例に沿ってウォークスルーで説明していきます。Horizontal Pod Autoscalerの動作についてのより詳細な情報を知りたい場合は、[Horizontal Pod Autoscalerユーザーガイド](/docs/tasks/run-application/horizontal-pod-autoscale/)をご覧ください。
+
+## {{% heading "prerequisites" %}}
+
+この例ではバージョン1.2以上の動作するKubernetesクラスターおよびkubectlが必要です。
+[Metrics API](https://github.com/kubernetes/metrics)を介してメトリクスを提供するために、[Metrics server](https://github.com/kubernetes-sigs/metrics-server)によるモニタリングがクラスター内にデプロイされている必要があります。
+Horizontal Pod Autoscalerはメトリクスを収集するためにこのAPIを利用します。metrics-serverをデプロイする方法を知りたい場合は[metrics-server ドキュメント](https://github.com/kubernetes-sigs/metrics-server#deployment)をご覧ください。
+
+Horizontal Pod Autoscalerで複数のリソースメトリクスを利用するためには、バージョン1.6以上のKubernetesクラスターおよびkubectlが必要です。カスタムメトリクスを使えるようにするためには、あなたのクラスターがカスタムメトリクスAPIを提供するAPIサーバーと通信できる必要があります。
+最後に、Kubernetesオブジェクトと関係のないメトリクスを使うにはバージョン1.10以上のKubernetesクラスターおよびkubectlが必要で、さらにあなたのクラスターが外部メトリクスAPIを提供するAPIサーバーと通信できる必要があります。
+詳細については[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics)をご覧ください。
+
+
+
+## php-apacheの起動と公開
+
+Horizontal Pod Autoscalerのデモンストレーションのために、php-apacheイメージをもとにしたカスタムのDockerイメージを使います。
+このDockerfileは下記のようになっています。
+
+```dockerfile
+FROM php:5-apache
+COPY index.php /var/www/html/index.php
+RUN chmod a+rx index.php
+```
+
+これはCPU負荷の高い演算を行うindex.phpを定義しています。
+
+```php
+<?php
+  $x = 0.0001;
+  for ($i = 0; $i <= 1000000; $i++) {
+    $x += sqrt($x);
+  }
+  echo "OK!";
+?>
+```
+
+まず最初に、イメージを動かすDeploymentを起動し、Serviceとして公開しましょう。
+下記の設定を使います。
+
+{{< codenew file="application/php-apache.yaml" >}}
+
+以下のコマンドを実行してください。
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
+```
+
+```
+deployment.apps/php-apache created
+service/php-apache created
+```
+
+## Horizontal Pod Autoscalerを作成する
+
+サーバーが起動したら、[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale)を使ってautoscalerを作成しましょう。以下のコマンドで、最初のステップで作成したphp-apache deploymentによって制御されるPodレプリカ数を1から10の間に維持するHorizontal Pod Autoscalerを作成します。
+簡単に言うと、HPAは(Deploymentを通じて)レプリカ数を増減させ、すべてのPodにおける平均CPU使用率を50%(それぞれのPodは`kubectl run`で200 milli-coresを要求しているため、平均CPU使用率100 milli-coresを意味します)に保とうとします。
+このアルゴリズムについての詳細は[こちら](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)をご覧ください。
+
+```shell
+kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
+```
+
+```
+horizontalpodautoscaler.autoscaling/php-apache autoscaled
+```
+
+以下を実行して現在のAutoscalerの状況を確認できます。
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
+```
+
+現在はサーバーにリクエストを送っていないため、CPU使用率が0%になっていることに注意してください(`TARGET`カラムは対応するDeploymentによって制御される全てのPodの平均値を示しています。)。
+
+## 負荷の増加
+
+Autoscalerがどのように負荷の増加に反応するか見てみましょう。
+コンテナを作成し、クエリの無限ループをphp-apacheサーバーに送ってみます(これは別のターミナルで実行してください)。
+
+```shell
+kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
+```
+
+数分以内に、下記を実行することでCPU負荷が高まっていることを確認できます。
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
+```
+
+ここでは、CPU使用率はrequestの305%にまで高まっています。
+結果として、Deploymentはレプリカ数7にリサイズされました。
+
+```shell
+kubectl get deployment php-apache
+```
+
+```
+NAME READY UP-TO-DATE AVAILABLE AGE
+php-apache 7/7 7 7 19m
+```
+
+{{< note >}}
+レプリカ数が安定するまでは数分かかることがあります。負荷量は何らかの方法で制御されているわけではないので、最終的なレプリカ数はこの例とは異なる場合があります。
+{{< /note >}}
+
+## Stop the load
+
+Let's finish the example by stopping the user load.
+
+In the terminal where we created the container with the `busybox` image, type `<Ctrl> + C` to terminate the load generation.
+
+Then verify the result state (after a minute or so):
+
+```shell
+kubectl get hpa
+```
+
+```
+NAME         REFERENCE                     TARGET     MINPODS   MAXPODS   REPLICAS   AGE
+php-apache   Deployment/php-apache/scale   0% / 50%   1         10        1          11m
+```
+
+```shell
+kubectl get deployment php-apache
+```
+
+```
+NAME         READY   UP-TO-DATE   AVAILABLE   AGE
+php-apache   1/1     1            1           27m
+```
+
+Here CPU utilization dropped to 0, and so the HPA autoscaled the number of replicas back down to 1.
+
+{{< note >}}
+Autoscaling the replicas may take a few minutes.
+{{< /note >}}
+
+
+
+## Autoscaling on multiple metrics and custom metrics
+
+You can introduce additional metrics to use when autoscaling the `php-apache` Deployment by making use of the `autoscaling/v2beta2` API version.
+
+First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form:
+
+```shell
+kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
+```
+
+Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML that looks like this:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+ name: php-apache
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: php-apache
+ minReplicas: 1
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 50
+status:
+ observedGeneration: 1
+ lastScaleTime:
+ currentReplicas: 1
+ desiredReplicas: 1
+ currentMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ current:
+ averageUtilization: 0
+ averageValue: 0
+```
+
+Notice that the `targetCPUUtilizationPercentage` field has been replaced with an array called `metrics`.
+The CPU utilization metric is a *resource metric*, since it is represented as a percentage of a resource specified on Pod containers. Notice that you can specify other resource metrics besides CPU. By default, the only other supported resource metric is memory. These resources do not change names from cluster to cluster, and are always available as long as the `metrics.k8s.io` API is available.
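+
+For example, a `metrics` entry targeting average memory utilization instead of CPU could look like this (the 60% target is an illustrative value, not taken from the tutorial):
+
+```yaml
+- type: Resource
+  resource:
+    name: memory
+    target:
+      type: Utilization
+      averageUtilization: 60
+```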
+
+You can also specify resource metrics in terms of direct values, instead of as percentages of the requested value, by using a `target.type` of `AverageValue` instead of `Utilization` and setting the corresponding `target.averageValue` field instead of `target.averageUtilization`.
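+
+For example, to target a raw average of 500 milli-cores of CPU per Pod instead of a percentage of the request (illustrative value):
+
+```yaml
+- type: Resource
+  resource:
+    name: cpu
+    target:
+      type: AverageValue
+      averageValue: 500m
+```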
+
+There are two other types of metrics, Pod metrics and Object metrics, both of which are considered *custom metrics*. These metrics have cluster-specific names, and require a more advanced cluster monitoring setup to use.
+
+The first of these alternative metric types is *Pod metrics*. These metrics describe Pods, are averaged together across Pods, and are compared with a target value to determine the replica count.
+They work much like resource metrics, except that they *only* support a `target` type of `AverageValue`.
+
+Pod metrics are specified using a metric block like this:
+
+```yaml
+type: Pods
+pods:
+ metric:
+ name: packets-per-second
+ target:
+ type: AverageValue
+ averageValue: 1k
+```
+
+The second alternative metric type is *Object metrics*. Instead of describing Pods, these metrics describe a different object in the same namespace. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support `target` types of both `Value` and `AverageValue`. With `Value`, the target is compared directly to the metric returned from the API. With `AverageValue`, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target. The following example is the YAML representation of the `requests-per-second` metric:
+
+```yaml
+type: Object
+object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ target:
+ type: Value
+ value: 2k
+```
+
+If you provide multiple such metric blocks, the HorizontalPodAutoscaler considers each metric in turn.
+It calculates a proposed replica count for each metric, and then chooses the one with the highest replica count.
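+
+That "highest proposal wins" rule can be sketched as follows (an illustrative helper using the per-metric formula from the algorithm details page):
+
+```python
+import math
+
+def combined_desired_replicas(current_replicas, metrics):
+    # metrics is a list of (current_value, target_value) pairs; the HPA
+    # computes a proposed replica count per metric and takes the maximum.
+    return max(
+        math.ceil(current_replicas * current / target)
+        for current, target in metrics
+    )
+
+# CPU at 40% of a 50% target, but packets-per-second at 3x its target:
+print(combined_desired_replicas(2, [(40, 50), (3000, 1000)]))  # 6
+```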
+
+For example, if you had your monitoring system collecting metrics about network traffic, you could update the definition above using `kubectl edit` to look like this:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+ name: php-apache
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: php-apache
+ minReplicas: 1
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 50
+ - type: Pods
+ pods:
+ metric:
+ name: packets-per-second
+ target:
+ type: AverageValue
+ averageValue: 1k
+ - type: Object
+ object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ target:
+ type: Value
+ value: 10k
+status:
+ observedGeneration: 1
+ lastScaleTime:
+ currentReplicas: 1
+ desiredReplicas: 1
+ currentMetrics:
+ - type: Resource
+ resource:
+ name: cpu
+ current:
+ averageUtilization: 0
+ averageValue: 0
+ - type: Object
+ object:
+ metric:
+ name: requests-per-second
+ describedObject:
+ apiVersion: networking.k8s.io/v1beta1
+ kind: Ingress
+ name: main-route
+ current:
+ value: 10k
+```
+
+Then, your HorizontalPodAutoscaler would attempt to ensure that each Pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all Pods behind the main-route Ingress were serving a total of 10000 requests per second.
+
+### Autoscaling on more specific metrics
+
+Many metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called _labels_. For all non-resource metric types (Pod, Object, and the External type described below), you can specify an additional label selector that is passed to your metrics pipeline. For instance, if you collect an `http_requests` metric with a `verb` label, you can specify the following metric block to scale only on GET requests:
+
+```yaml
+type: Object
+object:
+ metric:
+ name: http_requests
+ selector: {matchLabels: {verb: GET}}
+```
+
+This selector uses the same syntax as the full Kubernetes label selectors. If the name and selector match multiple series, the monitoring pipeline determines how to collapse those series into a single value. The selector is additive, and cannot select metrics that describe objects that are **not** the target object (the target Pods in the case of the `Pods` type, and the described object in the case of the `Object` type).
+
+### Autoscaling on metrics not related to Kubernetes objects
+
+Applications running on Kubernetes may need to autoscale based on metrics that don't have an obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with *external metrics*.
+
+Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. As above, provide a `metric` block with a `name` and `selector`, and use the `External` metric type instead of `Object`.
+If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler.
+External metrics support both the `Value` and `AverageValue` target types, which function exactly the same as when you use the `Object` type.
+
+For example, if your application processes tasks from a hosted queue service, you could add the following section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks:
+
+```yaml
+- type: External
+ external:
+ metric:
+ name: queue_messages_ready
+      selector:
+        matchLabels:
+          queue: "worker_tasks"
+ target:
+ type: AverageValue
+ averageValue: 30
+```
+
+When possible, it's preferable to use custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.
+
+## Appendix: Horizontal Pod Autoscaler status conditions
+
+When using the `autoscaling/v2beta2` form of the HorizontalPodAutoscaler, you will be able to see *status conditions* set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether or not the HorizontalPodAutoscaler is able to scale, and whether or not it is currently restricted in any way.
+
+The conditions appear in the `status.conditions` field. To see the conditions affecting a HorizontalPodAutoscaler, you can use `kubectl describe hpa`:
+
+```shell
+kubectl describe hpa cm-test
+```
+
+```
+Name: cm-test
+Namespace: prom
+Labels: <none>
+Annotations: <none>
+CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
+Reference: ReplicationController/cm-test
+Metrics: ( current / target )
+ "http_requests" on pods: 66m / 500m
+Min replicas: 1
+Max replicas: 4
+ReplicationController pods: 1 current / 1 desired
+Conditions:
+ Type Status Reason Message
+ ---- ------ ------ -------
+ AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
+ ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests
+ ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
+Events:
+```
+
+For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, `AbleToScale`, indicates whether or not the HPA is able to fetch and update scales, as well as whether or not any backoff-related conditions would prevent scaling. The second, `ScalingActive`, indicates whether or not the HPA is enabled (i.e. the replica count of the target is not zero) and is able to calculate desired scales. When it is `False`, it generally indicates problems with fetching metrics. Finally, the last condition, `ScalingLimited`, indicates that the desired scale was capped by the maximum or minimum of the HorizontalPodAutoscaler. This is an indication that you may wish to raise or lower the minimum or maximum replica count constraints on your HorizontalPodAutoscaler.
+
+## Appendix: Quantities
+
+All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known as a {{< glossary_tooltip term_id="quantity" text="quantity">}}. For example, the quantity `10500m` would be written as `10.5` in decimal notation. The metrics APIs return whole numbers without a suffix when possible, and otherwise generally return quantities in milli-units. This means you might see your metric value fluctuate between `1` and `1500m`, or between `1` and `1.5` when written in decimal notation.
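+
+A sketch of that milli-suffix conversion (handling only the plain and `m` forms mentioned here; real quantities support additional suffixes):
+
+```python
+def parse_quantity(quantity):
+    # An "m" suffix means thousandths (milli-units); otherwise the
+    # value is a plain whole number.
+    if quantity.endswith("m"):
+        return int(quantity[:-1]) / 1000
+    return int(quantity)
+
+print(parse_quantity("10500m"))  # 10.5
+print(parse_quantity("1500m"))   # 1.5
+```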
+
+## Appendix: Other possible scenarios
+
+### Creating the autoscaler declaratively
+
+Instead of using the `kubectl autoscale` command to create a HorizontalPodAutoscaler imperatively, you can use the following manifest to create it declaratively:
+
+{{< codenew file="application/hpa/php-apache.yaml" >}}
+
+Then, create the autoscaler by executing the following command:
+
+```shell
+kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
+```
+
+```
+horizontalpodautoscaler.autoscaling/php-apache created
+```
diff --git a/content/ja/examples/application/php-apache.yaml b/content/ja/examples/application/php-apache.yaml
new file mode 100644
index 0000000000..e8e1b5aeb4
--- /dev/null
+++ b/content/ja/examples/application/php-apache.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: php-apache
+spec:
+ selector:
+ matchLabels:
+ run: php-apache
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ run: php-apache
+ spec:
+ containers:
+ - name: php-apache
+ image: k8s.gcr.io/hpa-example
+ ports:
+ - containerPort: 80
+ resources:
+ limits:
+ cpu: 500m
+ requests:
+ cpu: 200m
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: php-apache
+ labels:
+ run: php-apache
+spec:
+ ports:
+ - port: 80
+ selector:
+ run: php-apache
diff --git a/content/ja/examples/service/networking/example-ingress.yaml b/content/ja/examples/service/networking/example-ingress.yaml
new file mode 100644
index 0000000000..b309d13275
--- /dev/null
+++ b/content/ja/examples/service/networking/example-ingress.yaml
@@ -0,0 +1,18 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: example-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /$1
+spec:
+ rules:
+ - host: hello-world.info
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: web
+ port:
+ number: 8080
\ No newline at end of file
diff --git a/content/pt/_index.html b/content/pt/_index.html
index 9721bcdd37..628047e85c 100644
--- a/content/pt/_index.html
+++ b/content/pt/_index.html
@@ -47,7 +47,7 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
@@ -57,4 +57,4 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
{{< blocks/kubernetes-features >}}
-{{< blocks/case-studies >}}
\ No newline at end of file
+{{< blocks/case-studies >}}
diff --git a/content/pt/docs/tutorials/kubernetes-basics/_index.html b/content/pt/docs/tutorials/kubernetes-basics/_index.html
index 90e0592c18..aabad1f782 100644
--- a/content/pt/docs/tutorials/kubernetes-basics/_index.html
+++ b/content/pt/docs/tutorials/kubernetes-basics/_index.html
@@ -24,7 +24,7 @@ card:
Kubernetes Basics
-
This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.
+
This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.
Using the interactive tutorials, you can learn to:
Deploy a containerized application on a cluster.
- Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.
- The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
- To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
+ Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.
+ The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
+ To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.
Kubernetes is an open-source platform and is production-ready.
A Kubernetes cluster consists of two types of resources:
-
The Master coordinates the cluster
-
The Nodes are the workers that run applications
+
The Control Plane coordinates the cluster
+
The Nodes are the worker machines that run applications
@@ -75,22 +75,22 @@ weight: 10
-
The Master is responsible for managing the cluster. The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
-
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
+
The Control Plane is responsible for managing the cluster. The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
+
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
-
Masters manage the cluster and the nodes that are used to host the running applications.
+
Control Planes manage the cluster and the nodes that are used to host the running applications.
-
When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.
+
When you deploy applications on Kubernetes, you tell the control plane to start the application containers. The control plane schedules the containers to run on the cluster's nodes. The nodes communicate with the control plane using the Kubernetes API, which the control plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.
-
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.
+
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.
Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!
+
+
+
diff --git a/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html
new file mode 100644
index 0000000000..548892d678
--- /dev/null
+++ b/content/pt/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -0,0 +1,103 @@
+---
+title: Using a Service to Expose Your App
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Objectives
+
+
Learn about a Service in Kubernetes
+
Understand how labels and the LabelSelector object relate to a Service
+
Expose an application outside a Kubernetes cluster using a Service
+
+
+
+
+
Overview of Kubernetes Services
+
+
Kubernetes Pods are ephemeral. Pods in fact have a lifecycle. When a worker node dies, the Pods running on the node are also lost. A ReplicaSet can then dynamically drive the cluster back to the desired state via the creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are interchangeable; the front-end system should not care about backend replicas or even whether a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same node, so there needs to be a way of automatically reconciling changes among Pods so that your application continues to function.
+
+
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a label selector LabelSelector (see below for why you might want a Service without including a selector in the spec).
+
+
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
+
+
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
+
NodePort - Exposes the Service on the same port of each selected node in the cluster using NAT. Makes the Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
+
LoadBalancer - Creates an external load balancer in the current cloud provider (if supported) and assigns a fixed, external IP address to the Service. Superset of NodePort.
+
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints objects. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is that you are strictly using type: ExternalName.
+
+
+
+
Summary
+
+
Exposing Pods to external traffic
+
Load balancing traffic across multiple Pods
+
Using labels
+
+
+
+
A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.
+
+
+
+
+
+
+
+
Services and Labels
+
+
+
+
+
+
A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.
+
Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operations on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:
+
+
Designate objects for development, test, and production
+
Embed version tags
+
Classify an object using tags
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Labels can be attached to objects at creation time or later on. They can be modified at any time. Let's expose your application now using a Service and apply some labels.