Merge master into dev-1.21 to keep in sync - SIG-Release 1.21 Docs team 3/17/21

Fixed merge conflict with /docs/reference/command-line-tools-reference/kube-apiserver.md and preferred master
Rey Lejano 2021-03-17 21:29:04 -07:00
commit 9d56683e8b
60 changed files with 1705 additions and 698 deletions


@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
1. The kubelet on a node self-registers to the control plane
2. You, or another human user, manually add a Node object
2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest:
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the `metadata.name`
field of the Node. If the node is healthy (if all necessary services are running),
it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
field of the Node. If the node is healthy (i.e. all necessary services are running),
then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
{{< note >}}
@ -96,14 +96,14 @@ You can create and modify Node objects using
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
You can modify Node objects regardless of the setting of `--register-node`.
For example, you can set labels on an existing Node, or mark it unschedulable.
For example, you can set labels on an existing Node or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control
scheduling. For example, you can constrain a Pod to only be eligible to run on
a subset of the available nodes.
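A minimal sketch of the label/node-selector pairing described above (the node name `worker-1`, the `disktype=ssd` label, and the Pod name are placeholders, not part of this change):
```shell
# Label an existing node, then constrain a Pod to nodes carrying that label.
kubectl label nodes worker-1 disktype=ssd

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF
```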
Marking a node as unschedulable prevents the scheduler from placing new pods onto
that Node, but does not affect existing Pods on the Node. This is useful as a
that Node but does not affect existing Pods on the Node. This is useful as a
preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
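A minimal example of the cordon command this sentence introduces (the node name is a placeholder); `kubectl uncordon` reverses it:
```shell
kubectl cordon $NODENAME
```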
@ -179,14 +179,14 @@ The node condition is represented as a JSON object. For example, the following s
]
```
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server, and frees up their
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server and frees up their
names.
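A minimal sketch of the manual cleanup described above (the node name is a placeholder):
```shell
# Removing the Node object also removes the Pod objects bound to it from the API server.
kubectl delete node worker-1
```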
The node lifecycle controller automatically creates
@ -199,7 +199,7 @@ for more details.
### Capacity and Allocatable {#capacity}
Describes the resources available on the node: CPU, memory and the maximum
Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a
@ -225,18 +225,19 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with
the cloud provider's list of available machines. When running in a cloud
environment, whenever a node is unhealthy, the node controller asks the cloud
environment and whenever a node is unhealthy, the node controller asks the cloud
provider if the VM for that node is still available. If not, the node
controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
responsible for updating the NodeReady condition of NodeStatus to
ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
receiving heartbeats for some reason, for example due to the node being down), and then later evicting
all the pods from the node (using graceful termination) if the node continues
to be unreachable. (The default timeouts are 40s to start reporting
ConditionUnknown and 5m after that to start evicting pods.) The node controller
checks the state of each node every `--node-monitor-period` seconds.
responsible for:
- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
becomes unreachable, as the node controller stops receiving heartbeats for some
reason such as the node being down.
- Evicting all the pods from the node using graceful termination if
the node continues to be unreachable. The default timeouts are 40s to start
reporting ConditionUnknown and 5m after that to start evicting pods.
The node controller checks the state of each node every `--node-monitor-period` seconds.
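A hedged sketch of the timing flags referenced in this paragraph, shown with their documented defaults (a real deployment passes many more flags than these):
```shell
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
```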
#### Heartbeats
@ -252,13 +253,14 @@ of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
- The kubelet updates the `NodeStatus` either when there is change in status,
- The kubelet updates the `NodeStatus` either when there is change in status
or if there has been no update for a configured interval. The default interval
for `NodeStatus` updates is 5 minutes (much longer than the 40 second default
timeout for unreachable nodes).
for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
`NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.
`NodeStatus` updates. If the Lease update fails, the kubelet retries with
exponential backoff starting at 200 milliseconds and capped at 7 seconds.
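To observe the heartbeat objects described above on a running cluster, something like the following should work (the node name is a placeholder):
```shell
# Node Lease objects live in the kube-node-lease namespace.
kubectl get leases --namespace kube-node-lease

# spec.renewTime on a node's Lease advances with each heartbeat.
kubectl get lease worker-1 --namespace kube-node-lease -o yaml
```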
#### Reliability
@ -269,23 +271,24 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time. If the fraction of unhealthy nodes is at least
`--unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced:
if the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50) then evictions are
stopped, otherwise the eviction rate is reduced to
`--secondary-node-eviction-rate` (default 0.01) per second. The reason these
policies are implemented per availability zone is because one availability zone
might become partitioned from the master while the others remain connected. If
your cluster does not span multiple cloud provider availability zones, then
there is only one availability zone (the whole cluster).
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
(default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
(default 0.01) per second.
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then there is only one availability zone (i.e. the whole cluster).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy then the node controller evicts at
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there's some problem with master
case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
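A hedged sketch of the zone-eviction flags discussed above, shown with the default values quoted in this section:
```shell
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50
```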
The node controller is also responsible for evicting pods running on nodes with
@ -303,8 +306,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
Node objects track information about the Node's resource capacity (for example: the amount
of memory available, and the number of CPUs).
Node objects track information about the Node's resource capacity: for example, the amount
of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
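A quick sketch for inspecting what a node reports (the node name is a placeholder):
```shell
# The Capacity and Allocatable blocks appear in the describe output.
kubectl describe node worker-1

# Or read the raw status fields directly.
kubectl get node worker-1 -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'
```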
@ -338,7 +341,7 @@ for more information.
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases:
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
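A hedged sketch of the kubelet configuration that pairs with this feature gate; the durations and the config file path (typical for kubeadm-provisioned nodes) are assumptions, not part of this change:
```shell
# Append graceful-shutdown settings to the kubelet configuration, then restart the kubelet.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
EOF
sudo systemctl restart kubelet
```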


@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data.
contain binary data as base64-encoded strings.
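A minimal sketch showing both fields side by side (the ConfigMap name, key names, and values are placeholders):
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: "hello"          # plain UTF-8 text
binaryData:
  blob.bin: aGVsbG8gd29ybGQ= # arbitrary bytes, base64-encoded
EOF
```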
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).


@ -80,7 +80,7 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
is deprecated since v1.6. Users now can and should instead use the
is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
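A minimal sketch of a claim selecting a class by name (the claim name and the class name `fast` are placeholders for a StorageClass your administrator created):
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
EOF
```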


@ -34,8 +34,9 @@ Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod"
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
that run within the pod, and data is preserved across container restarts. When a
pod ceases to exist, the volume is destroyed.
that run within the pod, and data is preserved across container restarts. When a pod
ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the


@ -54,7 +54,7 @@ In this example:
{{< note >}}
The `.spec.selector.matchLabels` field is a map of {key,value} pairs.
A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`,
whose key field is "key" the operator is "In", and the values array contains only "value".
whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value".
All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the
This overview covers `kubectl` syntax, describes the command operations, and provides common examples.
For details about each command, including all the supported flags and subcommands, see the
[kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation.
For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/).
For installation instructions see [installing kubectl](/docs/tasks/tools/).
<!-- body -->


@ -181,8 +181,6 @@ that are not enabled by default:
- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
the allocated resources.
Extension points: `Score`.
- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
Extension points: `PreScore`, `Score`.
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
for the node.
Extension points: `Filter`.


@ -147,7 +147,7 @@ Start a Powershell session, set `$Version` to the desired version (ex: `$Version
{{% /tab %}}
{{< /tabs >}}
#### systemd {#containerd-systemd}
#### Using the `systemd` cgroup driver {#containerd-systemd}
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
@ -158,6 +158,12 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
SystemdCgroup = true
```
If you apply this change, make sure to restart containerd again:
```shell
sudo systemctl restart containerd
```
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node).
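A hedged sketch of that kubeadm step, assuming a fresh `kubeadm init`; the file name and the `kubernetesVersion` value are illustrative:
```shell
cat <<EOF >kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
sudo kubeadm init --config kubeadm-config.yaml
```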
@ -347,7 +353,7 @@ in sync.
### Docker
1. On each of your nodes, install Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server)
1. On each of your nodes, install Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server). You can find the latest validated version of Docker in this [dependencies](https://git.k8s.io/kubernetes/build/dependencies.yaml) file.
2. Configure the Docker daemon, in particular to use systemd for the management of the container's cgroups.


@ -23,7 +23,7 @@ kops is an automated provisioning system:
## {{% heading "prerequisites" %}}
* You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed.
* You must have [kubectl](/docs/tasks/tools/) installed.
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.


@ -160,7 +160,7 @@ kubelet and the control plane is supported, but the kubelet version may never ex
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/).
{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.


@ -221,7 +221,7 @@ On Windows, you can use the following settings to configure Services and load ba
#### IPv4/IPv6 dual-stack
You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
{{< note >}}
On Windows, using IPv6 with Kubernetes requires Windows Server, version 2004 (kernel version 10.0.19041.610) or later.


@ -1906,7 +1906,7 @@ filename | sha512 hash
- Promote SupportNodePidsLimit to GA to provide node to pod pid isolation
Promote SupportPodPidsLimit to GA to provide ability to limit pids per pod ([#94140](https://github.com/kubernetes/kubernetes/pull/94140), [@derekwaynecarr](https://github.com/derekwaynecarr)) [SIG Node and Testing]
- Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/&#35;transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](/docs/reference/using-api/server-side-apply/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Set CSIMigrationvSphere feature gates to beta.
Users should enable CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workload from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to vSphere CSI Driver.


@ -192,7 +192,7 @@ func main() {
}
```
If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](#accessing-the-api-from-within-a-pod).
If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod).
#### Python client


@ -34,7 +34,7 @@ If your cluster was deployed using the `kubeadm` tool, refer to
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
[install the latest version of `kubectl`](/docs/tasks/tools/).
### Manual deployments
@ -52,7 +52,7 @@ You should manually update the control plane following this sequence:
- cloud controller manager, if you use one
At this point you should
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
[install the latest version of `kubectl`](/docs/tasks/tools/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}


@ -170,36 +170,7 @@ controllerManager:
### Create certificate signing requests (CSR)
You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`.
If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved.
Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command.
The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
```shell
sudo kubeadm certs renew apiserver --use-api &
```
The output is similar to this:
```
[1] 2890
[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created
```
### Approve certificate signing requests (CSR)
If you set up an external signer, certificate signing requests (CSRs) are automatically approved.
Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. e.g.
```shell
kubectl certificate approve kubeadm-cert-kube-apiserver-ld526
```
The output is similar to this:
```shell
certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved
```
You can view a list of pending certificates with `kubectl get csr`.
See [Create CertificateSigningRequest](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA


@ -202,4 +202,7 @@ verify that the pods were scheduled by the desired schedulers.
```shell
kubectl get events
```
You can also use a [custom scheduler configuration](/docs/reference/scheduling/config/#multiple-profiles)
or a custom container image for the cluster's main scheduler by modifying its static pod manifest
on the relevant control plane nodes.
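As a hedged pointer, on kubeadm-style control plane nodes the manifest referred to above typically lives under the static Pod directory; the kubelet picks up edits automatically:
```shell
# The path is an assumption for kubeadm-provisioned control plane nodes.
sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
```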


@ -16,7 +16,7 @@ preview of what changes `apply` will make.
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}


@ -12,7 +12,7 @@ explains how those commands are organized and how to use them to manage live obj
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}


@ -13,7 +13,7 @@ This document explains how to define and manage objects using configuration file
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}


@ -29,7 +29,7 @@ kubectl apply -k <kustomization_directory>
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}


@ -19,7 +19,7 @@ Up to date information on this process can be found at the
* You must have a Kubernetes cluster with cluster DNS enabled.
* If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
* If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* [Install and setup kubectl](/docs/tasks/tools/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
* Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
* If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm.


@ -23,7 +23,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* [Install and setup kubectl](/docs/tasks/tools/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* The kubectl user must be bound to the *cluster-admin* role for it to install Service Catalog. To ensure that this is true, run the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-name>


@ -80,10 +80,10 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory; that's why the latter two methods work.
{{< /note >}}
In any case, after reloading your shell, kubectl completion should be working.


@ -23,7 +23,7 @@ The following methods exist for installing kubectl on macOS:
- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk)
- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk)
### Install kubectl binary with curl on macOS
@ -157,4 +157,4 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}


@ -37,7 +37,7 @@ profiles that give only the necessary privileges to your container processes.
In order to complete all steps in this tutorial, you must install
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and
[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples
[kubectl](/docs/tasks/tools/). This tutorial will show examples
with both alpha (pre-v1.19) and generally available seccomp functionality, so
make sure that your cluster is [configured
correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)


@ -15,10 +15,8 @@ This page provides a real world example of how to configure Redis using a Config
## {{% heading "objectives" %}}
* Create a `kustomization.yaml` file containing:
* a ConfigMap generator
* a Pod resource config using the ConfigMap
* Apply the directory by running `kubectl apply -k ./`
* Create a ConfigMap with Redis configuration values
* Create a Redis Pod that mounts and uses the created ConfigMap
* Verify that the configuration was correctly applied.
@ -38,82 +36,218 @@ This page provides a real world example of how to configure Redis using a Config
## Real World Example: Configuring Redis using a ConfigMap
You can follow the steps below to configure a Redis cache using data stored in a ConfigMap.
Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file:
{{< codenew file="pods/config/redis-config" >}}
First create a ConfigMap with an empty configuration block:
```shell
curl -OL https://k8s.io/examples/pods/config/redis-config
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-redis-config
  files:
  - redis-config
cat <<EOF >./example-redis-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: ""
EOF
```
Add the pod resource config to the `kustomization.yaml`:
Apply the ConfigMap created above, along with a Redis pod manifest:
```shell
kubectl apply -f example-redis-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
```
Examine the contents of the Redis pod manifest and note the following:
* A volume named `config` is created by `spec.volumes[1]`
* The `key` and `path` under `spec.volumes[1].items[0]` expose the `redis-config` key from the
`example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.
This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config`
ConfigMap above as `/redis-master/redis.conf` inside the Pod.
{{< codenew file="pods/config/redis-pod.yaml" >}}
```shell
curl -OL https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
Examine the created objects:
cat <<EOF >>./kustomization.yaml
resources:
- redis-pod.yaml
EOF
```shell
kubectl get pod/redis configmap/example-redis-config
```
Apply the kustomization directory to create both the ConfigMap and Pod objects:
You should see the following output:
```shell
kubectl apply -k .
```
Examine the created objects by
```shell
> kubectl get -k .
NAME DATA AGE
configmap/example-redis-config-dgh9dg555m 1 52s
NAME READY STATUS RESTARTS AGE
pod/redis 1/1 Running 0 52s
pod/redis 1/1 Running 0 8s
NAME DATA AGE
configmap/example-redis-config 1 14s
```
In the example, the config volume is mounted at `/redis-master`.
It uses `path` to add the `redis-config` key to a file named `redis.conf`.
The file path for the redis config, therefore, is `/redis-master/redis.conf`.
This is where the image will look for the config file for the redis master.
Recall that we left the `redis-config` key in the `example-redis-config` ConfigMap blank:
Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that
the configuration was correctly applied:
```shell
kubectl describe configmap/example-redis-config
```
You should see an empty `redis-config` key:
```shell
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
```
Use `kubectl exec` to enter the pod and run the `redis-cli` tool to check the current configuration:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It should show the default value of 0:
```shell
1) "maxmemory"
2) "0"
```
Similarly, check `maxmemory-policy`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
Which should also yield its default value of `noeviction`:
```shell
1) "maxmemory-policy"
2) "noeviction"
```
Now let's add some configuration values to the `example-redis-config` ConfigMap:
{{< codenew file="pods/config/example-redis-config.yaml" >}}
Apply the updated ConfigMap:
```shell
kubectl apply -f example-redis-config.yaml
```
Confirm that the ConfigMap was updated:
```shell
kubectl describe configmap/example-redis-config
```
You should see the configuration values we just added:
```shell
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
```
Check the Redis Pod again using `redis-cli` via `kubectl exec` to see if the configuration was applied:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It remains at the default value of 0:
```shell
1) "maxmemory"
2) "0"
```
Similarly, `maxmemory-policy` remains at the `noeviction` default setting:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
Returns:
```shell
1) "maxmemory-policy"
2) "noeviction"
```
The configuration values have not changed because the Pod needs to be restarted to grab updated
values from associated ConfigMaps. Let's delete and recreate the Pod:
```shell
kubectl delete pod redis
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
```
Now re-check the configuration values one last time:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It should now return the updated value of 2097152:
```shell
1) "maxmemory"
2) "2097152"
```
Similarly, `maxmemory-policy` has also been updated:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
It now reflects the desired value of `allkeys-lru`:
```shell
1) "maxmemory-policy"
2) "allkeys-lru"
```
Delete the created pod:
Clean up your work by deleting the created resources:
```shell
kubectl delete pod redis
kubectl delete pod/redis configmap/example-redis-config
```
## {{% heading "whatsnext" %}}
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).


@ -37,7 +37,7 @@ weight: 10
<li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>
<li><i>NodePort</i> - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>
<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
<li><i>ExternalName</i> - Exposes the Service using an arbitrary name (specified by <code>externalName</code> in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of <code>kube-dns</code>.</li>
<li><i>ExternalName</i> - Maps the Service to the contents of the <code>externalName</code> field (e.g. `foo.bar.example.com`), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of <code>kube-dns</code>, or CoreDNS version 0.0.8 or higher.</li>
</ul>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
<p>Additionally, note that there are some use cases with Services that involve not defining <code>selector</code> in the spec. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is you are strictly using <code>type: ExternalName</code>.</p>
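A minimal ExternalName sketch matching the description above (the Service name and the external DNS name are placeholders):
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: foo.bar.example.com
EOF
```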


@ -11,7 +11,7 @@ external IP address.
## {{% heading "prerequisites" %}}
* Install [kubectl](/docs/tasks/tools/install-kubectl/).
* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),


@ -104,7 +104,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 8s
mongo ClusterIP 10.0.0.151 <none> 27017/TCP 8s
```
{{< note >}}


@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru


@ -1,6 +1,8 @@
---
title: Policies
weight: 90
description: >
  Configurable policies that apply to groups of resources.
---
The Policies section describes the different configurable policies that apply to groups of resources:


@ -20,4 +20,4 @@ Provides constraints to limit resource consumption per {{< glossary_t
<!--more-->
LimitRange limits the number of objects that can be created by type (see {{< glossary_tooltip text="Workloads" term_id="workload" >}}), as well as the amount of compute resources that can be requested/consumed by individual {{< glossary_tooltip text="Pods" term_id="pod" >}} in a namespace.
LimitRange limits the number of objects that can be created per type, as well as the amount of compute resources that can be requested/consumed by individual {{< glossary_tooltip text="Pods" term_id="pod" >}} or {{< glossary_tooltip text="Containers" term_id="container" >}} in a {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.


@ -35,7 +35,7 @@ You should choose a local solution if you want to:
* Try out or start learning Kubernetes
* Develop and test on local clusters
Choose a [local solution] (/fr/docs/setup/pick-right-solution/#solutions-locales).
Choose a [local solution](/fr/docs/setup/pick-right-solution/#solutions-locales).
## Hosted solutions
@ -49,7 +49,7 @@ You should choose a hosted solution if you:
* Do not have a dedicated Site Reliability Engineering (SRE) team, but want high availability.
* Do not have the resources to host and monitor your clusters
Choose a [hosted solution] (/fr/docs/setup/pick-right-solution/#solutions-hebergées).
Choose a [hosted solution](/fr/docs/setup/pick-right-solution/#solutions-hebergées).
## Turnkey cloud solutions
@ -63,7 +63,7 @@ You should choose a turnkey cloud solution if you:
* Want more control over your clusters than hosted solutions allow
* Want to take on a larger share of the operations yourself
Choose a [turnkey solution] (/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
Choose a [turnkey solution](/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
## On-premises turnkey solutions
@ -76,7 +76,7 @@ You should choose an on-premises turnkey cloud solution if you:
* Have a dedicated SRE team
* Have the resources to host and monitor your clusters
Choose an [on-premises turnkey solution] (/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
Choose an [on-premises turnkey solution](/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
## Custom solutions
@ -84,11 +84,11 @@ Custom solutions give you the most freedom over your clusters but require the most
expertise. These solutions range from bare-metal to cloud providers running on
different operating systems.
Choose a [custom solution] (/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
Choose a [custom solution](/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
## {{% heading "whatsnext" %}}
Go to [Picking the Right Solution] (/fr/docs/setup/pick-right-solution/) for a complete list of solutions.
Go to [Picking the Right Solution](/fr/docs/setup/pick-right-solution/) for a complete list of solutions.


@ -0,0 +1,86 @@
---
title: Reviewing pull requests
content_type: concept
main_menu: true
weight: 10
---
<!-- overview -->
Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests.
Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes community. It helps you learn the code base and build trust with other contributors.
Before reviewing, it's a good idea to:
- Read the [content guide](/docs/contribute/style/content-guide/) and [style guide](/docs/contribute/style/style-guide/) so you can leave informed comments.
- Understand the different [roles and responsibilities](/docs/contribute/participate/roles-and-responsibilities/) in the Kubernetes documentation community.
<!-- body -->
## Before you begin
Before you start a review, keep the following in mind:
- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and ensure that you abide by it at all times.
- Be polite, considerate, and helpful.
- Comment on positive aspects of PRs as well as changes.
- Be empathetic and mindful of how your review may be received.
- Assume good intent and ask clarifying questions.
- Experienced contributors, consider pairing with new contributors whose work requires extensive changes.
## Review process
In general, review pull requests for content and style in English.
1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). You see a list of every open pull request against the Kubernetes website and docs.
2. Filter the open PRs using one or more of the following labels:
- `cncf-cla: yes` (Recommended): PRs submitted by contributors who have not signed the CLA cannot be merged. See [Signing the CLA](/docs/contribute/new-content/overview/#sign-the-cla) for more information.
- `language/en` (Recommended): Filters for English language PRs only.
- `size/<size>`: Filters for PRs of a certain size. If you're new, start with smaller PRs.
Additionally, ensure the PR isn't marked as a work in progress. PRs using the `work in progress` label are not ready for review yet.
3. Once you've selected a PR to review, understand the change by:
- Reading the PR description to understand the changes made, and reading any linked issues.
- Reading any comments by other reviewers.
- Clicking the **Files changed** tab to see the files and lines changed.
- Previewing the changes in the Netlify preview build by scrolling to the PR's build check section at the bottom of the **Conversation** tab and clicking the **deploy/netlify** line's **Details** link.
4. Go to the **Files changed** tab to start your review.
1. Click the `+` symbol beside the line you want to comment on.
2. Fill in any comments you have about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make).
3. When finished, click **Review changes** at the top of the page. Here, you can add a summary of your review (and leave some positive comments for the contributor!), and approve the PR, comment, or request changes as needed. New contributors should always choose **Comment**.
## Reviewing checklist
When reviewing, use the following as a starting point.
### Language and grammar
- Are there any obvious errors in language or grammar? Is there a better way to phrase something?
- Are there any complicated or archaic words which could be replaced with a simpler word?
- Are there any words, terms, or phrases in use which could be replaced with a non-discriminatory alternative?
- Does the word choice and its capitalization follow the [style guide](/docs/contribute/style/style-guide/)?
- Are there long sentences which could be shorter or less complex?
- Are there any long paragraphs which might work better as a list or table?
### Content
- Does similar content exist elsewhere on the Kubernetes site?
- Does the content excessively link to off-site, individual vendor, or non-open source documentation?
### Website
- Did this PR change or remove a page title, slug/alias, or anchor link? If so, are there broken links as a result of this PR? Is there another option, like changing the page title without changing the slug?
- Does the PR introduce a new page? If so:
- Does the page use the right [page content type](/docs/contribute/style/page-content-types/) and associated Hugo shortcodes?
- Does the page appear correctly in the section's side navigation (or at all)?
- Should the page appear on the [Docs Home](/docs/home/) listing?
- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes, and images.
### Other
For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical.


@ -96,7 +96,7 @@ spec:
* Node-to-Pod communication across the network: `curl` port 80 on your Pod IPs from the Linux master to check for a web server response
* Pod-to-Pod communication: ping between Pods (and across hosts, if you have more than one Windows node) using docker exec or kubectl exec
* Service-to-Pod communication: `curl` the virtual Service IP (seen under `kubectl get services`) from the Linux master and from individual Pods
* Service discovery: `curl` the Service name with the Kuberntes [default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)
* Service discovery: `curl` the Service name with the Kubernetes [default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)
* Inbound connectivity, `curl` the NodePort from the Linux master or machines outside of the cluster
* Inbound connectivity: `curl` the NodePort from the Linux master or machines outside of the cluster
* Outbound connectivity: `curl` external IPs from inside the Pod using kubectl exec


@ -134,28 +134,12 @@ weight: 100
1. Create `example-ingress.yaml` from the following file:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8080
```
{{< codenew file="service/networking/example-ingress.yaml" >}}
1. Create the Ingress resource by running the following command:
```shell
kubectl apply -f example-ingress.yaml
kubectl apply -f https://kubernetes.io/examples/service/networking/example-ingress.yaml
```
The output should be:
@ -175,8 +159,8 @@ weight: 100
{{< /note >}}
```shell
NAME HOSTS ADDRESS PORTS AGE
example-ingress hello-world.info 172.17.0.15 80 38s
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> hello-world.info 172.17.0.15 80 38s
```
1. Add the following line to the bottom of the `/etc/hosts` file:
@ -241,9 +225,12 @@ weight: 100
```yaml
- path: /v2
pathType: Prefix
backend:
serviceName: web2
servicePort: 8080
service:
name: web2
port:
number: 8080
```
1. Apply the changes:
@ -300,6 +287,3 @@ weight: 100
* Learn more about [Ingress](/ja/docs/concepts/services-networking/ingress/).
* Learn more about [Ingress controllers](/ja/docs/concepts/services-networking/ingress-controllers/).
* Learn more about [Services](/ja/docs/concepts/services-networking/service/).


@ -0,0 +1,6 @@
---
title: "Secretの管理"
weight: 28
description: Secretを使用した機密設定データの管理
---


@ -0,0 +1,146 @@
---
title: Managing Secrets using kubectl
content_type: task
weight: 10
description: Creating Secret objects using the kubectl command line.
---
<!-- overview -->
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## Create a Secret
A `Secret` can contain user credentials required by Pods to access a database.
For example, a database connection string consists of a username and password.
You can store the username in a file `./username.txt` and the password in a file `./password.txt` on your local machine.
```shell
echo -n 'admin' > ./username.txt
echo -n '1f2d1e2e67df' > ./password.txt
```
The `-n` flag in the above two commands ensures that the generated files do not have an extra newline character at the end of the text.
This is important because when `kubectl` reads a file and encodes the content into a base64 string, the extra newline character gets encoded too.
The `kubectl create secret` command packages these files into a Secret and creates the object on the API server.
```shell
kubectl create secret generic db-user-pass \
--from-file=./username.txt \
--from-file=./password.txt
```
The output is similar to:
```
secret/db-user-pass created
```
The default key name is the file name. You can optionally set the key name using `--from-file=[key=]source`. For example:
```shell
kubectl create secret generic db-user-pass \
--from-file=username=./username.txt \
--from-file=password=./password.txt
```
You do not need to escape special characters in password strings that you include in a file (`--from-file`).
You can also provide Secret data using the `--from-literal=<key>=<value>` tag.
This tag can be specified more than once to provide multiple key-value pairs.
Note that special characters such as `$`, `\`, `*`, `=`, and `!` are interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping.
In most shells, the easiest way to escape a password is to surround it with single quotes (`'`).
For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way:
```shell
kubectl create secret generic dev-db-secret \
--from-literal=username=devuser \
--from-literal=password='S!B\*d$zDsb='
```
## Verify the Secret
Check that the Secret was created:
```shell
kubectl get secrets
```
The output is similar to:
```
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
```
You can view a description of the `Secret`:
```shell
kubectl describe secrets/db-user-pass
```
The output is similar to:
```
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 12 bytes
username: 5 bytes
```
The `kubectl get` and `kubectl describe` commands avoid showing the contents of a `Secret` by default.
This is to protect the `Secret` from being exposed accidentally, or from being stored in a terminal log.
## Decoding the Secret {#decoding-secret}
To view the contents of the Secret you created, run the following command:
```shell
kubectl get secret db-user-pass -o jsonpath='{.data}'
```
The output is similar to:
```json
{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="}
```
Decode the `password.txt` data:
```shell
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
```
The output is similar to:
```
1f2d1e2e67df
```
## Clean Up
To delete the Secret you have created, run:
```shell
kubectl delete secret db-user-pass
```
<!-- discussion -->
## {{% heading "whatsnext" %}}
- Read more about the [Secret concept](/ja/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets using config files](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)


@ -33,7 +33,7 @@ kubectl delete pods <pod>
For the above to lead to graceful termination, the Pod **must not** specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting `pod.Spec.TerminationGracePeriodSeconds` to 0 seconds is unsafe and strongly discouraged for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/ja/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods) before the kubelet deletes the name from the apiserver.
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the `Terminating` or `Unknown` state after a [timeout](/docs/concepts/architecture/nodes/#node-condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the `Terminating` or `Unknown` state after a [timeout](/ja/docs/concepts/architecture/nodes/#condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
* The Node object is deleted (either by you, or by the [Node Controller](/ja/docs/concepts/architecture/nodes/)).
* The kubelet on the unresponsive Node starts responding, kills the Pod, and removes the entry from the apiserver.
@ -76,4 +76,3 @@ Always perform force deletion of StatefulSet Pods carefully, with full awareness of the associated risks
Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/)


@ -0,0 +1,403 @@
---
title: Horizontal Pod Autoscaler Walkthrough
content_type: task
weight: 100
---
<!-- overview -->
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with beta support, on some other application-provided metrics).
This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server. For more information on how Horizontal Pod Autoscaler behaves, see the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).
## {{% heading "prerequisites" %}}
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later.
[Metrics server](https://github.com/kubernetes-sigs/metrics-server) monitoring needs to be deployed in the cluster to provide metrics through the [Metrics API](https://github.com/kubernetes/metrics), which Horizontal Pod Autoscaler uses to collect metrics.
To learn how to deploy the metrics-server, see the [metrics-server documentation](https://github.com/kubernetes-sigs/metrics-server#deployment).
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom metrics API.
Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external metrics API.
See the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details.
<!-- steps -->
## Run and expose php-apache server
To demonstrate Horizontal Pod Autoscaler we will use a custom docker image based on the php-apache image.
The Dockerfile has the following content:
```dockerfile
FROM php:5-apache
COPY index.php /var/www/html/index.php
RUN chmod a+rx index.php
```
It defines an index.php page which performs some CPU intensive computations:
```php
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
$x += sqrt($x);
}
echo "OK!";
?>
```
First, we will start a deployment running the image and expose it as a service
using the following configuration:
{{< codenew file="application/php-apache.yaml" >}}
Run the following command:
```shell
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
```
```
deployment.apps/php-apache created
service/php-apache created
```
## Create Horizontal Pod Autoscaler
Now that the server is running, we will create the autoscaler using [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale). The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods controlled by the php-apache deployment we created in the first step.
Roughly speaking, HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50% (since each pod requests 200 milli-cores via `kubectl run`, this means average CPU usage of 100 milli-cores).
See [here](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) for more details on the algorithm.
```shell
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```
```
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
We may check the current status of the autoscaler by running:
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server (the `TARGET` column shows the average across all the Pods controlled by the corresponding deployment).
## Increase load
Now, we will see how the autoscaler reacts to increased load.
We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal):
```shell
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
Within a few minutes, we should see the higher CPU load by executing:
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
```
Here, CPU consumption has increased to 305% of the request.
As a result, the deployment was resized to 7 replicas:
```shell
kubectl get deployment php-apache
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 7/7 7 7 19m
```
{{< note >}}
It may take a few minutes to stabilize the number of replicas. Since the amount of load is not controlled in any way, the final number of replicas may differ from this example.
{{< /note >}}
## Stop load
We will finish our example by stopping the user load.
In the terminal where we created the container with the `busybox` image, terminate the load generation by typing `<Ctrl> + C`.
Then verify the result state (after a minute or so):
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
```
```shell
kubectl get deployment php-apache
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 1/1 1 1 27m
```
Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas back down to 1.
{{< note >}}
Autoscaling the replicas may take a few minutes.
{{< /note >}}
<!-- discussion -->
## Autoscaling on multiple metrics and custom metrics
You can introduce additional metrics to use when autoscaling the `php-apache` Deployment by making use of the `autoscaling/v2beta2` API version.
First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form:
```shell
kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
```
Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML which looks like this:
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
status:
observedGeneration: 1
lastScaleTime: <some-time>
currentReplicas: 1
desiredReplicas: 1
currentMetrics:
- type: Resource
resource:
name: cpu
current:
averageUtilization: 0
averageValue: 0
```
Notice that the `targetCPUUtilizationPercentage` field has been replaced with an array called `metrics`.
The CPU utilization metric is a *resource metric*, since it is represented as a percentage of a resource specified on Pod containers. You can specify resource metrics other than CPU; by default, memory is the only other supported resource metric. These resources do not change names from cluster to cluster and should always be available, as long as the `metrics.k8s.io` API is available.
You can also specify resource metrics as direct values, instead of as percentages of the requested value, by using a `target.type` of `AverageValue` instead of `Utilization` and setting the corresponding `target.averageValue` field instead of `target.averageUtilization`.
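For illustration only (this block is not part of the walkthrough, and the `500Mi` figure is an arbitrary assumption), a memory-based resource metric expressed as a direct average value rather than a utilization percentage might look like this:
```yaml
type: Resource
resource:
  name: memory
  target:
    type: AverageValue
    averageValue: 500Mi
```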
There are two other types of metrics, both of which are considered *custom metrics*: Pod metrics and Object metrics. These metrics may have names that are cluster specific, and they require a more advanced cluster monitoring setup.
The first of these alternative metric types is *Pod metrics*. These metrics describe Pods, are averaged together across Pods, and are compared with a target value to determine the replica count.
They work much like resource metrics, except that they *only* support a `target` type of `AverageValue`.
Pod metrics are specified using a metric block like this:
```yaml
type: Pods
pods:
metric:
name: packets-per-second
target:
type: AverageValue
averageValue: 1k
```
The second alternative metric type is *Object metrics*. Instead of describing Pods, these metrics describe a different object in the same namespace. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support `target` types of both `Value` and `AverageValue`. With `Value`, the target is compared directly to the returned metric from the API. With `AverageValue`, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target. The following example is the YAML representation of the `requests-per-second` metric:
```yaml
type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
type: Value
value: 2k
```
If you provide multiple such metric blocks, the HorizontalPodAutoscaler considers each metric in turn.
The HorizontalPodAutoscaler calculates a proposed replica count for each metric, and then adopts the largest of those replica counts.
For example, if you had a monitoring system that collects metrics about network traffic, you could update the definition above using `kubectl edit` to look like this:
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Pods
pods:
metric:
name: packets-per-second
target:
type: AverageValue
averageValue: 1k
- type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
type: Value
value: 10k
status:
observedGeneration: 1
lastScaleTime: <some-time>
currentReplicas: 1
desiredReplicas: 1
currentMetrics:
- type: Resource
resource:
name: cpu
current:
averageUtilization: 0
averageValue: 0
- type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
current:
value: 10k
```
Then, the HorizontalPodAutoscaler would attempt to ensure that each Pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all Pods behind the main-route Ingress were serving a total of 10000 requests per second.
### Autoscaling on more specific metrics
Many metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called _labels_. For all non-resource metric types (Pod, Object, and External, described below), you can specify an additional label selector that is passed to your metrics pipeline. For instance, if you collect a metric `http_requests` with a `verb` label, you can specify the following metric block to scale only on GET requests:
```yaml
type: Object
object:
metric:
name: http_requests
selector: {matchLabels: {verb: GET}}
```
This selector uses the same syntax as full Kubernetes label selectors. If the name and selector match multiple series, the monitoring pipeline determines how to collapse them into a single value. The selector is additive; it cannot select metrics that describe objects that are **not** the target object (the target Pods in the case of the `Pods` type, and the described object in the case of the `Object` type).
### Autoscaling on metrics not related to Kubernetes objects
Applications running on Kubernetes may need to autoscale based on metrics that have no obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with *external metrics*.
Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics let you autoscale your cluster based on any metric available in your monitoring system. Provide a `metric` block with a `name` and `selector`, as above, and use the `External` metric type instead of `Object`.
If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler.
External metrics support both the `Value` and `AverageValue` target types, which function exactly the same as when you use the `Object` type.
For example, if your application processes tasks from a hosted queue service, you could add the following section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks:
```yaml
- type: External
external:
metric:
name: queue_messages_ready
      selector: {matchLabels: {queue: "worker_tasks"}}
target:
type: AverageValue
averageValue: 30
```
When possible, it's preferable to use custom metric target types instead of external metrics, because it's easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.
## Appendix: Horizontal Pod Autoscaler status conditions
When using the `autoscaling/v2beta2` form of the HorizontalPodAutoscaler, you can see *status conditions* set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether the HorizontalPodAutoscaler is able to scale, and whether it is currently restricted in any way.
The conditions appear in the `status.conditions` field. To see the conditions affecting a HorizontalPodAutoscaler, you can use `kubectl describe hpa`:
```shell
kubectl describe hpa cm-test
```
```
Name: cm-test
Namespace: prom
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
Reference: ReplicationController/cm-test
Metrics: ( current / target )
"http_requests" on pods: 66m / 500m
Min replicas: 1
Max replicas: 4
ReplicationController pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests
ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
Events:
```
For this HorizontalPodAutoscaler, you can see several conditions in a healthy state. The first, `AbleToScale`, indicates whether or not the HPA is able to fetch and update scales, as well as whether or not any backoff-related conditions would prevent scaling. The second, `ScalingActive`, indicates whether or not the HPA is enabled (that is, the replica count of the target is not zero) and is able to calculate desired scales. When it is `False`, it generally indicates problems with fetching metrics. Finally, the last condition, `ScalingLimited`, indicates that the desired scale was capped by the maximum or minimum of the HorizontalPodAutoscaler. This is an indication that you may wish to raise or lower the minimum or maximum replica count constraints on your HorizontalPodAutoscaler.
## Appendix: Quantities
All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known as a {{< glossary_tooltip term_id="quantity" text="quantity">}}. For example, the quantity `10500m` would be written as `10.5` in decimal notation. The metrics APIs will return whole numbers without a suffix when possible, and will generally return quantities in milli-units otherwise. This means you might see your metric value fluctuate between `1` and `1500m`, or `1` and `1.5` when written in decimal notation.
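As a quick illustration (the field shown is just an example), these two targets express the same quantity, once in milli-units and once in decimal notation:
```yaml
# two equivalent ways of writing the same quantity
- averageValue: 1500m   # milli-unit notation
- averageValue: "1.5"   # decimal notation, quoted so YAML keeps it as a string
```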
## Appendix: Other possible scenarios
### Creating the autoscaler declaratively
Instead of using the `kubectl autoscale` command to create a HorizontalPodAutoscaler imperatively, you can use the following file to create it declaratively:
{{< codenew file="application/hpa/php-apache.yaml" >}}
Run the following command to create the autoscaler:
```shell
kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
```
```
horizontalpodautoscaler.autoscaling/php-apache created
```

View File

@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-apache
spec:
selector:
matchLabels:
run: php-apache
replicas: 1
template:
metadata:
labels:
run: php-apache
spec:
containers:
- name: php-apache
image: k8s.gcr.io/hpa-example
ports:
- containerPort: 80
resources:
limits:
cpu: 500m
requests:
cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
name: php-apache
labels:
run: php-apache
spec:
ports:
- port: 80
selector:
run: php-apache

View File

@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080

View File

@ -47,7 +47,7 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
<br>
<br>
<br>
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2019" button id="desktopKCButton">KubeCon in Shanghai, June 24-26, 2019</a>
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2019" button id="desktopKCButton">KubeCon in Shanghai, June 24-26, 2019</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@ -57,4 +57,4 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
{{< blocks/case-studies >}}

View File

@ -24,7 +24,7 @@ card:
<div class="row">
<div class="col-md-9">
<h2>Kubernetes Basics</h2>
<p>This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.</p>
<p>This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.</p>
<p>Using the interactive tutorials, you can learn to:</p>
<ul>
<li>Deploy a containerized application on a cluster.</li>
@ -46,7 +46,7 @@ card:
</div>
<br>
<div id="basics-modules" class="content__modules">
<h2>Kubernetes Basics Modules</h2>
<div class="row">
@ -82,9 +82,9 @@ card:
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<a href="/pt/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Expose Your App Publicly</h5></a>
<a href="/pt/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Expose Your App Publicly</h5></a>
</div>
</div>
</div>

View File

@ -29,16 +29,16 @@ weight: 10
<div class="col-md-8">
<h3>Kubernetes Clusters</h3>
<p>
<b>Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.</b>
The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be put into containers. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
<b>Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.</b>
The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be packaged into containers. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
<b>Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.</b>
Kubernetes is an open-source platform and is production-ready.
</p>
<p>A Kubernetes cluster consists of two types of resources:
<ul>
<li>The <b>Master</b> coordinates the cluster</li>
<li>The <b>Nodes</b> are the workers that run applications</li>
<li>The <b>Control Plane</b> coordinates the cluster</li>
<li>The <b>Nodes</b> are the worker nodes that run applications</li>
</ul>
</p>
</div>
@ -75,22 +75,22 @@ weight: 10
<div class="row">
<div class="col-md-8">
<p><b>The Master is responsible for managing the cluster.</b> The Master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.</p>
<p><b>A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster.</b> Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes Master. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.</p>
<p><b>The Control Plane is responsible for managing the cluster.</b> The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.</p>
<p><b>A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster.</b> Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes Control Plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_fill">
<p><i>Masters manage the cluster and the nodes that are used to host the running applications.</i></p>
<p><i>Control Planes manage the cluster and the nodes that are used to host the running applications.</i></p>
</div>
</div>
</div>
<div class="row">
<div class="col-md-8">
<p>When you deploy applications on Kubernetes, you tell the Master to start the application containers. The Master schedules the containers to run on the cluster's nodes. <b>The nodes communicate with the Master using the <a href="/docs/concepts/overview/kubernetes-api/">Kubernetes API</a></b>, which the Master exposes. End users can also use the Kubernetes API directly to interact with the cluster.</p>
<p>When you deploy applications on Kubernetes, you tell the Control Plane to start the application containers. The Control Plane schedules the containers to run on the cluster's nodes. <b>The nodes communicate with the Control Plane using the <a href="/docs/concepts/overview/kubernetes-api/">Kubernetes API</a></b>, which the Control Plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.</p>
<p>A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.</p>
<p>A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube command line interface (CLI) provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.</p>
<p>Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!</p>

View File

@ -0,0 +1,4 @@
---
title: Expose Your App Publicly
weight: 40
---

View File

@ -0,0 +1,38 @@
---
title: Interactive Tutorial - Exposing Your App
weight: 20
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__alert">
To interact with the terminal, please use the desktop/tablet version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Terminal Kubernetes Bootcamp" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/scale/scale-intro/" role="button">Continue to Module 5<span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -0,0 +1,103 @@
---
title: Using a Service to Expose Your App
weight: 10
---
<!DOCTYPE html>
<html lang="en">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Learn about a Service in Kubernetes</li>
<li>Understand how <code>labels</code> and <code>LabelSelector</code> objects relate to a Service</li>
<li>Expose an application outside a Kubernetes cluster using a Service</li>
</ul>
</div>
<div class="col-md-8">
<h3>Overview of Kubernetes Services</h3>
<p>Kubernetes <a href="/docs/concepts/workloads/pods/">Pods</a> are mortal. Pods in fact have a <a href="/docs/concepts/workloads/pods/pod-lifecycle/">lifecycle</a>. When a worker node dies, the Pods running on the node are also lost. A <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> might then dynamically drive the cluster back to the desired state by creating new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are interchangeable; the front-end system should not care about backend replicas or even whether a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.</p>
<p>A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable loose coupling between dependent Pods. A Service is defined using YAML <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(preferred)</a> or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a label selector, <i>LabelSelector</i> (see below for why you might want a Service without including a <code>selector</code> in the <code>spec</code>).</p>
<p>Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a <code>type</code> in the <code>ServiceSpec</code>:</p>
<ul>
<li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>
<li><i>NodePort</i> - Exposes the Service on the same port of each selected node in the cluster using NAT. Makes the Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>
<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
<li><i>ExternalName</i> - Exposes the Service using an arbitrary name (specified by <code>externalName</code> in the <code>spec</code>) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of <code>kube-dns</code>.</li>
</ul>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
<p>Additionally, note that there are some use cases with Services that involve not defining a <code>selector</code> in the <code>spec</code>. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possible reason there may be no selector is that you are strictly using <code>type: ExternalName</code>.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>Summary</h3>
<ul>
<li>Exposing Pods to external traffic</li>
<li>Load balancing traffic across multiple Pods</li>
<li>Using <code>labels</code></li>
</ul>
</div>
<div class="content__box content__box_fill">
<p><i>A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.</i></p>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<h3>Services and Labels</h3>
</div>
</div>
<div class="row">
<div class="col-md-8">
<p>A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.</p>
<p>Services match a set of Pods using <a href="/docs/concepts/overview/working-with-objects/labels">labels and selectors</a>, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:</p>
<ul>
<li>Designate objects for development, test, and production</li>
<li>Embed version tags</li>
<li>Classify an object using tags</li>
</ul>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg"></p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<p>Labels can be attached to objects at creation time or later on. They can be modified at any time. Let's expose our application now using a Service and apply some labels.</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/expose/expose-interactive/" role="button">Start Interactive Tutorial<span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -462,7 +462,7 @@ nginxsecret kubernetes.io/tls 2 1m
<!--
Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
-->
Now modify your nginx replicas to start an HTTPS server using the certificate in the secret, and the Servcie, to expose both ports (80 and 443):
Now modify your nginx replicas to start an HTTPS server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
{{< codenew file="service/networking/nginx-secure-app.yaml" >}}

View File

@ -112,13 +112,13 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
is deprecated since v1.6. Users now can and should instead use the
is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
-->
Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`.
Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation; however, that annotation has been deprecated since v1.6.
Before Kubernetes v1.6, this was done via the `volume.beta.kubernetes.io/storage-class` annotation; however, that annotation has been deprecated since v1.9.
Users now can and should instead use the `storageClassName` field of the `PersistentVolumeClaim` object.
The value of this field must match the name of a `StorageClass` configured by the administrator (see [below](#enabling-dynamic-provisioning)).

View File

@ -56,13 +56,14 @@ can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
that run within the pod, and data is preserved across container restarts. When a
pod ceases to exist, the volume is destroyed.
pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.
-->
Kubernetes supports many types of volumes.
A {{< glossary_tooltip term_id="pod" text="Pod" >}} can use any number of volume types simultaneously.
Ephemeral volume types have the same lifetime as their Pod, but persistent volumes can outlive a Pod.
Consequently, a volume outlives any containers that run within the Pod, and data is preserved across container restarts.
When a Pod ceases to exist, the volume also ceases to exist.
When a Pod ceases to exist, ephemeral volumes also cease to exist; persistent volumes, however, continue to exist.
<!--
At its core, a volume is just a directory, possibly with some data in it, which
@ -179,6 +180,11 @@ spec:
fsType: ext4
```
<!--
If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
-->
If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
<!--
#### AWS EBS CSI migration
-->
@ -355,14 +361,14 @@ spec:
<!--
The `CSIMigration` feature for Cinder, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `cinder.csi.openstack.org` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Openstack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md)
Storage Interface (CSI) Driver. In order to use this feature, the [OpenStack Cinder CSI
Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationOpenStack`
beta features must be enabled.
-->
When the `CSIMigration` feature for Cinder is enabled, all plugin operations are redirected from the existing in-tree plugin to the
`cinder.csi.openstack.org` Container Storage Interface (CSI) driver.
In order to use this feature, the [Openstack Cinder CSI Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) must be installed on the cluster,
In order to use this feature, the [OpenStack Cinder CSI Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) must be installed on the cluster,
and the `CSIMigration` and `CSIMigrationOpenStack` beta features must be enabled.
### configMap

View File

@ -2,7 +2,7 @@
title: API Group
id: api-group
date: 2019-09-02
full_link: /zh/docs/concepts/overview/kubernetes-api/#api-groups
full_link: /zh/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
A set of related paths in the Kubernetes API.
@ -17,7 +17,7 @@ tags:
title: API Group
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
A set of related paths in the Kubernetes API.

View File

@ -17,12 +17,12 @@ weight: 40
<!--
Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated.
You can also generate your own certificates - for example, to keep your private keys more secure by not storing them on the API server.
This page explains the certificates that your cluster requires.
-->
Kubernetes requires PKI certificates for authentication over TLS. If you install Kubernetes with
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/),
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/),
the certificates that your cluster requires are automatically generated. You can also generate your own certificates,
for example, to keep your private keys more secure by not storing them on the API server. This page explains the certificates that your cluster requires.
@ -144,13 +144,13 @@ Required certificates:
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
<!--
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/) the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
where `kind` maps to one or more of the [x509 key usage](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) types:
-->
[1]: any other IP or DNS name you contact your cluster on
(such as the stable IP and/or DNS name [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) uses for the load balancer:
(such as the stable IP and/or DNS name [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) uses for the load balancer:
`kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`).
@ -193,11 +193,11 @@ For kubeadm users only:
<!--
### Certificate paths
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)). Paths should be specified using the given argument regardless of location.
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)). Paths should be specified using the given argument regardless of location.
-->
### Certificate paths
Certificates should be placed in a recommended path (as used by [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)). Regardless of the location, paths should be specified using the given argument.
Certificates should be placed in a recommended path (as used by [kubeadm](/zh/docs/reference/setup-tools/kubeadm/)). Regardless of the location, paths should be specified using the given argument.
| Default CN | Recommended key path | Recommended cert path | Command | Key argument | Cert argument |
|------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------|

View File

@ -266,9 +266,9 @@ kubeadm 不支持将没有 `--control-plane-endpoint` 参数的单个控制平
### More information
<!--
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/).
-->
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/zh/docs/reference/setup-tools/kubeadm/kubeadm/).
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/zh/docs/reference/setup-tools/kubeadm/).
<!--
To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
@ -802,7 +802,7 @@ options.
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
@ -816,7 +816,7 @@ options.
-->
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/zh/docs/reference/setup-tools/kubeadm/kubeadm)
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/zh/docs/reference/setup-tools/kubeadm)
* Learn more about Kubernetes [concepts](/zh/docs/concepts/) and [`kubectl`](/zh/docs/reference/kubectl/overview/).
* See the [Cluster Networking](/zh/docs/concepts/cluster-administration/networking/) page for a bigger list of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/zh/docs/concepts/cluster-administration/addons/) to explore other add-ons,

View File

@ -150,7 +150,7 @@ such as systemd.
Some hosts require specific kubelet configurations due to differences in hardware, operating system, networking, or other host-specific parameters.
The following list provides a few examples.
- The path to the DNS resolution file, as specified by the `--resolv-confkubelet` kubelet configuration flag, may differ among operating systems,
- The path to the DNS resolution file, as specified by the `--resolv-conf` kubelet configuration flag, may differ among operating systems,
  depending on whether you are using `systemd-resolved`.
  If this path is wrong, DNS resolution will fail on the node whose kubelet is misconfigured.

View File

@ -48,10 +48,10 @@ Kubespray 是一个由 [Ansible](https://docs.ansible.com/) playbooks、[清单
<!--
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
[kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
-->
To choose the tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) of
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/zh/docs/setup/production-environment/tools/kops/).
[kubeadm](/zh/docs/reference/setup-tools/kubeadm/) and [kops](/zh/docs/setup/production-environment/tools/kops/).
<!-- body -->
<!--