Merge remote-tracking branch 'upstream/main' into dev-1.24
Commit: 712f45dee4
@@ -62,7 +62,7 @@ spec:
 Note that completion mode is an alpha feature in the 1.21 release. To be able to
 use it in your cluster, make sure to enable the `IndexedJob` [feature
 gate](/docs/reference/command-line-tools-reference/feature-gates/) on the
-[API server](docs/reference/command-line-tools-reference/kube-apiserver/) and
+[API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and
 the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).

 When you run the example, you will see that each of the three created Pods gets a
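For reference, an Indexed Job of the kind the page above describes looks roughly like this; it is a minimal sketch, and the name, image, and command are illustrative rather than taken from the commit:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-job        # illustrative name
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed  # needs the IndexedJob feature gate in v1.21
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        # each Pod receives its index via the JOB_COMPLETION_INDEX env var
        command: ["sh", "-c", "echo My completion index is $JOB_COMPLETION_INDEX"]
```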
@@ -8,8 +8,8 @@ slug: kubernetes-1-23-statefulset-pvc-auto-deletion
 **Author:** Matthew Cary (Google)

 Kubernetes v1.23 introduced a new, alpha-level policy for
-[StatefulSets](docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
-[PersistentVolumeClaims](docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
+[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
+[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
 StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet
 is deleted or pods in the StatefulSet are scaled down.
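For reference, a sketch of a StatefulSet using this alpha policy (the workload names and sizes are illustrative; the field is gated by `StatefulSetAutoDeletePVC` in v1.23):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove generated PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep PVCs when the StatefulSet is scaled down
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```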
@@ -82,7 +82,7 @@ This policy forms a matrix with four cases. I’ll walk through and give an example
 new replicas will automatically use them.

 Visit the
-[documentation](docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
+[documentation](/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
 see all the details.

 ## What’s next?
@@ -18,7 +18,7 @@ The [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-
 However, this little software shim was never intended to be a permanent solution. Over the course of years, its existence has introduced a lot of unnecessary complexity to the kubelet itself. Some integrations are inconsistently implemented for Docker because of this shim, resulting in an increased burden on maintainers, and maintaining vendor-specific code is not in line with our open source philosophy. To reduce this maintenance burden and move towards a more collaborative community in support of open standards, [KEP-2221 was introduced](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim), proposing the removal of the dockershim. With the release of Kubernetes v1.20, the deprecation was official.

-We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
+We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).

 We later [surveyed the community](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) and [discovered that there are still many users with questions and concerns](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim). In response, Kubernetes maintainers and the CNCF committed to addressing these concerns by extending documentation and other programs. In fact, this blog post is a part of this program. With so many end users successfully migrated to other runtimes, and improved documentation, we believe that everyone has a paved way to migration now.
@@ -312,16 +312,18 @@ controller deletes the node from its list of nodes.
 The third is monitoring the nodes' health. The node controller is
 responsible for:

-- In the case that a node becomes unreachable, updating the NodeReady condition
-  of within the Node's `.status`. In this case the node controller sets the
-  NodeReady condition to `ConditionUnknown`.
+- In the case that a node becomes unreachable, updating the `Ready` condition
+  in the Node's `.status` field. In this case the node controller sets the
+  `Ready` condition to `Unknown`.
 - If a node remains unreachable: triggering
   [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
   for all of the Pods on the unreachable node. By default, the node controller
-  waits 5 minutes between marking the node as `ConditionUnknown` and submitting
+  waits 5 minutes between marking the node as `Unknown` and submitting
   the first eviction request.

-The node controller checks the state of each node every `--node-monitor-period` seconds.
+By default, the node controller checks the state of each node every 5 seconds.
+This period can be configured using the `--node-monitor-period` flag on the
+`kube-controller-manager` component.

 ### Rate limits on eviction
@@ -331,7 +333,7 @@ from more than 1 node per 10 seconds.

 The node eviction behavior changes when a node in a given availability zone
 becomes unhealthy. The node controller checks what percentage of nodes in the zone
-are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
+are unhealthy (the `Ready` condition is `Unknown` or `False`) at
 the same time:

 - If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
@@ -384,7 +386,7 @@ If you want to explicitly reserve resources for non-Pod processes, see

 ## Node topology

-{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
+{{< feature-state state="beta" for_k8s_version="v1.18" >}}

 If you have enabled the `TopologyManager`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
@@ -412,7 +414,7 @@ enabled by default in 1.21.

 Note that by default, both configuration options described below,
 `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
-thus not activating Graceful node shutdown functionality.
+thus not activating the graceful node shutdown functionality.
 To activate the feature, the two kubelet config settings should be configured appropriately and
 set to non-zero values.
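For reference, activating the feature through the kubelet configuration file would look roughly like this; the durations are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# total grace period the kubelet allows pods during a node shutdown
shutdownGracePeriod: "30s"
# portion of shutdownGracePeriod reserved for critical pods
shutdownGracePeriodCriticalPods: "10s"
```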
@@ -108,7 +108,7 @@ Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< gl
 text="cgroup" term_id="cgroup" >}} for the Pod. It is within this pod that the underlying
 container runtime will create containers.

-If the resource has a limit defined for each container (Guaranteed QoS or Bustrable QoS with limits defined),
+If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
 the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU
 and memory.limit_in_bytes memory). This upper limit is based on the sum of the container limits plus the `overhead`
 defined in the PodSpec.
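For reference, the `overhead` added to the pod cgroup limit is typically populated from a RuntimeClass; a sketch, with the handler name and values illustrative:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata        # illustrative
handler: kata       # must match a handler configured in the CRI runtime
overhead:
  podFixed:         # added to the sum of container limits for the pod cgroup
    cpu: 250m
    memory: 120Mi
```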
@@ -74,7 +74,7 @@ A minimal Ingress resource example:

 {{< codenew file="service/networking/minimal-ingress.yaml" >}}

-As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields.
+An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
 The name of an Ingress object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
 For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
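For reference, a minimal Ingress along the lines of the `minimal-ingress.yaml` example shows all four required fields; the path and backend Service name are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test   # illustrative backend Service
            port:
              number: 80
```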
@@ -76,9 +76,9 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl
 The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
 a [Job](/docs/concepts/workloads/controllers/job/).

-As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the
-`.spec.template`. The pod selector will no longer be defaulted when left empty. Selector
-defaulting was not compatible with `kubectl apply`. Also, once a DaemonSet is created,
+You must specify a pod selector that matches the labels of the
+`.spec.template`.
+Also, once a DaemonSet is created,
 its `.spec.selector` can not be mutated. Mutating the pod selector can lead to the
 unintentional orphaning of Pods, and it was found to be confusing to users.
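For reference, a sketch of the required label/selector agreement in a DaemonSet; names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      name: node-agent     # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        name: node-agent
    spec:
      containers:
      - name: agent
        image: k8s.gcr.io/pause:3.1   # illustrative image
```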
@@ -91,8 +91,8 @@ The `.spec.selector` is an object consisting of two fields:

 When the two are specified the result is ANDed.

-If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`.
-Config with these not matching will be rejected by the API.
+The `.spec.selector` must match the `.spec.template.metadata.labels`.
+Config with these two not matching will be rejected by the API.

 ### Running Pods on select Nodes
@@ -107,7 +107,7 @@ If you do not specify either, then the DaemonSet controller will create Pods on

 ### Scheduled by default scheduler

-{{< feature-state for_kubernetes_version="1.17" state="stable" >}}
+{{< feature-state for_k8s_version="1.17" state="stable" >}}

 A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the
 node that a Pod runs on is selected by the Kubernetes scheduler. However,
@@ -8,7 +8,7 @@ weight: 15
 <!--Overview-->

 This guide shows you how to create, edit and share diagrams using the Mermaid
-Javascript library. Mermaid.js allows you to generate diagrams using a simple
+JavaScript library. Mermaid.js allows you to generate diagrams using a simple
 markdown-like syntax inside Markdown files. You can also use Mermaid to
 generate `.svg` or `.png` image files that you can add to your documentation.
@@ -33,13 +33,13 @@ properties:
   - `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all unauthenticated requests.
 - Resource-matching properties:
   - `apiGroup`, type string; an API group.
-    - Ex: `extensions`
+    - Ex: `apps`, `networking.k8s.io`
     - Wildcard: `*` matches all API groups.
   - `namespace`, type string; a namespace.
     - Ex: `kube-system`
     - Wildcard: `*` matches all resource requests.
   - `resource`, type string; a resource type
-    - Ex: `pods`
+    - Ex: `pods`, `deployments`
     - Wildcard: `*` matches all resource requests.
 - Non-resource-matching properties:
   - `nonResourcePath`, type string; non-resource request paths.
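For reference, ABAC expects one JSON object per line in the policy file; a sketch combining these properties, with the user and values illustrative:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "kube-system", "resource": "pods", "apiGroup": "*", "readonly": true}}
```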
@@ -384,11 +384,11 @@ rules:
 ```

 Allow reading/writing Deployments (at the HTTP level: objects with `"deployments"`
-in the resource part of their URL) in both the `"extensions"` and `"apps"` API groups:
+in the resource part of their URL) in the `"apps"` API groups:

 ```yaml
 rules:
-- apiGroups: ["extensions", "apps"]
+- apiGroups: ["apps"]
   #
   # at the HTTP level, the name of the resource for accessing Deployment
   # objects is "deployments"
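For reference, the complete rule sketched out; the verb list is illustrative:

```yaml
rules:
- apiGroups: ["apps"]
  # at the HTTP level, the name of the resource for accessing Deployment
  # objects is "deployments"
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```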
@@ -397,7 +397,7 @@ rules:
 ```

 Allow reading Pods in the core API group, as well as reading or writing Job
-resources in the `"batch"` or `"extensions"` API groups:
+resources in the `"batch"` API group:

 ```yaml
 rules:
@@ -407,7 +407,7 @@ rules:
   # objects is "pods"
   resources: ["pods"]
   verbs: ["get", "list", "watch"]
-- apiGroups: ["batch", "extensions"]
+- apiGroups: ["batch"]
   #
   # at the HTTP level, the name of the resource for accessing Job
   # objects is "jobs"
@@ -517,7 +517,7 @@ subjects:
   namespace: kube-system
 ```

-For all service accounts in the "qa" group in any namespace:
+For all service accounts in the "qa" namespace:

 ```yaml
 subjects:
@@ -525,15 +525,6 @@ subjects:
   name: system:serviceaccounts:qa
   apiGroup: rbac.authorization.k8s.io
 ```
-For all service accounts in the "dev" group in the "development" namespace:
-
-```yaml
-subjects:
-- kind: Group
-  name: system:serviceaccounts:dev
-  apiGroup: rbac.authorization.k8s.io
-  namespace: development
-```

 For all service accounts in any namespace:
@@ -4,7 +4,7 @@ id: cadvisor
 date: 2021-12-09
 full_link: https://github.com/google/cadvisor/
 short_description: >
-  Tool that provides understanding of the resource usage and perfomance characteristics for containers
+  Tool that provides understanding of the resource usage and performance characteristics for containers
 aka:
 tags:
 - tool
@@ -168,6 +168,44 @@ Used on: Pod
 This annotation is used to set [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
 which allows users to influence ReplicaSet downscaling order. The annotation parses into an `int32` type.

+### kubernetes.io/ingress-bandwidth
+
+{{< note >}}
+Ingress traffic shaping annotation is an experimental feature.
+If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file (default `/etc/cni/net.d`) and
+ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).
+{{< /note >}}
+
+Example: `kubernetes.io/ingress-bandwidth: 10M`
+
+Used on: Pod
+
+You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth.
+Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data.
+To limit the bandwidth on a pod, write an object definition JSON file and specify the data traffic
+speed using `kubernetes.io/ingress-bandwidth` annotation. The unit used for specifying ingress
+rate is bits per second, as a [Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
+For example, `10M` means 10 megabits per second.
+
+### kubernetes.io/egress-bandwidth
+
+{{< note >}}
+Egress traffic shaping annotation is an experimental feature.
+If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file (default `/etc/cni/net.d`) and
+ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).
+{{< /note >}}
+
+Example: `kubernetes.io/egress-bandwidth: 10M`
+
+Used on: Pod
+
+Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate.
+The limits you place on a pod do not affect the bandwidth of other pods.
+To limit the bandwidth on a pod, write an object definition JSON file and specify the data traffic
+speed using `kubernetes.io/egress-bandwidth` annotation. The unit used for specifying egress
+rate is bits per second, as a [Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
+For example, `10M` means 10 megabits per second.
+
 ### beta.kubernetes.io/instance-type (deprecated)

 {{< note >}} Starting in v1.17, this label is deprecated in favor of [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type). {{< /note >}}
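For reference, a Pod sketch carrying both annotations; the name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-limited
  annotations:
    kubernetes.io/ingress-bandwidth: "10M"   # shape traffic to the pod
    kubernetes.io/egress-bandwidth: "10M"    # police traffic from the pod
spec:
  containers:
  - name: app
    image: nginx
```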
@@ -14,7 +14,6 @@ card:
 This page shows how to install the `kubeadm` toolbox.
 For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.

-{{% dockershim-removal %}}

 ## {{% heading "prerequisites" %}}
@@ -10,6 +10,8 @@ weight: 80

+{{% dockershim-removal %}}
+
 {{< feature-state for_k8s_version="v1.11" state="stable" >}}

 The lifecycle of the kubeadm CLI tool is decoupled from the
 [kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
 on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is
@@ -150,7 +150,7 @@ describes how you can configure this as a cluster administrator.

 ### Programmatic access to the API

-Kubernetes officially supports client libraries for [Go](#go-client), [Python](#python-client), [Java](#java-client), [dotnet](#dotnet-client), [Javascript](#javascript-client), and [Haskell](#haskell-client). There are other client libraries that are provided and maintained by their authors, not the Kubernetes team. See [client libraries](/docs/reference/using-api/client-libraries/) for accessing the API from other languages and how they authenticate.
+Kubernetes officially supports client libraries for [Go](#go-client), [Python](#python-client), [Java](#java-client), [dotnet](#dotnet-client), [JavaScript](#javascript-client), and [Haskell](#haskell-client). There are other client libraries that are provided and maintained by their authors, not the Kubernetes team. See [client libraries](/docs/reference/using-api/client-libraries/) for accessing the API from other languages and how they authenticate.

 #### Go client
@@ -16,7 +16,6 @@ weight: 30

 You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux on with Pods that run on Windows. This page shows how to register Windows nodes to your cluster.

-{{% dockershim-removal %}}

 ## {{% heading "prerequisites" %}}
 {{< version-check >}}
@@ -312,7 +312,7 @@ appropriate Pod Security profile is applied to new namespaces.

 You can also statically configure the Pod Security admission controller to set a default enforce,
 audit, and/or warn level for unlabeled namespaces. See
-[Configure the Admission Controller](docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller)
+[Configure the Admission Controller](/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller)
 for more information.

 ## 5. Disable PodSecurityPolicy {#disable-psp}
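For reference, a sketch of such a static configuration, passed to the API server via `--admission-control-config-file`; the levels are illustrative and the configuration API version may differ by release:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:                # applied to namespaces without explicit labels
      enforce: "baseline"
      enforce-version: "latest"
      warn: "restricted"
      warn-version: "latest"
      audit: "restricted"
      audit-version: "latest"
```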
@@ -1,6 +0,0 @@
----
-title: "Monitoring, Logging, and Debugging"
-description: Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application.
-weight: 80
----
-
@@ -1,124 +0,0 @@
----
-reviewers:
-- davidopp
-title: Troubleshoot Clusters
-content_type: concept
----
-
-<!-- overview -->
-
-This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
-problem you are experiencing. See
-the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging.
-You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
-
-<!-- body -->
-
-## Listing your cluster
-
-The first thing to debug in your cluster is if your nodes are all registered correctly.
-
-Run
-
-```shell
-kubectl get nodes
-```
-
-And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
-
-To get detailed information about the overall health of your cluster, you can run:
-
-```shell
-kubectl cluster-info dump
-```
-
-## Looking at logs
-
-For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
-of the relevant log files. (note that on systemd-based systems, you may need to use `journalctl` instead)
-
-### Master
-
-* `/var/log/kube-apiserver.log` - API Server, responsible for serving the API
-* `/var/log/kube-scheduler.log` - Scheduler, responsible for making scheduling decisions
-* `/var/log/kube-controller-manager.log` - Controller that manages replication controllers
-
-### Worker Nodes
-
-* `/var/log/kubelet.log` - Kubelet, responsible for running containers on the node
-* `/var/log/kube-proxy.log` - Kube Proxy, responsible for service load balancing
-
-## A general overview of cluster failure modes
-
-This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.
-
-### Root causes:
-
-- VM(s) shutdown
-- Network partition within cluster, or between cluster and users
-- Crashes in Kubernetes software
-- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
-- Operator error, for example misconfigured Kubernetes software or application software
-
-### Specific scenarios:
-
-- Apiserver VM shutdown or apiserver crashing
-  - Results
-    - unable to stop, update, or start new pods, services, replication controller
-    - existing pods and services should continue to work normally, unless they depend on the Kubernetes API
-- Apiserver backing storage lost
-  - Results
-    - apiserver should fail to come up
-    - kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying
-    - manual recovery or recreation of apiserver state necessary before apiserver is restarted
-- Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes
-  - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver
-  - in future, these will be replicated as well and may not be co-located
-  - they do not have their own persistent state
-- Individual node (VM or physical machine) shuts down
-  - Results
-    - pods on that Node stop running
-- Network partition
-  - Results
-    - partition A thinks the nodes in partition B are down; partition B thinks the apiserver is down. (Assuming the master VM ends up in partition A.)
-- Kubelet software fault
-  - Results
-    - crashing kubelet cannot start new pods on the node
-    - kubelet might delete the pods or not
-    - node marked unhealthy
-    - replication controllers start new pods elsewhere
-- Cluster operator error
-  - Results
-    - loss of pods, services, etc
-    - lost of apiserver backing store
-    - users unable to read API
-    - etc.
-
-### Mitigations:
-
-- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
-  - Mitigates: Apiserver VM shutdown or apiserver crashing
-  - Mitigates: Supporting services VM shutdown or crashes
-
-- Action: Use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
-  - Mitigates: Apiserver backing storage lost
-
-- Action: Use [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) configuration
-  - Mitigates: Control plane node shutdown or control plane components (scheduler, API server, controller-manager) crashing
-    - Will tolerate one or more simultaneous node or component failures
-  - Mitigates: API server backing storage (i.e., etcd's data directory) lost
-    - Assumes HA (highly-available) etcd configuration
-
-- Action: Snapshot apiserver PDs/EBS-volumes periodically
-  - Mitigates: Apiserver backing storage lost
-  - Mitigates: Some cases of operator error
-  - Mitigates: Some cases of Kubernetes software fault
-
-- Action: use replication controller and services in front of pods
-  - Mitigates: Node shutdown
-  - Mitigates: Kubelet software fault
-
-- Action: applications (containers) designed to tolerate unexpected restarts
-  - Mitigates: Node shutdown
-  - Mitigates: Kubelet software fault
-
@@ -1,107 +0,0 @@
----
-reviewers:
-- bprashanth
-title: Debug Pods and ReplicationControllers
-content_type: task
----
-
-<!-- overview -->
-
-This page shows how to debug Pods and ReplicationControllers.
-
-## {{% heading "prerequisites" %}}
-
-{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-* You should be familiar with the basics of
-  {{< glossary_tooltip text="Pods" term_id="pod" >}} and with
-  Pods' [lifecycles](/docs/concepts/workloads/pods/pod-lifecycle/).
-
-<!-- steps -->
-
-## Debugging Pods
-
-The first step in debugging a pod is taking a look at it. Check the current
-state of the pod and recent events with the following command:
-
-```shell
-kubectl describe pods ${POD_NAME}
-```
-
-Look at the state of the containers in the pod. Are they all `Running`? Have
-there been recent restarts?
-
-Continue debugging depending on the state of the pods.
-
-### My pod stays pending
-
-If a pod is stuck in `Pending` it means that it can not be scheduled onto a
-node. Generally this is because there are insufficient resources of one type or
-another that prevent scheduling. Look at the output of the `kubectl describe
-...` command above. There should be messages from the scheduler about why it
-can not schedule your pod. Reasons include:
-
-#### Insufficient resources
-
-You may have exhausted the supply of CPU or Memory in your cluster. In this
-case you can try several things:
-
-* Add more nodes to the cluster.
-
-* [Terminate unneeded pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
-  to make room for pending pods.
-
-* Check that the pod is not larger than your nodes. For example, if all
-  nodes have a capacity of `cpu:1`, then a pod with a request of `cpu: 1.1`
-  will never be scheduled.
-
-  You can check node capacities with the `kubectl get nodes -o <format>`
-  command. Here are some example command lines that extract the necessary
-  information:
-
-  ```shell
-  kubectl get nodes -o yaml | egrep '\sname:|cpu:|memory:'
-  kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'
-  ```
-
-The [resource quota](/docs/concepts/policy/resource-quotas/)
-feature can be configured to limit the total amount of
-resources that can be consumed. If used in conjunction with namespaces, it can
-prevent one team from hogging all the resources.
-
-#### Using hostPort
-
-When you bind a pod to a `hostPort` there are a limited number of places that
-the pod can be scheduled. In most cases, `hostPort` is unnecessary; try using a
-service object to expose your pod. If you do require `hostPort` then you can
-only schedule as many pods as there are nodes in your container cluster.
-
-### My pod stays waiting
-
-If a pod is stuck in the `Waiting` state, then it has been scheduled to a
-worker node, but it can't run on that machine. Again, the information from
-`kubectl describe ...` should be informative. The most common cause of
-`Waiting` pods is a failure to pull the image. There are three things to check:
-
-* Make sure that you have the name of the image correct.
-* Have you pushed the image to the repository?
-* Try to manually pull the image to see if it can be pulled. For example, if you
-  use Docker on your PC, run `docker pull <image>`.
-
-### My pod is crashing or otherwise unhealthy
-
-Once your pod has been scheduled, the methods described in [Debug Running Pods](
-/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging.
-
-## Debugging ReplicationControllers
-
-ReplicationControllers are fairly straightforward. They can either create pods
-or they can't. If they can't create pods, then please refer to the
-[instructions above](#debugging-pods) to debug your pods.
-
-You can also use `kubectl describe rc ${CONTROLLER_NAME}` to inspect events
-related to the replication controller.
-
@@ -1,333 +0,0 @@
----
-reviewers:
-- verb
-- soltysh
-title: Debug Running Pods
-content_type: task
----
-
-<!-- overview -->
-
-This page explains how to debug Pods running (or crashing) on a Node.
-
-## {{% heading "prerequisites" %}}
-
-* Your {{< glossary_tooltip text="Pod" term_id="pod" >}} should already be
-  scheduled and running. If your Pod is not yet running, start with [Troubleshoot
-  Applications](/docs/tasks/debug-application-cluster/debug-application/).
-* For some of the advanced debugging steps you need to know on which Node the
-  Pod is running and have shell access to run commands on that Node. You don't
-  need that access to run the standard debug steps that use `kubectl`.
-
-<!-- steps -->
-
-## Examining pod logs {#examine-pod-logs}
-
-First, look at the logs of the affected container:
-
-```shell
-kubectl logs ${POD_NAME} ${CONTAINER_NAME}
-```
-
-If your container has previously crashed, you can access the previous container's crash log with:
-
-```shell
-kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
-```
-
-## Debugging with container exec {#container-exec}
-
-If the {{< glossary_tooltip text="container image" term_id="image" >}} includes
-debugging utilities, as is the case with images built from Linux and Windows OS
-base images, you can run commands inside a specific container with
-`kubectl exec`:
-
-```shell
-kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
-```
-
-{{< note >}}
-`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container.
-{{< /note >}}
-
-As an example, to look at the logs from a running Cassandra pod, you might run
-
-```shell
-kubectl exec cassandra -- cat /var/log/cassandra/system.log
-```
-
-You can run a shell that's connected to your terminal using the `-i` and `-t`
-arguments to `kubectl exec`, for example:
-
-```shell
-kubectl exec -it cassandra -- sh
-```
-
-For more details, see [Get a Shell to a Running Container](
-/docs/tasks/debug-application-cluster/get-shell-running-container/).
-
-## Debugging with an ephemeral debug container {#ephemeral-container}
-
-{{< feature-state state="beta" for_k8s_version="v1.23" >}}
-
-{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
-are useful for interactive troubleshooting when `kubectl exec` is insufficient
-because a container has crashed or a container image doesn't include debugging
-utilities, such as with [distroless images](
-https://github.com/GoogleContainerTools/distroless).
-
-### Example debugging using ephemeral containers {#ephemeral-container-example}
-
-You can use the `kubectl debug` command to add ephemeral containers to a
-running Pod. First, create a pod for the example:
-
-```shell
-kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
-```
-
-The examples in this section use the `pause` container image because it does not
-contain debugging utilities, but this method works with all container
-images.
-
-If you attempt to use `kubectl exec` to create a shell you will see an error
-because there is no shell in this container image.
-
-```shell
-kubectl exec -it ephemeral-demo -- sh
-```
-
-```
-OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
-```
-
-You can instead add a debugging container using `kubectl debug`. If you
-specify the `-i`/`--interactive` argument, `kubectl` will automatically attach
-to the console of the Ephemeral Container.
-
-```shell
-kubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo
-```
-
-```
-Defaulting debug container name to debugger-8xzrl.
-If you don't see a command prompt, try pressing enter.
-/ #
-```
-
-This command adds a new busybox container and attaches to it. The `--target`
-parameter targets the process namespace of another container. It's necessary
-here because `kubectl run` does not enable [process namespace sharing](
-/docs/tasks/configure-pod-container/share-process-namespace/) in the pod it
-creates.
-
-{{< note >}}
-The `--target` parameter must be supported by the {{< glossary_tooltip
-text="Container Runtime" term_id="container-runtime" >}}. When not supported,
-the Ephemeral Container may not be started, or it may be started with an
-isolated process namespace so that `ps` does not reveal processes in other
-containers.
-{{< /note >}}
-
-You can view the state of the newly created ephemeral container using `kubectl describe`:
-
-```shell
-kubectl describe pod ephemeral-demo
-```
-
-```
-...
-Ephemeral Containers:
-  debugger-8xzrl:
-    Container ID:   docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb
-    Image:          busybox
-    Image ID:       docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
-    Port:           <none>
-    Host Port:      <none>
-    State:          Running
-      Started:      Wed, 12 Feb 2020 14:25:42 +0100
-    Ready:          False
-    Restart Count:  0
-    Environment:    <none>
-    Mounts:         <none>
-...
-```
-
-Use `kubectl delete` to remove the Pod when you're finished:
-
-```shell
-kubectl delete pod ephemeral-demo
-```
-
-## Debugging using a copy of the Pod
-
-Sometimes Pod configuration options make it difficult to troubleshoot in certain
-situations. For example, you can't run `kubectl exec` to troubleshoot your
-container if your container image does not include a shell or if your application
-crashes on startup. In these situations you can use `kubectl debug` to create a
-copy of the Pod with configuration values changed to aid debugging.
-
-### Copying a Pod while adding a new container
-
-Adding a new container can be useful when your application is running but not
-behaving as you expect and you'd like to add additional troubleshooting
-utilities to the Pod.
-
-For example, maybe your application's container images are built on `busybox`
-but you need debugging utilities not included in `busybox`. You can simulate
-this scenario using `kubectl run`:
-
-```shell
-kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
-```
-
-Run this command to create a copy of `myapp` named `myapp-debug` that adds a
-new Ubuntu container for debugging:
-
-```shell
-kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug
-```
-
-```
-Defaulting debug container name to debugger-w7xmf.
-If you don't see a command prompt, try pressing enter.
-root@myapp-debug:/#
-```
-
-{{< note >}}
-* `kubectl debug` automatically generates a container name if you don't choose
-  one using the `--container` flag.
-* The `-i` flag causes `kubectl debug` to attach to the new container by
-  default. You can prevent this by specifying `--attach=false`. If your session
-  becomes disconnected you can reattach using `kubectl attach`.
-* The `--share-processes` allows the containers in this Pod to see processes
-  from the other containers in the Pod. For more information about how this
-  works, see [Share Process Namespace between Containers in a Pod](
-  /docs/tasks/configure-pod-container/share-process-namespace/).
-{{< /note >}}
-
-Don't forget to clean up the debugging Pod when you're finished with it:
-
-```shell
-kubectl delete pod myapp myapp-debug
-```
-
-### Copying a Pod while changing its command
-
-Sometimes it's useful to change the command for a container, for example to
-add a debugging flag or because the application is crashing.
-
-To simulate a crashing application, use `kubectl run` to create a container
-that immediately exits:
-
-```
-kubectl run --image=busybox:1.28 myapp -- false
-```
-
-You can see using `kubectl describe pod myapp` that this container is crashing:
-
-```
-Containers:
-  myapp:
-    Image:         busybox
-    ...
-    Args:
-      false
-    State:         Waiting
-      Reason:      CrashLoopBackOff
-    Last State:    Terminated
-      Reason:      Error
-      Exit Code:   1
-```
-
-You can use `kubectl debug` to create a copy of this Pod with the command
-changed to an interactive shell:
-
-```
-kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh
-```
-
-```
-If you don't see a command prompt, try pressing enter.
-/ #
-```
-
-Now you have an interactive shell that you can use to perform tasks like
-checking filesystem paths or running the container command manually.
-
-{{< note >}}
-* To change the command of a specific container you must
-  specify its name using `--container` or `kubectl debug` will instead
-  create a new container to run the command you specified.
-* The `-i` flag causes `kubectl debug` to attach to the container by default.
-  You can prevent this by specifying `--attach=false`. If your session becomes
-  disconnected you can reattach using `kubectl attach`.
-{{< /note >}}
-
-Don't forget to clean up the debugging Pod when you're finished with it:
-
-```shell
-kubectl delete pod myapp myapp-debug
-```
-
-### Copying a Pod while changing container images
-
-In some situations you may want to change a misbehaving Pod from its normal
-production container images to an image containing a debugging build or
-additional utilities.
-
-As an example, create a Pod using `kubectl run`:
-
-```
-kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
-```
-
-Now use `kubectl debug` to make a copy and change its container image
-to `ubuntu`:
-
-```
-kubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu
-```
-
-The syntax of `--set-image` uses the same `container_name=image` syntax as
-`kubectl set image`. `*=ubuntu` means change the image of all containers
-to `ubuntu`.
-
-Don't forget to clean up the debugging Pod when you're finished with it:
-
-```shell
-kubectl delete pod myapp myapp-debug
-```
-
-## Debugging via a shell on the node {#node-shell-session}
-
-If none of these approaches work, you can find the Node on which the Pod is
-running and create a privileged Pod running in the host namespaces. To create
-an interactive shell on a node using `kubectl debug`, run:
-
-```shell
-kubectl debug node/mynode -it --image=ubuntu
-```

-```
-Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.
-If you don't see a command prompt, try pressing enter.
-root@ek8s:/#
-```
-
-When creating a debugging session on a node, keep in mind that:
-
-* `kubectl debug` automatically generates the name of the new Pod based on
-  the name of the Node.
-* The container runs in the host IPC, Network, and PID namespaces.
-* The root filesystem of the Node will be mounted at `/host`.
-
-Don't forget to clean up the debugging Pod when you're finished with it:
-
-```shell
-kubectl delete pod node-debugger-mynode-pdx84
-```
@@ -1,9 +1,12 @@
 ---
-title: "Monitoring, Logging, and Debugging"
 description: Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application.
 weight: 20
+reviewers:
+- brendandburns
+- davidopp
+content_type: concept
+title: Troubleshooting
+no_list: true
 ---

 <!-- overview -->
@@ -11,9 +14,9 @@ title: Troubleshooting
 Sometimes things go wrong. This guide is aimed at making them right. It has
 two sections:

-* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Useful
+* [Debugging your application](/docs/tasks/debug/debug-application/) - Useful
   for users who are deploying code into Kubernetes and wondering why it is not working.
-* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - Useful
+* [Debugging your cluster](/docs/tasks/debug/debug-cluster/) - Useful
   for cluster administrators and people whose Kubernetes cluster is unhappy.

 You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/releases)
@@ -0,0 +1,8 @@
+---
+title: "Troubleshooting Applications"
+description: Debugging common containerized application issues.
+weight: 20
+---
+
+This doc contains a set of resources for fixing issues with containerized applications. It covers things like common issues with Kubernetes resources (like Pods, Services, or StatefulSets), advice on making sense of container termination messages, and ways to debug running containers.
+
@@ -9,6 +9,7 @@ reviewers:
 - smarterclayton
 title: Debug Init Containers
 content_type: task
+weight: 40
 ---

 <!-- overview -->
@@ -2,15 +2,16 @@
 reviewers:
 - mikedanese
 - thockin
-title: Troubleshoot Applications
+title: Debug Pods
 content_type: concept
+weight: 10
 ---

 <!-- overview -->

 This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
 This is *not* a guide for people who want to debug their cluster. For that you should check out
-[this guide](/docs/tasks/debug-application-cluster/debug-cluster).
+[this guide](/docs/tasks/debug/debug-cluster).

 <!-- body -->
@@ -64,7 +65,7 @@ Again, the information from `kubectl describe ...` should be informative. The m
 #### My pod is crashing or otherwise unhealthy

 Once your pod has been scheduled, the methods described in [Debug Running Pods](
-/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging.
+/docs/tasks/debug/debug-applications/debug-running-pod/) are available for debugging.

 #### My pod is running but not doing what I told it to do
@@ -145,15 +146,15 @@ Verify that the pod's `containerPort` matches up with the Service's `targetPort`

 #### Network traffic is not forwarded

-Please see [debugging service](/docs/tasks/debug-application-cluster/debug-service/) for more information.
+Please see [debugging service](/docs/tasks/debug/debug-applications/debug-service/) for more information.

 ## {{% heading "whatsnext" %}}

 If none of the above solves your problem, follow the instructions in
-[Debugging Service document](/docs/tasks/debug-application-cluster/debug-service/)
+[Debugging Service document](/docs/tasks/debug/debug-applications/debug-service/)
 to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are
 actually serving; you have DNS working, iptables rules installed, and kube-proxy
 does not seem to be misbehaving.

-You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
+You may also visit [troubleshooting document](/docs/tasks/debug/overview/) for more information.
@@ -1,21 +1,25 @@
 ---
 reviewers:
-- janetkuo
-- thockin
-content_type: concept
-title: Application Introspection and Debugging
+- verb
+- soltysh
+title: Debug Running Pods
+content_type: task
 ---

 <!-- overview -->

-Once your application is running, you'll inevitably need to debug problems with it.
-Earlier we described how you can use `kubectl get pods` to retrieve simple status information about
-your pods. But there are a number of ways to get even more information about your application.
+This page explains how to debug Pods running (or crashing) on a Node.

+## {{% heading "prerequisites" %}}

-<!-- body -->
+* Your {{< glossary_tooltip text="Pod" term_id="pod" >}} should already be
+  scheduled and running. If your Pod is not yet running, start with [Debugging
+  Pods](/docs/tasks/debug/debug-application/).
+* For some of the advanced debugging steps you need to know on which Node the
+  Pod is running and have shell access to run commands on that Node. You don't
+  need that access to run the standard debug steps that use `kubectl`.

 ## Using `kubectl describe pod` to fetch details about pods
@@ -125,6 +129,7 @@ Currently the only Condition associated with a Pod is the binary Ready condition
 Lastly, you see a log of recent events related to your Pod. The system compresses multiple identical events by indicating the first and last time it was seen and the number of times it was seen. "From" indicates the component that is logging the event, "SubobjectPath" tells you which object (e.g. container within the pod) is being referred to, and "Reason" and "Message" tell you what happened.

+
 ## Example: debugging Pending Pods

 A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Deployment with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
@ -326,197 +331,308 @@ status:
|
|||
startTime: "2022-02-17T21:51:01Z"
|
||||
```
|
||||
|
||||
## Example: debugging a down/unreachable node
|
||||
## Examining pod logs {#examine-pod-logs}
|
||||
|
||||
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
|
||||
First, look at the logs of the affected container:
|
||||
|
||||
```shell
|
||||
kubectl get nodes
|
||||
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
```none
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
kube-worker-1 NotReady <none> 1h v1.23.3
|
||||
kubernetes-node-bols Ready <none> 1h v1.23.3
|
||||
kubernetes-node-st6x Ready <none> 1h v1.23.3
|
||||
kubernetes-node-unaj Ready <none> 1h v1.23.3
|
||||
```
|
||||
If your container has previously crashed, you can access the previous container's crash log with:
|
||||
|
||||
```shell
|
||||
kubectl describe node kube-worker-1
|
||||
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
```none
|
||||
Name: kube-worker-1
|
||||
Roles: <none>
|
||||
Labels: beta.kubernetes.io/arch=amd64
|
||||
beta.kubernetes.io/os=linux
|
||||
kubernetes.io/arch=amd64
|
||||
kubernetes.io/hostname=kube-worker-1
|
||||
kubernetes.io/os=linux
|
||||
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
|
||||
node.alpha.kubernetes.io/ttl: 0
|
||||
volumes.kubernetes.io/controller-managed-attach-detach: true
|
||||
CreationTimestamp: Thu, 17 Feb 2022 16:46:30 -0500
|
||||
Taints: node.kubernetes.io/unreachable:NoExecute
|
||||
node.kubernetes.io/unreachable:NoSchedule
|
||||
Unschedulable: false
|
||||
Lease:
|
||||
HolderIdentity: kube-worker-1
|
||||
AcquireTime: <unset>
|
||||
RenewTime: Thu, 17 Feb 2022 17:13:09 -0500
|
||||
Conditions:
|
||||
Type Status LastHeartbeatTime LastTransitionTime Reason Message
|
||||
---- ------ ----------------- ------------------ ------ -------
|
||||
NetworkUnavailable False Thu, 17 Feb 2022 17:09:13 -0500 Thu, 17 Feb 2022 17:09:13 -0500 WeaveIsUp Weave pod has set this
|
||||
MemoryPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.
|
||||
DiskPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.
|
||||
PIDPressure Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.
|
||||
Ready Unknown Thu, 17 Feb 2022 17:12:40 -0500 Thu, 17 Feb 2022 17:13:52 -0500 NodeStatusUnknown Kubelet stopped posting node status.
|
||||
Addresses:
|
||||
InternalIP: 192.168.0.113
|
||||
Hostname: kube-worker-1
|
||||
Capacity:
|
||||
cpu: 2
|
||||
ephemeral-storage: 15372232Ki
|
||||
hugepages-2Mi: 0
|
||||
memory: 2025188Ki
|
||||
pods: 110
|
||||
Allocatable:
|
||||
cpu: 2
|
||||
ephemeral-storage: 14167048988
|
||||
hugepages-2Mi: 0
|
||||
memory: 1922788Ki
|
||||
pods: 110
|
||||
System Info:
|
||||
Machine ID: 9384e2927f544209b5d7b67474bbf92b
|
||||
System UUID: aa829ca9-73d7-064d-9019-df07404ad448
|
||||
Boot ID: 5a295a03-aaca-4340-af20-1327fa5dab5c
|
||||
Kernel Version: 5.13.0-28-generic
|
||||
OS Image: Ubuntu 21.10
|
||||
Operating System: linux
|
||||
Architecture: amd64
|
||||
Container Runtime Version: containerd://1.5.9
|
||||
Kubelet Version: v1.23.3
|
||||
Kube-Proxy Version: v1.23.3
|
||||
Non-terminated Pods: (4 in total)
|
||||
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
|
||||
--------- ---- ------------ ---------- --------------- ------------- ---
|
||||
default nginx-deployment-67d4bdd6f5-cx2nz 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m
|
||||
default nginx-deployment-67d4bdd6f5-w6kd7 500m (25%) 500m (25%) 128Mi (6%) 128Mi (6%) 23m
|
||||
kube-system kube-proxy-dnxbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
|
||||
kube-system weave-net-gjxxp 100m (5%) 0 (0%) 200Mi (10%) 0 (0%) 28m
|
||||
Allocated resources:
|
||||
(Total limits may be over 100 percent, i.e., overcommitted.)
|
||||
Resource Requests Limits
|
||||
-------- -------- ------
|
||||
cpu 1100m (55%) 1 (50%)
|
||||
memory 456Mi (24%) 256Mi (13%)
|
||||
ephemeral-storage 0 (0%) 0 (0%)
|
||||
hugepages-2Mi 0 (0%) 0 (0%)
|
||||
Events:
|
||||
## Debugging with container exec {#container-exec}

If the {{< glossary_tooltip text="container image" term_id="image" >}} includes
debugging utilities, as is the case with images built from Linux and Windows OS
base images, you can run commands inside a specific container with
`kubectl exec`:

```shell
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```

{{< note >}}
`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container.
{{< /note >}}

As an example, to look at the logs from a running Cassandra pod, you might run

```shell
kubectl exec cassandra -- cat /var/log/cassandra/system.log
```

You can run a shell that's connected to your terminal using the `-i` and `-t`
arguments to `kubectl exec`, for example:

```shell
kubectl exec -it cassandra -- sh
```

For more details, see [Get a Shell to a Running Container](
/docs/tasks/debug/debug-application/get-shell-running-container/).

## Debugging with an ephemeral debug container {#ephemeral-container}

{{< feature-state state="beta" for_k8s_version="v1.23" >}}

{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
are useful for interactive troubleshooting when `kubectl exec` is insufficient
because a container has crashed or a container image doesn't include debugging
utilities, such as with [distroless images](
https://github.com/GoogleContainerTools/distroless).

### Example debugging using ephemeral containers {#ephemeral-container-example}

You can use the `kubectl debug` command to add ephemeral containers to a
running Pod. First, create a pod for the example:

```shell
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
```

The examples in this section use the `pause` container image because it does not
contain debugging utilities, but this method works with all container
images.

If you attempt to use `kubectl exec` to create a shell you will see an error
because there is no shell in this container image.

```shell
kubectl exec -it ephemeral-demo -- sh
```

```
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
```

You can instead add a debugging container using `kubectl debug`. If you
specify the `-i`/`--interactive` argument, `kubectl` will automatically attach
to the console of the Ephemeral Container.

```shell
kubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo
```

```
Defaulting debug container name to debugger-8xzrl.
If you don't see a command prompt, try pressing enter.
/ #
```

This command adds a new busybox container and attaches to it. The `--target`
parameter targets the process namespace of another container. It's necessary
here because `kubectl run` does not enable [process namespace sharing](
/docs/tasks/configure-pod-container/share-process-namespace/) in the pod it
creates.

{{< note >}}
The `--target` parameter must be supported by the {{< glossary_tooltip
text="Container Runtime" term_id="container-runtime" >}}. When not supported,
the Ephemeral Container may not be started, or it may be started with an
isolated process namespace so that `ps` does not reveal processes in other
containers.
{{< /note >}}
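
If targeting worked, processes from the target container are visible from
inside the debug container. As a quick, hedged sanity check (the exact listing
depends on the runtime and images involved), you can list processes from the
new shell:

```shell
# Inside the debugger-8xzrl shell: with --target in effect, the
# target container's process (the pause binary in this example)
# should appear in the listing alongside this shell's processes.
ps ax
```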

You can view the state of the newly created ephemeral container using `kubectl describe`:

```shell
kubectl describe pod ephemeral-demo
```

```
...
Ephemeral Containers:
  debugger-8xzrl:
    Container ID:   docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 12 Feb 2020 14:25:42 +0100
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
...
```

Use `kubectl delete` to remove the Pod when you're finished:

```shell
kubectl delete pod ephemeral-demo
```

## Debugging using a copy of the Pod

Sometimes Pod configuration options make it difficult to troubleshoot in certain
situations. For example, you can't run `kubectl exec` to troubleshoot your
container if your container image does not include a shell or if your application
crashes on startup. In these situations you can use `kubectl debug` to create a
copy of the Pod with configuration values changed to aid debugging.

### Copying a Pod while adding a new container

Adding a new container can be useful when your application is running but not
behaving as you expect and you'd like to add additional troubleshooting
utilities to the Pod.

For example, maybe your application's container images are built on `busybox`
but you need debugging utilities not included in `busybox`. You can simulate
this scenario using `kubectl run`:

```shell
kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
```

Run this command to create a copy of `myapp` named `myapp-debug` that adds a
new Ubuntu container for debugging:

```shell
kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug
```

```
Defaulting debug container name to debugger-w7xmf.
If you don't see a command prompt, try pressing enter.
root@myapp-debug:/#
```

{{< note >}}
* `kubectl debug` automatically generates a container name if you don't choose
  one using the `--container` flag.
* The `-i` flag causes `kubectl debug` to attach to the new container by
  default. You can prevent this by specifying `--attach=false`. If your session
  becomes disconnected you can reattach using `kubectl attach`.
* The `--share-processes` flag allows the containers in this Pod to see processes
  from the other containers in the Pod. For more information about how this
  works, see [Share Process Namespace between Containers in a Pod](
  /docs/tasks/configure-pod-container/share-process-namespace/).
{{< /note >}}

Don't forget to clean up the debugging Pod when you're finished with it:

```shell
kubectl delete pod myapp myapp-debug
```

### Copying a Pod while changing its command

Sometimes it's useful to change the command for a container, for example to
add a debugging flag or because the application is crashing.

To simulate a crashing application, use `kubectl run` to create a container
that immediately exits:

```shell
kubectl run --image=busybox:1.28 myapp -- false
```

You can see using `kubectl describe pod myapp` that this container is crashing:

```
Containers:
  myapp:
    Image:      busybox
    ...
    Args:
      false
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
```

You can use `kubectl debug` to create a copy of this Pod with the command
changed to an interactive shell:

```shell
kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh
```

```
If you don't see a command prompt, try pressing enter.
/ #
```

Now you have an interactive shell that you can use to perform tasks like
checking filesystem paths or running the container command manually.

{{< note >}}
* To change the command of a specific container you must
  specify its name using `--container` or `kubectl debug` will instead
  create a new container to run the command you specified.
* The `-i` flag causes `kubectl debug` to attach to the container by default.
  You can prevent this by specifying `--attach=false`. If your session becomes
  disconnected you can reattach using `kubectl attach`.
{{< /note >}}

Don't forget to clean up the debugging Pod when you're finished with it:

```shell
kubectl delete pod myapp myapp-debug
```

### Copying a Pod while changing container images

In some situations you may want to change a misbehaving Pod from its normal
production container images to an image containing a debugging build or
additional utilities.

As an example, create a Pod using `kubectl run`:

```shell
kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
```

Now use `kubectl debug` to make a copy and change its container image
to `ubuntu`:

```shell
kubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu
```

The syntax of `--set-image` uses the same `container_name=image` syntax as
`kubectl set image`. `*=ubuntu` means change the image of all containers
to `ubuntu`.
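
Because Pods created by `kubectl run` have a single container named after the
Pod, an equivalent, more targeted form of the same command is sketched below
(using the documented `container_name=image` form):

```shell
# Pods created by `kubectl run` have a single container named after
# the Pod, so this targets only that container rather than using *.
kubectl debug myapp --copy-to=myapp-debug --set-image=myapp=ubuntu
```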

Don't forget to clean up the debugging Pod when you're finished with it:

```shell
kubectl delete pod myapp myapp-debug
```

## Debugging via a shell on the node {#node-shell-session}

If none of these approaches work, you can find the Node on which the Pod is
running and create a privileged Pod running in the host namespaces. To create
an interactive shell on a node using `kubectl debug`, run:

```shell
kubectl debug node/mynode -it --image=ubuntu
```

```
Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode.
If you don't see a command prompt, try pressing enter.
root@ek8s:/#
```

When creating a debugging session on a node, keep in mind that:

* `kubectl debug` automatically generates the name of the new Pod based on
  the name of the Node.
* The container runs in the host IPC, Network, and PID namespaces.
* The root filesystem of the Node will be mounted at `/host` (see the example
  after this list).
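
For example, you can inspect node-level files through that mount; this is only
a sketch, assuming a Linux node that provides `/etc/os-release`:

```shell
# Inside the node debugging session: node files are visible under
# the /host prefix, e.g. the OS identification file on many distros.
cat /host/etc/os-release
```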

Don't forget to clean up the debugging Pod when you're finished with it:

```shell
kubectl delete pod node-debugger-mynode-pdx84
```

## {{% heading "whatsnext" %}}

Learn about additional debugging tools, including:

* [Logging](/docs/concepts/cluster-administration/logging/)
* [Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* [Getting into containers via `exec`](/docs/tasks/debug-application-cluster/get-shell-running-container/)
* [Connecting to containers via proxies](/docs/tasks/extend-kubernetes/http-proxy-access-api/)
* [Connecting to containers via port forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* [Inspect Kubernetes node with crictl](/docs/tasks/debug-application-cluster/crictl/)

@@ -4,6 +4,7 @@ reviewers:
- bowei
content_type: concept
title: Debug Services
weight: 20
---

<!-- overview -->

@@ -441,7 +442,7 @@ they are running fine and not crashing.

The "RESTARTS" column says that these pods are not crashing frequently or being
restarted. Frequent restarts could lead to intermittent connectivity issues.
If the restart count is high, read more about how to [debug pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods).
If the restart count is high, read more about how to [debug pods](/docs/tasks/debug/debug-application/debug-pods).

Inside the Kubernetes system is a control loop which evaluates the selector of
every Service and saves the results into a corresponding Endpoints object.

@@ -727,13 +728,13 @@ Service is not working. Please let us know what is going on, so we can help
investigate!

Contact us on
[Slack](/docs/tasks/debug-application-cluster/troubleshooting/#slack) or
[Slack](/docs/tasks/debug/overview/#slack) or
[Forum](https://discuss.kubernetes.io) or
[GitHub](https://github.com/kubernetes/kubernetes).

## {{% heading "whatsnext" %}}

Visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/)
Visit the [troubleshooting overview document](/docs/tasks/debug/overview/)
for more information.

@@ -9,6 +9,7 @@ reviewers:
- smarterclayton
title: Debug a StatefulSet
content_type: task
weight: 30
---

<!-- overview -->

@@ -34,9 +35,9 @@ If you find that any Pods listed are in `Unknown` or `Terminating` state for an
refer to the [Deleting StatefulSet Pods](/docs/tasks/run-application/delete-stateful-set/) task for
instructions on how to deal with them.
You can debug individual Pods in a StatefulSet using the
[Debugging Pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) guide.
[Debugging Pods](/docs/tasks/debug/debug-application/debug-pods/) guide.

## {{% heading "whatsnext" %}}

Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/).
Learn more about [debugging an init-container](/docs/tasks/debug/debug-application/debug-init-containers/).

@@ -0,0 +1,316 @@
---
reviewers:
- davidopp
title: "Troubleshooting Clusters"
description: Debugging common cluster issues.
weight: 20
no_list: true
---

<!-- overview -->

This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](/docs/tasks/debug/debug-application/) for tips on application debugging.
You may also visit the [troubleshooting overview document](/docs/tasks/debug/) for more information.

<!-- body -->

## Listing your cluster

The first thing to debug in your cluster is if your nodes are all registered correctly.

Run the following command:

```shell
kubectl get nodes
```

And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.

To get detailed information about the overall health of your cluster, you can run:

```shell
kubectl cluster-info dump
```
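
The dump is large, so writing it to files can make it easier to search. As a
sketch (`--output-directory` is a flag of `kubectl cluster-info dump`; the path
here is only an example):

```shell
# Write the dump to per-namespace files instead of stdout.
kubectl cluster-info dump --output-directory=/tmp/cluster-state
```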

### Example: debugging a down/unreachable node

Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).

```shell
kubectl get nodes
```

```none
NAME                   STATUS     ROLES    AGE   VERSION
kube-worker-1          NotReady   <none>   1h    v1.23.3
kubernetes-node-bols   Ready      <none>   1h    v1.23.3
kubernetes-node-st6x   Ready      <none>   1h    v1.23.3
kubernetes-node-unaj   Ready      <none>   1h    v1.23.3
```

```shell
kubectl describe node kube-worker-1
```

```none
Name:               kube-worker-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kube-worker-1
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 17 Feb 2022 16:46:30 -0500
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kube-worker-1
  AcquireTime:     <unset>
  RenewTime:       Thu, 17 Feb 2022 17:13:09 -0500
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----                 ------    -----------------                 ------------------                ------              -------
  NetworkUnavailable   False     Thu, 17 Feb 2022 17:09:13 -0500   Thu, 17 Feb 2022 17:09:13 -0500   WeaveIsUp           Weave pod has set this
  MemoryPressure       Unknown   Thu, 17 Feb 2022 17:12:40 -0500   Thu, 17 Feb 2022 17:13:52 -0500   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure         Unknown   Thu, 17 Feb 2022 17:12:40 -0500   Thu, 17 Feb 2022 17:13:52 -0500   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure          Unknown   Thu, 17 Feb 2022 17:12:40 -0500   Thu, 17 Feb 2022 17:13:52 -0500   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready                Unknown   Thu, 17 Feb 2022 17:12:40 -0500   Thu, 17 Feb 2022 17:13:52 -0500   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  192.168.0.113
  Hostname:    kube-worker-1
Capacity:
  cpu:                2
  ephemeral-storage:  15372232Ki
  hugepages-2Mi:      0
  memory:             2025188Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  14167048988
  hugepages-2Mi:      0
  memory:             1922788Ki
  pods:               110
System Info:
  Machine ID:                 9384e2927f544209b5d7b67474bbf92b
  System UUID:                aa829ca9-73d7-064d-9019-df07404ad448
  Boot ID:                    5a295a03-aaca-4340-af20-1327fa5dab5c
  Kernel Version:             5.13.0-28-generic
  OS Image:                   Ubuntu 21.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.9
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
Non-terminated Pods:          (4 in total)
  Namespace     Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                                ------------  ----------  ---------------  -------------  ---
  default       nginx-deployment-67d4bdd6f5-cx2nz   500m (25%)    500m (25%)  128Mi (6%)       128Mi (6%)     23m
  default       nginx-deployment-67d4bdd6f5-w6kd7   500m (25%)    500m (25%)  128Mi (6%)       128Mi (6%)     23m
  kube-system   kube-proxy-dnxbz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
  kube-system   weave-net-gjxxp                     100m (5%)     0 (0%)      200Mi (10%)      0 (0%)         28m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  1 (50%)
  memory             456Mi (24%)  256Mi (13%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
...
```

```shell
kubectl get node kube-worker-1 -o yaml
```

```yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2022-02-17T21:46:30Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kube-worker-1
    kubernetes.io/os: linux
  name: kube-worker-1
  resourceVersion: "4026"
  uid: 98efe7cb-2978-4a0b-842a-1a7bf12c05f8
spec: {}
status:
  addresses:
  - address: 192.168.0.113
    type: InternalIP
  - address: kube-worker-1
    type: Hostname
  allocatable:
    cpu: "2"
    ephemeral-storage: "14167048988"
    hugepages-2Mi: "0"
    memory: 1922788Ki
    pods: "110"
  capacity:
    cpu: "2"
    ephemeral-storage: 15372232Ki
    hugepages-2Mi: "0"
    memory: 2025188Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: "2022-02-17T22:20:32Z"
    lastTransitionTime: "2022-02-17T22:20:32Z"
    message: Weave pod has set this
    reason: WeaveIsUp
    status: "False"
    type: NetworkUnavailable
  - lastHeartbeatTime: "2022-02-17T22:20:15Z"
    lastTransitionTime: "2022-02-17T22:13:25Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2022-02-17T22:20:15Z"
    lastTransitionTime: "2022-02-17T22:13:25Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2022-02-17T22:20:15Z"
    lastTransitionTime: "2022-02-17T22:13:25Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2022-02-17T22:20:15Z"
    lastTransitionTime: "2022-02-17T22:15:15Z"
    message: kubelet is posting ready status. AppArmor enabled
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  nodeInfo:
    architecture: amd64
    bootID: 22333234-7a6b-44d4-9ce1-67e31dc7e369
    containerRuntimeVersion: containerd://1.5.9
    kernelVersion: 5.13.0-28-generic
    kubeProxyVersion: v1.23.3
    kubeletVersion: v1.23.3
    machineID: 9384e2927f544209b5d7b67474bbf92b
    operatingSystem: linux
    osImage: Ubuntu 21.10
    systemUUID: aa829ca9-73d7-064d-9019-df07404ad448
```

## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. On systemd-based systems, you may need to use `journalctl` instead of examining
log files (see the example after the lists below).

### Control Plane nodes

* `/var/log/kube-apiserver.log` - API Server, responsible for serving the API
* `/var/log/kube-scheduler.log` - Scheduler, responsible for making scheduling decisions
* `/var/log/kube-controller-manager.log` - a component that runs most Kubernetes built-in {{<glossary_tooltip text="controllers" term_id="controller">}}, with the notable exception of scheduling (the kube-scheduler handles scheduling).

### Worker Nodes

* `/var/log/kubelet.log` - logs from the kubelet, responsible for running containers on the node
* `/var/log/kube-proxy.log` - logs from `kube-proxy`, which is responsible for directing traffic to Service endpoints
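
For example, on a systemd-based node the kubelet usually runs as a systemd
unit, so something like the following shows its recent logs; the unit name
`kubelet` is an assumption that depends on how the node was provisioned:

```shell
# Show the last hour of kubelet logs via journald on a systemd node.
journalctl -u kubelet --since "1 hour ago"
```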

## Cluster failure modes

This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.

### Contributing causes

- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, for example misconfigured Kubernetes software or application software

### Specific scenarios

- API server VM shutdown or apiserver crashing
  - Results
    - unable to stop, update, or start new pods, services, or replication controllers
    - existing pods and services should continue to work normally, unless they depend on the Kubernetes API
- API server backing storage lost
  - Results
    - the kube-apiserver component fails to start successfully and become healthy
    - kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying
    - manual recovery or recreation of apiserver state necessary before the apiserver can be restarted
- Supporting services (node controller, replication controller manager, scheduler, etc.) VM shutdown or crashes
  - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver
  - in future, these will be replicated as well and may not be co-located
  - they do not have their own persistent state
- Individual node (VM or physical machine) shuts down
  - Results
    - pods on that Node stop running
- Network partition
  - Results
    - partition A thinks the nodes in partition B are down; partition B thinks the apiserver is down. (Assuming the master VM ends up in partition A.)
- Kubelet software fault
  - Results
    - crashing kubelet cannot start new pods on the node
    - kubelet might or might not delete the pods
    - node marked unhealthy
    - replication controllers start new pods elsewhere
- Cluster operator error
  - Results
    - loss of pods, services, etc.
    - loss of apiserver backing store
    - users unable to read the API
    - etc.

### Mitigations

- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
  - Mitigates: Apiserver VM shutdown or apiserver crashing
  - Mitigates: Supporting services VM shutdown or crashes

- Action: Use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
  - Mitigates: Apiserver backing storage lost

- Action: Use a [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) configuration
  - Mitigates: Control plane node shutdown or control plane components (scheduler, API server, controller-manager) crashing
    - Will tolerate one or more simultaneous node or component failures
  - Mitigates: API server backing storage (i.e., etcd's data directory) lost
    - Assumes HA (highly-available) etcd configuration

- Action: Snapshot apiserver PDs/EBS volumes periodically
  - Mitigates: Apiserver backing storage lost
  - Mitigates: Some cases of operator error
  - Mitigates: Some cases of Kubernetes software fault

- Action: Use replication controllers and services in front of pods
  - Mitigates: Node shutdown
  - Mitigates: Kubelet software fault

- Action: Design applications (containers) to tolerate unexpected restarts
  - Mitigates: Node shutdown
  - Mitigates: Kubelet software fault


## {{% heading "whatsnext" %}}

* Learn about the metrics available in the [Resource Metrics Pipeline](resource-metrics-pipeline)
* Discover additional tools for [monitoring resource usage](resource-usage-monitoring)
* Use Node Problem Detector to [monitor node health](monitor-node-health)
* Use `crictl` to [debug Kubernetes nodes](crictl)
* Get more information about [Kubernetes auditing](audit)
* Use `telepresence` to [develop and debug services locally](local-debugging)

@@ -5,6 +5,7 @@ reviewers:
- mrunalp
title: Debugging Kubernetes nodes with crictl
content_type: task
weight: 30
---

@@ -1,5 +1,5 @@
---
title: Developing and debugging services locally
title: Developing and debugging services locally using telepresence
content_type: task
---

@@ -58,4 +58,4 @@ Telepresence installs a traffic-agent sidecar next to your existing application'

If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine.

For further reading, visit the [Telepresence website](https://www.telepresence.io).
For further reading, visit the [Telepresence website](https://www.telepresence.io).

@@ -4,6 +4,7 @@ content_type: task
reviewers:
- Random-Liu
- dchen1107
weight: 20
---

<!-- overview -->

@@ -4,6 +4,7 @@ reviewers:
- piosz
title: Resource metrics pipeline
content_type: concept
weight: 15
---

<!-- overview -->

@@ -3,6 +3,7 @@ reviewers:
- mikedanese
content_type: concept
title: Tools for Monitoring Resources
weight: 15
---

<!-- overview -->

@@ -58,4 +59,14 @@ then exposes them to Kubernetes via an adapter by implementing either the
[Prometheus](https://prometheus.io), a CNCF project, can natively monitor Kubernetes, nodes, and Prometheus itself.
Full metrics pipeline projects that are not part of the CNCF are outside the scope of Kubernetes documentation.

## {{% heading "whatsnext" %}}

Learn about additional debugging tools, including:

* [Logging](/docs/concepts/cluster-administration/logging/)
* [Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* [Getting into containers via `exec`](/docs/tasks/debug-application-cluster/applications/get-shell-running-container/)
* [Connecting to containers via proxies](/docs/tasks/extend-kubernetes/http-proxy-access-api/)
* [Connecting to containers via port forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* [Inspect Kubernetes node with crictl](/docs/tasks/debug-application-cluster/monitoring/crictl/)

@@ -11,9 +11,6 @@ min-kubernetes-server-version: 1.7

<!-- overview -->

{{% dockershim-removal %}}


Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.

Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
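
A minimal sketch of the `hostAliases` field follows; the Pod name, IP, and
hostnames here are illustrative, not taken from the source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod           # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"               # the address the extra hostnames resolve to
    hostnames:
    - "foo.local"                 # entries the kubelet writes into /etc/hosts
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    command: ["cat", "/etc/hosts"]  # print the kubelet-managed hosts file
```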

@@ -69,7 +69,7 @@ The following methods exist for installing kubectl on Windows:
Or use this for detailed view of version:

```cmd
kubectl version --client --output=yaml
kubectl version --client --output=yaml
```

{{< note >}}

@@ -9,4 +9,3 @@ where `<lang>` is the two character representation of a language. For example:
```
go test k8s.io/website/content/en/examples
```

@@ -27,4 +27,3 @@ spec:
    port: 3306
  selector:
    app: mysql

@@ -7,7 +7,7 @@ type: docs

<!-- overview -->

The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.
The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support.

Kubernetes versions are expressed as **x.y.z**,
where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.

@@ -24,4 +24,4 @@ More information in the [version skew policy](/releases/version-skew-policy/) do

Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release!

## Helpful Resources
## Helpful Resources

@@ -51,7 +51,7 @@ For example, to download version {{< param "fullversion" >}} on Linux:
Validate the kubectl binary against the checksum file:

```bash
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
```

If valid, the output is:

@@ -205,7 +205,7 @@ Below are the procedures to set up shell autocompletion
Validate the kubectl-convert binary against the checksum file:

```bash
echo "$(<kubectl-convert.sha256) kubectl-convert" | sha256sum --check
echo "$(cat kubectl-convert.sha256) kubectl-convert" | sha256sum --check
```

If valid, the output is:

@@ -106,7 +106,7 @@ creates Pods on all Nodes.

### Scheduled by the _default scheduler_

{{< feature-state for_kubernetes_version="1.17" state="stable" >}}
{{< feature-state for_k8s_version="1.17" state="stable" >}}

A DaemonSet ensures that all eligible Nodes run a copy of the
Pod. Normally, the Node that a Pod runs on is chosen by the Kubernetes _scheduler_.

@@ -29,7 +29,7 @@ content_type: concept
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN-based CNI controller plugin that provides cloud-native-based service function chaining (SFC), multiple OVN overlay networks, dynamic subnet creation, dynamic virtual network creation, VLAN provider networks, and direct provider networks, and that can be combined with other multi-network plugins.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://romana.io) is an L3 networking solution for Pod networks that also supports the [NetworkPolicy API](/ja/docs/concepts/services-networking/network-policies/). Details for installing it as a kubeadm add-on are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Romana](https://github.com/romana/romana) is an L3 networking solution for Pod networks that also supports the [NetworkPolicy API](/ja/docs/concepts/services-networking/network-policies/). Details for installing it as a kubeadm add-on are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, works on both sides of a network partition, and does not require an external database.

## Service discovery

@@ -115,7 +115,7 @@ addressing, and it can be used in combination with other CNI plugins.

### CNI-Genie from Huawei

[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/ja/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/ja/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](https://github.com/romana/romana), [Weave-net](https://www.weave.works/products/weave-net/).

CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.

@@ -273,7 +273,7 @@ at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).

### Romana

[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
[Romana](https://github.com/romana/romana) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.

### Weave Net from Weaveworks

@@ -80,7 +80,7 @@ matching a [node selector](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)

### Scheduled by the default scheduler

{{< feature-state state="stable" for-kubernetes-version="1.17" >}}
{{< feature-state for_k8s_version="1.17" state="stable" >}}

A DaemonSet ensures that every available Node runs a single copy of the Pod. Normally, the Node that a Pod runs on is chosen by the Kubernetes scheduler. However, DaemonSet Pods are instead created and scheduled by the DaemonSet controller.
This leads to the following problems:

@@ -0,0 +1,62 @@
---
title: Check whether dockershim deprecation affects you
content_type: task
weight: 20
---

<!-- overview -->
The `dockershim` component of Kubernetes allows the use of Docker as a Kubernetes {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.

Kubernetes' built-in `dockershim` component was deprecated in release v1.20.

This page explains how your cluster could be using Docker as a container runtime, provides details on the role that `dockershim` plays when in use, and shows steps you can take to check whether any workloads could be affected by `dockershim` removal.

## Finding if your app has a dependency on Docker {#find-docker-dependencies}

Even if you use Docker to build your application containers, you can run these containers on any container runtime. Such use of Docker does not count as a dependency on Docker as a container runtime.

When an alternative container runtime is used, executing Docker commands may either not work or yield unexpected output.

This is how you can find whether you have a dependency on Docker:

1. Make sure no privileged Pods execute Docker commands (like `docker ps`), restart the Docker service (with commands such as `systemctl restart docker.service`), or modify Docker-specific files such as `/etc/docker/daemon.json`.
1. Check for any private registry or image mirror settings in the Docker configuration file (like `/etc/docker/daemon.json`). Those typically need to be reconfigured for another container runtime.
1. Check that scripts and apps running on nodes outside of your Kubernetes infrastructure do not execute Docker commands, for example:
   - humans connecting to nodes over SSH to troubleshoot;
   - node startup scripts;
   - monitoring and security agents installed directly on nodes.
1. Check for third-party tools that perform privileged operations like those above. See [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents) for more information.
1. Make sure there are no indirect dependencies on dockershim behavior.
   This is an edge case and unlikely to affect your application. Some tooling may be configured to react to Docker-specific behaviors, for example, to raise an alert on specific metrics or to search for a specific log message as part of troubleshooting instructions. If you have such tooling configured, test its behavior on a test cluster before migration.

## Dependency on Docker explained {#role-of-dockershim}

A [container runtime](/ja/docs/concepts/containers/#container-runtimes) is software that can execute the containers that make up a Kubernetes Pod.

Kubernetes is responsible for orchestration and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} uses the container runtime interface as an abstraction so that you can use any compatible container runtime.
In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker.
Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes.
The CRI was designed to allow this kind of flexibility, and the kubelet began supporting CRI.
However, because Docker existed before the CRI specification was invented, the Kubernetes project created the adapter component `dockershim`.

The dockershim adapter allows the kubelet to interact with Docker as if Docker were a CRI-compatible runtime.
It is introduced in the blog post [Kubernetes Containerd integration goes GA](/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/).

![Comparison of dockershim and CRI container runtimes]()

Switching to containerd as a container runtime eliminates the middleman.
All the same containers can be run by container runtimes like containerd as before.
But now, since containers schedule directly with the container runtime, they are not visible to Docker.
So any Docker tooling or fancy UI you might have used before to check on these containers is no longer available.
You cannot get container information using the `docker ps` or `docker inspect` commands.
As you cannot list containers, you also cannot get logs, stop containers, or execute something inside a container using `docker exec`.

{{< note >}}

If you're running workloads via Kubernetes, the best way to stop a container is through the Kubernetes API rather than directly through the container runtime (this advice applies to all container runtimes, not only Docker).

{{< /note >}}

You can still pull images or build them using the `docker build` command.
But images built or pulled by Docker are not visible to the container runtime and Kubernetes.
They need to be pushed to some registry to allow them to be used by Kubernetes.

@@ -0,0 +1,114 @@
---
title: Troubleshooting Clusters
content_type: concept
---

<!-- overview -->

This document is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the problem you are experiencing.
See the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging.
You may also visit the [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.

<!-- body -->

## Listing your cluster

The first thing to debug in your cluster is if your nodes are all registered correctly.

```shell
kubectl get nodes
```

And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.

To get detailed information about the overall health of your cluster, you can run:

```shell
kubectl cluster-info dump
```
## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines.
Here are the locations of the relevant log files.
(Note that on systemd-based systems, you may need to use `journalctl` instead.)

### Master nodes

* `/var/log/kube-apiserver.log` - logs from the API server, responsible for serving the API
* `/var/log/kube-scheduler.log` - logs from the scheduler, responsible for making scheduling decisions
* `/var/log/kube-controller-manager.log` - logs from the controller that manages replication controllers

### Worker nodes

* `/var/log/kubelet.log` - logs from the kubelet, responsible for running containers on the node
* `/var/log/kube-proxy.log` - logs from kube-proxy, responsible for service load balancing

## A general overview of cluster failure modes

This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.

### Root causes

- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, such as misconfigured Kubernetes software or application software

### Specific scenarios

- apiserver VM shutdown or apiserver crashing
  - unable to stop, update, or start new Pods, services, or replication controllers
  - existing Pods and services should continue to work normally, unless they depend on the Kubernetes API
- apiserver backing storage lost
  - the apiserver won't come up
  - kubelets will not be able to reach it but will continue to run the same Pods and provide the same service proxying
  - manual recovery or recreation of apiserver state is necessary before the apiserver can be restarted
- Supporting services (node controller, replication controller manager, scheduler, etc.) VM shutdown or crashes
  - currently those are colocated with the apiserver, and their unavailability has similar consequences as the apiserver
  - in future, these will be replicated as well and may not be co-located
  - they do not have their own persistent state
- Individual node (VM or physical machine) shuts down
  - Pods on that node stop running
- Network partition
  - partition A thinks the nodes in partition B are down; partition B thinks the apiserver is down. (Assuming the master VM ends up in partition A.)
- Kubelet software fault
  - a crashing kubelet cannot start new Pods on the node
  - the kubelet might or might not delete the Pods
  - the node is marked unhealthy
  - replication controllers start new Pods elsewhere
- Cluster operator error
  - loss of Pods, Services, etc.
  - loss of apiserver backing store
  - users unable to read the API
  - etc.

### Mitigations

- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
  - Mitigates: apiserver VM shutdown or apiserver crashing
  - Mitigates: supporting services VM shutdown or crashes

- Action: Use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
  - Mitigates: apiserver backing storage lost

- Action: Use a [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) configuration
  - Mitigates: control plane node shutdown or control plane components (scheduler, API server, controller manager) crashing
    - can tolerate one or more simultaneous failures of a node or component
  - Mitigates: API server backing storage (i.e., etcd's data directory) lost
    - assumes an HA (highly-available) etcd configuration

- Action: Snapshot apiserver PDs/EBS volumes periodically
  - Mitigates: apiserver backing storage lost
  - Mitigates: some cases of operator error
  - Mitigates: some cases of Kubernetes software fault

- Action: Use replication controllers and Services in front of Pods
  - Mitigates: node shutdown
  - Mitigates: kubelet software fault

- Action: Design applications (containers) to tolerate unexpected restarts
  - Mitigates: node shutdown
  - Mitigates: kubelet software fault

@@ -17,9 +17,9 @@ weight: 20
uses a [label selector](/ko/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
(for example, spreading Pods across nodes so as not to place Pods on a node with insufficient resources),
but there are some circumstances where you may want to control which node a Pod deploys to.
For example, to ensure that a Pod ends up on a machine with an SSD attached to it, or to co-locate Pods from
two different services that communicate a lot in the same availability zone.
There are some circumstances where you may want to control which node a Pod is deployed to.
For example, to ensure that a Pod is deployed to a machine with an SSD attached to it, or to place Pods from
two different services that communicate a lot in the same availability zone.


<!-- body -->

@@ -0,0 +1,19 @@
---
title: API Group
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
  A set of related paths in the Kubernetes API.

aka:
tags:
- fundamental
- architecture
---
A set of related paths in the Kubernetes API.

<!--more-->
You can enable or disable each API group by changing the configuration of your API server. You can also disable or enable paths for specific resources. API groups make it easier to extend the Kubernetes API. API groups are specified in a REST path and in the `apiVersion` field of a serialized object.

* Read [API Group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) for more information.

@@ -0,0 +1,19 @@
---
title: WG (Working Group)
id: wg
date: 2018-04-12
full_link: https://github.com/kubernetes/community/blob/master/sig-list.md#master-working-group-list
short_description: >
  Facilitates the discussion and/or implementation of a short-lived, narrow, or decoupled project for a committee, involving one or more SIGs (special interest groups).

aka:
tags:
- community
---
Facilitates the discussion and/or implementation of a short-lived, narrow, or decoupled project for a committee, involving one or more {{< glossary_tooltip text="SIGs" term_id="sig" >}} (special interest groups).

<!--more-->

Working Groups (WGs) are a way of organizing people to accomplish a task.

For more information, see the [kubernetes/community](https://github.com/kubernetes/community) repo and the current list of [SIGs and working groups](https://github.com/kubernetes/community/blob/master/sig-list.md).

@@ -84,9 +84,11 @@ Take maven project as example, adding the following dependencies into your depen
<!--
Then we can make use of the provided builder libraries to write your own controller.
For example, the following one is a simple controller prints out node information
on watch notification, see complete example [here](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/ControllerExample.java):
on watch notification, see complete example [here](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-13/src/main/java/io/kubernetes/client/examples/ControllerExample.java):
-->
Then we can use the provided builder libraries to write our own controller. For example, the following is a simple controller that prints out node information on watch notifications; see the complete example:
Then we can use the provided builder libraries to write our own controller. For example, the following is a simple controller that prints out node information on watch notifications;
see the complete example
[here](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-13/src/main/java/io/kubernetes/client/examples/ControllerExample.java):
```java
...
Reconciler reconciler = new Reconciler() {

@@ -3,6 +3,7 @@ layout: blog
title: "Kubernetes 1.17: Stability"
date: 2019-12-09T13:00:00-08:00
slug: kubernetes-1-17-release-announcement
evergreen: true
---

<!-- ---

@@ -10,6 +11,7 @@ layout: blog
title: "Kubernetes 1.17: Stability"
date: 2019-12-09T13:00:00-08:00
slug: kubernetes-1-17-release-announcement
evergreen: true
--- -->
**Author:** [Kubernetes 1.17 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md)

@@ -67,14 +69,14 @@ Standard labels are used by Kubernetes components to support some features. For
The labels are reaching general availability in this release. Kubernetes components have been updated to populate the GA and beta labels and to react to both. However, if you are using the beta labels in your pod specs for features such as node affinity, or in your custom controllers, we recommend that you start migrating them to the new GA labels. You can find the documentation for the new labels here:
-->

- [Instance type](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
- [Region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
- [Zone](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
- [Instance type](/zh/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type)
- [Region](/zh/docs/reference/labels-annotations-taints/#topologykubernetesioregion)
- [Zone](/zh/docs/reference/labels-annotations-taints/#topologykubernetesiozone)

<!--
- [node.kubernetes.io/instance-type](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
- [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
- [topology.kubernetes.io/zone](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
- [node.kubernetes.io/instance-type](/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type)
- [topology.kubernetes.io/region](/docs/reference/labels-annotations-taints/#topologykubernetesioregion)
- [topology.kubernetes.io/zone](/docs/reference/labels-annotations-taints/#topologykubernetesiozone)
-->
## Volume snapshots enter public beta
<!--

@@ -3,6 +3,7 @@ layout: blog
title: 'Kubernetes 1.18: Fit & Finish'
date: 2020-03-25
slug: kubernetes-1-18-release-announcement
evergreen: true
---

<!--

@@ -3,6 +3,7 @@ layout: blog
title: "Dockershim Deprecation FAQ"
date: 2020-12-02
slug: dockershim-faq
aliases: [ '/dockershim' ]
---
<!--
layout: blog

@@ -12,17 +13,27 @@ slug: dockershim-faq
aliases: [ '/dockershim' ]
-->

<!--
_**Update**: There is a [newer version](/blog/2022/02/17/dockershim-faq/) of this article available._
-->
_**Update**: A [newer version](/zh/blog/2022/02/17/dockershim-faq/) of this article is available._

<!--
This document goes over some frequently asked questions regarding the Dockershim
deprecation announced as a part of the Kubernetes v1.20 release. For more detail
on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
what that means, check out the blog post
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).

Also, you can read [check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to check whether it does.
-->
This article reviews some frequently asked questions raised since the deprecation of Dockershim was announced in Kubernetes v1.20.
For details on what the deprecation of Docker as a container runtime for Kubernetes kubelets means, please refer to the blog post
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).

In addition, you can read [check whether dockershim deprecation affects you](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
to check whether it does.

<!--
### Why is dockershim being deprecated?
-->

@ -13,6 +13,15 @@ slug: dont-panic-kubernetes-and-docker

**Authors:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas

_Update: Kubernetes support for Docker via `dockershim` is now deprecated.
For more information, read the [deprecation notice](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation).
You can also discuss the deprecation via a dedicated [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)._

Kubernetes is [deprecating
Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)
@ -204,7 +213,7 @@ Kubernetes has a lot of changing functionality and no one is a 100% expert.
We hope this has answered most of your questions and eased some anxieties! ❤️

Looking for more answers? Check out our accompanying [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) _(updated February 2022)_.
@ -3,6 +3,7 @@ layout: blog

title: 'Kubernetes 1.20: The Raddest Release'
date: 2020-12-08
slug: kubernetes-1-20-release-announcement
evergreen: true
---
@ -10,6 +11,7 @@ layout: blog

**Author:** [Kubernetes 1.20 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.20/release_team.md)
@ -0,0 +1,133 @@

---
layout: blog
title: "Announcing the 2021 Steering Committee Election Results"
date: 2021-11-08
slug: steering-committee-results-2021
---

**Author**: Kaslin Fields

The [2021 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2021) is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2021. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.

This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).

## Results

Congratulations to the elected committee members whose two year terms begin immediately (listed in alphabetical order by GitHub handle):

* **Christoph Blecker ([@cblecker](https://github.com/cblecker)), Red Hat**
* **Stephen Augustus ([@justaugustus](https://github.com/justaugustus)), Cisco**
* **Paris Pittman ([@parispittman](https://github.com/parispittman)), Apple**
* **Tim Pepper ([@tpepper](https://github.com/tpepper)), VMware**

They join continuing members:

* **Davanum Srinivas ([@dims](https://github.com/dims)), VMware**
* **Jordan Liggitt ([@liggitt](https://github.com/liggitt)), Google**
* **Bob Killen ([@mrbobbytables](https://github.com/mrbobbytables)), Google**

Paris Pittman and Christoph Blecker are returning Steering Committee Members.

## Big Thanks

Thank you and congratulations on a successful election to this round’s election officers:

* Alison Dowdney ([@alisondy](https://github.com/alisondy))
* Noah Kantrowitz ([@coderanger](https://github.com/coderanger))
* Josh Berkus ([@jberkus](https://github.com/jberkus))

Special thanks to Arnaud Meukam ([@ameukam](https://github.com/ameukam)), k8s-infra liaison, who enabled our voting software on community-owned infrastructure.

Thanks to the Emeritus Steering Committee Members. Your prior service is appreciated by the community:

* Derek Carr ([@derekwaynecarr](https://github.com/derekwaynecarr))
* Nikhita Raghunath ([@nikhita](https://github.com/nikhita))

And thank you to all the candidates who came forward to run for election.

## Get Involved with the Steering Committee

This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee [backlog items](https://github.com/kubernetes/steering/projects/1) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They have an open meeting on [the first Monday at 9:30am PT of every month](https://github.com/kubernetes/steering) and regularly attend Meet Our Contributors. They can also be contacted at their public mailing list steering@kubernetes.io.

You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).

---

_This post was written by the [Upstream Marketing Working Group](https://github.com/kubernetes/community/tree/master/communication/marketing-team#contributor-marketing). If you want to write stories about the Kubernetes community, learn more about us._
@ -18,8 +18,8 @@ slug: kubernetes-1-23-statefulset-pvc-auto-deletion

Kubernetes v1.23 introduced a new, alpha-level policy for
[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
StatefulSet spec template, for cases when they should be deleted automatically when the StatefulSet
is deleted or pods in the StatefulSet are scaled down.
@ -165,7 +165,7 @@ This policy forms a matrix with four cases. I’ll walk through and give an exam

Visit the
[documentation](/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
see all the details.
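
As a concrete illustration, the policy is set through the `persistentVolumeClaimRetentionPolicy` field of the StatefulSet spec. A minimal sketch (names, image, and sizes are illustrative, and the alpha `StatefulSetAutoDeletePVC` feature gate must be enabled):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  # Delete PVCs when the StatefulSet is deleted, but keep them
  # around when it is merely scaled down.
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Retain
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```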
@ -0,0 +1,199 @@

---
layout: blog
title: "Kubernetes is Moving on From Dockershim: Commitments and Next Steps"
date: 2022-01-07
slug: kubernetes-is-moving-on-from-dockershim
---

**Authors:** Sergey Kanzhelev (Google), Jim Angel (Google), Davanum Srinivas (VMware), Shannon Kularathna (Google), Chris Short (AWS), Dawn Chen (Google)

Kubernetes is removing dockershim in the upcoming v1.24 release. We're excited
to reaffirm our community values by supporting open source container runtimes,
enabling a smaller kubelet, and increasing engineering velocity for teams using
Kubernetes. If you [use Docker Engine as a container runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)
for your Kubernetes cluster, get ready to migrate in 1.24! To check if you're
affected, refer to [Check whether dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/).
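Migrating typically comes down to pointing each kubelet at a CRI-compatible runtime endpoint instead of Docker Engine. As a rough sketch for kubeadm-managed nodes switching to containerd (the socket path is containerd's default; adjust it for your setup):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  # Point the kubelet at containerd's CRI socket instead of dockershim.
  criSocket: unix:///run/containerd/containerd.sock
```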
|

## Why we’re moving away from dockershim

Docker was the first container runtime used by Kubernetes. This is one of the
reasons why Docker is so familiar to many Kubernetes users and enthusiasts.
Docker support was hardcoded into Kubernetes – a component the project refers to
as dockershim.

As containerization became an industry standard, the Kubernetes project added support
for additional runtimes. This culminated in the implementation of the
container runtime interface (CRI), letting system components (like the kubelet)
talk to container runtimes in a standardized way. As a result, dockershim became
an anomaly in the Kubernetes project.

Dependencies on Docker and dockershim have crept into various tools
and projects in the CNCF ecosystem, resulting in fragile code.

By removing the
dockershim CRI, we're embracing the first value of CNCF: "[Fast is better than
slow](https://github.com/cncf/foundation/blob/master/charter.md#3-values)".
Stay tuned for future communications on the topic!
## Deprecation timeline

We [formally announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/) the dockershim deprecation in December 2020. Full removal is targeted
for Kubernetes 1.24, in April 2022. This timeline
aligns with our [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior),
which states that deprecated behaviors must function for at least 1 year
after their announced deprecation.

We'll support Kubernetes version 1.23, which includes
dockershim, for another year in the Kubernetes project. For managed
Kubernetes providers, vendor support is likely to last even longer, but this is
dependent on the companies themselves. Regardless, we're confident all cluster operators will have
time to migrate. If you have more questions about the dockershim removal, refer
to the [Dockershim Deprecation FAQ](/dockershim).
We asked you whether you feel prepared for the migration from dockershim in this
survey: [Are you ready for Dockershim removal](/blog/2021/11/12/are-you-ready-for-dockershim-removal/).
We had over 600 responses. To everybody who took time filling out the survey,
thank you.

The results show that we still have a lot of ground to cover to help you to
migrate smoothly. Other container runtimes exist, and have been promoted
extensively. However, many users told us they still rely on dockershim,
and sometimes have dependencies that need to be re-worked. Some of these
dependencies are outside of your control. Based on your feedback, here are some
of the steps we are taking to help.

## Our next steps

Based on the feedback you provided:

- CNCF and the 1.24 release team are committed to delivering documentation in
  time for the 1.24 release. This includes more informative blog posts like this
  one, updating existing code samples, tutorials, and tasks, and producing a
  migration guide for cluster operators.
- We are reaching out to the rest of the CNCF community to help prepare them for
  this change.

If you're part of a project with dependencies on dockershim, or if you're
interested in helping with the migration effort, please join us! There's always
room for more contributors, whether to our transition tools or to our
documentation. To get started, say hello in the
[#sig-node](https://kubernetes.slack.com/archives/C0BP8PW9G)
channel on [Kubernetes Slack](https://slack.kubernetes.io/)!
## Final thoughts

As a project, we've already seen cluster operators increasingly adopt other
container runtimes through 2021.
We believe there are no major blockers to migration. The steps we're taking to
improve the migration experience will light the path more clearly for you.

We understand that migration from dockershim is yet another action you may need to
do to keep your Kubernetes infrastructure up to date. For most of you, this step
will be straightforward and transparent. In some cases, you will encounter
hiccups or issues. The community has discussed at length whether postponing the
dockershim removal would be helpful. For example, we recently talked about it in
the [SIG Node discussion on November 11th](https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.r77y11bgzid)
and in the [Kubernetes Steering committee meeting held on December 6th](https://docs.google.com/document/d/1qazwMIHGeF3iUh5xMJIJ6PDr-S3bNkT8tNLRkSiOkOU/edit#bookmark=id.m0ir406av7jx).
We already [postponed](https://github.com/kubernetes/enhancements/pull/2481/) it
once in 2021 because the adoption rate of other
runtimes was lower than we wanted, which also gave us more time to identify
potential blocking issues.

At this point, we believe that the value that you (and Kubernetes) gain from
dockershim removal makes up for the migration effort you'll have. Start planning
now to avoid surprises. We'll have more updates and guides before Kubernetes
1.24 is released.
@ -0,0 +1,88 @@

---
layout: blog
title: "Dockershim: The Historical Context"
date: 2022-05-03
slug: dockershim-historical-context
---

**Author:** Kat Cosgrove

Dockershim has been removed as of Kubernetes v1.24, and this is a positive move for the project. However, context is important for fully understanding something, be it socially or in software development, and this deserves a more in-depth review. Alongside the dockershim removal in Kubernetes v1.24, we’ve seen some confusion (sometimes at a panic level) and dissatisfaction with this decision in the community, largely due to a lack of context around this removal. The decision to deprecate and eventually remove dockershim from Kubernetes was not made quickly or lightly. Still, it’s been in the works for so long that many of today’s users are newer than that decision, and certainly newer than the choices that led to the dockershim being necessary in the first place.

So what is the dockershim, and why is it going away?

In the early days of Kubernetes, we only supported one container runtime. That runtime was Docker Engine. Back then, there weren’t really a lot of other options out there and Docker was the dominant tool for working with containers, so this was not a controversial choice. Eventually, we started adding more container runtimes, like rkt and hypernetes, and it became clear that Kubernetes users want a choice of runtimes working best for them. So Kubernetes needed a way to allow cluster operators the flexibility to use whatever runtime they choose.
The [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (CRI) was released to allow that flexibility. The introduction of CRI was great for the project and users alike, but it did introduce a problem: Docker Engine’s use as a container runtime predates CRI, and Docker Engine is not CRI-compatible. To solve this issue, a small software shim (dockershim) was introduced as part of the kubelet component specifically to fill in the gaps between Docker Engine and CRI, allowing cluster operators to continue using Docker Engine as their container runtime largely uninterrupted.
However, this little software shim was never intended to be a permanent solution. Over the course of years, its existence has introduced a lot of unnecessary complexity to the kubelet itself. Some integrations are inconsistently implemented for Docker because of this shim, resulting in an increased burden on maintainers, and maintaining vendor-specific code is not in line with our open source philosophy. To reduce this maintenance burden and move towards a more collaborative community in support of open standards, [KEP-2221 was introduced](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim), proposing the removal of the dockershim. With the release of Kubernetes v1.20, the deprecation was official.
We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
We later [surveyed the community](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) and [discovered that there are still many users with questions and concerns](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim). In response, Kubernetes maintainers and the CNCF committed to addressing these concerns by extending documentation and other programs. In fact, this blog post is a part of this program. With so many end users successfully migrated to other runtimes, and improved documentation, we believe that everyone has a paved way to migration now.
Docker is not going away, either as a tool or as a company. It’s an important part of the cloud native community and the history of the Kubernetes project. We wouldn’t be where we are without them. That said, removing dockershim from kubelet is ultimately good for the community, the ecosystem, the project, and open source at large. This is an opportunity for all of us to come together to support open standards, and we’re glad to be doing so with the help of Docker and the community.
@ -1 +1 @@
The featured-logo SVG diff is omitted here: in `ibm_featured_logo.svg`, the background rectangle's fill changed from `#f9f9f9` to `transparent`; the logo paths are unchanged. (Before: 4.3 KiB, After: 4.3 KiB)
@ -1,35 +1,34 @@

---
title: Kubernetes Community Code of Conduct
layout: basic
cid: community
css: /css/community.css
community_styles_migrated: true
---

<div class="community_main">
<h1>Kubernetes Community Code of Conduct</h1>

<div class="community-section" id="cncf-code-of-conduct-intro">
<p>
Kubernetes follows the
<a href="https://github.com/cncf/foundation/blob/master/code-of-conduct.md">CNCF Code of Conduct</a>.
The text of the CNCF CoC is replicated below, as of
<a href="https://github.com/cncf/foundation/blob/0ce4694e5103c0c24ca90c189da81e5408a46632/code-of-conduct.md">commit 0ce4694</a>.
If you notice that this is out of date, please
<a href="https://github.com/kubernetes/website/issues/new">file an issue</a>.
</p>

<p>
If you notice a violation of the Code of Conduct at an event or meeting, in
Slack, or in another communication mechanism, reach out to
the <a href="https://github.com/kubernetes/community/tree/master/committee-code-of-conduct">Kubernetes Code of Conduct Committee</a> <conduct@kubernetes.io>.
@ -37,8 +36,9 @@ Your anonymity will be protected. -->
Your anonymity will be protected.
</p>
</div>

<div id="cncf_coc_container">
{{< include "/static/cncf-code-of-conduct.md" >}}
</div>
</div>
@ -142,7 +142,7 @@ integration with cloud infrastructure components such as IP addresses, network packet filtering, and target health checks
## Authorization

This section breaks down the access that the cloud controller manager requires
on various API objects, in order to perform its operations.
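As a rough illustration, part of that access can be expressed as RBAC rules like the following sketch (this rule set is illustrative, not the complete list the cloud controller manager needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cloud-controller-manager-example
rules:
# Illustrative subset: the node controller reads and updates Node objects.
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "update", "delete", "patch"]
- apiGroups: [""]
  resources: ["nodes/status"]
  verbs: ["patch"]
# The service controller watches Services and updates their status.
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
# Controllers record Events as they work.
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch", "update"]
```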
@ -29,7 +29,7 @@ This page is an overview of Kubernetes.

<!-- body -->

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
@ -10,7 +10,7 @@ weight: 60

_Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/working-with-objects/kubernetes-objects) based on the value of one or more resource fields. Here are some example field selector queries:
@ -231,6 +231,7 @@ partition

* The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
* The third example selects all resources including a label with key `partition`; no values are checked.
* The fourth example selects all resources without a label with key `partition`; no values are checked.

Similarly the comma separator acts as an _AND_ operator. So filtering resources with a `partition` key (no matter the value) and with `environment` different than `qa` can be achieved using `partition,environment notin (qa)`.
@ -238,6 +239,7 @@ Similarly the comma separator acts as an _AND_ operator. So filtering resources
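In manifest form, the same set-based requirements map onto `matchExpressions`. A short sketch of the selector fragment of a workload spec (the label keys and values are illustrative):

```yaml
selector:
  matchExpressions:
  # partition must exist, whatever its value.
  - key: partition
    operator: Exists
  # environment must not be qa (objects without the
  # environment label also match NotIn).
  - key: environment
    operator: NotIn
    values:
    - qa
```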
@ -178,13 +178,13 @@ for the Pod. It is within this pod that the underlying container runtime will cr

If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
the kubelet will set an upper limit for the pod cgroup associated with that resource (`cpu.cfs_quota_us` for CPU
and `memory.limit_in_bytes` for memory). This upper limit is based on the sum of the container limits plus the `overhead`
defined in the PodSpec.
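The `overhead` in the PodSpec is normally populated from a RuntimeClass. A minimal sketch (the handler name and overhead values are illustrative):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
# Must match a runtime handler configured on the node; illustrative here.
handler: runsc
overhead:
  podFixed:
    # Added on top of the sum of container limits when sizing the pod cgroup.
    cpu: 250m
    memory: 120Mi
```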
@ -137,7 +137,7 @@ A minimal Ingress resource example:
|

{{< codenew file="service/networking/minimal-ingress.yaml" >}}

An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
The name of an Ingress object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
@ -146,7 +146,7 @@ For general information about working with config files, see [deploying applicat

Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
your choice of Ingress controller to learn which annotations are supported.
|
|||
|
||||
例如,你可以在一个地域内路由流量,以降低通信成本,或提高网络性能。
|
||||
|
||||
<!--
|
||||
The "topology-aware hints" feature is at Beta stage and it is **NOT** enabled
|
||||
by default. To try out this feature, you have to enable the `TopologyAwareHints`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
-->
|
||||
|
||||
{{< note >}}
|
||||
“拓扑感知提示”特性处于 Beta 阶段,并且默认情况下**未**启用。
|
||||
要试用此特性,你必须启用 `TopologyAwareHints`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
|
||||
{{< /note >}}
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
|
|
|
|||
|
|
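Once the feature gate is on, hints are requested per Service through an annotation. A minimal sketch (the service name, selector, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Ask the EndpointSlice controller to populate topology hints.
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
  - port: 80
    protocol: TCP
```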
@ -70,11 +70,15 @@ To enable dynamic provisioning, a cluster administrator needs to pre-create
|
|||
one or more StorageClass objects for users.
|
||||
StorageClass objects define which provisioner should be used and what parameters
|
||||
should be passed to that provisioner when dynamic provisioning is invoked.
|
||||
The name of a StorageClass object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
The following manifest creates a storage class "slow" which provisions standard
|
||||
disk-like persistent disks.
|
||||
-->
|
||||
要启用动态供应功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。
|
||||
`StorageClass` 对象定义当动态供应被调用时,哪一个驱动将被使用和哪些参数将被传递给驱动。
|
||||
StorageClass 对象的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
|
||||
以下清单创建了一个 `StorageClass` 存储类 "slow",它提供类似标准磁盘的永久磁盘。
|
||||
|
||||
```yaml
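# The manifest body is truncated in this diff; this is a sketch reconstructing
# it from the surrounding text, assuming the GCE PD provisioner as the example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```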
|

[Azure Disk CSI driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver),
and the `CSIMigration` and `CSIMigrationAzureDisk` features must be enabled.

#### azureDisk CSI migration complete

{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

To disable the `azureDisk` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginAzureDiskUnregister` flag to `true`.
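One way to set that flag on a node is through the kubelet configuration file; a minimal sketch showing only the relevant fragment (the same gate must also be set on the controller manager):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Stop loading the in-tree azureDisk plugin.
  InTreePluginAzureDiskUnregister: true
```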

### azureFile {#azurefile}
@ -312,6 +327,21 @@ Azure File CSI driver does not support using same volume with different fsgroups

Azure File CSI driver does not support using the same volume with different fsgroups.
If AzureFile CSI migration is enabled, using the same volume with different fsgroups is not supported at all.

#### azureFile CSI migration complete

{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

To disable the `azureFile` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginAzureFileUnregister` flag to `true`.

### cephfs {#cephfs}
@ -136,9 +136,9 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl

The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
a [Job](/docs/concepts/workloads/controllers/job/).

You must specify a pod selector that matches the labels of the
`.spec.template`.
Also, once a DaemonSet is created,
its `.spec.selector` can not be mutated. Mutating the pod selector can lead to the
unintentional orphaning of Pods, and it was found to be confusing to users.
@ -147,9 +147,7 @@ unintentional orphaning of Pods, and it was found to be confusing to users.
@ -175,11 +173,11 @@ When the two are specified the result is ANDed.

The `.spec.selector` must match the `.spec.template.metadata.labels`.
Config with these two not matching will be rejected by the API.
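For example, a matching selector/template pair looks like this sketch (the names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent
spec:
  selector:
    # Must match .spec.template.metadata.labels exactly.
    matchLabels:
      name: logging-agent
  template:
    metadata:
      labels:
        name: logging-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:v1.14   # illustrative image
```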

### Running Pods on Only Some Nodes
@ -209,7 +207,7 @@ If you do not specify either, then the DaemonSet controller will create Pods on

### Scheduled by default scheduler {#scheduled-by-default-scheduler}

{{< feature-state for_k8s_version="1.17" state="stable" >}}

A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the
@ -340,7 +340,7 @@ Follow the steps given below to update your Deployment:

1. Update the nginx Pods to use the `nginx:1.16.1` image instead of the `nginx:1.14.2` image.

   ```shell
   kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
   ```

   or use the following command:
@ -561,7 +561,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy.

### TTL mechanism for finished Jobs {#ttl-mechanisms-for-finished-jobs}

{{< feature-state for_k8s_version="v1.23" state="stable" >}}

Another way to clean up finished Jobs (either `Complete` or `Failed`)
@ -733,15 +733,21 @@ version of Kubernetes you're using](/docs/home/supported-doc-versions/).

When a Job is created, the Job controller will immediately begin creating Pods
to satisfy the Job's requirements and will continue to do so until the Job is
complete. However, you may want to temporarily suspend a Job's execution and
resume it later, or start Jobs in suspended state and have a custom controller
decide later when to start them.

To suspend a Job, you can update the `.spec.suspend` field of
the Job to true; later, when you want to resume it again, update it to false.
Creating a Job with `.spec.suspend` set to true will create it in the suspended
state, as in the sketch below.
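A minimal sketch of a Job created in the suspended state (the name, image, and completion counts are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-suspended-job
spec:
  # No Pods are created until this is flipped to false.
  suspend: true
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.34
        command: ["sh", "-c", "echo processing && sleep 5"]
```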
@ -858,6 +864,61 @@ as soon as the Job was resumed.

caused by the `suspend` field being toggled. Between these two events, we see that no Pods
were created, but Pod creation restarted as soon as the Job was resumed.

### Mutable Scheduling Directives

{{< feature-state for_k8s_version="v1.23" state="beta" >}}

{{< note >}}
In order to use this behavior, you must enable the `JobMutableNodeSchedulingDirectives`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/).
It is enabled by default.
{{< /note >}}

In most cases a parallel job will want the pods to run with constraints,
like all in the same zone, or all either on GPU model x or y but not a mix of both.

The [suspend](#suspending-a-job) field is the first step towards achieving those semantics. Suspend allows a
custom queue controller to decide when a job should start; however, once a job is unsuspended,
a custom queue controller has no influence on where the pods of a job will actually land.

This feature allows updating a Job's scheduling directives before it starts, which gives custom queue
controllers the ability to influence pod placement while at the same time offloading actual
pod-to-node assignment to kube-scheduler. This is allowed only for suspended Jobs that have never
been unsuspended before.

The fields in a Job's pod template that can be updated are node affinity, node selector,
tolerations, labels and annotations, as in the sketch below.
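For instance, a custom queue controller might patch a still-suspended Job with a node selector before unsuspending it. A rough sketch of the resulting spec (the labels, image, and zone are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-job
spec:
  suspend: true   # directives may only change while the Job has never been unsuspended
  template:
    spec:
      # Added by the custom queue controller before the Job is unsuspended;
      # kube-scheduler still picks the exact nodes within this constraint.
      nodeSelector:
        topology.kubernetes.io/zone: us-central1-a   # illustrative
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.34
        command: ["sh", "-c", "echo work"]
```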

### Specifying your own Pod selector {#specifying-your-own-pod-selector}
@ -964,7 +1025,7 @@ spec:

The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
`manualSelector: true` tells the system that you know what you are doing and to allow this
mismatch.
@ -978,24 +1039,25 @@ In order to use this behavior, you must enable the `JobTrackingWithFinalizers`
|
|||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
|
||||
and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
|
||||
It is enabled by default.
|
||||
|
||||
When enabled, the control plane tracks new Jobs using the behavior described
|
||||
below. Jobs created before the feature was enabled are unaffected. As a user,
the only difference you would see is that the control plane tracking of Job
completion is more accurate.
|
||||
-->
|
||||
### 使用 Finalizer 追踪 Job {#job-tracking-with-finalizers}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
要使用该行为,你必须为 [API 服务器](/zh/docs/reference/command-line-tools-reference/kube-apiserver/)
|
||||
和[控制器管理器](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/)
|
||||
启用 `JobTrackingWithFinalizers`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
|
||||
默认是启用的。
|
||||
|
||||
启用后,控制面基于下述行为追踪新的 Job。在启用该特性之前创建的 Job 不受影响。
|
||||
作为用户,你会看到的唯一区别是控制面对 Job 完成情况的跟踪更加准确。
|
||||
{{< /note >}}
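If the feature gate has been turned off and you need to re-enable it explicitly, the flag goes on both components; below is a hedged sketch for a kubeadm-style static Pod manifest (the file path and surrounding fields are assumptions about a kubeadm layout):

```yaml
# fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout assumed);
# add the same flag to kube-controller-manager.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=JobTrackingWithFinalizers=true   # explicit; already the default in v1.23+
    # ...other flags unchanged
```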
|
||||
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 进阶贡献
|
||||
slug: advanced
|
||||
content_type: concept
|
||||
weight: 98
|
||||
|
|
|
|||
|
|
@ -15,7 +15,7 @@ weight: 15
|
|||
|
||||
<!--
|
||||
This guide shows you how to create, edit and share diagrams using the Mermaid
|
||||
JavaScript library. Mermaid.js allows you to generate diagrams using a simple
|
||||
markdown-like syntax inside Markdown files. You can also use Mermaid to
|
||||
generate `.svg` or `.png` image files that you can add to your documentation.
|
||||
|
||||
|
|
@ -24,7 +24,7 @@ and/or how to create and add diagrams to Kubernetes documentation.
|
|||
|
||||
Figure 1 outlines the topics covered in this section.
|
||||
-->
|
||||
本指南为你展示如何创建、编辑和分享基于 Mermaid JavaScript 库的图表。
|
||||
Mermaid.js 允许你使用简单的、类似于 Markdown 的语法来在 Markdown 文件中生成图表。
|
||||
你也可以使用 Mermaid 来创建 `.svg` 或 `.png` 图片文件,将其添加到你的文档中。
|
||||
|
||||
|
|
|
|||
|
|
@ -9,10 +9,14 @@ content_type: concept
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
This page explains the custom Hugo shortcodes that can be used in Kubernetes Markdown documentation.
-->
|
||||
本页面将介绍 Hugo 自定义短代码,可以用于 Kubernetes Markdown 文档书写。
|
||||
|
||||
<!--
Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content-management/shortcodes).
-->
|
||||
关于短代码的更多信息可参见 [Hugo 文档](https://gohugo.io/content-management/shortcodes)。
|
||||
|
||||
<!-- body -->
|
||||
|
|
@ -20,18 +24,18 @@ content_type: concept
|
|||
<!--
|
||||
## Feature state
|
||||
|
||||
In a Markdown page (`.md` file) on this site, you can add a shortcode to
display version and state of the documented feature.
|
||||
-->
|
||||
## 功能状态
|
||||
|
||||
在本站的 Markdown 页面(`.md` 文件)中,你可以加入短代码来展示所描述的功能特性的版本和状态。
|
||||
|
||||
<!--
|
||||
### Feature state demo
|
||||
|
||||
Below is a demo of the feature state snippet, which displays the feature as
stable in the latest Kubernetes version.
|
||||
-->
|
||||
### 功能状态示例
|
||||
|
||||
|
|
@ -41,12 +45,16 @@ in the latest Kubernetes version.
|
|||
{{</* feature-state state="stable" */>}}
|
||||
```
|
||||
|
||||
<!--
Renders to:
-->
|
||||
会转换为:
|
||||
|
||||
{{< feature-state state="stable" >}}
|
||||
|
||||
<!--
The valid values for `state` are:
-->
|
||||
`state` 的可选值如下:
|
||||
|
||||
* alpha
|
||||
|
|
@ -69,7 +77,9 @@ feature state version by passing the `for_k8s_version` shortcode parameter. For
|
|||
{{</* feature-state for_k8s_version="v1.10" state="beta" */>}}
|
||||
```
|
||||
|
||||
<!--
Renders to:
-->
|
||||
会转换为:
|
||||
|
||||
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
|
||||
|
|
@ -78,10 +88,10 @@ feature state version by passing the `for_k8s_version` shortcode parameter. For
|
|||
## Glossary
|
||||
There are two glossary shortcodes: `glossary_tooltip` and `glossary_definition`.
|
||||
|
||||
You can reference glossary terms with an inclusion that automatically updates
and replaces content with the relevant links from [our glossary](/docs/reference/glossary/).
When the glossary term is moused-over, the glossary entry displays a tooltip.
The glossary term also displays as a link.
|
||||
|
||||
As well as inclusions with tooltips, you can reuse the definitions from the glossary in
|
||||
page content.
|
||||
|
|
@ -96,21 +106,24 @@ page content.
|
|||
|
||||
除了包含工具提示外,你还可以重用页面内容中词汇表中的定义。
|
||||
<!--
|
||||
The raw data for glossary terms is stored at
[the glossary directory](https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary),
with a content file for each glossary term.
|
||||
-->
|
||||
|
||||
词汇术语的原始数据保存在[词汇目录](https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary),
每个内容文件对应相应的术语解释。
|
||||
|
||||
<!--
|
||||
### Glossary demo
|
||||
|
||||
For example, the following include within the Markdown renders to
|
||||
{{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
|
||||
-->
|
||||
### 词汇演示
|
||||
|
||||
例如下面的代码在 Markdown 中将会转换为
{{< glossary_tooltip text="cluster" term_id="cluster" >}},然后在提示框中显示。
|
||||
|
||||
```
|
||||
{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
|
||||
|
|
@ -146,10 +159,68 @@ which renders as:
|
|||
呈现为:
|
||||
{{< glossary_definition term_id="cluster" length="all" >}}
|
||||
|
||||
<!--
|
||||
## Links to API Reference
|
||||
-->
|
||||
## 链接至 API 参考 {#links-to-api-reference}
|
||||
|
||||
<!--
|
||||
You can link to a page of the Kubernetes API reference using the
|
||||
`api-reference` shortcode, for example to the
|
||||
{{< api-reference page="workload-resources/pod-v1" >}} reference:
|
||||
-->
|
||||
你可以使用 `api-reference` 短代码链接到 Kubernetes API 参考页面,例如
{{< api-reference page="workload-resources/pod-v1" >}} 参考:
|
||||
|
||||
```
|
||||
{{</* api-reference page="workload-resources/pod-v1" */>}}
|
||||
```
|
||||
|
||||
<!--
|
||||
The content of the `page` parameter is the suffix of the URL of the API reference page.
|
||||
-->
|
||||
本语句中 `page` 参数的内容是 API 参考页面的 URL 后缀。
|
||||
|
||||
|
||||
<!--
|
||||
You can link to a specific place into a page by specifying an `anchor`
|
||||
parameter, for example to the {{< api-reference page="workload-resources/pod-v1" anchor="PodSpec" >}}
|
||||
reference or the {{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" >}}
|
||||
section of the page:
|
||||
-->
|
||||
你可以通过指定 `anchor` 参数链接到页面中的特定位置,例如到
|
||||
{{< api-reference page="workload-resources/pod-v1" anchor="PodSpec" >}} 参考,或页面的
|
||||
{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" >}}
|
||||
部分。
|
||||
|
||||
```
|
||||
{{</* api-reference page="workload-resources/pod-v1" anchor="PodSpec" */>}}
|
||||
{{</* api-reference page="workload-resources/pod-v1" anchor="environment-variables" */>}}
|
||||
```
|
||||
|
||||
|
||||
<!--
|
||||
You can change the text of the link by specifying a `text` parameter, for
|
||||
example by linking to the
|
||||
{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variables">}}
|
||||
section of the page:
|
||||
-->
|
||||
你可以通过指定 `text` 参数来更改链接的文本,例如通过链接到页面的
|
||||
{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="环境变量">}}
|
||||
部分:
|
||||
|
||||
```
|
||||
{{</* api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="环境变量" */>}}
|
||||
```
|
||||
|
||||
|
||||
<!--
|
||||
## Table captions
|
||||
|
||||
You can make tables more accessible to screen readers by adding a table caption. To add a
[caption](https://www.w3schools.com/tags/tag_caption.asp) to a table,
enclose the table with a `table` shortcode and specify the caption with the `caption` parameter.
|
||||
|
||||
{{< note >}}
|
||||
Table captions are visible to screen readers but invisible when viewed in standard HTML.
|
||||
|
|
@ -205,7 +276,8 @@ Parameter | Description | Default
|
|||
{{< /table >}}
|
||||
|
||||
<!--
|
||||
If you inspect the HTML for the table, you should see this element immediately
after the opening `<table>` element:
|
||||
|
||||
```html
|
||||
<caption style="display: none;">Configuration parameters</caption>
|
||||
|
|
@ -235,8 +307,17 @@ The `tabs` shortcode takes these parameters:
|
|||
|
||||
<!--
|
||||
* `name`: The name as shown on the tab.
|
||||
* `codelang`: If you provide inner content to the `tab` shortcode, you can tell Hugo
  what code language to use for highlighting.
* `include`: The file to include in the tab. If the tab lives in a Hugo
  [leaf bundle](https://gohugo.io/content-management/page-bundles/#leaf-bundles),
  the file -- which can be any MIME type supported by Hugo -- is looked up in the bundle itself.
  If not, the content page that needs to be included is looked up relative to the current page.
  Note that with the `include`, you do not have any shortcode inner content and must use the
  self-closing syntax. For example,
  `{{</* tab name="Content File #1" include="example1" /*/>}}`. The language needs to be specified
  under `codelang` or the language is taken based on the file name.
  Non-content files are code-highlighted by default.
|
||||
-->
|
||||
* `name`: 标签页上显示的名字。
|
||||
* `codelang`: 如果要在 `tab` 短代码中加入内部内容,需要告知 Hugo 使用的是什么代码语言,方便代码高亮。
|
||||
|
|
@ -245,10 +326,12 @@ The `tabs` shortcode takes these parameters:
|
|||
Hugo 会在包内查找文件(可以是 Hugo 所支持的任何 MIME 类型文件)。
|
||||
否则,Hugo 会在当前路径的相对路径下查找所要包含的内容页面。
|
||||
注意,在 `include` 页面中不能包含短代码内容,必须要使用自结束(self-closing)语法。
|
||||
例如 `{{</* tab name="Content File #1" include="example1" /*/>}}`。
如果没有在 `codelang` 进行声明的话,Hugo 会根据文件名推测所用的语言。
默认情况下,非内容文件将会被代码高亮。
|
||||
<!--
|
||||
* If your inner content is markdown, you must use the `%`-delimiter to surround the tab.
  For example, `{{%/* tab name="Tab 1" %}}This is **markdown**{{% /tab */%}}`
|
||||
* You can combine the variations mentioned above inside a tab set.
|
||||
-->
|
||||
* 如果内部内容是 Markdown,你必须要使用 `%` 分隔符来包装标签页。
|
||||
|
|
@ -282,7 +365,9 @@ println "This is tab 2."
|
|||
{{< /tabs */>}}
|
||||
```
|
||||
|
||||
<!--
Renders to:
-->
|
||||
会转换为:
|
||||
|
||||
{{< tabs name="tab_with_code" >}}
|
||||
|
|
@ -294,41 +379,51 @@ println "This is tab 2."
|
|||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
### Tabs demo: Inline Markdown and HTML
-->
|
||||
### 标签页演示:内联 Markdown 和 HTML
|
||||
|
||||
```go-html-template
|
||||
{{</* tabs name="tab_with_md" >}}
|
||||
{{% tab name="Markdown" %}}
|
||||
这是 **一些 markdown。**
{{< note >}}
它甚至可以包含短代码。
{{< /note >}}
|
||||
{{% /tab %}}
|
||||
{{< tab name="HTML" >}}
|
||||
<div>
|
||||
<h3>纯 HTML</h3>
|
||||
<p>这是一些 <i>纯</i> HTML。</p>
|
||||
</div>
|
||||
{{< /tab >}}
|
||||
{{< /tabs */>}}
|
||||
```
|
||||
|
||||
<!--
Renders to:
-->
|
||||
会转换为:
|
||||
|
||||
{{< tabs name="tab_with_md" >}}
|
||||
{{% tab name="Markdown" %}}
|
||||
这是 **一些 markdown。**
{{< note >}}
它甚至可以包含短代码。
{{< /note >}}
|
||||
{{% /tab %}}
|
||||
{{< tab name="HTML" >}}
|
||||
<div>
|
||||
<h3>纯 HTML</h3>
|
||||
<p>这是一些 <i>纯</i> HTML。</p>
|
||||
</div>
|
||||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
### Tabs demo: File include
-->
|
||||
### 标签页演示:文件嵌套
|
||||
|
||||
```go-text-template
|
||||
|
|
@ -339,7 +434,9 @@ println "This is tab 2."
|
|||
{{< /tabs */>}}
|
||||
```
|
||||
|
||||
<!--
Renders to:
-->
|
||||
会转换为:
|
||||
|
||||
{{< tabs name="tab_with_file_include" >}}
|
||||
|
|
@ -348,6 +445,78 @@ println "This is tab 2."
|
|||
{{< tab name="JSON File" include="podtemplate.json" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
## Third party content marker
|
||||
-->
|
||||
## 第三方内容标记 {#third-party-content-marker}
|
||||
|
||||
<!--
|
||||
Running Kubernetes requires third-party software. For example: you
|
||||
usually need to add a
|
||||
[DNS server](/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction)
|
||||
to your cluster so that name resolution works.
|
||||
-->
|
||||
运行 Kubernetes 需要第三方软件。例如:你通常需要将
|
||||
[DNS 服务器](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction)
|
||||
添加到集群中,以便名称解析正常工作。
|
||||
|
||||
<!--
|
||||
When we link to third-party software, or otherwise mention it,
|
||||
we follow the [content guide](/docs/contribute/style/content-guide/)
|
||||
and we also mark those third party items.
|
||||
-->
|
||||
当我们链接到第三方软件或以其他方式提及它时,我们会遵循[内容指南](/zh/docs/contribute/style/content-guide/)
|
||||
并标记这些第三方项目。
|
||||
|
||||
<!--
|
||||
Using these shortcodes adds a disclaimer to any documentation page
|
||||
that uses them.
|
||||
-->
|
||||
使用这些短代码会向使用它们的任何文档页面添加免责声明。
|
||||
|
||||
<!--
|
||||
### Lists {#third-party-content-list}
|
||||
-->
|
||||
### 列表 {#third-party-content-list}
|
||||
|
||||
<!--
|
||||
For a list of several third-party items, add:
|
||||
-->
|
||||
对于有关几个第三方项目的列表,请添加:
|
||||
```
|
||||
{{%/* thirdparty-content */%}}
|
||||
```
|
||||
<!--
|
||||
just below the heading for the section that includes all items.
|
||||
-->
|
||||
在包含所有项目的段落标题正下方。
|
||||
|
||||
<!--
|
||||
### Items {#third-party-content-item}
|
||||
-->
|
||||
### 项目 {#third-party-content-item}
|
||||
|
||||
<!--
|
||||
If you have a list where most of the items refer to in-project
|
||||
software (for example: Kubernetes itself, and the separate
|
||||
[Descheduler](https://github.com/kubernetes-sigs/descheduler)
|
||||
component), then there is a different form to use.
|
||||
-->
|
||||
如果你有一个列表,其中大多数项目引用项目内软件(例如:Kubernetes 本身,以及单独的
|
||||
[Descheduler](https://github.com/kubernetes-sigs/descheduler)
|
||||
组件),那么可以使用不同的形式。
|
||||
|
||||
<!--
|
||||
Add the shortcode:
|
||||
|
||||
before the item, or just below the heading for the specific item.
|
||||
-->
|
||||
在项目之前,或在特定项目的段落下方添加此短代码:
|
||||
```
|
||||
{{%/* thirdparty-content single="true" */%}}
|
||||
```
|
||||
|
||||
|
||||
<!--
|
||||
## Version strings
|
||||
|
||||
|
|
@ -364,9 +533,10 @@ The two most commonly used version parameters are `latest` and `version`.
|
|||
<!--
|
||||
### `{{</* param "version" */>}}`
|
||||
|
||||
The `{{</* param "version" */>}}` shortcode generates the value of the current version of
|
||||
the Kubernetes documentation from the `version` site parameter. The `param` shortcode accepts
|
||||
the name of one site parameter, in this case: `version`.
|
||||
The `{{</* param "version" */>}}` shortcode generates the value of the current
|
||||
version of the Kubernetes documentation from the `version` site parameter. The
|
||||
`param` shortcode accepts the name of one site parameter, in this case:
|
||||
`version`.
|
||||
-->
|
||||
### `{{</* param "version" */>}}`
|
||||
|
||||
|
|
@ -375,10 +545,11 @@ the name of one site parameter, in this case: `version`.
|
|||
|
||||
<!--
|
||||
{{< note >}}
|
||||
In previously released documentation, `latest` and `version` parameter values
are not equivalent. After a new version is released, `latest` is incremented
and the value of `version` for the documentation set remains unchanged. For
example, a previously released version of the documentation displays `version`
as `v1.19` and `latest` as `v1.20`.
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
|
|
@ -415,7 +586,8 @@ Renders to:
|
|||
<!--
|
||||
### `{{</* latest-semver */>}}`
|
||||
|
||||
The `{{</* latest-semver */>}}` shortcode generates the value of `latest`
without the "v" prefix.
|
||||
|
||||
Renders to:
|
||||
-->
|
||||
|
|
@ -432,7 +604,7 @@ Renders to:
|
|||
|
||||
The `{{</* version-check */>}}` shortcode checks if the `min-kubernetes-server-version`
|
||||
page parameter is present and then uses this value to compare to `version`.
|
||||
|
||||
|
||||
Renders to:
|
||||
-->
|
||||
### `{{</* version-check */>}}`
|
||||
|
|
@ -447,9 +619,9 @@ Renders to:
|
|||
<!--
|
||||
### `{{</* latest-release-notes */>}}`
|
||||
|
||||
The `{{</* latest-release-notes */>}}` shortcode generates a version string
from `latest` and removes the "v" prefix. The shortcode prints a new URL for
the release note CHANGELOG page with the modified version string.
|
||||
|
||||
Renders to:
|
||||
-->
|
||||
|
|
@ -466,14 +638,14 @@ Renders to:
|
|||
|
||||
<!--
|
||||
* Learn about [Hugo](https://gohugo.io/).
|
||||
* Learn about [writing a new topic](/docs/contribute/style/write-new-topic/).
* Learn about [page content types](/docs/contribute/style/page-content-types/).
* Learn about [opening a pull request](/docs/contribute/new-content/open-a-pr/).
|
||||
* Learn about [advanced contributing](/docs/contribute/advanced/).
|
||||
-->
|
||||
|
||||
* 了解 [Hugo](https://gohugo.io/)。
|
||||
* 了解[撰写新的话题](/zh/docs/contribute/style/write-new-topic/)。
|
||||
* 了解[使用页面内容类型](/zh/docs/contribute/style/page-content-types/)。
|
||||
* 了解[发起 PR](/zh/docs/contribute/new-content/open-a-pr/)。
|
||||
* 了解[进阶贡献](/zh/docs/contribute/advanced/)。
|
||||
|
|
|
|||
|
|
@ -42,8 +42,8 @@ This section of the Kubernetes documentation contains references.
|
|||
|
||||
* [术语表](/zh/docs/reference/glossary/) - 一个全面的标准化的 Kubernetes 术语表
|
||||
|
||||
* [Kubernetes API 参考](/zh/docs/reference/kubernetes-api/)
* [Kubernetes API 单页参考 {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)。
|
||||
* [使用 Kubernetes API](/zh/docs/reference/using-api/) - Kubernetes 的 API 概述
|
||||
* [API 的访问控制](/zh/docs/reference/access-authn-authz/) - 关于 Kubernetes 如何控制 API 访问的详细信息
|
||||
* [常见的标签、注解和污点](/zh/docs/reference/labels-annotations-taints/)
|
||||
|
|
|
|||
|
|
@ -52,13 +52,13 @@ properties:
|
|||
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all unauthenticated requests.
|
||||
- Resource-matching properties:
|
||||
- `apiGroup`, type string; an API group.
|
||||
  - Ex: `apps`, `networking.k8s.io`
|
||||
- Wildcard: `*` matches all API groups.
|
||||
- `namespace`, type string; a namespace.
|
||||
- Ex: `kube-system`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- `resource`, type string; a resource type
|
||||
  - Ex: `pods`, `deployments`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- Non-resource-matching properties:
|
||||
- `nonResourcePath`, type string; non-resource request paths.
|
||||
|
|
@ -86,13 +86,13 @@ properties:
|
|||
- `group`,字符串类型;如果指定 `group`,它必须与经过身份验证的用户的一个组匹配,`system:authenticated`匹配所有经过身份验证的请求。`system:unauthenticated`匹配所有未经过身份验证的请求。
|
||||
- 资源匹配属性:
|
||||
- `apiGroup`,字符串类型;一个 API 组。
|
||||
  - 例: `apps`, `networking.k8s.io`
|
||||
- 通配符:`*`匹配所有 API 组。
|
||||
- `namespace`,字符串类型;一个命名空间。
|
||||
- 例如:`kube-system`
|
||||
- 通配符:`*`匹配所有资源请求。
|
||||
- `resource`,字符串类型;资源类型。
|
||||
  - 例:`pods`, `deployments`
|
||||
- 通配符:`*`匹配所有资源请求。
|
||||
- 非资源匹配属性:
|
||||
- `nonResourcePath`,字符串类型;非资源请求路径。
|
||||
|
|
|
|||
|
|
@ -574,23 +574,23 @@ rules:
|
|||
|
||||
<!--
|
||||
Allow reading/writing Deployments (at the HTTP level: objects with `"deployments"`
|
||||
in the resource part of their URL) in both the `"extensions"` and `"apps"` API groups:
|
||||
in the resource part of their URL) in the `"apps"` API groups:
|
||||
-->
|
||||
允许读/写在 "extensions" 和 "apps" API 组中的 Deployment(在 HTTP 层面,对应
|
||||
允许读/写在 `"apps"` API 组中的 Deployment(在 HTTP 层面,对应
|
||||
URL 中资源部分为 "deployments"):
|
||||
|
||||
```yaml
|
||||
rules:
|
||||
- apiGroups: ["extensions", "apps"]
|
||||
- apiGroups: ["apps"]
|
||||
resources: ["deployments"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
```
|
||||
|
||||
<!--
|
||||
Allow reading Pods in the core API group, as well as reading or writing Job
|
||||
resources in the `"batch"` or `"extensions"` API groups:
|
||||
resources in the `"batch"` API group:
|
||||
-->
|
||||
允许读取核心 API 组中的 "pods" 和读/写 `"batch"` 或 `"extensions"` API 组中的
|
||||
允许读取核心 API 组中的 "pods" 和读/写 `"batch"` API 组中的
|
||||
"jobs":
|
||||
|
||||
```yaml
|
||||
|
|
@ -598,7 +598,7 @@ rules:
|
|||
- apiGroups: [""]
|
||||
resources: ["pods"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
- apiGroups: ["batch", "extensions"]
|
||||
- apiGroups: ["batch"]
|
||||
resources: ["jobs"]
|
||||
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
|
||||
```
|
||||
|
|
@ -758,9 +758,9 @@ subjects:
|
|||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in the "qa" group in any namespace:
|
||||
For all service accounts in the "qa" namespace:
|
||||
-->
|
||||
对于任何名称空间中的 "qa" 组中所有的服务账户:
|
||||
对于 "qa" 名称空间中的所有服务账户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
|
|
@ -769,19 +769,6 @@ subjects:
|
|||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in the "dev" group in the "development" namespace:
|
||||
-->
|
||||
对于 "development" 名称空间中 "dev" 组中的所有服务帐户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
- kind: Group
|
||||
name: system:serviceaccounts:dev
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
namespace: development
|
||||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in any namespace:
|
||||
-->
|
||||
|
|
|
|||
|
|
@ -111,7 +111,8 @@ different Kubernetes components.
|
|||
| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
|
||||
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
|
||||
| `AppArmor` | `true` | Beta | 1.4 | |
|
||||
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
|
||||
| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | |
|
||||
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
|
||||
| `CPUManager` | `true` | Beta | 1.10 | |
|
||||
| `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | |
|
||||
|
|
@ -485,6 +486,7 @@ different Kubernetes components.
|
|||
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
|
||||
| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 |
|
||||
| `ServerSideApply` | `true` | GA | 1.22 | - |
|
||||
| `ServerSideFieldValidation` | `false` | Alpha | 1.23 | - |
|
||||
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 |
|
||||
| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | 1.20 |
|
||||
| `ServiceAccountIssuerDiscovery` | `true` | GA | 1.21 | - |
|
||||
|
|
|
|||
|
|
@ -38,10 +38,21 @@ You can request eviction either by directly calling the Eviction API
|
|||
using a client of the kube-apiserver, like the `kubectl drain` command.
|
||||
When an `Eviction` object is created, the API server terminates the Pod.
|
||||
|
||||
API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
|
||||
and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
|
||||
|
||||
API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction).
|
||||
-->
|
||||
你可以通过 kube-apiserver 的客户端,比如 `kubectl drain` 这样的命令,直接调用 Eviction API 发起驱逐。
|
||||
当 `Eviction` 对象创建出来之后,API 服务器就会终止选定的 Pod。

API 发起的驱逐会遵从你所配置的 [`PodDisruptionBudgets`](/zh/docs/tasks/run-application/configure-pdb/)
和 [`terminationGracePeriodSeconds`](/zh/docs/concepts/workloads/pods/pod-lifecycle#pod-termination)。
|
||||
|
||||
API 发起的驱逐不同于
|
||||
[节点压力引发的驱逐](/zh/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction)。
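For illustration, the `Eviction` object itself is small; a sketch like the following (the Pod name and namespace are placeholders) is POSTed to the target Pod's `eviction` subresource rather than created as a standalone resource:

```yaml
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-pod            # placeholder: the Pod to evict
  namespace: default      # placeholder namespace
deleteOptions:
  gracePeriodSeconds: 30  # optional; otherwise the Pod's terminationGracePeriodSeconds applies
```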
|
||||
|
||||
<!--
|
||||
* See [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/) for more information.
|
||||
-->
|
||||
* 有关详细信息,请参阅 [API 发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/)。
|
||||
|
|
@ -17,7 +17,7 @@ id: cadvisor
|
|||
date: 2021-12-09
|
||||
full_link: https://github.com/google/cadvisor/
|
||||
short_description: >
|
||||
  Tool that provides understanding of the resource usage and performance characteristics for containers
|
||||
aka:
|
||||
tags:
|
||||
- tool
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: 混排切片(Shuffle Sharding)
|
||||
id: shuffle sharding
|
||||
date: 2020-03-04
|
||||
full_link:
|
||||
short_description: >
|
||||
|
|
@ -40,8 +40,8 @@ We are often concerned with insulating different flows of requests
|
|||
from each other, so that a high-intensity flow does not crowd out low-intensity flows.
|
||||
A simple way to put requests into queues is to hash some
|
||||
characteristics of the request, modulo the number of queues, to get
|
||||
the index of the queue to use. The hash function uses as input
characteristics of the request that align with flows. For example, in
|
||||
the Internet this is often the 5-tuple of source and destination
|
||||
address, protocol, and source and destination port.
|
||||
-->
|
||||
|
|
@ -57,21 +57,21 @@ address, protocol, and source and destination port.
|
|||
That simple hash-based scheme has the property that any high-intensity flow
will crowd out all the low-intensity flows that hash to the same queue.
Providing good insulation for a large number of flows requires a large
number of queues, which is problematic. Shuffle sharding is a more
nimble technique that can do a better job of insulating the low-intensity
flows from the high-intensity flows. The terminology of shuffle sharding uses
the metaphor of dealing a hand from a deck of cards; each queue is a
metaphorical card. The shuffle sharding technique starts with hashing
the flow-identifying characteristics of the request, to produce a hash
value with dozens or more of bits. Then the hash value is used as a
source of entropy to shuffle the deck and deal a hand of cards
(queues). All the dealt queues are examined, and the request is put
into one of the examined queues with the shortest length. With a
modest hand size, it does not cost much to examine all the dealt cards
and a given low-intensity flow has a good chance to dodge the effects of a
given high-intensity flow. With a large hand size it is expensive to examine
the dealt queues and more difficult for the low-intensity flows to dodge the
collective effects of a set of high-intensity flows. Thus, the hand size
should be chosen judiciously.
|
||||
-->
|
||||
这种简单的基于哈希的模式有一种特性,高密度的请求序列(流)会湮没那些被
|
||||
|
|
|
|||
|
|
@ -0,0 +1,222 @@
|
|||
---
|
||||
api_metadata:
|
||||
apiVersion: "authentication.k8s.io/v1"
|
||||
import: "k8s.io/api/authentication/v1"
|
||||
kind: "TokenReview"
|
||||
content_type: "api_reference"
|
||||
description: "TokenReview 尝试通过验证令牌来确认已知用户。"
|
||||
title: "TokenReview"
|
||||
weight: 3
|
||||
auto_generated: true
|
||||
---
|
||||
|
||||
<!--
|
||||
api_metadata:
|
||||
apiVersion: "authentication.k8s.io/v1"
|
||||
import: "k8s.io/api/authentication/v1"
|
||||
kind: "TokenReview"
|
||||
content_type: "api_reference"
|
||||
description: "TokenReview attempts to authenticate a token to a known user."
|
||||
title: "TokenReview"
|
||||
weight: 3
|
||||
auto_generated: true
|
||||
-->
|
||||
|
||||
`apiVersion: authentication.k8s.io/v1`
|
||||
|
||||
`import "k8s.io/api/authentication/v1"`
|
||||
|
||||
<!--
|
||||
## TokenReview {#TokenReview}
|
||||
TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver.
|
||||
-->
|
||||
## TokenReview {#TokenReview}
|
||||
|
||||
TokenReview 尝试通过验证令牌来确认已知用户。
|
||||
注意:TokenReview 请求可能会被 kube-apiserver 中的 webhook 令牌验证器插件缓存。
|
||||
|
||||
<hr>
|
||||
|
||||
- **apiVersion**: authentication.k8s.io/v1
|
||||
|
||||
|
||||
- **kind**: TokenReview
|
||||
|
||||
|
||||
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
|
||||
|
||||
<!--
|
||||
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||
-->
|
||||
标准对象的元数据,更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||
|
||||
- **spec** (<a href="{{< ref "../authentication-resources/token-review-v1#TokenReviewSpec" >}}">TokenReviewSpec</a>), required
|
||||
|
||||
<!--
|
||||
Spec holds information about the request being evaluated
|
||||
-->
|
||||
spec 保存有关正在评估的请求的信息
|
||||
|
||||
- **status** (<a href="{{< ref "../authentication-resources/token-review-v1#TokenReviewStatus" >}}">TokenReviewStatus</a>)
|
||||
|
||||
<!--
|
||||
Status is filled in by the server and indicates whether the request can be authenticated.
|
||||
-->
|
||||
status 由服务器填写,指示请求是否可以通过身份验证。
|
||||
|
||||
|
||||
## TokenReviewSpec {#TokenReviewSpec}
|
||||
|
||||
<!--
|
||||
TokenReviewSpec is a description of the token authentication request.
|
||||
-->
|
||||
TokenReviewSpec 是对令牌身份验证请求的描述。
|
||||
|
||||
<hr>
|
||||
|
||||
- **audiences** ([]string)
|
||||
|
||||
<!--
|
||||
Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver.
|
||||
-->
|
||||
audiences 是带有令牌的资源服务器标识为受众的标识符列表。
|
||||
受众感知令牌身份验证器将验证令牌是否适用于此列表中的至少一个受众。
|
||||
如果未提供受众,受众将默认为 Kubernetes API 服务器的受众。
|
||||
|
||||
- **token** (string)
|
||||
|
||||
<!--
|
||||
Token is the opaque bearer token.
|
||||
-->
|
||||
token 是不透明的持有者令牌(Bearer Token)。
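Putting the two spec fields together, a request body for the `create` operation described later on this page might look like the following sketch (the token and audience values are placeholders):

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<opaque-bearer-token>"            # placeholder: the token to verify
  audiences:                                # optional; defaults to the API server's audience
  - "https://kubernetes.default.svc"        # placeholder audience identifier
```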
|
||||
|
||||
## TokenReviewStatus {#TokenReviewStatus}
|
||||
|
||||
<!--
|
||||
TokenReviewStatus is the result of the token authentication request.
|
||||
-->
|
||||
TokenReviewStatus 是令牌认证请求的结果。
|
||||
|
||||
<hr>
|
||||
|
||||
- **audiences** ([]string)
|
||||
|
||||
<!--
|
||||
Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. If a TokenReview returns an empty status.audience field where status.authenticated is "true", the token is valid against the audience of the Kubernetes API server.
|
||||
-->
|
||||
audiences 是身份验证者选择的与 TokenReview 和令牌兼容的受众标识符。 标识符是
|
||||
TokenReviewSpec 受众和令牌受众的交集中的任何标识符。 设置 spec.audiences
|
||||
字段的 TokenReview API 的客户端应验证在 status.audiences 字段中返回了兼容的受众标识符,
|
||||
以确保 TokenReview 服务器能够识别受众。 如果 TokenReview
|
||||
返回一个空的 status.audience 字段,其中 status.authenticated 为 “true”,
|
||||
则该令牌对 Kubernetes API 服务器的受众有效。
|
||||
|
||||
- **authenticated** (boolean)
|
||||
<!--
|
||||
Authenticated indicates that the token was associated with a known user.
|
||||
-->
|
||||
authenticated 表示令牌与已知用户相关联。
|
||||
|
||||
- **error** (string)
|
||||
|
||||
<!--
|
||||
Error indicates that the token couldn't be checked
|
||||
-->
|
||||
error 表示令牌无法被检查。
|
||||
|
||||
- **user** (UserInfo)
|
||||
|
||||
<!--
|
||||
User is the UserInfo associated with the provided token.
|
||||
-->
|
||||
user 是与提供的令牌关联的 UserInfo。
|
||||
|
||||
<a name="UserInfo"></a>
|
||||
<!--
|
||||
*UserInfo holds the information about the user needed to implement the user.Info interface.*
|
||||
-->
|
||||
**UserInfo 保存实现 user.Info 接口所需的用户信息**
|
||||
|
||||
- **user.extra** (map[string][]string)
|
||||
|
||||
<!--
|
||||
Any additional information provided by the authenticator.
|
||||
-->
|
||||
验证者提供的任何附加信息。
|
||||
|
||||
- **user.groups** ([]string)
|
||||
|
||||
<!--
|
||||
The names of groups this user is a part of.
|
||||
-->
|
||||
此用户所属的组的名称。
|
||||
|
||||
- **user.uid** (string)
|
||||
|
||||
<!--
|
||||
A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs.
|
||||
-->
|
||||
跨时间标识此用户的唯一值。如果删除此用户并添加另一个同名用户,他们将拥有不同的 UID。
|
||||
|
||||
- **user.username** (string)
|
||||
|
||||
<!--
|
||||
The name that uniquely identifies this user among all active users.
|
||||
-->
|
||||
在所有活动用户中唯一标识此用户的名称。
|
||||
|
||||
<!--
|
||||
## Operations {#Operations}
|
||||
-->
|
||||
## 操作 {#Operations}
|
||||
|
||||
<hr>
|
||||
|
||||
<!--
|
||||
### `create` create a TokenReview
|
||||
|
||||
#### HTTP Request
|
||||
-->
|
||||
### `create` 创建一个 TokenReview
|
||||
|
||||
#### HTTP 请求
|
||||
|
||||
POST /apis/authentication.k8s.io/v1/tokenreviews
|
||||
|
||||
<!--
|
||||
#### Parameters
|
||||
- **body**: <a href="{{< ref "../authentication-resources/token-review-v1#TokenReview" >}}">TokenReview</a>, required
|
||||
-->
|
||||
#### 参数
|
||||
|
||||
- **body**: <a href="{{< ref "../authentication-resources/token-review-v1#TokenReview" >}}">TokenReview</a>, 必需
|
||||
|
||||
- **dryRun** (*in query*): string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (*in query*): string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (*in query*): string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **pretty** (*in query*): string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
<!--
|
||||
#### Response
|
||||
-->
|
||||
#### 响应
|
||||
|
||||
200 (<a href="{{< ref "../authentication-resources/token-review-v1#TokenReview" >}}">TokenReview</a>): OK
|
||||
|
||||
201 (<a href="{{< ref "../authentication-resources/token-review-v1#TokenReview" >}}">TokenReview</a>): Created
|
||||
|
||||
202 (<a href="{{< ref "../authentication-resources/token-review-v1#TokenReview" >}}">TokenReview</a>): Accepted
|
||||
|
||||
401: Unauthorized
|
||||
|
||||
|
|
@ -26,15 +26,151 @@ Kubernetes 将所有标签和注解保留在 kubernetes.io Namespace中。
|
|||
<!--
|
||||
## Labels, annotations and taints used on API objects
|
||||
|
||||
### app.kubernetes.io/component
|
||||
|
||||
Example: `app.kubernetes.io/component=database`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The component within the architecture.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
|
||||
-->
|
||||
## API 对象上使用的标签、注解和污点
|
||||
|
||||
### app.kubernetes.io/component
|
||||
|
||||
示例:`app.kubernetes.io/component=database`

用于:所有对象
|
||||
|
||||
架构中的组件。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/created-by
|
||||
|
||||
Example: `app.kubernetes.io/created-by=controller-manager`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The controller/user who created this resource.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
### app.kubernetes.io/created-by
|
||||
|
||||
示例:`app.kubernetes.io/created-by=controller-manager`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
创建此资源的控制器/用户。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/instance
|
||||
|
||||
Example: `app.kubernetes.io/instance=mysql-abcxzy`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
A unique name identifying the instance of an application.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
### app.kubernetes.io/instance
|
||||
|
||||
示例:`app.kubernetes.io/instance=mysql-abcxzy`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
标识应用实例的唯一名称。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/managed-by
|
||||
|
||||
Example: `app.kubernetes.io/managed-by=helm`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The tool being used to manage the operation of an application.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
### app.kubernetes.io/managed-by
|
||||
|
||||
示例:`app.kubernetes.io/managed-by=helm`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
用于管理应用操作的工具。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/name
|
||||
|
||||
Example: `app.kubernetes.io/name=mysql`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The name of the application.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
|
||||
### app.kubernetes.io/name
|
||||
|
||||
示例:`app.kubernetes.io/name=mysql`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
应用的名称。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/part-of
|
||||
|
||||
Example: `app.kubernetes.io/part-of=wordpress`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The name of a higher level application this one is part of.
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
### app.kubernetes.io/part-of
|
||||
|
||||
示例:`app.kubernetes.io/part-of=wordpress`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
此应用所属的更高级别应用的名称。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
|
||||
|
||||
<!-- ### app.kubernetes.io/version
|
||||
|
||||
Example: `app.kubernetes.io/version="5.7.21"`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
The current version of the application (e.g., a semantic version, revision hash, etc.).
|
||||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels). -->
|
||||
### app.kubernetes.io/version
|
||||
|
||||
示例:`app.kubernetes.io/version="5.7.21"`
|
||||
|
||||
用于:所有对象
|
||||
|
||||
应用的当前版本(例如,语义版本、修订哈希等)。
|
||||
|
||||
[推荐标签](/zh/docs/concepts/overview/working-with-objects/common-labels/#labels)之一。
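Taken together, the recommended labels above might appear on a single object like this sketch; the object kind and name are arbitrary choices for illustration, while the label values are the examples from the sections above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-abcxzy                               # placeholder name
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/created-by: controller-manager
# spec omitted for brevity
```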
|
||||
|
||||
<!--
|
||||
### kubernetes.io/arch
|
||||
|
||||
Example: `kubernetes.io/arch=amd64`
|
||||
|
||||
Used on: Node
|
||||
|
||||
The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes.
|
||||
-->
|
||||
|
||||
|
||||
### kubernetes.io/arch {#kubernetes-io-arch}
|
||||
|
||||
|
|
@ -185,8 +321,6 @@ Used on: Pod
|
|||
|
||||
This annotation is used to set [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
|
||||
which allows users to influence ReplicaSet downscaling order. The annotation parses into an `int32` type.
|
||||
|
||||
### beta.kubernetes.io/instance-type (deprecated)
|
||||
-->
|
||||
### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost}
|
||||
|
||||
|
|
@ -194,8 +328,85 @@ which allows users to influence ReplicaSet downscaling order. The annotation par
|
|||
|
||||
用于:Pod
|
||||
|
||||
该注解用于设置 [Pod 删除成本](/zh/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost),允许用户影响 ReplicaSet 的缩减顺序。该注解可解析为 `int32` 类型的值。
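A sketch of the annotation in use (the Pod name and image are placeholders); note the value is a quoted string that the controller parses as an `int32`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: low-value-worker                                  # placeholder name
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-100"    # lower cost => preferred for deletion on scale-down
spec:
  containers:
  - name: app                        # placeholder container name
    image: k8s.gcr.io/pause:3.6      # placeholder image
```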
|
||||
|
||||
<!--
|
||||
### kubernetes.io/ingress-bandwidth
|
||||
|
||||
Ingress traffic shaping annotation is an experimental feature.
|
||||
If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file (default `/etc/cni/net.d`) and
|
||||
ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).
|
||||
|
||||
Example: `kubernetes.io/ingress-bandwidth: 10M`
|
||||
|
||||
Used on: Pod
|
||||
|
||||
You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth.
|
||||
Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data.
|
||||
To limit the bandwidth on a pod, write an object definition JSON file and specify the data traffic
|
||||
speed using `kubernetes.io/ingress-bandwidth` annotation. The unit used for specifying ingress
|
||||
rate is bits per second, as a [Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
|
||||
For example, `10M` means 10 megabits per second.
|
||||
-->
|
||||
|
||||
### kubernetes.io/ingress-bandwidth
|
||||
|
||||
{{< note >}}
|
||||
入站流量控制注解是一项实验性功能。
|
||||
如果要启用流量控制支持,必须将`bandwidth`插件添加到 CNI 配置文件(默认为`/etc/cni/net.d`)
|
||||
并确保二进制文件包含在你的 CNI bin 目录中(默认为`/opt/cni/bin`)。
|
||||
{{< /note >}}
|
||||
|
||||
示例:`kubernetes.io/ingress-bandwidth: 10M`
|
||||
|
||||
用于:Pod
|
||||
|
||||
你可以对 Pod 应用服务质量流量控制并有效限制其可用带宽。
|
||||
入站流量(到 Pod)通过控制排队的数据包来处理,以有效地处理数据。
|
||||
要限制 Pod 的带宽,请编写对象定义 JSON 文件并使用 `kubernetes.io/ingress-bandwidth`
|
||||
注解指定数据流量速度。用于指定入站速率的单位是每秒比特数,
以[量纲(Quantity)](/zh/docs/reference/kubernetes-api/common-definitions/quantity/)的形式给出。
例如,`10M` 表示每秒 10 兆比特。
|
||||
|
||||
<!--
|
||||
### kubernetes.io/egress-bandwidth
|
||||
|
||||
Egress traffic shaping annotation is an experimental feature.
|
||||
If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file (default `/etc/cni/net.d`) and
|
||||
ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).
|
||||
|
||||
Example: `kubernetes.io/egress-bandwidth: 10M`
|
||||
|
||||
Used on: Pod
|
||||
|
||||
Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate.
|
||||
The limits you place on a pod do not affect the bandwidth of other pods.
|
||||
To limit the bandwidth on a pod, write an object definition JSON file and specify the data traffic
|
||||
speed using `kubernetes.io/egress-bandwidth` annotation. The unit used for specifying egress
|
||||
rate is bits per second, as a [Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
|
||||
For example, `10M` means 10 megabits per second.
|
||||
-->
|
||||
|
||||
### kubernetes.io/egress-bandwidth
|
||||
|
||||
{{< note >}}
|
||||
出站流量控制注解是一项实验性功能。
|
||||
如果要启用流量控制支持,必须将`bandwidth`插件添加到 CNI 配置文件(默认为`/etc/cni/net.d`)
|
||||
并确保二进制文件包含在你的 CNI bin 目录中(默认为`/opt/cni/bin`)。
|
||||
{{< /note >}}
|
||||
|
||||
示例:`kubernetes.io/egress-bandwidth: 10M`
|
||||
|
||||
用于:Pod
|
||||
|
||||
出站流量(来自 pod)由策略控制,策略只是丢弃超过配置速率的数据包。
|
||||
你为一个 Pod 所设置的限制不会影响其他 Pod 的带宽。
|
||||
要限制 Pod 的带宽,请编写对象定义 JSON 文件并使用 `kubernetes.io/egress-bandwidth` 注解指定数据流量速度。
|
||||
用于指定出站的速率单位是每秒比特数,
|
||||
以[量纲(Quantity)](/zh/docs/reference/kubernetes-api/common-definitions/quantity/)的形式给出。
|
||||
例如,`10M` 表示每秒 10 兆比特。
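As a hedged sketch combining both annotations on one Pod (the name and image are placeholders), each annotation takes a bits-per-second Quantity such as `10M`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-limited                    # placeholder name
  annotations:
    kubernetes.io/ingress-bandwidth: 10M     # limit traffic into the Pod
    kubernetes.io/egress-bandwidth: 10M      # limit traffic out of the Pod
spec:
  containers:
  - name: app              # placeholder container name
    image: nginx:1.21      # placeholder image
```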
|
||||
|
||||
<!-- ### beta.kubernetes.io/instance-type (deprecated) -->
|
||||
### beta.kubernetes.io/instance-type (已弃用) {#beta-kubernetes-io-instance-type}
|
||||
|
||||
<!--
|
||||
|
|
@ -1025,16 +1236,20 @@ seccomp 配置文件应用于 Pod 或其容器的步骤。
|
|||
<!--
|
||||
## Annotations used for audit
|
||||
|
||||
- [`authorization.k8s.io/decision`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-decision)
- [`authorization.k8s.io/reason`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-reason)
- [`pod-security.kubernetes.io/audit-violations`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
|
||||
|
||||
See more details on the [Audit Annotations](/docs/reference/labels-annotations-taints/audit-annotations/) page.
|
||||
-->
|
||||
## 用于审计的注解 {#annonations-used-for-audit}
|
||||
|
||||
- [`authorization.k8s.io/decision`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-decision)
- [`authorization.k8s.io/reason`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-reason)
- [`pod-security.kubernetes.io/audit-violations`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
- [`pod-security.kubernetes.io/enforce-policy`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/exempt`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
|
||||
|
||||
在[审计注解](/zh/docs/reference/labels-annotations-taints/audit-annotations/)页面上查看更多详细信息。
|
||||
|
|
@ -21,10 +21,10 @@ Print configuration
|
|||
|
||||
<!--
|
||||
This command prints configurations for subcommands provided.
|
||||
For details, see: https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories
-->
此命令打印子命令所提供的配置信息。
相关细节可参阅:https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories
|
||||
|
||||
```
|
||||
kubeadm config print [flags]
|
||||
|
|
|
|||
|
|
@ -594,7 +594,7 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
|
|||
#### 控制器管理器 {#controller-manager}
|
||||
|
||||
<!--
|
||||
The static Pod manifest for the controller manager is affected by the following parameters provided by the user:
|
||||
-->
|
||||
控制器管理器的静态 Pod 清单受用户提供的以下参数的影响:
|
||||
|
||||
|
|
|
|||
|
|
@ -43,20 +43,6 @@ Using this phase you can execute preflight checks on a node that is being reset.
|
|||
{{< tab name="preflight" include="generated/kubeadm_reset_phase_preflight.md" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
## kubeadm reset phase update-cluster-status
|
||||
-->
|
||||
## kubeadm reset phase update-cluster-status {#cmd-reset-phase-update-cluster-status}
|
||||
|
||||
<!--
|
||||
Using this phase you can remove this control-plane node from the ClusterStatus object.
|
||||
-->
|
||||
使用此阶段,你可以从 ClusterStatus 对象中删除此控制平面节点。
|
||||
|
||||
{{< tabs name="tab-update-cluster-status" >}}
|
||||
{{< tab name="update-cluster-status" include="generated/kubeadm_reset_phase_update-cluster-status.md" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
## kubeadm reset phase remove-etcd-member
|
||||
-->
|
||||
|
|
|
|||
|
|
@ -142,37 +142,41 @@ might have to add an equivalent field or represent it as an annotation.
|
|||
添加一个新的等效字段或者将其表现为一个注解。
|
||||
|
||||
<!--
|
||||
**Rule #3: An API version in a given track may not be deprecated in favor of a less stable API version.**
|
||||
|
||||
* GA API versions can replace beta and alpha API versions.
* Beta API versions can replace earlier beta and alpha API versions, but *may not* replace GA API versions.
* Alpha API versions can replace earlier alpha API versions, but *may not* replace GA or beta API versions.
|
||||
-->
|
||||
**规则 #3:给定类别的 API 版本不可被弃用以支持稳定性更差的 API 版本。**
|
||||
|
||||
* 一个正式发布的(GA)API 版本可替换 beta 或 alpha API 版本。
* Beta API 版本可以替换早期的 beta 和 alpha API 版本,但 **不可以** 替换正式的 API 版本。
* Alpha API 版本可以替换早期的 alpha API 版本,但 **不可以** 替换正式的或 beta API 版本。
|
||||
<!--
|
||||
**Rule #4a: minimum API lifetime is determined by the API stability level**

* **GA API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes**
* **Beta API versions must be supported for 9 months or 3 releases (whichever is longer) after deprecation**
* **Alpha API versions may be removed in any release without prior deprecation notice**

This ensures beta API support covers the [maximum supported version skew of 2 releases](/releases/version-skew-policy/).
|
||||
-->
|
||||
**规则 #4a:最短 API 生命周期由 API 稳定性级别决定**

* **GA API 版本可以被标记为已弃用,但不得在 Kubernetes 的主要版本内被移除**
* **Beta API 版本在被弃用后必须继续支持 9 个月或 3 个发布版本(以较长者为准)**
* **Alpha API 版本可能会在任何发布版本中被移除,不另行通知**

这确保了 beta API 的支持涵盖[最多 2 个发布版本的版本偏差](/zh/releases/version-skew-policy/)。
|
||||
|
||||
{{< note >}}
<!--
There are no current plans for a major version revision of Kubernetes that removes GA APIs.
-->
There are currently no plans for a major version revision of Kubernetes that removes GA APIs.
{{< /note >}}

<!--
Until [#52185](https://github.com/kubernetes/kubernetes/issues/52185) is
@@ -363,9 +367,9 @@ API versions are supported in a series of subsequent releases.
<td>
  <ul>
    <!-- li>v2beta2 is deprecated, "action required" relnote</li>
    <li>v1 is deprecated, "action required" relnote</li -->
    <li>v1 is deprecated in favor of v2, but will not be removed</li -->
    <li>v2beta2 is deprecated, with an "action required" note in the release notes</li>
    <li>v1 is deprecated, with an "action required" note in the release notes</li>
    <li>v1 is deprecated in favor of v2, but will not be removed</li>
  </ul>
</td>
</tr>
@@ -400,23 +404,6 @@ API versions are supported in a series of subsequent releases.
</ul>
</td>
</tr>
<tr>
  <td>X+16</td>
  <!-- td>v2, v1 (deprecated)</td -->
  <td>v2, v1 (deprecated)</td>
  <td>v2</td>
  <td></td>
</tr>
<tr>
  <td>X+17</td>
  <td>v2</td>
  <td>v2</td>
  <td>
    <ul>
      <li>v1 is removed, with an "action required" note in the release notes</li>
    </ul>
  </td>
</tr>
</tbody>
</table>
@@ -9,7 +9,7 @@ weight: 50
---
reviewers:
- sig-cluster-lifecycle
title: Options for Highly Available topology
title: Options for Highly Available Topology
content_type: concept
weight: 50
---
@@ -27,7 +27,6 @@ For information on how to create a cluster with kubeadm once you have performed
For information on how to create a cluster with kubeadm once you have performed this
installation process, see the
[Creating a cluster with kubeadm](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.

{{% dockershim-removal %}}

## {{% heading "prerequisites" %}}
@@ -78,7 +78,7 @@ by the kubelet, using the `--cluster-dns` flag. This setting needs to be the same
on every manager and Node in the cluster. The kubelet provides a versioned, structured API object
that can configure most parameters in the kubelet and push out this configuration to each running
kubelet in the cluster. This object is called
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/).
The `KubeletConfiguration` allows the user to specify flags such as the cluster DNS IP addresses expressed as
a list of values to a camelCased key, illustrated by the following example:
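The example itself is trimmed away by the diff hunk that follows; a minimal sketch of such an object, assuming a placeholder DNS service address, looks like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# a camelCased key taking a list of values, as described above
clusterDNS:
- 10.96.0.10
```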
@@ -186,7 +186,7 @@ for more information on the individual fields.
By calling `kubeadm config print init-defaults --component-configs KubeletConfiguration`
you can see all the default values for this structure.

You can also read the [KubeletConfiguration reference](/docs/reference/config-api/kubelet-config.v1beta1/)
You can also read the [KubeletConfiguration reference](/zh/docs/reference/config-api/kubelet-config.v1beta1/)
for more information on the individual fields.

<!--
@@ -308,10 +308,15 @@ It augments the basic
[`kubelet.service` for RPM](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service) or
[`kubelet.service` for DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service):

{{< note >}}
The contents below are just an example. If you don't want to use a package manager,
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2)
section.
{{< /note >}}

```none
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
```
@@ -347,10 +352,15 @@ This file specifies the default locations for all of the files managed by kubeadm
or the [`kubelet.service` for DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service),
augmenting it as follows:

{{< note >}}
The contents below are just an example. If you don't want to use a package manager,
follow the guide outlined in the [Without a package manager](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2)
section.
{{< /note >}}

```none
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
```
@@ -381,9 +391,10 @@ The DEB and RPM packages shipped with the Kubernetes releases are:
| Package name | Description |
|----------------|-------------|
| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet` | Installs the kubelet binary in `/usr/bin` and CNI binaries in `/opt/cni/bin`. |
| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). |
-->
## Kubernetes binaries and package contents
@@ -392,7 +403,8 @@ The DEB and RPM packages for each Kubernetes release are:
| Package name | Description |
|--------------|-------------|
| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet` | Installs the kubelet binary in `/usr/bin` and the CNI binaries in `/opt/cni/bin`. |
| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). |
@@ -187,7 +187,7 @@ The supported formats for the `<service_name>` segment of the URL are:
-->
##### Examples

* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use:

  ```
  http://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy
  ```
|
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use:

  ```
  https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
  ```

<!--
@@ -246,8 +246,7 @@ You may be able to put an apiserver proxy URL into the address bar of a browser.
- Some web apps may not work, particularly those with client side javascript that construct URLs in a
  way that is unaware of the proxy path prefix.
-->
- Web browsers generally cannot pass tokens, so you may need to use basic (password) authentication.
  The API server can be configured to accept basic authentication, but your cluster may not be configured that way.
- Some web apps may not work, particularly those with client-side JavaScript that construct URLs in a
  way that is unaware of the proxy path prefix.
@@ -316,82 +316,32 @@ The Python client can, like the kubectl CLI, use the same
## Accessing the API from a Pod

When accessing the API from a pod, locating and authenticating
to the apiserver are somewhat different.

The recommended way to locate the apiserver within the pod is with
the `kubernetes.default.svc` DNS name, which resolves to a Service IP which in turn
will be routed to an apiserver.

The recommended way to authenticate to the apiserver is with a
[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
to the API server are somewhat different.
-->
### Accessing the API from a Pod {#accessing-the-api-from-a-pod}

When you access the API from a Pod, locating and authenticating to the apiserver are somewhat different.

The recommended way to locate the apiserver within a Pod is with the `kubernetes.default.svc`
DNS name, which resolves to a Service IP that in turn is routed to an apiserver.

The recommended way to authenticate to the apiserver is with a
[service account](/zh/docs/tasks/configure-pod-container/configure-service-account/) credential.
By default, a Pod is associated with a service account, and a credential (token) for that
service account is placed into the filesystem of each container in that Pod, at
`/var/run/secrets/kubernetes.io/serviceaccount/token`.
When you access the API from a Pod, locating and authenticating to the API server are somewhat different.
<!--
If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the apiserver.

Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
Please check [Accessing the API from within a Pod](/docs/tasks/run-application/access-api-from-pod/)
for more details.
-->
If available, a certificate bundle is placed into the filesystem of each container at
`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be used to verify the
serving certificate of the apiserver.

Finally, the default namespace to be used for namespaced API operations is placed in a file at
`/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
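To see how these pieces fit together, here is a minimal sketch of a Pod that uses the mounted token and CA bundle to query the API server through its in-cluster DNS name; the Pod name and the `curlimages/curl` image are illustrative choices, not part of the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-from-pod-demo
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl
    command:
    - sh
    - -c
    # read the mounted token, then call the API server through its DNS name
    - |
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
        -H "Authorization: Bearer $TOKEN" \
        https://kubernetes.default.svc/api
```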
<!--
From within a pod the recommended ways to connect to API are:

- run `kubectl proxy` in a sidecar container in the pod, or as a background
  process within the container. This proxies the
  Kubernetes API to the localhost interface of the pod, so that other processes
  in any container of the pod can access it.
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
  They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)

In each case, the credentials of the pod are used to communicate securely with the apiserver.
-->
From within a Pod, the recommended ways to connect to the API are:

- Run `kubectl proxy` in a sidecar container in the Pod, or as a background process
  within a container (see the sketch after this list). This proxies the Kubernetes API
  to the localhost interface of the Pod, so that processes in any of its containers can access it.
- Use the Go client library and create a client with the `rest.InClusterConfig()` and
  `kubernetes.NewForConfig()` functions. They handle locating and authenticating to the apiserver.
  [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)

In each case, the credentials of the Pod are used to communicate securely with the apiserver.
See [Accessing the API from within a Pod](/zh/docs/tasks/run-application/access-api-from-pod/)
for more details.
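A minimal sketch of the sidecar approach, assuming any container image that ships `kubectl` (the image and Pod names here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-api-proxy
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "infinity"]
  - name: kubectl-proxy
    # proxies the Kubernetes API to localhost:8001 for every container in this Pod
    image: bitnami/kubectl
    command: ["kubectl", "proxy", "--port=8001"]
```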
<!--
## Accessing services running on the cluster

The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/administer-cluster/access-cluster-services/)
The previous section describes how to connect to the Kubernetes API server.
For information about connecting to other services running on a Kubernetes cluster, see
[Access Cluster Services](/docs/tasks/administer-cluster/access-cluster-services/).
-->

## Accessing services running on the cluster {#accessing-services-running-on-the-cluster}

The previous section describes how to connect to the Kubernetes API server.
For information about connecting to other services running on a Kubernetes cluster, see
[Access Cluster Services](/zh/docs/tasks/administer-cluster/access-cluster-services/).
<!--
## Requesting redirects
@@ -357,13 +357,13 @@ The following manifest defines an Ingress that sends traffic to your Service via

```yaml
- path: /v2
  pathType: Prefix
  backend:
    service:
      name: web2
      port:
        number: 8080
```

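For context, this fragment is one entry under `paths` in a complete Ingress manifest. A minimal sketch of such a manifest follows; the metadata name, host, and the first `web` service are assumptions for illustration, not part of the original:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      # existing route, assumed for illustration
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      # the route added by the fragment above
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
```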
<!--
@@ -228,12 +228,12 @@ describes how you can configure this as a cluster administrator.
<!--
### Programmatic access to the API

Kubernetes officially supports client libraries for [Go](#go-client), [Python](#python-client), [Java](#java-client), [dotnet](#dotnet-client), [Javascript](#javascript-client), and [Haskell](#haskell-client). There are other client libraries that are provided and maintained by their authors, not the Kubernetes team. See [client libraries](/docs/reference/using-api/client-libraries/) for accessing the API from other languages and how they authenticate.
Kubernetes officially supports client libraries for [Go](#go-client), [Python](#python-client), [Java](#java-client), [dotnet](#dotnet-client), [JavaScript](#javascript-client), and [Haskell](#haskell-client). There are other client libraries that are provided and maintained by their authors, not the Kubernetes team. See [client libraries](/docs/reference/using-api/client-libraries/) for accessing the API from other languages and how they authenticate.
-->
### Programmatic access to the API

Kubernetes officially supports client libraries for the [Go](#go-client), [Python](#python-client), [Java](#java-client),
[dotnet](#dotnet-client), [Javascript](#javascript-client), and [Haskell](#haskell-client)
[dotnet](#dotnet-client), [JavaScript](#javascript-client), and [Haskell](#haskell-client)
languages. There are other client libraries that are provided and maintained by their authors, not the Kubernetes team.
See [client libraries](/zh/docs/reference/using-api/client-libraries/) for how to access the API
from other languages and how they authenticate.