Merge pull request #43938 from katcosgrove/merged-main-dev-1.29

Merge main into dev-1.29
This commit is contained in:
Kubernetes Prow Robot 2023-11-15 17:47:53 +01:00 committed by GitHub
commit 7899eb09a3
59 changed files with 3457 additions and 622 deletions

View File

@ -43,12 +43,12 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Video ansehen</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Besuche die KubeCon Europe vom 18. bis 21. April 2023</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Besuche die KubeCon + CloudNativeCon North America vom 6. bis 9. November 2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Besuche die KubeCon North America vom 6. bis 9. November 2023</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Besuche die KubeCon + CloudNativeCon Europe vom 19. bis 22. M&auml;rz 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -50,7 +50,7 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [seccomp](/docs/tutorials/clusters/seccomp/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Services

View File

@ -37,7 +37,7 @@ Prow automatically applies language labels based on file path. Thanks to SIG Doc
/language ko
```
These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the k/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the kubernetes/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
### Team review

View File

@ -77,7 +77,7 @@ Check out the full details of the Kubernetes 1.19 release in our [release notes]
## Availability
Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [KinD](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [kind](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
## Release Team
This release is made possible through the efforts of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the [release team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.19/release_team.md) led by Taylor Dolezal, Senior Developer Advocate at HashiCorp. The 34 release team members coordinated many aspects of the release, from documentation to testing, validation, and feature completeness.

View File

@ -0,0 +1,35 @@
---
layout: blog
title: "Introducing SIG etcd"
slug: introducing-sig-etcd
date: 2023-11-07
canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
---
**Authors**: Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project, with a substantial share of the community activity happening within them. When the need arises, [new SIGs can be created](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md), and that was precisely what happened recently.
[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md) is the most recent addition to the list of Kubernetes SIGs. In this article we will get to know it a bit better, understand its origins, scope, and plans.
## The critical role of etcd
If we look inside the control plane of a Kubernetes cluster, we will find [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd), a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data -- this description alone highlights the critical role that etcd plays, and the importance of it within the Kubernetes ecosystem.
This critical role makes the health of the etcd project and community an important consideration, and [concerns about the state of the project](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ) in early 2022 did not go unnoticed. The changes in the maintainer team, amongst other factors, contributed to a situation that needed to be addressed.
## Why a special interest group
With the critical role of etcd in mind, it was proposed that the way forward would be to create a new special interest group. With etcd already at the heart of Kubernetes, creating a dedicated SIG not only recognises that role, it also makes etcd a first-class citizen of the Kubernetes community.
Establishing SIG etcd creates a dedicated space to make explicit the contract between etcd and Kubernetes API machinery and to prevent, on the etcd level, changes which violate this contract. Additionally, etcd will be able to adopt the processes that Kubernetes offers its SIGs ([KEPs](https://www.kubernetes.dev/resources/keps/), [PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md), [phased feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), amongst others) in order to improve the consistency and reliability of the codebase. Being able to use these processes will be a substantial benefit to the etcd community.
As a SIG, etcd will also be able to draw contributor support from Kubernetes proper: active contributions to etcd from Kubernetes maintainers would decrease the likelihood of breaking Kubernetes changes, through the increased number of potential reviewers and the integration with the existing testing framework. This will not only benefit Kubernetes, which will be able to better participate in and shape the direction of etcd in terms of the critical role it plays, but also etcd as a whole.
## About SIG etcd
The recently created SIG is already working towards its goals, defined in its [Charter](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md) and [Vision](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md). The purpose is clear: to ensure etcd is a reliable, simple, and scalable production-ready store for building cloud-native distributed systems and managing cloud-native infrastructure via orchestrators like Kubernetes.
The scope of SIG etcd is not exclusively about etcd as a Kubernetes component, it also covers etcd as a standard solution. Our goal is to make etcd the most reliable key-value storage to be used anywhere, unconstrained by any Kubernetes-specific limits and scaling to meet the requirements of many diverse use-cases.
We are confident that the creation of SIG etcd constitutes an important milestone in the lifecycle of the project, simultaneously improving etcd itself, and also the integration of etcd with Kubernetes. We invite everyone interested in etcd to [visit our page](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md), [join us at our Slack channel](https://kubernetes.slack.com/messages/etcd), and get involved in this new stage of etcd's life.

View File

@ -0,0 +1,72 @@
---
layout: blog
title: 'Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29'
date: 2023-11-16
slug: kubernetes-1-29-upcoming-changes
---
**Authors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.
## The Kubernetes API removal and deprecation process
The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
* Generally available (GA) or stable API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes.
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
## A note about the k8s.gcr.io redirect to registry.k8s.io
To host its container images, the Kubernetes project uses a community-owned image registry called registry.k8s.io. Starting last March, traffic to the old k8s.gcr.io registry began being redirected to registry.k8s.io. The deprecated k8s.gcr.io registry will eventually be phased out. For more details on this change or to see if you are impacted, please read [k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know](/blog/2023/03/10/image-registry-redirect/).
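If your manifests still reference the deprecated registry, updating them is usually a matter of swapping the image prefix. A minimal sketch, using the `pause` image purely as an illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: registry-migration-example   # illustrative name
spec:
  containers:
  - name: pause
    # Before: image: k8s.gcr.io/pause:3.9   (deprecated registry)
    image: registry.k8s.io/pause:3.9        # community-owned registry
```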
## A note about the Kubernetes community-owned package repositories
Earlier in 2023, the Kubernetes project [introduced](/blog/2023/08/15/pkgs-k8s-io-introduction/) `pkgs.k8s.io`, community-owned software repositories for Debian and RPM packages. The community-owned repositories replaced the legacy Google-owned repositories (`apt.kubernetes.io` and `yum.kubernetes.io`).
On September 13, 2023, those legacy repositories were formally deprecated and their contents frozen.
For more information on this change or to see if you are impacted, please read the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/).
## Deprecations and removals for Kubernetes v1.29
See the official list of [API removals](/docs/reference/using-api/deprecation-guide/#v1-29) for a full list of planned deprecations for Kubernetes v1.29.
### Removal of in-tree integrations with cloud providers ([KEP-2395](https://kep.k8s.io/2395))
The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `DisableCloudProviders` and `DisableKubeletCloudCredentialProviders` will both be set to `true` by default for Kubernetes v1.29. This change will require that users who are currently using in-tree cloud provider integrations (Azure, GCE, or vSphere) enable external cloud controller managers, or opt in to the legacy integration by setting the associated feature gates to `false`.
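For example, a node that needs to opt back into the legacy in-tree integration could set both gates to `false` in its kubelet configuration. This is a sketch only; the same gates also need to be set via the `--feature-gates` flag on the control plane components:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Opt back into the deprecated in-tree cloud provider integration
  DisableCloudProviders: false
  DisableKubeletCloudCredentialProviders: false
```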
Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster's control plane; it also requires setting the command line argument `--cloud-provider=external` for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).
For more information about how to enable and run external cloud controller managers, read [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) and [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).
For general information about cloud controller managers, please see
[Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) in the Kubernetes documentation.
### Removal of the `v1beta2` flow control API group
The _flowcontrol.apiserver.k8s.io/v1beta2_ API version of FlowSchema and PriorityLevelConfiguration will [no longer be served](/docs/reference/using-api/deprecation-guide/#v1-29) in Kubernetes v1.29.
To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1beta3` API version, available since v1.26. All existing persisted objects are accessible via the new API. A notable change in `flowcontrol.apiserver.k8s.io/v1beta3` is that the PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field was renamed to `spec.limited.nominalConcurrencyShares`.
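For illustration, a PriorityLevelConfiguration written against the `v1beta3` API might look like this; the name and share value are placeholders:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level   # placeholder name
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 30   # renamed from assuredConcurrencyShares in v1beta2
    limitResponse:
      type: Reject
```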
### Deprecation of the `status.nodeInfo.kubeProxyVersion` field for Node
The `.status.nodeInfo.kubeProxyVersion` field for Node objects will be [marked as deprecated](https://github.com/kubernetes/enhancements/issues/4004) in v1.29 in preparation for its removal in a future release. This field is not accurate: it is set by the kubelet, which does not actually know the kube-proxy version, or even whether kube-proxy is running.
## Want to know more?
Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
* [Kubernetes v1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)
* [Kubernetes v1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation)
* [Kubernetes v1.27](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation)
* [Kubernetes v1.28](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation)
We will formally announce the deprecations that come with [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation) as part of the CHANGELOG for that release.
For information on the deprecation and removal process, refer to the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.

View File

@ -0,0 +1,50 @@
---
layout: blog
title: "The Case for Kubernetes Resource Limits: Predictability vs. Efficiency"
date: 2023-11-16
slug: the-case-for-kubernetes-resource-limits
---
**Author:** Milan Plžík (Grafana Labs)
There have been quite a lot of posts suggesting that not using Kubernetes resource limits might be a fairly useful thing (for example, [For the Love of God, Stop Using CPU Limits on Kubernetes](https://home.robusta.dev/blog/stop-using-cpu-limits/) or [Kubernetes: Make your services faster by removing CPU limits](https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/)). The points made there are totally valid: it doesn't make much sense to pay for compute power that will not be used due to limits, nor to artificially increase latency. This post strives to argue that limits have their legitimate use as well.
As a Site Reliability Engineer on the [Grafana Labs](https://grafana.com/) platform team, which maintains and improves internal infrastructure and tooling used by the product teams, I primarily try to make Kubernetes upgrades as smooth as possible. But I also spend a lot of time going down the rabbit hole of various interesting Kubernetes issues. This article reflects my personal opinion, and others in the community may disagree.
Let's flip the problem upside down. Every pod in a Kubernetes cluster has inherent resource limits: the actual CPU, memory, and other resources of the machine it's running on. If those physical limits are reached by a pod, it will experience throttling similar to what is caused by reaching Kubernetes limits.
## The problem
Pods without (or with generous) limits can easily consume the extra resources on the node. This, however, has a hidden cost: the amount of extra resources available often heavily depends on the pods scheduled on the particular node and their actual load. These extra resources make each pod a special snowflake when it comes to real resource allocation. Even worse, it's fairly hard to figure out the resources that the pod had at its disposal at any given moment, certainly not without unwieldy data mining of pods running on a particular node, their resource consumption, and similar. And finally, even if we pass this obstacle, we can only have data sampled up to a certain rate and get profiles only for a certain fraction of our calls. This can be scaled up, but the amount of observability data generated might easily reach diminishing returns. Thus, there's no easy way to tell if a pod had a quick spike and, for a short period of time, used twice as much memory as usual to handle a request burst.
Now, with Black Friday and Cyber Monday approaching, businesses expect a surge in traffic. Good performance data/benchmarks of the past performance allow businesses to plan for some extra capacity. But is data about pods without limits reliable? With memory or CPU instant spikes handled by the extra resources, everything might look good according to past data. But once the pod bin-packing changes and the extra resources get more scarce, everything might start looking different, ranging from request latencies rising negligibly to requests slowly snowballing and causing pod OOM kills. While almost no one actually cares about the former, the latter is a serious issue that requires instant capacity increase.
## Configuring the limits
Not using limits takes a tradeoff: it opportunistically improves the performance if there are extra resources available, but lowers predictability of the performance, which might strike back in the future. There are a few approaches that can be used to increase the predictability again. Let's pick two of them to analyze:
- **Configure workload limits to be a fixed (and small) percentage more than the requests**; I'll call it _fixed-fraction headroom_. This allows the use of some extra shared resources, but keeps the per-node overcommit bound and can be taken to guide worst-case estimates for the workload. Note that the bigger the limits percentage is, the bigger the variance in the performance that might happen across the workloads.
- **Configure workloads with `requests` = `limits`**. From some point of view, this is equivalent to giving each pod its own tiny machine with constrained resources; the performance is fairly predictable. This also puts the pod into the _Guaranteed_ QoS class, which makes it get evicted only after _BestEffort_ and _Burstable_ pods have been evicted by a node under resource pressure (see [Quality of Service for Pods](/docs/concepts/workloads/pods/pod-qos/)).
Some other cases might also be considered, but these are probably the two simplest ones to discuss.
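A minimal sketch of how these two approaches might look on a single container; the numbers and names are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: headroom-example              # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9  # stand-in image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:                         # fixed-fraction headroom: ~20% above the requests
        cpu: "600m"
        memory: "300Mi"
      # For the requests = limits approach, set the limits equal to the requests
      # above; a pod whose containers are all configured that way falls into the
      # Guaranteed QoS class.
```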
## Cluster resource economy
Note that in both cases discussed above, we're effectively preventing the workloads from using some of the cluster resources they could otherwise have, at the cost of getting more predictability, which might sound like a steep price to pay for a bit more stable performance. Let's try to quantify the impact there.
### Bin-packing and cluster resource allocation
Firstly, let's discuss bin-packing and cluster resource allocation. There's some inherent cluster inefficiency that comes into play: it's hard to achieve 100% resource allocation in a Kubernetes cluster. Thus, some percentage will be left unallocated.
When configuring fixed-fraction headroom limits, a proportional amount of this will be available to the pods. If the percentage of unallocated resources in the cluster is lower than the constant we use for setting fixed-fraction headroom limits (see the figure, line 2), all the pods together are able to theoretically use up all the node's resources; otherwise there are some resources that will inevitably be wasted (see the figure, line 1). In order to eliminate the inevitable resource waste, the percentage for fixed-fraction headroom limits should be configured so that it's at least equal to the expected percentage of unallocated resources.
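To put illustrative numbers on this (assumed purely for the sake of the example): if roughly 10% of a cluster's resources typically remain unallocated, a fixed-fraction headroom of at least 10% (limits = 1.1 × requests) lets the scheduled pods, in aggregate, theoretically soak up that slack; with a smaller headroom, part of the unallocated capacity can never be used.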
{{<figure alt="Chart displaying various requests/limits configurations" width="40%" src="requests-limits-configurations.svg">}}
For requests = limits (see the figure, line 3), this does not hold: unless we're able to allocate all nodes' resources, there's going to be some inevitably wasted resources. Without any knobs to turn on the requests/limits side, the only suitable approach here is to ensure efficient bin-packing on the nodes by configuring correct machine profiles. This can be done either manually or by using a variety of cloud service provider tooling, for example [Karpenter](https://karpenter.sh/) for EKS or [GKE Node auto provisioning](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning).
### Optimizing actual resource utilization
Free resources also come in the form of unused resources of other pods (reserved vs. actual CPU utilization, etc.), and their availability can't be predicted in any reasonable way. Configuring limits makes it next to impossible to utilize these. Looking at this from a different perspective, if a workload wastes a significant amount of resources it has requested, revisiting its own resource requests might be a fair thing to do. Looking at past data and picking more fitting resource requests might help to make the packing tighter (although at the price of worsening its performance, for example by increasing long tail latencies).
## Conclusion
Optimizing resource requests and limits is hard. Although it's much easier to break things when setting limits, those breakages might help prevent a catastrophe later by giving more insights into how the workload behaves in bordering conditions. There are cases where setting limits makes less sense: batch workloads (which are not latency-sensitive, for example non-live video encoding), best-effort services (which don't need that level of availability and can be preempted), and clusters that have a lot of spare resources by design (various cases of specialty workloads, for example services that handle spikes by design).
On the other hand, setting limits shouldn't be avoided at all costs, even though figuring out the “right” value for limits is harder and configuring a wrong value yields less forgiving situations. Configuring limits helps you learn about a workload's behavior in corner cases, and there are simple strategies that can help when reasoning about the right value. It's a tradeoff between efficient resource usage and performance predictability and should be considered as such.
There's also an economic aspect of workloads with spiky resource usage. Having “freebie” resources always at hand does not serve as an incentive for the product team to improve performance. Big enough spikes might easily trigger efficiency issues or even problems when trying to defend a product's SLA, and thus might be a good candidate to mention when assessing any risks.

File diff suppressed because one or more lines are too long (new image file, 94 KiB)

View File

@ -29,7 +29,7 @@ The following principles shaped the design and architecture of Gateway API:
* __Application Developer:__ Manages an application running in a cluster and is typically
concerned with application-level configuration and [Service](/docs/concepts/services-networking/service/)
composition.
* __Portable:__ Gateway API specifications are defined as [custom resources](docs/concepts/extend-kubernetes/api-extension/custom-resources)
* __Portable:__ Gateway API specifications are defined as [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources)
and are supported by many [implementations](https://gateway-api.sigs.k8s.io/implementations/).
* __Expressive:__ Gateway API kinds support functionality for common traffic routing use cases
such as header-based matching, traffic weighting, and others that were only possible in

View File

@ -16,8 +16,8 @@ description: >-
<!-- overview -->
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you
might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a {{<
glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
@ -257,21 +257,23 @@ creating the following NetworkPolicy in that namespace.
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
ingress or egress traffic.
## SCTP support
## Network traffic filtering
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your
cluster administrator) will need to disable the `SCTPSupport`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
for the API server with `--feature-gates=SCTPSupport=false,…`.
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
across network plugins.
{{< note >}}
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP
protocol NetworkPolicies.
{{< /note >}}
When a `deny all` network policy is defined, it is only guaranteed to deny TCP, UDP and SCTP
connections. For other protocols, such as ARP or ICMP, the behaviour is undefined.
The same applies to allow rules: when a specific pod is allowed as ingress source or egress destination,
it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
network plugins and denied by others.
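As a reminder of what such a policy looks like, here is a minimal default-deny sketch (the namespace name is illustrative); per the paragraph above, only TCP, UDP, and SCTP connections are guaranteed to be denied:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-ns   # illustrative namespace
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```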
## Targeting a range of ports
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
@ -346,6 +348,88 @@ namespaces, the value of the label is the namespace name.
While NetworkPolicy cannot target a namespace by its name with some object field, you can use the
standardized label to target a specific namespace.
## Pod lifecycle
{{< note >}}
The following applies to clusters with a conformant networking plugin and a conformant implementation of
NetworkPolicy.
{{< /note >}}
When a new NetworkPolicy object is created, it may take some time for a network plugin
to handle the new object. If a pod that is affected by a NetworkPolicy
is created before the network plugin has completed NetworkPolicy handling,
that pod may be started unprotected, and isolation rules will be applied when
the NetworkPolicy handling is completed.
Once the NetworkPolicy is handled by a network plugin,
1. All newly created pods affected by a given NetworkPolicy will be isolated before
they are started.
Implementations of NetworkPolicy must ensure that filtering is effective throughout
the Pod lifecycle, even from the very first instant that any container in that Pod is started.
Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
sidecar containers, and regular containers.
2. Allow rules will be applied eventually after the isolation rules (or may be applied at the same time).
In the worst case, a newly created pod may have no network connectivity at all when it is first started, if
isolation rules were already applied, but no allow rules were applied yet.
Every created NetworkPolicy will be handled by a network plugin eventually, but there is no
way to tell from the Kubernetes API when exactly that happens.
Therefore, pods must be resilient against being started up with different network
connectivity than expected. If you need to make sure the pod can reach certain destinations
before being started, you can use an [init container](/docs/concepts/workloads/pods/init-containers/)
to wait for those destinations to be reachable before kubelet starts the app containers.
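A rough sketch of that pattern, with hypothetical service name, port, and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wait-for-backend   # hypothetical name
spec:
  initContainers:
  - name: wait-for-api
    image: busybox:1.36
    # Block until the (hypothetical) backend Service accepts TCP connections.
    command: ['sh', '-c', 'until nc -z backend.example.svc.cluster.local 8080; do echo waiting; sleep 2; done']
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # stand-in application image
```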
Every NetworkPolicy will be applied to all selected pods eventually.
Because the network plugin may implement NetworkPolicy in a distributed manner,
it is possible that pods may see a slightly inconsistent view of network policies
when the pod is first created, or when pods or policies change.
For example, a newly-created pod that is supposed to be able to reach both Pod A
on Node 1 and Pod B on Node 2 may find that it can reach Pod A immediately,
but cannot reach Pod B until a few seconds later.
## NetworkPolicy and `hostNetwork` pods
NetworkPolicy behaviour for `hostNetwork` pods is undefined, but it should be limited to 2 possibilities:
- The network plugin can distinguish `hostNetwork` pod traffic from all other traffic
(including being able to distinguish traffic from different `hostNetwork` pods on
the same node), and will apply NetworkPolicy to `hostNetwork` pods just like it does
to pod-network pods.
- The network plugin cannot properly distinguish `hostNetwork` pod traffic,
and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
(This is the most common implementation.)
This applies when
1. a `hostNetwork` pod is selected by `spec.podSelector`.
```yaml
...
spec:
podSelector:
matchLabels:
role: client
...
```
2. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress` or `egress` rule.
```yaml
...
ingress:
- from:
- podSelector:
matchLabels:
role: client
...
```
At the same time, since `hostNetwork` pods have the same IP addresses as the nodes they reside on,
their connections will be treated as node connections. For example, you can allow traffic
from a `hostNetwork` Pod using an `ipBlock` rule.
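For instance, a sketch of an `ipBlock` rule admitting traffic from the node where a `hostNetwork` pod runs; the pod selector and CIDR are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-host-network-pod   # illustrative name
spec:
  podSelector:
    matchLabels:
      role: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.0.2.10/32   # placeholder: IP of the node running the hostNetwork pod
```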
## What you can't do with network policies (at least, not yet)
As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the

View File

@ -108,8 +108,8 @@ If you do not specify either, then the DaemonSet controller will create Pods on
## How Daemon Pods are scheduled
A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet
controller creates a Pod for each eligible node and adds the
A DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod.
The DaemonSet controller creates a Pod for each eligible node and adds the
`spec.affinity.nodeAffinity` field of the Pod to match the target host. After
the Pod is created, the default scheduler typically takes over and then binds
the Pod to the target host by setting the `.spec.nodeName` field. If the new
@ -118,6 +118,13 @@ the existing Pods based on the
[priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)
of the new Pod.
{{< note >}}
If it's important that the DaemonSet pod run on each node, it's often desirable
to set the `.spec.template.spec.priorityClassName` of the DaemonSet to a
[PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
with a higher priority to ensure that this eviction occurs.
{{< /note >}}
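A minimal sketch of that recommendation; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                 # placeholder name
spec:
  selector:
    matchLabels:
      name: node-agent
  template:
    metadata:
      labels:
        name: node-agent
    spec:
      priorityClassName: system-node-critical   # high-priority class so preemption favors this pod
      containers:
      - name: agent
        image: registry.k8s.io/pause:3.9        # stand-in image
```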
The user can specify a different scheduler for the Pods of the DaemonSet, by
setting the `.spec.template.spec.schedulerName` field of the DaemonSet.

View File

@ -938,6 +938,11 @@ creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
controller removes the finalizer only after the Pod has been accounted for in
the Job status, allowing the Pod to be removed by other controllers or users.
{{< note >}}
See [My pod stays terminating](/docs/tasks/debug-application/debug-pods) if you
observe that pods from a Job are stuck with the tracking finalizer.
{{< /note >}}
### Elastic Indexed Jobs
{{< feature-state for_k8s_version="v1.27" state="beta" >}}

View File

@ -76,7 +76,7 @@ end
subgraph second[Review]
direction TB
T[ ] -.-
D[Look over the<br>K8s/website<br>repository] --- E[Check out the<br>Hugo static site<br>generator]
D[Look over the<br>kubernetes/website<br>repository] --- E[Check out the<br>Hugo static site<br>generator]
E --- F[Understand basic<br>GitHub commands]
F --- G[Review open PR<br>and change review <br>processes]
end
@ -123,7 +123,7 @@ flowchart LR
direction TB
S[ ] -.-
G[Review PRs from other<br>K8s members] -->
A[Check K8s/website<br>issues list for<br>good first PRs] --> B[Open a PR!!]
A[Check kubernetes/website<br>issues list for<br>good first PRs] --> B[Open a PR!!]
end
subgraph first[Suggested Prep]
direction TB

View File

@ -24,7 +24,7 @@ API or the `kube-*` components from the upstream code, see the following instruc
- You need to have these tools installed:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [Golang](https://golang.org/doc/install) version 1.13+
- [Golang](https://go.dev/doc/install) version 1.13+
- [Docker](https://docs.docker.com/engine/installation/)
- [etcd](https://github.com/coreos/etcd/)
- [make](https://www.gnu.org/software/make/)

View File

@ -37,7 +37,7 @@ opening a pull request. Figure 1 outlines the steps and the details follow.
{{< mermaid >}}
flowchart LR
A([fa:fa-user New<br>Contributor]) --- id1[(K8s/Website<br>GitHub)]
A([fa:fa-user New<br>Contributor]) --- id1[(kubernetes/website<br>GitHub)]
subgraph tasks[Changes using GitHub]
direction TB
0[ ] -.-
@ -132,7 +132,7 @@ Figure 2 shows the steps to follow when you work from a local fork. The details
{{< mermaid >}}
flowchart LR
1[Fork the K8s/website<br>repository] --> 2[Create local clone<br>and set upstream]
1[Fork the kubernetes/website<br>repository] --> 2[Create local clone<br>and set upstream]
subgraph changes[Your changes]
direction TB
S[ ] -.-
@ -359,7 +359,9 @@ Alternately, install and use the `hugo` command on your computer:
### Open a pull request from your fork to kubernetes/website {#open-a-pr}
Figure 3 shows the steps to open a PR from your fork to the K8s/website. The details follow.
Figure 3 shows the steps to open a PR from your fork to the [kubernetes/website](https://github.com/kubernetes/website). The details follow.
Please note that contributors may refer to `kubernetes/website` as `k/website`.
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
@ -368,7 +370,7 @@ Figure 3 shows the steps to open a PR from your fork to the K8s/website. The det
flowchart LR
subgraph first[ ]
direction TB
1[1. Go to K8s/website repository] --> 2[2. Select New Pull Request]
1[1. Go to kubernetes/website repository] --> 2[2. Select New Pull Request]
2 --> 3[3. Select compare across forks]
3 --> 4[4. Select your fork from<br>head repository drop-down menu]
end
@ -387,7 +389,7 @@ class 1,2,3,4,5,6,7,8 grey
class first,second white
{{</ mermaid >}}
Figure 3. Steps to open a PR from your fork to the K8s/website.
Figure 3. Steps to open a PR from your fork to the [kubernetes/website](https://github.com/kubernetes/website).
1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository.
1. Select **New Pull Request**.

View File

@ -260,7 +260,8 @@ You should use the [local](/docs/contribute/new-content/open-a-pr/#preview-local
and Netlify previews to verify the diagram is properly rendered.
{{< caution >}}
The Mermaid live editor feature set may not support the K8s/website Mermaid feature set.
The Mermaid live editor feature set may not support the [kubernetes/website](https://github.com/kubernetes/website) Mermaid feature set.
Please note that contributors may refer to `kubernetes/website` as `k/website`.
You might see a syntax error or a blank screen after the Hugo build.
If that is the case, consider using the Mermaid+SVG method.
{{< /caution >}}
@ -342,7 +343,7 @@ The following lists advantages of the Mermaid+SVG method:
* Live editor tool.
* Live editor tool supports the most current Mermaid feature set.
* Employ existing K8s/website methods for handling `.svg` image files.
* Employ existing [kubernetes/website](https://github.com/kubernetes/website) methods for handling `.svg` image files.
* Environment doesn't require Mermaid support.
Be sure to check that your diagram renders properly using the

View File

@ -113,7 +113,7 @@ actions. Failures defined by the `failurePolicy` are enforced
according to these actions only if the `failurePolicy` is set to `Fail` (or not specified);
otherwise the failures are ignored.
See [Audit Annotations: validation falures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation_failure)
See [Audit Annotations: validation failures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)
for more details about the validation failure audit annotation.
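As a hedged sketch, a policy that sets `failurePolicy: Fail` explicitly might look like the following; the name, match rules, and validation expression are illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit.policy.example.com   # illustrative name
spec:
  failurePolicy: Fail          # violations are enforced rather than ignored
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
```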
### Parameter resources
@ -503,4 +503,4 @@ kubectl create deploy --image=dev.example.com/nginx invalid
The error message is similar to this.
```console
error: failed to create deployment: deployments.apps "invalid" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed in namespace default
```
```

View File

@ -13,7 +13,6 @@ can specify on different Kubernetes components.
See [feature stages](#feature-stages) for an explanation of the stages for a feature.
<!-- body -->
## Overview
@ -75,15 +74,17 @@ For a reference to old feature gates that are removed, please refer to
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
| `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | |
| CRDValidationRatcheting | false | Alpha | 1.28 |
| `CRDValidationRatcheting` | `false` | Alpha | 1.28 | |
| `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 |
| `CSIMigrationPortworx` | `false` | Beta | 1.25 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
| `CloudControllerManagerWebhook` | false | Alpha | 1.27 | |
| `CloudDualStackNodeIPs` | false | Alpha | 1.27 | 1.28 |
| `CloudDualStackNodeIPs` | true | Beta | 1.29 | |
| `CloudControllerManagerWebhook` | `false` | Alpha | 1.27 | |
| `CloudDualStackNodeIPs` | `false` | Alpha | 1.27 | 1.28 |
| `CloudDualStackNodeIPs` | `true` | Beta | 1.29 | |
| `ClusterTrustBundle` | false | Alpha | 1.27 | |
| `ConsistentListFromCache` | `false` | Alpha | 1.28 |
| `ComponentSLIs` | `false` | Alpha | 1.26 | 1.26 |
| `ComponentSLIs` | `true` | Beta | 1.27 | |
| `ConsistentListFromCache` | `false` | Alpha | 1.28 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
| `CronJobsScheduledAnnotation` | `true` | Beta | 1.28 | |
@ -95,9 +96,9 @@ For a reference to old feature gates that are removed, please refer to
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
| `ElasticIndexedJob` | `true` | Beta` | 1.27 | |
| `ElasticIndexedJob` | `true` | Beta | 1.27 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | 1.26 |
| `EventedPLEG` | `false` | Beta | 1.27 | - |
| `EventedPLEG` | `false` | Beta | 1.27 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | 1.23 |
@ -210,7 +211,7 @@ For a reference to old feature gates that are removed, please refer to
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 |
| `ValidatingAdmissionPolicy` | `false` | Beta | 1.28 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | |
| `WatchList` | false | Alpha | 1.27 | |
| `WatchList` | `false` | Alpha | 1.27 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
@ -344,7 +345,8 @@ An *Alpha* feature means:
A *Beta* feature means:
* Usually enabled by default. Beta API groups are [disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
* Usually enabled by default. Beta API groups are
[disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
* The feature is well tested. Enabling the feature is considered safe.
* Support for the overall feature will not be dropped, though details may change.
* The schema and/or semantics of objects may change in incompatible ways in a
@ -394,11 +396,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies,
experimental, Alpha-quality options
experimental, Alpha-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is alpha.
This feature gate will never graduate to beta or stable.
- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies,
experimental, Beta-quality options
experimental, Beta-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is beta.
This feature gate will never graduate to stable.
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
@ -442,16 +444,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ContextualLogging`: When you enable this feature gate, Kubernetes components that support
contextual logging add extra detail to log output.
contextual logging add extra detail to log output.
- `CronJobsScheduledAnnotation`: Set the scheduled job time as an
{{< glossary_tooltip text="annotation" term_id="annotation" >}} on Jobs that were created
on behalf of a CronJob.
- `CRDValidationRatcheting`: Enable updates to custom resources to contain
violations of their OpenAPI schema if the offending portions of the resource
update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting) for more details.
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/).
- `CRDValidationRatcheting`: Enable updates to custom resources to contain
violations of their OpenAPI schema if the offending portions of the resource
update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting)
for more details.
- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
to allow you to specify a source namespace in the `dataSourceRef` field of a
PersistentVolumeClaim.
to allow you to specify a source namespace in the `dataSourceRef` field of a
PersistentVolumeClaim.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
[kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
@ -494,7 +498,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
- `ExpandedDNSConfig`: Enable kubelet and kube-apiserver to allow more DNS
search paths and longer list of DNS search paths. This feature requires container
runtime support(Containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
runtime support (containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
[Expanded DNS Configuration](/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExperimentalHostUserNamespaceDefaulting`: Enabling the defaulting user
namespace to host. This is for containers that are using other host namespaces,
@ -508,6 +512,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
when shutting down a node gracefully.
- `GRPCContainerProbe`: Enables the gRPC probe method for liveness, readiness and startup probes.
See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
- `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering.
For more details, check the
[PersistentVolume deletion protection finalizer](/docs/concepts/storage/persistent-volumes/#persistentvolume-deletion-protection-finalizer)
@ -534,25 +540,33 @@ Each feature gate is designed for enabling/disabling a specific feature:
and volume controllers.
- `InTreePluginvSphereUnregister`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers.
- `JobMutableNodeSchedulingDirectives`: Allows updating node scheduling directives in
the pod template of [Job](/docs/concepts/workloads/controllers/job/).
- `JobBackoffLimitPerIndex`: Allows specifying the maximal number of pod
retries per index in Indexed jobs.
- `JobPodFailurePolicy`: Allow users to specify handling of pod failures based on container
exit codes and pod conditions.
- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a [Job](/docs/concepts/workloads/controllers/job)
- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a
[Job](/docs/concepts/workloads/controllers/job/).
- `JobReadyPods`: Enables tracking the number of Pods that have a `Ready`
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions).
The count of `Ready` pods is recorded in the
[status](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)
of a [Job](/docs/concepts/workloads/controllers/job) status.
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job)
of a [Job](/docs/concepts/workloads/controllers/job/) status.
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job/)
completions without relying on Pods remaining in the cluster indefinitely.
The Job controller uses Pod finalizers and a field in the Job status to keep
track of the finished Pods to count towards completion.
- `KMSv1`: Enables KMS v1 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
- `KMSv2`: Enables KMS v2 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
- `KMSv1`: Enables KMS v1 API for encryption at rest. See
[Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details.
- `KMSv2`: Enables KMS v2 API for encryption at rest. See
[Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details.
- `KMSv2KDF`: Enables KMS v2 to generate single use data encryption keys.
See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
If the `KMSv2` feature gate is not enabled in your cluster, the value of the `KMSv2KDF` feature gate has no effect.
See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details. If the `KMSv2` feature gate is not enabled in your cluster, the value of
the `KMSv2KDF` feature gate has no effect.
- `KubeProxyDrainingTerminatingNodes`: Implement connection draining for
terminating nodes for `externalTrafficPolicy: Cluster` services.
- `KubeletCgroupDriverFromCRI`: Enable detection of the kubelet cgroup driver
@ -564,11 +578,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
line argument). If you enable this feature gate and the container runtime
doesn't support it, the kubelet falls back to using the driver configured using
the `cgroupDriver` configuration setting.
See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver)
See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)
for more details.
- `KubeletInUserNamespace`: Enables support for running kubelet in a
{{<glossary_tooltip text="user namespace" term_id="userns">}}.
See [Running Kubernetes Node Components as a Non-root User](/docs/tasks/administer-cluster/kubelet-in-userns/).
See [Running Kubernetes Node Components as a Non-root User](/docs/tasks/administer-cluster/kubelet-in-userns/).
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
@ -576,16 +590,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
This API augments the [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- `KubeletPodResourcesGetAllocatable`: Enable the kubelet's pod resources
`GetAllocatableResources` functionality. This API augments the
[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint to
[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint
to include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
with informations about the allocatable resources, enabling clients to properly
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
for more details, with information about the allocatable resources, enabling clients to properly
track the free compute resources on a node.
- `KubeletTracing`: Add support for distributed tracing in the kubelet.
When enabled, kubelet CRI interface and authenticated http servers are instrumented to generate
OpenTelemetry trace spans.
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
- `LegacyServiceAccountTokenNoAutoGeneration`: Stop auto-generation of Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token).
- `LegacyServiceAccountTokenCleanUp`: Enable cleaning up Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token)
when they are not used in a specified time (default to be one year).
@ -637,31 +653,35 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NodeLogQuery`: Enables querying logs of node services using the `/logs` endpoint.
- `NodeOutOfServiceVolumeDetach`: When a Node is marked out-of-service using the
`node.kubernetes.io/out-of-service` taint, Pods on the node will be forcefully deleted
if they can not tolerate this taint, and the volume detach operations for Pods terminating
on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
if they can not tolerate this taint, and the volume detach operations for Pods terminating
on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
- `NodeSwap`: Enable the kubelet to allocate swap memory for Kubernetes workloads on a node.
Must be used with `KubeletConfiguration.failSwapOn` set to false.
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory).
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`. This specifies
when unhealthy pods should be considered for eviction. Please see [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`.
This specifies when unhealthy pods should be considered for eviction. Please see
[Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
for more details.
- `PersistentVolumeLastPhaseTransitionTime`: Adds a new field to PersistentVolume
which holds a timestamp of when the volume last transitioned its phase.
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the CRI container runtime rather than gathering them from cAdvisor.
As of 1.26, this also includes gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the
CRI container runtime rather than gathering them from cAdvisor. As of 1.26, this also includes
gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
feature which allows users to influence ReplicaSet downscaling order.
- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that the pod is being deleted due to a disruption.
feature which allows users to influence ReplicaSet downscaling order.
- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that
the pod is being deleted due to a disruption.
- `PodHostIPs`: Enable the `status.hostIPs` field for pods and the {{< glossary_tooltip term_id="downward-api" text="downward API" >}}.
The field lets you expose host IP addresses to workloads.
- `PodIndexLabel`: Enables the Job controller and StatefulSet controller to add the pod index as a label when creating new pods. See [Job completion mode docs](/docs/concepts/workloads/controllers/job#completion-mode) and [StatefulSet pod index label docs](/docs/concepts/workloads/controllers/statefulset/#pod-index-label) for more details.
- `PodLifecycleSleepAction`: Enables the `sleep` action in Container lifecycle hooks.
- `PodReadyToStartContainersCondition`: Enable the kubelet to mark the [PodReadyToStartContainers](/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network)
condition on pods. This was previously (1.25-1.27) known as `PodHasNetworkCondition`.
- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's
[scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).
- `ProbeTerminationGracePeriod`: Enable [setting probe-level
`terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds)
on pods. See the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2238-liveness-probe-grace-period)
@ -726,9 +746,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `StorageVersionHash`: Allow API servers to expose the storage version hash in the
discovery.
- `TopologyAwareHints`: Enables topology aware routing based on topology hints
in EndpointSlices. See [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/)
for more details.
- `TopologyManager`: Enable a mechanism to coordinate fine-grained hardware resource
assignments for different components in Kubernetes. See
[Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/).
- `TopologyManagerPolicyAlphaOptions`: Allow fine-tuning of topology manager policies,
experimental, Alpha-quality options.
This feature gate guards *a group* of topology manager options whose quality level is alpha.
@ -743,7 +765,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
See [Mixed version proxy](/docs/concepts/architecture/mixed-version-proxy/) for more information.
- `UserNamespacesSupport`: Enable user namespace support for Pods.
Before Kubernetes v1.28, this feature gate was named `UserNamespacesStatelessPodsSupport`.
- `ValidatingAdmissionPolicy`: Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/)
support for CEL validations to be used in Admission Control.
- `VolumeCapacityPriority`: Enable support for prioritizing nodes in different
topologies based on available PV capacity.
- `WatchBookmark`: Enable support for watch bookmark events.
@ -752,7 +775,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsHostNetwork`: Enables support for joining Windows containers to a host's network namespace.
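As with any other feature gate, the gates described above are toggled through the `--feature-gates`
command line argument of the relevant component (or the equivalent field of its configuration file).
For example, an illustrative sketch only; pick the gates and component that apply to your cluster:

```shell
--feature-gates="PDBUnhealthyPodEvictionPolicy=true,ValidatingAdmissionPolicy=true"
```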
## {{% heading "whatsnext" %}}
* The [deprecation policy](/docs/reference/using-api/deprecation-policy/) for Kubernetes explains

View File

@ -7,13 +7,10 @@ weight: 140
<!-- overview -->
This page shows how to connect to services running on the Kubernetes cluster.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Accessing services running on the cluster
@ -28,30 +25,30 @@ such as your desktop machine.
You have several options for connecting to nodes, pods and services from outside the cluster:
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](/docs/concepts/services-networking/service/) and
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- Depending on your cluster environment, this may only expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
place a unique label on the pod and create a new service which selects this label.
- In most cases, it should not be necessary for an application developer to directly access
nodes via their nodeIPs.
- Access services, nodes, or pods using the Proxy Verb.
- Does apiserver authentication and authorization prior to accessing the remote service.
Use this if the services are not secure enough to expose to the internet, or to gain
access to ports on the node IP, or for debugging.
- Proxies may cause problems for some web applications.
- Only works for HTTP/HTTPS.
- Described [here](#manually-constructing-apiserver-proxy-urls).
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
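For example, a throwaway Pod with an interactive shell can be started like this (a minimal sketch;
the image is only an illustration):

```shell
kubectl run tmp-shell --rm -i --tty --image=busybox -- /bin/sh
```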
### Discovering builtin services
@ -75,19 +72,23 @@ heapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`
if suitable credentials are passed, or through a kubectl proxy at, for example:
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
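For instance (a sketch; this assumes `kubectl` is already configured for the cluster and that port 8080 is free locally):

```shell
# Start a local proxy to the API server
kubectl proxy --port=8080 &

# Reach the proxied service through the local proxy
curl http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```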
{{< note >}}
See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api)
for how to pass credentials or use kubectl proxy.
{{< /note >}}
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create
proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also
use the port number in place of the *port_name* for both named and unnamed ports.
By default, the API server proxies to your service using HTTP. To use HTTPS, prefix the service name with `https:`:
`http://<kubernetes_master_address>/api/v1/namespaces/<namespace_name>/services/<service_name>/proxy`
@ -99,53 +100,49 @@ The supported formats for the `<service_name>` segment of the URL are:
* `https:<service_name>:` - proxies to the default or unnamed port using https (note the trailing colon)
* `https:<service_name>:<port_name>` - proxies to the specified port name or port number using https
##### Examples
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use:
```
http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy
```
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use:
```
https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
```
The health information is similar to this:
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
* To access the *https* Elasticsearch service health information `_cluster/health?pretty=true`, you would use:
```
https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true
```
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth.
Apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept basic auth.
- Some web apps may not work, particularly those with client side javascript that construct URLs in a
way that is unaware of the proxy path prefix.

View File

@ -6,11 +6,28 @@ weight: 120
<!-- overview -->
This page explains how to enable a package repository for the desired
Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at `pkgs.k8s.io`.
Unlike the legacy package repositories, the community-owned package
repositories are structured in a way that there's a dedicated package
repository for each Kubernetes minor version.
{{< note >}}
This guide only covers a part of the Kubernetes upgrade process. Please see the
[upgrade guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for
more information about upgrading Kubernetes clusters.
{{</ note >}}
{{< note >}}
This step is only needed upon upgrading a cluster to another **minor** release.
If you're upgrading to another patch release within the same minor release (e.g.
v{{< skew currentVersion >}}.5 to v{{< skew currentVersion >}}.7), you don't
need to follow this guide. However, if you're still using the legacy package
repositories, you'll need to migrate to the new community-owned package
repositories before upgrading (see the next section for more details on how to
do this).
{{</ note >}}
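As a rough preview of what this involves on a Debian-based host that already uses the
community-owned repositories (the versions below are illustrative; the detailed steps are
described later in this page):

```shell
# Point apt at the package repository of the next minor release and refresh the index
sed -i 's|/core:/stable:/v1.28/|/core:/stable:/v1.29/|' /etc/apt/sources.list.d/kubernetes.list
apt-get update
```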
## {{% heading "prerequisites" %}}

View File

@ -69,6 +69,34 @@ There are three things to check:
* Try to manually pull the image to see if the image can be pulled. For example,
if you use Docker on your PC, run `docker pull <image>`.
#### My pod stays terminating
If a Pod is stuck in the `Terminating` state, it means that a deletion has been
issued for the Pod, but the control plane is unable to delete the Pod object.
This typically happens if the Pod has a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)
and there is an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
installed in the cluster that prevents the control plane from removing the
finalizer.
To identify this scenario, check if your cluster has any
ValidatingWebhookConfiguration or MutatingWebhookConfiguration that target
`UPDATE` operations for `pods` resources.
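One way to check for this (a sketch; the commands assume `kubectl` access to the cluster):

```shell
# Inspect webhook configurations and look for rules that include the UPDATE
# operation on the pods resource
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations -o yaml

# Check whether the stuck Pod still has a finalizer set
kubectl get pod <pod-name> -o jsonpath='{.metadata.finalizers}'
```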
If the webhook is provided by a third-party:
- Make sure you are using the latest version.
- Disable the webhook for `UPDATE` operations.
- Report an issue with the corresponding provider.
If you are the author of the webhook:
- For a mutating webhook, make sure it never changes immutable fields on
`UPDATE` operations. For example, changes to containers are usually not allowed.
- For a validating webhook, make sure that your validation policies only apply
to new changes. In other words, you should allow Pods with existing violations
to pass validation. This allows Pods that were created before the validating
webhook was installed to continue running.
#### My pod is crashing or otherwise unhealthy
Once your pod has been scheduled, the methods described in

View File

@ -22,12 +22,12 @@ level. For instructions, refer to
Install the following on your workstation:
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)
## Create cluster
1. Create a `kind` cluster as follows:
```shell
kind create cluster --name psa-ns-level
@ -150,7 +150,7 @@ kind delete cluster --name psa-ns-level
[shell script](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
to perform all the preceding steps all at once.
1. Create kind cluster
2. Create new namespace
3. Apply `baseline` Pod Security Standard in `enforce` mode while applying
`restricted` Pod Security Standard also in `warn` and `audit` mode.
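For example, assuming the script is published under the usual examples location for this site,
it could be fetched and run like this (review any downloaded script before executing it):

```shell
curl -LO https://k8s.io/examples/security/kind-with-namespace-level-baseline-pod-security.sh
chmod +x kind-with-namespace-level-baseline-pod-security.sh
./kind-with-namespace-level-baseline-pod-security.sh
```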

View File

@ -35,6 +35,9 @@ spec:
volumeMounts:
- name: varlog
mountPath: /var/log
# it may be desirable to set a high priority class to ensure that a DaemonSet Pod
# preempts running Pods
# priorityClassName: important
terminationGracePeriodSeconds: 30
volumes:
- name: varlog

View File

@ -10,6 +10,24 @@ cluster. Those components are also shipped in container images as part of the
official release process. All binaries as well as container images are available
for multiple operating systems as well as hardware architectures.
### kubectl
<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources,
and view logs. For more information including a complete list of kubectl operations, see the
[`kubectl` reference documentation](/docs/reference/kubectl/).
kubectl is installable on a variety of Linux platforms, macOS and Windows.
Find your preferred operating system below.
- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux)
- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos)
- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows)
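After installing, you can confirm that the client works, for example:

```shell
kubectl version --client
```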
## Container Images
All Kubernetes container images are deployed to the
@ -53,25 +71,4 @@ To manually verify signed container images of Kubernetes core components, refer
## Binaries
Find links to download Kubernetes components (and their checksums) in the
[CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files.
Alternatively, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture.
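For example, a single binary and its checksum can be fetched and verified like this
(the version and architecture are illustrative):

```shell
curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```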
{{< release-binaries >}}

View File

@ -4,7 +4,7 @@ type: docs
auto_generated: true
---
<!-- THIS CONTENT IS AUTO-GENERATED via ./scripts/releng/update-release-info.sh in kubernetes/website -->
{{< warning >}}
This content is auto-generated and links may not function. The source of the document is located [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md).

View File

@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Home"
linkTitle: "Documentación"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -55,7 +55,7 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servicios

View File

@ -7,7 +7,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Accueil"
linkTitle: "Documentation"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -50,7 +50,7 @@ Sebelum melangkah lebih lanjut ke tutorial, sebaiknya tandai dulu halaman [Kamus
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servis

View File

@ -6,7 +6,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
linkTitle: "Home"
linkTitle: "Documentazione"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -50,7 +50,7 @@ Prima di procedere con vari tutorial, raccomandiamo di aggiungere il
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servizi

View File

@ -6,7 +6,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Документация"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Документація"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -4,6 +4,8 @@ abstract: "Triển khai tự động, nhân rộng và quản lý container"
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) là một hệ thống mã nguồn mở giúp tự động hóa việc triển khai, nhân rộng và quản lý các ứng dụng container.

View File

@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Home"
linkTitle: "Tài liệu"
main_menu: true
weight: 10
hide_feedback: true

View File

@ -0,0 +1,284 @@
---
layout: blog
title: "CRI-O 正迁移至 pkgs.k8s.io"
date: 2023-10-10
slug: cri-o-community-package-infrastructure
---
<!--
layout: blog
title: "CRI-O is moving towards pkgs.k8s.io"
date: 2023-10-10
slug: cri-o-community-package-infrastructure
-->
**作者**Sascha Grunert
<!--
**Author:** Sascha Grunert
-->
**译者**Wilson Wu (DaoCloud)
<!--
The Kubernetes community [recently announced](/blog/2023/08/31/legacy-package-repository-deprecation/) that their legacy package repositories are frozen, and now they moved to [introduced community-owned package repositories](/blog/2023/08/15/pkgs-k8s-io-introduction) powered by the [OpenBuildService (OBS)](https://build.opensuse.org/project/subprojects/isv:kubernetes). CRI-O has a long history of utilizing [OBS for their package builds](https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o), but all of the packaging efforts have been done manually so far.
-->
Kubernetes 社区[最近宣布](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)旧的软件包仓库已被冻结,
现在这些软件包将被迁移到由 [OpenBuildServiceOBS](https://build.opensuse.org/project/subprojects/isv:kubernetes)
提供支持的[社区自治软件包仓库](/blog/2023/08/15/pkgs-k8s-io-introduction)中。
很久以来CRI-O 一直在利用 [OBS 进行软件包构建](https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o)
但到目前为止,所有打包工作都是手动完成的。
<!--
The CRI-O community absolutely loves Kubernetes, which means that they're delighted to announce that:
-->
CRI-O 社区非常喜欢 Kubernetes这意味着他们很高兴地宣布
<!--
**All future CRI-O packages will be shipped as part of the officially supported Kubernetes infrastructure hosted on pkgs.k8s.io!**
-->
**所有未来的 CRI-O 包都将作为在 pkgs.k8s.io 上托管的官方支持的 Kubernetes 基础设施的一部分提供!**
<!--
There will be a deprecation phase for the existing packages, which is currently being [discussed in the CRI-O community](https://github.com/cri-o/cri-o/discussions/7315). The new infrastructure will only support releases of CRI-O `>= v1.28.2` as well as release branches newer than `release-1.28`.
-->
现有软件包将进入一个弃用阶段,目前正在
[CRI-O 社区中讨论](https://github.com/cri-o/cri-o/discussions/7315)。
新的基础设施将仅支持 CRI-O `>= v1.28.2` 的版本以及比 `release-1.28` 新的版本分支。
<!--
## How to use the new packages
-->
## 如何使用新软件包 {#how-to-use-the-new-packages}
<!--
In the same way as the Kubernetes community, CRI-O provides `deb` and `rpm` packages as part of a dedicated subproject in OBS, called [`isv:kubernetes:addons:cri-o`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o). This project acts as an umbrella and provides `stable` (for CRI-O tags) as well as `prerelease` (for CRI-O `release-1.y` and `main` branches) package builds.
-->
与 Kubernetes 社区一样CRI-O 提供 `deb` 和 `rpm` 软件包作为 OBS 中专用子项目的一部分,
被称为 [`isv:kubernetes:addons:cri-o`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o)。
这个项目是一个集合,提供 `stable`(针对 CRI-O 标记)以及 `prerelease`(针对 CRI-O `release-1.y``main` 分支)版本的软件包。
<!--
**Stable Releases:**
-->
**稳定版本:**
<!--
- [`isv:kubernetes:addons:cri-o:stable`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable): Stable Packages
- [`isv:kubernetes:addons:cri-o:stable:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.29): `v1.29.z` tags
- [`isv:kubernetes:addons:cri-o:stable:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.28): `v1.28.z` tags
-->
- [`isv:kubernetes:addons:cri-o:stable`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable):稳定软件包
- [`isv:kubernetes:addons:cri-o:stable:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.29)`v1.29.z` 标记
- [`isv:kubernetes:addons:cri-o:stable:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.28)`v1.28.z` 标记
<!--
**Prereleases:**
-->
**预发布版本:**
<!--
- [`isv:kubernetes:addons:cri-o:prerelease`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease): Prerelease Packages
- [`isv:kubernetes:addons:cri-o:prerelease:main`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:main): [`main`](https://github.com/cri-o/cri-o/commits/main) branch
- [`isv:kubernetes:addons:cri-o:prerelease:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.29): [`release-1.29`](https://github.com/cri-o/cri-o/commits/release-1.29) branch
- [`isv:kubernetes:addons:cri-o:prerelease:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.28): [`release-1.28`](https://github.com/cri-o/cri-o/commits/release-1.28) branch
-->
- [`isv:kubernetes:addons:cri-o:prerelease`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease):预发布软件包
- [`isv:kubernetes:addons:cri-o:prerelease:main`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:main)
[`main`](https://github.com/cri-o/cri-o/commits/main) 分支
- [`isv:kubernetes:addons:cri-o:prerelease:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.29)
[`release-1.29`](https://github.com/cri-o/cri-o/commits/release-1.29) 分支
- [`isv:kubernetes:addons:cri-o:prerelease:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.28)
[`release-1.28`](https://github.com/cri-o/cri-o/commits/release-1.28) 分支
<!--
There are no stable releases available in the v1.29 repository yet, because v1.29.0 will be released in December. The CRI-O community will also **not** support release branches older than `release-1.28`, because there have been CI requirements merged into `main` which could be only backported to `release-1.28` with appropriate efforts.
-->
v1.29 仓库中尚无可用的稳定版本,因为 v1.29.0 将于 12 月发布。
CRI-O 社区也**不**支持早于 `release-1.28` 的版本分支,
因为已经有 CI 需求合并到 `main` 中,只有通过适当的努力才能向后移植到 `release-1.28`
<!--
For example, If an end-user would like to install the latest available version of the CRI-O `main` branch, then they can add the repository in the same way as they do for Kubernetes.
-->
例如,如果最终用户想要安装 CRI-O `main` 分支的最新可用版本,
那么他们可以按照与 Kubernetes 相同的方式添加仓库。
<!--
### `rpm` Based Distributions
-->
### 基于 `rpm` 的发行版 {#rpm-based-distributions}
<!--
For `rpm` based distributions, you can run the following commands as a `root` user to install CRI-O together with Kubernetes:
-->
对于基于 `rpm` 的发行版,您可以以 `root`
用户身份运行以下命令来将 CRI-O 与 Kubernetes 一起安装:
<!--
#### Add the Kubernetes repo
-->
#### 添加 Kubernetes 仓库 {#add-the-kubernetes-repo}
```bash
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
EOF
```
<!--
#### Add the CRI-O repo
-->
#### 添加 CRI-O 仓库 {#add-the-cri-o-repo}
```bash
cat <<EOF | tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/repodata/repomd.xml.key
EOF
```
<!--
#### Install official package dependencies
-->
#### 安装官方包依赖 {#install-official-package-dependencies}
```bash
dnf install -y \
conntrack \
container-selinux \
ebtables \
ethtool \
iptables \
socat
```
<!--
#### Install the packages from the added repos
-->
#### 从添加的仓库中安装软件包 {#install-the-packages-from-the-added-repos}
```bash
dnf install -y --repo cri-o --repo kubernetes \
cri-o \
kubeadm \
kubectl \
kubelet
```
<!--
### `deb` Based Distributions
-->
### 基于 `deb` 的发行版 {#deb-based-distributions}
<!--
For `deb` based distributions, you can run the following commands as a `root` user:
-->
对于基于 `deb` 的发行版,您可以以 `root` 用户身份运行以下命令:
<!--
#### Install dependencies for adding the repositories
-->
#### 安装用于添加仓库的依赖项 {#install-dependencies-for-adding-the-repositories}
```bash
apt-get update
apt-get install -y software-properties-common curl
```
<!--
#### Add the Kubernetes repository
-->
#### 添加 Kubernetes 仓库 {#add-the-kubernetes-repository}
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key |
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" |
tee /etc/apt/sources.list.d/kubernetes.list
```
<!--
#### Add the CRI-O repository
-->
#### 添加 CRI-O 仓库 {#add-the-cri-o-repository}
```bash
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key |
gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" |
tee /etc/apt/sources.list.d/cri-o.list
```
<!--
#### Install the packages
-->
#### 安装软件包 {#install-the-packages}
```bash
apt-get update
apt-get install -y cri-o kubelet kubeadm kubectl
```
<!--
#### Start CRI-O
-->
#### 启动 CRI-O {#start-cri-o}
```bash
systemctl start crio.service
```
<!--
The Project's `prerelease:/main` prefix at the CRI-O's package path, can be replaced with `stable:/v1.28`, `stable:/v1.29`, `prerelease:/v1.28` or `prerelease:/v1.29` if another stream package is used.
-->
如果使用的是另一个包序列CRI-O 包路径中项目的 `prerelease:/main`
前缀可以替换为 `stable:/v1.28`、`stable:/v1.29`、`prerelease:/v1.28` 或 `prerelease:/v1.29`。
<!--
Bootstrapping [a cluster using `kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) can be done by running `kubeadm init` command, which automatically detects that CRI-O is running in the background. There are also `Vagrantfile` examples available for [Fedora 38](https://github.com/cri-o/packaging/blob/91df5f7/test/rpm/Vagrantfile) as well as [Ubuntu 22.04](https://github.com/cri-o/packaging/blob/91df5f7/test/deb/Vagrantfile) for testing the packages together with `kubeadm`.
-->
你可以使用 `kubeadm init` 命令来[引导集群](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
该命令会自动检测后台正在运行 CRI-O。还有适用于
[Fedora 38](https://github.com/cri-o/packaging/blob/91df5f7/test/rpm/Vagrantfile)
以及 [Ubuntu 22.04](https://github.com/cri-o/packaging/blob/91df5f7/test/deb/Vagrantfile)
`Vagrantfile` 示例,可在使用 `kubeadm` 的场景中测试下载的软件包。
<!--
## How it works under the hood
-->
## 它是如何工作的 {#how-it-works-under-the-hood}
<!--
Everything related to these packages lives in the new [CRI-O packaging repository](https://github.com/cri-o/packaging). It contains a [daily reconciliation](https://github.com/cri-o/packaging/blob/91df5f7/.github/workflows/schedule.yml) GitHub action workflow, for all supported release branches as well as tags of CRI-O. A [test pipeline](https://github.com/cri-o/packaging/actions/workflows/obs.yml) in the OBS workflow ensures that the packages can be correctly installed and used before being published. All of the staging and publishing of the packages is done with the help of the [Kubernetes Release Toolbox (krel)](https://github.com/kubernetes/release/blob/1f85912/docs/krel/README.md), which is also used for the official Kubernetes `deb` and `rpm` packages.
-->
与这些包相关的所有内容都位于新的 [CRI-O 打包仓库](https://github.com/cri-o/packaging)中。
它包含 [Daily Reconciliation](https://github.com/cri-o/packaging/blob/91df5f7/.github/workflows/schedule.yml) GitHub 工作流,
支持所有发布分支以及 CRI-O 标签。
OBS 工作流程中的[测试管道](https://github.com/cri-o/packaging/actions/workflows/obs.yml)确保包在发布之前可以被正确安装和使用。
所有包的暂存和发布都是在 [Kubernetes 发布工具箱krel](https://github.com/kubernetes/release/blob/1f85912/docs/krel/README.md)的帮助下完成的,
这一工具箱也被用于官方 Kubernetes `deb``rpm` 软件包。
<!--
The package build inputs will undergo daily reconciliation and will be supplied by CRI-O's static binary bundles. These bundles are built and signed for each commit in the CRI-O CI, and contain everything CRI-O requires to run on a certain architecture. The static builds are reproducible, powered by [nixpkgs](https://github.com/NixOS/nixpkgs) and available only for `x86_64`, `aarch64` and `ppc64le` architecture.
-->
包构建的输入每天都会被动态调整,并使用 CRI-O 的静态二进制包。
这些包是基于 CRI-O CI 中的每次提交来构建和签名的,
并且包含 CRI-O 在特定架构上运行所需的所有内容。静态构建是可重复的,
由 [nixpkgs](https://github.com/NixOS/nixpkgs) 提供支持,
并且仅适用于 `x86_64`、`aarch64` 以及 `ppc64le` 架构。
<!--
The CRI-O maintainers will be happy to listen to any feedback or suggestions on the new packaging efforts! Thank you for reading this blog post, feel free to reach out to the maintainers via the Kubernetes [Slack channel #crio](https://kubernetes.slack.com/messages/CAZH62UR1) or create an issue in the [packaging repository](https://github.com/cri-o/packaging/issues).
-->
CRI-O 维护者将很乐意听取有关新软件包工作情况的任何反馈或建议!
感谢您阅读本文,请随时通过 Kubernetes [Slack 频道 #crio](https://kubernetes.slack.com/messages/CAZH62UR1)
联系维护人员或在[打包仓库](https://github.com/cri-o/packaging/issues)中创建 Issue。

View File

@ -0,0 +1,208 @@
---
layout: blog
title: Kubernetes 中 PersistentVolume 的最后阶段转换时间
date: 2023-10-23
slug: persistent-volume-last-phase-transition-time
---
<!--
layout: blog
title: PersistentVolume Last Phase Transition Time in Kubernetes
date: 2023-10-23
slug: persistent-volume-last-phase-transition-time
-->
<!--
**Author:** Roman Bednář (Red Hat)
-->
**作者:** Roman Bednář (Red Hat)
**译者:** Xin Li (DaoCloud)
<!--
In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV)
storage management and help cluster administrators gain better insights into the lifecycle of PVs.
With the addition of the `lastPhaseTransitionTime` field into the status of a PV,
cluster administrators are now able to track the last time a PV transitioned to a different
[phase](/docs/concepts/storage/persistent-volumes/#phase), allowing for more efficient
and informed resource management.
-->
在最近的 Kubernetes v1.28 版本中我们SIG Storage引入了一项新的 Alpha 级别特性,
旨在改进 PersistentVolumePV存储管理并帮助集群管理员更好地了解 PV 的生命周期。
通过将 `lastPhaseTransitionTime` 字段添加到 PV 的状态中,集群管理员现在可以跟踪
PV 上次转换到不同[阶段](/zh-cn/docs/concepts/storage/persistent-volumes/#phase)的时间,
从而实现更高效、更明智的资源管理。
<!--
## Why do we need new PV field? {#why-new-field}
PersistentVolumes in Kubernetes play a crucial role in providing storage resources to workloads running in the cluster.
However, managing these PVs effectively can be challenging, especially when it comes
to determining the last time a PV transitioned between different phases, such as
`Pending`, `Bound` or `Released`.
Administrators often need to know when a PV was last used or transitioned to certain
phases; for instance, to implement retention policies, perform cleanup, or monitor storage health.
-->
## 我们为什么需要新的 PV 字段? {#why-new-field}
Kubernetes 中的 PersistentVolume 在为集群中运行的工作负载提供存储资源方面发挥着至关重要的作用。
然而,有效管理这些 PV 可能具有挑战性,特别是在确定 PV 在不同阶段(`Pending`、`Bound` 或 `Released`)之间转换的最后时间时。
管理员通常需要知道 PV 上次使用或转换到某些阶段的时间;例如,实施保留策略、执行清理或监控存储运行状况时。
<!--
In the past, Kubernetes users have faced data loss issues when using the `Delete` retain policy and had to resort to the safer `Retain` policy.
When we planned the work to introduce the new `lastPhaseTransitionTime` field, we
wanted to provide a more generic solution that can be used for various use cases,
including manual cleanup based on the time a volume was last used or producing alerts based on phase transition times.
-->
过去Kubernetes 用户在使用 `Delete` 保留策略时面临数据丢失问题,不得不使用更安全的 `Retain` 策略。
当我们计划引入新的 `lastPhaseTransitionTime` 字段时,我们希望提供一个更通用的解决方案,
可用于各种用例,包括根据卷上次使用时间进行手动清理或根据状态转变时间生成警报。
<!--
## How lastPhaseTransitionTime helps
Provided you've enabled the feature gate (see [How to use it](#how-to-use-it), the new `.status.lastPhaseTransitionTime` field of a PersistentVolume (PV)
is updated every time that PV transitions from one phase to another.
-->
## lastPhaseTransitionTime 如何提供帮助
如果你已启用特性门控(请参阅[如何使用它](#how-to-use-it)),则每次 PV 从一个阶段转换到另一阶段时,
PersistentVolumePV的新字段 `.status.lastPhaseTransitionTime` 都会被更新。
<!--
Whether it's transitioning from `Pending` to `Bound`, `Bound` to `Released`, or any other phase transition, the `lastPhaseTransitionTime` will be recorded.
For newly created PVs the phase will be set to `Pending` and the `lastPhaseTransitionTime` will be recorded as well.
-->
无论是从 `Pending` 转换到 `Bound`、`Bound` 到 `Released`,还是任何其他阶段转换,都会记录 `lastPhaseTransitionTime`
对于新创建的 PV将被声明为处于 `Pending` 阶段,并且 `lastPhaseTransitionTime` 也将被记录。
<!--
This feature allows cluster administrators to:
-->
此功能允许集群管理员:
<!--
1. Implement Retention Policies
With the `lastPhaseTransitionTime`, administrators can now track when a PV was last used or transitioned to the `Released` phase.
This information can be crucial for implementing retention policies to clean up resources that have been in the `Released` phase for a specific duration.
For example, it is now trivial to write a script or a policy that deletes all PVs that have been in the `Released` phase for a week.
-->
1. 实施保留政策
通过 `lastPhaseTransitionTime`,管理员可以跟踪 PV 上次使用或转换到 `Released` 阶段的时间。
此信息对于实施保留策略以清理在特定时间内处于 `Released` 阶段的资源至关重要。
例如,现在编写一个脚本或一个策略来删除一周内处于 `Released` 阶段的所有 PV 是很简单的。
<!--
2. Monitor Storage Health
By analyzing the phase transition times of PVs, administrators can monitor storage health more effectively.
For example, they can identify PVs that have been in the `Pending` phase for an unusually long time, which may indicate underlying issues with the storage provisioner.
-->
2. 监控存储运行状况
通过分析 PV 的相变时间,管理员可以更有效地监控存储运行状况。
例如,他们可以识别处于 `Pending` 阶段时间异常长的 PV这可能表明存储配置程序存在潜在问题。
<!--
## How to use it
The `lastPhaseTransitionTime` field is alpha starting from Kubernetes v1.28, so it requires
the `PersistentVolumeLastPhaseTransitionTime` feature gate to be enabled.
-->
## 如何使用它
从 Kubernetes v1.28 开始,`lastPhaseTransitionTime` 为 Alpha 特性字段,因此需要启用
`PersistentVolumeLastPhaseTransitionTime` 特性门控。
<!--
If you want to test the feature whilst it's alpha, you need to enable this feature gate on the `kube-controller-manager` and the `kube-apiserver`.
Use the `--feature-gates` command line argument:
-->
如果你想在该特性处于 Alpha 阶段时对其进行测试,则需要在 `kube-controller-manager`
`kube-apiserver` 上启用此特性门控。
使用 `--feature-gates` 命令行参数:
```shell
--feature-gates="...,PersistentVolumeLastPhaseTransitionTime=true"
```
<!--
Keep in mind that the feature enablement does not have immediate effect; the new field will be populated whenever a PV is updated and transitions between phases.
Administrators can then access the new field through the PV status, which can be retrieved using standard Kubernetes API calls or through Kubernetes client libraries.
-->
请记住,该特性启用后不会立即生效;而是在 PV 更新以及阶段之间转换时,填充新字段。
然后,管理员可以通过查看 PV 状态访问新字段,此状态可以使用标准 Kubernetes API
调用或通过 Kubernetes 客户端库进行检索。
<!--
Here is an example of how to retrieve the `lastPhaseTransitionTime` for a specific PV using the `kubectl` command-line tool:
-->
以下示例展示了如何使用 `kubectl` 命令行工具检索特定 PV 的 `lastPhaseTransitionTime`
```shell
kubectl get pv <pv-name> -o jsonpath='{.status.lastPhaseTransitionTime}'
```
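<!--
As a rough, illustrative sketch (not part of the original announcement), a retention-policy check
like the one described earlier could look as follows, assuming GNU `date` and `jq` are available
and the feature gate is enabled:
-->
作为一个粗略的示例(并非原文的一部分),前文提到的保留策略检查大致可以写成下面这样,
假设系统上有 GNU `date` 和 `jq`,并且已启用该特性门控:

```shell
# 列出处于 Released 阶段、且 lastPhaseTransitionTime 早于一周之前的所有 PV
kubectl get pv -o json | jq -r \
  --arg cutoff "$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)" \
  '.items[] | select(.status.phase == "Released"
      and .status.lastPhaseTransitionTime != null
      and .status.lastPhaseTransitionTime < $cutoff)
    | .metadata.name'
```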
<!--
## Going forward
This feature was initially introduced as an alpha feature, behind a feature gate that is disabled by default.
During the alpha phase, we (Kubernetes SIG Storage) will collect feedback from the end user community and address any issues or improvements identified.
Once sufficient feedback has been received, or no complaints are received the feature can move to beta.
The beta phase will allow us to further validate the implementation and ensure its stability.
-->
## 未来发展
此特性最初是作为 Alpha 特性引入的,位于默认情况下禁用的特性门控之下。
在 Alpha 阶段我们Kubernetes SIG Storage将收集最终用户的反馈并解决发现的任何问题或改进。
一旦收到足够的反馈,或者没有收到投诉,该特性就可以进入 Beta 阶段。
Beta 阶段将使我们能够进一步验证实施并确保其稳定性。
<!--
At least two Kubernetes releases will happen between the release where this field graduates
to beta and the release that graduates the field to general availability (GA). That means that
the earliest release where this field could be generally available is Kubernetes 1.32,
likely to be scheduled for early 2025.
-->
在该字段升级到 Beta 级别和将该字段升级为通用版 (GA) 的版本之间,至少会经过两个 Kubernetes 版本。
这意味着该字段 GA 的最早版本是 Kubernetes 1.32,可能计划于 2025 年初发布。
<!--
## Getting involved
We always welcome new contributors so if you would like to get involved you can
join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
-->
## 欢迎参与
我们始终欢迎新的贡献者,因此如果你想参与其中,可以加入我们的
[Kubernetes 存储特殊兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)SIG
<!--
If you would like to share feedback, you can do so on our
[public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
If you're not already part of that Slack workspace, you can visit https://slack.k8s.io/ for an invitation.
-->
如果你想分享反馈,可以在我们的 [公共 Slack 频道](https://app.slack.com/client/T09NY5SBT/C09QZFCE5)上分享。
如果你尚未加入 Slack 工作区,可以访问 https://slack.k8s.io/ 获取邀请。
<!--
Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
-->
特别感谢所有提供精彩评论、分享宝贵意见并帮助实现此特性的贡献者(按字母顺序排列):
- Han Kang ([logicalhan](https://github.com/logicalhan))
- Jan Šafránek ([jsafrane](https://github.com/jsafrane))
- Jordan Liggitt ([liggitt](https://github.com/liggitt))
- Kiki ([carlory](https://github.com/carlory))
- Michelle Au ([msau42](https://github.com/msau42))
- Tim Bannister ([sftim](https://github.com/sftim))
- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
- Xing Yang ([xing-yang](https://github.com/xing-yang))

View File

@ -0,0 +1,145 @@
---
layout: blog
title: "介绍 SIG etcd"
slug: introducing-sig-etcd
date: 2023-11-07
canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
---
<!--
layout: blog
title: "Introducing SIG etcd"
slug: introducing-sig-etcd
date: 2023-11-07
canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
-->
<!--
**Authors**: Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
-->
**作者**Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
**译者**Xin Li (Daocloud)
<!--
Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project,
with a substantial share of the community activity happening within them.
When the need arises, [new SIGs can be created](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md),
and that was precisely what happened recently.
-->
特殊兴趣小组SIG是 Kubernetes 项目的基本组成部分,很大一部分的 Kubernetes 社区活动都在其中进行。
当有需要时,可以创建[新的 SIG](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md)
而这正是最近发生的事情。
<!--
[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)
is the most recent addition to the list of Kubernetes SIGs.
In this article we will get to know it a bit better, understand its origins, scope, and plans.
-->
[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)
是 Kubernetes SIG 列表中的最新成员。在这篇文章中,我们将更好地认识它,了解它的起源、职责和计划。
<!--
## The critical role of etcd
If we look inside the control plane of a Kubernetes cluster, we will find
[etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd),
a consistent and highly-available key value store used as Kubernetes' backing
store for all cluster data -- this description alone highlights the critical role that etcd plays,
and the importance of it within the Kubernetes ecosystem.
-->
## etcd 的关键作用
如果我们查看 Kubernetes 集群的控制平面内部,我们会发现
[etcd](https://kubernetes.io/zh-cn/docs/concepts/overview/components/#etcd)
一个一致且高可用的键值存储,用作 Kubernetes 所有集群数据的后台数据库 -- 仅此描述就突出了
etcd 所扮演的关键角色,以及它在 Kubernetes 生态系统中的重要性。
<!--
This critical role makes the health of the etcd project and community an important consideration,
and [concerns about the state of the project](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ)
in early 2022 did not go unnoticed. The changes in the maintainer team, amongst other factors,
contributed to a situation that needed to be addressed.
-->
由于 etcd 在生态中的关键作用,其项目和社区的健康成为了一个重要的考虑因素,
并且人们 2022 年初[对项目状态的担忧](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ)
并没有被忽视。维护团队的变化以及其他因素导致了一些情况需要被解决。
<!--
## Why a special interest group
With the critical role of etcd in mind, it was proposed that the way forward would
be to create a new special interest group. If etcd was already at the heart of Kubernetes,
creating a dedicated SIG not only recognises that role, it would make etcd a first-class citizen of the Kubernetes community.
-->
## 为什么要设立特殊兴趣小组
考虑到 etcd 的关键作用,有人提出未来的方向是创建一个新的特殊兴趣小组。
如果 etcd 已经成为 Kubernetes 的核心,创建专门的 SIG 不仅是对这一角色的认可,
还会使 etcd 成为 Kubernetes 社区的一等公民。
<!--
Establishing SIG etcd creates a dedicated space to make explicit the contract
between etcd and Kubernetes api machinery and to prevent, on the etcd level,
changes which violate this contract. Additionally, etcd will be able to adopt
the processes that Kubernetes offers its SIGs ([KEPs](https://www.kubernetes.dev/resources/keps/),
[PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md),
[phased feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/),
amongst others) in order to improve the consistency and reliability of the codebase. Being able to use these processes will be a substantial benefit to the etcd community.
-->
SIG etcd 的成立为明确 etcd 和 Kubernetes API 机制之间的契约关系创造了一个专门的空间,
并防止在 etcd 级别上发生违反此契约的更改。此外etcd 将能够采用 Kubernetes 提供的 SIG
流程([KEP](https://www.kubernetes.dev/resources/keps/)、
[PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md)、
[分阶段特性门控](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)以及其他流程)
以提高代码库的一致性和可靠性,这将为 etcd 社区带来巨大的好处。
<!--
As a SIG, etcd will also be able to draw contributor support from Kubernetes proper:
active contributions to etcd from Kubernetes maintainers would decrease the likelihood
of breaking Kubernetes changes, through the increased number of potential reviewers
and the integration with existing testing framework. This will not only benefit Kubernetes,
which will be able to better participate and shape the direction of etcd in terms of the critical role it plays,
but also etcd as a whole.
-->
作为 SIGetcd 还能够从 Kubernetes 获得贡献者的支持Kubernetes 维护者对 etcd
的积极贡献将通过增加潜在审核者数量以及与现有测试框架的集成来降低破坏 Kubernetes 更改的可能性。
这不仅有利于 Kubernetes由于它能够更好地参与并塑造 etcd 所发挥的关键作用,从而也将有利于整个 etcd。
<!--
## About SIG etcd
The recently created SIG is already working towards its goals, defined in its
[Charter](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md)
and [Vision](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md).
The purpose is clear: to ensure etcd is a reliable, simple, and scalable production-ready
store for building cloud-native distributed systems and managing cloud-native infrastructure
via orchestrators like Kubernetes.
-->
## 关于 SIG etcd
最近创建的 SIG 已经在努力实现其[章程](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md)
和[愿景](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md)中定义的目标。
其目的很明确:确保 etcd 是一个可靠、简单且可扩展的生产就绪存储,用于构建云原生分布式系统并通过 Kubernetes 等编排器管理云原生基础设施。
<!--
The scope of SIG etcd is not exclusively about etcd as a Kubernetes component,
it also covers etcd as a standard solution. Our goal is to make etcd the most
reliable key-value storage to be used anywhere, unconstrained by any Kubernetes-specific
limits and scaling to meet the requirements of many diverse use-cases.
-->
SIG etcd 的范围不仅仅涉及将 etcd 作为 Kubernetes 组件,还涵盖将 etcd 作为标准解决方案。
我们的目标是使 etcd 成为可在任何地方使用的最可靠的键值存储,不受任何 kubernetes 特定限制的约束,并且可以扩展以满足许多不同用例的需求。
<!--
We are confident that the creation of SIG etcd constitutes an important milestone
in the lifecycle of the project, simultaneously improving etcd itself,
and also the integration of etcd with Kubernetes. We invite everyone interested in etcd to
[visit our page](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md),
[join us at our Slack channel](https://kubernetes.slack.com/messages/etcd),
and get involved in this new stage of etcd's life.
-->
我们相信SIG etcd 的创建将成为项目生命周期中的一个重要里程碑,同时改进 etcd 本身以及
etcd 与 Kubernetes 的集成。我们欢迎所有对 etcd
感兴趣的人[访问我们的页面](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)、
[加入我们的 Slack 频道](https://kubernetes.slack.com/messages/etcd),并参与 etcd 生命的新阶段。

View File

@ -99,6 +99,9 @@ Add-on 扩展了 Kubernetes 的功能。
<!--
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is
an overlay network provider that can be used with Kubernetes.
* [Gateway API](/docs/concepts/services-networking/gateway/) is an open source project managed by
the [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) community and
provides an expressive, extensible, and role-oriented API for modeling service networking.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network
interfaces in a Kubernetes pod.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for
@ -108,6 +111,9 @@ Add-on 扩展了 Kubernetes 的功能。
-->
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)
是一个可以用于 Kubernetes 的 overlay 网络提供者。
* [Gateway API](/zh-cn/docs/concepts/services-networking/gateway/) 是一个由
[SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) 社区管理的开源项目,
为服务网络建模提供一种富有表达力、可扩展和面向角色的 API。
* [Knitter](https://github.com/ZTE/Knitter/) 是在一个 Kubernetes Pod 中支持多个网络接口的插件。
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) 是一个多插件,
可在 Kubernetes 中提供多种网络支持,以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel

View File

@ -516,63 +516,40 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust
The interpretation of `config.json` varies between the original Docker
implementation and the Kubernetes interpretation. In Docker, the `auths` keys
can only specify root URLs, whereas Kubernetes allows glob URLs as well as
prefix-matched paths. The only limitation is that glob patterns (`*`) have to
include the dot (`.`) for each subdomain. The amount of matched subdomains has
to be equal to the amount of glob patterns (`*.`), for example:
-->
对于 `config.json` 的解释在原始 Docker 实现和 Kubernetes 的解释之间有所不同。
在 Docker 中,`auths` 键只能指定根 URL而 Kubernetes 允许 glob URL 以及前缀匹配的路径。
唯一的限制是 glob 模式(`*`)必须为每个子域名包括点(`.`)。
匹配的子域名数量必须等于 glob 模式(`*.`)的数量,例如:
<!--
- `*.kubernetes.io` will *not* match `kubernetes.io`, but `abc.kubernetes.io`
- `*.*.kubernetes.io` will *not* match `abc.kubernetes.io`, but `abc.def.kubernetes.io`
- `prefix.*.io` will match `prefix.kubernetes.io`
- `*-good.kubernetes.io` will match `prefix-good.kubernetes.io`
-->
- `*.kubernetes.io` **不**会匹配 `kubernetes.io`,但会匹配 `abc.kubernetes.io`
- `*.*.kubernetes.io` **不**会匹配 `abc.kubernetes.io`,但会匹配 `abc.def.kubernetes.io`
- `prefix.*.io` 将匹配 `prefix.kubernetes.io`
- `*-good.kubernetes.io` 将匹配 `prefix-good.kubernetes.io`
<!--
This means that a `config.json` like this is valid:
-->
这意味着,像这样的 `config.json` 是有效的:
```json
{
"auths": {
"*my-registry.io/images": {
"auth": "…"
}
"my-registry.io/images": { "auth": "…" },
"*.my-registry.io/images": { "auth": "…" }
}
}
```
<!--
The root URL (`*my-registry.io`) is matched by using the following syntax:
```
pattern:
{ term }
term:
'*' matches any sequence of non-Separator characters
'?' matches any single non-Separator character
'[' [ '^' ] { character-range } ']'
character class (must be non-empty)
c matches character c (c != '*', '?', '\\', '[')
'\\' c matches character c
character-range:
c matches character c (c != '\\', '-', ']')
'\\' c matches character c
lo '-' hi matches character c for lo <= c <= hi
```
-->
使用以下语法匹配根 URL `*my-registry.io`
```
pattern:
{ term }
term:
'*' 匹配任何无分隔符字符序列
'?' 匹配任意单个非分隔符
'[' [ '^' ] 字符范围
字符集(必须非空)
c 匹配字符 c c 不为 '*', '?', '\\', '['
'\\' c 匹配字符 c
字符范围:
c 匹配字符 c c 不为 '\\', '?', '-', ']'
'\\' c 匹配字符 c
lo '-' hi 匹配字符范围在 lo 到 hi 之间字符
```
<!--
Image pull operations would now pass the credentials to the CRI container
runtime for every valid pattern. For example the following container image names
@ -584,13 +561,20 @@ would match successfully:
- `my-registry.io/images/my-image`
- `my-registry.io/images/another-image`
- `sub.my-registry.io/images/my-image`
<!--
But not:
-->
但这些不会匹配成功:
- `a.sub.my-registry.io/images/my-image`
- `a.b.sub.my-registry.io/images/my-image`
<!--
The kubelet performs image pulls sequentially for every found credential. This
means that multiple entries in `config.json` for different paths are possible, too:
-->
kubelet 为每个找到的凭据的镜像按顺序拉取。这意味着对于不同的路径,`config.json` 可能有多项:
```json
{
@ -697,7 +681,7 @@ kubectl create secret docker-registry <name> \
<!--
If you already have a Docker credentials file then, rather than using the above
command, you can import the credentials file as a Kubernetes
{{< glossary_tooltip text="Secrets" term_id="secret" >}}.
[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)
explains how to set this up.
-->
@ -774,8 +758,7 @@ will be merged.
-->
你需要对使用私有仓库的每个 Pod 执行以上操作。不过,
设置该字段的过程也可以通过为[服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)资源设置
`imagePullSecrets` 来自动完成。有关详细指令,
可参见[将 ImagePullSecrets 添加到服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。
你也可以将此方法与节点级别的 `.docker/config.json` 配置结合使用。
@ -830,8 +813,9 @@ common use cases and suggested solutions.
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
-->
3. 集群使用专有镜像,且有些镜像需要更严格的访问控制
- 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)被启用。
否则,所有 Pod 都可以使用所有镜像。
- 确保将敏感数据存储在 Secret 资源中,而不是将其打包在镜像里。
<!--
1. A multi-tenant cluster where each tenant needs own private registry.
@ -843,10 +827,11 @@ common use cases and suggested solutions.
- The tenant adds that secret to imagePullSecrets of each namespace.
-->
4. 集群是多租户的并且每个租户需要自己的私有仓库
- 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。
否则,所有租户的所有的 Pod 都可以使用所有镜像。
- 为私有仓库启用鉴权。
- 为每个租户生成访问仓库的凭据,放置在 Secret 中,并将 Secret 发布到各租户的名字空间下。
- 租户将 Secret 添加到每个名字空间中的 imagePullSecrets
<!--
If you need access to multiple registries, you can create one secret for each registry.
@ -859,7 +844,7 @@ If you need access to multiple registries, you can create one secret for each re
In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials.
This gave it the ability to dynamically fetch credentials for image registries.
-->
## 旧版的内置 kubelet 凭据提供程序 {#legacy-built-in-kubelet-credentials-provider}
在旧版本的 Kubernetes 中kubelet 与云提供商凭据直接集成。
这使它能够动态获取镜像仓库的凭据。
@ -869,7 +854,7 @@ There were three built-in implementations of the kubelet credential provider int
ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
-->
kubelet 凭据提供程序集成存在三个内置实现:
ACRAzure 容器仓库、ECRElastic 容器仓库)和 GCRGoogle 容器仓库)。
<!--
For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you

View File

@ -0,0 +1,534 @@
---
title: 服务账号
description: >
了解 Kubernetes 中的 ServiceAccount 对象。
content_type: concept
weight: 10
---
<!--
title: Service Accounts
description: >
Learn about ServiceAccount objects in Kubernetes.
content_type: concept
weight: 10
-->
<!-- overview -->
<!--
This page introduces the ServiceAccount object in Kubernetes, providing
information about how service accounts work, use cases, limitations,
alternatives, and links to resources for additional guidance.
-->
本页介绍 Kubernetes 中的 ServiceAccount 对象,
讲述服务账号的工作原理、使用场景、限制、替代方案,还提供了一些资源链接方便查阅更多指导信息。
<!-- body -->
<!--
## What are service accounts? {#what-are-service-accounts}
-->
## 什么是服务账号? {#what-are-service-accounts}
<!--
A service account is a type of non-human account that, in Kubernetes, provides
a distinct identity in a Kubernetes cluster. Application Pods, system
components, and entities inside and outside the cluster can use a specific
ServiceAccount's credentials to identify as that ServiceAccount. This identity
is useful in various situations, including authenticating to the API server or
implementing identity-based security policies.
-->
服务账号是 Kubernetes 中的一种非人类用户账号,用于在 Kubernetes 集群中提供独特的身份标识。
应用 Pod、系统组件以及集群内外的实体可以使用特定 ServiceAccount 的凭据来将自己标识为该 ServiceAccount。
这种身份可用于许多场景,包括向 API 服务器进行身份认证或实现基于身份的安全策略。
<!--
Service accounts exist as ServiceAccount objects in the API server. Service
accounts have the following properties:
-->
服务账号以 ServiceAccount 对象的形式存在于 API 服务器中。服务账号具有以下属性:
<!--
* **Namespaced:** Each service account is bound to a Kubernetes
{{<glossary_tooltip text="namespace" term_id="namespace">}}. Every namespace
gets a [`default` ServiceAccount](#default-service-accounts) upon creation.
* **Lightweight:** Service accounts exist in the cluster and are
defined in the Kubernetes API. You can quickly create service accounts to
enable specific tasks.
-->
* **名字空间限定:** 每个服务账号都与一个 Kubernetes 名字空间绑定。
每个名字空间在创建时,会获得一个[名为 `default` 的 ServiceAccount](#default-service-accounts)。
* **轻量级:** 服务账号存在于集群中,并在 Kubernetes API 中定义。你可以快速创建服务账号以支持特定任务。
<!--
* **Portable:** A configuration bundle for a complex containerized workload
might include service account definitions for the system's components. The
lightweight nature of service accounts and the namespaced identities make
the configurations portable.
-->
* **可移植性:** 复杂的容器化工作负载的配置包中可能包括针对系统组件的服务账号定义。
服务账号的轻量级性质和名字空间作用域的身份使得这类配置可移植。
<!--
Service accounts are different from user accounts, which are authenticated
human users in the cluster. By default, user accounts don't exist in the Kubernetes
API server; instead, the API server treats user identities as opaque
data. You can authenticate as a user account using multiple methods. Some
Kubernetes distributions might add custom extension APIs to represent user
accounts in the API server.
-->
服务账号与用户账号不同,用户账号是集群中通过了身份认证的人类用户。默认情况下,
用户账号不存在于 Kubernetes API 服务器中相反API 服务器将用户身份视为不透明数据。
你可以使用多种方法认证为某个用户账号。某些 Kubernetes 发行版可能会添加自定义扩展 API
来在 API 服务器中表示用户账号。
<!-- Comparison between service accounts and users -->
{{< table caption="服务账号与用户之间的比较" >}}
<!--
| Description | ServiceAccount | User or group |
| --- | --- | --- |
| Location | Kubernetes API (ServiceAccount object) | External |
| Access control | Kubernetes RBAC or other [authorization mechanisms](/docs/reference/access-authn-authz/authorization/#authorization-modules) | Kubernetes RBAC or other identity and access management mechanisms |
| Intended use | Workloads, automation | People |
-->
| 描述 | 服务账号 | 用户或组 |
| --- | --- | --- |
| 位置 | Kubernetes APIServiceAccount 对象)| 外部 |
| 访问控制 | Kubernetes RBAC 或其他[鉴权机制](/zh-cn/docs/reference/access-authn-authz/authorization/#authorization-modules) | Kubernetes RBAC 或其他身份和访问管理机制 |
| 目标用途 | 工作负载、自动化工具 | 人 |
{{< /table >}}
<!--
### Default service accounts {#default-service-accounts}
-->
### 默认服务账号 {#default-service-accounts}
<!--
When you create a cluster, Kubernetes automatically creates a ServiceAccount
object named `default` for every namespace in your cluster. The `default`
service accounts in each namespace get no permissions by default other than the
[default API discovery permissions](/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings)
that Kubernetes grants to all authenticated principals if role-based access control (RBAC) is enabled.
If you delete the `default` ServiceAccount object in a namespace, the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
replaces it with a new one.
-->
在你创建集群时Kubernetes 会自动为集群中的每个名字空间创建一个名为 `default` 的 ServiceAccount 对象。
在启用了基于角色的访问控制RBACKubernetes 为所有通过了身份认证的主体赋予
[默认 API 发现权限](/zh-cn/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings)。
每个名字空间中的 `default` 服务账号除了这些权限之外,默认没有其他访问权限。
当你删除名字空间中的 `default` ServiceAccount 对象时,
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}会用新的 ServiceAccount 对象替换它。
<!--
If you deploy a Pod in a namespace, and you don't
[manually assign a ServiceAccount to the Pod](#assign-to-pod), Kubernetes
assigns the `default` ServiceAccount for that namespace to the Pod.
-->
如果你在某个名字空间中部署 Pod并且你没有[手动为 Pod 指派 ServiceAccount](#assign-to-pod)
Kubernetes 将该名字空间的 `default` 服务账号指派给这一 Pod。
<!--
## Use cases for Kubernetes service accounts {#use-cases}
As a general guideline, you can use service accounts to provide identities in
the following scenarios:
-->
## Kubernetes 服务账号的使用场景 {#use-cases}
一般而言,你可以在以下场景中使用服务账号来提供身份标识:
<!--
* Your Pods need to communicate with the Kubernetes API server, for example in
situations such as the following:
* Providing read-only access to sensitive information stored in Secrets.
* Granting [cross-namespace access](#cross-namespace), such as allowing a
Pod in namespace `example` to read, list, and watch for Lease objects in
the `kube-node-lease` namespace.
-->
* 你的 Pod 需要与 Kubernetes API 服务器通信,例如在以下场景中:
* 提供对存储在 Secret 中的敏感信息的只读访问。
* 授予[跨名字空间访问](#cross-namespace)的权限,例如允许 `example` 名字空间中的 Pod 读取、列举和监视
`kube-node-lease` 名字空间中的 Lease 对象。
<!--
* Your Pods need to communicate with an external service. For example, a
workload Pod requires an identity for a commercially available cloud API,
and the commercial provider allows configuring a suitable trust relationship.
* [Authenticating to a private image registry using an `imagePullSecret`](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
-->
* 你的 Pod 需要与外部服务进行通信。例如,工作负载 Pod 需要一个身份来访问某商业化的云 API
并且商业化 API 的提供商允许配置适当的信任关系。
* [使用 `imagePullSecret` 完成在私有镜像仓库上的身份认证](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。
<!--
* An external service needs to communicate with the Kubernetes API server. For
example, authenticating to the cluster as part of a CI/CD pipeline.
* You use third-party security software in your cluster that relies on the
ServiceAccount identity of different Pods to group those Pods into different
contexts.
-->
* 外部服务需要与 Kubernetes API 服务器进行通信。例如,作为 CI/CD 流水线的一部分向集群作身份认证。
* 你在集群中使用了第三方安全软件,该软件依赖不同 Pod 的 ServiceAccount 身份,按不同上下文对这些 Pod 分组。
<!--
## How to use service accounts {#how-to-use}
To use a Kubernetes service account, you do the following:
-->
## 如何使用服务账号 {#how-to-use}
要使用 Kubernetes 服务账号,你需要执行以下步骤:
<!--
1. Create a ServiceAccount object using a Kubernetes
client like `kubectl` or a manifest that defines the object.
1. Grant permissions to the ServiceAccount object using an authorization
mechanism such as
[RBAC](/docs/reference/access-authn-authz/rbac/).
-->
1. 使用像 `kubectl` 这样的 Kubernetes 客户端或定义对象的清单manifest创建 ServiceAccount 对象。
2. 使用鉴权机制(如 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))为 ServiceAccount 对象授权。
<!--
1. Assign the ServiceAccount object to Pods during Pod creation.
If you're using the identity from an external service,
[retrieve the ServiceAccount token](#get-a-token) and use it from that
service instead.
-->
3. 在创建 Pod 期间将 ServiceAccount 对象指派给 Pod。
如果你所使用的是来自外部服务的身份,可以[获取 ServiceAccount 令牌](#get-a-token),并在该服务中使用这一令牌。
<!--
For instructions, refer to
[Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/).
-->
有关具体操作说明,参阅[为 Pod 配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
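<!--
For example, here is a minimal sketch of step 1: a manifest that defines a
ServiceAccount object. The name `build-robot` and the namespace `default`
below are placeholder examples only.
-->
下面是第 1 步的一个简单示意:一个定义 ServiceAccount 对象的清单。
其中的名称 `build-robot` 和名字空间 `default` 仅为示例,可按需替换。

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot     # 示例名称
  namespace: default    # 示例名字空间
```

<!--
You can also create an equivalent object directly with
`kubectl create serviceaccount build-robot`.
-->
你也可以直接使用 `kubectl create serviceaccount build-robot` 创建等效的对象。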
<!--
### Grant permissions to a ServiceAccount {#grant-permissions}
-->
### 为 ServiceAccount 授权 {#grant-permissions}
<!--
You can use the built-in Kubernetes
[role-based access control (RBAC)](/docs/reference/access-authn-authz/rbac/)
mechanism to grant the minimum permissions required by each service account.
You create a *role*, which grants access, and then *bind* the role to your
ServiceAccount. RBAC lets you define a minimum set of permissions so that the
service account permissions follow the principle of least privilege. Pods that
use that service account don't get more permissions than are required to
function correctly.
-->
你可以使用 Kubernetes 内置的
[基于角色的访问控制 (RBAC)](/zh-cn/docs/reference/access-authn-authz/rbac/)机制来为每个服务账号授予所需的最低权限。
你可以创建一个用来授权的**角色**,然后将此角色**绑定**到你的 ServiceAccount 上。
RBAC 可以让你定义一组最低权限,使得服务账号权限遵循最小特权原则。
这样使用服务账号的 Pod 不会获得超出其正常运行所需的权限。
<!--
For instructions, refer to
[ServiceAccount permissions](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
-->
有关具体操作说明,参阅 [ServiceAccount 权限](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions)。
<!--
#### Cross-namespace access using a ServiceAccount {#cross-namespace}
-->
#### 使用 ServiceAccount 进行跨名字空间访问 {#cross-namespace}
<!--
You can use RBAC to allow service accounts in one namespace to perform actions
on resources in a different namespace in the cluster. For example, consider a
scenario where you have a service account and Pod in the `dev` namespace and
you want your Pod to see Jobs running in the `maintenance` namespace. You could
create a Role object that grants permissions to list Job objects. Then,
you'd create a RoleBinding object in the `maintenance` namespace to bind the
Role to the ServiceAccount object. Now, Pods in the `dev` namespace can list
Job objects in the `maintenance` namespace using that service account.
-->
你可以使用 RBAC 允许一个名字空间中的服务账号对集群中另一个名字空间的资源执行操作。
例如,假设你在 `dev` 名字空间中有一个服务账号和一个 Pod并且希望该 Pod 可以查看 `maintenance`
名字空间中正在运行的 Job。你可以创建一个 Role 对象来授予列举 Job 对象的权限。
随后在 `maintenance` 名字空间中创建 RoleBinding 对象将 Role 绑定到此 ServiceAccount 对象上。
现在,`dev` 名字空间中的 Pod 可以使用该服务账号列出 `maintenance` 名字空间中的 Job 对象集合。
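<!--
As a sketch of that scenario, the Role and RoleBinding below both live in the
`maintenance` namespace; the ServiceAccount name `dev-sa` is an assumed example.
-->
下面是对上述场景的一个简单示意:Role 和 RoleBinding 都位于 `maintenance` 名字空间中;
其中的 ServiceAccount 名称 `dev-sa` 是假设的示例名称。

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-lister              # 示例名称
  namespace: maintenance
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-lister-binding      # 示例名称
  namespace: maintenance
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-lister
subjects:
- kind: ServiceAccount
  name: dev-sa                  # 假设的、位于 dev 名字空间中的服务账号
  namespace: dev
```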
<!--
### Assign a ServiceAccount to a Pod {#assign-to-pod}
To assign a ServiceAccount to a Pod, you set the `spec.serviceAccountName`
field in the Pod specification. Kubernetes then automatically provides the
credentials for that ServiceAccount to the Pod. In v1.22 and later, Kubernetes
gets a short-lived, **automatically rotating** token using the `TokenRequest`
API and mounts the token as a
[projected volume](/docs/concepts/storage/projected-volumes/#serviceaccounttoken).
-->
### 将 ServiceAccount 指派给 Pod {#assign-to-pod}
要将某 ServiceAccount 指派给某 Pod你需要在该 Pod 的规约中设置 `spec.serviceAccountName` 字段。
Kubernetes 将自动为 Pod 提供该 ServiceAccount 的凭据。在 Kubernetes v1.22 及更高版本中,
Kubernetes 使用 `TokenRequest` API 获取一个短期的、**自动轮换**的令牌,
并以[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/#serviceaccounttoken)的形式挂载此令牌。
<!--
By default, Kubernetes provides the Pod
with the credentials for an assigned ServiceAccount, whether that is the
`default` ServiceAccount or a custom ServiceAccount that you specify.
To prevent Kubernetes from automatically injecting
credentials for a specified ServiceAccount or the `default` ServiceAccount, set the
`automountServiceAccountToken` field in your Pod specification to `false`.
-->
默认情况下Kubernetes 会将所指派的 ServiceAccount
(无论是 `default` 服务账号还是你指定的定制 ServiceAccount的凭据提供给 Pod。
要防止 Kubernetes 自动注入指定的 ServiceAccount 或 `default` ServiceAccount 的凭据,
可以将 Pod 规约中的 `automountServiceAccountToken` 字段设置为 `false`
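<!--
For example, the following Pod sketch uses an assumed ServiceAccount named
`build-robot`; uncomment `automountServiceAccountToken: false` if you do not
want the credentials to be mounted automatically.
-->
例如,下面的 Pod 示意使用了假设名为 `build-robot` 的 ServiceAccount;
如果你不希望自动挂载凭据,可以取消注释 `automountServiceAccountToken: false` 这一行。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                  # 示例名称
spec:
  serviceAccountName: build-robot    # 要指派给 Pod 的 ServiceAccount(示例)
  # automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx:1.25                # 示例镜像
```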
<!-- OK to remove this historical detail after Kubernetes 1.31 is released -->
<!--
In versions earlier than 1.22, Kubernetes provides a long-lived, static token
to the Pod as a Secret.
-->
在 Kubernetes 1.22 之前的版本中Kubernetes 会将一个长期有效的静态令牌以 Secret 形式提供给 Pod。
<!--
#### Manually retrieve ServiceAccount credentials {#get-a-token}
If you need the credentials for a ServiceAccount to mount in a non-standard
location, or for an audience that isn't the API server, use one of the
following methods:
-->
#### 手动获取 ServiceAccount 凭据 {#get-a-token}
如果你需要 ServiceAccount 的凭据并将其挂载到非标准位置,或者用于 API 服务器之外的受众,可以使用以下方法之一:
<!--
* [TokenRequest API](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
(recommended): Request a short-lived service account token from within
your own *application code*. The token expires automatically and can rotate
upon expiration.
If you have a legacy application that is not aware of Kubernetes, you
could use a sidecar container within the same pod to fetch these tokens
and make them available to the application workload.
-->
* [TokenRequest API](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)(推荐):
在你自己的**应用代码**中请求一个短期的服务账号令牌。此令牌会自动过期,并可在过期时被轮换。
如果你有一个旧的、对 Kubernetes 无感知能力的应用,你可以在同一个 Pod
内使用边车容器来获取这些令牌,并将其提供给应用工作负载。
<!--
* [Token Volume Projection](/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection)
(also recommended): In Kubernetes v1.20 and later, use the Pod specification to
tell the kubelet to add the service account token to the Pod as a
*projected volume*. Projected tokens expire automatically, and the kubelet
rotates the token before it expires.
-->
* [令牌卷投射](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection)(同样推荐):
在 Kubernetes v1.20 及更高版本中,使用 Pod 规约告知 kubelet 将服务账号令牌作为**投射卷**添加到 Pod 中。
所投射的令牌会自动过期,在过期之前 kubelet 会自动轮换此令牌。
<!--
* [Service Account Token Secrets](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)
(not recommended): You can mount service account tokens as Kubernetes
Secrets in Pods. These tokens don't expire and don't rotate.
This method is not recommended, especially at scale, because of the risks associated
with static, long-lived credentials. In Kubernetes v1.24 and later, the
[LegacyServiceAccountTokenNoAutoGeneration feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)
prevents Kubernetes from automatically creating these tokens for
ServiceAccounts. `LegacyServiceAccountTokenNoAutoGeneration` is enabled
by default; in other words, Kubernetes does not create these tokens.
-->
* [服务账号令牌 Secret](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)(不推荐):
你可以将服务账号令牌以 Kubernetes Secret 的形式挂载到 Pod 中。这些令牌不会过期且不会轮换。
不推荐使用此方法,特别是在大规模场景下,这是因为静态、长期有效的凭据存在一定的风险。在 Kubernetes v1.24 及更高版本中,
[LegacyServiceAccountTokenNoAutoGeneration 特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)阻止
Kubernetes 自动为 ServiceAccount 创建这些令牌。`LegacyServiceAccountTokenNoAutoGeneration` 默认被启用,
也就是说Kubernetes 不会创建这些令牌。
{{< note >}}
<!--
For applications running outside your Kubernetes cluster, you might be considering
creating a long-lived ServiceAccount token that is stored in a Secret. This allows authentication, but the Kubernetes project recommends you avoid this approach.
Long-lived bearer tokens represent a security risk as, once disclosed, the token
can be misused. Instead, consider using an alternative. For example, your external
application can authenticate using a well-protected private key `and` a certificate,
or using a custom mechanism such as an [authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) that you implement yourself.
-->
对于运行在 Kubernetes 集群外的应用,你可能考虑创建一个长期有效的 ServiceAccount 令牌,
并将其存储在 Secret 中。尽管这种方式可以实现身份认证,但 Kubernetes 项目建议你避免使用此方法。
长期有效的持有者令牌Bearer Token会带来安全风险一旦泄露此令牌就可能被滥用。
为此,你可以考虑使用其他替代方案。例如,你的外部应用可以使用一个保护得很好的私钥和证书进行身份认证,
或者使用你自己实现的[身份认证 Webhook](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
这类自定义机制。
<!--
You can also use TokenRequest to obtain short-lived tokens for your external application.
-->
你还可以使用 TokenRequest 为外部应用获取短期的令牌。
{{< /note >}}
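<!--
For example, the sketch below uses the TokenRequest API through kubectl to
issue a short-lived token for an assumed ServiceAccount named `build-robot`:
-->
例如,下面的示意通过 kubectl 使用 TokenRequest API 为假设名为 `build-robot`
的 ServiceAccount 签发一个短期令牌:

```shell
kubectl create token build-robot --duration=1h
```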
<!--
## Authenticating service account credentials {#authenticating-credentials}
-->
## 对服务账号凭据进行鉴别 {#authenticating-credentials}
<!--
ServiceAccounts use signed
{{<glossary_tooltip term_id="jwt" text="JSON Web Tokens">}} (JWTs)
to authenticate to the Kubernetes API server, and to any other system where a
trust relationship exists. Depending on how the token was issued
(either time-limited using a `TokenRequest` or using a legacy mechanism with
a Secret), a ServiceAccount token might also have an expiry time, an audience,
and a time after which the token *starts* being valid. When a client that is
acting as a ServiceAccount tries to communicate with the Kubernetes API server,
the client includes an `Authorization: Bearer <token>` header with the HTTP
request. The API server checks the validity of that bearer token as follows:
-->
ServiceAccount 使用签名的 JSON Web Token (JWT) 来向 Kubernetes API
服务器以及任何其他存在信任关系的系统进行身份认证。根据令牌的签发方式
(是使用 `TokenRequest` 签发的限时令牌,还是使用基于 Secret 的传统机制),ServiceAccount
令牌也可能有到期时间、受众和令牌**开始**生效的时间点。
当客户端以 ServiceAccount 的身份尝试与 Kubernetes API 服务器通信时,
客户端会在 HTTP 请求中包含 `Authorization: Bearer <token>` 标头。
API 服务器按照以下方式检查该持有者令牌的有效性:
<!--
1. Checks the token signature.
1. Checks whether the token has expired.
1. Checks whether object references in the token claims are currently valid.
1. Checks whether the token is currently valid.
1. Checks the audience claims.
-->
1. 检查令牌签名。
1. 检查令牌是否已过期。
1. 检查令牌申明中的对象引用是否当前有效。
1. 检查令牌是否当前有效。
1. 检查受众申明。
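<!--
As a sketch of the request described above, an external client could call the
API server like this; the token value and the API server address are placeholders.
-->
下面是对上述请求方式的一个简单示意,外部客户端可以这样调用 API 服务器;
其中的令牌值和 API 服务器地址均为占位符。

```shell
# 假设短期令牌已保存在环境变量 TOKEN 中
curl --cacert /path/to/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  https://<api-server-address>:6443/api/v1/namespaces/default/pods
```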
<!--
The TokenRequest API produces _bound tokens_ for a ServiceAccount. This
binding is linked to the lifetime of the client, such as a Pod, that is acting
as that ServiceAccount.
For tokens issued using the `TokenRequest` API, the API server also checks that
the specific object reference that is using the ServiceAccount still exists,
matching by the {{< glossary_tooltip term_id="uid" text="unique ID" >}} of that
object. For legacy tokens that are mounted as Secrets in Pods, the API server
checks the token against the Secret.
-->
TokenRequest API 为 ServiceAccount 生成**绑定令牌**。这种绑定与以该 ServiceAccount
身份运行的客户端(如 Pod)的生命期相关联。
对于使用 `TokenRequest` API 签发的令牌API 服务器还会检查正在使用 ServiceAccount 的特定对象引用是否仍然存在,
方式是通过该对象的{{< glossary_tooltip term_id="uid" text="唯一 ID" >}} 进行匹配。
对于以 Secret 形式挂载到 Pod 中的旧有令牌API 服务器会基于 Secret 来检查令牌。
<!--
For more information about the authentication process, refer to
[Authentication](/docs/reference/access-authn-authz/authentication/#service-account-tokens).
-->
有关身份认证过程的更多信息,参考[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)。
<!--
### Authenticating service account credentials in your own code {#authenticating-in-code}
If you have services of your own that need to validate Kubernetes service
account credentials, you can use the following methods:
-->
### 在自己的代码中检查服务账号凭据 {#authenticating-in-code}
如果你的服务需要检查 Kubernetes 服务账号凭据,可以使用以下方法:
<!--
* [TokenReview API](/docs/reference/kubernetes-api/authentication-resources/token-review-v1/)
(recommended)
* OIDC discovery
-->
* [TokenReview API](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-review-v1/)(推荐)
* OIDC 发现
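<!--
For example, the following is a sketch of a TokenReview request; the token
value is a placeholder. Because the API server evaluates a TokenReview
synchronously, creating the object returns the result in its `status` field.
-->
例如,下面是一个 TokenReview 请求的简单示意,其中的令牌值是占位符。
由于 API 服务器会同步处理 TokenReview,创建该对象后即可在其 `status` 字段中看到验证结果。

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: example-review          # 示例名称
spec:
  token: <待验证的持有者令牌>      # 占位符
```

假设上述清单保存在 `tokenreview.yaml` 文件(示例文件名)中:

```shell
kubectl create -f tokenreview.yaml -o yaml
```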
<!--
The Kubernetes project recommends that you use the TokenReview API, because
this method invalidates tokens that are bound to API objects such as Secrets,
ServiceAccounts, and Pods when those objects are deleted. For example, if you
delete the Pod that contains a projected ServiceAccount token, the cluster
invalidates that token immediately and a TokenReview immediately fails.
If you use OIDC validation instead, your clients continue to treat the token
as valid until the token reaches its expiration timestamp.
-->
Kubernetes 项目建议你使用 TokenReview API因为当你删除某些 API 对象
(如 Secret、ServiceAccount 和 Pod的时候此方法将使绑定到这些 API 对象上的令牌失效。
例如,如果删除包含投射 ServiceAccount 令牌的 Pod则集群立即使该令牌失效
并且 TokenReview 操作也会立即失败。
如果你使用的是 OIDC 验证,则客户端将继续将令牌视为有效,直到令牌达到其到期时间戳。
<!--
Your application should always define the audience that it accepts, and should
check that the token's audiences match the audiences that the application
expects. This helps to minimize the scope of the token so that it can only be
used in your application and nowhere else.
-->
你的应用应始终定义其所接受的受众,并检查令牌的受众是否与应用期望的受众匹配。
这有助于将令牌的作用域最小化,这样它只能在你的应用内部使用,而不能在其他地方使用。
<!--
## Alternatives
* Issue your own tokens using another mechanism, and then use
[Webhook Token Authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
to validate bearer tokens using your own validation service.
-->
## 替代方案 {#alternatives}
* 使用其他机制签发你自己的令牌,然后使用
[Webhook 令牌身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)通过你自己的验证服务来验证持有者令牌。
<!--
* Provide your own identities to Pods.
* [Use the SPIFFE CSI driver plugin to provide SPIFFE SVIDs as X.509 certificate pairs to Pods](https://cert-manager.io/docs/projects/csi-driver-spiffe/).
{{% thirdparty-content single="true" %}}
* [Use a service mesh such as Istio to provide certificates to Pods](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/).
-->
* 为 Pod 提供你自己的身份:
* [使用 SPIFFE CSI 驱动插件将 SPIFFE SVID 作为 X.509 证书对提供给 Pod](https://cert-manager.io/docs/projects/csi-driver-spiffe/)。
{{% thirdparty-content single="true" %}}
* [使用 Istio 这类服务网格为 Pod 提供证书](https://istio.io/latest/zh/docs/tasks/security/cert-management/plugin-ca-cert/)。
<!--
* Authenticate from outside the cluster to the API server without using service account tokens:
* [Configure the API server to accept OpenID Connect (OIDC) tokens from your identity provider](/docs/reference/access-authn-authz/authentication/#openid-connect-tokens).
* Use service accounts or user accounts created using an external Identity
and Access Management (IAM) service, such as from a cloud provider, to
authenticate to your cluster.
* [Use the CertificateSigningRequest API with client certificates](/docs/tasks/tls/managing-tls-in-a-cluster/).
-->
* 从集群外部向 API 服务器进行身份认证,而不使用服务账号令牌:
* [配置 API 服务器接受来自你自己的身份提供者的 OpenID Connect (OIDC) 令牌](/zh-cn/docs/reference/access-authn-authz/authentication/#openid-connect-tokens)。
* 使用来自云提供商等外部身份和访问管理 (IAM) 服务创建的服务账号或用户账号向集群进行身份认证。
* [使用 CertificateSigningRequest API 和客户端证书](/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/)。
<!--
* [Configure the kubelet to retrieve credentials from an image registry](/docs/tasks/administer-cluster/kubelet-credential-provider/).
* Use a Device Plugin to access a virtual Trusted Platform Module (TPM), which
then allows authentication using a private key.
-->
* [配置 kubelet 从镜像仓库中获取凭据](/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/)。
* 使用设备插件访问虚拟的可信平台模块 (TPM),进而可以使用私钥进行身份认证。
## {{% heading "whatsnext" %}}
<!--
* Learn how to [manage your ServiceAccounts as a cluster administrator](/docs/reference/access-authn-authz/service-accounts-admin/).
* Learn how to [assign a ServiceAccount to a Pod](/docs/tasks/configure-pod-container/configure-service-account/).
* Read the [ServiceAccount API reference](/docs/reference/kubernetes-api/authentication-resources/service-account-v1/).
-->
* 学习如何[作为集群管理员管理你的 ServiceAccount](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/)。
* 学习如何[将 ServiceAccount 指派给 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
* 阅读 [ServiceAccount API 参考文档](/zh-cn/docs/reference/kubernetes-api/authentication-resources/service-account-v1/)。

View File

@ -26,6 +26,13 @@ weight: 30
{{< glossary_definition term_id="ingress" length="all" >}}
{{< note >}}
<!--
Ingress is frozen. New features are being added to the [Gateway API](/docs/concepts/services-networking/gateway/).
-->
入口Ingress目前已停止更新。新的功能正在集成至[网关 API](/zh-cn/docs/concepts/services-networking/gateway/) 中。
{{< /note >}}
<!-- body -->
<!--

View File

@ -26,8 +26,8 @@ description: >-
<!-- overview -->
<!--
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you
might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a {{<
glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
@ -35,7 +35,7 @@ glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with vari
NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to
other connections.
-->
如果你希望在 IP 地址或端口层面OSI 第 3 层或第 4 层)控制网络流量,
如果你希望针对 TCP、UDP 和 SCTP 协议在 IP 地址或端口层面控制网络流量,
则你可以考虑为集群中特定应用使用 Kubernetes 网络策略NetworkPolicy
NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许
{{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体”
@ -481,24 +481,16 @@ ingress or egress traffic.
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许入站或出站流量。
<!--
## SCTP support
-->
## SCTP 支持 {#sctp-support}
## Network traffic filtering
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
<!--
As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your
cluster administrator) will need to disable the `SCTPSupport`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
for the API server with `--feature-gates=SCTPSupport=false,…`.
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
across network plugins.
-->
作为一个稳定特性SCTP 支持默认是被启用的。
要在集群层面禁用 SCTP或你的集群管理员需要为 API 服务器指定
`--feature-gates=SCTPSupport=false,...`
来禁用 `SCTPSupport` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
启用该特性门控后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`
## 网络流量过滤 {#network-traffic-filtering}
NetworkPolicy 是为[第 4 层](https://zh.wikipedia.org/wiki/OSI%E6%A8%A1%E5%9E%8B#%E7%AC%AC4%E5%B1%A4_%E5%82%B3%E8%BC%B8%E5%B1%A4)连接
TCP、UDP 和可选的 SCTP所定义的。对于所有其他协议这种网络流量过滤的行为可能因网络插件而异。
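<!--
For example, the following sketch allows SCTP traffic to port 7777 of the
selected Pods; the name and labels are examples only.
-->
例如,下面的示意允许流向所选 Pod 的 7777 端口的 SCTP 流量;其中的名称和标签仅为示例。

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-example      # 示例名称
spec:
  podSelector:
    matchLabels:
      app: example-db           # 示例标签
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP
      port: 7777
```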
{{< note >}}
<!--
@ -508,6 +500,19 @@ protocol NetworkPolicies.
你必须使用支持 SCTP 协议 NetworkPolicy 的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。
{{< /note >}}
<!--
When a `deny all` network policy is defined, it is only guaranteed to deny TCP, UDP and SCTP
connections. For other protocols, such as ARP or ICMP, the behaviour is undefined.
The same applies to allow rules: when a specific pod is allowed as ingress source or egress destination,
it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
network plugins and denied by others.
-->
`deny all` 网络策略被定义时,此策略只能保证拒绝 TCP、UDP 和 SCTP 连接。
对于 ARP 或 ICMP 这类其他协议,这种网络流量过滤行为是未定义的。
相同的情况也适用于 allow 规则:当特定 Pod 被允许作为入口源或出口目的地时,
(例如)ICMP 数据包会被如何处理是未定义的。
ICMP 这类协议可能被某些网络插件所允许,而被另一些网络插件所拒绝。
<!--
## Targeting a range of ports
-->
@ -632,6 +637,158 @@ Kubernetes 控制面会在所有名字空间上设置一个不可变更的标签
如果 NetworkPolicy 无法在某些对象字段中指向某名字空间,
你可以使用标准的标签方式来指向特定名字空间。
<!--
## Pod lifecycle
-->
## Pod 生命周期 {#pod-lifecycle}
{{< note >}}
<!--
The following applies to clusters with a conformant networking plugin and a conformant implementation of
NetworkPolicy.
-->
以下内容适用于使用了合规网络插件和 NetworkPolicy 合规实现的集群。
{{< /note >}}
<!--
When a new NetworkPolicy object is created, it may take some time for a network plugin
to handle the new object. If a pod that is affected by a NetworkPolicy
is created before the network plugin has completed NetworkPolicy handling,
that pod may be started unprotected, and isolation rules will be applied when
the NetworkPolicy handling is completed.
-->
当新的 NetworkPolicy 对象被创建时,网络插件可能需要一些时间来处理这个新对象。
如果受到 NetworkPolicy 影响的 Pod 在网络插件完成 NetworkPolicy 处理之前就被创建了,
那么该 Pod 可能会最初处于无保护状态,而在 NetworkPolicy 处理完成后被应用隔离规则。
<!--
Once the NetworkPolicy is handled by a network plugin,
1. All newly created pods affected by a given NetworkPolicy will be isolated before
they are started.
Implementations of NetworkPolicy must ensure that filtering is effective throughout
the Pod lifecycle, even from the very first instant that any container in that Pod is started.
Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
sidecar containers, and regular containers.
-->
一旦 NetworkPolicy 被网络插件处理,
1. 所有受给定 NetworkPolicy 影响的新建 Pod 都将在启动前被隔离。
NetworkPolicy 的实现必须确保过滤规则在整个 Pod 生命周期内是有效的,
这个生命周期要从该 Pod 的任何容器启动的第一刻算起。
因为 NetworkPolicy 在 Pod 层面被应用,所以 NetworkPolicy 同样适用于 Init 容器、边车容器和常规容器。
<!--
2. Allow rules will be applied eventually after the isolation rules (or may be applied at the same time).
In the worst case, a newly created pod may have no network connectivity at all when it is first started, if
isolation rules were already applied, but no allow rules were applied yet.
Every created NetworkPolicy will be handled by a network plugin eventually, but there is no
way to tell from the Kubernetes API when exactly that happens.
-->
2. Allow 规则最终将在隔离规则之后被应用(或者可能同时被应用)。
在最糟的情况下,如果隔离规则已被应用,但 allow 规则尚未被应用,
那么新建的 Pod 在初始启动时可能根本没有网络连接。
用户所创建的每个 NetworkPolicy 最终都会被网络插件处理,但无法使用 Kubernetes API 来获知确切的处理时间。
<!--
Therefore, pods must be resilient against being started up with different network
connectivity than expected. If you need to make sure the pod can reach certain destinations
before being started, you can use an [init container](/docs/concepts/workloads/pods/init-containers/)
to wait for those destinations to be reachable before kubelet starts the app containers.
-->
因此,Pod 必须能够容忍在启动时遇到与预期不同的网络连接状况。
如果你需要确保 Pod 在启动之前能够访问特定的目标,可以使用
[Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/)在
kubelet 启动应用容器之前等待这些目的地变得可达。
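<!--
For example, the sketch below waits for an assumed Service named `my-service`
to become reachable before the app container starts:
-->
例如,下面的示意会在应用容器启动之前,等待假设名为 `my-service` 的服务变得可达:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-client          # 示例名称
spec:
  initContainers:
  - name: wait-for-service
    image: busybox:1.36
    # 反复尝试连接 my-service 的 80 端口(均为示例值),直到成功为止
    command: ['sh', '-c', 'until nc -z my-service 80; do sleep 2; done']
  containers:
  - name: app
    image: nginx:1.25           # 示例镜像
```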
<!--
Every NetworkPolicy will be applied to all selected pods eventually.
Because the network plugin may implement NetworkPolicy in a distributed manner,
it is possible that pods may see a slightly inconsistent view of network policies
when the pod is first created, or when pods or policies change.
For example, a newly-created pod that is supposed to be able to reach both Pod A
on Node 1 and Pod B on Node 2 may find that it can reach Pod A immediately,
but cannot reach Pod B until a few seconds later.
-->
每个 NetworkPolicy 最终都会被应用到所选定的所有 Pod 之上。
由于网络插件可能以分布式的方式实现 NetworkPolicy所以当 Pod 被首次创建时或当
Pod 或策略发生变化时Pod 可能会看到稍微不一致的网络策略视图。
例如,新建的 Pod 本来应能立即访问 Node 1 上的 Pod A 和 Node 2 上的 Pod B
但可能你会发现新建的 Pod 可以立即访问 Pod A但要在几秒后才能访问 Pod B。
<!--
## NetworkPolicy and `hostNetwork` pods
NetworkPolicy behaviour for `hostNetwork` pods is undefined, but it should be limited to 2 possibilities:
-->
## NetworkPolicy 和 `hostNetwork` Pod {#networkpolicy-and-hostnetwork-pods}
针对 `hostNetwork` Pod 的 NetworkPolicy 行为是未定义的,但应限制为以下两种可能:
<!--
- The network plugin can distinguish `hostNetwork` pod traffic from all other traffic
(including being able to distinguish traffic from different `hostNetwork` pods on
the same node), and will apply NetworkPolicy to `hostNetwork` pods just like it does
to pod-network pods.
-->
- 网络插件可以从所有其他流量中辨别出 `hostNetwork` Pod 流量
(包括能够从同一节点上的不同 `hostNetwork` Pod 中辨别流量),
网络插件还可以像处理 Pod 网络流量一样,对 `hostNetwork` Pod 应用 NetworkPolicy。
<!--
- The network plugin cannot properly distinguish `hostNetwork` pod traffic,
and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
(This is the most common implementation.)
-->
- 网络插件无法正确辨别 `hostNetwork` Pod 流量,因此在匹配 `podSelector``namespaceSelector`
时会忽略 `hostNetwork` Pod。流向/来自 `hostNetwork` Pod 的流量的处理方式与流向/来自节点 IP
的所有其他流量一样。(这是最常见的实现方式。)
<!--
This applies when
-->
这适用于以下情形:
<!--
1. a `hostNetwork` pod is selected by `spec.podSelector`.
-->
1. `hostNetwork` Pod 被 `spec.podSelector` 选中。
```yaml
...
spec:
podSelector:
matchLabels:
role: client
...
```
<!--
2. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress` or `egress` rule.
-->
2. `hostNetwork` Pod 在 `ingress``egress` 规则中被 `podSelector``namespaceSelector` 选中。
```yaml
...
ingress:
- from:
- podSelector:
matchLabels:
role: client
...
```
<!--
At the same time, since `hostNetwork` pods have the same IP addresses as the nodes they reside on,
their connections will be treated as node connections. For example, you can allow traffic
from a `hostNetwork` Pod using an `ipBlock` rule.
-->
同时,由于 `hostNetwork` Pod 具有与其所在节点相同的 IP 地址,所以它们的连接将被视为节点连接。
例如,你可以使用 `ipBlock` 规则允许来自 `hostNetwork` Pod 的流量。
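<!--
For example, the following sketch allows ingress traffic from an assumed node
IP address, which is also the address of any `hostNetwork` Pod on that node:
-->
例如,下面的示意允许来自某个假设的节点 IP 地址的入站流量
(该地址同时也是此节点上所有 `hostNetwork` Pod 的地址):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-example   # 示例名称
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.5/32         # 假设的节点 IP 地址
```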
<!--
## What you can't do with network policies (at least, not yet)

View File

@ -1690,7 +1690,7 @@ Learn more about Services and how they fit into Kubernetes:
* Read about [Ingress](/docs/concepts/services-networking/ingress/), which
exposes HTTP and HTTPS routes from outside the cluster to Services within
your cluster.
* Read about [Gateway](https://gateway-api.sigs.k8s.io/), an extension to
* Read about [Gateway](/docs/concepts/services-networking/gateway/), an extension to
Kubernetes that provides more flexibility than Ingress.
-->
进一步学习 Service 及其在 Kubernetes 中所发挥的作用:
@ -1698,7 +1698,7 @@ Learn more about Services and how they fit into Kubernetes:
* 完成[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程。
* 阅读 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 文档。Ingress
负责将来自集群外部的 HTTP 和 HTTPS 请求路由给集群内的服务。
* 阅读 [Gateway](https://gateway-api.sigs.k8s.io/) 文档。Gateway 作为 Kubernetes 的扩展提供比
* 阅读 [Gateway](/zh-cn/docs/concepts/services-networking/gateway/) 文档。Gateway 作为 Kubernetes 的扩展提供比
Ingress 更高的灵活性。
<!--

View File

@ -391,17 +391,15 @@ to learn more.
### emptyDir {#emptydir}
<!--
An `emptyDir` volume is first created when a Pod is assigned to a node, and
exists as long as that Pod is running on that node. As the name says, the
`emptyDir` volume is initially empty. All containers in the Pod can read and write the same
For a Pod that defines an `emptyDir` volume, the volume is created when the Pod is assigned to a node.
As the name says, the `emptyDir` volume is initially empty. All containers in the Pod can read and write the same
files in the `emptyDir` volume, though that volume can be mounted at the same
or different paths in each container. When a Pod is removed from a node for
any reason, the data in the `emptyDir` is deleted permanently.
-->
当 Pod 分派到某个节点上时,`emptyDir` 卷会被创建,并且在 Pod 在该节点上运行期间,卷一直存在。
就像其名称表示的那样,卷最初是空的。
尽管 Pod 中的容器挂载 `emptyDir` 卷的路径可能相同也可能不同,这些容器都可以读写
`emptyDir` 卷中相同的文件。
对于定义了 `emptyDir` 卷的 Pod在 Pod 被指派到某节点时此卷会被创建。
就像其名称所表示的那样,`emptyDir` 卷最初是空的。尽管 Pod 中的容器挂载 `emptyDir`
卷的路径可能相同也可能不同,但这些容器都可以读写 `emptyDir` 卷中相同的文件。
当 Pod 因为某些原因被从节点上删除时,`emptyDir` 卷中的数据也会被永久删除。
{{< note >}}
@ -632,10 +630,10 @@ Dynamic provisioning is possible using a
[StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce-pd).
Before creating a PersistentVolume, you must create the persistent disk:
-->
#### 手动供应基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv}
#### 手动制备基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv}
使用[为 GCE PD 定义的存储类](/zh-cn/docs/concepts/storage/storage-classes/#gce-pd)
可以实现动态供应。在创建 PersistentVolume 之前,你首先要创建 PD。
可以实现动态制备。在创建 PersistentVolume 之前,你首先要创建 PD。
```shell
gcloud compute disks create --size=500GB my-data-disk
@ -648,6 +646,9 @@ gcloud compute disks create --size=500GB my-data-disk
-->
#### 区域持久盘配置示例
<!--
# failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
-->
```yaml
apiVersion: v1
kind: PersistentVolume
@ -770,7 +771,7 @@ and then removed entirely in the v1.26 release.
Kubernetes {{< skew currentVersion >}} 不包含 `glusterfs` 卷类型。
GlusterFS 树内存储驱动程序在 Kubernetes v1.25 版本中被弃用,然后在 v1.26 版本中被完全移除。
### hostPath {#hostpath}
{{< warning >}}
@ -872,6 +873,10 @@ Watch out when using this type of volume, because:
-->
#### hostPath 配置示例
<!--
# directory location on host
# this field is optional
-->
```yaml
apiVersion: v1
kind: Pod
@ -887,7 +892,7 @@ spec:
volumes:
- name: test-volume
hostPath:
# 宿主上目录位置
# 宿主上目录位置
path: /data
# 此字段为可选
type: Directory
@ -903,7 +908,7 @@ you can try to mount directories and files separately, as shown in the
`FileOrCreate` 模式不会负责创建文件的父目录。
如果欲挂载的文件的父目录不存在Pod 启动会失败。
为了确保这种模式能够工作,可以尝试把文件和它对应的目录分开挂载,如
[`FileOrCreate` 配置](#hostpath-fileorcreate-example) 所示。
[`FileOrCreate` 配置](#hostpath-fileorcreate-example)所示。
{{< /caution >}}
<!--
@ -911,6 +916,9 @@ you can try to mount directories and files separately, as shown in the
-->
#### hostPath FileOrCreate 配置示例 {#hostpath-fileorcreate-example}
<!--
# Ensure the file directory is created.
-->
```yaml
apiVersion: v1
kind: Pod
@ -1191,6 +1199,9 @@ Here is an example Pod referencing a pre-provisioned Portworx volume:
`portworxVolume` 类型的卷可以通过 Kubernetes 动态创建,也可以预先配备并在 Pod 内引用。
下面是一个引用预先配备的 Portworx 卷的示例 Pod
<!--
# This Portworx volume must already exist.
-->
```yaml
apiVersion: v1
kind: Pod
@ -1253,7 +1264,7 @@ To enable the feature, set `CSIMigrationPortworx=true` in kube-controller-manage
A projected volume maps several existing volume sources into the same
directory. For more details, see [projected volumes](/docs/concepts/storage/projected-volumes/).
-->
### projected (投射 {#projected}
### 投射(projected {#projected}
投射卷能将若干现有的卷来源映射到同一目录上。更多详情请参考[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)。
@ -1354,7 +1365,7 @@ RBD CSI driver:
* 你必须在集群中安装 v3.5.0 或更高版本的 Ceph CSI 驱动(`rbd.csi.ceph.com`)。
* 因为 `clusterID` 是 CSI 驱动程序必需的参数,而树内存储类又将 `monitors`
作为一个必需的参数,所以 Kubernetes 存储管理者需要根据 `monitors`
的哈希值(例:`#echo -n '<monitors_string>' | md5sum`)来创建
的哈希值(例`#echo -n '<monitors_string>' | md5sum`)来创建
`clusterID`,并保持该 `monitors` 存在于该 `clusterID` 的配置中。
* 同时,如果树内存储类的 `adminId` 的值不是 `admin`,那么其 `adminSecretName`
就需要被修改成 `adminId` 参数的 base64 编码值。
@ -1427,7 +1438,6 @@ For more information, see the [vSphere volume](https://github.com/kubernetes/exa
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
<!--
In Kubernetes {{< skew currentVersion >}}, all operations for the in-tree `vsphereVolume` type
are redirected to the `csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
@ -1568,8 +1578,11 @@ The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container.
-->
在这个示例中,`Pod` 使用 `subPathExpr``hostPath``/var/log/pods` 中创建目录 `pod1`
`hostPath` 卷采用来自 `downwardAPI` 的 Pod 名称生成目录名。
宿主目录 `/var/log/pods/pod1` 被挂载到容器的 `/logs` 中。
宿主目录 `/var/log/pods/pod1` 被挂载到容器的 `/logs` 中。
<!--
# The variable expansion uses round brackets (not curly brackets).
-->
```yaml
apiVersion: v1
kind: Pod
@ -1652,8 +1665,8 @@ to the [volume plugin FAQ](https://github.com/kubernetes/community/blob/master/s
-->
CSI 和 FlexVolume 都允许独立于 Kubernetes 代码库开发卷插件,并作为扩展部署(安装)在 Kubernetes 集群上。
对于希望创建树外Out-Of-Tree卷插件的存储供应商请参考
[卷插件常见问题](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)。
对于希望创建树外Out-Of-Tree卷插件的存储供应商
请参考[卷插件常见问题](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)。
### CSI
@ -1778,7 +1791,7 @@ persistent volume:
该映射必须与 CSI 驱动程序返回的 `CreateVolumeResponse` 中的 `volume.attributes`
字段的映射相对应;
[CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume)中有相应的定义。
该映射通过`ControllerPublishVolumeRequest`、`NodeStageVolumeRequest`
该映射通过`ControllerPublishVolumeRequest`、`NodeStageVolumeRequest`
`NodePublishVolumeRequest` 中的 `volume_context` 字段传递给 CSI 驱动。
<!--
@ -1912,7 +1925,7 @@ stand-alone binary that needs to be pre-installed on each Windows node.
For more details, refer to the deployment guide of the CSI plugin you wish to deploy.
-->
CSI 节点插件需要执行多种特权操作,例如扫描磁盘设备和挂载文件系统等。
这些操作在每个宿主操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI
这些操作在每个宿主操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI
节点插件通常部署为特权容器。对于 Windows 工作节点而言,容器化 CSI
节点插件的特权操作是通过 [csi-proxy](https://github.com/kubernetes-csi/csi-proxy)
来支持的。csi-proxy 是一个由社区管理的、独立的可执行二进制文件,
@ -1986,7 +1999,7 @@ The following FlexVolume [plugins](https://github.com/Microsoft/K8s-Storage-Plug
deployed as PowerShell scripts on the host, support Windows nodes:
-->
下面的 FlexVolume [插件](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows)
以 PowerShell 脚本的形式部署在宿主系统上,支持 Windows 节点:
以 PowerShell 脚本的形式部署在宿主系统上,支持 Windows 节点:
* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd)
* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd)
@ -2034,17 +2047,15 @@ in `Container.volumeMounts`. Its values are:
cri-dockerd (Docker) is known to choose `rslave` mount propagation when the
mount source contains the Docker daemon's root directory (`/var/lib/docker`).
-->
* `None` - 此卷挂载将不会感知到主机后续在此卷或其任何子目录上执行的挂载变化。
类似的,容器所创建的卷挂载在主机上是不可见的。这是默认模式。
类似的,容器所创建的卷挂载在主机上是不可见的。这是默认模式。
该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html)中描述的
`rprivate` 挂载传播选项。
该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html) 中描述的
`rprivate` 挂载传播选项。
然而,当 `rprivate` 传播选项不适用时CRI 运行时可以转为选择 `rslave` 挂载传播选项
(即 `HostToContainer`)。当挂载源包含 Docker 守护进程的根目录(`/var/lib/docker`)时,
cri-dockerd (Docker) 已知可以选择 `rslave` 挂载传播选项。
然而,当 `rprivate` 传播选项不适用时CRI 运行时可以转为选择 `rslave` 挂载传播选项
(即 `HostToContainer`)。当挂载源包含 Docker 守护进程的根目录(`/var/lib/docker`)时,
cri-dockerd (Docker) 已知可以选择 `rslave` 挂载传播选项。
<!--
* `HostToContainer` - This volume mount will receive all subsequent mounts
@ -2084,7 +2095,7 @@ in `Container.volumeMounts`. Its values are:
* `Bidirectional` - 这种卷挂载和 `HostToContainer` 挂载表现相同。
另外,容器创建的卷挂载将被传播回至主机和使用同一卷的所有 Pod 的所有容器。
该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html)中描述的
该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html) 中描述的
`rshared` 挂载传播选项。
{{< warning >}}
@ -2101,36 +2112,6 @@ in `Container.volumeMounts`. Its values are:
此外,由 Pod 中的容器创建的任何卷挂载必须在终止时由容器销毁(卸载)。
{{< /warning >}}
<!--
### Configuration
Before mount propagation can work properly on some deployments (CoreOS,
RedHat/Centos, Ubuntu) mount share must be configured correctly in
Docker as shown below.
-->
### 配置 {#configuration}
在某些部署环境中,挂载传播正常工作前,必须在 Docker 中正确配置挂载共享mount share如下所示。
<!--
Edit your Docker's `systemd` service file. Set `MountFlags` as follows:
-->
编辑你的 Docker `systemd` 服务文件,按下面的方法设置 `MountFlags`
```shell
MountFlags=shared
```
<!--
Or, remove `MountFlags=slave` if present. Then restart the Docker daemon:
-->
或者,如果存在 `MountFlags=slave` 就删除掉。然后重启 Docker 守护进程:
```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```
## {{% heading "whatsnext" %}}
<!--

View File

@ -212,8 +212,8 @@ If you do not specify either, then the DaemonSet controller will create Pods on
## Daemon Pods 是如何被调度的 {#how-daemon-pods-are-scheduled}
<!--
A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet
controller creates a Pod for each eligible node and adds the
A DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod.
The DaemonSet controller creates a Pod for each eligible node and adds the
`spec.affinity.nodeAffinity` field of the Pod to match the target host. After
the Pod is created, the default scheduler typically takes over and then binds
the Pod to the target host by setting the `.spec.nodeName` field. If the new
@ -222,13 +222,26 @@ the existing Pods based on the
[priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)
of the new Pod.
-->
DaemonSet 确保所有符合条件的节点都运行该 Pod 的一个副本。
DaemonSet 可用于确保所有符合条件的节点都运行该 Pod 的一个副本。
DaemonSet 控制器为每个符合条件的节点创建一个 Pod并添加 Pod 的 `spec.affinity.nodeAffinity`
字段以匹配目标主机。Pod 被创建之后,默认的调度程序通常通过设置 `.spec.nodeName` 字段来接管 Pod 并将
Pod 绑定到目标主机。如果新的 Pod 无法放在节点上,则默认的调度程序可能会根据新 Pod
的[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)抢占
(驱逐)某些现存的 Pod。
{{< note >}}
<!--
If it's important that the DaemonSet pod run on each node, it's often desirable
to set the `.spec.template.spec.priorityClassName` of the DaemonSet to a
[PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
with a higher priority to ensure that this eviction occurs.
-->
当 DaemonSet 中的 Pod 必须运行在每个节点上时,通常需要将 DaemonSet
`.spec.template.spec.priorityClassName` 设置为具有更高优先级的
[PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
以确保可以完成驱逐。
{{< /note >}}
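<!--
For example, here is a sketch of the relevant part of a DaemonSet manifest;
the names and the image are examples, and `system-node-critical` is one of the
built-in high-priority PriorityClasses.
-->
例如,下面是 DaemonSet 清单中相关部分的一个简单示意;其中的名称和镜像仅为示例,
`system-node-critical` 是内置的高优先级 PriorityClass 之一。

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset       # 示例名称
spec:
  selector:
    matchLabels:
      name: example-daemonset
  template:
    metadata:
      labels:
        name: example-daemonset
    spec:
      priorityClassName: system-node-critical
      containers:
      - name: agent
        image: registry.k8s.io/pause:3.9   # 示例镜像
```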
<!--
The user can specify a different scheduler for the Pods of the DaemonSet, by
setting the `.spec.template.spec.schedulerName` field of the DaemonSet.

View File

@ -547,7 +547,7 @@ To enable the plugin, configure the following flags on the API server:
<!--
| Parameter | Description | Example | Required |
| --------- | ----------- | ------- | ------- |
| `--oidc-issuer-url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the `https://` scheme are accepted. This is typically the provider's discovery URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com". This URL should point to the level below .well-known/openid-configuration | If the discovery URL is `https://accounts.google.com/.well-known/openid-configuration`, the value should be `https://accounts.google.com` | Yes |
| `--oidc-issuer-url` | URL of the provider that allows the API server to discover public signing keys. Only URLs that use the `https://` scheme are accepted. This is typically the provider's discovery URL, changed to have an empty path | If the issuer's OIDC discovery URL is `https://accounts.provider.example/.well-known/openid-configuration`, the value should be `https://accounts.provider.example` | Yes |
| `--oidc-client-id` | A client id that all tokens must be issued for. | kubernetes | Yes |
| `--oidc-username-claim` | JWT claim to use as the user name. By default `sub`, which is expected to be a unique identifier of the end user. Admins can choose other claims, such as `email` or `name`, depending on their provider. However, claims other than `email` will be prefixed with the issuer URL to prevent naming clashes with other plugins. | sub | No |
| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-username-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No |
@ -560,7 +560,7 @@ To enable the plugin, configure the following flags on the API server:
| 参数 | 描述 | 示例 | 必需? |
| --------- | ----------- | ------- | ------- |
| `--oidc-issuer-url` | 允许 API 服务器发现公开的签名密钥的服务的 URL。只接受模式为 `https://` 的 URL。此值通常设置为服务的发现 URL不含路径。例如:"https://accounts.google.com" 或 "https://login.salesforce.com"。此 URL 应指向 .well-known/openid-configuration 下一层的路径。 | 如果发现 URL 是 `https://accounts.google.com/.well-known/openid-configuration`,则此值应为 `https://accounts.google.com` | 是 |
| `--oidc-issuer-url` | 允许 API 服务器发现公开的签名密钥的服务的 URL。只接受模式为 `https://` 的 URL。此值通常设置为提供者的发现 URL(去掉路径部分)。 | 如果签发者的 OIDC 发现 URL 是 `https://accounts.provider.example/.well-known/openid-configuration`,则此值应为 `https://accounts.provider.example` | 是 |
| `--oidc-client-id` | 所有令牌都应发放给此客户 ID。 | kubernetes | 是 |
| `--oidc-username-claim` | 用作用户名的 JWT 申领JWT Claim。默认情况下使用 `sub` 值,即最终用户的一个唯一的标识符。管理员也可以选择其他申领,例如 `email` 或者 `name`,取决于所用的身份服务。不过,除了 `email` 之外的申领都会被添加令牌发放者的 URL 作为前缀,以免与其他插件产生命名冲突。 | sub | 否 |
| `--oidc-username-prefix` | 要添加到用户名申领之前的前缀,用来避免与现有用户名发生冲突(例如:`system:` 用户)。例如,此标志值为 `oidc:` 时将创建形如 `oidc:jane.doe` 的用户名。如果此标志未设置,且 `--oidc-username-claim` 标志值不是 `email`,则默认前缀为 `<令牌发放者的 URL>#`,其中 `<令牌发放者 URL >` 的值取自 `--oidc-issuer-url` 标志的设定。此标志值为 `-` 时,意味着禁止添加用户名前缀。 | `oidc:` | 否 |
@ -746,7 +746,7 @@ Webhook 身份认证是一种用来验证持有者令牌的回调机制。
* `--authentication-token-webhook-cache-ttl` 用来设定身份认证决定的缓存时间。
默认时长为 2 分钟。
* `--authentication-token-webhook-version` 决定是使用 `authentication.k8s.io/v1beta1` 还是
`authenticationk8s.io/v1` 版本的 `TokenReview` 对象从 webhook 发送/接收信息。
`authentication.k8s.io/v1` 版本的 `TokenReview` 对象从 Webhook 发送/接收信息。
默认为“v1beta1”。
<!--
@ -1095,7 +1095,7 @@ the risks and the mechanisms to protect the CA's usage.
-->
为了防范头部信息侦听,在请求中的头部字段被检视之前,
身份认证代理需要向 API 服务器提供一份合法的客户端证书,供后者使用所给的 CA 来执行验证。
警告:**不要** 在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及应如何保护
警告:**不要**在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及应如何保护
CA 用法的机制。
* `--requestheader-client-ca-file` 必需字段,给出 PEM 编码的证书包。
@ -1172,11 +1172,11 @@ to the impersonated user info.
带伪装的请求首先会被身份认证识别为发出请求的用户,
之后会切换到使用被伪装的用户的用户信息。
* 用户发起 API 调用时 **同时** 提供自身的凭据和伪装头部字段信息
* API 服务器对用户执行身份认证
* API 服务器确认通过认证的用户具有伪装特权
* 请求用户的信息被替换成伪装字段的值
* 评估请求,鉴权组件针对所伪装的用户信息执行操作
* 用户发起 API 调用时**同时**提供自身的凭据和伪装头部字段信息
* API 服务器对用户执行身份认证
* API 服务器确认通过认证的用户具有伪装特权
* 请求用户的信息被替换成伪装字段的值
* 评估请求,鉴权组件针对所伪装的用户信息执行操作
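<!--
For example, a user with impersonation privileges can issue an impersonated
request through kubectl; the user and group names below are examples only.
-->
例如(仅作示意),具有伪装特权的用户可以通过 kubectl 发起带伪装的请求;
下面的用户名和组名均为示例。

```shell
kubectl get pods --as=jane.doe --as-group=developers
```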
<!--
The following HTTP headers can be used to perform an impersonation request:
@ -1283,7 +1283,7 @@ authorization plugin, the following ClusterRole encompasses the rules needed to
set user and group impersonation headers:
-->
若要伪装成某个用户、某个组、用户标识符UID或者设置附加字段
执行伪装操作的用户必须具有对所伪装的类别(“user”、“group”、“uid” 等)执行 “impersonate”
执行伪装操作的用户必须具有对所伪装的类别(`user`、`group`、`uid` 等)执行 `impersonate`
动词操作的能力。
对于启用了 RBAC 鉴权插件的集群,下面的 ClusterRole 封装了设置用户和组伪装字段所需的规则:
@ -1706,7 +1706,7 @@ users:
provideClusterInfo: true
# Exec 插件与标准输入 I/O 数据流之间的协议。如果协议无法满足,
# 则插件无法运行并会返回错误信息。合法的值包括 "Never" Exec 插件从不使用标准输入),
# 则插件无法运行并会返回错误信息。合法的值包括 "Never"Exec 插件从不使用标准输入),
# "IfAvailable" Exec 插件希望在可以的情况下使用标准输入),
# 或者 "Always" Exec 插件需要使用标准输入才能工作)。可选字段。
# 默认值为 "IfAvailable"。
@ -1853,7 +1853,7 @@ If specified, `clientKeyData` and `clientCertificateData` must both must be pres
如果插件在后续调用中返回了不同的证书或密钥,`k8s.io/client-go`
会终止其与服务器的连接,从而强制执行新的 TLS 握手过程。
如果指定了这种方式,则 `clientKeyData``clientCertificateData` 字段都必存在。
如果指定了这种方式,则 `clientKeyData` 和 `clientCertificateData` 字段都必须存在。
`clientCertificateData` 字段可能包含一些要发送给服务器的中间证书Intermediate
Certificates
@ -1996,7 +1996,7 @@ The following `ExecCredential` manifest describes a cluster information sample.
-->
## 为客户端提供的对身份验证信息的 API 访问 {#self-subject-review}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
{{< feature-state for_k8s_version="v1.28" state="stable" >}}
<!--
If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether you are authenticating as a user (typically representing a real person) or as a ServiceAccount.
@ -2015,12 +2015,12 @@ Kubernetes API 服务器收到请求后,将使用用户属性填充 status 字
请求示例(主体将是 `SelfSubjectReview`
```
POST /apis/authentication.k8s.io/v1beta1/selfsubjectreviews
POST /apis/authentication.k8s.io/v1/selfsubjectreviews
```
```json
{
"apiVersion": "authentication.k8s.io/v1beta1",
"apiVersion": "authentication.k8s.io/v1",
"kind": "SelfSubjectReview"
}
```
@ -2032,7 +2032,7 @@ Response example:
```json
{
"apiVersion": "authentication.k8s.io/v1beta1",
"apiVersion": "authentication.k8s.io/v1",
"kind": "SelfSubjectReview",
"status": {
"userInfo": {
@ -2119,7 +2119,7 @@ By providing the output flag, it is also possible to print the JSON or YAML repr
{{% tab name="YAML" %}}
```yaml
apiVersion: authentication.k8s.io/v1alpha1
apiVersion: authentication.k8s.io/v1
kind: SelfSubjectReview
status:
userInfo:
@ -2142,10 +2142,12 @@ status:
<!--
This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster,
for example, if you use [webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) or [authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).
for example, if you use [webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
or [authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).
-->
在 Kubernetes 集群中使用复杂的身份验证流程时,例如如果你使用
[Webhook 令牌身份验证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)或[身份验证代理](/zh-cn/docs/reference/access-authn-authz/authentication/#authenticating-proxy)时,
[Webhook 令牌身份验证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)或
[身份验证代理](/zh-cn/docs/reference/access-authn-authz/authentication/#authenticating-proxy)时,
此特性极其有用。
{{< note >}}
@ -2162,7 +2164,8 @@ Kubernetes API 服务器在所有身份验证机制
{{< /note >}}
<!--
By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview` feature is enabled. It is allowed by the `system:basic-user` cluster role.
By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview` feature is enabled.
It is allowed by the `system:basic-user` cluster role.
-->
默认情况下,所有经过身份验证的用户都可以在 `APISelfSubjectReview` 特性被启用时创建 `SelfSubjectReview` 对象。
这是 `system:basic-user` 集群角色允许的操作。
@ -2172,17 +2175,24 @@ By default, all authenticated users can create `SelfSubjectReview` objects when
You can only make `SelfSubjectReview` requests if:
* the `APISelfSubjectReview`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled for your cluster (enabled by default after reaching Beta)
is enabled for your cluster (not needed for Kubernetes {{< skew currentVersion >}}, but older
Kubernetes versions might not offer this feature gate, or might default it to be off)
* (if you are running a version of Kubernetes older than v1.28) the API server for your
cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
* the API server for your cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
{{< glossary_tooltip term_id="api-group" text="API group" >}}
enabled.
-->
你只能在以下情况下进行 `SelfSubjectReview` 请求:
* 集群启用了 `APISelfSubjectReview` (Beta 版本默认启用)
* 集群启用了 `APISelfSubjectReview`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
Kubernetes {{< skew currentVersion >}} 不需要,但较旧的 Kubernetes 版本可能没有此特性门控,
或者默认为关闭状态)。
* (如果你运行的 Kubernetes 版本早于 v1.28 版本)集群的 API 服务器包含
`authentication.k8s.io/v1alpha1``authentication.k8s.io/v1beta1` API 组。
* 集群的 API 服务器已启用 `authentication.k8s.io/v1alpha1` 或者 `authentication.k8s.io/v1beta1`
{{< glossary_tooltip term_id="api-group" text="API 组" >}}。。
{{< glossary_tooltip term_id="api-group" text="API 组" >}}。
{{< /note >}}
## {{% heading "whatsnext" %}}
@ -2191,6 +2201,5 @@ You can only make `SelfSubjectReview` requests if:
* Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/)
-->
* 阅读[客户端认证参考文档 (v1beta1)](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/)
* 阅读[客户端认证参考文档 (v1)](/zh-cn/docs/reference/config-api/client-authentication.v1/)
* 阅读[客户端认证参考文档v1beta1](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/)。
* 阅读[客户端认证参考文档v1](/zh-cn/docs/reference/config-api/client-authentication.v1/)。

View File

@ -125,18 +125,18 @@ For a reference to old feature gates that are removed, please refer to
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
| `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | |
| CRDValidationRatcheting | false | Alpha | 1.28 |
| `CRDValidationRatcheting` | `false` | Alpha | 1.28 | |
| `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 |
| `CSIMigrationPortworx` | `false` | Beta | 1.25 | |
| `CSINodeExpandSecret` | `false` | Alpha | 1.25 | 1.26 |
| `CSINodeExpandSecret` | `true` | Beta | 1.27 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
| `CloudControllerManagerWebhook` | false | Alpha | 1.27 | |
| `CloudDualStackNodeIPs` | false | Alpha | 1.27 | |
| `ClusterTrustBundle` | false | Alpha | 1.27 | |
| `CloudControllerManagerWebhook` | `false` | Alpha | 1.27 | |
| `CloudDualStackNodeIPs` | `false` | Alpha | 1.27 | |
| `ClusterTrustBundle` | `false` | Alpha | 1.27 | |
| `ComponentSLIs` | `false` | Alpha | 1.26 | 1.26 |
| `ComponentSLIs` | `true` | Beta | 1.27 | |
| `ConsistentListFromCache` | `false` | Alpha | 1.28 |
| `ConsistentListFromCache` | `false` | Alpha | 1.28 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
| `CronJobsScheduledAnnotation` | `true` | Beta | 1.28 | |
@ -148,9 +148,9 @@ For a reference to old feature gates that are removed, please refer to
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
| `ElasticIndexedJob` | `true` | Beta` | 1.27 | |
| `ElasticIndexedJob` | `true` | Beta | 1.27 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | 1.26 |
| `EventedPLEG` | `false` | Beta | 1.27 | - |
| `EventedPLEG` | `false` | Beta | 1.27 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | 1.23 |
@ -263,7 +263,7 @@ For a reference to old feature gates that are removed, please refer to
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 |
| `ValidatingAdmissionPolicy` | `false` | Beta | 1.28 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | |
| `WatchList` | false | Alpha | 1.27 | |
| `WatchList` | `false` | Alpha | 1.27 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
@ -421,7 +421,8 @@ A *Beta* feature means:
**Beta** 特性代表:
<!--
* Usually enabled by default. Beta API groups are [disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
* Usually enabled by default. Beta API groups are
[disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
* The feature is well tested. Enabling the feature is considered safe.
* Support for the overall feature will not be dropped, though details may change.
* The schema and/or semantics of objects may change in incompatible ways in a
@ -519,31 +520,14 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `AppArmor`:在 Linux 节点上为 Pod 启用 AppArmor 机制的强制访问控制。
请参见 [AppArmor 教程](/zh-cn/docs/tutorials/security/apparmor/)获取详细信息。
<!--
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ControllerManagerLeaderMigration`: Enables Leader Migration for
[kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and
[cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
which allows a cluster operator to live migrate
controllers from the kube-controller-manager into an external controller-manager
(e.g. the cloud-controller-manager) in an HA cluster without downtime.
-->
- `ContainerCheckpoint`:启用 kubelet `checkpoint` API。
参阅 [Kubelet Checkpoint API](/zh-cn/docs/reference/node/kubelet-checkpoint-api/) 获取更多详细信息。
- `ControllerManagerLeaderMigration`:为
[kube-controller-manager](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) 和
[cloud-controller-manager](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
启用 Leader 迁移,它允许集群管理者在没有停机的高可用集群环境下,实时把 kube-controller-manager
迁移到外部的 controller-manager (例如 cloud-controller-manager) 中。
<!--
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies,
experimental, Alpha-quality options
experimental, Alpha-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is alpha.
This feature gate will never graduate to beta or stable.
- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies,
experimental, Beta-quality options
experimental, Beta-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is beta.
This feature gate will never graduate to stable.
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
@ -558,38 +542,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
此特性门控永远不会被升级为稳定版本。
- `CPUManagerPolicyOptions`:允许微调 CPU 管理策略。
<!--
- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
- `CSIMigration`: Enables shims and translation logic to route volume
operations from in-tree plugins to corresponding pre-installed CSI plugins
-->
- `CSIInlineVolume`:为 Pod 启用 CSI 内联卷支持。
- `CSIMigration`:确保封装和转换逻辑能够将卷操作从内嵌插件路由到相应的预安装 CSI 插件。
<!--
- `CSIMigrationAWS`: Enables shims and translation logic to route volume
operations from the AWS-EBS in-tree plugin to EBS CSI plugin. Supports
falling back to in-tree EBS plugin for mount operations to nodes that have
the feature disabled or that do not have EBS CSI plugin installed and
configured. Does not support falling back for provision operations, for those
the CSI plugin must be installed and configured.
-->
- `CSIMigrationAWS`:确保填充和转换逻辑能够将卷操作从 AWS-EBS 内嵌插件路由到 EBS CSI 插件。
如果节点禁用了此特性门控或者未安装和配置 EBS CSI 插件,支持回退到内嵌 EBS 插件来执行卷挂载操作。
不支持回退到这些插件来执行卷制备操作,因为需要安装并配置 CSI 插件。
<!--
- `CSIMigrationAzureDisk`: Enables shims and translation logic to route volume
operations from the Azure-Disk in-tree plugin to AzureDisk CSI plugin.
Supports falling back to in-tree AzureDisk plugin for mount operations to
nodes that have the feature disabled or that do not have AzureDisk CSI plugin
installed and configured. Does not support falling back for provision
operations, for those the CSI plugin must be installed and configured.
Requires CSIMigration feature flag enabled.
-->
- `CSIMigrationAzureDisk`:确保填充和转换逻辑能够将卷操作从 AzureDisk 内嵌插件路由到
Azure 磁盘 CSI 插件。对于禁用了此特性的节点或者没有安装并配置 AzureDisk CSI
插件的节点支持回退到内嵌in-treeAzureDisk 插件来执行磁盘挂载操作。
不支持回退到内嵌插件来执行磁盘制备操作,因为对应的 CSI 插件必须已安装且正确配置。
此特性需要启用 CSIMigration 特性标志。
<!--
- `CSIMigrationAzureFile`: Enables shims and translation logic to route volume
operations from the Azure-File in-tree plugin to AzureFile CSI plugin.
Supports falling back to in-tree AzureFile plugin for mount operations to
@ -653,7 +605,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
-->
- `CloudControllerManagerWebhook`:启用在云控制器管理器中的 Webhook。
- `CloudDualStackNodeIPs`:允许在外部云驱动中通过 `kubelet --node-ip` 设置双协议栈。
有关详细信息,请参阅[配置 IPv4/IPv6 双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)。
有关详细信息,请参阅[配置 IPv4/IPv6 双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)。
- `ClusterTrustBundle`:启用 ClusterTrustBundle 对象和 kubelet 集成。
<!--
- `ComponentSLIs`: Enable the `/metrics/slis` endpoint on Kubernetes components like
@ -665,11 +617,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ContextualLogging`: When you enable this feature gate, Kubernetes components that support
contextual logging add extra detail to log output.
contextual logging add extra detail to log output.
- `CronJobsScheduledAnnotation`: Set the scheduled job time as an
{{< glossary_tooltip text="annotation" term_id="annotation" >}} on Jobs that were created
on behalf of a CronJob.
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/)
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/).
-->
- `ComponentSLIs`: 在 kubelet、kube-scheduler、kube-proxy、kube-controller-manager、cloud-controller-manager
等 Kubernetes 组件上启用 `/metrics/slis` 端点,从而允许你抓取健康检查指标。
@ -684,12 +636,13 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CronJobTimeZone`:允许在 [CronJobs](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)
中使用 `timeZone` 可选字段。
<!--
- `CRDValidationRatcheting`: Enable updates to custom resources to contain
violations of their OpenAPI schema if the offending portions of the resource
update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting) for more details.
- `CRDValidationRatcheting`: Enable updates to custom resources to contain
violations of their OpenAPI schema if the offending portions of the resource
update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting)
for more details.
- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
to allow you to specify a source namespace in the `dataSourceRef` field of a
PersistentVolumeClaim.
to allow you to specify a source namespace in the `dataSourceRef` field of a
PersistentVolumeClaim.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
[kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
@ -733,7 +686,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
to authenticate to a cloud provider container registry for image pull credentials.
- `DownwardAPIHugePages`: Enables usage of hugepages in
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
- `DynamicResourceAllocation`: Enables support for resources with custom parameters and a lifecycle
-->
- `DisableCloudProviders`:禁用 `kube-apiserver`、`kube-controller-manager` 和
`kubelet` 组件的 `--cloud-provider` 标志相关的所有功能。
@ -742,9 +694,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `DownwardAPIHugePages`
允许在[下行DownwardAPI](/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information)
中使用巨页信息。
- `DynamicResourceAllocation`:启用对具有自定义参数和生命周期的资源的支持。
<!--
- `DynamicResourceAllocation": Enables support for resources with custom parameters and a lifecycle
- `DynamicResourceAllocation`: Enables support for resources with custom parameters and a lifecycle
that is independent of a Pod.
- `ElasticIndexedJob`: Enables Indexed Jobs to be scaled up or down by mutating both
`spec.completions` and `spec.parallelism` together such that `spec.completions == spec.parallelism`.
@ -760,9 +711,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `EfficientWatchResumption`:允许将存储发起的书签(进度通知)事件传递给用户。
这仅适用于监视操作。
<!--
- `EphemeralContainers`: Enable the ability to add
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
to running pods.
- `EventedPLEG`: Enable support for the kubelet to receive container life cycle events from the
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} via
an extension to {{<glossary_tooltip term_id="cri" text="CRI">}}.
@ -776,9 +724,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
now-corrected fault where Kubernetes ignored exec probe timeouts. See
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
-->
- `EphemeralContainers`:启用添加
{{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}
到正在运行的 Pod 的特性。
- `EventedPLEG`启用此特性后kubelet 能够通过 {{<glossary_tooltip term_id="cri" text="CRI">}}
扩展从{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}接收容器生命周期事件。
PLEG 是 `Pod lifecycle event generator` 的缩写,即 Pod 生命周期事件生成器)。
@ -789,25 +734,15 @@ Each feature gate is designed for enabling/disabling a specific feature:
该缺陷导致 Kubernetes 会忽略 exec 探针的超时值设置。
参阅[就绪态探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
<!--
- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.
- `ExpandedDNSConfig`: Enable kubelet and kube-apiserver to allow more DNS
search paths and longer list of DNS search paths. This feature requires container
runtime support(Containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
runtime support (containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
[Expanded DNS Configuration](/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
[Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See
[Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
-->
- `ExpandCSIVolumes`:启用扩展 CSI 卷。
- `ExpandedDNSConfig`:在 kubelet 和 kube-apiserver 上启用后,
允许使用更多的 DNS 搜索域和搜索域列表。此功能特性需要容器运行时
Containerdv1.5.6 或更高CRI-Ov1.22 或更高)的支持。
containerd v1.5.6 或更高CRI-O v1.22 或更高)的支持。
参阅[扩展 DNS 配置](/zh-cn/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExpandInUsePersistentVolumes`:启用扩充使用中的 PVC 的尺寸。
请查阅[调整使用中的 PersistentVolumeClaim 的大小](/zh-cn/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)。
- `ExpandPersistentVolumes`:允许扩充持久卷。
请查阅[扩展持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)。
<!--
- `ExperimentalHostUserNamespaceDefaulting`: Enabling the defaulting user
namespace to host. This is for containers that are using other host namespaces,
@ -830,7 +765,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
when shutting down a node gracefully.
- `GRPCContainerProbe`: Enables the gRPC probe method for {Liveness,Readiness,Startup}Probe.
- `GRPCContainerProbe`: Enables the gRPC probe method for liveness, readiness and startup probes.
See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
- `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering.
For more details, check the
@ -839,7 +774,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
-->
- `GracefulNodeShutdownBasedOnPodPriority`:允许 kubelet 在体面终止节点时检查
Pod 的优先级。
- `GRPCContainerProbe`:为 LivenessProbe、ReadinessProbe、StartupProbe 启用 gRPC 探针。
- `GRPCContainerProbe`:为活跃态、就绪态和启动探针启用 gRPC 探针。
参阅[配置活跃态、就绪态和启动探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。
- `HonorPVReclaimPolicy`:无论 PV 和 PVC 的删除顺序如何,当持久卷申领的策略为 `Delete`
时,确保这种策略得到处理。
@ -860,10 +795,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `IPTablesOwnershipCleanup`:这使得 kubelet 不再创建传统的 iptables 规则。
- `InPlacePodVerticalScaling`:启用就地 Pod 垂直扩缩。
<!--
- `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying
the OS of the pod authoritatively during the API server admission time.
In Kubernetes {{< skew currentVersion >}}, the allowed values for the `pod.spec.os.name`
are `windows` and `linux`.
- `InTreePluginAWSUnregister`: Stops registering the aws-ebs in-tree plugin in kubelet
and volume controllers.
- `InTreePluginAzureDiskUnregister`: Stops registering the azuredisk in-tree plugin in kubelet
@ -871,14 +802,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `InTreePluginAzureFileUnregister`: Stops registering the azurefile in-tree plugin in kubelet
and volume controllers.
-->
- `IdentifyPodOS`:允许设置 Pod 的 OS 字段。这一设置有助于在 API 服务器准入期间确定性地辨识
Pod 的 OS。在 Kubernetes {{< skew currentVersion >}} 中,`pod.spec.os.name` 可选的值包括
`windows``linux`
- `ImmutableEphemeralVolumes`:允许将各个 Secret 和 ConfigMap 标记为不可变更的,
以提高安全性和性能。
- `IngressClassNamespacedParams`:允许在 `IngressClass` 资源中使用名字空间范围的参数引用。
此功能为 `IngressClass.spec.parameters` 添加了两个字段 - `scope``namespace`
- `Initializers`:允许使用 Intializers 准入插件来异步协调对象创建操作。
- `InTreePluginAWSUnregister`:在 kubelet 和卷控制器上关闭注册 aws-ebs 内嵌插件。
- `InTreePluginAzureDiskUnregister`:在 kubelet 和卷控制器上关闭注册 azuredisk 内嵌插件。
- `InTreePluginAzureFileUnregister`:在 kubelet 和卷控制器上关闭注册 azurefile 内嵌插件。
@ -899,68 +822,62 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `InTreePluginvSphereUnregister`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers.
- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
controller to manage Pod completions per completion index.
- `IngressClassNamespacedParams`: Allow namespace-scoped parameters reference in
`IngressClass` resource. This feature adds two fields - `Scope` and `Namespace`
to `IngressClass.spec.parameters`.
- `Initializers`: Allow asynchronous coordination of object creation using the
Initializers admission plugin.
-->
- `InTreePluginvSphereUnregister`:在 kubelet 和卷控制器上关闭注册 vSphere 内嵌插件。
- `IndexedJob`:允许 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
控制器根据完成索引来管理 Pod 完成。
- `IngressClassNamespacedParams`:允许在 `IngressClass` 资源中引用名字空间范围的参数。
该特性增加了两个字段 —— `scope`、`namespace` 到 `IngressClass.spec.parameters`
- `Initializers` 使用 Initializers 准入插件允许异步协调对象创建。
<!--
- `JobMutableNodeSchedulingDirectives`: Allows updating node scheduling directives in
the pod template of [Job](/docs/concepts/workloads/controllers/job).
the pod template of [Job](/docs/concepts/workloads/controllers/job/).
- `JobBackoffLimitPerIndex`: Allows specifying the maximal number of pod
retries per index in Indexed jobs.
- `JobPodFailurePolicy`: Allow users to specify handling of pod failures based on container
exit codes and pod conditions.
- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a [Job](/docs/concepts/workloads/controllers/job)
- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a
[Job](/docs/concepts/workloads/controllers/job/).
- `JobReadyPods`: Enables tracking the number of Pods that have a `Ready`
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions).
The count of `Ready` pods is recorded in the
[status](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)
of a [Job](/docs/concepts/workloads/controllers/job) status.
of a [Job](/docs/concepts/workloads/controllers/job/) status.
-->
- `JobMutableNodeSchedulingDirectives`:允许在 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
- `JobMutableNodeSchedulingDirectives`:允许在 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
的 Pod 模板中更新节点调度指令。
- `JobBackoffLimitPerIndex`:允许在索引作业中指定每个索引的最大 Pod 重试次数。
- `JobPodFailurePolicy`:允许用户根据容器退出码和 Pod 状况来指定 Pod 失效的处理方法。
- `JobPodReplacementPolicy`:允许你在 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
- `JobPodReplacementPolicy`:允许你在 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
中为终止的 Pod 指定替代 Pod。
- `JobReadyPods`:允许跟踪[状况](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)为
`Ready` 的 Pod 的个数。`Ready` 的 Pod 记录在
[Job](/zh-cn/docs/concepts/workloads/controllers/job) 对象的
[Job](/zh-cn/docs/concepts/workloads/controllers/job/) 对象的
[status](/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus) 字段中。
<!--
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job)
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job/)
completions without relying on Pods remaining in the cluster indefinitely.
The Job controller uses Pod finalizers and a field in the Job status to keep
track of the finished Pods to count towards completion.
-->
- `JobTrackingWithFinalizers`:启用跟踪 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
- `JobTrackingWithFinalizers`:启用跟踪 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
完成情况,而不是永远从集群剩余 Pod 来获取信息判断完成情况。Job 控制器使用
Pod finalizers 和 Job 状态中的一个字段来跟踪已完成的 Pod 以计算完成。
<!--
- `KMSv1`: Enables KMS v1 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
- `KMSv2`: Enables KMS v2 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
- `KMSv1`: Enables KMS v1 API for encryption at rest. See
[Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details.
- `KMSv2`: Enables KMS v2 API for encryption at rest. See
[Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details.
- `KMSv2KDF`: Enables KMS v2 to generate single use data encryption keys.
See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
If the `KMSv2` feature gate is not enabled in your cluster, the value of the `KMSv2KDF` feature gate has no effect.
See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
for more details. If the `KMSv2` feature gate is not enabled in your cluster, the value of
the `KMSv2KDF` feature gate has no effect.
- `KubeProxyDrainingTerminatingNodes`: Implement connection draining for
terminating nodes for `externalTrafficPolicy: Cluster` services.
-->
- `KMSv1`:启用 KMS v1 API 以进行数据静态加密。
详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
- `KMSv2`:启用 KMS v2 API 以实现静态加密。
详情参见[使用 KMS 驱动进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
详情参见[使用 KMS 驱动进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
- `KMSv2KDF`:启用 KMS v2 以生成一次性数据加密密钥。
详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
如果 `KMSv2` 特性门控在你的集群未被启用,则 `KMSv2KDF` 特性门控的值不会产生任何影响。
- `KubeProxyDrainingTerminatingNodes`:为 `externalTrafficPolicy: Cluster` 服务实现对正在终止的节点的连接排空。
<!--
@ -973,7 +890,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
line argument). If you enable this feature gate and the container runtime
doesn't support it, the kubelet falls back to using the driver configured using
the `cgroupDriver` configuration setting.
See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver)
See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)
for more details.
-->
- `KubeletCgroupDriverFromCRI`:启用检测来自 {{<glossary_tooltip term_id="cri" text="CRI">}}
@ -981,7 +898,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
也可以在支持 `RuntimeConfig` CRI 调用的 CRI 容器运行时所在节点上使用此特性门控。
如果 CRI 和 kubelet 都支持此特性kubelet 将忽略 `cgroupDriver` 配置设置(或已弃用的 `--cgroup-driver` 命令行参数)。
如果你启用此特性门控但容器运行时不支持它,则 kubelet 将回退到使用通过 `cgroupDriver` 配置设置进行配置的驱动。
详情参见[配置 cgroup 驱动](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver)。
详情参见[配置 cgroup 驱动](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)。
<!--
- `KubeletInUserNamespace`: Enables support for running kubelet in a
{{<glossary_tooltip text="user namespace" term_id="userns">}}.
@ -997,7 +914,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
This API augments the [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- `KubeletPodResourcesGetAllocatable`: Enable the kubelet's pod resources
`GetAllocatableResources` functionality. This API augments the
[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
-->
- `KubeletPodResources`:启用 kubelet 上 Pod 资源 GRPC 端点。更多详细信息,
请参见[支持设备监控](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)。
@ -1007,15 +924,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
该 API 增强了[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
包含有关可分配资源的信息,使客户端能够正确跟踪节点上的可用计算资源。
<!--
- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint to
- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint
to include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
with informations about the allocatable resources, enabling clients to properly
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
for more details. The report includes information about the allocatable resources, enabling clients to properly
track the free compute resources on a node.
- `KubeletTracing`: Add support for distributed tracing in the kubelet.
When enabled, kubelet CRI interface and authenticated http servers are instrumented to generate
OpenTelemetry trace spans.
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces/)
for more details.
- `LegacyServiceAccountTokenNoAutoGeneration`: Stop auto-generation of Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token).
- `LegacyServiceAccountTokenCleanUp`: Enable cleaning up Secret-based
@ -1024,8 +942,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `LegacyServiceAccountTokenTracking`: Track usage of Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token).
-->
- `KubeletPodResourcesDynamicResources`:扩展 kubelet 的 pod 资源 gRPC 端点以包括通过 `DynamicResourceAllocation` API 在 `ResourceClaims` 中分配的资源。
有关详细信息,请参阅[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)。
- `KubeletPodResourcesDynamicResources`:扩展 kubelet 的 pod 资源 gRPC 端点以包括通过
`DynamicResourceAllocation` API 在 `ResourceClaims` 中分配的资源。
有关详细信息,请参阅[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)。
- `KubeletTracing`:新增在 Kubelet 中对分布式追踪的支持。
启用时kubelet CRI 接口和经身份验证的 http 服务器被插桩以生成 OpenTelemetry 追踪 span。
参阅[针对 Kubernetes 系统组件的追踪](/zh-cn/docs/concepts/cluster-administration/system-traces/)
@ -1037,10 +956,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `LegacyServiceAccountTokenTracking`:跟踪使用基于 Secret
的[服务账号令牌](/zh-cn/docs/concepts/security/service-accounts/#get-a-token)。
<!--
- `LocalStorageCapacityIsolation`: Enable the consumption of
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
and also the `sizeLimit` property of an
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation`
is enabled for
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
@ -1049,9 +964,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than
filesystem walk for better performance and accuracy.
-->
- `LocalStorageCapacityIsolation`:允许使用
[本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/)
以及 [emptyDir 卷](/zh-cn/docs/concepts/storage/volumes/#emptydir)的 `sizeLimit` 属性。
- `LocalStorageCapacityIsolationFSQuotaMonitoring`:如果
[本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/)启用了
`LocalStorageCapacityIsolation`,并且
@ -1132,16 +1044,17 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `NodeOutOfServiceVolumeDetach`: When a Node is marked out-of-service using the
`node.kubernetes.io/out-of-service` taint, Pods on the node will be forcefully deleted
if they can not tolerate this taint, and the volume detach operations for Pods terminating
on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
if they can not tolerate this taint, and the volume detach operations for Pods terminating
on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
- `NodeSwap`: Enable the kubelet to allocate swap memory for Kubernetes workloads on a node.
Must be used with `KubeletConfiguration.failSwapOn` set to false.
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory).
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`. This specifies
when unhealthy pods should be considered for eviction. Please see [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`.
This specifies when unhealthy pods should be considered for eviction. Please see
[Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
for more details.
-->
- `NodeOutOfServiceVolumeDetach`:当使用 `node.kubernetes.io/out-of-service`
@ -1158,11 +1071,13 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `PersistentVolumeLastPhaseTransitionTime`: Adds a new field to PersistentVolume
which holds a timestamp of when the volume last transitioned its phase.
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the CRI container runtime rather than gathering them from cAdvisor.
As of 1.26, this also includes gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the
CRI container runtime rather than gathering them from cAdvisor. As of 1.26, this also includes
gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
feature which allows users to influence ReplicaSet downscaling order.
- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that the pod is being deleted due to a disruption.
feature which allows users to influence ReplicaSet downscaling order.
- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that
the pod is being deleted due to a disruption.
-->
- `PersistentVolumeLastPhaseTransitionTime`:为 PersistentVolume 添加一个新字段,用于保存卷上一次转换阶段的时间戳。
- `PodAndContainerStatsFromCRI`:配置 kubelet 从 CRI 容器运行时中而不是从 cAdvisor 中采集容器和 Pod 统计信息。
@ -1173,7 +1088,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `PodHostIPs`: Enable the `status.hostIPs` field for pods and the {{< glossary_tooltip term_id="downward-api" text="downward API" >}}.
The field lets you expose host IP addresses to workloads.
- `PodIndexLabel`: Enables the Job controller and StatefulSet controller to add the pod index as a label when creating new pods. See [Job completion mode docs](/docs/concepts/workloads/controllers/job#completion-mode) and [StatefulSet pod index label docs](/docs/concepts/workloads/controllers/statefulset/#pod-index-label) for more details.
- `PodIndexLabel`: Enables the Job controller and StatefulSet controller to add the pod index as a label
when creating new pods. See [Job completion mode docs](/docs/concepts/workloads/controllers/job/#completion-mode)
and [StatefulSet pod index label docs](/docs/concepts/workloads/controllers/statefulset/#pod-index-label)
for more details.
- `PodReadyToStartContainersCondition`: Enable the kubelet to mark the [PodReadyToStartContainers](/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network)
condition on pods. This was previously (1.25-1.27) known as `PodHasNetworkCondition`.
-->
@ -1186,9 +1104,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
[PodReadyToStartContainers](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network) 状况。
此前1.25-1.27 版本)称为 `PodHasNetworkCondition`
<!--
- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's [scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness).
- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's
[scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).
-->
- `PodSchedulingReadiness`:启用设置 `schedulingGates` 字段以控制 Pod 的[调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness)。
- `PodSchedulingReadiness`:启用设置 `schedulingGates` 字段以控制 Pod 的[调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)。
<!--
- `ProbeTerminationGracePeriod`: Enable [setting probe-level
`terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds)
@ -1264,8 +1183,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
有助于减少无效的重新排队。调度器会在集群中发生可能导致 Pod 被重新调度的变化时,
尝试重新进行 Pod 的调度。排队提示是一些内部信号,
用于帮助调度器基于先前的调度尝试来筛选集群中与未调度的 Pod 相关的变化。
<!--
- `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
for all workloads.
The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
- `SecurityContextDeny`: This gate signals that the `SecurityContextDeny` admission controller is deprecated.
- `ServerSideApply`: Enables the [Sever Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
feature on the API Server.
@ -1273,6 +1194,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
of resource schema is performed at the API server side rather than the client side
(for example, the `kubectl create` or `kubectl apply` command line).
-->
- `SeccompDefault`:启用 `RuntimeDefault` 作为所有工作负载的默认 seccomp 配置文件。
此 seccomp 配置文件在 Pod 和/或 Container 的 `securityContext` 中被指定。
- `SecurityContextDeny`: 此门控表示 `SecurityContextDeny` 准入控制器已弃用。
- `ServerSideApply`:在 API 服务器上启用[服务器端应用SSA](/zh-cn/docs/reference/using-api/server-side-apply/)。
- `ServerSideFieldValidation`:启用服务器端字段验证。
@ -1316,9 +1239,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `StorageVersionHash`:允许 API 服务器在版本发现中公开存储版本的哈希值。
<!--
- `TopologyAwareHints`: Enables topology aware routing based on topology hints
in EndpointSlices. See [Topology Aware
Hints](/docs/concepts/services-networking/topology-aware-hints/) for more
details.
in EndpointSlices. See [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/)
for more details.
- `TopologyManager`: Enable a mechanism to coordinate fine-grained hardware resource
assignments for different components in Kubernetes. See
[Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/).
@ -1355,7 +1277,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `UserNamespacesSupport`:为 Pod 启用用户名字空间支持。
在 Kubernetes v1.28 之前,此特性门控被命名为 `UserNamespacesStatelessPodsSupport`
<!--
- `ValidatingAdmissionPolicy`: Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/) support for CEL validations be used in Admission Control.
- `ValidatingAdmissionPolicy`: Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/)
support for CEL validations to be used in Admission Control.
- `VolumeCapacityPriority`: Enable support for prioritizing nodes in different
topologies based on available PV capacity.
-->
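
As a hedged illustration of how the gates in this reference are switched on, the sketch below passes two alpha gates from the table above to the API server; the gate choices are examples only and every other flag the component needs is omitted.

```shell
# Illustrative only: enable two alpha feature gates on kube-apiserver.
# All other required kube-apiserver flags are omitted from this sketch.
kube-apiserver --feature-gates=ConsistentListFromCache=true,WatchList=true
```
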

View File

@ -89,7 +89,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: "$HOME/.kube/cache"</td>
<td colspan="2">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值"$HOME/.kube/cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -133,7 +133,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--cloud-provider-gce-l7lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 130.211.0.0/22,35.191.0.0/16</td>
<td colspan="2">--cloud-provider-gce-l7lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值130.211.0.0/22,35.191.0.0/16</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -142,7 +142,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--cloud-provider-gce-lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16</td>
<td colspan="2">--cloud-provider-gce-lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -175,7 +175,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 300</td>
<td colspan="2">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -186,7 +186,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 300</td>
<td colspan="2">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -230,7 +230,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--log-backtrace-at traceLocation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 0</td>
<td colspan="2">--log-backtrace-at traceLocation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值0</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -263,7 +263,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--log-file-max-size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 1800</td>
<td colspan="2">--log-file-max-size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值1800</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -274,7 +274,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 5s</td>
<td colspan="2">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值5s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -285,7 +285,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: true</td>
<td colspan="2">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -338,18 +338,18 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: "none"</td>
<td colspan="2">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值"none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
-->
要记录的性能指标的名称。可取 (none|cpu|heap|goroutine|threadcreate|block|mutex) 其中之一。
要记录的性能指标的名称。可取none|cpu|heap|goroutine|threadcreate|block|mutex其中之一。
</td>
</tr>
<tr>
<td colspan="2">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: "profile.pprof"</td>
<td colspan="2">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值"profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -360,7 +360,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: "0"</td>
<td colspan="2">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值"0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -404,7 +404,7 @@ kubectl [flags]
</td>
</tr>
<tr>
<td colspan="2">--stderrthreshold severity&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值: 2</td>
<td colspan="2">--stderrthreshold severity&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;默认值2</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
@ -500,7 +500,7 @@ kubectl [flags]
<!--
Path to the kubectl configuration ("kubeconfig") file. Default: "$HOME/.kube/config"
-->
kubectl 的配置 ("kubeconfig") 文件的路径。默认值: "$HOME/.kube/config"
kubectl 的配置 ("kubeconfig") 文件的路径。默认值"$HOME/.kube/config"
</td>
</tr>
@ -541,6 +541,19 @@ When set to true, external plugins can be used as subcommands for builtin comman
</td>
</tr>
<tr>
<td colspan="2">KUBECTL_INTERACTIVE_DELETE</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
When set to true, the --interactive flag in the kubectl delete command will be activated, allowing users to preview and confirm resources before proceeding to delete by passing this flag.
-->
当设置为 true 时,`kubectl delete` 命令中的 `--interactive` 标志将被激活,
允许用户在通过传递此标志进行删除之前预览并确认资源(示例见表格之后)。
</td>
</tr>
</tbody>
</table>
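
A short, hedged example of the `KUBECTL_INTERACTIVE_DELETE` variable described in the table above; the Deployment name is hypothetical.

```shell
# With the variable set, kubectl delete accepts --interactive and shows the
# resources it is about to remove, asking for confirmation first.
export KUBECTL_INTERACTIVE_DELETE=true
kubectl delete deployment my-app --interactive
```
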
@ -647,4 +660,4 @@ When set to true, external plugins can be used as subcommands for builtin comman
* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - 显示资源CPU/内存/存储)使用率
* [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - 标记节点为可调度的
* [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - 打印客户端和服务器的版本信息
* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - 实验性:等待一个或多个资源达到某种状态
* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - 实验级特性:等待一个或多个资源达到某种状态

View File

@ -41,9 +41,10 @@ CustomResourceDefinition 表示应在 API 服务器上公开的资源。其名
<!--
Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
-->
标准的对象元数据,更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
标准的对象元数据,更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinitionSpec" >}}">CustomResourceDefinitionSpec</a>), <!--required-->必需
- **spec** (<a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinitionSpec" >}}">CustomResourceDefinitionSpec</a>)<!--required-->必需
<!--
spec describes how the user wants the resources to appear
-->
@ -298,7 +299,7 @@ CustomResourceDefinitionSpec 描述了用户希望资源的呈现方式。
schema describes the schema used for validation, pruning, and defaulting of this version of the custom resource.
-->
schema 描述了用于验证、精简和默认此版本的自定义资源的模式。
schema 描述了用于验证、精简和默认此版本的自定义资源的模式。
<a name="CustomResourceValidation"></a>
<!--
@ -321,7 +322,7 @@ CustomResourceDefinitionSpec 描述了用户希望资源的呈现方式。
subresources specify what subresources this version of the defined custom resource have.
-->
subresources 指定此版本已定义的自定义资源具有哪些子资源。
subresources 指定此版本已定义的自定义资源具有哪些子资源。
<a name="CustomResourceSubresources"></a>
<!--
@ -461,7 +462,7 @@ CustomResourceDefinitionSpec 描述了用户希望资源的呈现方式。
clientConfig is the instructions for how to call the webhook if strategy is `Webhook`.
-->
如果 strategy 是 `Webhook` 那么 clientConfig 是关于如何调用 Webhook 的说明。
如果 strategy 是 `Webhook`,那么 clientConfig 是关于如何调用 Webhook 的说明。
<a name="WebhookClientConfig"></a>
<!--
@ -581,7 +582,8 @@ CustomResourceDefinitionSpec 描述了用户希望资源的呈现方式。
preserveUnknownFields 表示将对象写入持久性存储时应保留 OpenAPI 模式中未规定的对象字段。
apiVersion、kind、元数据metadata和元数据中的已知字段始终保留。不推荐使用此字段而建议在
`spec.versions[*].schema.openAPIV3Schema` 中设置 `x-preserve-unknown-fields` 为 true。
更多详细信息参见: https://kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning
更多详细信息参见:
https://kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning
## JSONSchemaProps {#JSONSchemaProps}
@ -689,7 +691,7 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
- hostname互联网主机名的有效表示由 RFC 1034 第 3.1 节 [RFC1034] 定义
- ipv4由 Go 语言 net.ParseIP 解析得到的 IPv4 协议的 IP
- ipv6由 Go 语言 net.ParseIP 解析得到的 IPv6 协议的 IP
- cidr: 由 Go 语言 net.ParseCIDR 解析得到的 CIDR
- cidr由 Go 语言 net.ParseCIDR 解析得到的 CIDR
- mac由 Go 语言 net.ParseMAC 解析得到的一个 MAC 地址
- uuidUUID允许大写字母满足正则表达式 (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$
- uuid3UUID3允许大写字母满足正则表达式 (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$
@ -697,13 +699,15 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
- uuid5UUID5允许大写字母满足正则表达式 (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$
- isbn一个 ISBN10 或 ISBN13 数字字符串,如 "0321751043" 或 "978-0321751041"
- isbn10一个 ISBN10 数字字符串,如 "0321751043"
- isbn13: 一个 ISBN13 号码字符串,如 "978-0321751041"
- creditcard信用卡号码满足正则表达式 ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$,其中混合任意非数字字符
- isbn13一个 ISBN13 号码字符串,如 "978-0321751041"
- creditcard信用卡号码满足正则表达式
^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$
其中混合任意非数字字符
- ssn美国社会安全号码满足正则表达式 ^\d{3}[- ]?\d{2}[- ]?\d{4}$
- hexcolor一个十六进制的颜色编码如 "#FFFFFF",满足正则表达式 ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$
- rgbcolor一个 RGB 颜色编码 例如 "rgb(255,255,255)"
- bytebase64 编码的二进制数据
- password: 任何类型的字符串
- password任何类型的字符串
- date类似 "2006-01-02" 的日期字符串,由 RFC3339 中的完整日期定义
- duration由 Go 语言 time.ParseDuration 解析的持续时长字符串,如 "22 ns",或与 Scala 持续时间格式兼容。
- datetime一个日期时间字符串如 "2014-12-15T19:30:20.000Z",由 RFC3339 中的 date-time 定义。
@ -808,7 +812,7 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
<!--
x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values:
-->
x-kubernetes-list-type 注解一个数组以进一步描述其拓扑。此扩展名只能用于列表,并且可能有 3 个可能的值:
x-kubernetes-list-type 注解一个数组以进一步描述其拓扑。此扩展名只能用于列表,并且可能有 3 个可能的值:
<!--
1) `atomic`: the list is treated as a single entity, like a scalar.
@ -840,7 +844,7 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
<!--
x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values:
-->
x-kubernetes-map-type 注解一个对象以进一步描述其拓扑。此扩展只能在 type 为 object 时使用,并且可能有 2 个可能的值:
x-kubernetes-map-type 注解一个对象以进一步描述其拓扑。此扩展只能在 type 为 object 时使用,并且可能有 2 个可能的值:
<!--
1) `granular`:
@ -895,7 +899,10 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
- **x-kubernetes-validations.rule** (string),必需
<!--
Rule represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec The Rule is scoped to the location of the x-kubernetes-validations extension in the schema. The `self` variable in the CEL expression is bound to the scoped value. Example: - Rule scoped to the root of a resource with a status subresource: {"rule": "self.status.actual \<= self.spec.maxDesired"}
Rule represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec.
The Rule is scoped to the location of the x-kubernetes-validations extension in the schema.
The `self` variable in the CEL expression is bound to the scoped value.
Example: - Rule scoped to the root of a resource with a status subresource: {"rule": "self.status.actual \<= self.spec.maxDesired"}
-->
rule 表示将由 CEL 评估的表达式。参考: https://github.com/google/cel-spec。
@ -984,10 +991,35 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
- 'map'`X + Y` 执行合并,保留 `X` 中所有键的数组位置,但当 `X``Y` 的键集相交时,会被 `Y` 中的值覆盖。
添加 `Y` 中具有不相交键的元素,保持其局部顺序。
- **x-kubernetes-validations.fieldPath** (string)
<!--
fieldPath represents the field path returned when the validation fails.
It must be a relative JSON path (i.e. with array notation) scoped to the location of this
x-kubernetes-validations extension in the schema and refer to an existing field.
e.g. when validation checks if a specific attribute `foo` under a map `testMap`, the fieldPath could be set to `.testMap.foo`
If the validation checks two lists must have unique attributes, the fieldPath could be set to either of the list: e.g. `.testList`
It does not support list numeric index. It supports child operation to refer to an existing field currently.
Refer to [JSONPath support in Kubernetes](https://kubernetes.io/docs/reference/kubectl/jsonpath/) for more info.
Numeric index of array is not supported. For field name which contains special characters, use `['specialName']` to refer the field name.
e.g. for attribute `foo.34$` appears in a list `testList`, the fieldPath could be set to `.testList['foo.34$']`
-->
fieldPath 表示验证失败时返回的字段路径。
它必须是相对 JSON 路径(即,支持数组表示法),范围仅限于此 x-kubernetes-validations
扩展在模式的位置,并引用现有字段。
例如,当验证检查 `testMap` 映射下是否有 `foo` 属性时,可以将 fieldPath 设置为 `.testMap.foo`
如果验证需要确保两个列表具有各不相同的属性,则可以将 fieldPath 设置到其中任一列表,例如 `.testList`
它支持使用子操作引用现有字段,而不支持列表的数字索引。
有关更多信息,请参阅 [Kubernetes 中的 JSONPath 支持](https://kubernetes.io/docs/reference/kubectl/jsonpath/)。
数组的数字索引不受支持。对于包含特殊字符的字段名称,请使用 `['specialName']` 来引用该字段名称。
例如,对于出现在列表 `testList` 中的属性 `foo.34$`fieldPath 可以设置为 `.testList['foo.34$']`
- **x-kubernetes-validations.message** (string)
<!--
Message represents the message displayed when validation fails. The message is required if the Rule contains line breaks. The message must not contain line breaks. If unset, the message is "failed rule: {Rule}". e.g. "must be a URL with the host matching spec.host"
Message represents the message displayed when validation fails. The message is required if the Rule contains line breaks.
The message must not contain line breaks. If unset, the message is "failed rule: {Rule}". e.g. "must be a URL with the host matching spec.host"
-->
message 表示验证失败时显示的消息。如果规则包含换行符,则需要该消息。消息不能包含换行符。
@ -996,8 +1028,17 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
- **x-kubernetes-validations.messageExpression** (string)
<!--
MessageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a rule, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the rule; the only difference is the return type. Example: "x must be less than max ("+string(self.max)+")"
MessageExpression declares a CEL expression that evaluates to the validation failure message that
is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string.
If both message and messageExpression are present on a rule, then messageExpression will be used if validation fails.
If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is
produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string
with only spaces, or a string that contains line breaks, then the validation failure message will also be produced
as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string
with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the rule;
the only difference is the return type. Example: "x must be less than max ("+string(self.max)+")"
-->
messageExpression 声明一个 CEL 表达式,其计算结果是此规则失败时返回的验证失败消息。
由于 messageExpression 用作失败消息,因此它的值必须是一个字符串。
如果在规则中同时存在 message 和 messageExpression则在验证失败时使用 messageExpression。
@ -1008,6 +1049,24 @@ JSONSchemaProps 是JSON 模式JSON-Schema遵循其规范草案第 4 版
messageExpression 可以访问的变量与规则相同;唯一的区别是返回类型。
例如:"x must be less than max ("+string(self.max)+")"。
- **x-kubernetes-validations.reason** (string)
<!--
reason provides a machine-readable validation failure reason that is returned to the caller
when a request fails this validation rule. The HTTP status code returned to the caller will
match the reason of the first failed validation rule.
The currently supported reasons are: "FieldValueInvalid", "FieldValueForbidden", "FieldValueRequired",
"FieldValueDuplicate". If not set, default to use "FieldValueInvalid".
All future added reasons must be accepted by clients when reading this value and unknown
reasons should be treated as FieldValueInvalid.
-->
reason 提供机器可读的验证失败原因,当请求未通过此验证规则时,该原因会返回给调用者。
返回给调用者的 HTTP 状态代码将与第一个失败的验证规则的原因相匹配。
目前支持的原因有:`FieldValueInvalid`、`FieldValueForbidden`、`FieldValueRequired`、`FieldValueDuplicate`。
如果未设置,则默认使用 `FieldValueInvalid`
所有未来添加的原因在读取该值时必须被客户端接受,未知原因应被视为 `FieldValueInvalid`
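
To tie the `x-kubernetes-validations` fields above together, here is a minimal, hedged sketch; the group, kind, and field names are hypothetical and not part of this API reference.

```shell
# Hypothetical CRD showing rule, message, fieldPath and reason on one schema node.
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            required: ["minReplicas", "maxReplicas"]
            properties:
              minReplicas:
                type: integer
              maxReplicas:
                type: integer
            # self is bound to the spec object, the node carrying the extension.
            x-kubernetes-validations:
            - rule: "self.minReplicas <= self.maxReplicas"
              message: "minReplicas must not exceed maxReplicas"
              fieldPath: ".minReplicas"
              reason: "FieldValueInvalid"
EOF
```

With this schema, a Widget whose `spec.minReplicas` exceeds `spec.maxReplicas` is rejected; the failure message and reason come from the rule, and `fieldPath` attributes the error to the `minReplicas` field.
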
## CustomResourceDefinitionStatus {#CustomResourceDefinitionStatus}
<!--
@ -1066,7 +1125,7 @@ CustomResourceDefinitionStatus 表示 CustomResourceDefinition 的状态。
listKind is the serialized kind of the list for this resource. Defaults to "`kind`List".
-->
listKind 是此资源列表的序列化类型。默认为 "`<kind>List`"。
listKind 是此资源列表的序列化类型。默认为 "`<kind>List`"。
- **acceptedNames.shortNames** ([]string)
@ -1074,7 +1133,7 @@ CustomResourceDefinitionStatus 表示 CustomResourceDefinition 的状态。
shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get \<shortname>`. It must be all lowercase.
-->
shortNames 是资源的短名称,在 API 发现文档中公开,并支持客户端调用,如 `kubectl get <shortname>`。必须全部小写。
shortNames 是资源的短名称,在 API 发现文档中公开,并支持客户端调用,如 `kubectl get <shortname>`。必须全部小写。
- **acceptedNames.singular** (string)
@ -1205,7 +1264,8 @@ CustomResourceDefinitionList 是 CustomResourceDefinition 对象的列表。
Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
-->
标准的对象元数据,更多信息: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
标准的对象元数据,更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
## Operations {#Operations}
@ -1372,7 +1432,7 @@ POST /apis/apiextensions.k8s.io/v1/customresourcedefinitions
-->
#### 参数
- **body**: <a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **body**<a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1429,7 +1489,7 @@ PUT /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}
CustomResourceDefinition 的名称。
- **body**: <a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **body**<a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1484,7 +1544,7 @@ PUT /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status
CustomResourceDefinition 的名称。
- **body**: <a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **body**<a href="{{< ref "../extend-resources/custom-resource-definition-v1#CustomResourceDefinition" >}}">CustomResourceDefinition</a>,必需
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1539,7 +1599,7 @@ PATCH /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}
CustomResourceDefinition 的名称。
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **body**<a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1598,7 +1658,7 @@ PATCH /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status
CustomResourceDefinition 的名称。
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **body**<a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1655,7 +1715,7 @@ DELETE /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}
CustomResourceDefinition 的名称。
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **body**<a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** <!--(*in query*):-->**查询参数**string
@ -1700,7 +1760,7 @@ DELETE /apis/apiextensions.k8s.io/v1/customresourcedefinitions
-->
#### 参数
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **body**<a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** <!--(*in query*):-->**查询参数**string

View File

@ -0,0 +1,622 @@
---
api_metadata:
apiVersion: "networking.k8s.io/v1alpha1"
import: "k8s.io/api/networking/v1alpha1"
kind: "IPAddress"
content_type: "api_reference"
description: "IPAddress 表示单个 IP 族的单个 IP。"
title: "IPAddress v1alpha1"
weight: 5
---
<!--
api_metadata:
apiVersion: "networking.k8s.io/v1alpha1"
import: "k8s.io/api/networking/v1alpha1"
kind: "IPAddress"
content_type: "api_reference"
description: "IPAddress represents a single IP of a single IP Family."
title: "IPAddress v1alpha1"
weight: 5
auto_generated: true
-->
`apiVersion: networking.k8s.io/v1alpha1`
`import "k8s.io/api/networking/v1alpha1"`
## IPAddress {#IPAddress}
<!--
IPAddress represents a single IP of a single IP Family. The object is designed to be used by APIs that operate on IP addresses. The object is used by the Service core API for allocation of IP addresses. An IP address can be represented in different formats, to guarantee the uniqueness of the IP, the name of the object is the IP address in canonical format, four decimal digits separated by dots suppressing leading zeros for IPv4 and the representation defined by RFC 5952 for IPv6. Valid: 192.168.1.5 or 2001:db8::1 or 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1 Invalid: 10.01.2.3 or 2001:db8:0:0:0::1
-->
IPAddress 表示单个 IP 族的单个 IP。此对象旨在供操作 IP 地址的 API 使用。
此对象由 Service 核心 API 用于分配 IP 地址。
IP 地址可以用不同的格式表示,为了保证 IP 地址的唯一性,此对象的名称是格式规范的 IP 地址。
IPv4 地址由点分隔的四个十进制数字组成前导零可省略IPv6 地址按照 RFC 5952 的定义来表示。
有效值192.168.1.5、2001:db8::1 或 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1。
无效值10.01.2.3 或 2001:db8:0:0:0::1。
<hr>
- **apiVersion**: networking.k8s.io/v1alpha1
- **kind**: IPAddress
<!--
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddressSpec" >}}">IPAddressSpec</a>)
spec is the desired state of the IPAddress. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的对象元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddressSpec" >}}">IPAddressSpec</a>)
spec 是 IPAddress 的预期状态。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
## IPAddressSpec {#IPAddressSpec}
<!--
IPAddressSpec describe the attributes in an IP Address.
-->
IPAddressSpec 描述 IP 地址中的属性。
<hr>
<!--
- **parentRef** (ParentReference)
ParentRef references the resource that an IPAddress is attached to. An IPAddress must reference a parent object.
<a name="ParentReference"></a>
*ParentReference describes a reference to a parent object.*
-->
- **parentRef** (ParentReference)
parentRef 引用挂接 IPAddress 的资源。IPAddress 必须引用一个父对象。
<a name="ParentReference"></a>
**ParentReference 描述指向父对象的引用。**
<!--
- **parentRef.group** (string)
Group is the group of the object being referenced.
- **parentRef.name** (string)
Name is the name of the object being referenced.
- **parentRef.namespace** (string)
Namespace is the namespace of the object being referenced.
-->
- **parentRef.group** (string)
group 是被引用的对象的组。
- **parentRef.name** (string)
name 是被引用的对象的名称。
- **parentRef.namespace** (string)
namespace 是被引用的对象的名字空间。
<!--
- **parentRef.resource** (string)
Resource is the resource of the object being referenced.
- **parentRef.uid** (string)
UID is the uid of the object being referenced.
-->
- **parentRef.resource** (string)
resource 是被引用的对象的资源。
- **parentRef.uid** (string)
uid 是被引用的对象的唯一标识符uid
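<!--
The following manifest is only an illustrative sketch: it shows what an IPAddress object named in
canonical IPv4 form, with a parentRef pointing at a Service, could look like. The Service reference
and namespace are assumptions, and the networking.k8s.io/v1alpha1 API must be enabled in the cluster.
-->
下面的清单仅作示意:它展示了一个以规范 IPv4 格式命名、其 parentRef 指向某 Service 的
IPAddress 对象可能的样子。其中引用的 Service 与名字空间均为假设值,
且集群需要启用 networking.k8s.io/v1alpha1 API。
```shell
# 仅作示意:通常 IPAddress 对象由 Service IP 分配器创建和管理,而不是手动创建
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1alpha1
kind: IPAddress
metadata:
  name: 192.168.1.5          # 对象名称即为规范格式的 IP 地址
spec:
  parentRef:                 # 假设引用 default 名字空间中名为 example 的 Service
    group: ""
    resource: services
    namespace: default
    name: example
EOF
```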
## IPAddressList {#IPAddressList}
<!--
IPAddressList contains a list of IPAddress.
-->
IPAddressList 包含 IPAddress 的列表。
<hr>
- **apiVersion**: networking.k8s.io/v1alpha1
- **kind**: IPAddressList
<!--
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>), required
items is the list of IPAddresses.
-->
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
标准的对象元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>),必需
items 是 IPAddresses 的列表。
<!--
## Operations {#Operations}
### `get` read the specified IPAddress
#### HTTP Request
-->
## 操作 {#Operations}
<hr>
### `get` 读取指定的 IPAddress
#### HTTP 请求
GET /apis/networking.k8s.io/v1alpha1/ipaddresses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the IPAddress
- **pretty** (*in query*): string
-->
#### 参数
- **name****路径参数**string必需
IPAddress 的名称。
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): OK
401: Unauthorized
<!--
### `list` list or watch objects of kind IPAddress
#### HTTP Request
-->
### `list` 列举或监视类别为 IPAddress 的对象
#### HTTP 请求
GET /apis/networking.k8s.io/v1alpha1/ipaddresses
<!--
#### Parameters
- **allowWatchBookmarks** (*in query*): boolean
- **continue** (*in query*): string
- **fieldSelector** (*in query*): string
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
- **watch** (*in query*): boolean
-->
#### 参数
- **allowWatchBookmarks****查询参数**boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents****查询参数**boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch****查询参数**boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddressList" >}}">IPAddressList</a>): OK
401: Unauthorized
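<!--
As an illustrative sketch, the `get` and `list` requests documented above can be issued with
`kubectl get --raw`; the object name used below is an assumption.
-->
作为示意,上面所述的 `get` 与 `list` 请求可以通过 `kubectl get --raw` 发出;
下面用到的对象名称仅为假设值。
```shell
# 列举所有 IPAddress 对象
kubectl get --raw /apis/networking.k8s.io/v1alpha1/ipaddresses
# 读取名为 192.168.1.5 的 IPAddress假设该对象存在
kubectl get --raw /apis/networking.k8s.io/v1alpha1/ipaddresses/192.168.1.5
```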
<!--
### `create` create an IPAddress
#### HTTP Request
-->
### `create` 创建 IPAddress
#### HTTP 请求
POST /apis/networking.k8s.io/v1alpha1/ipaddresses
<!--
#### Parameters
- **body**: <a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string
-->
#### 参数
- **body**: <a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>,必需
- **dryRun****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): OK
201 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): Created
202 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): Accepted
401: Unauthorized
<!--
### `update` replace the specified IPAddress
#### HTTP Request
-->
### `update` 替换指定的 IPAddress
#### HTTP 请求
PUT /apis/networking.k8s.io/v1alpha1/ipaddresses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the IPAddress
- **body**: <a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string
-->
#### 参数
- **name****路径参数**string必需
IPAddress 的名称。
- **body**: <a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>,必需
- **dryRun****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): OK
201 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): Created
401: Unauthorized
<!--
### `patch` partially update the specified IPAddress
#### HTTP Request
-->
### `patch` 部分更新指定的 IPAddress
#### HTTP 请求
PATCH /apis/networking.k8s.io/v1alpha1/ipaddresses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the IPAddress
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **force** (*in query*): boolean
- **pretty** (*in query*): string
-->
#### 参数
- **name****路径参数**string必需
IPAddress 的名称。
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force****查询参数**boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): OK
201 (<a href="{{< ref "../policy-resources/ip-address-v1alpha1#IPAddress" >}}">IPAddress</a>): Created
401: Unauthorized
<!--
### `delete` delete an IPAddress
#### HTTP Request
-->
### `delete` 删除 IPAddress
#### HTTP 请求
DELETE /apis/networking.k8s.io/v1alpha1/ipaddresses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the IPAddress
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string
-->
#### 参数
- **name****路径参数**string必需
IPAddress 的名称。
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
202 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): Accepted
401: Unauthorized
<!--
### `deletecollection` delete collection of IPAddress
#### HTTP Request
-->
### `deletecollection` 删除 IPAddress 的集合
#### HTTP 请求
DELETE /apis/networking.k8s.io/v1alpha1/ipaddresses
<!--
#### Parameters
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (*in query*): string
- **dryRun** (*in query*): string
- **fieldSelector** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
-->
#### 参数
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch****查询参数**string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents****查询参数**boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds****查询参数**integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
401: Unauthorized

View File

@ -49,35 +49,35 @@ You have several options for connecting to nodes, pods and services from outside
你有多种可选方式从集群外连接节点、Pod 和服务:
<!--
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](/docs/concepts/services-networking/service/) and
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- Depending on your cluster environment, this may only expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
place a unique label on the pod and create a new service which selects this label.
- In most cases, it should not be necessary for application developer to directly access
nodes via their nodeIPs.
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](/docs/concepts/services-networking/service/) and
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- Depending on your cluster environment, this may only expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
place a unique label on the pod and create a new service which selects this label.
- In most cases, it should not be necessary for application developer to directly access
nodes via their nodeIPs.
-->
- 通过公网 IP 访问服务
- 使用类型为 `NodePort``LoadBalancer`服务,可以从外部访问它们。
请查阅[服务](/zh-cn/docs/concepts/services-networking/service/) 和
- 使用类型为 `NodePort``LoadBalancer` Service,可以从外部访问它们。
请查阅 [Service](/zh-cn/docs/concepts/services-networking/service/) 和
[kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) 文档。
- 取决于你的集群环境,你可以仅把服务暴露在你的企业网络环境中,也可以将其暴露在
- 取决于你的集群环境,你可以仅把 Service 暴露在你的企业网络环境中,也可以将其暴露在
因特网上。需要考虑暴露的服务是否安全,它是否有自己的用户认证?
- 将 Pod 放置于服务背后。如果要访问一个副本集合中特定的 Pod例如用于调试目的
- 将 Pod 放置于 Service 背后。如果要访问一个副本集合中特定的 Pod例如用于调试目的
请给 Pod 指定一个独特的标签并创建一个新服务选择该标签(参见下面的示例)。
- 大部分情况下,都不需要应用开发者通过节点 IP 直接访问节点。
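下面是“给 Pod 指定独特标签并创建新 Service”这一做法的简单示意Pod 名称、标签和端口均为假设值:
```shell
# 给要调试的 Pod 打上一个独特的标签(标签键值仅作示意)
kubectl label pod my-app-0 debug-target=my-app-0
# 创建一个仅选择该标签的新 Service假设应用监听 8080 端口)
kubectl expose pod my-app-0 --name=my-app-debug --port=80 --target-port=8080 \
  --selector="debug-target=my-app-0"
```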
<!--
- Access services, nodes, or pods using the Proxy Verb.
- Does apiserver authentication and authorization prior to accessing the remote service.
Use this if the services are not secure enough to expose to the internet, or to gain
access to ports on the node IP, or for debugging.
- Proxies may cause problems for some web applications.
- Only works for HTTP/HTTPS.
- Described [here](#manually-constructing-apiserver-proxy-urls).
- Access services, nodes, or pods using the Proxy Verb.
- Does apiserver authentication and authorization prior to accessing the remote service.
Use this if the services are not secure enough to expose to the internet, or to gain
access to ports on the node IP, or for debugging.
- Proxies may cause problems for some web applications.
- Only works for HTTP/HTTPS.
- Described [here](#manually-constructing-apiserver-proxy-urls).
-->
- 通过 Proxy 动词访问服务、节点或者 Pod
- 在访问远程服务之前,利用 API 服务器执行身份认证和鉴权。
@ -88,17 +88,17 @@ You have several options for connecting to nodes, pods and services from outside
- 进一步的描述在[这里](#manually-constructing-apiserver-proxy-urls)
- 从集群中的 node 或者 pod 访问。
<!--
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
-->
- 从集群中的一个节点或 Pod 访问
- 运行一个 Pod然后使用
[kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec)
连接到它的 Shell。从那个 Shell 连接其他的节点、Pod 和 服务
连接到它的 Shell,从那个 Shell 连接其他的节点、Pod 和 Service。
- 某些集群可能允许你 SSH 到集群中的节点。你可能可以从那儿访问集群服务。
这是一个非标准的方式,可能在一些集群上能工作,但在另一些上却不能。
浏览器和其他工具可能已经安装也可能没有安装。集群 DNS 可能不会正常工作。
@ -135,7 +135,8 @@ heapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/
<!--
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed, or through a kubectl proxy at, for example:
at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`
if suitable credentials are passed, or through a kubectl proxy at, for example:
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
-->
这一输出显示了用 proxy 动词访问每个服务时可用的 URL。例如此集群
@ -145,7 +146,8 @@ at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-loggi
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`
<!--
See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.
See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api)
for how to pass credentials or use kubectl proxy.
-->
{{< note >}}
请参阅[使用 Kubernetes API 访问集群](/zh-cn/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api)
@ -155,10 +157,12 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
<!--
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create
proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also use the port number in place of the *port_name* for both named and unnamed ports.
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also
use the port number in place of the *port_name* for both named and unnamed ports.
By default, the API server proxies to your service using HTTP. To use HTTPS, prefix the service name with `https:`:
`http://<kubernetes_master_address>/api/v1/namespaces/<namespace_name>/services/<service_name>/proxy`
@ -176,8 +180,8 @@ The supported formats for the `<service_name>` segment of the URL are:
为了创建包含服务末端、后缀和参数的代理 URLs你可以在服务的代理 URL 中添加:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
如果还没有为你的端口指定名称,你可以不用在 URL 中指定 *port_name*。
对于命名和未命名端口,你还可以使用端口号代替 *port_name*。
如果还没有为你的端口指定名称,你可以不用在 URL 中指定 **port_name**。
对于命名和未命名端口,你还可以使用端口号代替 **port_name**。
默认情况下API 服务器使用 HTTP 为你的服务提供代理。要使用 HTTPS请在服务名称前加上 `https:`:
`http://<kubernetes_master_address>/api/v1/namespaces/<namespace_name>/services/<service_name>/proxy`
@ -209,25 +213,25 @@ URL 的 `<service_name>` 段支持的格式为:
https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
```
<!--
The health information is similar to this:
-->
健康信息与下面的例子类似:
<!--
The health information is similar to this:
-->
健康信息与下面的例子类似:
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
<!--
* To access the *https* Elasticsearch service health information `_cluster/health?pretty=true`, you would use:
@ -248,12 +252,12 @@ You may be able to put an apiserver proxy URL into the address bar of a browser.
你或许能够将 API 服务器代理的 URL 放入浏览器的地址栏,然而:
<!--
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept basic auth.
- Some web apps may not work, particularly those with client side javascript that construct URLs in a
way that is unaware of the proxy path prefix.
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
but your cluster may not be configured to accept basic auth.
- Some web apps may not work, particularly those with client side javascript that construct URLs in a
way that is unaware of the proxy path prefix.
-->
- Web 服务器通常不能传递令牌,所以你可能需要使用基本(密码)认证。
API 服务器可以配置为接受基本认证,但你的集群可能并没有这样配置。
- 某些 Web 应用可能无法工作,特别是那些使用客户端 Javascript 构造 URL 的
应用,所构造的 URL 可能并不支持代理路径前缀。
- Web 浏览器通常不能传递令牌,所以你可能需要使用基本(密码)认证。
API 服务器可以配置为接受基本认证,但你的集群可能并没有这样配置。
- 某些 Web 应用可能无法工作,特别是那些使用客户端 Javascript 构造 URL 的
应用,所构造的 URL 可能并不支持代理路径前缀。

View File

@ -1125,9 +1125,9 @@ kubectl apply -f my-crontab.yaml
crontab "my-new-cron-object" created
```
<!--
## Validation rules
### Validation rules
-->
## 验证规则
### 验证规则
{{< feature-state state="beta" for_k8s_version="v1.25" >}}

View File

@ -40,22 +40,22 @@ Pod Security Admission 是一个准入控制器,在创建 Pod 时应用 [Pod
<!--
Install the following on your workstation:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)
-->
在你的工作站中安装以下内容:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/zh-cn/docs/tasks/tools/)
<!--
## Create cluster
1. Create a `KinD` cluster as follows:
1. Create a `kind` cluster as follows:
-->
## 创建集群 {#create-cluster}
2. 按照如下方式创建一个 `KinD` 集群:
2. 按照如下方式创建一个 `kind` 集群:
```shell
kind create cluster --name psa-ns-level
@ -233,7 +233,7 @@ kind delete cluster --name psa-ns-level
[shell script](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
to perform all the preceding steps all at once.
1. Create KinD cluster
1. Create kind cluster
2. Create new namespace
3. Apply `baseline` Pod Security Standard in `enforce` mode while applying
`restricted` Pod Security Standard also in `warn` and `audit` mode.
@ -246,7 +246,7 @@ kind delete cluster --name psa-ns-level
- 运行一个 [shell 脚本](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
一次执行所有前面的步骤。
1. 创建 KinD 集群
1. 创建 kind 集群
2. 创建新的名字空间
3. 在 `enforce` 模式下应用 `baseline` Pod 安全标准,
同时在 `warn``audit` 模式下应用 `restricted` Pod 安全标准。
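其中第 2、3 步大致相当于如下命令(名字空间名称仅为示意,脚本实际使用的名称可能不同):
```shell
# 创建新的名字空间(名称仅作示意)
kubectl create namespace psa-ns-level
# 在 enforce 模式下应用 baseline同时在 warn 和 audit 模式下应用 restricted
kubectl label namespace psa-ns-level \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
```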

View File

@ -811,7 +811,7 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
kubeadmConfigPatches:
- |
kind: JoinConfiguration
@ -819,7 +819,7 @@ nodes:
kubeletExtraArgs:
seccomp-default: "true"
- role: worker
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
kubeadmConfigPatches:
- |
kind: JoinConfiguration

View File

@ -366,6 +366,78 @@ other = "Before you begin"
[previous_patches]
other = "Patch Releases:"
# The following text is displayed when JavaScript isn't available.
[release_binary_alternate_links]
other = """You can find links to download Kubernetes components (and their checksums) in the [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files.
Alternately, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture."""
[release_binary_arch]
other = "Architecture"
[release_binary_arch_option]
other = "Architectures"
[release_binary_copy_link]
other = "Copy Link"
[release_binary_copy_link_certifcate]
other = "cert"
[release_binary_copy_link_checksum]
other = "checksum"
[release_binary_copy_link_signature]
other = "signature"
[release_binary_copy_link_tooltip]
other = "copy to clipboard"
[release_binary_download]
other = "Download Binary"
[release_binary_download_tooltip]
other = "download binary file"
[release_binary_options]
other = "Download Options"
[release_binary_os]
other = "Operating System"
[release_binary_os_option]
other = "Operating Systems"
[release_binary_os_darwin]
other = "darwin"
[release_binary_os_linux]
other = "linux"
[release_binary_os_windows]
other = "windows"
[release_binary_table_caption]
other = "Download Kubernetes component binaries"
# NOTE: <current-version> is a placeholder for the actual version number set by 'release-binaries' shortcode.
# Please do not localize or modify <current-version> placeholder.
[release_binary_section]
other = """You can find the links to download <current-version> Kubernetes components (along with their checksums) below.
To access downloads for older supported versions, visit the respective documentation
link for [older versions](https://kubernetes.io/docs/home/supported-doc-versions/#versions-older) or use [downloadkubernetes.com](https://www.downloadkubernetes.com/)."""
# NOTE: <current-version> and <current-changelog-url> are placeholders set by 'release-binaries' shortcode.
# Please do not localize or modify <current-version> and <current-changelog-url> placeholders.
[release_binary_section_note]
other = """To download older patch versions of <current-version> Kubernetes components (and their checksums),
please refer to the [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG/CHANGELOG-<current-changelog-url>.md) file."""
[release_binary_version]
other = "Version"
[release_binary_version_option]
other = "Latest Version"
[release_date_after]
other = ")"

View File

@ -87,6 +87,10 @@
<script defer src="{{ "js/sortable-table.js" | relURL }}"></script>
{{- end -}}
{{- if .HasShortcode "release-binaries" -}}
<script defer src="{{ "js/release_binaries.js" | relURL }}"></script>
{{- end -}}
{{- if eq (lower .Params.cid) "community" -}}
{{- if eq .Params.community_styles_migrated true -}}
<link href="/css/community.css" rel="stylesheet"><!-- legacy styles -->

View File

@ -0,0 +1,130 @@
<!-- Fetch release_binaries.json from kubernetes-sigs/downloadkubernetes to render in table -->
{{ $response := getJSON "https://raw.githubusercontent.com/kubernetes-sigs/downloadkubernetes/master/dist/release_binaries.json" }}
{{ $currentVersion := site.Params.version }}
{{ $Binaries := slice }}
{{ $AllOSes := slice }}
{{ $AllArch := slice }}
{{ $AllVersions := slice }}
{{ range $key, $value := $response }}
{{ if eq $key "Binaries" }}
{{ $Binaries = $value }}
{{ else if eq $key "AllOSes" }}
{{ $AllOSes = $value }}
{{ else if eq $key "AllArch" }}
{{ $AllArch = $value }}
{{ else if eq $key "AllVersions" }}
{{ $AllVersions = $value }}
{{ end }}
{{ end }}
<!-- The below <div> defines an alternate content to be displayed only
when Javascript is disabled or when user's browser doesn't support Javascript -->
<div class="downloadbinaries-nojs">
<p>{{ T "release_binary_alternate_links" | markdownify }}</p>
</div>
<!-- The below section containing the release binary details
is enabled by the script "release_binaries.js" -->
<div id="download-kubernetes-data" style="display: none;">
<p>
{{ $releaseBinarySection := T "release_binary_section" }}
{{ $releaseBinarySection = replace $releaseBinarySection "<current-version>" $currentVersion }}
{{ $releaseBinarySection | markdownify }}
</p>
<div class="alert alert-info note callout" role="alert">
<strong>{{ T "note" }}</strong>
{{ $releaseBinarySectionNote := T "release_binary_section_note" }}
{{ $releaseBinarySectionNote = replace $releaseBinarySectionNote "<current-changelog-url>" (replace $currentVersion "v" "") }}
{{ $releaseBinarySectionNote = replace $releaseBinarySectionNote "<current-version>" $currentVersion }}
{{ $releaseBinarySectionNote | markdownify }}
</div>
<details>
<summary>{{ T "release_binary_options" }}</summary>
<div class="text-center w-100">
<div class="d-inline-block text-center pr-4">
<div>
<b>{{ T "release_binary_os_option" }}</b>
<div class="buttons" id="os-filter">
{{- range $AllOSes }}
<button class="btn btn-outline-primary m-1" data-os="{{.}}-data">{{ T (printf "release_binary_os_%s" .) }}</button>
{{- end }}
</div>
</div>
</div>
<div class="d-inline-block text-center pr-4">
<div>
<b>{{ T "release_binary_version_option" }}</b>
<div class="buttons" id="version-filter">
{{- range $AllVersions }}
{{- $releaseBinaryData := printf "%s.%s" (index (split . ".") 0) (index (split . ".") 1) -}}
{{- if eq $releaseBinaryData $currentVersion }}
<button class="btn btn-primary m-1" data-version="{{.}}-data">{{ $currentVersion }} ({{ T "patch_release" }} {{ . }})</button>
{{- end }}
{{- end }}
</div>
</div>
</div>
<div class="text-center w-100 p-4">
<div>
<b>{{ T "release_binary_arch_option" }}</b>
<div class="buttons" id="arch-filter">
{{- range $AllArch }}
<button class="btn btn-outline-primary m-1" data-arch="{{.}}-data">{{.}}</button>
{{- end }}
</div>
</div>
</div>
</div>
</details>
<div class="table-responsive">
<table class="table" id="release-binary-table">
<caption style="display:none">{{ T "release_binary_table_caption" }}</caption>
<thead>
<tr>
<th>{{ T "release_binary_version" }}</th>
<th>{{ T "release_binary_os" }}</th>
<th>{{ T "release_binary_arch" }}</th>
<th>{{ T "release_binary_download" }}</th>
<th>{{ T "release_binary_copy_link" }}</th>
</tr>
</thead>
<tbody>
{{- range $index, $binary := $Binaries -}}
{{- $releaseBinaryData := printf "%s.%s" (index (split $binary.Version ".") 0) (index (split $binary.Version ".") 1) -}}
{{- if eq $releaseBinaryData $currentVersion }}
{{ $LinkText := printf "dl.k8s.io/%s/bin/%s/%s/%s" $binary.Version $binary.OperatingSystem $binary.Architecture $binary.Name }}
{{ $BinaryLink := printf "https://dl.k8s.io/%s/bin/%s/%s/%s" $binary.Version $binary.OperatingSystem $binary.Architecture $binary.Name }}
{{ $ChecksumLink := printf "https://dl.k8s.io/%s/bin/%s/%s/%s.sha256" $binary.Version $binary.OperatingSystem $binary.Architecture $binary.Name }}
{{ $SignatureLink := printf "https://dl.k8s.io/%s/bin/%s/%s/%s.sig" $binary.Version $binary.OperatingSystem $binary.Architecture $binary.Name }}
{{ $CertificateLink := printf "https://dl.k8s.io/%s/bin/%s/%s/%s.cert" $binary.Version $binary.OperatingSystem $binary.Architecture $binary.Name }}
<tr class="{{ $binary.Version }}-data {{ $binary.OperatingSystem }}-data {{ $binary.Architecture }}-data {{ $binary.Name }}-data">
<td>{{ $binary.Version }}</td>
<td>{{ T (printf "release_binary_os_%s" $binary.OperatingSystem) }}</td>
<td>{{ $binary.Architecture }}</td>
<td>
<span title="{{ T "release_binary_download_tooltip" }}">
<a href="{{$BinaryLink}}">{{ $binary.Name }}</a>
</span>
</td>
<td>
<span class="icon">
<i class="fa fa-copy"></i>
</span>
<span title="{{ T "release_binary_copy_link_tooltip" }}">
<a class="release-binary-copy" href="{{$BinaryLink}}">{{$LinkText}}</a>
(<a class="release-binary-copy" href="{{$ChecksumLink}}">{{T "release_binary_copy_link_checksum"}}</a> | <a class="release-binary-copy" href="{{$SignatureLink}}">{{T "release_binary_copy_link_signature"}}</a> | <a class="release-binary-copy" href="{{$CertificateLink}}">{{T "release_binary_copy_link_certifcate"}}</a>)
</span>
</td>
</tr>
{{- end }}
{{- end }}
</tbody>
</table>
</div>
</div>

View File

@ -0,0 +1,75 @@
const filterCriteria = {
os: "",
arch: ""
};
["os", "arch"].forEach(kind => {
eventListener(kind);
});
function eventListener(kind) {
let buttonGroupQuery = '#' + kind + '-filter' + ' > button';
let buttonGroup = document.querySelectorAll(buttonGroupQuery);
buttonGroup.forEach(button => {
button.addEventListener('click', (evt) => {
let buttonData = button.dataset[kind];
if (filterCriteria[kind] === buttonData) {
filterCriteria[kind] = "";
button.classList.add('btn-outline-primary');
button.classList.remove('btn-primary');
} else {
filterCriteria[kind] = buttonData;
buttonGroup.forEach(b => {
b.classList.remove('btn-primary');
b.classList.add('btn-outline-primary');
});
button.classList.remove('btn-outline-primary');
button.classList.add('btn-primary');
}
filterRows();
});
});
}
function filterRows() {
const rows = document.querySelectorAll('#release-binary-table tbody tr');
rows.forEach(row => {
const os = row.classList.contains(filterCriteria.os) || filterCriteria.os === "";
const arch = row.classList.contains(filterCriteria.arch) || filterCriteria.arch === "";
if (os && arch) {
row.classList.remove('hide');
} else {
row.classList.add('hide');
}
});
}
document.querySelectorAll("#release-binary-table .release-binary-copy").forEach(link => {
link.addEventListener('click', (evt) => {
evt.preventDefault();
const hrefValue = link.getAttribute('href');
const tempTextArea = document.createElement('textarea');
tempTextArea.value = hrefValue;
document.body.appendChild(tempTextArea);
tempTextArea.select();
document.execCommand('copy');
document.body.removeChild(tempTextArea);
return false;
});
});
// The page and script have loaded successfully
$( document ).ready(function() {
// Remove the non-js content
$('.downloadbinaries-nojs').hide();
// Display the release binary content
$('#download-kubernetes-data').show();
})