diff --git a/content/de/_index.html b/content/de/_index.html
index ea84bd7f01..700884d0f9 100644
--- a/content/de/_index.html
+++ b/content/de/_index.html
@@ -43,12 +43,12 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or
diff --git a/content/de/docs/tutorials/_index.md b/content/de/docs/tutorials/_index.md
index 3ab69b6073..46ee724746 100644
--- a/content/de/docs/tutorials/_index.md
+++ b/content/de/docs/tutorials/_index.md
@@ -50,7 +50,7 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-* [seccomp](/docs/tutorials/clusters/seccomp/)
+* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Services
diff --git a/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md b/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md
index 4c9af11a9e..d9ac11759e 100644
--- a/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md
+++ b/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md
@@ -37,7 +37,7 @@ Prow automatically applies language labels based on file path. Thanks to SIG Doc
/language ko
```
-These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the k/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
+These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the kubernetes/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
### Team review
diff --git a/content/en/blog/_posts/2020-08-26-kubernetes-release-1.19.md b/content/en/blog/_posts/2020-08-26-kubernetes-release-1.19.md
index 39559ffe4c..16129272bf 100644
--- a/content/en/blog/_posts/2020-08-26-kubernetes-release-1.19.md
+++ b/content/en/blog/_posts/2020-08-26-kubernetes-release-1.19.md
@@ -77,7 +77,7 @@ Check out the full details of the Kubernetes 1.19 release in our [release notes]
## Availability
-Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [KinD](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
+Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [kind](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
## Release Team
This release is made possible through the efforts of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the [release team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.19/release_team.md) led by Taylor Dolezal, Senior Developer Advocate at HashiCorp. The 34 release team members coordinated many aspects of the release, from documentation to testing, validation, and feature completeness.
diff --git a/content/en/blog/_posts/2023-11-07-introducing-sig-etcd.md b/content/en/blog/_posts/2023-11-07-introducing-sig-etcd.md
new file mode 100644
index 0000000000..b7889f75af
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-07-introducing-sig-etcd.md
@@ -0,0 +1,35 @@
+---
+layout: blog
+title: "Introducing SIG etcd"
+slug: introducing-sig-etcd
+date: 2023-11-07
+canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
+---
+
+**Authors**: Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
+
+Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project, with a substantial share of the community activity happening within them. When the need arises, [new SIGs can be created](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md), and that was precisely what happened recently.
+
+[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md) is the most recent addition to the list of Kubernetes SIGs. In this article we will get to know it a bit better and understand its origins, scope, and plans.
+
+## The critical role of etcd
+
+If we look inside the control plane of a Kubernetes cluster, we will find [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd), a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data -- this description alone highlights the critical role that etcd plays, and its importance within the Kubernetes ecosystem.
+
+This critical role makes the health of the etcd project and community an important consideration, and [concerns about the state of the project](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ) in early 2022 did not go unnoticed. The changes in the maintainer team, amongst other factors, contributed to a situation that needed to be addressed.
+
+## Why a special interest group
+
+With the critical role of etcd in mind, it was proposed that the way forward would be to create a new special interest group. Since etcd is already at the heart of Kubernetes, creating a dedicated SIG not only recognises that role, it also makes etcd a first-class citizen of the Kubernetes community.
+
+Establishing SIG etcd creates a dedicated space to make explicit the contract between etcd and Kubernetes API machinery, and to prevent, at the etcd level, changes which violate this contract. Additionally, etcd will be able to adopt the processes that Kubernetes offers its SIGs ([KEPs](https://www.kubernetes.dev/resources/keps/), [PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md), [phased feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), amongst others) in order to improve the consistency and reliability of the codebase. Being able to use these processes will be a substantial benefit to the etcd community.
+
+As a SIG, etcd will also be able to draw contributor support from Kubernetes proper: active contributions to etcd from Kubernetes maintainers would decrease the likelihood of changes that break Kubernetes, through the increased number of potential reviewers and integration with the existing testing framework. This will not only benefit Kubernetes, which will be better able to participate in and shape the direction of etcd with regard to the critical role it plays, but also etcd as a whole.
+
+## About SIG etcd
+
+The recently created SIG is already working towards its goals, defined in its [Charter](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md) and [Vision](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md). The purpose is clear: to ensure etcd is a reliable, simple, and scalable production-ready store for building cloud-native distributed systems and managing cloud-native infrastructure via orchestrators like Kubernetes.
+
+The scope of SIG etcd is not exclusively about etcd as a Kubernetes component; it also covers etcd as a standard solution. Our goal is to make etcd the most reliable key-value store to be used anywhere, unconstrained by any Kubernetes-specific limits and scaling to meet the requirements of many diverse use-cases.
+
+We are confident that the creation of SIG etcd constitutes an important milestone in the lifecycle of the project, simultaneously improving etcd itself, and also the integration of etcd with Kubernetes. We invite everyone interested in etcd to [visit our page](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md), [join us at our Slack channel](https://kubernetes.slack.com/messages/etcd), and get involved in this new stage of etcd's life.
diff --git a/content/en/blog/_posts/2023-11-16-mid-cycle-1.29.md b/content/en/blog/_posts/2023-11-16-mid-cycle-1.29.md
new file mode 100644
index 0000000000..82d0bd795e
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-16-mid-cycle-1.29.md
@@ -0,0 +1,72 @@
+---
+layout: blog
+title: 'Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29'
+date: 2023-11-16
+slug: kubernetes-1-29-upcoming-changes
+---
+
+**Authors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
+
+
+As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.
+
+## The Kubernetes API removal and deprecation process
+
+The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
+
+* Generally available (GA) or stable API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes.
+* Beta or pre-release API versions must be supported for 3 releases after deprecation.
+* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
+
+Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
+
+## A note about the k8s.gcr.io redirect to registry.k8s.io
+
+To host its container images, the Kubernetes project uses a community-owned image registry called registry.k8s.io. Starting last March, traffic to the old k8s.gcr.io registry began being redirected to registry.k8s.io. The deprecated k8s.gcr.io registry will eventually be phased out. For more details on this change or to see if you are impacted, please read [k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know](/blog/2023/03/10/image-registry-redirect/).
+
+## A note about the Kubernetes community-owned package repositories
+
+Earlier in 2023, the Kubernetes project [introduced](/blog/2023/08/15/pkgs-k8s-io-introduction/) `pkgs.k8s.io`, community-owned software repositories for Debian and RPM packages. The community-owned repositories replaced the legacy Google-owned repositories (`apt.kubernetes.io` and `yum.kubernetes.io`).
+On September 13, 2023, those legacy repositories were formally deprecated and their contents frozen.
+
+For more information on this change or to see if you are impacted, please read the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/).
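+
+As a rough sketch of what pointing a Debian-based node at the community-owned repositories might look like (the key location, repository path, and v1.29 minor version here are assumptions; follow the pkgs.k8s.io documentation for the authoritative, current instructions):
+
+```shell
+# Add the signing key and the community-owned package repository (v1.29 assumed)
+curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key \
+  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' \
+  | sudo tee /etc/apt/sources.list.d/kubernetes.list
+sudo apt-get update
+```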
+
+## Deprecations and removals for Kubernetes v1.29
+
+See the official list of [API removals](/docs/reference/using-api/deprecation-guide/#v1-29) for a full list of planned deprecations for Kubernetes v1.29.
+
+### Removal of in-tree integrations with cloud providers ([KEP-2395](https://kep.k8s.io/2395))
+
+The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `DisableCloudProviders` and `DisableKubeletCloudCredentialProviders` will both be set to `true` by default for Kubernetes v1.29. This change will require that users who are currently using in-tree cloud provider integrations (Azure, GCE, or vSphere) enable external cloud controller managers, or opt in to the legacy integration by setting the associated feature gates to `false`.
+
+Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster's control plane; it also requires setting the command line argument `--cloud-provider=external` for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).
+
+For more information about how to enable and run external cloud controller managers, read [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) and [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).
+
+For general information about cloud controller managers, please see
+[Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) in the Kubernetes documentation.
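+
+As a minimal sketch of the opt-in flags described above (full invocations and all other required flags are omitted; this is illustrative, not a complete configuration):
+
+```shell
+# On every relevant node: the kubelet defers cloud-specific logic to an external controller
+kubelet --cloud-provider=external ...
+
+# Across the control plane, as described above
+kube-apiserver --cloud-provider=external ...
+kube-controller-manager --cloud-provider=external ...
+```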
+
+### Removal of the `v1beta2` flow control API group
+
+The _flowcontrol.apiserver.k8s.io/v1beta2_ API version of FlowSchema and PriorityLevelConfiguration will [no longer be served](/docs/reference/using-api/deprecation-guide/#v1-29) in Kubernetes v1.29.
+
+To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1beta3` API version, available since v1.26. All existing persisted objects are accessible via the new API. A notable change in `flowcontrol.apiserver.k8s.io/v1beta3` is
+that the PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field was renamed to `spec.limited.nominalConcurrencyShares`.
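+
+For illustration, a minimal PriorityLevelConfiguration manifest using the `v1beta3` API and the renamed field might look like this (the object name and the share value are placeholders):
+
+```yaml
+apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
+kind: PriorityLevelConfiguration
+metadata:
+  name: example-priority-level    # placeholder name
+spec:
+  type: Limited
+  limited:
+    nominalConcurrencyShares: 30  # formerly assuredConcurrencyShares in v1beta2
+    limitResponse:
+      type: Reject
+```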
+
+
+### Deprecation of the `status.nodeInfo.kubeProxyVersion` field for Node
+
+The `.status.nodeInfo.kubeProxyVersion` field for Node objects will be [marked as deprecated](https://github.com/kubernetes/enhancements/issues/4004) in v1.29 in preparation for its removal in a future release. This field is not accurate and is set by the kubelet, which does not actually know the kube-proxy version, or even whether kube-proxy is running.
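+
+If you want to see what this field currently reports in your cluster (keeping in mind the values may be inaccurate), a quick check with `kubectl` could look like this:
+
+```shell
+kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBE_PROXY_VERSION:.status.nodeInfo.kubeProxyVersion
+```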
+
+## Want to know more?
+
+Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
+
+* [Kubernetes v1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)
+* [Kubernetes v1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation)
+* [Kubernetes v1.27](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation)
+* [Kubernetes v1.28](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation)
+
+We will formally announce the deprecations that come with [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation) as part of the CHANGELOG for that release.
+
+For information on the deprecation and removal process, refer to the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
\ No newline at end of file
diff --git a/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/index.md b/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/index.md
new file mode 100644
index 0000000000..ea3d60c358
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/index.md
@@ -0,0 +1,50 @@
+---
+layout: blog
+title: "The Case for Kubernetes Resource Limits: Predictability vs. Efficiency"
+date: 2023-11-16
+slug: the-case-for-kubernetes-resource-limits
+---
+
+**Author:** Milan Plžík (Grafana Labs)
+
+There have been quite a lot of posts suggesting that not using Kubernetes resource limits might be a fairly useful thing (for example, [For the Love of God, Stop Using CPU Limits on Kubernetes](https://home.robusta.dev/blog/stop-using-cpu-limits/) or [Kubernetes: Make your services faster by removing CPU limits](https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/)). The points made there are totally valid – it doesn’t make much sense to pay for compute power that will not be used due to limits, nor to artificially increase latency. This post strives to argue that limits have their legitimate use as well.
+
+As a Site Reliability Engineer on the [Grafana Labs](https://grafana.com/) platform team, which maintains and improves internal infrastructure and tooling used by the product teams, I primarily try to make Kubernetes upgrades as smooth as possible. But I also spend a lot of time going down the rabbit hole of various interesting Kubernetes issues. This article reflects my personal opinion, and others in the community may disagree.
+
+Let’s flip the problem upside down. Every pod in a Kubernetes cluster has inherent resource limits – the actual CPU, memory, and other resources of the machine it’s running on. If those physical limits are reached by a pod, it will experience throttling similar to what is caused by reaching Kubernetes limits.
+
+## The problem
+Pods without (or with generous) limits can easily consume the extra resources on the node. This, however, has a hidden cost – the amount of extra resources available often heavily depends on pods scheduled on the particular node and their actual load. These extra resources make each pod a special snowflake when it comes to real resource allocation. Even worse, it’s fairly hard to figure out the resources that the pod had at its disposal at any given moment – certainly not without unwieldy data mining of the pods running on a particular node, their resource consumption, and so on. And finally, even if we get past this obstacle, we can only have data sampled up to a certain rate and get profiles only for a certain fraction of our calls. This can be scaled up, but the amount of observability data generated might easily reach diminishing returns. Thus, there’s no easy way to tell if a pod had a quick spike and, for a short period of time, used twice as much memory as usual to handle a request burst.
+
+Now, with Black Friday and Cyber Monday approaching, businesses expect a surge in traffic. Good performance data and benchmarks from the past allow businesses to plan for some extra capacity. But is data about pods without limits reliable? With instant memory or CPU spikes handled by the extra resources, everything might look good according to past data. But once the pod bin-packing changes and the extra resources become more scarce, everything might start looking different – ranging from request latencies rising negligibly to requests slowly snowballing and causing pod OOM kills. While almost no one actually cares about the former, the latter is a serious issue that requires instant capacity increase.
+
+## Configuring the limits
+Not using limits is a tradeoff – it opportunistically improves the performance if there are extra resources available, but lowers the predictability of performance, which might strike back in the future. There are a few approaches that can be used to increase the predictability again. Let’s pick two of them to analyze:
+
+- **Configure workload limits to be a fixed (and small) percentage more than the requests** – I'll call it _fixed-fraction headroom_. This allows the use of some extra shared resources, but keeps the per-node overcommit bound and can be used to guide worst-case estimates for the workload. Note that the bigger the limits percentage is, the bigger the variance in performance across the workloads.
+- **Configure workloads with `requests` = `limits`**. From some point of view, this is equivalent to giving each pod its own tiny machine with constrained resources; the performance is fairly predictable. This also puts the pod into the _Guaranteed_ QoS class, which makes it get evicted only after _BestEffort_ and _Burstable_ pods have been evicted by a node under resource pressure (see [Quality of Service for Pods](/docs/concepts/workloads/pods/pod-qos/)).
+
+Some other cases might also be considered, but these are probably the two simplest ones to discuss.
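+
+As a minimal sketch of the second approach above (`requests` = `limits`; the names and values are placeholders), such a container ends up in the _Guaranteed_ QoS class:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: predictable-app               # placeholder name
+spec:
+  containers:
+  - name: app
+    image: registry.example/app:1.0   # placeholder image
+    resources:
+      requests:
+        cpu: "500m"
+        memory: 256Mi
+      limits:
+        cpu: "500m"                   # equal to requests
+        memory: 256Mi
+```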
+
+
+## Cluster resource economy
+Note that in both cases discussed above, we’re effectively preventing the workloads from using some of the cluster resources they would otherwise have, in exchange for more predictability – which might sound like a steep price to pay for a bit more stable performance. Let’s try to quantify the impact there.
+
+### Bin-packing and cluster resource allocation
+Firstly, let’s discuss bin-packing and cluster resource allocation. There’s some inherent cluster inefficiency that comes into play – it’s hard to achieve 100% resource allocation in a Kubernetes cluster. Thus, some percentage will be left unallocated.
+
+When configuring fixed-fraction headroom limits, a proportional amount of this will be available to the pods. If the percentage of unallocated resources in the cluster is lower than the constant we use for setting fixed-fraction headroom limits (see the figure, line 2), all the pods together are theoretically able to use up all the node’s resources; otherwise there are some resources that will inevitably be wasted (see the figure, line 1). In order to eliminate this waste, the percentage for fixed-fraction headroom limits should be configured so that it’s at least equal to the expected percentage of unallocated resources.
+
+{{< figure src="requests-limits-configurations.svg" >}}
+
+For requests = limits (see the figure, line 3), this does not hold: unless we’re able to allocate all of a node’s resources, there are going to be some inevitably wasted resources. Without any knobs to turn on the requests/limits side, the only suitable approach here is to ensure efficient bin-packing on the nodes by configuring correct machine profiles. This can be done either manually or by using a variety of cloud service provider tooling – for example [Karpenter](https://karpenter.sh/) for EKS or [GKE Node auto provisioning](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning).
+
+### Optimizing actual resource utilization
+Free resources also come in the form of unused resources of other pods (reserved vs. actual CPU utilization, etc.), and their availability can’t be predicted in any reasonable way. Configuring limits makes it next to impossible to utilize these. Looking at this from a different perspective, if a workload wastes a significant amount of the resources it has requested, re-visiting its own resource requests might be a fair thing to do. Looking at past data and picking more fitting resource requests might help to make the packing tighter (although at the price of worsening its performance – for example, increasing long tail latencies).
+
+## Conclusion
+Optimizing resource requests and limits is hard. Although it’s much easier to break things when setting limits, those breakages might help prevent a catastrophe later by giving more insights into how the workload behaves in borderline conditions. There are cases where setting limits makes less sense: batch workloads (which are not latency-sensitive – for example, non-live video encoding), best-effort services (which don’t need that level of availability and can be preempted), or clusters that have a lot of spare resources by design (various cases of specialty workloads – for example, services that handle spikes by design).
+
+On the other hand, setting limits shouldn’t be avoided at all costs – even though figuring out the “right” value for limits is harder and configuring a wrong value yields less forgiving situations. Configuring limits helps you learn about a workload’s behavior in corner cases, and there are simple strategies that can help when reasoning about the right value. It’s a tradeoff between efficient resource usage and performance predictability and should be considered as such.
+
+There’s also an economic aspect of workloads with spiky resource usage. Having “freebie” resources always at hand does not serve as an incentive to improve performance for the product team. Big enough spikes might easily trigger efficiency issues or even problems when trying to defend a product’s SLA – and thus, might be a good candidate to mention when assessing any risks.
diff --git a/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/requests-limits-configurations.svg b/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/requests-limits-configurations.svg
new file mode 100644
index 0000000000..082561532c
--- /dev/null
+++ b/content/en/blog/_posts/2023-11-16-the-case-for-kubernetes-limits/requests-limits-configurations.svg
@@ -0,0 +1,281 @@
+
+
+
+
diff --git a/content/en/docs/concepts/services-networking/gateway.md b/content/en/docs/concepts/services-networking/gateway.md
index bca3ead812..dd54398659 100644
--- a/content/en/docs/concepts/services-networking/gateway.md
+++ b/content/en/docs/concepts/services-networking/gateway.md
@@ -29,7 +29,7 @@ The following principles shaped the design and architecture of Gateway API:
* __Application Developer:__ Manages an application running in a cluster and is typically
concerned with application-level configuration and [Service](/docs/concepts/services-networking/service/)
composition.
-* __Portable:__ Gateway API specifications are defined as [custom resources](docs/concepts/extend-kubernetes/api-extension/custom-resources)
+* __Portable:__ Gateway API specifications are defined as [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources)
and are supported by many [implementations](https://gateway-api.sigs.k8s.io/implementations/).
* __Expressive:__ Gateway API kinds support functionality for common traffic routing use cases
such as header-based matching, traffic weighting, and others that were only possible in
diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md
index 2eaad9b6a6..90c0351388 100644
--- a/content/en/docs/concepts/services-networking/network-policies.md
+++ b/content/en/docs/concepts/services-networking/network-policies.md
@@ -16,8 +16,8 @@ description: >-
-If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you
-might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
+If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
+then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a {{<
glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
@@ -257,21 +257,23 @@ creating the following NetworkPolicy in that namespace.
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
ingress or egress traffic.
-## SCTP support
+## Network traffic filtering
-{{< feature-state for_k8s_version="v1.20" state="stable" >}}
-
-As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your
-cluster administrator) will need to disable the `SCTPSupport`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-for the API server with `--feature-gates=SCTPSupport=false,…`.
-When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
+NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
+connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
+across network plugins.
{{< note >}}
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP
protocol NetworkPolicies.
{{< /note >}}
+When a `deny all` network policy is defined, it is only guaranteed to deny TCP, UDP and SCTP
+connections. For other protocols, such as ARP or ICMP, the behaviour is undefined.
+The same applies to allow rules: when a specific pod is allowed as ingress source or egress destination,
+it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
+network plugins and denied by others.
+
## Targeting a range of ports
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
@@ -346,6 +348,88 @@ namespaces, the value of the label is the namespace name.
While NetworkPolicy cannot target a namespace by its name with some object field, you can use the
standardized label to target a specific namespace.
+## Pod lifecycle
+
+{{< note >}}
+The following applies to clusters with a conformant networking plugin and a conformant implementation of
+NetworkPolicy.
+{{< /note >}}
+
+When a new NetworkPolicy object is created, it may take some time for a network plugin
+to handle the new object. If a pod that is affected by a NetworkPolicy
+is created before the network plugin has completed NetworkPolicy handling,
+that pod may be started unprotected, and isolation rules will be applied when
+the NetworkPolicy handling is completed.
+
+Once the NetworkPolicy is handled by a network plugin,
+
+1. All newly created pods affected by a given NetworkPolicy will be isolated before
+they are started.
+Implementations of NetworkPolicy must ensure that filtering is effective throughout
+the Pod lifecycle, even from the very first instant that any container in that Pod is started.
+Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
+sidecar containers, and regular containers.
+
+2. Allow rules will eventually be applied after the isolation rules (or may be applied at the same time).
+In the worst case, a newly created pod may have no network connectivity at all when it is first started, if
+isolation rules were already applied, but no allow rules were applied yet.
+
+Every created NetworkPolicy will be handled by a network plugin eventually, but there is no
+way to tell from the Kubernetes API when exactly that happens.
+
+Therefore, pods must be resilient against being started up with different network
+connectivity than expected. If you need to make sure the pod can reach certain destinations
+before being started, you can use an [init container](/docs/concepts/workloads/pods/init-containers/)
+to wait for those destinations to be reachable before kubelet starts the app containers.
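+
+For example, a minimal sketch of such an init container (the service name, port, and images are placeholders, and the check assumes the destination speaks HTTP):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wait-for-backend              # placeholder name
+spec:
+  initContainers:
+  - name: wait-for-backend
+    image: busybox:1.36
+    # Keep retrying until the destination responds before the app containers start
+    command: ['sh', '-c', 'until wget -q -O /dev/null http://backend.default.svc:8080/; do echo waiting for backend; sleep 2; done']
+  containers:
+  - name: app
+    image: registry.example/app:1.0   # placeholder image
+```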
+
+Every NetworkPolicy will be applied to all selected pods eventually.
+Because the network plugin may implement NetworkPolicy in a distributed manner,
+it is possible that pods may see a slightly inconsistent view of network policies
+when the pod is first created, or when pods or policies change.
+For example, a newly-created pod that is supposed to be able to reach both Pod A
+on Node 1 and Pod B on Node 2 may find that it can reach Pod A immediately,
+but cannot reach Pod B until a few seconds later.
+
+## NetworkPolicy and `hostNetwork` pods
+
+NetworkPolicy behaviour for `hostNetwork` pods is undefined, but it should be limited to 2 possibilities:
+- The network plugin can distinguish `hostNetwork` pod traffic from all other traffic
+ (including being able to distinguish traffic from different `hostNetwork` pods on
+ the same node), and will apply NetworkPolicy to `hostNetwork` pods just like it does
+ to pod-network pods.
+- The network plugin cannot properly distinguish `hostNetwork` pod traffic,
+ and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
+ Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
+ (This is the most common implementation.)
+
+This applies when:
+1. a `hostNetwork` pod is selected by `spec.podSelector`.
+
+ ```yaml
+ ...
+ spec:
+ podSelector:
+ matchLabels:
+ role: client
+ ...
+ ```
+
+2. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress` or `egress` rule.
+
+ ```yaml
+ ...
+ ingress:
+ - from:
+ - podSelector:
+ matchLabels:
+ role: client
+ ...
+ ```
+
+At the same time, since `hostNetwork` pods have the same IP addresses as the nodes they reside on,
+their connections will be treated as node connections. For example, you can allow traffic
+from a `hostNetwork` Pod using an `ipBlock` rule.
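+
+A minimal sketch of such a rule (the policy name, pod labels, and CIDR are placeholders; use the address range that actually covers your nodes):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-from-node-ips    # placeholder name
+spec:
+  podSelector:
+    matchLabels:
+      app: backend             # placeholder label
+  policyTypes:
+  - Ingress
+  ingress:
+  - from:
+    - ipBlock:
+        cidr: 10.0.0.0/24      # placeholder: the node (and hostNetwork pod) IP range
+```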
+
## What you can't do with network policies (at least, not yet)
As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the
diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md
index c033702068..bd7cca541b 100644
--- a/content/en/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -108,8 +108,8 @@ If you do not specify either, then the DaemonSet controller will create Pods on
## How Daemon Pods are scheduled
-A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet
-controller creates a Pod for each eligible node and adds the
+A DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod.
+The DaemonSet controller creates a Pod for each eligible node and adds the
`spec.affinity.nodeAffinity` field of the Pod to match the target host. After
the Pod is created, the default scheduler typically takes over and then binds
the Pod to the target host by setting the `.spec.nodeName` field. If the new
@@ -118,6 +118,13 @@ the existing Pods based on the
[priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)
of the new Pod.
+{{< note >}}
+If it's important that the DaemonSet pod run on each node, it's often desirable
+to set the `.spec.template.spec.priorityClassName` of the DaemonSet to a
+[PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
+with a higher priority to ensure that this eviction occurs.
+{{< /note >}}
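+
+A minimal sketch of what that could look like in a DaemonSet (names and image are placeholders; `system-node-critical` is one of the built-in high-priority classes):
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: node-agent                        # placeholder name
+spec:
+  selector:
+    matchLabels:
+      name: node-agent
+  template:
+    metadata:
+      labels:
+        name: node-agent
+    spec:
+      priorityClassName: system-node-critical
+      containers:
+      - name: agent
+        image: registry.example/agent:1.0 # placeholder image
+```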
+
The user can specify a different scheduler for the Pods of the DaemonSet, by
setting the `.spec.template.spec.schedulerName` field of the DaemonSet.
diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md
index 0ffc308792..00b6927b71 100644
--- a/content/en/docs/concepts/workloads/controllers/job.md
+++ b/content/en/docs/concepts/workloads/controllers/job.md
@@ -938,6 +938,11 @@ creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
controller removes the finalizer only after the Pod has been accounted for in
the Job status, allowing the Pod to be removed by other controllers or users.
+{{< note >}}
+See [My pod stays terminating](/docs/tasks/debug-application/debug-pods) if you
+observe that pods from a Job are stuck with the tracking finalizer.
+{{< /note >}}
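+
+As a quick way to spot such pods (a sketch; assumes `kubectl` access and the `jq` tool):
+
+```shell
+kubectl get pods -o json \
+  | jq -r '.items[] | select((.metadata.finalizers // []) | index("batch.kubernetes.io/job-tracking")) | .metadata.name'
+```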
+
### Elastic Indexed Jobs
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md
index beb6446404..a66b9526a2 100644
--- a/content/en/docs/contribute/_index.md
+++ b/content/en/docs/contribute/_index.md
@@ -76,7 +76,7 @@ end
subgraph second[Review]
direction TB
T[ ] -.-
- D[Look over the K8s/website repository] --- E[Check out the Hugo static site generator]
+ D[Look over the kubernetes/website repository] --- E[Check out the Hugo static site generator]
E --- F[Understand basic GitHub commands]
F --- G[Review open PR and change review processes]
end
@@ -123,7 +123,7 @@ flowchart LR
direction TB
S[ ] -.-
G[Review PRs from other K8s members] -->
- A[Check K8s/website issues list for good first PRs] --> B[Open a PR!!]
+ A[Check kubernetes/website issues list for good first PRs] --> B[Open a PR!!]
end
subgraph first[Suggested Prep]
direction TB
diff --git a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md
index 8e23cb69ed..af0d97f709 100644
--- a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md
+++ b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md
@@ -24,7 +24,7 @@ API or the `kube-*` components from the upstream code, see the following instruc
- You need to have these tools installed:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- - [Golang](https://golang.org/doc/install) version 1.13+
+ - [Golang](https://go.dev/doc/install) version 1.13+
- [Docker](https://docs.docker.com/engine/installation/)
- [etcd](https://github.com/coreos/etcd/)
- [make](https://www.gnu.org/software/make/)
diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md
index 1690fdd5c4..382befe116 100644
--- a/content/en/docs/contribute/new-content/open-a-pr.md
+++ b/content/en/docs/contribute/new-content/open-a-pr.md
@@ -37,7 +37,7 @@ opening a pull request. Figure 1 outlines the steps and the details follow.
{{< mermaid >}}
flowchart LR
-A([fa:fa-user New Contributor]) --- id1[(K8s/Website GitHub)]
+A([fa:fa-user New Contributor]) --- id1[(kubernetes/website GitHub)]
subgraph tasks[Changes using GitHub]
direction TB
0[ ] -.-
@@ -132,7 +132,7 @@ Figure 2 shows the steps to follow when you work from a local fork. The details
{{< mermaid >}}
flowchart LR
-1[Fork the K8s/website repository] --> 2[Create local clone and set upstream]
+1[Fork the kubernetes/website repository] --> 2[Create local clone and set upstream]
subgraph changes[Your changes]
direction TB
S[ ] -.-
@@ -359,7 +359,9 @@ Alternately, install and use the `hugo` command on your computer:
### Open a pull request from your fork to kubernetes/website {#open-a-pr}
-Figure 3 shows the steps to open a PR from your fork to the K8s/website. The details follow.
+Figure 3 shows the steps to open a PR from your fork to the [kubernetes/website](https://github.com/kubernetes/website). The details follow.
+
+Note that contributors sometimes refer to `kubernetes/website` as `k/website`.
@@ -368,7 +370,7 @@ Figure 3 shows the steps to open a PR from your fork to the K8s/website. The det
flowchart LR
subgraph first[ ]
direction TB
-1[1. Go to K8s/website repository] --> 2[2. Select New Pull Request]
+1[1. Go to kubernetes/website repository] --> 2[2. Select New Pull Request]
2 --> 3[3. Select compare across forks]
3 --> 4[4. Select your fork from head repository drop-down menu]
end
@@ -387,7 +389,7 @@ class 1,2,3,4,5,6,7,8 grey
class first,second white
{{< /mermaid >}}
-Figure 3. Steps to open a PR from your fork to the K8s/website.
+Figure 3. Steps to open a PR from your fork to the [kubernetes/website](https://github.com/kubernetes/website).
1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository.
1. Select **New Pull Request**.
diff --git a/content/en/docs/contribute/style/diagram-guide.md b/content/en/docs/contribute/style/diagram-guide.md
index bb8d9c9537..6a0d44828f 100644
--- a/content/en/docs/contribute/style/diagram-guide.md
+++ b/content/en/docs/contribute/style/diagram-guide.md
@@ -260,7 +260,8 @@ You should use the [local](/docs/contribute/new-content/open-a-pr/#preview-local
and Netlify previews to verify the diagram is properly rendered.
{{< caution >}}
-The Mermaid live editor feature set may not support the K8s/website Mermaid feature set.
+The Mermaid live editor feature set may not support the [kubernetes/website](https://github.com/kubernetes/website) Mermaid feature set.
+Note that contributors sometimes refer to `kubernetes/website` as `k/website`.
You might see a syntax error or a blank screen after the Hugo build.
If that is the case, consider using the Mermaid+SVG method.
{{< /caution >}}
@@ -342,7 +343,7 @@ The following lists advantages of the Mermaid+SVG method:
* Live editor tool.
* Live editor tool supports the most current Mermaid feature set.
-* Employ existing K8s/website methods for handling `.svg` image files.
+* Employ existing [kubernetes/website](https://github.com/kubernetes/website) methods for handling `.svg` image files.
* Environment doesn't require Mermaid support.
Be sure to check that your diagram renders properly using the
diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
index a18d4489b9..58d351c163 100644
--- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
+++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md
@@ -113,7 +113,7 @@ actions. Failures defined by the `failurePolicy` are enforced
according to these actions only if the `failurePolicy` is set to `Fail` (or not specified),
otherwise the failures are ignored.
-See [Audit Annotations: validation falures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation_failure)
+See [Audit Annotations: validation failures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)
for more details about the validation failure audit annotation.
### Parameter resources
@@ -503,4 +503,4 @@ kubectl create deploy --image=dev.example.com/nginx invalid
The error message is similar to this.
```console
error: failed to create deployment: deployments.apps "invalid" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed in namespace default
-```
\ No newline at end of file
+```
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index f6639c72c7..7aaa7b2cd4 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -13,7 +13,6 @@ can specify on different Kubernetes components.
See [feature stages](#feature-stages) for an explanation of the stages for a feature.
-
## Overview
@@ -75,15 +74,17 @@ For a reference to old feature gates that are removed, please refer to
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
| `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | |
-| CRDValidationRatcheting | false | Alpha | 1.28 |
+| `CRDValidationRatcheting` | `false` | Alpha | 1.28 | |
| `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 |
| `CSIMigrationPortworx` | `false` | Beta | 1.25 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
-| `CloudControllerManagerWebhook` | false | Alpha | 1.27 | |
-| `CloudDualStackNodeIPs` | false | Alpha | 1.27 | 1.28 |
-| `CloudDualStackNodeIPs` | true | Beta | 1.29 | |
+| `CloudControllerManagerWebhook` | `false` | Alpha | 1.27 | |
+| `CloudDualStackNodeIPs` | `false` | Alpha | 1.27 | 1.28 |
+| `CloudDualStackNodeIPs` | `true` | Beta | 1.29 | |
| `ClusterTrustBundle` | `false` | Alpha | 1.27 | |
-| `ConsistentListFromCache` | `false` | Alpha | 1.28 |
+| `ComponentSLIs` | `false` | Alpha | 1.26 | 1.26 |
+| `ComponentSLIs` | `true` | Beta | 1.27 | |
+| `ConsistentListFromCache` | `false` | Alpha | 1.28 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
| `CronJobsScheduledAnnotation` | `true` | Beta | 1.28 | |
@@ -95,9 +96,9 @@ For a reference to old feature gates that are removed, please refer to
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
-| `ElasticIndexedJob` | `true` | Beta` | 1.27 | |
+| `ElasticIndexedJob` | `true` | Beta | 1.27 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | 1.26 |
-| `EventedPLEG` | `false` | Beta | 1.27 | - |
+| `EventedPLEG` | `false` | Beta | 1.27 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | 1.23 |
@@ -210,7 +211,7 @@ For a reference to old feature gates that are removed, please refer to
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 |
| `ValidatingAdmissionPolicy` | `false` | Beta | 1.28 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | |
-| `WatchList` | false | Alpha | 1.27 | |
+| `WatchList` | `false` | Alpha | 1.27 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
@@ -344,7 +345,8 @@ An *Alpha* feature means:
A *Beta* feature means:
-* Usually enabled by default. Beta API groups are [disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
+* Usually enabled by default. Beta API groups are
+ [disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
* The feature is well tested. Enabling the feature is considered safe.
* Support for the overall feature will not be dropped, though details may change.
* The schema and/or semantics of objects may change in incompatible ways in a
@@ -394,11 +396,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies,
- experimental, Alpha-quality options
+ experimental, Alpha-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is alpha.
This feature gate will never graduate to beta or stable.
- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies,
- experimental, Beta-quality options
+ experimental, Beta-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is beta.
This feature gate will never graduate to stable.
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
@@ -442,16 +444,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ContextualLogging`: When you enable this feature gate, Kubernetes components that support
- contextual logging add extra detail to log output.
+ contextual logging add extra detail to log output.
- `CronJobsScheduledAnnotation`: Set the scheduled job time as an
{{< glossary_tooltip text="annotation" term_id="annotation" >}} on Jobs that were created
on behalf of a CronJob.
-- `CRDValidationRatcheting`: Enable updates to custom resources to contain
- violations of their OpenAPI schema if the offending portions of the resource
- update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting) for more details.
+- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/).
+- `CRDValidationRatcheting`: Enable updates to custom resources to contain
+ violations of their OpenAPI schema if the offending portions of the resource
+ update did not change. See [Validation Ratcheting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting)
+ for more details.
- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
- to allow you to specify a source namespace in the `dataSourceRef` field of a
- PersistentVolumeClaim.
+ to allow you to specify a source namespace in the `dataSourceRef` field of a
+ PersistentVolumeClaim.
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
[kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
@@ -494,7 +498,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
- `ExpandedDNSConfig`: Enable kubelet and kube-apiserver to allow more DNS
search paths and longer list of DNS search paths. This feature requires container
- runtime support(Containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
+ runtime support (containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
[Expanded DNS Configuration](/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExperimentalHostUserNamespaceDefaulting`: Enabling the defaulting user
namespace to host. This is for containers that are using other host namespaces,
@@ -508,6 +512,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
when shutting down a node gracefully.
+- `GRPCContainerProbe`: Enables the gRPC probe method for liveness, readiness and startup probes.
+ See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
- `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering.
For more details, check the
[PersistentVolume deletion protection finalizer](/docs/concepts/storage/persistent-volumes/#persistentvolume-deletion-protection-finalizer)
@@ -534,25 +540,33 @@ Each feature gate is designed for enabling/disabling a specific feature:
and volume controllers.
- `InTreePluginvSphereUnregister`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers.
+- `JobMutableNodeSchedulingDirectives`: Allows updating node scheduling directives in
+ the pod template of [Job](/docs/concepts/workloads/controllers/job/).
- `JobBackoffLimitPerIndex`: Allows specifying the maximal number of pod
retries per index in Indexed jobs.
- `JobPodFailurePolicy`: Allow users to specify handling of pod failures based on container
exit codes and pod conditions.
-- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a [Job](/docs/concepts/workloads/controllers/job)
+- `JobPodReplacementPolicy`: Allows you to specify pod replacement for terminating pods in a
+ [Job](/docs/concepts/workloads/controllers/job/).
- `JobReadyPods`: Enables tracking the number of Pods that have a `Ready`
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions).
The count of `Ready` pods is recorded in the
[status](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)
- of a [Job](/docs/concepts/workloads/controllers/job) status.
-- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job)
+ of a [Job](/docs/concepts/workloads/controllers/job/) status.
+- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job/)
completions without relying on Pods remaining in the cluster indefinitely.
The Job controller uses Pod finalizers and a field in the Job status to keep
track of the finished Pods to count towards completion.
-- `KMSv1`: Enables KMS v1 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
-- `KMSv2`: Enables KMS v2 API for encryption at rest. See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
+- `KMSv1`: Enables KMS v1 API for encryption at rest. See
+ [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
+ for more details.
+- `KMSv2`: Enables KMS v2 API for encryption at rest. See
+ [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
+ for more details.
- `KMSv2KDF`: Enables KMS v2 to generate single use data encryption keys.
- See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider) for more details.
- If the `KMSv2` feature gate is not enabled in your cluster, the value of the `KMSv2KDF` feature gate has no effect.
+ See [Using a KMS Provider for data encryption](/docs/tasks/administer-cluster/kms-provider/)
+ for more details. If the `KMSv2` feature gate is not enabled in your cluster, the value of
+ the `KMSv2KDF` feature gate has no effect.
- `KubeProxyDrainingTerminatingNodes`: Implement connection draining for
terminating nodes for `externalTrafficPolicy: Cluster` services.
- `KubeletCgroupDriverFromCRI`: Enable detection of the kubelet cgroup driver
@@ -564,11 +578,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
line argument). If you enable this feature gate and the container runtime
doesn't support it, the kubelet falls back to using the driver configured using
the `cgroupDriver` configuration setting.
- See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver)
+ See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)
for more details.
- `KubeletInUserNamespace`: Enables support for running kubelet in a
  {{< glossary_tooltip text="user namespace" term_id="userns" >}}.
- See [Running Kubernetes Node Components as a Non-root User](/docs/tasks/administer-cluster/kubelet-in-userns/).
+ See [Running Kubernetes Node Components as a Non-root User](/docs/tasks/administer-cluster/kubelet-in-userns/).
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
@@ -576,16 +590,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
This API augments the [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- `KubeletPodResourcesGetAllocatable`: Enable the kubelet's pod resources
`GetAllocatableResources` functionality. This API augments the
- [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
-- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint to
+ [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
+- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint
to include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
- See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
- with informations about the allocatable resources, enabling clients to properly
+ See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
+  for more details, with information about the allocatable resources, enabling clients to properly
track the free compute resources on a node.
- `KubeletTracing`: Add support for distributed tracing in the kubelet.
When enabled, kubelet CRI interface and authenticated http servers are instrumented to generate
OpenTelemetry trace spans.
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
+- `LegacyServiceAccountTokenNoAutoGeneration`: Stop auto-generation of Secret-based
+ [service account tokens](/docs/concepts/security/service-accounts/#get-a-token).
- `LegacyServiceAccountTokenCleanUp`: Enable cleaning up Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token)
when they are not used in a specified time (default to be one year).
@@ -637,31 +653,35 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NodeLogQuery`: Enables querying logs of node services using the `/logs` endpoint.
- `NodeOutOfServiceVolumeDetach`: When a Node is marked out-of-service using the
`node.kubernetes.io/out-of-service` taint, Pods on the node will be forcefully deleted
- if they can not tolerate this taint, and the volume detach operations for Pods terminating
- on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
+  if they cannot tolerate this taint, and the volume detach operations for Pods terminating
+ on the node will happen immediately. The deleted Pods can recover quickly on different nodes.
- `NodeSwap`: Enable the kubelet to allocate swap memory for Kubernetes workloads on a node.
Must be used with `KubeletConfiguration.failSwapOn` set to false.
- For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
+ For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory).
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
-- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`. This specifies
- when unhealthy pods should be considered for eviction. Please see [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
+- `PDBUnhealthyPodEvictionPolicy`: Enables the `unhealthyPodEvictionPolicy` field of a `PodDisruptionBudget`.
+ This specifies when unhealthy pods should be considered for eviction. Please see
+ [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
for more details.
- `PersistentVolumeLastPhaseTransitionTime`: Adds a new field to PersistentVolume
which holds a timestamp of when the volume last transitioned its phase.
-- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the CRI container runtime rather than gathering them from cAdvisor.
- As of 1.26, this also includes gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
+- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and pod stats from the
+ CRI container runtime rather than gathering them from cAdvisor. As of 1.26, this also includes
+ gathering metrics from CRI and emitting them over `/metrics/cadvisor` (rather than having cAdvisor emit them directly).
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
- feature which allows users to influence ReplicaSet downscaling order.
-- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that the pod is being deleted due to a disruption.
+ feature which allows users to influence ReplicaSet downscaling order.
+- `PodDisruptionConditions`: Enables support for appending a dedicated pod condition indicating that
+ the pod is being deleted due to a disruption.
- `PodHostIPs`: Enable the `status.hostIPs` field for pods and the {{< glossary_tooltip term_id="downward-api" text="downward API" >}}.
The field lets you expose host IP addresses to workloads.
- `PodIndexLabel`: Enables the Job controller and StatefulSet controller to add the pod index as a label when creating new pods. See [Job completion mode docs](/docs/concepts/workloads/controllers/job#completion-mode) and [StatefulSet pod index label docs](/docs/concepts/workloads/controllers/statefulset/#pod-index-label) for more details.
- `PodLifecycleSleepAction`: Enables the `sleep` action in Container lifecycle hooks.
- `PodReadyToStartContainersCondition`: Enable the kubelet to mark the [PodReadyToStartContainers](/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network)
condition on pods. This was previously (1.25-1.27) known as `PodHasNetworkCondition`.
-- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's [scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness).
+- `PodSchedulingReadiness`: Enable setting `schedulingGates` field to control a Pod's
+ [scheduling readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).
- `ProbeTerminationGracePeriod`: Enable [setting probe-level
`terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds)
on pods. See the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2238-liveness-probe-grace-period)
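
As a concrete illustration of `PodSchedulingReadiness` described above, a Pod created with a scheduling
gate stays in `SchedulingGated` until the gate is removed; the gate and Pod names below are illustrative
assumptions, not part of any API default:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod                      # example name
spec:
  schedulingGates:
  - name: example.com/wait-for-quota   # illustrative gate; remove it to let the Pod schedule
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
EOF

# The Pod reports SchedulingGated until spec.schedulingGates is emptied.
kubectl get pod gated-pod
```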
@@ -726,9 +746,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `StorageVersionHash`: Allow API servers to expose the storage version hash in the
discovery.
- `TopologyAwareHints`: Enables topology aware routing based on topology hints
- in EndpointSlices. See [Topology Aware
- Hints](/docs/concepts/services-networking/topology-aware-hints/) for more
- details.
+ in EndpointSlices. See [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/)
+ for more details.
+- `TopologyManager`: Enable a mechanism to coordinate fine-grained hardware resource
+ assignments for different components in Kubernetes. See
+ [Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/).
- `TopologyManagerPolicyAlphaOptions`: Allow fine-tuning of topology manager policies,
experimental, Alpha-quality options.
This feature gate guards *a group* of topology manager options whose quality level is alpha.
@@ -743,7 +765,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
See [Mixed version proxy](/docs/concepts/architecture/mixed-version-proxy/) for more information.
- `UserNamespacesSupport`: Enable user namespace support for Pods.
Before Kubernetes v1.28, this feature gate was named `UserNamespacesStatelessPodsSupport`.
-- `ValidatingAdmissionPolicy`: Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/) support for CEL validations be used in Admission Control.
+- `ValidatingAdmissionPolicy`: Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/)
+  support for CEL validations to be used in Admission Control.
- `VolumeCapacityPriority`: Enable support for prioritizing nodes in different
topologies based on available PV capacity.
- `WatchBookmark`: Enable support for watch bookmark events.
@@ -752,7 +775,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsHostNetwork`: Enables support for joining Windows containers to a host's network namespace.
-
## {{% heading "whatsnext" %}}
* The [deprecation policy](/docs/reference/using-api/deprecation-policy/) for Kubernetes explains
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md
index 8d2d47e349..7994b620fe 100644
--- a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md
+++ b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md
@@ -7,13 +7,10 @@ weight: 140
This page shows how to connect to services running on the Kubernetes cluster.
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
## Accessing services running on the cluster
@@ -28,30 +25,30 @@ such as your desktop machine.
You have several options for connecting to nodes, pods and services from outside the cluster:
- - Access services through public IPs.
- - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
- the cluster. See the [services](/docs/concepts/services-networking/service/) and
- [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
- - Depending on your cluster environment, this may only expose the service to your corporate network,
- or it may expose it to the internet. Think about whether the service being exposed is secure.
- Does it do its own authentication?
- - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
- place a unique label on the pod and create a new service which selects this label.
- - In most cases, it should not be necessary for application developer to directly access
- nodes via their nodeIPs.
- - Access services, nodes, or pods using the Proxy Verb.
- - Does apiserver authentication and authorization prior to accessing the remote service.
- Use this if the services are not secure enough to expose to the internet, or to gain
- access to ports on the node IP, or for debugging.
- - Proxies may cause problems for some web applications.
- - Only works for HTTP/HTTPS.
- - Described [here](#manually-constructing-apiserver-proxy-urls).
+- Access services through public IPs.
+ - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
+ the cluster. See the [services](/docs/concepts/services-networking/service/) and
+ [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
+ - Depending on your cluster environment, this may only expose the service to your corporate network,
+ or it may expose it to the internet. Think about whether the service being exposed is secure.
+ Does it do its own authentication?
+ - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
+ place a unique label on the pod and create a new service which selects this label.
+  - In most cases, it should not be necessary for application developers to directly access
+ nodes via their nodeIPs.
+- Access services, nodes, or pods using the Proxy Verb.
+  - Performs apiserver authentication and authorization before accessing the remote service.
+ Use this if the services are not secure enough to expose to the internet, or to gain
+ access to ports on the node IP, or for debugging.
+ - Proxies may cause problems for some web applications.
+ - Only works for HTTP/HTTPS.
+ - Described [here](#manually-constructing-apiserver-proxy-urls).
- Access from a node or pod in the cluster.
- - Run a pod, and then connect to a shell in it using [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec).
- Connect to other nodes, pods, and services from that shell.
- - Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
- access cluster services. This is a non-standard method, and will work on some clusters but
- not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
+ - Run a pod, and then connect to a shell in it using [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec).
+ Connect to other nodes, pods, and services from that shell.
+ - Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
+ access cluster services. This is a non-standard method, and will work on some clusters but
+ not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
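
For the first option above, a minimal sketch of exposing an existing workload through a `NodePort`
Service; the Deployment name and port are assumptions for illustration:

```shell
# Assumes a Deployment named "hello" that serves on port 8080 already exists.
kubectl expose deployment hello --type=NodePort --port=8080

# Find out which node port was allocated:
kubectl get service hello -o jsonpath='{.spec.ports[0].nodePort}'
```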
### Discovering builtin services
@@ -75,19 +72,23 @@ heapster is running at https://192.0.2.1/api/v1/namespaces/kube-system/services/
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
-at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed, or through a kubectl proxy at, for example:
+at `https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`
+if suitable credentials are passed, or through a kubectl proxy at, for example:
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
{{< note >}}
-See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.
+See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-cluster-api)
+for how to pass credentials or use kubectl proxy.
{{< /note >}}
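
If you would rather not pass credentials directly, the same endpoint can be reached through a local
`kubectl proxy`; this sketch assumes the cluster-level logging service shown above exists in your cluster:

```shell
# Start a local proxy to the API server, then request the service's proxy URL through it.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```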
#### Manually constructing apiserver proxy URLs
-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create
+proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
-If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also use the port number in place of the *port_name* for both named and unnamed ports.
+If you haven't specified a name for your port, you don't have to specify *port_name* in the URL. You can also
+use the port number in place of the *port_name* for both named and unnamed ports.
By default, the API server proxies to your service using HTTP. To use HTTPS, prefix the service name with `https:`:
`http://<kubernetes_master_address>/api/v1/namespaces/<namespace_name>/services/<service_name>/proxy`
@@ -99,53 +100,49 @@ The supported formats for the `<service_name>` segment of the URL are:
* `https:<service_name>:` - proxies to the default or unnamed port using https (note the trailing colon)
* `https:<service_name>:<port_name>` - proxies to the specified port name or port number using https
-
##### Examples
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use:
- ```
- http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy
- ```
+ ```
+ http://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy
+ ```
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use:
- ```
- https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
- ```
+ ```
+ https://192.0.2.1/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
+ ```
- The health information is similar to this:
+ The health information is similar to this:
- ```json
- {
- "cluster_name" : "kubernetes_logging",
- "status" : "yellow",
- "timed_out" : false,
- "number_of_nodes" : 1,
- "number_of_data_nodes" : 1,
- "active_primary_shards" : 5,
- "active_shards" : 5,
- "relocating_shards" : 0,
- "initializing_shards" : 0,
- "unassigned_shards" : 5
- }
- ```
+ ```json
+ {
+ "cluster_name" : "kubernetes_logging",
+ "status" : "yellow",
+ "timed_out" : false,
+ "number_of_nodes" : 1,
+ "number_of_data_nodes" : 1,
+ "active_primary_shards" : 5,
+ "active_shards" : 5,
+ "relocating_shards" : 0,
+ "initializing_shards" : 0,
+ "unassigned_shards" : 5
+ }
+ ```
* To access the *https* Elasticsearch service health information `_cluster/health?pretty=true`, you would use:
- ```
- https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true
- ```
+ ```
+ https://192.0.2.1/api/v1/namespaces/kube-system/services/https:elasticsearch-logging:/proxy/_cluster/health?pretty=true
+ ```
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
- - Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
- but your cluster may not be configured to accept basic auth.
- - Some web apps may not work, particularly those with client side javascript that construct URLs in a
- way that is unaware of the proxy path prefix.
-
-
-
-
+- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth.
+  The API server can be configured to accept basic auth,
+  but your cluster may not be configured to do so.
+- Some web apps may not work, particularly those with client-side JavaScript that constructs URLs in a
+ way that is unaware of the proxy path prefix.
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
index 02f68e91fa..810cd8d03d 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
@@ -6,11 +6,28 @@ weight: 120
-This page explains how to enable a package repository for a new Kubernetes minor release
+This page explains how to enable a package repository for the desired
+Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at `pkgs.k8s.io`.
-Unlike the legacy package repositories, the community-owned package repositories are
-structured in a way that there's a dedicated package repository for each Kubernetes
-minor version.
+Unlike the legacy package repositories, the community-owned package
+repositories are structured in a way that there's a dedicated package
+repository for each Kubernetes minor version.
+
+{{< note >}}
+This guide only covers a part of the Kubernetes upgrade process. Please see the
+[upgrade guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for
+more information about upgrading Kubernetes clusters.
+{{< /note >}}
+
+{{< note >}}
+This step is only needed upon upgrading a cluster to another **minor** release.
+If you're upgrading to another patch release within the same minor release (e.g.
+v{{< skew currentVersion >}}.5 to v{{< skew currentVersion >}}.7), you don't
+need to follow this guide. However, if you're still using the legacy package
+repositories, you'll need to migrate to the new community-owned package
+repositories before upgrading (see the next section for more details on how to
+do this).
+{{< /note >}}
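
The rest of this page walks through the full procedure; as a rough preview, the core change on Debian or
Ubuntu is a one-line edit to the repository definition (the file path and versions below are examples only):

```shell
# Point the repository definition at the next minor release, then refresh the index.
sed -i 's|/core:/stable:/v1.28/|/core:/stable:/v1.29/|' /etc/apt/sources.list.d/kubernetes.list
apt-get update
```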
## {{% heading "prerequisites" %}}
diff --git a/content/en/docs/tasks/debug/debug-application/debug-pods.md b/content/en/docs/tasks/debug/debug-application/debug-pods.md
index f1aa831f3f..7723a5c790 100644
--- a/content/en/docs/tasks/debug/debug-application/debug-pods.md
+++ b/content/en/docs/tasks/debug/debug-application/debug-pods.md
@@ -69,6 +69,34 @@ There are three things to check:
* Try to manually pull the image to see if the image can be pulled. For example,
  if you use Docker on your PC, run `docker pull <image>`.
+
+#### My pod stays terminating
+
+If a Pod is stuck in the `Terminating` state, it means that a deletion has been
+issued for the Pod, but the control plane is unable to delete the Pod object.
+
+This typically happens if the Pod has a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)
+and there is an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
+installed in the cluster that prevents the control plane from removing the
+finalizer.
+
+To identify this scenario, check if your cluster has any
+ValidatingWebhookConfiguration or MutatingWebhookConfiguration that target
+`UPDATE` operations for `pods` resources.
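
One way to list the installed webhook configurations and the rules they intercept (the output columns
are illustrative; check the `rules` for `UPDATE` operations on `pods`):

```shell
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,RULES:.webhooks[*].rules'
```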
+
+If the webhook is provided by a third-party:
+- Make sure you are using the latest version.
+- Disable the webhook for `UPDATE` operations.
+- Report an issue with the corresponding provider.
+
+If you are the author of the webhook:
+- For a mutating webhook, make sure it never changes immutable fields on
+ `UPDATE` operations. For example, changes to containers are usually not allowed.
+- For a validating webhook, make sure that your validation policies only apply
+ to new changes. In other words, you should allow Pods with existing violations
+ to pass validation. This allows Pods that were created before the validating
+ webhook was installed to continue running.
+
#### My pod is crashing or otherwise unhealthy
Once your pod has been scheduled, the methods described in
diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md
index 31404c7d7f..bcfd0ec280 100644
--- a/content/en/docs/tutorials/security/ns-level-pss.md
+++ b/content/en/docs/tutorials/security/ns-level-pss.md
@@ -22,12 +22,12 @@ level. For instructions, refer to
Install the following on your workstation:
-- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
+- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)
## Create cluster
-1. Create a `KinD` cluster as follows:
+1. Create a `kind` cluster as follows:
```shell
kind create cluster --name psa-ns-level
@@ -150,7 +150,7 @@ kind delete cluster --name psa-ns-level
[shell script](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
to perform all the preceding steps all at once.
- 1. Create KinD cluster
+ 1. Create kind cluster
2. Create new namespace
3. Apply `baseline` Pod Security Standard in `enforce` mode while applying
`restricted` Pod Security Standard also in `warn` and `audit` mode.
diff --git a/content/en/examples/controllers/daemonset.yaml b/content/en/examples/controllers/daemonset.yaml
index aa540e9697..1650ecce4a 100644
--- a/content/en/examples/controllers/daemonset.yaml
+++ b/content/en/examples/controllers/daemonset.yaml
@@ -35,6 +35,9 @@ spec:
volumeMounts:
- name: varlog
mountPath: /var/log
+ # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
+ # preempts running Pods
+ # priorityClassName: important
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
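
The commented-out `priorityClassName: important` above refers to a PriorityClass that must exist in the
cluster; a minimal, illustrative sketch of such a class (the name and value are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: important             # must match the priorityClassName used by the DaemonSet
value: 1000000                # relative weight; higher-value Pods can preempt lower-value ones
globalDefault: false
description: "Illustrative high-priority class for DaemonSet Pods."
EOF
```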
diff --git a/content/en/releases/download.md b/content/en/releases/download.md
index c728ec015f..cbac27f570 100644
--- a/content/en/releases/download.md
+++ b/content/en/releases/download.md
@@ -10,6 +10,24 @@ cluster. Those components are also shipped in container images as part of the
official release process. All binaries as well as container images are available
for multiple operating systems as well as hardware architectures.
+### kubectl
+
+
+
+The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
+you to run commands against Kubernetes clusters.
+
+You can use kubectl to deploy applications, inspect and manage cluster resources,
+and view logs. For more information including a complete list of kubectl operations, see the
+[`kubectl` reference documentation](/docs/reference/kubectl/).
+
+kubectl is installable on a variety of Linux platforms, macOS and Windows.
+Find your preferred operating system below.
+
+- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux)
+- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos)
+- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows)
+
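
A quick way to confirm an installation succeeded, whichever platform you installed on:

```shell
kubectl version --client
```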
## Container Images
All Kubernetes container images are deployed to the
@@ -53,25 +71,4 @@ To manually verify signed container images of Kubernetes core components, refer
## Binaries
-Find links to download Kubernetes components (and their checksums) in the
-[CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files.
-
-Alternately, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture.
-
-### kubectl
-
-
-
-The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
-you to run commands against Kubernetes clusters.
-
-You can use kubectl to deploy applications, inspect and manage cluster resources,
-and view logs. For more information including a complete list of kubectl operations, see the
-[`kubectl` reference documentation](/docs/reference/kubectl/).
-
-kubectl is installable on a variety of Linux platforms, macOS and Windows.
-Find your preferred operating system below.
-
-- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux)
-- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos)
-- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows)
+{{< release-binaries >}}
\ No newline at end of file
diff --git a/content/en/releases/release.md b/content/en/releases/release.md
index e0d21c0df1..ee1bf72b78 100644
--- a/content/en/releases/release.md
+++ b/content/en/releases/release.md
@@ -4,7 +4,7 @@ type: docs
auto_generated: true
---
-
+
{{< warning >}}
This content is auto-generated and links may not function. The source of the document is located [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md).
diff --git a/content/es/docs/home/_index.md b/content/es/docs/home/_index.md
index 56bb4ca94e..5f845e74fb 100644
--- a/content/es/docs/home/_index.md
+++ b/content/es/docs/home/_index.md
@@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
-linkTitle: "Home"
+linkTitle: "Documentación"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/es/docs/tutorials/_index.md b/content/es/docs/tutorials/_index.md
index 27c804973a..932b6f6043 100644
--- a/content/es/docs/tutorials/_index.md
+++ b/content/es/docs/tutorials/_index.md
@@ -55,7 +55,7 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-* [seccomp](/docs/tutorials/clusters/seccomp/)
+* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servicios
diff --git a/content/fr/docs/home/_index.md b/content/fr/docs/home/_index.md
index 10e038dfa1..ff2e0e9e0f 100644
--- a/content/fr/docs/home/_index.md
+++ b/content/fr/docs/home/_index.md
@@ -7,7 +7,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
-linkTitle: "Accueil"
+linkTitle: "Documentation"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/id/docs/tutorials/_index.md b/content/id/docs/tutorials/_index.md
index 041b7b62d2..ba0deb8011 100644
--- a/content/id/docs/tutorials/_index.md
+++ b/content/id/docs/tutorials/_index.md
@@ -50,7 +50,7 @@ Sebelum melangkah lebih lanjut ke tutorial, sebaiknya tandai dulu halaman [Kamus
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-* [seccomp](/docs/tutorials/clusters/seccomp/)
+* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servis
diff --git a/content/it/docs/home/_index.md b/content/it/docs/home/_index.md
index 3f8e3ae0f0..be3a2bbe2a 100644
--- a/content/it/docs/home/_index.md
+++ b/content/it/docs/home/_index.md
@@ -6,7 +6,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
-linkTitle: "Home"
+linkTitle: "Documentazione"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/it/docs/tutorials/_index.md b/content/it/docs/tutorials/_index.md
index 240fce078c..2fac8553ba 100644
--- a/content/it/docs/tutorials/_index.md
+++ b/content/it/docs/tutorials/_index.md
@@ -50,7 +50,7 @@ Prima di procedere con vari tutorial, raccomandiamo di aggiungere il
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-* [seccomp](/docs/tutorials/clusters/seccomp/)
+* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servizi
diff --git a/content/ru/docs/home/_index.md b/content/ru/docs/home/_index.md
index 13c08174b2..a9108d47d7 100644
--- a/content/ru/docs/home/_index.md
+++ b/content/ru/docs/home/_index.md
@@ -6,7 +6,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
-linkTitle: "Главная"
+linkTitle: "Документация"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md
index 5a8cfc3c51..d534d5f3ce 100644
--- a/content/uk/docs/home/_index.md
+++ b/content/uk/docs/home/_index.md
@@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
-linkTitle: "Головна"
+linkTitle: "Документація"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/vi/_index.html b/content/vi/_index.html
index 6a182697dd..de8f6e9af5 100644
--- a/content/vi/_index.html
+++ b/content/vi/_index.html
@@ -4,6 +4,8 @@ abstract: "Triển khai tự động, nhân rộng và quản lý container"
cid: home
---
+{{< site-searchbar >}}
+
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) là một hệ thống mã nguồn mở giúp tự động hóa việc triển khai, nhân rộng và quản lý các ứng dụng container.
diff --git a/content/vi/docs/home/_index.md b/content/vi/docs/home/_index.md
index 1239237a64..5b8130d236 100644
--- a/content/vi/docs/home/_index.md
+++ b/content/vi/docs/home/_index.md
@@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
-linkTitle: "Home"
+linkTitle: "Tài liệu"
main_menu: true
weight: 10
hide_feedback: true
diff --git a/content/zh-cn/blog/_posts/2023-10-10-cri-o-community-package-infrastructure.md b/content/zh-cn/blog/_posts/2023-10-10-cri-o-community-package-infrastructure.md
new file mode 100644
index 0000000000..af91a47f32
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2023-10-10-cri-o-community-package-infrastructure.md
@@ -0,0 +1,284 @@
+---
+layout: blog
+title: "CRI-O 正迁移至 pkgs.k8s.io"
+date: 2023-10-10
+slug: cri-o-community-package-infrastructure
+---
+
+
+
+**作者**:Sascha Grunert
+
+
+**译者**:Wilson Wu (DaoCloud)
+
+
+Kubernetes 社区[最近宣布](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)旧的软件包仓库已被冻结,
+现在这些软件包将被迁移到由 [OpenBuildService(OBS)](https://build.opensuse.org/project/subprojects/isv:kubernetes)
+提供支持的[社区自治软件包仓库](/blog/2023/08/15/pkgs-k8s-io-introduction)中。
+很久以来,CRI-O 一直在利用 [OBS 进行软件包构建](https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o),
+但到目前为止,所有打包工作都是手动完成的。
+
+
+CRI-O 社区非常喜欢 Kubernetes,这意味着他们很高兴地宣布:
+
+
+**所有未来的 CRI-O 包都将作为在 pkgs.k8s.io 上托管的官方支持的 Kubernetes 基础设施的一部分提供!**
+
+
+现有软件包将进入一个弃用阶段,目前正在
+[CRI-O 社区中讨论](https://github.com/cri-o/cri-o/discussions/7315)。
+新的基础设施将仅支持 CRI-O `>= v1.28.2` 的版本以及比 `release-1.28` 新的版本分支。
+
+
+## 如何使用新软件包 {#how-to-use-the-new-packages}
+
+
+与 Kubernetes 社区一样,CRI-O 提供 `deb` 和 `rpm` 软件包作为 OBS 中专用子项目的一部分,
+被称为 [`isv:kubernetes:addons:cri-o`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o)。
+这个项目是一个集合,提供 `stable`(针对 CRI-O 标记)以及 `prerelease`(针对 CRI-O `release-1.y` 和 `main` 分支)版本的软件包。
+
+
+**稳定版本:**
+
+
+- [`isv:kubernetes:addons:cri-o:stable`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable):稳定软件包
+  - [`isv:kubernetes:addons:cri-o:stable:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.29):`v1.29.z` 标记
+  - [`isv:kubernetes:addons:cri-o:stable:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.28):`v1.28.z` 标记
+
+
+**预发布版本:**
+
+
+- [`isv:kubernetes:addons:cri-o:prerelease`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease):预发布软件包
+ - [`isv:kubernetes:addons:cri-o:prerelease:main`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:main):
+ [`main`](https://github.com/cri-o/cri-o/commits/main) 分支
+ - [`isv:kubernetes:addons:cri-o:prerelease:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.29):
+ [`release-1.29`](https://github.com/cri-o/cri-o/commits/release-1.29) 分支
+ - [`isv:kubernetes:addons:cri-o:prerelease:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.28):
+ [`release-1.28`](https://github.com/cri-o/cri-o/commits/release-1.28) 分支
+
+
+v1.29 仓库中尚无可用的稳定版本,因为 v1.29.0 将于 12 月发布。
+CRI-O 社区也**不**支持早于 `release-1.28` 的版本分支,
+因为已经有 CI 需求合并到 `main` 中,只有通过适当的努力才能向后移植到 `release-1.28`。
+
+
+例如,如果最终用户想要安装 CRI-O `main` 分支的最新可用版本,
+那么他们可以按照与 Kubernetes 相同的方式添加仓库。
+
+
+### 基于 `rpm` 的发行版 {#rpm-based-distributions}
+
+
+对于基于 `rpm` 的发行版,您可以以 `root`
+用户身份运行以下命令来将 CRI-O 与 Kubernetes 一起安装:
+
+
+#### 添加 Kubernetes 仓库 {#add-the-kubernetes-repo}
+
+```bash
+cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
+[kubernetes]
+name=Kubernetes
+baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
+enabled=1
+gpgcheck=1
+gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
+EOF
+```
+
+#### 添加 CRI-O 仓库 {#add-the-cri-o-repo}
+
+```bash
+cat <<EOF | tee /etc/yum.repos.d/cri-o.repo
+[cri-o]
+name=CRI-O
+baseurl=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/
+enabled=1
+gpgcheck=1
+gpgkey=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/repodata/repomd.xml.key
+EOF
+```
+
+#### 安装官方包依赖 {#install-official-package-dependencies}
+
+```bash
+dnf install -y \
+ conntrack \
+ container-selinux \
+ ebtables \
+ ethtool \
+ iptables \
+ socat
+```
+
+
+#### 从添加的仓库中安装软件包 {#install-the-packages-from-the-added-repos}
+
+```bash
+dnf install -y --repo cri-o --repo kubernetes \
+ cri-o \
+ kubeadm \
+ kubectl \
+ kubelet
+```
+
+
+### 基于 `deb` 的发行版 {#deb-based-distributions}
+
+
+对于基于 `deb` 的发行版,您可以以 `root` 用户身份运行以下命令:
+
+
+#### 安装用于添加仓库的依赖项 {#install-dependencies-for-adding-the-repositories}
+
+```bash
+apt-get update
+apt-get install -y software-properties-common curl
+```
+
+
+#### 添加 Kubernetes 仓库 {#add-the-kubernetes-repository}
+
+```bash
+curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key |
+ gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" |
+ tee /etc/apt/sources.list.d/kubernetes.list
+```
+
+
+#### 添加 CRI-O 仓库 {#add-the-cri-o-repository}
+
+```bash
+curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key |
+ gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
+echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" |
+ tee /etc/apt/sources.list.d/cri-o.list
+```
+
+
+#### 安装软件包 {#install-the-packages}
+
+```bash
+apt-get update
+apt-get install -y cri-o kubelet kubeadm kubectl
+```
+
+
+#### 启动 CRI-O {#start-cri-o}
+
+```bash
+systemctl start crio.service
+```
+
+
+如果使用的是另一个包序列,CRI-O 包路径中项目的 `prerelease:/main`
+前缀可以替换为 `stable:/v1.28`、`stable:/v1.29`、`prerelease:/v1.28` 或 `prerelease:/v1.29`。
+
+
+你可以使用 `kubeadm init` 命令来[引导集群](/docs/setup/product-environment/tools/kubeadm/install-kubeadm/),
+该命令会自动检测后台正在运行 CRI-O。还有适用于
+[Fedora 38](https://github.com/cri-o/packaging/blob/91df5f7/test/rpm/Vagrantfile)
+以及 [Ubuntu 22.04](https://github.com/cri-o/packaging/blob/91df5f7/test/deb/Vagrantfile)
+的 `Vagrantfile` 示例,可在使用 `kubeadm` 的场景中测试下载的软件包。
+
+
+## 它是如何工作的 {#how-it-works-under-the-hood}
+
+
+与这些包相关的所有内容都位于新的 [CRI-O 打包仓库](https://github.com/cri-o/packaging)中。
+它包含 [Daily Reconciliation](https://github.com/cri-o/packaging/blob/91df5f7/.github/workflows/schedule.yml) GitHub 工作流,
+支持所有发布分支以及 CRI-O 标签。
+OBS 工作流程中的[测试管道](https://github.com/cri-o/packaging/actions/workflows/obs.yml)确保包在发布之前可以被正确安装和使用。
+所有包的暂存和发布都是在 [Kubernetes 发布工具箱(krel)](https://github.com/kubernetes/release/blob/1f85912/docs/krel/README.md)的帮助下完成的,
+这一工具箱也被用于官方 Kubernetes `deb` 和 `rpm` 软件包。
+
+
+包构建的输入每天都会被动态调整,并使用 CRI-O 的静态二进制包。
+这些包是基于 CRI-O CI 中的每次提交来构建和签名的,
+并且包含 CRI-O 在特定架构上运行所需的所有内容。静态构建是可重复的,
+由 [nixpkgs](https://github.com/NixOS/nixpkgs) 提供支持,
+并且仅适用于 `x86_64`、`aarch64` 以及 `ppc64le` 架构。
+
+
+CRI-O 维护者将很乐意听取有关新软件包工作情况的任何反馈或建议!
+感谢您阅读本文,请随时通过 Kubernetes [Slack 频道 #crio](https://kubernetes.slack.com/messages/CAZH62UR1)
+联系维护人员或在[打包仓库](https://github.com/cri-o/packaging/issues)中创建 Issue。
diff --git a/content/zh-cn/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md b/content/zh-cn/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
new file mode 100644
index 0000000000..71a5fdabe0
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
@@ -0,0 +1,208 @@
+---
+layout: blog
+title: Kubernetes 中 PersistentVolume 的最后阶段转换时间
+date: 2023-10-23
+slug: persistent-volume-last-phase-transition-time
+---
+
+
+
+
+**作者:** Roman Bednář (Red Hat)
+
+**译者:** Xin Li (DaoCloud)
+
+
+在最近的 Kubernetes v1.28 版本中,我们(SIG Storage)引入了一项新的 Alpha 级别特性,
+旨在改进 PersistentVolume(PV)存储管理并帮助集群管理员更好地了解 PV 的生命周期。
+通过将 `lastPhaseTransitionTime` 字段添加到 PV 的状态中,集群管理员现在可以跟踪
+PV 上次转换到不同[阶段](/zh-cn/docs/concepts/storage/persistent-volumes/#phase)的时间,
+从而实现更高效、更明智的资源管理。
+
+
+## 我们为什么需要新的 PV 字段? {#why-new-field}
+
+Kubernetes 中的 PersistentVolume 在为集群中运行的工作负载提供存储资源方面发挥着至关重要的作用。
+然而,有效管理这些 PV 可能具有挑战性,特别是在确定 PV 在不同阶段(`Pending`、`Bound` 或 `Released`)之间转换的最后时间时。
+管理员通常需要知道 PV 上次使用或转换到某些阶段的时间;例如,实施保留策略、执行清理或监控存储运行状况时。
+
+
+过去,Kubernetes 用户在使用 `Delete` 保留策略时面临数据丢失问题,不得不使用更安全的 `Retain` 策略。
+当我们计划引入新的 `lastPhaseTransitionTime` 字段时,我们希望提供一个更通用的解决方案,
+可用于各种用例,包括根据卷上次使用时间进行手动清理或根据状态转变时间生成警报。
+
+
+## lastPhaseTransitionTime 如何提供帮助
+
+如果你已启用特性门控(请参阅[如何使用它](#how-to-use-it)),则每次 PV 从一个阶段转换到另一阶段时,
+PersistentVolume(PV)的新字段 `.status.lastPhaseTransitionTime` 都会被更新。
+
+
+无论是从 `Pending` 转换到 `Bound`、`Bound` 到 `Released`,还是任何其他阶段转换,都会记录 `lastPhaseTransitionTime`。
+对于新创建的 PV,将被声明为处于 `Pending` 阶段,并且 `lastPhaseTransitionTime` 也将被记录。
+
+
+此功能允许集群管理员:
+
+
+1. 实施保留政策
+
+ 通过 `lastPhaseTransitionTime`,管理员可以跟踪 PV 上次使用或转换到 `Released` 阶段的时间。
+ 此信息对于实施保留策略以清理在特定时间内处于 `Released` 阶段的资源至关重要。
+ 例如,现在编写一个脚本或一个策略来删除一周内处于 `Released` 阶段的所有 PV 是很简单的。
+
+
+2. 监控存储运行状况
+
+ 通过分析 PV 的相变时间,管理员可以更有效地监控存储运行状况。
+ 例如,他们可以识别处于 `Pending` 阶段时间异常长的 PV,这可能表明存储配置程序存在潜在问题。
+
+
+## 如何使用它
+
+从 Kubernetes v1.28 开始,`lastPhaseTransitionTime` 为 Alpha 特性字段,因此需要启用
+`PersistentVolumeLastPhaseTransitionTime` 特性门控。
+
+
+如果你想在该特性处于 Alpha 阶段时对其进行测试,则需要在 `kube-controller-manager`
+和 `kube-apiserver` 上启用此特性门控。
+
+使用 `--feature-gates` 命令行参数:
+
+```shell
+--feature-gates="...,PersistentVolumeLastPhaseTransitionTime=true"
+```
+
+
+请记住,该特性启用后不会立即生效;而是在 PV 更新以及阶段之间转换时,填充新字段。
+然后,管理员可以通过查看 PV 状态访问新字段,此状态可以使用标准 Kubernetes API
+调用或通过 Kubernetes 客户端库进行检索。
+
+
+以下示例展示了如何使用 `kubectl` 命令行工具检索特定 PV 的 `lastPhaseTransitionTime`:
+
+```shell
+kubectl get pv <pv-name> -o jsonpath='{.status.lastPhaseTransitionTime}'
+```
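
A sketch of the retention-policy idea mentioned earlier: delete PersistentVolumes that have been
`Released` for more than a week. This is illustrative only; it assumes the feature gate is enabled,
GNU `date`, and bash:

```shell
# Delete PVs that transitioned to Released more than 7 days ago (illustrative only).
cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{" "}{.status.lastPhaseTransitionTime}{"\n"}{end}' |
while read -r name ts; do
  # Skip PVs that have no lastPhaseTransitionTime recorded.
  if [[ -n "$ts" && "$ts" < "$cutoff" ]]; then
    kubectl delete pv "$name"
  fi
done
```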
+
+
+## 未来发展
+
+此特性最初是作为 Alpha 特性引入的,位于默认情况下禁用的特性门控之下。
+在 Alpha 阶段,我们(Kubernetes SIG Storage)将收集最终用户的反馈并解决发现的任何问题或改进。
+
+一旦收到足够的反馈,或者没有收到投诉,该特性就可以进入 Beta 阶段。
+Beta 阶段将使我们能够进一步验证实施并确保其稳定性。
+
+
+在该字段升级到 Beta 级别和将该字段升级为通用版 (GA) 的版本之间,至少会经过两个 Kubernetes 版本。
+这意味着该字段 GA 的最早版本是 Kubernetes 1.32,可能计划于 2025 年初发布。
+
+
+## 欢迎参与
+
+我们始终欢迎新的贡献者,因此如果你想参与其中,可以加入我们的
+[Kubernetes 存储特殊兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)(SIG)。
+
+
+如果你想分享反馈,可以在我们的 [公共 Slack 频道](https://app.slack.com/client/T09NY5SBT/C09QZFCE5)上分享。
+如果你尚未加入 Slack 工作区,可以访问 https://slack.k8s.io/ 获取邀请。
+
+
+特别感谢所有提供精彩评论、分享宝贵意见并帮助实现此特性的贡献者(按字母顺序排列):
+
+- Han Kang ([logicalhan](https://github.com/logicalhan))
+- Jan Šafránek ([jsafrane](https://github.com/jsafrane))
+- Jordan Liggitt ([liggitt](https://github.com/liggitt))
+- Kiki ([carlory](https://github.com/carlory))
+- Michelle Au ([msau42](https://github.com/msau42))
+- Tim Bannister ([sftim](https://github.com/sftim))
+- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
+- Xing Yang ([xing-yang](https://github.com/xing-yang))
diff --git a/content/zh-cn/blog/_posts/2023-11-07-introducing-sig-etcd.md b/content/zh-cn/blog/_posts/2023-11-07-introducing-sig-etcd.md
new file mode 100644
index 0000000000..e3b9992674
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2023-11-07-introducing-sig-etcd.md
@@ -0,0 +1,145 @@
+---
+layout: blog
+title: "介绍 SIG etcd"
+slug: introducing-sig-etcd
+date: 2023-11-07
+canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
+---
+
+
+
+
+**作者**:Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
+
+**译者**:Xin Li (Daocloud)
+
+
+特殊兴趣小组(SIG)是 Kubernetes 项目的基本组成部分,很大一部分的 Kubernetes 社区活动都在其中进行。
+当有需要时,可以创建[新的 SIG](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md),
+而这正是最近发生的事情。
+
+
+[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)
+是 Kubernetes SIG 列表中的最新成员。在这篇文章中,我们将更好地认识它,了解它的起源、职责和计划。
+
+
+## etcd 的关键作用
+
+如果我们查看 Kubernetes 集群的控制平面内部,我们会发现
+[etcd](https://kubernetes.io/zh-cn/docs/concepts/overview/components/#etcd),
+一个一致且高可用的键值存储,用作 Kubernetes 所有集群数据的后台数据库 -- 仅此描述就突出了
+etcd 所扮演的关键角色,以及它在 Kubernetes 生态系统中的重要性。
+
+
+由于 etcd 在生态中的关键作用,其项目和社区的健康成为了一个重要的考虑因素,
+并且人们 2022 年初[对项目状态的担忧](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ)
+并没有被忽视。维护团队的变化以及其他因素导致了一些情况需要被解决。
+
+
+## 为什么要设立特殊兴趣小组
+
+考虑到 etcd 的关键作用,有人提出未来的方向是创建一个新的特殊兴趣小组。
+如果 etcd 已经成为 Kubernetes 的核心,创建专门的 SIG 不仅是对这一角色的认可,
+还会使 etcd 成为 Kubernetes 社区的一等公民。
+
+
+SIG etcd 的成立为明确 etcd 和 Kubernetes API 机制之间的契约关系创造了一个专门的空间,
+并防止在 etcd 级别上发生违反此契约的更改。此外,etcd 将能够采用 Kubernetes 提供的 SIG
+流程([KEP](https://www.kubernetes.dev/resources/keps/)、
+[PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md)、
+[分阶段特性门控](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)以及其他流程)
+以提高代码库的一致性和可靠性,这将为 etcd 社区带来巨大的好处。
+
+
+作为 SIG,etcd 还能够从 Kubernetes 获得贡献者的支持:Kubernetes 维护者对 etcd
+的积极贡献将通过增加潜在审核者数量以及与现有测试框架的集成来降低破坏 Kubernetes 更改的可能性。
+这不仅有利于 Kubernetes,由于它能够更好地参与并塑造 etcd 所发挥的关键作用,从而也将有利于整个 etcd。
+
+
+## 关于 SIG etcd
+
+最近创建的 SIG 已经在努力实现其[章程](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md)
+和[愿景](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md)中定义的目标。
+其目的很明确:确保 etcd 是一个可靠、简单且可扩展的生产就绪存储,用于构建云原生分布式系统并通过 Kubernetes 等编排器管理云原生基础设施。
+
+
+SIG etcd 的范围不仅仅涉及将 etcd 作为 Kubernetes 组件,还涵盖将 etcd 作为标准解决方案。
+我们的目标是使 etcd 成为可在任何地方使用的最可靠的键值存储,不受任何 kubernetes 特定限制的约束,并且可以扩展以满足许多不同用例的需求。
+
+
+我们相信,SIG etcd 的创建将成为项目生命周期中的一个重要里程碑,同时改进 etcd 本身以及
+etcd 与 Kubernetes 的集成。我们欢迎所有对 etcd
+感兴趣的人[访问我们的页面](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)、
+[加入我们的 Slack 频道](https://kubernetes.slack.com/messages/etcd),并参与 etcd 生命的新阶段。
diff --git a/content/zh-cn/docs/concepts/cluster-administration/addons.md b/content/zh-cn/docs/concepts/cluster-administration/addons.md
index 007fcaacae..d792ab1180 100644
--- a/content/zh-cn/docs/concepts/cluster-administration/addons.md
+++ b/content/zh-cn/docs/concepts/cluster-administration/addons.md
@@ -99,6 +99,9 @@ Add-on 扩展了 Kubernetes 的功能。
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)
是一个可以用于 Kubernetes 的 overlay 网络提供者。
+* [Gateway API](/zh-cn/docs/concepts/services-networking/gateway/) 是一个由
+ [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) 社区管理的开源项目,
+ 为服务网络建模提供一种富有表达力、可扩展和面向角色的 API。
* [Knitter](https://github.com/ZTE/Knitter/) 是在一个 Kubernetes Pod 中支持多个网络接口的插件。
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) 是一个多插件,
可在 Kubernetes 中提供多种网络支持,以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel),
diff --git a/content/zh-cn/docs/concepts/containers/images.md b/content/zh-cn/docs/concepts/containers/images.md
index e9dea1061e..ac3b849b5b 100644
--- a/content/zh-cn/docs/concepts/containers/images.md
+++ b/content/zh-cn/docs/concepts/containers/images.md
@@ -516,63 +516,40 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust
The interpretation of `config.json` varies between the original Docker
implementation and the Kubernetes interpretation. In Docker, the `auths` keys
can only specify root URLs, whereas Kubernetes allows glob URLs as well as
-prefix-matched paths. This means that a `config.json` like this is valid:
+prefix-matched paths. The only limitation is that glob patterns (`*`) have to
+include the dot (`.`) for each subdomain. The amount of matched subdomains has
+to be equal to the amount of glob patterns (`*.`), for example:
-->
对于 `config.json` 的解释在原始 Docker 实现和 Kubernetes 的解释之间有所不同。
-在 Docker 中,`auths` 键只能指定根 URL,而 Kubernetes 允许 glob URLs 以及前缀匹配的路径。
+在 Docker 中,`auths` 键只能指定根 URL,而 Kubernetes 允许 glob URL 以及前缀匹配的路径。
+唯一的限制是 glob 模式(`*`)必须为每个子域名包括点(`.`)。
+匹配的子域名数量必须等于 glob 模式(`*.`)的数量,例如:
+
+
+- `*.kubernetes.io` **不**会匹配 `kubernetes.io`,但会匹配 `abc.kubernetes.io`
+- `*.*.kubernetes.io` **不**会匹配 `abc.kubernetes.io`,但会匹配 `abc.def.kubernetes.io`
+- `prefix.*.io` 将匹配 `prefix.kubernetes.io`
+- `*-good.kubernetes.io` 将匹配 `prefix-good.kubernetes.io`
+
+
这意味着,像这样的 `config.json` 是有效的:
```json
{
"auths": {
- "*my-registry.io/images": {
- "auth": "…"
- }
+ "my-registry.io/images": { "auth": "…" },
+ "*.my-registry.io/images": { "auth": "…" }
}
}
```
-
-使用以下语法匹配根 URL (`*my-registry.io`):
-
-```
-pattern:
- { term }
-
-term:
- '*' 匹配任何无分隔符字符序列
- '?' 匹配任意单个非分隔符
- '[' [ '^' ] 字符范围
- 字符集(必须非空)
- c 匹配字符 c (c 不为 '*', '?', '\\', '[')
- '\\' c 匹配字符 c
-
-字符范围:
- c 匹配字符 c (c 不为 '\\', '?', '-', ']')
- '\\' c 匹配字符 c
- lo '-' hi 匹配字符范围在 lo 到 hi 之间字符
-```
-
+但这些不会匹配成功:
+
- `a.sub.my-registry.io/images/my-image`
+- `a.b.sub.my-registry.io/images/my-image`
-kubelet 为每个找到的凭据的镜像按顺序拉取。这意味着在 `config.json` 中可能有多项:
+kubelet 为每个找到的凭据的镜像按顺序拉取。这意味着对于不同的路径在 `config.json` 中也可能有多项:
```json
{
@@ -697,7 +681,7 @@ kubectl create secret docker-registry \
@@ -774,8 +758,7 @@ will be merged.
-->
你需要对使用私有仓库的每个 Pod 执行以上操作。不过,
设置该字段的过程也可以通过为[服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)资源设置
-`imagePullSecrets` 来自动完成。
-有关详细指令,
+`imagePullSecrets` 来自动完成。有关详细指令,
可参见[将 ImagePullSecrets 添加到服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。
你也可以将此方法与节点级别的 `.docker/config.json` 配置结合使用。
@@ -830,8 +813,9 @@ common use cases and suggested solutions.
- Move sensitive data into a "Secret" resource, instead of packaging it in an image.
-->
3. 集群使用专有镜像,且有些镜像需要更严格的访问控制
- - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)被启用。否则,所有 Pod 都可以使用所有镜像。
- - 确保将敏感数据存储在 Secret 资源中,而不是将其打包在镜像里
+ - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)被启用。
+ 否则,所有 Pod 都可以使用所有镜像。
+ - 确保将敏感数据存储在 Secret 资源中,而不是将其打包在镜像里。
4. 集群是多租户的并且每个租户需要自己的私有仓库
- - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。否则,所有租户的所有的 Pod 都可以使用所有镜像。
- - 为私有仓库启用鉴权
+ - 确保 [AlwaysPullImages 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)。
+ 否则,所有租户的所有的 Pod 都可以使用所有镜像。
+ - 为私有仓库启用鉴权。
- 为每个租户生成访问仓库的凭据,放置在 Secret 中,并将 Secret 发布到各租户的名字空间下。
- - 租户将 Secret 添加到每个名字空间中的 imagePullSecrets
+ - 租户将 Secret 添加到每个名字空间中的 imagePullSecrets。
-## 旧版的内置 kubelet 凭据提供程序
+## 旧版的内置 kubelet 凭据提供程序 {#legacy-built-in-kubelet-credentials-provider}
在旧版本的 Kubernetes 中,kubelet 与云提供商凭据直接集成。
这使它能够动态获取镜像仓库的凭据。
@@ -869,7 +854,7 @@ There were three built-in implementations of the kubelet credential provider int
ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
-->
kubelet 凭据提供程序集成存在三个内置实现:
-ACR(Azure 容器仓库)、ECR(Elastic 容器仓库)和 GCR(Google 容器仓库)
+ACR(Azure 容器仓库)、ECR(Elastic 容器仓库)和 GCR(Google 容器仓库)。
+
+
+
+
+本页介绍 Kubernetes 中的 ServiceAccount 对象,
+讲述服务账号的工作原理、使用场景、限制、替代方案,还提供了一些资源链接方便查阅更多指导信息。
+
+
+
+
+## 什么是服务账号? {#what-are-service-accounts}
+
+
+服务账号是在 Kubernetes 中一种用于非人类用户的账号,在 Kubernetes 集群中提供不同的身份标识。
+应用 Pod、系统组件以及集群内外的实体可以使用特定 ServiceAccount 的凭据来将自己标识为该 ServiceAccount。
+这种身份可用于许多场景,包括向 API 服务器进行身份认证或实现基于身份的安全策略。
+
+
+服务账号以 ServiceAccount 对象的形式存在于 API 服务器中。服务账号具有以下属性:
+
+
+* **名字空间限定:** 每个服务账号都与一个 Kubernetes 名字空间绑定。
+ 每个名字空间在创建时,会获得一个[名为 `default` 的 ServiceAccount](#default-service-accounts)。
+
+* **轻量级:** 服务账号存在于集群中,并在 Kubernetes API 中定义。你可以快速创建服务账号以支持特定任务。
+
+
+* **可移植性:** 复杂的容器化工作负载的配置包中可能包括针对系统组件的服务账号定义。
+ 服务账号的轻量级性质和名字空间作用域的身份使得这类配置可移植。
+
+
+服务账号与用户账号不同,用户账号是集群中通过了身份认证的人类用户。默认情况下,
+用户账号不存在于 Kubernetes API 服务器中;相反,API 服务器将用户身份视为不透明数据。
+你可以使用多种方法认证为某个用户账号。某些 Kubernetes 发行版可能会添加自定义扩展 API
+来在 API 服务器中表示用户账号。
+
+
+{{< table caption="服务账号与用户之间的比较" >}}
+
+
+| 描述 | 服务账号 | 用户或组 |
+| --- | --- | --- |
+| 位置 | Kubernetes API(ServiceAccount 对象)| 外部 |
+| 访问控制 | Kubernetes RBAC 或其他[鉴权机制](/zh-cn/docs/reference/access-authn-authz/authorization/#authorization-modules) | Kubernetes RBAC 或其他身份和访问管理机制 |
+| 目标用途 | 工作负载、自动化工具 | 人 |
+{{< /table >}}
+
+
+### 默认服务账号 {#default-service-accounts}
+
+
+在你创建集群时,Kubernetes 会自动为集群中的每个名字空间创建一个名为 `default` 的 ServiceAccount 对象。
+在启用了基于角色的访问控制(RBAC)时,Kubernetes 为所有通过了身份认证的主体赋予
+[默认 API 发现权限](/zh-cn/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings)。
+每个名字空间中的 `default` 服务账号除了这些权限之外,默认没有其他访问权限。
+如果基于角色的访问控制(RBAC)被启用,当你删除名字空间中的 `default` ServiceAccount 对象时,
+{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}会用新的 ServiceAccount 对象替换它。
+
+
+如果你在某个名字空间中部署 Pod,并且你没有[手动为 Pod 指派 ServiceAccount](#assign-to-pod),
+Kubernetes 将该名字空间的 `default` 服务账号指派给这一 Pod。
+
+
+## Kubernetes 服务账号的使用场景 {#use-cases}
+
+一般而言,你可以在以下场景中使用服务账号来提供身份标识:
+
+
+* 你的 Pod 需要与 Kubernetes API 服务器通信,例如在以下场景中:
+ * 提供对存储在 Secret 中的敏感信息的只读访问。
+ * 授予[跨名字空间访问](#cross-namespace)的权限,例如允许 `example` 名字空间中的 Pod 读取、列举和监视
+ `kube-node-lease` 名字空间中的 Lease 对象。
+
+
+* 你的 Pod 需要与外部服务进行通信。例如,工作负载 Pod 需要一个身份来访问某商业化的云 API,
+ 并且商业化 API 的提供商允许配置适当的信任关系。
+* [使用 `imagePullSecret` 完成在私有镜像仓库上的身份认证](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)。
+
+
+* 外部服务需要与 Kubernetes API 服务器进行通信。例如,作为 CI/CD 流水线的一部分向集群作身份认证。
+* 你在集群中使用了第三方安全软件,该软件依赖不同 Pod 的 ServiceAccount 身份,按不同上下文对这些 Pod 分组。
+
+
+## 如何使用服务账号 {#how-to-use}
+
+要使用 Kubernetes 服务账号,你需要执行以下步骤:
+
+
+1. 使用像 `kubectl` 这样的 Kubernetes 客户端或定义对象的清单(manifest)创建 ServiceAccount 对象。
+2. 使用鉴权机制(如 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))为 ServiceAccount 对象授权。
+
+
+3. 在创建 Pod 期间将 ServiceAccount 对象指派给 Pod。
+
+ 如果你所使用的是来自外部服务的身份,可以[获取 ServiceAccount 令牌](#get-a-token),并在该服务中使用这一令牌。
+
+
+有关具体操作说明,参阅[为 Pod 配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
+
+
+### 为 ServiceAccount 授权 {#grant-permissions}
+
+
+你可以使用 Kubernetes 内置的
+[基于角色的访问控制 (RBAC)](/zh-cn/docs/reference/access-authn-authz/rbac/)机制来为每个服务账号授予所需的最低权限。
+你可以创建一个用来授权的**角色**,然后将此角色**绑定**到你的 ServiceAccount 上。
+RBAC 可以让你定义一组最低权限,使得服务账号权限遵循最小特权原则。
+这样使用服务账号的 Pod 不会获得超出其正常运行所需的权限。
+
+
+有关具体操作说明,参阅 [ServiceAccount 权限](/zh-cn/docs/reference/access-authn-authz/rbac/#service-account-permissions)。
+
+
+#### 使用 ServiceAccount 进行跨名字空间访问 {#cross-namespace}
+
+
+你可以使用 RBAC 允许一个名字空间中的服务账号对集群中另一个名字空间的资源执行操作。
+例如,假设你在 `dev` 名字空间中有一个服务账号和一个 Pod,并且希望该 Pod 可以查看 `maintenance`
+名字空间中正在运行的 Job。你可以创建一个 Role 对象来授予列举 Job 对象的权限。
+随后在 `maintenance` 名字空间中创建 RoleBinding 对象将 Role 绑定到此 ServiceAccount 对象上。
+现在,`dev` 名字空间中的 Pod 可以使用该服务账号列出 `maintenance` 名字空间中的 Job 对象集合。
+
+
+### 将 ServiceAccount 指派给 Pod {#assign-to-pod}
+
+要将某 ServiceAccount 指派给某 Pod,你需要在该 Pod 的规约中设置 `spec.serviceAccountName` 字段。
+Kubernetes 将自动为 Pod 提供该 ServiceAccount 的凭据。在 Kubernetes v1.22 及更高版本中,
+Kubernetes 使用 `TokenRequest` API 获取一个短期的、**自动轮换**的令牌,
+并以[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/#serviceaccounttoken)的形式挂载此令牌。
+
+
+默认情况下,Kubernetes 会将所指派的 ServiceAccount
+(无论是 `default` 服务账号还是你指定的定制 ServiceAccount)的凭据提供给 Pod。
+
+要防止 Kubernetes 自动注入指定的 ServiceAccount 或 `default` ServiceAccount 的凭据,
+可以将 Pod 规约中的 `automountServiceAccountToken` 字段设置为 `false`。
+
+
+
+
+在 Kubernetes 1.22 之前的版本中,Kubernetes 会将一个长期有效的静态令牌以 Secret 形式提供给 Pod。
+
+
+#### 手动获取 ServiceAccount 凭据 {#get-a-token}
+
+如果你需要 ServiceAccount 的凭据并将其挂载到非标准位置,或者用于 API 服务器之外的受众,可以使用以下方法之一:
+
+
+* [TokenRequest API](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)(推荐):
+ 在你自己的**应用代码**中请求一个短期的服务账号令牌。此令牌会自动过期,并可在过期时被轮换。
+ 如果你有一个旧的、对 Kubernetes 无感知能力的应用,你可以在同一个 Pod
+ 内使用边车容器来获取这些令牌,并将其提供给应用工作负载。
+
+
+* [令牌卷投射](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection)(同样推荐):
+ 在 Kubernetes v1.20 及更高版本中,使用 Pod 规约告知 kubelet 将服务账号令牌作为**投射卷**添加到 Pod 中。
+ 所投射的令牌会自动过期,在过期之前 kubelet 会自动轮换此令牌。
+
+
+* [服务账号令牌 Secret](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)(不推荐):
+ 你可以将服务账号令牌以 Kubernetes Secret 的形式挂载到 Pod 中。这些令牌不会过期且不会轮换。
+ 不推荐使用此方法,特别是在大规模场景下,这是因为静态、长期有效的凭据存在一定的风险。在 Kubernetes v1.24 及更高版本中,
+ [LegacyServiceAccountTokenNoAutoGeneration 特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features)阻止
+ Kubernetes 自动为 ServiceAccount 创建这些令牌。`LegacyServiceAccountTokenNoAutoGeneration` 默认被启用,
+ 也就是说,Kubernetes 不会创建这些令牌。
+
+{{< note >}}
+
+对于运行在 Kubernetes 集群外的应用,你可能考虑创建一个长期有效的 ServiceAccount 令牌,
+并将其存储在 Secret 中。尽管这种方式可以实现身份认证,但 Kubernetes 项目建议你避免使用此方法。
+长期有效的持有者令牌(Bearer Token)会带来安全风险,一旦泄露,此令牌就可能被滥用。
+为此,你可以考虑使用其他替代方案。例如,你的外部应用可以使用一个保护得很好的私钥和证书进行身份认证,
+或者使用你自己实现的[身份认证 Webhook](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
+这类自定义机制。
+
+
+你还可以使用 TokenRequest 为外部应用获取短期的令牌。
+{{< /note >}}
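
For quick experiments, a short-lived token for a ServiceAccount can also be requested from the command
line; the account name, namespace, and duration below are examples:

```shell
# Requests a short-lived token through the TokenRequest API.
kubectl create token my-serviceaccount --namespace default --duration 10m
```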
+
+
+## 对服务账号凭据进行鉴别 {#authenticating-credentials}
+
+
+ServiceAccount 使用签名的 JSON Web Token (JWT) 来向 Kubernetes API
+服务器以及任何其他存在信任关系的系统进行身份认证。根据令牌的签发方式
+(使用 `TokenRequest` 限制时间或使用传统的 Secret 机制),ServiceAccount
+令牌也可能有到期时间、受众和令牌**开始**生效的时间点。
+当客户端以 ServiceAccount 的身份尝试与 Kubernetes API 服务器通信时,
+客户端会在 HTTP 请求中包含 `Authorization: Bearer <token>` 标头。
+API 服务器按照以下方式检查该持有者令牌的有效性:
+
+
+1. 检查令牌签名。
+1. 检查令牌是否已过期。
+1. 检查令牌申明中的对象引用是否当前有效。
+1. 检查令牌是否当前有效。
+1. 检查受众申明。
+
+
+TokenRequest API 为 ServiceAccount 生成**绑定令牌**。这种绑定与以该 ServiceAccount 身份运行的
+客户端(如 Pod)的生命期相关联。
+
+对于使用 `TokenRequest` API 签发的令牌,API 服务器还会检查正在使用 ServiceAccount 的特定对象引用是否仍然存在,
+方式是通过该对象的{{< glossary_tooltip term_id="uid" text="唯一 ID" >}} 进行匹配。
+对于以 Secret 形式挂载到 Pod 中的旧有令牌,API 服务器会基于 Secret 来检查令牌。
+
+
+有关身份认证过程的更多信息,参考[身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)。
+
+
+### 在自己的代码中检查服务账号凭据 {#authenticating-in-code}
+
+如果你的服务需要检查 Kubernetes 服务账号凭据,可以使用以下方法:
+
+
+* [TokenReview API](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-review-v1/)(推荐)
+* OIDC 发现
+
+
+Kubernetes 项目建议你使用 TokenReview API,因为当你删除某些 API 对象
+(如 Secret、ServiceAccount 和 Pod)的时候,此方法将使绑定到这些 API 对象上的令牌失效。
+例如,如果删除包含投射 ServiceAccount 令牌的 Pod,则集群立即使该令牌失效,
+并且 TokenReview 操作也会立即失败。
+如果你使用的是 OIDC 验证,则客户端将继续将令牌视为有效,直到令牌达到其到期时间戳。
+
+
+你的应用应始终定义其所接受的受众,并检查令牌的受众是否与应用期望的受众匹配。
+这有助于将令牌的作用域最小化,这样它只能在你的应用内部使用,而不能在其他地方使用。
+
+
+## 替代方案 {#alternatives}
+
+* 使用其他机制签发你自己的令牌,然后使用
+ [Webhook 令牌身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)通过你自己的验证服务来验证持有者令牌。
+
+
+* 为 Pod 提供你自己的身份:
+ * [使用 SPIFFE CSI 驱动插件将 SPIFFE SVID 作为 X.509 证书对提供给 Pod](https://cert-manager.io/docs/projects/csi-driver-spiffe/)。
+ {{% thirdparty-content single="true" %}}
+ * [使用 Istio 这类服务网格为 Pod 提供证书](https://istio.io/latest/zh/docs/tasks/security/cert-management/plugin-ca-cert/)。
+
+
+* 从集群外部向 API 服务器进行身份认证,而不使用服务账号令牌:
+ * [配置 API 服务器接受来自你自己的身份驱动的 OpenID Connect (OIDC) 令牌](/zh-cn/docs/reference/access-authn-authz/authentication/#openid-connect-tokens)。
+ * 使用来自云提供商等外部身份和访问管理 (IAM) 服务创建的服务账号或用户账号向集群进行身份认证。
+ * [使用 CertificateSigningRequest API 和客户端证书](/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster/)。
+
+
+* [配置 kubelet 从镜像仓库中获取凭据](/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/)。
+* 使用设备插件访问虚拟的可信平台模块 (TPM),进而可以使用私钥进行身份认证。
+
+## {{% heading "whatsnext" %}}
+
+
+* 学习如何[作为集群管理员管理你的 ServiceAccount](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/)。
+* 学习如何[将 ServiceAccount 指派给 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
+* 阅读 [ServiceAccount API 参考文档](/zh-cn/docs/reference/kubernetes-api/authentication-resources/service-account-v1/)。
diff --git a/content/zh-cn/docs/concepts/services-networking/ingress.md b/content/zh-cn/docs/concepts/services-networking/ingress.md
index c9356f442f..6ba2088285 100644
--- a/content/zh-cn/docs/concepts/services-networking/ingress.md
+++ b/content/zh-cn/docs/concepts/services-networking/ingress.md
@@ -26,6 +26,13 @@ weight: 30
{{< glossary_definition term_id="ingress" length="all" >}}
+{{< note >}}
+
+入口(Ingress)目前已停止更新。新的功能正在集成至[网关 API](/zh-cn/docs/concepts/services-networking/gateway/) 中。
+{{< /note >}}
+
-如果你希望在 IP 地址或端口层面(OSI 第 3 层或第 4 层)控制网络流量,
+如果你希望针对 TCP、UDP 和 SCTP 协议在 IP 地址或端口层面控制网络流量,
则你可以考虑为集群中特定应用使用 Kubernetes 网络策略(NetworkPolicy)。
NetworkPolicy 是一种以应用为中心的结构,允许你设置如何允许
{{< glossary_tooltip text="Pod" term_id="pod">}} 与网络上的各类网络“实体”
@@ -481,24 +481,16 @@ ingress or egress traffic.
此策略可以确保即使没有被其他任何 NetworkPolicy 选择的 Pod 也不会被允许入站或出站流量。
-## SCTP 支持 {#sctp-support}
+## Network traffic filtering
-{{< feature-state for_k8s_version="v1.20" state="stable" >}}
-
-
-作为一个稳定特性,SCTP 支持默认是被启用的。
-要在集群层面禁用 SCTP,你(或你的集群管理员)需要为 API 服务器指定
-`--feature-gates=SCTPSupport=false,...`
-来禁用 `SCTPSupport` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
-启用该特性门控后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`。
+## 网络流量过滤 {#network-traffic-filtering}
+
+NetworkPolicy 是为[第 4 层](https://zh.wikipedia.org/wiki/OSI%E6%A8%A1%E5%9E%8B#%E7%AC%AC4%E5%B1%A4_%E5%82%B3%E8%BC%B8%E5%B1%A4)连接
+(TCP、UDP 和可选的 SCTP)所定义的。对于所有其他协议,这种网络流量过滤的行为可能因网络插件而异。
{{< note >}}
+当 `deny all` 网络策略被定义时,此策略只能保证拒绝 TCP、UDP 和 SCTP 连接。
+对于 ARP 或 ICMP 这类其他协议,这种网络流量过滤行为是未定义的。
+相同的情况也适用于 allow 规则:当特定 Pod 被允许作为入口源或出口目的地时,
+对于(例如)ICMP 数据包会发生什么是未定义的。
+ICMP 这类协议可能被某些网络插件所允许,而被另一些网络插件所拒绝。
+
@@ -632,6 +637,158 @@ Kubernetes 控制面会在所有名字空间上设置一个不可变更的标签
如果 NetworkPolicy 无法在某些对象字段中指向某名字空间,
你可以使用标准的标签方式来指向特定名字空间。
+
+## Pod 生命周期 {#pod-lifecycle}
+
+{{< note >}}
+
+以下内容适用于使用了合规网络插件和 NetworkPolicy 合规实现的集群。
+{{< /note >}}
+
+
+当新的 NetworkPolicy 对象被创建时,网络插件可能需要一些时间来处理这个新对象。
+如果受到 NetworkPolicy 影响的 Pod 在网络插件完成 NetworkPolicy 处理之前就被创建了,
+那么该 Pod 可能会最初处于无保护状态,而在 NetworkPolicy 处理完成后被应用隔离规则。
+
+
+一旦 NetworkPolicy 被网络插件处理,
+
+1. 所有受给定 NetworkPolicy 影响的新建 Pod 都将在启动前被隔离。
+ NetworkPolicy 的实现必须确保过滤规则在整个 Pod 生命周期内是有效的,
+ 这个生命周期要从该 Pod 的任何容器启动的第一刻算起。
+ 因为 NetworkPolicy 在 Pod 层面被应用,所以 NetworkPolicy 同样适用于 Init 容器、边车容器和常规容器。
+
+
+2. Allow 规则最终将在隔离规则之后被应用(或者可能同时被应用)。
+ 在最糟的情况下,如果隔离规则已被应用,但 allow 规则尚未被应用,
+ 那么新建的 Pod 在初始启动时可能根本没有网络连接。
+
+用户所创建的每个 NetworkPolicy 最终都会被网络插件处理,但无法使用 Kubernetes API 来获知确切的处理时间。
+
+
+因此,Pod 必须能够应对启动时网络连接与预期不符的情况。
+如果你需要确保 Pod 在启动之前能够访问特定的目标,可以使用
+[Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/)在
+kubelet 启动应用容器之前等待这些目的地变得可达。
+
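+下面是一个仅作示意的 Pod 清单,演示这种用 Init 容器等待目标可达的做法。
+其中的镜像(`busybox:1.36`、`my-app:latest`)以及目标服务名与端口(假定 `my-service`
+在 80 端口提供 HTTP 服务)均为假设值:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wait-for-backend
+spec:
+  initContainers:
+  - name: wait-for-service
+    image: busybox:1.36
+    # 反复探测目标服务,直到可达后 Init 容器才会退出,应用容器随后才被启动
+    command: ['sh', '-c', 'until wget -q -O /dev/null http://my-service:80; do echo waiting; sleep 2; done']
+  containers:
+  - name: app
+    image: my-app:latest
+```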
+
+每个 NetworkPolicy 最终都会被应用到所选定的所有 Pod 之上。
+由于网络插件可能以分布式的方式实现 NetworkPolicy,所以当 Pod 被首次创建时或当
+Pod 或策略发生变化时,Pod 可能会看到稍微不一致的网络策略视图。
+例如,新建的 Pod 本来应能立即访问 Node 1 上的 Pod A 和 Node 2 上的 Pod B,
+但可能你会发现新建的 Pod 可以立即访问 Pod A,但要在几秒后才能访问 Pod B。
+
+
+## NetworkPolicy 和 `hostNetwork` Pod {#networkpolicy-and-hostnetwork-pods}
+
+针对 `hostNetwork` Pod 的 NetworkPolicy 行为是未定义的,但应限制为以下两种可能:
+
+
+- 网络插件可以从所有其他流量中辨别出 `hostNetwork` Pod 流量
+ (包括能够从同一节点上的不同 `hostNetwork` Pod 中辨别流量),
+ 网络插件还可以像处理 Pod 网络流量一样,对 `hostNetwork` Pod 应用 NetworkPolicy。
+
+
+- 网络插件无法正确辨别 `hostNetwork` Pod 流量,因此在匹配 `podSelector` 和 `namespaceSelector`
+ 时会忽略 `hostNetwork` Pod。流向/来自 `hostNetwork` Pod 的流量的处理方式与流向/来自节点 IP
+ 的所有其他流量一样。(这是最常见的实现方式。)
+
+
+这适用于以下情形:
+
+
+1. `hostNetwork` Pod 被 `spec.podSelector` 选中。
+
+ ```yaml
+ ...
+ spec:
+ podSelector:
+ matchLabels:
+ role: client
+ ...
+ ```
+
+
+2. `hostNetwork` Pod 在 `ingress` 或 `egress` 规则中被 `podSelector` 或 `namespaceSelector` 选中。
+
+ ```yaml
+ ...
+ ingress:
+ - from:
+ - podSelector:
+ matchLabels:
+ role: client
+ ...
+ ```
+
+
+同时,由于 `hostNetwork` Pod 具有与其所在节点相同的 IP 地址,所以它们的连接将被视为节点连接。
+例如,你可以使用 `ipBlock` 规则允许来自 `hostNetwork` Pod 的流量。
+
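+下面是一个仅作示意的 NetworkPolicy 片段,使用 `ipBlock` 规则允许来自节点网段
+(因而也包括 `hostNetwork` Pod)的入站流量;其中的 CIDR `10.0.0.0/24` 为假设的节点 IP 网段:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-from-node-cidr
+spec:
+  podSelector: {}          # 选择此名字空间中的所有 Pod
+  policyTypes:
+  - Ingress
+  ingress:
+  - from:
+    - ipBlock:
+        cidr: 10.0.0.0/24  # 假设的节点 IP 网段
+```
+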
进一步学习 Service 及其在 Kubernetes 中所发挥的作用:
@@ -1698,7 +1698,7 @@ Learn more about Services and how they fit into Kubernetes:
* 完成[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程。
* 阅读 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 文档。Ingress
负责将来自集群外部的 HTTP 和 HTTPS 请求路由给集群内的服务。
-* 阅读 [Gateway](https://gateway-api.sigs.k8s.io/) 文档。Gateway 作为 Kubernetes 的扩展提供比
+* 阅读 [Gateway](/zh-cn/docs/concepts/services-networking/gateway/) 文档。Gateway 作为 Kubernetes 的扩展提供比
Ingress 更高的灵活性。
-当 Pod 分派到某个节点上时,`emptyDir` 卷会被创建,并且在 Pod 在该节点上运行期间,卷一直存在。
-就像其名称表示的那样,卷最初是空的。
-尽管 Pod 中的容器挂载 `emptyDir` 卷的路径可能相同也可能不同,这些容器都可以读写
-`emptyDir` 卷中相同的文件。
+对于定义了 `emptyDir` 卷的 Pod,在 Pod 被指派到某节点时此卷会被创建。
+就像其名称所表示的那样,`emptyDir` 卷最初是空的。尽管 Pod 中的容器挂载 `emptyDir`
+卷的路径可能相同也可能不同,但这些容器都可以读写 `emptyDir` 卷中相同的文件。
当 Pod 因为某些原因被从节点上删除时,`emptyDir` 卷中的数据也会被永久删除。
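+
+下面是一个仅作示意的 Pod 片段,演示 `emptyDir` 卷的基本用法;
+其中的镜像、挂载路径与 `sizeLimit` 取值均为假设值:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  containers:
+  - name: app
+    image: busybox:1.36      # 假设的镜像
+    command: ['sh', '-c', 'sleep 3600']
+    volumeMounts:
+    - name: cache
+      mountPath: /cache      # 假设的挂载路径
+  volumes:
+  - name: cache
+    emptyDir:
+      sizeLimit: 500Mi       # 可选:限制此卷可使用的存储量
+```
+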
{{< note >}}
@@ -632,10 +630,10 @@ Dynamic provisioning is possible using a
[StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce-pd).
Before creating a PersistentVolume, you must create the persistent disk:
-->
-#### 手动供应基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv}
+#### 手动制备基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv}
使用[为 GCE PD 定义的存储类](/zh-cn/docs/concepts/storage/storage-classes/#gce-pd)
-可以实现动态供应。在创建 PersistentVolume 之前,你首先要创建 PD。
+可以实现动态制备。在创建 PersistentVolume 之前,你首先要创建 PD。
```shell
gcloud compute disks create --size=500GB my-data-disk
@@ -648,6 +646,9 @@ gcloud compute disks create --size=500GB my-data-disk
-->
#### 区域持久盘配置示例
+
```yaml
apiVersion: v1
kind: PersistentVolume
@@ -770,7 +771,7 @@ and then removed entirely in the v1.26 release.
Kubernetes {{< skew currentVersion >}} 不包含 `glusterfs` 卷类型。
GlusterFS 树内存储驱动程序在 Kubernetes v1.25 版本中被弃用,然后在 v1.26 版本中被完全移除。
-
+
### hostPath {#hostpath}
{{< warning >}}
@@ -872,6 +873,10 @@ Watch out when using this type of volume, because:
-->
#### hostPath 配置示例
+
```yaml
apiVersion: v1
kind: Pod
@@ -887,7 +892,7 @@ spec:
volumes:
- name: test-volume
hostPath:
- # 宿主上目录位置
+ # 宿主机上目录位置
path: /data
# 此字段为可选
type: Directory
@@ -903,7 +908,7 @@ you can try to mount directories and files separately, as shown in the
`FileOrCreate` 模式不会负责创建文件的父目录。
如果欲挂载的文件的父目录不存在,Pod 启动会失败。
为了确保这种模式能够工作,可以尝试把文件和它对应的目录分开挂载,如
-[`FileOrCreate` 配置](#hostpath-fileorcreate-example) 所示。
+[`FileOrCreate` 配置](#hostpath-fileorcreate-example)所示。
{{< /caution >}}
#### hostPath FileOrCreate 配置示例 {#hostpath-fileorcreate-example}
+
```yaml
apiVersion: v1
kind: Pod
@@ -1191,6 +1199,9 @@ Here is an example Pod referencing a pre-provisioned Portworx volume:
`portworxVolume` 类型的卷可以通过 Kubernetes 动态创建,也可以预先配备并在 Pod 内引用。
下面是一个引用预先配备的 Portworx 卷的示例 Pod:
+
```yaml
apiVersion: v1
kind: Pod
@@ -1253,7 +1264,7 @@ To enable the feature, set `CSIMigrationPortworx=true` in kube-controller-manage
A projected volume maps several existing volume sources into the same
directory. For more details, see [projected volumes](/docs/concepts/storage/projected-volumes/).
-->
-### projected (投射) {#projected}
+### 投射(projected) {#projected}
投射卷能将若干现有的卷来源映射到同一目录上。更多详情请参考[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)。
@@ -1354,7 +1365,7 @@ RBD CSI driver:
* 你必须在集群中安装 v3.5.0 或更高版本的 Ceph CSI 驱动(`rbd.csi.ceph.com`)。
* 因为 `clusterID` 是 CSI 驱动程序必需的参数,而树内存储类又将 `monitors`
作为一个必需的参数,所以 Kubernetes 存储管理者需要根据 `monitors`
- 的哈希值(例:`#echo -n '' | md5sum`)来创建
+ 的哈希值(例如:`#echo -n '' | md5sum`)来创建
`clusterID`,并保持该 `monitors` 存在于该 `clusterID` 的配置中。
* 同时,如果树内存储类的 `adminId` 的值不是 `admin`,那么其 `adminSecretName`
就需要被修改成 `adminId` 参数的 base64 编码值。
@@ -1427,7 +1438,6 @@ For more information, see the [vSphere volume](https://github.com/kubernetes/exa
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
-
在这个示例中,`Pod` 使用 `subPathExpr` 在 `hostPath` 卷 `/var/log/pods` 中创建目录 `pod1`。
`hostPath` 卷采用来自 `downwardAPI` 的 Pod 名称生成目录名。
-宿主目录 `/var/log/pods/pod1` 被挂载到容器的 `/logs` 中。
+宿主机目录 `/var/log/pods/pod1` 被挂载到容器的 `/logs` 中。
+
```yaml
apiVersion: v1
kind: Pod
@@ -1652,8 +1665,8 @@ to the [volume plugin FAQ](https://github.com/kubernetes/community/blob/master/s
-->
CSI 和 FlexVolume 都允许独立于 Kubernetes 代码库开发卷插件,并作为扩展部署(安装)在 Kubernetes 集群上。
-对于希望创建树外(Out-Of-Tree)卷插件的存储供应商,请参考
-[卷插件常见问题](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)。
+对于希望创建树外(Out-Of-Tree)卷插件的存储供应商,
+请参考[卷插件常见问题](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)。
### CSI
@@ -1778,7 +1791,7 @@ persistent volume:
该映射必须与 CSI 驱动程序返回的 `CreateVolumeResponse` 中的 `volume.attributes`
字段的映射相对应;
[CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume)中有相应的定义。
- 该映射通过`ControllerPublishVolumeRequest`、`NodeStageVolumeRequest`、和
+ 该映射通过 `ControllerPublishVolumeRequest`、`NodeStageVolumeRequest` 和
`NodePublishVolumeRequest` 中的 `volume_context` 字段传递给 CSI 驱动。
CSI 节点插件需要执行多种特权操作,例如扫描磁盘设备和挂载文件系统等。
-这些操作在每个宿主操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI
+这些操作在每个宿主机操作系统上都是不同的。对于 Linux 工作节点而言,容器化的 CSI
节点插件通常部署为特权容器。对于 Windows 工作节点而言,容器化 CSI
节点插件的特权操作是通过 [csi-proxy](https://github.com/kubernetes-csi/csi-proxy)
来支持的。csi-proxy 是一个由社区管理的、独立的可执行二进制文件,
@@ -1986,7 +1999,7 @@ The following FlexVolume [plugins](https://github.com/Microsoft/K8s-Storage-Plug
deployed as PowerShell scripts on the host, support Windows nodes:
-->
下面的 FlexVolume [插件](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows)
-以 PowerShell 脚本的形式部署在宿主系统上,支持 Windows 节点:
+以 PowerShell 脚本的形式部署在宿主机系统上,支持 Windows 节点:
* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd)
* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd)
@@ -2034,17 +2047,15 @@ in `Container.volumeMounts`. Its values are:
cri-dockerd (Docker) is known to choose `rslave` mount propagation when the
mount source contains the Docker daemon's root directory (`/var/lib/docker`).
-->
-
* `None` - 此卷挂载将不会感知到主机后续在此卷或其任何子目录上执行的挂载变化。
- 类似的,容器所创建的卷挂载在主机上是不可见的。这是默认模式。
+ 类似的,容器所创建的卷挂载在主机上是不可见的。这是默认模式。
- 该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html)中描述的
- `rprivate` 挂载传播选项。
+ 该模式等同于 [`mount(8)`](https://man7.org/linux/man-pages/man8/mount.8.html) 中描述的
+ `rprivate` 挂载传播选项。
- 然而,当 `rprivate` 传播选项不适用时,CRI 运行时可以转为选择 `rslave` 挂载传播选项
- (即 `HostToContainer`)。当挂载源包含 Docker 守护进程的根目录(`/var/lib/docker`)时,
- cri-dockerd (Docker) 已知可以选择 `rslave` 挂载传播选项。
- 。
+ 然而,当 `rprivate` 传播选项不适用时,CRI 运行时可以转为选择 `rslave` 挂载传播选项
+ (即 `HostToContainer`)。当挂载源包含 Docker 守护进程的根目录(`/var/lib/docker`)时,
+ cri-dockerd (Docker) 已知可以选择 `rslave` 挂载传播选项。本节末尾给出了一个设置挂载传播字段的示意配置。
-### 配置 {#configuration}
-
-在某些部署环境中,挂载传播正常工作前,必须在 Docker 中正确配置挂载共享(mount share),如下所示。
-
-
-编辑你的 Docker `systemd` 服务文件,按下面的方法设置 `MountFlags`:
-
-```shell
-MountFlags=shared
-```
-
-
-或者,如果存在 `MountFlags=slave` 就删除掉。然后重启 Docker 守护进程:
-
-```shell
-sudo systemctl daemon-reload
-sudo systemctl restart docker
-```
-
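+
+下面是一个仅作示意的 Pod 清单,演示如何在容器的 `volumeMounts` 中设置上文介绍的
+`mountPropagation` 字段;其中的镜像与路径均为假设值:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mount-propagation-demo
+spec:
+  containers:
+  - name: demo
+    image: busybox:1.36   # 假设的镜像
+    command: ['sh', '-c', 'sleep 3600']
+    volumeMounts:
+    - name: host-dir
+      mountPath: /mnt/host
+      # HostToContainer:容器可以看到主机随后在此卷或其子目录上执行的挂载
+      mountPropagation: HostToContainer
+  volumes:
+  - name: host-dir
+    hostPath:
+      path: /mnt/data     # 假设的宿主机目录
+```
+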
## {{% heading "whatsnext" %}}
-DaemonSet 确保所有符合条件的节点都运行该 Pod 的一个副本。
+DaemonSet 可用于确保所有符合条件的节点都运行该 Pod 的一个副本。
DaemonSet 控制器为每个符合条件的节点创建一个 Pod,并添加 Pod 的 `spec.affinity.nodeAffinity`
字段以匹配目标主机。Pod 被创建之后,默认的调度程序通常通过设置 `.spec.nodeName` 字段来接管 Pod 并将
Pod 绑定到目标主机。如果新的 Pod 无法放在节点上,则默认的调度程序可能会根据新 Pod
的[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)抢占
(驱逐)某些现存的 Pod。
+{{< note >}}
+
+当 DaemonSet 中的 Pod 必须运行在每个节点上时,通常需要将 DaemonSet
+的 `.spec.template.spec.priorityClassName` 设置为具有更高优先级的
+[PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass),
+以确保可以完成驱逐。
+{{< /note >}}
+
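+下面是一个仅作示意的 DaemonSet 片段,演示如何设置 `.spec.template.spec.priorityClassName`;
+清单中的名称、标签、镜像以及 `high-priority` 这一 PriorityClass 名称均为假设值
+(假定该 PriorityClass 已在集群中创建):
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: node-agent
+spec:
+  selector:
+    matchLabels:
+      name: node-agent
+  template:
+    metadata:
+      labels:
+        name: node-agent
+    spec:
+      # 使用优先级较高的 PriorityClass,确保必要时能够抢占(驱逐)其他 Pod
+      priorityClassName: high-priority
+      containers:
+      - name: agent
+        image: example.com/node-agent:1.0   # 假设的镜像
+```
+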
为了防范头部信息侦听,在请求中的头部字段被检视之前,
身份认证代理需要向 API 服务器提供一份合法的客户端证书,供后者使用所给的 CA 来执行验证。
-警告:**不要** 在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及应如何保护
+警告:**不要**在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及应如何保护
CA 用法的机制。
* `--requestheader-client-ca-file` 必需字段,给出 PEM 编码的证书包。
@@ -1172,11 +1172,11 @@ to the impersonated user info.
带伪装的请求首先会被身份认证识别为发出请求的用户,
之后会切换到使用被伪装的用户的用户信息。
-* 用户发起 API 调用时 **同时** 提供自身的凭据和伪装头部字段信息
-* API 服务器对用户执行身份认证
-* API 服务器确认通过认证的用户具有伪装特权
-* 请求用户的信息被替换成伪装字段的值
-* 评估请求,鉴权组件针对所伪装的用户信息执行操作
+* 用户发起 API 调用时**同时**提供自身的凭据和伪装头部字段信息。
+* API 服务器对用户执行身份认证。
+* API 服务器确认通过认证的用户具有伪装特权。
+* 请求用户的信息被替换成伪装字段的值。
+* 评估请求,鉴权组件针对所伪装的用户信息执行操作。
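+
+下面是一个仅作示意的请求示例,展示上述流程中所使用的伪装头部字段;
+其中的 API 服务器地址、令牌、用户名与组名均为假设值:
+
+```shell
+curl https://<API 服务器地址>/api/v1/namespaces/default/pods \
+  --header "Authorization: Bearer <发起请求用户自己的令牌>" \
+  --header "Impersonate-User: jane.doe@example.com" \
+  --header "Impersonate-Group: developers"
+```
+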
若要伪装成某个用户、某个组、用户标识符(UID)或者设置附加字段,
-执行伪装操作的用户必须具有对所伪装的类别(“user”、“group”、“uid” 等)执行 “impersonate”
+执行伪装操作的用户必须具有对所伪装的类别(`user`、`group`、`uid` 等)执行 `impersonate`
动词操作的能力。
对于启用了 RBAC 鉴权插件的集群,下面的 ClusterRole 封装了设置用户和组伪装字段所需的规则:
@@ -1706,7 +1706,7 @@ users:
provideClusterInfo: true
# Exec 插件与标准输入 I/O 数据流之间的协议。如果协议无法满足,
- # 则插件无法运行并会返回错误信息。合法的值包括 "Never" (Exec 插件从不使用标准输入),
+ # 则插件无法运行并会返回错误信息。合法的值包括 "Never"(Exec 插件从不使用标准输入),
# "IfAvailable" (Exec 插件希望在可以的情况下使用标准输入),
# 或者 "Always" (Exec 插件需要使用标准输入才能工作)。可选字段。
# 默认值为 "IfAvailable"。
@@ -1853,7 +1853,7 @@ If specified, `clientKeyData` and `clientCertificateData` must both must be pres
如果插件在后续调用中返回了不同的证书或密钥,`k8s.io/client-go`
会终止其与服务器的连接,从而强制执行新的 TLS 握手过程。
-如果指定了这种方式,则 `clientKeyData` 和 `clientCertificateData` 字段都必需存在。
+如果指定了这种方式,则 `clientKeyData` 和 `clientCertificateData` 字段都必须存在。
`clientCertificateData` 字段可能包含一些要发送给服务器的中间证书(Intermediate
Certificates)。
@@ -1996,7 +1996,7 @@ The following `ExecCredential` manifest describes a cluster information sample.
-->
## 为客户端提供的对身份验证信息的 API 访问 {#self-subject-review}
-{{< feature-state for_k8s_version="v1.27" state="beta" >}}
+{{< feature-state for_k8s_version="v1.28" state="stable" >}}
如果你在 Kubernetes 集群中使用复杂的身份验证流程,例如当你使用
-[Webhook 令牌身份验证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)或[身份验证代理](/zh-cn/docs/reference/access-authn-authz/authentication/#authenticating-proxy)时,
+[Webhook 令牌身份验证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)或
+[身份验证代理](/zh-cn/docs/reference/access-authn-authz/authentication/#authenticating-proxy)时,
此特性极其有用。
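+
+例如,你可以通过下面的命令查看 API 服务器所识别出的你自己的用户属性;
+该命令大体上相当于代表你创建一个 `SelfSubjectReview` 对象,实际输出取决于集群所启用的身份认证机制:
+
+```shell
+kubectl auth whoami
+```
+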
{{< note >}}
@@ -2162,7 +2164,8 @@ Kubernetes API 服务器在所有身份验证机制
{{< /note >}}
默认情况下,所有经过身份验证的用户都可以在 `APISelfSubjectReview` 特性被启用时创建 `SelfSubjectReview` 对象。
这是 `system:basic-user` 集群角色允许的操作。
@@ -2172,17 +2175,24 @@ By default, all authenticated users can create `SelfSubjectReview` objects when
You can only make `SelfSubjectReview` requests if:
* the `APISelfSubjectReview`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
- is enabled for your cluster (enabled by default after reaching Beta)
+ is enabled for your cluster (not needed for Kubernetes {{< skew currentVersion >}}, but older
+ Kubernetes versions might not offer this feature gate, or might default it to be off)
+* (if you are running a version of Kubernetes older than v1.28) the API server for your
+ cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
* the API server for your cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
{{< glossary_tooltip term_id="api-group" text="API group" >}}
enabled.
-->
你只能在以下情况下进行 `SelfSubjectReview` 请求:
-* 集群启用了 `APISelfSubjectReview` (Beta 版本默认启用)
+* 集群启用了 `APISelfSubjectReview`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
+ (Kubernetes {{< skew currentVersion >}} 不需要,但较旧的 Kubernetes 版本可能没有此特性门控,
+ 或者默认为关闭状态)。
+* (如果你运行的 Kubernetes 版本早于 v1.28 版本)集群的 API 服务器包含
+ `authentication.k8s.io/v1alpha1` 或 `authentication.k8s.io/v1beta1` API 组。
* 集群的 API 服务器已启用 `authentication.k8s.io/v1alpha1` 或者 `authentication.k8s.io/v1beta1`
- {{< glossary_tooltip term_id="api-group" text="API 组" >}}。。
+ {{< glossary_tooltip term_id="api-group" text="API 组" >}}。
{{< /note >}}
## {{% heading "whatsnext" %}}
@@ -2191,6 +2201,5 @@ You can only make `SelfSubjectReview` requests if:
* Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/)
-->
-* 阅读[客户端认证参考文档 (v1beta1)](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/)
-* 阅读[客户端认证参考文档 (v1)](/zh-cn/docs/reference/config-api/client-authentication.v1/)
-
+* 阅读[客户端认证参考文档(v1beta1)](/zh-cn/docs/reference/config-api/client-authentication.v1beta1/)。
+* 阅读[客户端认证参考文档(v1)](/zh-cn/docs/reference/config-api/client-authentication.v1/)。
diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md
index 5ae43d503b..72e4f5bdba 100644
--- a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md
@@ -125,18 +125,18 @@ For a reference to old feature gates that are removed, please refer to
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
| `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | |
-| CRDValidationRatcheting | false | Alpha | 1.28 |
+| `CRDValidationRatcheting` | `false` | Alpha | 1.28 | |
| `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 |
| `CSIMigrationPortworx` | `false` | Beta | 1.25 | |
| `CSINodeExpandSecret` | `false` | Alpha | 1.25 | 1.26 |
| `CSINodeExpandSecret` | `true` | Beta | 1.27 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
-| `CloudControllerManagerWebhook` | false | Alpha | 1.27 | |
-| `CloudDualStackNodeIPs` | false | Alpha | 1.27 | |
-| `ClusterTrustBundle` | false | Alpha | 1.27 | |
+| `CloudControllerManagerWebhook` | `false` | Alpha | 1.27 | |
+| `CloudDualStackNodeIPs` | `false` | Alpha | 1.27 | |
+| `ClusterTrustBundle` | `false` | Alpha | 1.27 | |
| `ComponentSLIs` | `false` | Alpha | 1.26 | 1.26 |
| `ComponentSLIs` | `true` | Beta | 1.27 | |
-| `ConsistentListFromCache` | `false` | Alpha | 1.28 |
+| `ConsistentListFromCache` | `false` | Alpha | 1.28 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
| `CronJobsScheduledAnnotation` | `true` | Beta | 1.28 | |
@@ -148,9 +148,9 @@ For a reference to old feature gates that are removed, please refer to
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
-| `ElasticIndexedJob` | `true` | Beta` | 1.27 | |
+| `ElasticIndexedJob` | `true` | Beta | 1.27 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | 1.26 |
-| `EventedPLEG` | `false` | Beta | 1.27 | - |
+| `EventedPLEG` | `false` | Beta | 1.27 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | 1.23 |
@@ -263,7 +263,7 @@ For a reference to old feature gates that are removed, please refer to
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 |
| `ValidatingAdmissionPolicy` | `false` | Beta | 1.28 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | |
-| `WatchList` | false | Alpha | 1.27 | |
+| `WatchList` | `false` | Alpha | 1.27 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
@@ -421,7 +421,8 @@ A *Beta* feature means:
**Beta** 特性代表:
-- `ContainerCheckpoint`:启用 kubelet `checkpoint` API。
- 参阅 [Kubelet Checkpoint API](/zh-cn/docs/reference/node/kubelet-checkpoint-api/) 获取更多详细信息。
-- `ControllerManagerLeaderMigration`:为
- [kube-controller-manager](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) 和
- [cloud-controller-manager](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
- 启用 Leader 迁移,它允许集群管理者在没有停机的高可用集群环境下,实时把 kube-controller-manager
- 迁移到外部的 controller-manager (例如 cloud-controller-manager) 中。
-
-- `CSIInlineVolume`:为 Pod 启用 CSI 内联卷支持。
-- `CSIMigration`:确保封装和转换逻辑能够将卷操作从内嵌插件路由到相应的预安装 CSI 插件。
-
-- `CSIMigrationAWS`:确保填充和转换逻辑能够将卷操作从 AWS-EBS 内嵌插件路由到 EBS CSI 插件。
- 如果节点禁用了此特性门控或者未安装和配置 EBS CSI 插件,支持回退到内嵌 EBS 插件来执行卷挂载操作。
- 不支持回退到这些插件来执行卷制备操作,因为需要安装并配置 CSI 插件。
-
-- `CSIMigrationAzureDisk`:确保填充和转换逻辑能够将卷操作从 AzureDisk 内嵌插件路由到
- Azure 磁盘 CSI 插件。对于禁用了此特性的节点或者没有安装并配置 AzureDisk CSI
- 插件的节点,支持回退到内嵌(in-tree)AzureDisk 插件来执行磁盘挂载操作。
- 不支持回退到内嵌插件来执行磁盘制备操作,因为对应的 CSI 插件必须已安装且正确配置。
- 此特性需要启用 CSIMigration 特性标志。
-
- `CloudControllerManagerWebhook`:启用在云控制器管理器中的 Webhook。
- `CloudDualStackNodeIPs`:允许在外部云驱动中通过 `kubelet --node-ip` 设置双协议栈。
- 有关详细信息,请参阅[配置 IPv4/IPv6 双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)。
+ 有关详细信息,请参阅[配置 IPv4/IPv6 双协议栈](/zh-cn/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)。
- `ClusterTrustBundle`:启用 ClusterTrustBundle 对象和 kubelet 集成。
- `ComponentSLIs`: 在 kubelet、kube-scheduler、kube-proxy、kube-controller-manager、cloud-controller-manager
等 Kubernetes 组件上启用 `/metrics/slis` 端点,从而允许你抓取健康检查指标。
@@ -684,12 +636,13 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CronJobTimeZone`:允许在 [CronJobs](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)
中使用 `timeZone` 可选字段。
- `DisableCloudProviders`:禁用 `kube-apiserver`,`kube-controller-manager` 和
`kubelet` 组件的 `--cloud-provider` 标志相关的所有功能。
@@ -742,9 +694,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `DownwardAPIHugePages`:
允许在[下行(Downward)API](/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information)
中使用巨页信息。
-- `DynamicResourceAllocation`:启用对具有自定义参数和生命周期的资源的支持。
-- `EphemeralContainers`:启用添加
- {{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}
- 到正在运行的 Pod 的特性。
- `EventedPLEG`:启用此特性后,kubelet 能够通过 CRI
扩展从{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}接收容器生命周期事件。
(PLEG 是 `Pod lifecycle event generator` 的缩写,即 Pod 生命周期事件生成器)。
@@ -789,25 +734,15 @@ Each feature gate is designed for enabling/disabling a specific feature:
该缺陷导致 Kubernetes 会忽略 exec 探针的超时值设置。
参阅[就绪态探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)。
-- `ExpandCSIVolumes`:启用扩展 CSI 卷。
- `ExpandedDNSConfig`:在 kubelet 和 kube-apiserver 上启用后,
允许使用更多的 DNS 搜索域和搜索域列表。此功能特性需要容器运行时
- (Containerd:v1.5.6 或更高,CRI-O:v1.22 或更高)的支持。
+ (containerd v1.5.6 或更高,CRI-O v1.22 或更高)的支持。
参阅[扩展 DNS 配置](/zh-cn/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration)。
-- `ExpandInUsePersistentVolumes`:启用扩充使用中的 PVC 的尺寸。
- 请查阅[调整使用中的 PersistentVolumeClaim 的大小](/zh-cn/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)。
-- `ExpandPersistentVolumes`:允许扩充持久卷。
- 请查阅[扩展持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)。
- `GracefulNodeShutdownBasedOnPodPriority`:允许 kubelet 在体面终止节点时检查
Pod 的优先级。
-- `GRPCContainerProbe`:为 LivenessProbe、ReadinessProbe、StartupProbe 启用 gRPC 探针。
+- `GRPCContainerProbe`:为活跃态、就绪态和启动探针启用 gRPC 探针。
参阅[配置活跃态、就绪态和启动探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)。
- `HonorPVReclaimPolicy`:无论 PV 和 PVC 的删除顺序如何,当持久卷申领的策略为 `Delete`
时,确保这种策略得到处理。
@@ -860,10 +795,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `IPTablesOwnershipCleanup`:这使得 kubelet 不再创建传统的 iptables 规则。
- `InPlacePodVerticalScaling`:启用就地 Pod 垂直扩缩。
-- `IdentifyPodOS`:允许设置 Pod 的 OS 字段。这一设置有助于在 API 服务器准入期间确定性地辨识
- Pod 的 OS。在 Kubernetes {{< skew currentVersion >}} 中,`pod.spec.os.name` 可选的值包括
- `windows` 和 `linux`。
-- `ImmutableEphemeralVolumes`:允许将各个 Secret 和 ConfigMap 标记为不可变更的,
- 以提高安全性和性能。
-- `IngressClassNamespacedParams`:允许在 `IngressClass` 资源中使用名字空间范围的参数引用。
- 此功能为 `IngressClass.spec.parameters` 添加了两个字段 - `scope` 和 `namespace`。
-- `Initializers`:允许使用 Intializers 准入插件来异步协调对象创建操作。
- `InTreePluginAWSUnregister`:在 kubelet 和卷控制器上关闭注册 aws-ebs 内嵌插件。
- `InTreePluginAzureDiskUnregister`:在 kubelet 和卷控制器上关闭注册 azuredisk 内嵌插件。
- `InTreePluginAzureFileUnregister`:在 kubelet 和卷控制器上关闭注册 azurefile 内嵌插件。
@@ -899,68 +822,62 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `InTreePluginvSphereUnregister`:在 kubelet 和卷控制器上关闭注册 vSphere 内嵌插件。
-- `IndexedJob`:允许 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
- 控制器根据完成索引来管理 Pod 完成。
-- `IngressClassNamespacedParams`:允许在 `IngressClass` 资源中引用名字空间范围的参数。
- 该特性增加了两个字段 —— `scope`、`namespace` 到 `IngressClass.spec.parameters`。
-- `Initializers`: 使用 Initializers 准入插件允许异步协调对象创建。
-- `JobMutableNodeSchedulingDirectives`:允许在 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
+- `JobMutableNodeSchedulingDirectives`:允许在 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
的 Pod 模板中更新节点调度指令。
- `JobBackoffLimitPerIndex`:允许在索引作业中指定每个索引的最大 Pod 重试次数。
- `JobPodFailurePolicy`:允许用户根据容器退出码和 Pod 状况来指定 Pod 失效的处理方法。
-- `JobPodReplacementPolicy`:允许你在 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
+- `JobPodReplacementPolicy`:允许你在 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
中为终止的 Pod 指定替代 Pod。
- `JobReadyPods`:允许跟踪[状况](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)为
`Ready` 的 Pod 的个数。`Ready` 的 Pod 记录在
- [Job](/zh-cn/docs/concepts/workloads/controllers/job) 对象的
+ [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 对象的
[status](/zh-cn/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus) 字段中。
-- `JobTrackingWithFinalizers`:启用跟踪 [Job](/zh-cn/docs/concepts/workloads/controllers/job)
+- `JobTrackingWithFinalizers`:启用跟踪 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
完成情况,而不是永远从集群剩余 Pod 来获取信息判断完成情况。Job 控制器使用
Pod finalizers 和 Job 状态中的一个字段来跟踪已完成的 Pod 以计算完成。
- `KMSv1`:启用 KMS v1 API 以进行数据静态加密。
- 详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
+ 详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
- `KMSv2`:启用 KMS v2 API 以实现静态加密。
- 详情参见[使用 KMS 驱动进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
+ 详情参见[使用 KMS 驱动进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
- `KMSv2KDF`:启用 KMS v2 以生成一次性数据加密密钥。
- 详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
+ 详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider/)。
如果你的集群未启用 `KMSv2` 特性门控,则 `KMSv2KDF` 特性门控的值不会产生任何影响。
- `KubeProxyDrainingTerminatingNodes`:为 `externalTrafficPolicy: Cluster` 的服务实现正在终止的节点上的连接排空。
- `KubeletCgroupDriverFromCRI`:启用检测来自 CRI
@@ -981,7 +898,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
也可以在支持 `RuntimeConfig` CRI 调用的 CRI 容器运行时所在节点上使用此特性门控。
如果 CRI 和 kubelet 都支持此特性,kubelet 将忽略 `cgroupDriver` 配置设置(或已弃用的 `--cgroup-driver` 命令行参数)。
如果你启用此特性门控但容器运行时不支持它,则 kubelet 将回退到使用通过 `cgroupDriver` 配置设置进行配置的驱动。
- 详情参见[配置 cgroup 驱动](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver)。
+ 详情参见[配置 cgroup 驱动](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)。
- `KubeletPodResources`:启用 kubelet 上 Pod 资源 GRPC 端点。更多详细信息,
请参见[支持设备监控](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)。
@@ -1007,15 +924,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
该 API 增强了[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
包含有关可分配资源的信息,使客户端能够正确跟踪节点上的可用计算资源。
-- `KubeletPodResourcesDynamicResources`:扩展 kubelet 的 pod 资源 gRPC 端点以包括通过 `DynamicResourceAllocation` API 在 `ResourceClaims` 中分配的资源。
- 有关详细信息,请参阅[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)。
+- `KubeletPodResourcesDynamicResources`:扩展 kubelet 的 Pod 资源 gRPC 端点以包括通过
+ `DynamicResourceAllocation` API 在 `ResourceClaims` 中分配的资源。
+ 有关详细信息,请参阅[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)。
- `KubeletTracing`:新增在 Kubelet 中对分布式追踪的支持。
启用时,kubelet CRI 接口和经身份验证的 http 服务器被插桩以生成 OpenTelemetry 追踪 span。
参阅[针对 Kubernetes 系统组件的追踪](/zh-cn/docs/concepts/cluster-administration/system-traces/)
@@ -1037,10 +956,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `LegacyServiceAccountTokenTracking`:跟踪使用基于 Secret
的[服务账号令牌](/zh-cn/docs/concepts/security/service-accounts/#get-a-token)。
-- `LocalStorageCapacityIsolation`:允许使用
- [本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/)
- 以及 [emptyDir 卷](/zh-cn/docs/concepts/storage/volumes/#emptydir)的 `sizeLimit` 属性。
- `LocalStorageCapacityIsolationFSQuotaMonitoring`:如果
[本地临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/)启用了
`LocalStorageCapacityIsolation`,并且
@@ -1132,16 +1044,17 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NodeOutOfServiceVolumeDetach`:当使用 `node.kubernetes.io/out-of-service`
@@ -1158,11 +1071,13 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `PersistentVolumeLastPhaseTransitionTime`:为 PersistentVolume 添加一个新字段,用于保存卷上一次发生阶段转换的时间戳。
- `PodAndContainerStatsFromCRI`:配置 kubelet 从 CRI 容器运行时中而不是从 cAdvisor 中采集容器和 Pod 统计信息。
@@ -1173,7 +1088,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
@@ -1186,9 +1104,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
[PodReadyToStartContainers](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-has-network) 状况。
此前(1.25-1.27 版本)称为 `PodHasNetworkCondition`。
-- `PodSchedulingReadiness`:启用设置 `schedulingGates` 字段以控制 Pod 的[调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness)。
+- `PodSchedulingReadiness`:启用设置 `schedulingGates` 字段以控制 Pod 的[调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)。
+- `SeccompDefault`:启用 `RuntimeDefault` 作为所有工作负载的默认 seccomp 配置文件。
+ 此 seccomp 配置文件在 Pod 和/或 Container 的 `securityContext` 中被指定。
- `SecurityContextDeny`: 此门控表示 `SecurityContextDeny` 准入控制器已弃用。
- `ServerSideApply`:在 API 服务器上启用[服务器端应用(SSA)](/zh-cn/docs/reference/using-api/server-side-apply/)。
- `ServerSideFieldValidation`:启用服务器端字段验证。
@@ -1316,9 +1239,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `StorageVersionHash`:允许 API 服务器在版本发现中公开存储版本的哈希值。
diff --git a/content/zh-cn/docs/reference/kubectl/kubectl.md b/content/zh-cn/docs/reference/kubectl/kubectl.md
index bf2cce0d37..a4b4da943b 100644
--- a/content/zh-cn/docs/reference/kubectl/kubectl.md
+++ b/content/zh-cn/docs/reference/kubectl/kubectl.md
@@ -89,7 +89,7 @@ kubectl [flags]