Merge remote-tracking branch 'upstream/main' into dev-1.25

This commit is contained in:
Seth McCombs 2022-07-21 13:48:12 -07:00
commit 88784d31b8
135 changed files with 14957 additions and 2026 deletions

View File

@ -200,6 +200,7 @@ aliases:
- devlware
- jhonmike
- rikatz
- stormqueen1990
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem

View File

@ -257,6 +257,51 @@ This works for Catalina as well as Mojave macOS.
-->
This works for Catalina as well as Mojave macOS.
### Troubleshooting access timeouts in some regions when running make container-image
The symptoms look like this:
```shell
langs/language.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
langs/language.go:24:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:21:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:22:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
hugolib/integrationtest_builder.go:29:2: golang.org/x/tools@v0.1.11: Get "https://proxy.golang.org/golang.org/x/tools/@v/v0.1.11.zip": dial tcp 142.251.42.241:443: i/o timeout
deploy/google.go:24:2: google.golang.org/api@v0.76.0: Get "https://proxy.golang.org/google.golang.org/api/@v/v0.76.0.zip": dial tcp 142.251.43.17:443: i/o timeout
parser/metadecoders/decoder.go:32:2: gopkg.in/yaml.v2@v2.4.0: Get "https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.4.0.zip": dial tcp 142.251.42.241:443: i/o timeout
The command '/bin/sh -c mkdir $HOME/src && cd $HOME/src && curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz && cd "hugo-${HUGO_VERSION}" && go install --tags extended' returned a non-zero code: 1
make: *** [Makefile:69: container-image] Error 1
```
Modify the `Dockerfile` to add a network proxy for it. The changes are as follows:
```dockerfile
...
FROM golang:1.18-alpine
LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>"
ENV GO111MODULE=on # line to add (1)
ENV GOPROXY=https://proxy.golang.org,direct # line to add (2)
RUN apk add --no-cache \
curl \
gcc \
g++ \
musl-dev \
build-base \
libc6-compat
ARG HUGO_VERSION
...
```
将 "https://proxy.golang.org" 替换为本地可以使用的代理地址。
**注意:** 此部分仅适用于中国大陆
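For instance (an assumption, not part of the original `Dockerfile`): a mirror such as `goproxy.cn` is commonly reachable from mainland China, so the two added lines could look like this:
```dockerfile
# Enable Go modules and route module downloads through a reachable mirror.
ENV GO111MODULE=on
ENV GOPROXY=https://goproxy.cn,direct
```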
<!--
## Get involved with SIG Docs

View File

@ -21,15 +21,15 @@ The add-ons in each category are sorted alphabetically - the order
* [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking and network security with Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a range of networking options so you can choose the right one for your use case. This includes non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policies for hosts, pods, and (if you use Istio & Envoy) applications at the service mesh layer.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico to provide networking and network policies.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico to provide networking and network policies.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. Both routing and overlay/encapsulation modes are supported. In addition, Cilium can run on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a range of CNI plugins such as Calico, Canal, Flannel, Romana, or Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a range of CNI plugins such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 with BGP, overlay with vxlan, classic L2, Cisco-SDN/ACI) for a variety of use cases as well as a comprehensive policy framework. The Contiv project is fully [open source](http://github.com/contiv). The [installer](http://github.com/contiv/install) offers both kubeadm and non-kubeadm based installations.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open-source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestrators such as Kubernetes, OpenShift, OpenStack, and Mesos, and provide isolation modes for virtual machines, containers (or pods), and bare-metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that enables multiple networks in Kubernetes.
* Multus is a multi plugin for multi-network support in Kubernetes, supporting all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel) in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides an integration between VMware NSX-T and an orchestrator such as Kubernetes, as well as an integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a multi plugin for multi-network support in Kubernetes, supporting all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel) in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides an integration between VMware NSX-T and an orchestrator such as Kubernetes, as well as an integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes pods and non-Kubernetes environments, including visibility and security monitoring.
* [Romana](https://github.com/romana/romana) is a Layer 3 network solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Details on installing it as a kubeadm add-on are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policies, works on both sides of a network partition, and does not require an external database.

View File

@ -16,7 +16,7 @@ The `image` property of a container supports the same syntax as the
## Updating images
The default pull policy for images is `IfNotPresent`, which causes the kubelet to skip images that are already present on a node.
The default pull policy for images is `IfNotPresent`, which means the image is only downloaded if it is not yet available locally.
If you instead want an image to always be force-pulled, you can do the following:
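For example, a minimal sketch of a container spec that forces a pull on every Pod start (the image name is a placeholder):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    # Contact the registry on every start, even if the image is already on the node.
    imagePullPolicy: Always
```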

View File

@ -54,7 +54,7 @@ that are available to developers and users. Users can write their own controllers using
their [own APIs](/docs/concepts/api-extension/custom-resources/) that can be addressed by a
universal [command-line tool](/docs/user-guide/kubectl-overview/).
This [design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
This [design](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
## What Kubernetes is not

View File

@ -56,6 +56,6 @@ Officially supported client libraries:
## Design documentation
An archive of the design documents for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).
An archive of the design documents for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).

View File

@ -424,7 +424,7 @@ export no_proxy=$no_proxy,$(minikube ip)
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) to provision VMs, and [kubeadm](https://github.com/kubernetes/kubeadm) to bring up a Kubernetes cluster.
More information about Minikube can be found in the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
More information about Minikube can be found in the [proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/local-cluster-ux.md).
## Additional links

View File

@ -11,7 +11,7 @@ weight: 90
<!-- overview -->
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md), on other application-provided metrics). Note that horizontal pod autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/design-proposals-archive/instrumentation/custom-metrics-api.md), on other application-provided metrics). Note that horizontal pod autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
@ -46,7 +46,7 @@ Using metrics from Heapster has been deprecated since Kubernetes version 1.11.
See [Support for the metrics APIs](#unterstützung-der-metrik-apis) for more details.
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) through the scale subresource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details on the scale subresource can be found [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) through the scale subresource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details on the scale subresource can be found [here](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
### Algorithm details
@ -90,7 +90,7 @@ The current stable version, which only includes support for automatic scaling
The beta version, which supports scaling on memory and custom metrics, can be found under `autoscaling/v2beta2`. The fields newly introduced in `autoscaling/v2beta2` are preserved as annotations when working with `autoscaling/v1`.
More details about the API object can be found in the [HorizontalPodAutoscaler object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object) documentation.
More details about the API object can be found in the [HorizontalPodAutoscaler object](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object) documentation.
## Support for the Horizontal Pod Autoscaler in kubectl
@ -166,7 +166,7 @@ By default, the HorizontalPodAutoscaler controller retrieves metrics from a
## {{% heading "whatsnext" %}}
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
* Using the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
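As a quick illustration (a sketch; the `php-apache` Deployment name is only an example), an autoscaler can be created and inspected from the command line:
```shell
# Keep average CPU utilization around 50%, with between 1 and 10 replicas.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Check the current state of the HorizontalPodAutoscaler.
kubectl get hpa php-apache
```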

View File

@ -0,0 +1,178 @@
---
layout: blog
title: Kubernetes Gateway API Graduates to Beta
date: 2022-07-13
slug: gateway-api-graduates-to-beta
canonicalUrl: https://gateway-api.sigs.k8s.io/blog/2022/graduating-to-beta/
---
**Authors:** Shane Utt (Kong), Rob Scott (Google), Nick Young (VMware), Jeff Apple (HashiCorp)
We are excited to announce the v0.5.0 release of Gateway API. For the first
time, several of our most important Gateway API resources are graduating to
beta. Additionally, we are starting a new initiative to explore how Gateway API
can be used for mesh and introducing new experimental concepts such as URL
rewrites. We'll cover all of this and more below.
## What is Gateway API?
Gateway API is a collection of resources centered around [Gateway][gw] resources
(which represent the underlying network gateways / proxy servers) to enable
robust Kubernetes service networking through expressive, extensible and
role-oriented interfaces that are implemented by many vendors and have broad
industry support.
Originally conceived as a successor to the well known [Ingress][ing] API, the
benefits of Gateway API include (but are not limited to) explicit support for
many commonly used networking protocols (e.g. `HTTP`, `TLS`, `TCP`, `UDP`) as
well as tightly integrated support for Transport Layer Security (TLS). The
`Gateway` resource in particular enables implementations to manage the lifecycle
of network gateways as a Kubernetes API.
If you're an end-user interested in some of the benefits of Gateway API we
invite you to jump in and find an implementation that suits you. At the time of
this release there are over a dozen [implementations][impl] for popular API
gateways and service meshes and guides are available to start exploring quickly.
[gw]:https://gateway-api.sigs.k8s.io/api-types/gateway/
[ing]:https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Getting started
Gateway API is an official Kubernetes API like
[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
Gateway API represents a superset of Ingress functionality, enabling more
advanced concepts. Similar to Ingress, there is no default implementation of
Gateway API built into Kubernetes. Instead, there are many different
[implementations][impl] available, providing significant choice in terms of underlying
technologies while providing a consistent and portable experience.
Take a look at the [API concepts documentation][concepts] and check out some of
the [Guides][guides] to start familiarizing yourself with the APIs and how they
work. When you're ready for a practical application open the [implementations
page][impl] and select an implementation that belongs to an existing technology
you may already be familiar with or the one your cluster provider uses as a
default (if applicable). Gateway API is a [Custom Resource Definition
(CRD)][crd] based API so you'll need to [install the CRDs][install-crds] onto a
cluster to use the API.
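As a rough sketch, and assuming the v0.5.0 release ships a `standard-install.yaml` manifest for the standard channel, installing the CRDs can look like this:
```shell
# Install the Gateway API CRDs (standard release channel, v0.5.0).
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.5.0/standard-install.yaml

# Confirm the new API resources are registered.
kubectl get crd gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io
```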
If you're specifically interested in helping to contribute to Gateway API, we
would love to have you! Please feel free to [open a new issue][issue] on the
repository, or join in the [discussions][disc]. Also check out the [community
page][community] which includes links to the Slack channel and community meetings.
[crd]:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/
[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds
[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose
[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions
[community]:https://gateway-api.sigs.k8s.io/contributing/community/
## Release highlights
### Graduation to beta
The `v0.5.0` release is particularly historic because it marks the growth in
maturity to a beta API version (`v1beta1`) release for some of the key APIs:
- [GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/)
- [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/)
- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/)
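To give a feel for the beta surface, here is a minimal `HTTPRoute` sketch against the `v1beta1` API (the gateway, hostname, and backend names are placeholders):
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway     # attach this route to an existing Gateway
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - name: example-service   # route matching traffic to this Service
      port: 8080
```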
This achievement was marked by the completion of several graduation criteria:
- API has been [widely implemented][impl].
- Conformance tests provide basic coverage for all resources and have multiple implementations passing tests.
- Most of the API surface is actively being used.
- Kubernetes SIG Network API reviewers have approved graduation to beta.
For more information on Gateway API versioning, refer to the [official
documentation](https://gateway-api.sigs.k8s.io/concepts/versioning/). To see
what's in store for future releases check out the [next steps](#next-steps)
section.
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Release channels
This release introduces the `experimental` and `standard` [release channels][ch]
which enable a better balance of maintaining stability while still enabling
experimentation and iterative development.
The `standard` release channel includes:
- resources that have graduated to beta
- fields that have graduated to standard (no longer considered experimental)
The `experimental` release channel includes everything in the `standard` release
channel, plus:
- `alpha` API resources
- fields that are considered experimental and have not graduated to `standard` channel
Release channels are used internally to enable iterative development with
quick turnaround, and externally to indicate feature stability to implementors
and end-users.
For this release we've added the following experimental features:
- [Routes can attach to Gateways by specifying port numbers](https://gateway-api.sigs.k8s.io/geps/gep-957/)
- [URL rewrites and path redirects](https://gateway-api.sigs.k8s.io/geps/gep-726/)
[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard
### Other improvements
For an exhaustive list of changes included in the `v0.5.0` release, please see
the [v0.5.0 release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0).
## Gateway API for service mesh: the GAMMA Initiative
Some service mesh projects have [already implemented support for the Gateway
API](https://gateway-api.sigs.k8s.io/implementations/). Significant overlap
between the Service Mesh Interface (SMI) APIs and the Gateway API has [inspired
discussion in the SMI
community](https://github.com/servicemeshinterface/smi-spec/issues/249) about
possible integration.
We are pleased to announce that the service mesh community, including
representatives from Cilium Service Mesh, Consul, Istio, Kuma, Linkerd, NGINX
Service Mesh and Open Service Mesh, is coming together to form the [GAMMA
Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/), a dedicated
workstream within the Gateway API subproject focused on Gateway API for Mesh
Management and Administration.
This group will deliver [enhancement
proposals](https://gateway-api.sigs.k8s.io/v1beta1/contributing/gep/) consisting
of resources, additions, and modifications to the Gateway API specification for
mesh and mesh-adjacent use-cases.
This work has begun with [an exploration of using Gateway API for
service-to-service
traffic](https://docs.google.com/document/d/1T_DtMQoq2tccLAtJTpo3c0ohjm25vRS35MsestSL9QU/edit#heading=h.jt37re3yi6k5)
and will continue with enhancement in areas such as authentication and
authorization policy.
## Next steps
As we continue to mature the API for production use cases, here are some of the highlights of what we'll be working on for the next Gateway API releases:
- [GRPCRoute][gep1016] for [gRPC][grpc] traffic routing
- [Route delegation][pr1085]
- Layer 4 API maturity: Graduating [TCPRoute][tcpr], [UDPRoute][udpr] and
[TLSRoute][tlsr] to beta
- [GAMMA Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/) - Gateway API for Service Mesh
If there's something on this list you want to get involved in, or there's
something not on this list that you want to advocate for to get on the roadmap
please join us in the #sig-network-gateway-api channel on Kubernetes Slack or our weekly [community calls](https://gateway-api.sigs.k8s.io/contributing/community/#meetings).
[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md
[grpc]:https://grpc.io/
[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085
[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go
[udpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/udproute_types.go
[tlsr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tlsroute_types.go
[community]:https://gateway-api.sigs.k8s.io/contributing/community/

View File

@ -332,7 +332,7 @@ container of a Pod can specify either or both of the following:
Limits and requests for `ephemeral-storage` are measured in byte quantities.
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following quantities all represent roughly the same value:
- `128974848`
@ -340,6 +340,10 @@ Mi, Ki. For example, the following quantities all represent roughly the same val
- `129M`
- `123Mi`
Pay attention to the case of the suffixes. If you request `400m` of ephemeral-storage, this is a request
for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
or 400 megabytes (`400M`).
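For instance, a container that needs 400 mebibytes of scratch space would request it like this (a minimal sketch of the `resources` stanza):
```yaml
resources:
  requests:
    ephemeral-storage: "400Mi"   # not "400m", which would mean 0.4 bytes
  limits:
    ephemeral-storage: "1Gi"
```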
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and

View File

@ -85,11 +85,15 @@ The kubelet supports the following filesystem partitions:
Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet
does not support other configurations.
{{<note>}}
Some kubelet garbage collection features are deprecated in favor of eviction.
For a list of the deprecated features, see
[kubelet garbage collection deprecation](/docs/concepts/architecture/garbage-collection/#deprecation).
{{</note>}}
Some kubelet garbage collection features are deprecated in favor of eviction:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
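For example, the eviction signals that replace the image garbage collection flags can be set in the kubelet configuration file; a minimal sketch with illustrative threshold values:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
evictionMinimumReclaim:
  imagefs.available: "2Gi"
```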
### Eviction thresholds

View File

@ -558,7 +558,7 @@ If the access modes are specified as ReadWriteOncePod, the volume is constrained
| AzureFile | &#x2713; | &#x2713; | &#x2713; | - |
| AzureDisk | &#x2713; | - | - | - |
| CephFS | &#x2713; | &#x2713; | &#x2713; | - |
| Cinder | &#x2713; | - | - | - |
| Cinder | &#x2713; | - | ([if multi-attach volumes are available](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - |
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
| FC | &#x2713; | &#x2713; | - | - |
| FlexVolume | &#x2713; | &#x2713; | depends on the driver | - |

View File

@ -73,7 +73,7 @@ volume mount will not receive updates for those volume sources.
## SecurityContext interactions
The [proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the the correct owner permissions set.
The [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.
### Linux
@ -99,6 +99,7 @@ into their own volume mount outside of `C:\`.
By default, the projected files will have the following ownership as shown for
an example projected volume file:
```powershell
PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List
@ -111,6 +112,7 @@ Access : NT AUTHORITY\SYSTEM Allow FullControl
Audit :
Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)
```
This implies that all administrator users like `ContainerAdministrator` will have
read, write, and execute access, while non-administrator users will have read and
execute access.

View File

@ -132,7 +132,7 @@ section refers to several key workload abstractions and how they map to Windows.
* CronJob
* ReplicationController
* {{< glossary_tooltip text="Services" term_id="service" >}}
See [Load balancing and Services](#load-balancing-and-services) for more details.
See [Load balancing and Services](/docs/concepts/services-networking/windows-networking/#load-balancing-and-services) for more details.
Pods, workload resources, and Services are critical elements to managing Windows
workloads on Kubernetes. However, on their own they are not enough to enable

View File

@ -71,7 +71,7 @@ Pod Template:
job-name=pi
Containers:
pi:
Image: perl
Image: perl:5.34.0
Port: <none>
Host Port: <none>
Command:
@ -125,7 +125,7 @@ spec:
- -Mbignum=bpi
- -wle
- print bpi(2000)
image: perl
image: perl:5.34.0
imagePullPolicy: Always
name: pi
resources: {}
@ -356,7 +356,7 @@ spec:
spec:
containers:
- name: pi
image: perl
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
@ -402,7 +402,7 @@ spec:
spec:
containers:
- name: pi
image: perl
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```

View File

@ -216,16 +216,16 @@ Figure 2. Working from a local fork to make your changes.
1. Decide which branch to base your work on:
- For improvements to existing content, use `upstream/main`.
- For new content about existing features, use `upstream/main`.
- For localized content, use the localization's conventions. For more information, see
[localizing Kubernetes documentation](/docs/contribute/localization/).
- For new features in an upcoming Kubernetes release, use the feature branch. For more
information, see [documenting for a release](/docs/contribute/new-content/new-features/).
- For long-running efforts that multiple SIG Docs contributors collaborate on,
like content reorganization, use a specific feature branch created for that effort.
If you need help choosing a branch, ask in the `#sig-docs` Slack channel.
1. Create a new branch based on the branch identified in step 1. This example assumes the base
branch is `upstream/main`:
@ -234,7 +234,7 @@ Figure 2. Working from a local fork to make your changes.
git checkout -b <my_new_branch> upstream/main
```
3. Make your changes using a text editor.
1. Make your changes using a text editor.
At any time, use the `git status` command to see what files you've changed.
@ -396,7 +396,7 @@ Figure 3. Steps to open a PR from your fork to the K8s/website.
1. From the **head repository** drop-down menu, select your fork.
1. From the **compare** drop-down menu, select your branch.
1. Select **Create Pull Request**.
`. Add a description for your pull request:
1. Add a description for your pull request:
- **Title** (50 characters or less): Summarize the intent of the change.
- **Description**: Describe the change in more detail.
@ -484,10 +484,10 @@ conflict. You must resolve all merge conflicts in your PR.
1. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch:
```shell
git fetch upstream
git rebase upstream/main
```
1. Inspect the results of the rebase:
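   For example (a sketch of commands commonly used at this step; the guide's exact commands may differ):
```shell
# Show any remaining conflicts or staged changes after the rebase.
git status
# Compare your branch against upstream/main.
git log --oneline upstream/main..HEAD
```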
@ -512,7 +512,7 @@ conflict. You must resolve all merge conflicts in your PR.
1. Continue the rebase:
``
```shell
git rebase --continue
```

View File

@ -10,9 +10,8 @@ weight: 10
Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls)
section in the Kubernetes website repository to see open pull requests.
Reviewing documentation pull requests is a
great way to introduce yourself to the Kubernetes community.
It helps you learn the code base and build trust with other contributors.
Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes
community. It helps you learn the code base and build trust with other contributors.
Before reviewing, it's a good idea to:
@ -28,7 +27,6 @@ Before reviewing, it's a good idea to:
Before you start a review:
- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
and ensure that you abide by it at all times.
- Be polite, considerate, and helpful.
@ -73,6 +71,7 @@ class third,fourth white
Figure 1. Review process steps.
1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
You see a list of every open pull request against the Kubernetes website and docs.
@ -103,12 +102,20 @@ Figure 1. Review process steps.
4. Go to the **Files changed** tab to start your review.
1. Click on the `+` symbol beside the line you want to comment on.
1. Fill in any comments you have about the line and click either **Add single comment** (if you
have only one comment to make) or **Start a review** (if you have multiple comments to make).
1. Fill in any comments you have about the line and click either **Add single comment**
(if you have only one comment to make) or **Start a review** (if you have multiple comments to make).
1. When finished, click **Review changes** at the top of the page. Here, you can add
a summary of your review (and leave some positive comments for the contributor!),
approve the PR, comment or request changes as needed. New contributors should always
choose **Comment**.
a summary of your review (and leave some positive comments for the contributor!).
Please always use the "Comment" option when finishing your review.
- Avoid clicking the "Request changes" button when finishing your review.
If you want to block a PR from being merged before some further changes are made,
you can leave a "/hold" comment.
Mention why you are setting a hold, and optionally specify the conditions under
which the hold can be removed by you or other reviewers.
- Avoid clicking the "Approve" button when finishing your review.
Leaving a "/approve" comment is recommended most of the time.
## Reviewing checklist

View File

@ -361,7 +361,7 @@ Beware.
### Katacoda Embedded Live Environment
This button lets users run Minikube in their browser using the [Katacoda Terminal](https://www.katacoda.com/embed/panel).
This button lets users run Minikube in their browser using the Katacoda Terminal.
It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete
Minikube and Kubectl installation process locally.

View File

@ -9,7 +9,7 @@ weight: 95
<!-- overview -->
The tables below enumerate the configuration parameters on
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) objects, whether the field mutates
[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) objects, whether the field mutates
and/or validates pods, and how the configuration values map to the
[Pod Security Standards](/docs/concepts/security/pod-security-standards/).
@ -31,9 +31,9 @@ The fields enumerated in this table are part of the `PodSecurityPolicySpec`, whi
under the `.spec` field path.
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicySpec fields to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PodSecurityPolicySpec</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>
@ -54,19 +54,19 @@ under the `.spec` field path.
<td>
<p><b>Baseline</b>: subset of</p>
<ul>
<li><code>AUDIT_WRITE</code></li>
<li><code>CHOWN</code></li>
<li><code>DAC_OVERRIDE</code></li>
<li><code>FOWNER</code></li>
<li><code>FSETID</code></li>
<li><code>KILL</code></li>
<li><code>MKNOD</code></li>
<li><code>NET_BIND_SERVICE</code></li>
<li><code>SETFCAP</code></li>
<li><code>SETGID</code></li>
<li><code>SETPCAP</code></li>
<li><code>SETUID</code></li>
<li><code>SYS_CHROOT</code></li>
</ul>
<p><b>Restricted</b>: empty / undefined / nil OR a list containing <i>only</i> <code>NET_BIND_SERVICE</code>
</td>
@ -236,9 +236,9 @@ The [annotations](/docs/concepts/overview/working-with-objects/annotations/) enu
table can be specified under `.metadata.annotations` on the PodSecurityPolicy object.
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicy annotations to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PSP Annotation</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>

View File

@ -54,8 +54,8 @@ it can't be both.
ClusterRoles have several uses. You can use a ClusterRole to:
1. define permissions on namespaced resources and be granted within individual namespace(s)
1. define permissions on namespaced resources and be granted across all namespaces
1. define permissions on namespaced resources and be granted access within individual namespace(s)
1. define permissions on namespaced resources and be granted access across all namespaces
1. define permissions on cluster-scoped resources
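For instance, a minimal ClusterRole sketch that grants read access to Secrets; whether it applies within one namespace or across all of them depends on whether it is referenced from a RoleBinding or a ClusterRoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]          # "" indicates the core API group
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
```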
If you want to define a role within a namespace, use a Role; if you want to define

View File

@ -90,7 +90,7 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--authorization-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>AlwaysAllow</code></td></td>
<td colspan="2">--authorization-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>AlwaysAllow</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>

View File

@ -2,9 +2,10 @@
title: Extensions
id: Extensions
date: 2019-02-01
full_link: /docs/concepts/extend-kubernetes/extend-cluster/#extensions
full_link: /docs/concepts/extend-kubernetes/#extensions
short_description: >
Extensions are software components that extend and deeply integrate with Kubernetes to support new types of hardware.
Extensions are software components that extend and deeply integrate with Kubernetes to support
new types of hardware.
aka:
tags:
@ -15,4 +16,6 @@ tags:
<!--more-->
Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters come with extensions pre-installed. As a result, most Kubernetes users will not need to install [extensions](/docs/concepts/extend-kubernetes/extend-cluster/#extensions) and even fewer users will need to author new ones.
Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters
come with extensions pre-installed. As a result, most Kubernetes users will not need to install
[extensions](/docs/concepts/extend-kubernetes/) and even fewer users will need to author new ones.

View File

@ -2,7 +2,7 @@
title: Garbage Collection
id: garbage-collection
date: 2021-07-07
full_link: /docs/concepts/workloads/controllers/garbage-collection/
full_link: /docs/concepts/architecture/garbage-collection/
short_description: >
A collective term for the various mechanisms Kubernetes uses to clean up cluster
resources.
@ -12,13 +12,16 @@ tags:
- fundamental
- operation
---
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up
cluster resources.
<!--more-->
Kubernetes uses garbage collection to clean up resources like [unused containers and images](/docs/concepts/workloads/controllers/garbage-collection/#containers-images),
Kubernetes uses garbage collection to clean up resources like
[unused containers and images](/docs/concepts/architecture/garbage-collection/#containers-images),
[failed Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection),
[objects owned by the targeted resource](/docs/concepts/overview/working-with-objects/owners-dependents/),
[completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/), and resources
that have expired or failed.

View File

@ -39,7 +39,7 @@ The JSON and Protobuf serialization schemas follow the same guidelines for
schema changes. The following descriptions cover both formats.
The API versioning and software versioning are indirectly related.
The [API and release versioning proposal](https://git.k8s.io/design-proposals-archive/release/versioning.md)
The [API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md)
describes the relationship between API versioning and software versioning.
Different API versions indicate different levels of stability and support. You

View File

@ -13,10 +13,10 @@ Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [i
* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* Ubuntu 16.04, 18.04, 20.04
* Ubuntu 16.04, 18.04, 20.04, 22.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Fedora 31, 32
* Fedora 34, 35
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
@ -33,7 +33,7 @@ To choose a tool which best fits your use case, read [this comparison](https://g
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
* **Ansible v2.9 and python-netaddr are installed on the machine that will run Ansible commands**
* **Ansible v2.11 and python-netaddr are installed on the machine that will run Ansible commands**
* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**

View File

@ -69,4 +69,4 @@ admission controller. To get started with `cosigned` here are a few helpful
resources:
* [Installation](https://github.com/sigstore/cosign#installation)
* [Configuration Options](https://github.com/sigstore/cosign/tree/main/config)
* [Configuration Options](https://github.com/sigstore/cosign/blob/main/USAGE.md#detailed-usage)

View File

@ -362,9 +362,9 @@ and create it:
kubectl create --validate=false -f my-crontab.yaml -o yaml
```
your output is similar to:
Your output is similar to:
```console
```yaml
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
@ -836,7 +836,7 @@ Validation Rules Examples:
| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(self) == string ? self == '100%' : self == 1000` | Validate an int-or-string field for both the the int and string cases |
| `type(self) == string ? self == '100%' : self == 1000` | Validate an int-or-string field for both the int and string cases |
| `self.metadata.name.startsWith(self.prefix)` | Validate that an object's name has the prefix of another field value |
| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
| `size(self.names) == size(self.details) && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
@ -844,7 +844,6 @@ Validation Rules Examples:
Xref: [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation)
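As a sketch of where such rules live, a CRD schema attaches them with `x-kubernetes-validations` (the field names here are illustrative):
```yaml
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
      - rule: "self.minReplicas <= self.replicas"
        message: "replicas must be greater than or equal to minReplicas"
      properties:
        minReplicas:
          type: integer
        replicas:
          type: integer
```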
- If the Rule is scoped to the root of a resource, it may make field selection into any fields
declared in the OpenAPIv3 schema of the CRD as well as `apiVersion`, `kind`, `metadata.name` and
`metadata.generateName`. This includes selection of fields in both the `spec` and `status` in the

View File

@ -7,7 +7,7 @@ description: Configure the kubelet's image credential provider plugin
content_type: task
---
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
{{< feature-state for_k8s_version="v1.24" state="beta" >}}
<!-- overview -->

View File

@ -2,7 +2,7 @@ apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use the
# "cluster" as the name. Other supported values are "etcd", and "master".
# "cluster" as the name. Other supported values are "etcd", and "controlplane".
- name: cluster
connection:
# This controls the protocol between the API Server and the Konnectivity

View File

@ -7,7 +7,7 @@ spec:
spec:
containers:
- name: pi
image: perl:5.34
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4

View File

@ -8,11 +8,10 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
- key: kubernetes.io/os
operator: In
values:
- antarctica-east1
- antarctica-west1
- linux
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:

View File

@ -8,10 +8,11 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
- key: topology.kubernetes.io/zone
operator: In
values:
- linux
- antarctica-east1
- antarctica-west1
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:

View File

@ -78,7 +78,6 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| July 2022 | 2022-07-08 | 2022-07-13 |
| August 2022 | 2022-08-12 | 2022-08-17 |
| September 2022 | 2022-09-09 | 2022-09-14 |
| October 2022 | 2022-10-07 | 2022-10-12 |
@ -87,24 +86,28 @@ releases may also occur in between these.
### 1.24
Next patch release is **1.24.1**
Next patch release is **1.24.4**
End of Life for **1.24** is **2023-09-29**
End of Life for **1.24** is **2023-07-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|---------------|----------------------|-------------|------|
| 1.24.4 | 2022-08-12 | 2022-08-17 | |
| 1.24.3 | 2022-07-08 | 2022-07-13 | |
| 1.24.2 | 2022-06-10 | 2022-06-15 | |
| 1.24.1 | 2022-05-20 | 2022-05-24 | |
### 1.23
Next patch release is **1.23.10**
**1.23** enters maintenance mode on **2022-12-28**.
End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.23.10 | 2022-08-12 | 2022-08-17 | |
| 1.23.9 | 2022-07-08 | 2022-07-13 | |
| 1.23.8 | 2022-06-10 | 2022-06-15 | |
| 1.23.7 | 2022-05-20 | 2022-05-24 | |
@ -117,12 +120,15 @@ End of Life for **1.23** is **2023-02-28**.
### 1.22
Next patch release is **1.22.13**
**1.22** enters maintenance mode on **2022-08-28**
End of Life for **1.22** is **2022-10-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.22.13 | 2022-08-12 | 2022-08-17 | |
| 1.22.12 | 2022-07-08 | 2022-07-13 | |
| 1.22.11 | 2022-06-10 | 2022-06-15 | |
| 1.22.10 | 2022-05-20 | 2022-05-24 | |

View File

@ -4,8 +4,8 @@ type: docs
---
"Release Managers" is an umbrella term that encompasses the set of Kubernetes
contributors responsible for maintaining release branches, tagging releases,
and building/packaging Kubernetes.
contributors responsible for maintaining release branches and creating releases
by using the tools SIG Release provides.
The responsibilities of each role are described below.
@ -133,7 +133,9 @@ referred to as Release Manager shadows. They are responsible for:
GitHub Mentions: @kubernetes/release-engineering
- Arnaud Meukam ([@ameukam](https://github.com/ameukam))
- Jeremy Rickard ([@jeremyrickard](https://github.com/jeremyrickard))
- Jim Angel ([@jimangel](https://github.com/jimangel))
- Joseph Sandoval ([@jrsapi](https://github.com/jrsapi))
- Joyce Kung ([@thejoycekung](https://github.com/thejoycekung))
- Max Körbächer ([@mkorbi](https://github.com/mkorbi))
- Seth McCombs ([@sethmccombs](https://github.com/sethmccombs))

View File

@ -124,7 +124,7 @@ The general labeling process should be consistent across artifact types.
referring to a release MAJOR.MINOR `vX.Y` version.
See also
[release versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md).
[release versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md).
- *release branch*: Git branch `release-X.Y` created for the `vX.Y` milestone.

View File

@ -21,7 +21,7 @@ Specific cluster deployment tools may place additional restrictions on version s
## Supported versions
Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
For more information, see [Kubernetes Release Versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md#kubernetes-release-versioning).
For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning).
The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.

View File

@ -8,7 +8,7 @@ weight: 10
<!-- overview -->
A node is a worker machine in Kubernetes, previously known as a `minion`. A node may be a virtual or a physical machine, depending on the cluster. Each node is managed by the master component and contains the services needed to run [pods](/docs/concepts/workloads/pods/pod). The services on a node include the [container runtime](/docs/concepts/overview/components/#node-components), kubelet, and kube-proxy. See [The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) in the architecture design document for more details.
A node is a worker machine in Kubernetes, previously known as a `minion`. A node may be a virtual or a physical machine, depending on the cluster. Each node is managed by the master component and contains the services needed to run [pods](/docs/concepts/workloads/pods/pod). The services on a node include the [container runtime](/docs/concepts/overview/components/#node-components), kubelet, and kube-proxy. See [The Kubernetes Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) in the architecture design document for more details.

View File

@ -676,7 +676,7 @@ The amount of resources available to Pods is less than the capacity of the
node because system daemons use a share of the available resources. The `allocatable` field of
[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
gives the amount of resources that are available to Pods. For more information, see
[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md).
[Node Allocatable Resources](https://git.k8s.io/design-proposals-archive/node/node-allocatable.md).
The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
to limit the total amount of resources that can be consumed. If used in combination
@ -757,7 +757,7 @@ You can see that the Container was terminated because of `reason:OOM Killed`, where
* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* For more details about the difference between requests and limits, see
[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).
[Resource QoS](https://git.k8s.io/design-proposals-archive/node/resource-qos.md).
* Read the [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) API reference

View File

@ -38,4 +38,4 @@ For more details, see the [authorization documentation](/docs/ref
<!-- whatsnext -->
* [RuntimeClass](/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
* [PodOverhead design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)

View File

@ -14,7 +14,7 @@ weight: 50
Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} objects let you store and manage sensitive information, such as
passwords, OAuth tokens, and ssh keys. Putting this information in a Secret
is safer and more flexible than putting it in the definition of a {{< glossary_tooltip term_id="pod" >}} or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See the [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
is safer and more flexible than putting it in the definition of a {{< glossary_tooltip term_id="pod" >}} or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See the [Secrets design document](https://git.k8s.io/design-proposals-archive/auth/secrets.md) for more information.
<!-- body -->
@ -345,7 +345,7 @@ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
## Using Secrets
Secrets can be mounted as data volumes or exposed as
{{< glossary_tooltip text="variables de ambiente" term_id="container-env-variables" >}}
{{< glossary_tooltip text="variables de entorno" term_id="container-env-variables" >}}
to be used by a container in a pod. They can also be used by other parts of the system
without being directly exposed in the pod. For example, they can hold credentials that other parts of the system use to interact with external systems on your behalf.
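For example, a minimal sketch of a Pod that mounts a Secret named `mysecret` as a read-only volume (names and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: redis
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
```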

View File

@ -202,4 +202,4 @@ are accounted for in Kubernetes.
- [RuntimeClass design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass scheduling design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
- [PodOverhead design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
- [PodOverhead design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)

View File

@ -66,7 +66,7 @@ Para facilitar la eliminación de propiedades o reestructurar la representación
Se versiona a nivel de la API en vez de a nivel de los recursos o propiedades para asegurarnos de que la API presenta una visión clara y consistente de los recursos y el comportamiento del sistema, y para controlar el acceso a las APIs experimentales o que estén terminando su ciclo de vida. Los esquemas de serialización JSON y Protobuf siguen los mismos lineamientos para los cambios, es decir, estas descripciones cubren ambos formatos.
Se ha de tener en cuenta que hay una relación indirecta entre el versionado de la API y el versionado del resto del software. La propuesta de [versionado de la API y releases](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) describe esta relación.
Se ha de tener en cuenta que hay una relación indirecta entre el versionado de la API y el versionado del resto del software. La propuesta de [versionado de la API y releases](https://git.k8s.io/design-proposals-archive/release/versioning.md) describe esta relación.
Las distintas versiones de la API implican distintos niveles de estabilidad y soporte. El criterio para cada nivel se describe en detalle en la documentación de [Cambios a la API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). A continuación se ofrece un resumen:
@ -89,7 +89,7 @@ Las distintas versiones de la API implican distintos niveles de estabilidad y so
## Grupos de API
Para que sea más fácil extender la API de Kubernetes, se han creado los [*grupos de API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
Para que sea más fácil extender la API de Kubernetes, se han creado los [*grupos de API*](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md).
Estos grupos se especifican en una ruta REST y en la propiedad `apiVersion` de un objeto serializado.
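For illustration only, a serialized object from the `apps` group carries the group and version together in its `apiVersion` field, while objects in the core ("legacy") group use a bare version such as `v1`:

```yaml
# The group ("apps") and version ("v1") appear together in apiVersion.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.21    # example image only
```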
Actualmente hay varios grupos de API en uso:

View File

@ -60,7 +60,7 @@ APIs](/docs/concepts/api-extension/custom-resources/)
desde una [herramienta de línea de comandos](/docs/user-guide/kubectl-overview/).
Este
[diseño](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)
[diseño](https://git.k8s.io/design-proposals-archive/architecture/architecture.md)
ha permitido que otros sistemas sean construidos sobre Kubernetes.
## Lo que Kubernetes no es

View File

@ -10,7 +10,7 @@ Todos los objetos de la API REST de Kubernetes se identifica de forma inequívoc
Para aquellos atributos provistos por el usuario que no son únicos, Kubernetes provee de [etiquetas](/docs/user-guide/labels) y [anotaciones](/docs/concepts/overview/working-with-objects/annotations/).
Echa un vistazo al [documento de diseño de identificadores](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) para información precisa acerca de las reglas sintácticas de los Nombres y UIDs.
Echa un vistazo al [documento de diseño de identificadores](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) para información precisa acerca de las reglas sintácticas de los Nombres y UIDs.
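As a small illustrative sketch (the name, labels, and annotation below are assumptions), the distinction between unique names and non-unique user attributes looks like this in an object's metadata:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # user-supplied name, unique per namespace and kind
  namespace: default
  labels:                    # non-unique, user-defined attributes
    app: demo
  annotations:
    team: "platform"         # free-form metadata
data:
  greeting: "hola"
# The cluster assigns metadata.uid automatically; it is unique for the
# lifetime of the cluster and is never set by the user.
```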

View File

@ -58,7 +58,7 @@ Ni la contención ni los cambios en un LimitRange afectarán a los recursos ya creados.
## {{% heading "whatsnext" %}}
Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) para más información.
Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) para más información.
Los siguientes ejemplos utilizan límites y están pendientes de su traducción:
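Until those examples are translated, a minimal LimitRange sketch (names and values are assumptions, not taken from the linked examples) might look like this:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limits       # hypothetical name for this sketch
  namespace: default
spec:
  limits:
  - type: Container
    default:                 # limits applied when a container declares none
      cpu: 500m
      memory: 256Mi
    defaultRequest:          # requests applied when a container declares none
      cpu: 100m
      memory: 128Mi
    max:                     # upper bound a container may request
      cpu: "1"
      memory: 512Mi
```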

View File

@ -95,7 +95,7 @@ Los Services comúnmente abstraen el acceso a los Pods de Kubernetes, pero tambi
Por ejemplo:
- Quieres tener un clúster de base de datos externo en producción, pero en el ambiente de pruebas quieres usar tus propias bases de datos.
- Quieres tener un clúster de base de datos externo en producción, pero en el entorno de pruebas quieres usar tus propias bases de datos.
- Quieres apuntar tu Service a un Service en un {{< glossary_tooltip term_id="namespace" text="Namespace" >}} o en un clúster diferente.
- Estás migrando tu carga de trabajo a Kubernetes. Mientras evalúas la aproximación, corres solo una porción de tus backends en Kubernetes.
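Scenarios like the ones listed above are commonly covered by a selector-less Service or an ExternalName Service. As a hedged sketch (the namespace and DNS name are assumptions for illustration), an ExternalName Service pointing at an external database could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database          # name that Pods use inside the cluster
  namespace: prod
spec:
  type: ExternalName
  # Lookups of my-database.prod.svc.cluster.local return a CNAME record
  # for the external host below (assumed value for this example).
  externalName: db.example.com
```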
@ -418,7 +418,7 @@ Si quieres un número de puerto específico, puedes especificar un valor en el c
Esto significa que necesitas prestar atención a posibles colisiones de puerto por tu cuenta.
También tienes que usar un número de puerto válido, uno que esté dentro del rango configurado para uso del NodePort.
Usar un NodePort te da libertad para configurar tu propia solución de balanceo de cargas, para configurar ambientes que no soportan Kubernetes del todo, o para exponer uno o más IPs del nodo directamente.
Usar un NodePort te da libertad para configurar tu propia solución de balanceo de cargas, para configurar entornos que no soportan Kubernetes del todo, o para exponer uno o más IPs del nodo directamente.
Ten en cuenta que este Service es visible como `<NodeIP>:spec.ports[*].nodePort` y `.spec.clusterIP:spec.ports[*].port`.
Si la bandera `--nodeport-addresses` está configurada para el kube-proxy o para el campo equivalente en el fichero de configuración, `<NodeIP>` sería IP filtrada del nodo. Si
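As a sketch of the explicit-port case discussed above (the label selector and port values are assumptions), a NodePort Service pinned to a specific port might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web                 # assumed Pod label for this sketch
  ports:
  - port: 80                 # ClusterIP port
    targetPort: 8080         # container port
    nodePort: 30007          # explicit choice; must be free and within the
                             # configured NodePort range (default 30000-32767)
```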
@ -514,7 +514,7 @@ El valor de `spec.loadBalancerClass` debe ser un identificador de etiqueta, con
#### Balanceador de carga interno
En un ambiente mixto algunas veces es necesario enrutar el tráfico desde Services dentro del mismo bloque (virtual) de direcciones de red.
En un entorno mixto algunas veces es necesario enrutar el tráfico desde Services dentro del mismo bloque (virtual) de direcciones de red.
En un entorno de split-horizon DNS necesitarías dos Services para ser capaz de enrutar tanto el tráfico externo como el interno a tus Endpoints.
@ -646,7 +646,7 @@ HTTP y HTTPS seleccionan un proxy de capa 7: el ELB termina la conexión con el
TCP y SSL seleccionan un proxy de capa 4: el ELB reenvía el tráfico sin modificar los encabezados.
En un ambiente mixto donde algunos puertos están asegurados y otros se dejan sin encriptar, puedes usar una de las siguientes anotaciones:
En un entorno mixto donde algunos puertos están asegurados y otros se dejan sin encriptar, puedes usar una de las siguientes anotaciones:
```yaml
metadata:

View File

@ -0,0 +1,36 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Supervisión del Estado del Volumen
content_type: concept
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
La supervisión del estado del volumen de {{< glossary_tooltip text="CSI" term_id="csi" >}} permite que los controladores de CSI detecten condiciones de volumen anómalas de los sistemas de almacenamiento subyacentes y las notifiquen como eventos en {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} o {{< glossary_tooltip text="Pods" term_id="pod" >}}.
<!-- body -->
## Supervisión del Estado del Volumen
El _monitoreo del estado del volumen_ de Kubernetes es parte de cómo Kubernetes implementa la Interfaz de Almacenamiento de Contenedores (CSI). La función de supervisión del estado del volumen se implementa en dos componentes: un controlador de supervisión del estado externo y {{< glossary_tooltip term_id="kubelet" text="Kubelet" >}}.
Si un controlador CSI admite la función supervisión del estado del volumen desde el lado del controlador, se informará un evento en el {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC) relacionado cuando se detecte una condición de volumen anormal en un volumen CSI.
El {{< glossary_tooltip text="controlador" term_id="controller" >}} de estado externo también observa los eventos de falla del nodo. Se puede habilitar la supervisión de fallas de nodos configurando el indicador `enable-node-watcher` en verdadero. Cuando el monitor de estado externo detecta un evento de falla de nodo, el controlador reporta que se informará un evento en el PVC para indicar que los Pods que usan este PVC están en un nodo fallido.
Si un controlador CSI es compatible con la función monitoreo del estado del volumen desde el lado del nodo, se informará un evento en cada Pod que use el PVC cuando se detecte una condición de volumen anormal en un volumen CSI.
{{< note >}}
Se necesita habilitar el `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) para usar esta función desde el lado del nodo.
{{< /note >}}
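If the kubelet is configured through a KubeletConfiguration file, one possible way to turn on that gate is sketched below; this is only meaningful while the feature remains alpha, and the file layout is an assumption about how your nodes are configured:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIVolumeHealth: true   # alpha gate for node-side volume health monitoring
```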
## {{% heading "whatsnext" %}}
Ver la [documentación del controlador CSI](https://kubernetes-csi.github.io/docs/drivers.html) para averiguar qué controladores CSI han implementado esta característica.

View File

@ -638,7 +638,7 @@ Mira el [ ejemplo NFS ](https://github.com/kubernetes/examples/tree/master/stagi
### persistentVolumeClaim {#persistentvolumeclaim}
Un volumen `persistenceVolumeClain` se utiliza para montar un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) en tu Pod. PersistentVolumeClaims son una forma en que el usuario "reclama" almacenamiento duradero (como un PersistentDisk GCE o un volumen ISCSI) sin conocer los detalles del ambiente de la nube en particular.
Un volumen `persistentVolumeClaim` se utiliza para montar un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) en tu Pod. Los PersistentVolumeClaims son una forma en que el usuario "reclama" almacenamiento duradero (como un PersistentDisk GCE o un volumen iSCSI) sin conocer los detalles del entorno de nube en particular.
Mira la información sobre [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) para más detalles.
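A hedged sketch of the pattern just described, pairing a claim with a Pod that mounts it (names and sizes are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # size requested from whatever storage backs the claim
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: nginx:1.21         # example image only
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # references the claim defined above
```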

View File

@ -170,7 +170,7 @@ Seguimiento en [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
## {{% heading "whatsnext" %}}
[Documento de Diseño 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
[Documento de Diseño 1](https://git.k8s.io/design-proposals-archive/api-machinery/garbage-collection.md)
[Documento de Diseño 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)
[Documento de Diseño 2](https://git.k8s.io/design-proposals-archive/api-machinery/synchronous-garbage-collection.md)

View File

@ -92,4 +92,4 @@ modificación del Pod Preset. En estos casos, se puede añadir una observación
Ver [Inyectando datos en un Pod usando PodPreset](/docs/tasks/inject-data-application/podpreset/)
Para más información sobre los detalles de los trasfondos, consulte la [propuesta de diseño de PodPreset](https://git.k8s.io/community/contributors/design-proposals/service-catalog/pod-preset.md).
Para más información sobre los detalles de los trasfondos, consulte la [propuesta de diseño de PodPreset](https://git.k8s.io/design-proposals-archive/service-catalog/pod-preset.md).
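Purely as an illustration of the object shape this page refers to, and noting that PodPreset was an alpha API (`settings.k8s.io/v1alpha1`) that has since been removed from Kubernetes, a sketch could look like the following; every name and value is an assumption:

```yaml
apiVersion: settings.k8s.io/v1alpha1   # alpha API; removed in later Kubernetes releases
kind: PodPreset
metadata:
  name: inject-env              # hypothetical name
spec:
  selector:
    matchLabels:
      role: frontend            # Pods with this label receive the injected data
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```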

View File

@ -59,6 +59,6 @@ En estos momento, las librerías con soporte oficial son:
Un archivo de los documentos de diseño para la funcionalidad de Kubernetes.
Puedes empezar por [Arquitectura de Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) y [Vista general del diseño de Kubernetes](https://git.k8s.io/community/contributors/design-proposals).
Puedes empezar por [Arquitectura de Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) y [Vista general del diseño de Kubernetes](https://git.k8s.io/community/contributors/design-proposals).

View File

@ -52,6 +52,6 @@ El servidor reune métricas de la Summary API, que es expuesta por el [Kubelet](
El servidor de métricas se añadió a la API de Kubernetes utilizando el
[Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/) introducido en Kubernetes 1.7.
Puedes aprender más acerca del servidor de métricas en el [documento de diseño](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
Puedes aprender más acerca del servidor de métricas en el [documento de diseño](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/metrics-server.md).

View File

@ -17,7 +17,7 @@ card:
Este tutorial muestra como ejecutar una aplicación Node.js Hola Mundo en Kubernetes utilizando
[Minikube](/docs/setup/learning-environment/minikube) y Katacoda.
Katacoda provee un ambiente de Kubernetes desde el navegador.
Katacoda provee un entorno de Kubernetes desde el navegador.
{{< note >}}
También se puede seguir este tutorial si se ha instalado [Minikube localmente](/docs/tasks/tools/install-minikube/).
@ -63,9 +63,9 @@ Para más información sobre el comando `docker build`, lea la [documentación d
minikube dashboard
```
3. Solo en el ambiente de Katacoda: En la parte superior de la terminal, haz clic en el símbolo + y luego clic en **Select port to view on Host 1**.
3. Solo en el entorno de Katacoda: En la parte superior de la terminal, haz clic en el símbolo + y luego clic en **Select port to view on Host 1**.
4. Solo en el ambiente de Katacoda: Escribir `30000`, y hacer clic en **Display Port**.
4. Solo en el entorno de Katacoda: Escribir `30000`, y hacer clic en **Display Port**.
## Crear un Deployment
@ -154,15 +154,15 @@ Por defecto, el Pod es accedido por su dirección IP interna dentro del clúster
minikube service hello-node
```
4. Solo en el ambiente de Katacoda: Hacer clic sobre el símbolo +, y luego en **Select port to view on Host 1**.
4. Solo en el entorno de Katacoda: Hacer clic sobre el símbolo +, y luego en **Select port to view on Host 1**.
5. Solo en el ambiente de Katacoda: Anotar el puerto de 5 dígitos ubicado al lado del valor de `8080` en el resultado de servicios. Este número de puerto es generado aleatoriamente y puede ser diferente al indicado en el ejemplo. Escribir el número de puerto en el cuadro de texto y hacer clic en Display Port. Usando el ejemplo anterior, usted escribiría `30369`.
5. Solo en el entorno de Katacoda: Anotar el puerto de 5 dígitos ubicado al lado del valor de `8080` en el resultado de servicios. Este número de puerto es generado aleatoriamente y puede ser diferente al indicado en el ejemplo. Escribir el número de puerto en el cuadro de texto y hacer clic en Display Port. Usando el ejemplo anterior, usted escribiría `30369`.
Esto abre una ventana de navegador que contiene la aplicación y muestra el mensaje "Hello World".
## Habilitar Extensiones
Minikube tiene un conjunto de {{< glossary_tooltip text="Extensiones" term_id="addons" >}} que pueden ser habilitados y desahabilitados en el ambiente local de Kubernetes.
Minikube tiene un conjunto de {{< glossary_tooltip text="Extensiones" term_id="addons" >}} que pueden ser habilitados y deshabilitados en el entorno local de Kubernetes.
1. Listar las extensiones soportadas actualmente:

View File

@ -7,85 +7,48 @@ cid: partners
---
<section id="users">
<main>
<h5>Kubernetes trabaja con socios para crear una base de código sólida y vibrante que admita un espectro de plataformas complementarias.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Kubernetes Certified Service Providers</b>
</h5>
<br>Proveedores de servicios con amplia experiencia ayudando a las empresas a adoptar Kubernetes con éxito.
<br><br><br>
<button id="kcsp" class="button" onClick="updateSrc(this.id)">See KCSP Partners</button>
<br><br>Conviértete en <a href="https://www.cncf.io/certification/kcsp/">KCSP</a>
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Distributions, Hosted Platforms, and Installers</b>
</h5>La conformidad del software garantiza que la versión de Kubernetes de cada proveedor sea compatible con las API requeridas.
<br><br><br>
<button id="conformance" class="button" onClick="updateSrc(this.id)">See Conformance Partners</button>
<br><br>Conviértete en <a href="https://www.cncf.io/certification/software-conformance/">Certified Kubernetes</a>
</center>
</div>
<div class="col-nav">
<center>
<h5><b>Kubernetes Training Partners</b></h5>
<br>Partners de formación que ofrecen cursos de alta calidad y con una amplia experiencia en formación de tecnologías nativas de la nube.
<br><br><br><br>
<button id="ktp" class="button" onClick="updateSrc(this.id)">See KTP Partners</button>
<br><br>Conviértete en <a href="https://www.cncf.io/certification/training/">KTP</a>
</center>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script type="text/javascript">
var defaultLink = "https://landscape.cncf.io/category=kubernetes-certified-service-provider&format=card-mode&grouping=category&embed=yes";
var firstLink = "https://landscape.cncf.io/category=certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer&format=card-mode&grouping=category&embed=yes";
var secondLink = "https://landscape.cncf.io/category=kubernetes-training-partner&format=card-mode&grouping=category&embed=yes";
function updateSrc(buttonId) {
if (buttonId == "kcsp") {
$("#landscape").attr("src",defaultLink);
window.location.hash = "#kcsp";
}
if (buttonId == "conformance") {
$("#landscape").attr("src",firstLink);
window.location.hash = "#conformance";
}
if (buttonId == "ktp") {
$("#landscape").attr("src",secondLink);
window.location.hash = "#ktp";
}
}
// Automatically load the correct iframe based on the URL fragment
document.addEventListener('DOMContentLoaded', function() {
var showContent = "kcsp";
if (window.location.hash) {
console.log('hash is:', window.location.hash.substring(1));
showContent = window.location.hash.substring(1);
}
updateSrc(showContent);
});
</script>
<body>
<div id="frameHolder">
<iframe id="landscape" frameBorder="0" scrolling="no" style="width: 1px; min-width: 100%" src=""></iframe>
<script src="https://landscape.cncf.io/iframeResizer.js"></script>
<h5>Kubernetes trabaja con socios para crear una base de código sólida y vibrante que admita un espectro de plataformas complementarias.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Kubernetes Certified Service Providers</b>
</h5>
<br>Proveedores de servicios con amplia experiencia ayudando a las empresas a adoptar Kubernetes con éxito.
<br><br><br>
<button class="button landscape-trigger landscape-default" data-landscape-types="kubernetes-certified-service-provider" id="kcsp">See KCSP Partners</button>
<br><br>Conviértete en
<a href="https://www.cncf.io/certification/kcsp/">KCSP</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Distributions, Hosted Platforms, and Installers</b>
</h5>La conformidad del software garantiza que la versión de Kubernetes de cada proveedor sea compatible con las API requeridas.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer" id="conformance">See Conformance Partners</button>
<br><br>Conviértete en
<a href="https://www.cncf.io/certification/software-conformance/">Certified Kubernetes</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Kubernetes Training Partners</b>
</h5>
<br>Partners de formación que ofrecen cursos de alta calidad y con una amplia experiencia en formación de tecnologías nativas de la nube.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="kubernetes-training-partner" id="ktp">See KTP Partners</button>
<br><br>Conviértete en
<a href="https://www.cncf.io/certification/training/">KTP</a>?
</center>
</div>
</div>
</body>
</main>
{{< cncf-landscape helpers=true >}}
</section>
<style>
{{< include "partner-style.css" >}}
</style>
<script>
{{< include "partner-script.js" >}}
</script>

View File

@ -15,8 +15,8 @@ Kubespray se base sur des outils de provisioning, des [paramètres](https://gith
* Le support des principales distributions Linux:
* Container Linux de CoreOS
* Debian Jessie, Stretch, Wheezy
* Ubuntu 16.04, 18.04
* CentOS/RHEL 7
* Ubuntu 16.04, 18.04, 20.04, 22.04
* CentOS/RHEL 7, 8
* Fedora/CentOS Atomic
* openSUSE Leap 42.3/Tumbleweed
* des tests d'intégration continue
@ -33,8 +33,8 @@ Afin de choisir l'outil le mieux adapté à votre besoin, veuillez lire [cette c
Les serveurs doivent être installés en s'assurant des éléments suivants:
* **Ansible v2.6 (ou version plus récente) et python-netaddr installés sur la machine qui exécutera les commandes Ansible**
* **Jinja 2.9 (ou version plus récente) est nécessaire pour exécuter les playbooks Ansible**
* **Ansible v2.11 (ou version plus récente) et python-netaddr installés sur la machine qui exécutera les commandes Ansible**
* **Jinja 2.11 (ou version plus récente) est nécessaire pour exécuter les playbooks Ansible**
* Les serveurs cibles doivent avoir **accès à Internet** afin de télécharger les images Docker. Autrement, une configuration supplémentaire est nécessaire, (se référer à [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
* Les serveurs cibles doivent être configurés afin d'autoriser le transfert IPv4 (**IPv4 forwarding**)
* **Votre clé ssh doit être copiée** sur tous les serveurs faisant partie de votre inventaire Ansible.

View File

@ -22,15 +22,15 @@ Laman ini akan menjabarkan beberapa *add-ons* yang tersedia serta tautan instruk
* [ACI](https://www.github.com/noironetworks/aci-containers) menyediakan integrasi jaringan kontainer dan keamanan jaringan dengan Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) merupakan penyedia jaringan L3 yang aman dan *policy* jaringan.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) menggabungkan Flannel dan Calico, menyediakan jaringan serta *policy* jaringan.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) menggabungkan Flannel dan Calico, menyediakan jaringan serta *policy* jaringan.
* [Cilium](https://github.com/cilium/cilium) merupakan *plugin* jaringan L3 dan *policy* jaringan yang dapat menjalankan *policy* HTTP/API/L7 secara transparan. Mendukung mode *routing* maupun *overlay/encapsulation*.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus.
* [Contiv](http://contiv.github.io) menyediakan jaringan yang dapat dikonfigurasi (*native* L3 menggunakan BGP, *overlay* menggunakan vxlan, klasik L2, dan Cisco-SDN/ACI) untuk berbagai penggunaan serta *policy framework* yang kaya dan beragam. Proyek Contiv merupakan proyek [open source](http://github.com/contiv). Laman [instalasi](http://github.com/contiv/install) ini akan menjabarkan cara instalasi, baik untuk klaster dengan kubeadm maupun non-kubeadm.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), yang berbasis dari [Tungsten Fabric](https://tungsten.io), merupakan sebuah proyek *open source* yang menyediakan virtualisasi jaringan *multi-cloud* serta platform manajemen *policy*. Contrail dan Tungsten Fabric terintegrasi dengan sistem orkestrasi lainnya seperti Kubernetes, OpenShift, OpenStack dan Mesos, serta menyediakan mode isolasi untuk mesin virtual (VM), kontainer/pod dan *bare metal*.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) merupakan solusi jaringan yang mendukung multipel jaringan pada Kubernetes.
* Multus merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) merupakan platform SDN yang menyediakan *policy-based* jaringan antara Kubernetes Pods dan non-Kubernetes *environment* dengan *monitoring* visibilitas dan keamanan.
* [Romana](http://romana.io) merupakan solusi jaringan *Layer* 3 untuk jaringan pod yang juga mendukung [*NetworkPolicy* API](/id/docs/concepts/services-networking/network-policies/). Instalasi Kubeadm *add-on* ini tersedia [di sini](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) menyediakan jaringan serta *policy* jaringan, yang akan membawa kedua sisi dari partisi jaringan, serta tidak membutuhkan basis data eksternal.

View File

@ -22,14 +22,14 @@ I componenti aggiuntivi in ogni sezione sono ordinati alfabeticamente - l'ordine
* [ACI](https://www.github.com/noironetworks/aci-containers) fornisce funzionalità integrate di networking e sicurezza di rete con Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) è un provider di sicurezza e rete L3 sicuro.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unisce Flannel e Calico, fornendo i criteri di rete e di rete.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unisce Flannel e Calico, fornendo i criteri di rete e di rete.
* [Cilium](https://github.com/cilium/cilium) è un plug-in di criteri di rete e di rete L3 in grado di applicare in modo trasparente le politiche HTTP / API / L7. Sono supportate entrambe le modalità di routing e overlay / incapsulamento.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) consente a Kubernetes di connettersi senza problemi a una scelta di plugin CNI, come Calico, Canal, Flannel, Romana o Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) consente a Kubernetes di connettersi senza problemi a una scelta di plugin CNI, come Calico, Canal, Flannel, Romana o Weave.
* [Contiv](https://contivpp.io/) offre networking configurabile (L3 nativo con BGP, overlay con vxlan, L2 classico e Cisco-SDN / ACI) per vari casi d'uso e un ricco framework di policy. Il progetto Contiv è completamente [open source](http://github.com/contiv). Il [programma di installazione](http://github.com/contiv/install) fornisce sia opzioni di installazione basate su kubeadm che non su Kubeadm.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) è un provider di reti sovrapposte che può essere utilizzato con Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) è una soluzione di rete che supporta più reti in Kubernetes.
* Multus è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) fornisce l'integrazione tra VMware NSX-T e orchestratori di contenitori come Kubernetes, oltre all'integrazione tra NSX-T e piattaforme CaaS / PaaS basate su container come Pivotal Container Service (PKS) e OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) fornisce l'integrazione tra VMware NSX-T e orchestratori di contenitori come Kubernetes, oltre all'integrazione tra NSX-T e piattaforme CaaS / PaaS basate su container come Pivotal Container Service (PKS) e OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1/docs/kubernetes-1-installation.rst) è una piattaforma SDN che fornisce una rete basata su policy tra i pod di Kubernetes e non Kubernetes con visibilità e monitoraggio della sicurezza.
* [Romana](https://github.com/romana/romana) è una soluzione di rete Layer 3 per pod network che supporta anche [API NetworkPolicy](/docs/concepts/services-networking/network-policies/). Dettagli di installazione del componente aggiuntivo di Kubeadm disponibili [qui](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) fornisce i criteri di rete e di rete, continuerà a funzionare su entrambi i lati di una partizione di rete e non richiede un database esterno.

View File

@ -277,7 +277,7 @@ kubectl create secret tls my-tls-secret \
Bootstrap token Secretは、Secretの`type`を`bootstrap.kubernetes.io/token`に明示的に指定することで作成できます。このタイプのSecretは、ードのブートストラッププロセス中に使用されるトークン用に設計されています。よく知られているConfigMapに署名するために使用されるトークンを格納します。
Bootstrap toke Secretは通常、`kube-system`namespaceで作成され`bootstrap-token-<token-id>`の形式で名前が付けられます。ここで`<token-id>`はトークンIDの6文字の文字列です。
Bootstrap token Secretは通常、`kube-system`namespaceで作成され`bootstrap-token-<token-id>`の形式で名前が付けられます。ここで`<token-id>`はトークンIDの6文字の文字列です。
Kubernetesマニフェストとして、Bootstrap token Secretは次のようになります。
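The manifest that this sentence introduces is not part of the hunk shown here; a sketch consistent with the documented bootstrap token format (the token ID, token secret, and expiration are example values only) would be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must follow the bootstrap-token-<token-id> convention.
  name: bootstrap-token-5emitj
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "Bootstrap token used by nodes joining the cluster."
  token-id: 5emitj                  # 6-character token ID (example value)
  token-secret: kq4gihvszzgn1p0r    # 16-character token secret (example value)
  expiration: "2025-01-01T00:00:00Z"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```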

View File

@ -6,19 +6,20 @@ weight: 30
<!-- overview -->
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:
* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* Container Linux by CoreOS
* Ubuntu 16.04, 18.04, 20.04, 22.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Ubuntu 16.04, 18.04
* CentOS/RHEL/Oracle Linux 7
* Fedora 28
* Fedora 34, 35
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
* continuous integration tests
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
@ -34,13 +35,13 @@ To choose a tool which best fits your use case, read [this comparison](https://g
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
* **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands**
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
* **Ansible v2.11 and python-netaddr are installed on the machine that will run Ansible commands**
* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**
* **Your ssh key must be copied** to all the servers part of your inventory
* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall
* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified
* **Your ssh key must be copied** to all the servers in your inventory
* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. You should disable your firewall in order to avoid any issues during deployment
* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified
Kubespray provides the following utilities to help provision your environment:

View File

@ -28,7 +28,7 @@ APIService를 구현하는 가장 일반적인 방법은 클러스터 내에 실
Extension-apiserver는 kube-apiserver로 오가는 연결의 레이턴시가 낮아야 한다.
kube-apiserver로 부터의 디스커버리 요청은 왕복 레이턴시가 5초 이내여야 한다.
extention API server가 레이턴시 요구 사항을 달성할 수 없는 경우 이를 충족할 수 있도록 변경하는 것을 고려한다.
Extension API server가 레이턴시 요구 사항을 달성할 수 없는 경우 이를 충족할 수 있도록 변경하는 것을 고려한다.
## {{% heading "whatsnext" %}}

View File

@ -19,7 +19,7 @@ content_type: concept
위한 팀 마일스톤과 개발 브랜치를 관리한다. 본 섹션은 한글화팀의 팀 마일스톤 관리에 특화된
내용을 다룬다.
한글화팀은 `master` 브랜치에서 분기한 개발 브랜치를 사용한다. 개발 브랜치 이름은 다음과 같은
한글화팀은 `main` 브랜치에서 분기한 개발 브랜치를 사용한다. 개발 브랜치 이름은 다음과 같은
구조를 갖는다.
`dev-<소스 버전>-ko.<팀 마일스톤>`
@ -46,7 +46,7 @@ content_type: concept
- [CLA 서명 없음, 병합할 수 없음](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fko)
- [LGTM 필요](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fko+-label%3Algtm+)
- [LGTM 보유, 문서 승인 필요](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fko+label%3Algtm)
- [퀵윈(Quick Wins)](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fko%22+)
- [퀵윈(Quick Wins)](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amain+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fko%22+)
팀 마일스톤 일정과 PR 랭글러는 커뮤니티 슬랙 내 [#kubernetes-docs-ko 채널](https://kubernetes.slack.com/archives/CA1MMR86S)에 공지된다.
@ -221,7 +221,7 @@ API 오브젝트의 필드 이름, 파일 이름, 경로와 같은 내용은 독
한글화 용어집의 개선(추가, 수정, 삭제 등)을 위한 과정은 다음과 같다.
1. 컨트리뷰터가 개선이 필요한 용어을 파악하면, ISSUE를 생성하여 개선 필요성을 공유하거나 `master` 브랜치에
1. 컨트리뷰터가 개선이 필요한 용어을 파악하면, ISSUE를 생성하여 개선 필요성을 공유하거나 `main` 브랜치에
PR을 생성하여 개선된 용어를 제안한다.
1. 개선 제안에 대한 논의는 ISSUE 및 PR을 통해서 이루어지며, 한글화팀 회의를 통해 확정한다.

View File

@ -13,10 +13,10 @@ Kubespray는 [Ansible](https://docs.ansible.com/) 플레이북, [인벤토리](h
* 고가용성을 지닌 클러스터
* 구성할 수 있는 속성들
* 대부분의 인기있는 리눅스 배포판들에 대한 지원
* Ubuntu 16.04, 18.04, 20.04
* Ubuntu 16.04, 18.04, 20.04, 22.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Fedora 31, 32
* Fedora 34, 35
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
@ -33,7 +33,7 @@ Kubespray는 [Ansible](https://docs.ansible.com/) 플레이북, [인벤토리](h
언더레이(underlay) [요건](https://github.com/kubernetes-sigs/kubespray#requirements)을 만족하는 프로비전 한다.
* **Ansible의 명령어를 실행하기 위해 Ansible v 2.9와 Python netaddr 라이브러리가 머신에 설치되어 있어야 한다**
* **Ansible의 명령어를 실행하기 위해 Ansible v 2.11와 Python netaddr 라이브러리가 머신에 설치되어 있어야 한다**
* **Ansible 플레이북을 실행하기 위해 2.11 (혹은 그 이상) 버전의 Jinja가 필요하다**
* 타겟 서버들은 docker 이미지를 풀(pull) 하기 위해 반드시 인터넷에 접속할 수 있어야 한다. 아니라면, 추가적인 설정을 해야 한다 ([오프라인 환경 확인하기](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* 타겟 서버들의 **IPv4 포워딩**이 활성화되어야 한다

View File

@ -2,7 +2,7 @@ apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# 클러스터에 대한 송신(egress) 트래픽을 제어하기 위해
# "cluster"를 name으로 사용한다. 기타 지원되는 값은 "etcd" 및 "master"이다.
# "cluster"를 name으로 사용한다. 기타 지원되는 값은 "etcd" 및 "controlplane"이다.
- name: cluster
connection:
# API 서버와 Konnectivity 서버 간의 프로토콜을

View File

@ -0,0 +1,20 @@
---
title: RBAC (Controle de Acesso Baseado em Funções)
id: rbac
date: 2018-04-12
full_link: /docs/reference/access-authn-authz/rbac/
short_description: >
Gerencia decisões de autorização, permitindo que os administradores configurem dinamicamente políticas de acesso por meio da API do Kubernetes.
aka:
- Role Based Access Control
- Controle de Acesso Baseado em Funções
tags:
- security
- fundamental
---
Gerencia decisões de autorização, permitindo que os administradores configurem dinamicamente políticas de acesso por meio da {{< glossary_tooltip text="API do Kubernetes" term_id="kubernetes-api" >}}.
<!--more-->
O RBAC (do inglês - Role-Based Access Control) utiliza *funções*, que contêm regras de permissão, e *atribuição das funções*, que concedem as permissões definidas em uma função a um conjunto de usuários.
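As a brief illustrative sketch of those two pieces (the names and the subject below are assumptions), a Role and a RoleBinding could look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                     # permission rules, scoped to one namespace
metadata:
  name: pod-reader             # hypothetical name
  namespace: default
rules:
- apiGroups: [""]              # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding              # grants the Role's permissions to subjects
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                   # example user only
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```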

View File

@ -7,85 +7,48 @@ cid: partners
---
<section id="users">
<main class="main-section">
<h5>Kubernetes phối hợp làm việc với các đối tác để tạo ra một codebase mạnh mẽ hỗ trợ một loạt các nền tảng bổ sung.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Các nhà cung cấp dịch vụ được chứng nhận bởi Kubernetes (KCSP)</b>
</h5>
<br>Các nhà cung cấp dịch vụ được chứng nhận với bề dày kinh nghiệm sẽ trợ giúp các tổ chức kinh doanh, các công ty ứng dụng Kubernetes nhanh chóng.
<br><br><br>
<button id="kcsp" class="button" onClick="updateSrc(this.id)">Xem các đối tác KCSP</button>
<br><br>Bạn muốn trở thành một <a href="https://www.cncf.io/certification/kcsp/">KCSP</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Các nhà phân phối Kubernetes, dịch vụ hosting, dịch vụ cài đặt</b>
</h5>Tiêu chuẩn tương thích về phần mềm bảo đảm rằng các phiên bản Kubernetes từ các nhà cung cấp sẽ hỗ trợ các bộ API được yêu cầu bởi khách hàng.
<br><br><br>
<button id="conformance" class="button" onClick="updateSrc(this.id)">Xem các đối tác Conformance</button>
<br><br>Bạn muốn trở thành một <a href="https://www.cncf.io/certification/software-conformance/">Kubernetes Certified</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5><b>Các đối tác đào tạo Kubernetes (KTP)</b></h5>
<br>Các đối tác đào tạo được chứng nhận đã và đang sở hữu bề dày kinh nghiệm trong lĩnh vực đám mây.
<br><br><br><br>
<button id="ktp" class="button" onClick="updateSrc(this.id)">Xem các đối tác KTP</button>
<br><br>Bạn muốn trở thành một <a href="https://www.cncf.io/certification/training/">KTP</a>?
</center>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script type="text/javascript">
var defaultLink = "https://landscape.cncf.io/category=kubernetes-certified-service-provider&format=card-mode&grouping=category&embed=yes";
var firstLink = "https://landscape.cncf.io/category=certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer&format=card-mode&grouping=category&embed=yes";
var secondLink = "https://landscape.cncf.io/category=kubernetes-training-partner&format=card-mode&grouping=category&embed=yes";
function updateSrc(buttonId) {
if (buttonId == "kcsp") {
$("#landscape").attr("src",defaultLink);
window.location.hash = "#kcsp";
}
if (buttonId == "conformance") {
$("#landscape").attr("src",firstLink);
window.location.hash = "#conformance";
}
if (buttonId == "ktp") {
$("#landscape").attr("src",secondLink);
window.location.hash = "#ktp";
}
}
// Automatically load the correct iframe based on the URL fragment
document.addEventListener('DOMContentLoaded', function() {
var showContent = "kcsp";
if (window.location.hash) {
console.log('hash is:', window.location.hash.substring(1));
showContent = window.location.hash.substring(1);
}
updateSrc(showContent);
});
</script>
<body>
<div id="frameHolder">
<iframe id="landscape" title="CNCF Landscape" frameBorder="0" scrolling="no" style="width: 1px; min-width: 100%" src=""></iframe>
<script src="https://landscape.cncf.io/iframeResizer.js"></script>
<h5>Kubernetes phối hợp làm việc với các đối tác để tạo ra một codebase mạnh mẽ hỗ trợ một loạt các nền tảng bổ sung.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Các nhà cung cấp dịch vụ được chứng nhận bởi Kubernetes (KCSP)</b>
</h5>
<br>Các nhà cung cấp dịch vụ được chứng nhận với bề dày kinh nghiệm sẽ trợ giúp các tổ chức kinh doanh, các công ty ứng dụng Kubernetes nhanh chóng.
<br><br><br>
<button class="button landscape-trigger landscape-default" data-landscape-types="kubernetes-certified-service-provider" id="kcsp">Xem các đối tác KCSP</button>
<br><br>Bạn muốn trở thành một
<a href="https://www.cncf.io/certification/kcsp/">KCSP</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Các nhà phân phối Kubernetes, dịch vụ hosting, dịch vụ cài đặt</b>
</h5>Tiêu chuẩn tương thích về phần mềm bảo đảm rằng các phiên bản Kubernetes từ các nhà cung cấp sẽ hỗ trợ các bộ API được yêu cầu bởi khách hàng.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer" id="conformance">Xem các đối tác Conformance</button>
<br><br>Bạn muốn trở thành một
<a href="https://www.cncf.io/certification/software-conformance/">Kubernetes Certified</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Các đối tác đào tạo Kubernetes (KTP)</b>
</h5>
<br>Các đối tác đào tạo được chứng nhận đã và đang sở hữu bề dày kinh nghiệm trong lĩnh vực đám mây.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="kubernetes-training-partner" id="ktp">Xem các đối tác KTP</button>
<br><br>Bạn muốn trở thành một
<a href="https://www.cncf.io/certification/training/">KTP</a>?
</center>
</div>
</div>
</body>
</main>
{{< cncf-landscape helpers=true >}}
</section>
<style>
{{< include "partner-style.css" >}}
</style>
<script>
{{< include "partner-script.js" >}}
</script>

View File

@ -1,27 +1,35 @@
---
layout: blog
title: "认识我们的贡献者 - 亚太地区(印度地区)"
date: 2022-01-10T12:00:00+0000
date: 2022-01-10
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
---
<!--
layout: blog
title: "Meet Our Contributors - APAC (India region)"
date: 2022-01-10T12:00:00+0000
date: 2022-01-10
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
-->
<!--
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
-->
**作者和采访者:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan) [Atharva Shinde](https://github.com/Atharva-Shinde) [Avinesh Tripathi](https://github.com/AvineshTripathi) [Debabrata Panigrahi](https://github.com/Debanitrkl) [Kunal Verma](https://github.com/verma-kunal) [Pranshu Srivastava](https://github.com/PranshuSrivastava) [Pritish Samal](https://github.com/CIPHERTron) [Purneswar Prasad](https://github.com/PurneswarPrasad) [Vedant Kakde](https://github.com/vedant-kakde)
**作者和采访者:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan)、
[Atharva Shinde](https://github.com/Atharva-Shinde)、
[Avinesh Tripathi](https://github.com/AvineshTripathi)、
[Debabrata Panigrahi](https://github.com/Debanitrkl)、
[Kunal Verma](https://github.com/verma-kunal)、
[Pranshu Srivastava](https://github.com/PranshuSrivastava)、
[Pritish Samal](https://github.com/CIPHERTron)、
[Purneswar Prasad](https://github.com/PurneswarPrasad)、
[Vedant Kakde](https://github.com/vedant-kakde)
<!--
**Editor:** [Priyanka Saggu](https://psaggu.com)
-->
**编辑:** [Priyanka Saggu](https://psaggu.com)
**编辑:** [Priyanka Saggu](https://psaggu.com)
---
@ -39,10 +47,10 @@ Welcome to the first episode of the APAC edition of the "Meet Our Contributors"
<!--
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.
-->
在这篇文章中,我们将向介绍来自印度地区的五位优秀贡献者,他们一直在以各种方式积极地为上游 Kubernetes 项目做贡献,同时也是众多社区倡议的领导者和维护者。
在这篇文章中,我们将向大家介绍来自印度地区的五位优秀贡献者,他们一直在以各种方式积极地为上游 Kubernetes 项目做贡献,同时也是众多社区倡议的领导者和维护者。
<!--
💫 *Let's get started, so without further ado…*
💫 *Let's get started, so without further ado…*
-->
💫 *闲话少说,我们开始吧。*
@ -70,7 +78,9 @@ To the newcomers, Arsh helps plan their early contributions sustainably.
> actually handle. This can often lead to burnout in later stages. It's much more sustainable
> to work on things iteratively._
-->
> _我鼓励大家以可持续的方式为社区做贡献。我的意思是一个人很容易在早期的时候非常有热情并且承担了很多超出个人实际能力的事情。这通常会导致后期的倦怠。迭代地处理事情会让大家对社区的贡献变得可持续。_
> 我鼓励大家以可持续的方式为社区做贡献。
> 我的意思是,一个人很容易在早期的时候非常有热情,并且承担了很多超出个人实际能力的事情。
> 这通常会导致后期的倦怠。迭代地处理事情会让大家对社区的贡献变得可持续。
## [Kunal Kushwaha](https://github.com/kunal-kushwaha)
@ -80,7 +90,7 @@ Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also
Kunal Kushwaha 是 Kubernetes 营销委员会的核心成员。他同时也是 [CNCF 学生计划](https://community.cncf.io/cloud-native-students/) 的创始人之一。他还在 1.22 版本周期中担任通信经理一职。
<!--
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
-->
在他的第一年结束时Kunal 开始为 [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) 项目做贡献。然后,他被推选从事同一项目,此项目是 Google Summer of Code 的一部分。Kunal 在 Google Summer of Code、Google Code-in 等项目中指导过很多人。
@ -99,7 +109,11 @@ As an open-source enthusiast, he believes that diverse participation in the comm
> because being a beginner is a skill and you can bring new perspectives to the
> organisation._
-->
> _我相信如果你发现自己在一个了解不多的项目当中那是件好事因为现在你可以一边贡献一边学习社区也会帮助你。它帮助我获得了很多技能认识了来自世界各地的人也帮助了他们。你可以在这个过程中学习自己不一定必须是专家。请重视非代码贡献因为作为初学者这是一项技能你可以为组织带来新的视角。_
> 我相信,如果你发现自己在一个了解不多的项目当中,那是件好事,
> 因为现在你可以一边贡献一边学习,社区也会帮助你。
> 它帮助我获得了很多技能,认识了来自世界各地的人,也帮助了他们。
> 你可以在这个过程中学习,自己不一定必须是专家。
> 请重视非代码贡献,因为作为初学者这是一项技能,你可以为组织带来新的视角。
## [Madhav Jivarajani](https://github.com/MadhavJivrajani)
@ -112,12 +126,19 @@ Madhav Jivarajani 在 VMware 上游 Kubernetes 稳定性团队工作。他于 20
<!--
Among several significant contributions are his recent efforts toward the Archival of [design proposals](https://github.com/kubernetes/community/issues/6055), refactoring the ["groups" codebase](https://github.com/kubernetes/k8s.io/pull/2713) under k8s-infra repository to make it mockable and testable, and improving the functionality of the [GitHub k8s bot](https://github.com/kubernetes/test-infra/issues/23129).
-->
在这几个重要项目中,他最近致力于 [设计方案](https://github.com/kubernetes/community/issues/6055) 的存档工作,重构 k8s-infra 存储库下的 ["组"代码库](https://github.com/kubernetes/k8s.io/pull/2713) ,使其具有可模拟性和可测试性,以及改进 [GitHub k8s 机器人](https://github.com/kubernetes/test-infra/issues/23129) 的功能。
在这几个重要项目中,他最近致力于[设计方案](https://github.com/kubernetes/community/issues/6055)的存档工作,
重构 k8s-infra 存储库下的 ["组"代码库](https://github.com/kubernetes/k8s.io/pull/2713)
使其具有可模拟性和可测试性,以及改进 [GitHub k8s 机器人](https://github.com/kubernetes/test-infra/issues/23129)的功能。
<!--
In addition to his technical efforts, Madhav oversees many projects aimed at assisting new contributors. He organises bi-weekly "KEP reading club" sessions to help newcomers understand the process of adding new features, deprecating old ones, and making other key changes to the upstream project. He has also worked on developing [Katacoda scenarios](https://github.com/kubernetes-sigs/contributor-katacoda) to assist new contributors to become acquainted with the process of contributing to k/k. In addition to his current efforts to meet with community members every week, he has organised several [new contributors workshops (NCW)](https://www.youtube.com/watch?v=FgsXbHBRYIc).
-->
除了在技术方面的贡献Madhav 还监督许多旨在帮助新贡献者的项目。他每两周组织一次的“KEP 阅读俱乐部”会议,帮助新人了解添加新功能、摒弃旧功能以及对上游项目进行其他关键更改的过程。他还致力于开发 [Katacoda 场景](https://github.com/kubernetes-sigs/contributor-katacoda) ,以帮助新的贡献者在为 k/k 做贡献的过程更加熟练。目前除了每周与社区成员会面外,他还组织了几个 [新贡献者讲习班NCW](https://www.youtube.com/watch?v=FgsXbHBRYIc) 。
除了在技术方面的贡献Madhav 还监督许多旨在帮助新贡献者的项目。
他每两周组织一次的 “KEP 阅读俱乐部” 会议,帮助新人了解添加新功能、
摒弃旧功能以及对上游项目进行其他关键更改的过程。他还致力于开发
[Katacoda 场景](https://github.com/kubernetes-sigs/contributor-katacoda)
以帮助新的贡献者在为 k/k 做贡献的过程更加熟练。目前除了每周与社区成员会面外,
他还组织了几个[新贡献者讲习班NCW](https://www.youtube.com/watch?v=FgsXbHBRYIc)。
<!--
> _I initially did not know much about Kubernetes. I joined because the community was
@ -127,7 +148,11 @@ In addition to his technical efforts, Madhav oversees many projects aimed at ass
> as a result I continued to dig deeper into Kubernetes and the design of it.
> I am a systems nut & thus Kubernetes was an absolute goldmine for me._
-->
> _一开始我对 Kubernetes 了解并不多。我加入社区是因为社区超级友好。但让我留下来的不仅仅是人,还有项目本身。我在社区中不会感到不知所措,这是因为我能够在感兴趣的和正在讨论的主题中获得尽可能多的背景和知识。因此,我将继续深入探讨 Kubernetes 及其设计。我是一个系统迷kubernetes 对我来说绝对是一个金矿。_
> 一开始我对 Kubernetes 了解并不多。我加入社区是因为社区超级友好。
> 但让我留下来的不仅仅是人,还有项目本身。我在社区中不会感到不知所措,
> 这是因为我能够在感兴趣的和正在讨论的主题中获得尽可能多的背景和知识。
> 因此,我将继续深入探讨 Kubernetes 及其设计。
> 我是一个系统迷kubernetes 对我来说绝对是一个金矿。
## [Rajas Kakodkar](https://github.com/rajaskakodkar)
@ -152,7 +177,8 @@ One of the first challenges he ran across was that he was in a different time zo
> cutting edge tech but more importantly because I get to work with
> awesome people and help in solving real world problems._
-->
> _我喜欢为 kubernetes 做出贡献不仅因为我可以从事尖端技术工作更重要的是我可以和优秀的人一起工作并帮助解决现实问题。_
> 我喜欢为 kubernetes 做出贡献,不仅因为我可以从事尖端技术工作,
> 更重要的是,我可以和优秀的人一起工作,并帮助解决现实问题。
## [Rajula Vineet Reddy](https://github.com/rajula96reddy)
@ -180,18 +206,19 @@ Rajas 说,参与项目会议和跟踪各种项目角色对于了解社区至
> _The first step to_ “come forward and start” _is hard. But it's all gonna be
> smooth after that. Just take that jump._
-->
> _我发现社区非常有帮助而且总是“你得到的回报和你贡献的一样多”。你参与得越多你就越会了解、学习和贡献新东西。_
> _“挺身而出”的第一步是艰难的。但在那之后一切都会顺利的。勇敢地参与进来吧。_
> 我发现社区非常有帮助,而且总是“你得到的回报和你贡献的一样多”。
> 你参与得越多,你就越会了解、学习和贡献新东西。
>
> “挺身而出”的第一步是艰难的。但在那之后一切都会顺利的。勇敢地参与进来吧。
---
<!--
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. We're thrilled to have other folks assisting us in reaching out to even more wonderful individuals of the community. Your suggestions would be much appreciated.
-->
如果您对我们下一步应该采访谁有任何意见/建议,请在 #sig-contribex 中告知我们。我们很高兴有其他人帮助我们接触社区中更优秀的人。我们将不胜感激。
如果你对我们下一步应该采访谁有任何意见/建议,请在 #sig-contribex 中告知我们。我们很高兴有其他人帮助我们接触社区中更优秀的人。我们将不胜感激。
<!--
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋
-->
我们下期见。最后,祝大家都能快乐地为社区做贡献!👋

View File

@ -1,14 +1,14 @@
---
layout: blog
title: "认识我们的贡献者 - 亚太地区(澳大利亚-新西兰地区)"
date: 2022-03-16T12:00:00+0000
date: 2022-03-16
slug: meet-our-contributors-au-nz-ep-02
canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/
---
<!--
layout: blog
title: "Meet Our Contributors - APAC (Aus-NZ region)"
date: 2022-03-16T12:00:00+0000
date: 2022-03-16
slug: meet-our-contributors-au-nz-ep-02
canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/
-->
@ -17,16 +17,16 @@ canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-a
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Brad McCoy](https://github.com/bradmccoydev), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Jayesh Srivastava](https://github.com/jayesh-srivastava), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Priyanka Saggu](github.com/Priyankasaggu11929/), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
-->
**作者和采访者:**
[Anubhav Vardhan](https://github.com/anubha-v-ardhan),
[Atharva Shinde](https://github.com/Atharva-Shinde),
[Avinesh Tripathi](https://github.com/AvineshTripathi),
[Brad McCoy](https://github.com/bradmccoydev),
[Debabrata Panigrahi](https://github.com/Debanitrkl),
[Jayesh Srivastava](https://github.com/jayesh-srivastava),
[Kunal Verma](https://github.com/verma-kunal),
[Pranshu Srivastava](https://github.com/PranshuSrivastava),
[Priyanka Saggu](github.com/Priyankasaggu11929/),
[Purneswar Prasad](https://github.com/PurneswarPrasad),
[Anubhav Vardhan](https://github.com/anubha-v-ardhan)
[Atharva Shinde](https://github.com/Atharva-Shinde)
[Avinesh Tripathi](https://github.com/AvineshTripathi)
[Brad McCoy](https://github.com/bradmccoydev)
[Debabrata Panigrahi](https://github.com/Debanitrkl)
[Jayesh Srivastava](https://github.com/jayesh-srivastava)
[Kunal Verma](https://github.com/verma-kunal)
[Pranshu Srivastava](https://github.com/PranshuSrivastava)
[Priyanka Saggu](github.com/Priyankasaggu11929/)
[Purneswar Prasad](https://github.com/PurneswarPrasad)
[Vedant Kakde](https://github.com/vedant-kakde)
---
@ -79,8 +79,8 @@ Caleb 也是 [CloudNative NZ](https://www.meetup.com/cloudnative-nz/)
<!--
> _There need to be more outreach in APAC and the educators and universities must pick up Kubernetes, as they are very slow and about 8+ years out of date. NZ tends to rather pay overseas than educate locals on the latest cloud tech Locally._
-->
> _亚太地区需要更多的外联活动,教育工作者和大学必须学习 Kubernetes因为他们非常缓慢
而且已经落后了8年多。新西兰倾向于在海外付费而不是教育当地人最新的云技术。_
> 亚太地区需要更多的外联活动,教育工作者和大学必须学习 Kubernetes因为他们非常缓慢
> 而且已经落后了8年多。新西兰倾向于在海外付费而不是教育当地人最新的云技术。
## [Dylan Graham](https://github.com/DylanGraham)
@ -107,7 +107,7 @@ He believes that consistent attendance at community/project meetings, taking on
<!--
> _The feeling of being a part of a large community is really special. I've met some amazing people, even some before the pandemic in real life :)_
-->
> _成为大社区的一份子感觉真的很特别。我遇到了一些了不起的人,甚至是在现实生活中疫情发生之前。_
> 成为大社区的一份子感觉真的很特别。我遇到了一些了不起的人,甚至是在现实生活中疫情发生之前。
## [Hippie Hacker](https://github.com/hh)
@ -137,7 +137,7 @@ He recommends that new contributors use pair programming.
<!--
> _The cross pollination of approaches and two pairs of eyes on the same work can often yield a much more amplified effect than a PR comment / approval alone can afford._
-->
> _针对一个项目,多人关注和交叉交流往往比单独的评审、批准 PR 能产生更大的效果。_
> 针对一个项目,多人关注和交叉交流往往比单独的评审、批准 PR 能产生更大的效果。
## [Nick Young](https://github.com/youngnick)
@ -154,7 +154,7 @@ His contribution path was notable in that he began working on major areas of the
他的贡献之路是引人注目的,因为他很早就在 Kubernetes 项目的主要领域工作,这改变了他的轨迹。
<!--
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
-->
他断言,一个新贡献者能做的最好的事情就是“开始贡献”。当然,如果与他的工作息息相关,那好极了;
然而,把非工作时间投入到贡献中去,从长远来看可以在工作上获得回报。
@ -163,8 +163,8 @@ He asserts the best thing a new contributor can do is to "start contributing". N
<!--
> _Just being active and contributing will get you a long way. Once you've been active for a while, you'll find that you're able to answer questions, which will mean you're asked questions, and before you know it you are an expert._
-->
> _只要积极主动,做出贡献,你就可以走很远。一旦你活跃了一段时间,你会发现你能够解答别人的问题,
这意味着会有人请教你或和你讨论,在你意识到这一点之前,你就已经是专家了。_
> 只要积极主动,做出贡献,你就可以走很远。一旦你活跃了一段时间,你会发现你能够解答别人的问题,
> 这意味着会有人请教你或和你讨论,在你意识到这一点之前,你就已经是专家了。
---

View File

@ -0,0 +1,523 @@
---
layout: blog
title: "Kubernetes 1.24: 观星者"
date: 2022-05-03
slug: kubernetes-1-24-release-announcement
---
<!--
layout: blog
title: "Kubernetes 1.24: Stargazer"
date: 2022-05-03
slug: kubernetes-1-24-release-announcement
-->
<!--
**Authors**: [Kubernetes 1.24 Release Team](https://git.k8s.io/sig-release/releases/release-1.24/release-team.md)
We are excited to announce the release of Kubernetes 1.24, the first release of 2022!
This release consists of 46 enhancements: fourteen enhancements have graduated to stable,
fifteen enhancements are moving to beta, and thirteen enhancements are entering alpha.
Also, two features have been deprecated, and two features have been removed.
-->
**作者**: [Kubernetes 1.24 发布团队](https://git.k8s.io/sig-release/releases/release-1.24/release-team.md)
我们很高兴地宣布 Kubernetes 1.24 的发布,这是 2022 年的第一个版本!
这个版本包括 46 个增强功能14 个增强功能已经升级到稳定版15 个增强功能正在进入 Beta 版,
13 个增强功能正在进入 Alpha 阶段。另外,有两个功能被废弃了,还有两个功能被删除了。
<!--
## Major Themes
### Dockershim Removed from kubelet
After its deprecation in v1.20, the dockershim component has been removed from the kubelet in Kubernetes v1.24.
From v1.24 onwards, you will need to either use one of the other [supported runtimes](/docs/setup/production-environment/container-runtimes/) (such as containerd or CRI-O)
or use cri-dockerd if you are relying on Docker Engine as your container runtime.
For more information about ensuring your cluster is ready for this removal, please
see [this guide](/blog/2022/03/31/ready-for-dockershim-removal/).
-->
## 主要议题
### 从 kubelet 中删除 Dockershim
在 v1.20 版本中被废弃后dockershim 组件已被从 Kubernetes v1.24 版本的 kubelet 中移除。
从 v1.24 开始,如果你依赖 Docker Engine 作为容器运行时,
则需要使用其他[受支持的运行时](/zh-cn/docs/setup/production-environment/container-runtimes/)之一
(如 containerd 或 CRI-O或使用 cri-dockerd。
有关确保集群已为此移除做好准备的更多信息,请参阅[本指南](/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/)。
<!--
### Beta APIs Off by Default
[New beta APIs will not be enabled in clusters by default](https://github.com/kubernetes/enhancements/issues/3136).
Existing beta APIs and new versions of existing beta APIs will continue to be enabled by default.
-->
### 默认情况下关闭 Beta API
[新的 beta API 默认不会在集群中启用](https://github.com/kubernetes/enhancements/issues/3136)。
默认情况下,现有的 Beta API 及其新版本将继续被启用。
<!--
### Signing Release Artifacts
Release artifacts are [signed](https://github.com/kubernetes/enhancements/issues/3031) using [cosign](https://github.com/sigstore/cosign)
signatures,
and there is experimental support for [verifying image signatures](/docs/tasks/administer-cluster/verify-signed-images/).
Signing and verification of release artifacts is part of [increasing software supply chain security for the Kubernetes release process](https://github.com/kubernetes/enhancements/issues/3027).
-->
### 签署发布工件
发布工件使用 [cosign](https://github.com/sigstore/cosign) 签名进行[签名](https://github.com/kubernetes/enhancements/issues/3031)
并且提供了[验证镜像签名](/zh-cn/docs/tasks/administer-cluster/verify-signed-images/)的实验性支持。
发布工件的签名和验证是[提高 Kubernetes 发布过程的软件供应链安全性](https://github.com/kubernetes/enhancements/issues/3027)
的一部分。
<!--
### OpenAPI v3
Kubernetes 1.24 offers beta support for publishing its APIs in the [OpenAPI v3 format](https://github.com/kubernetes/enhancements/issues/2896).
-->
### OpenAPI v3
Kubernetes 1.24 提供了以 [OpenAPI v3 格式](https://github.com/kubernetes/enhancements/issues/2896)发布其 API 的 Beta 支持。
<!--
### Storage Capacity and Volume Expansion Are Generally Available
[Storage capacity tracking](https://github.com/kubernetes/enhancements/issues/1472)
supports exposing currently available storage capacity via [CSIStorageCapacity objects](/docs/concepts/storage/storage-capacity/#api)
and enhances scheduling of pods that use CSI volumes with late binding.
[Volume expansion](https://github.com/kubernetes/enhancements/issues/284) adds support
for resizing existing persistent volumes.
-->
### 存储容量和卷扩展普遍可用
[存储容量跟踪](https://github.com/kubernetes/enhancements/issues/1472)支持通过
[CSIStorageCapacity 对象](/zh-cn/docs/concepts/storage/storage-capacity/#api)公开当前可用的存储容量,
并增强了对使用延迟绑定 CSI 卷的 Pod 的调度。
[卷的扩展](https://github.com/kubernetes/enhancements/issues/284)增加了对调整现有持久性卷大小的支持。
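CSIStorageCapacity 对象通常由 CSI 驱动所附带的 external-provisioner 自动发布,用户一般无需手动创建。
下面仅给出一个示意性的对象结构(名称、存储类、容量与拓扑标签均为假设,实际字段取值以你的 CSI 驱动为准):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity-zone-a      # 假设的对象名称
  namespace: default
storageClassName: fast-csi           # 假设的 StorageClass 名称
capacity: 100Gi                      # 该拓扑下当前可用的存储容量
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: zone-a   # 假设的拓扑标签
```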
<!--
### NonPreemptingPriority to Stable
This feature adds [a new option to PriorityClasses](https://github.com/kubernetes/enhancements/issues/902),
which can enable or disable pod preemption.
-->
### NonPreemptingPriority 升级至稳定版
此功能[为 PriorityClasses 添加了一个新选项](https://github.com/kubernetes/enhancements/issues/902),可以启用或禁用 Pod 抢占。
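下面是一个简化的示意清单(名称与数值均为假设),展示如何通过 `preemptionPolicy: Never`
创建一个不会抢占其他 Pod 的 PriorityClass

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # 假设的名称
value: 1000000                        # 优先级数值,数值越大优先级越高
preemptionPolicy: Never               # 使用此优先级的 Pod 不会抢占低优先级 Pod
globalDefault: false
description: "高优先级但不触发抢占,仅影响调度队列中的排序。"
```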
<!--
### Storage Plugin Migration
Work is underway to [migrate the internals of in-tree storage plugins](https://github.com/kubernetes/enhancements/issues/625) to call out to CSI Plugins
while maintaining the original API.
The [Azure Disk](https://github.com/kubernetes/enhancements/issues/1490)
and [OpenStack Cinder](https://github.com/kubernetes/enhancements/issues/1489) plugins
have both been migrated.
-->
### 存储插件迁移
目前正在进行[迁移树内存储插件的内部组件](https://github.com/kubernetes/enhancements/issues/625)工作,
以便在保持原有 API 的同时调用 CSI 插件。[Azure Disk](https://github.com/kubernetes/enhancements/issues/1490)
和 [OpenStack Cinder](https://github.com/kubernetes/enhancements/issues/1489) 插件都已迁移。
<!--
### gRPC Probes Graduate to Beta
With Kubernetes 1.24, the [gRPC probes functionality](https://github.com/kubernetes/enhancements/issues/2727)
has entered beta and is available by default. You can now [configure startup, liveness, and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for your gRPC app
natively within Kubernetes without exposing an HTTP endpoint or
using an extra executable.
-->
### gRPC 探针升级到 Beta
在 Kubernetes 1.24 中,[gRPC 探测功能](https://github.com/kubernetes/enhancements/issues/2727)
已进入 Beta 阶段,默认可用。现在,你可以在 Kubernetes 中为你的 gRPC
应用程序原生地[配置启动、存活和就绪探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)
而无需暴露 HTTP 端点或使用额外的可执行文件。
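下面是一个最小化的示意配置Pod 名称、镜像与端口均为假设,并假定应用实现了 gRPC 健康检查协议
`grpc.health.v1.Health`),展示如何为 gRPC 服务配置原生的存活探针:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-app                          # 假设的 Pod 名称
spec:
  containers:
  - name: server
    image: example.com/grpc-server:1.0    # 假设的镜像
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090                        # kubelet 直接按 gRPC 健康检查协议探测该端口
      initialDelaySeconds: 10
      periodSeconds: 10
```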
<!--
### Kubelet Credential Provider Graduates to Beta
Originally released as Alpha in Kubernetes 1.20, the kubelet's support for
[image credential providers](/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/)
has now graduated to Beta.
This allows the kubelet to dynamically retrieve credentials for a container image registry
using exec plugins rather than storing credentials on the node's filesystem.
-->
### Kubelet 凭证提供者毕业至 Beta
kubelet 对[镜像凭证提供者](/zh-cn/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/)的支持最初在
Kubernetes 1.20 中作为 Alpha 特性发布,现在已升级到 Beta。这允许 kubelet 使用 exec 插件动态获取容器镜像仓库的凭据,而不是将凭据存储在节点的文件系统上。
<!--
### Contextual Logging in Alpha
Kubernetes 1.24 has introduced [contextual logging](https://github.com/kubernetes/enhancements/issues/3077)
that enables the caller of a function to control all aspects of logging (output formatting, verbosity, additional values, and names).
-->
### Alpha 中的上下文日志记录
Kubernetes 1.24 引入了[上下文日志](https://github.com/kubernetes/enhancements/issues/3077)
这使函数的调用者能够控制日志记录的所有方面(输出格式、详细程度、附加值和名称)。
<!--
### Avoiding Collisions in IP allocation to Services
Kubernetes 1.24 introduces a new opt-in feature that allows you to
[soft-reserve a range for static IP address assignments](/docs/concepts/services-networking/service/#service-ip-static-sub-range)
to Services.
With the manual enablement of this feature, the cluster will prefer automatic assignment from
the pool of Service IP addresses, thereby reducing the risk of collision.
-->
### 避免为 Service 分配 IP 时发生冲突
Kubernetes 1.24 引入了一项新的选择加入功能,
允许你[为服务的静态 IP 地址分配软保留范围](/zh-cn/docs/concepts/services-networking/service/#service-ip-static-sub-range)。
手动启用此功能后,集群将优先从 Service IP 地址池中自动分配地址,从而降低冲突风险。
<!--
A Service `ClusterIP` can be assigned:
* dynamically, which means the cluster will automatically pick a free IP within the configured Service IP range.
* statically, which means the user will set one IP within the configured Service IP range.
Service `ClusterIP` are unique; hence, trying to create a Service with a `ClusterIP` that has already been allocated will return an error.
-->
服务的 `ClusterIP` 可以按照以下两种方式分配:
* 动态,这意味着集群将自动在配置的服务 IP 范围内选择一个空闲 IP。
* 静态,这意味着用户将在配置的服务 IP 范围内设置一个 IP。
服务 `ClusterIP` 是唯一的;因此,尝试使用已分配的 `ClusterIP` 创建服务将返回错误。
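下面的示意清单(服务名、标签选择器与 IP 地址均为假设)展示了静态分配的写法:显式指定 `clusterIP`
时,该地址必须位于集群配置的 Service IP 范围内且尚未被占用;若省略该字段,则由集群动态分配一个空闲 IP。

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-static-ip         # 假设的服务名
spec:
  selector:
    app: demo                  # 假设的标签选择器
  clusterIP: 10.96.100.50      # 静态指定;省略此字段则由集群在 Service IP 范围内动态分配
  ports:
  - port: 80
    targetPort: 8080
```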
<!--
### Dynamic Kubelet Configuration is Removed from the Kubelet
After being deprecated in Kubernetes 1.22, Dynamic Kubelet Configuration has been removed from the kubelet. The feature will be removed from the API server in Kubernetes 1.26.
-->
### 从 Kubelet 中删除动态 Kubelet 配置
在 Kubernetes 1.22 中被弃用后,动态 Kubelet 配置已从 kubelet 中删除。
该功能将从 Kubernetes 1.26 的 API 服务器中删除。
<!--
## CNI Version-Related Breaking Change
Before you upgrade to Kubernetes 1.24, please verify that you are using/upgrading to a container
runtime that has been tested to work correctly with this release.
For example, the following container runtimes are being prepared, or have already been prepared, for Kubernetes:
* containerd v1.6.4 and later, v1.5.11 and later
* CRI-O 1.24 and later
-->
## CNI 版本相关的重大更改
在升级到 Kubernetes 1.24 之前,请确认你正在使用/升级到经过测试可以在此版本中正常工作的容器运行时。
例如,以下容器运行时已经或正在为 Kubernetes 做好准备:
* containerd v1.6.4 及更高版本v1.5.11 及更高版本
* CRI-O 1.24 及更高版本
<!--
Service issues exist for pod CNI network setup and tear down in containerd
v1.6.0v1.6.3 when the CNI plugins have not been upgraded and/or the CNI config
version is not declared in the CNI config files. The containerd team reports, "these issues are resolved in containerd v1.6.4."
With containerd v1.6.0v1.6.3, if you do not upgrade the CNI plugins and/or
declare the CNI config version, you might encounter the following "Incompatible
CNI versions" or "Failed to destroy network for sandbox" error conditions.
-->
当 CNI 插件尚未升级和/或未在 CNI 配置文件中声明 CNI 配置版本时containerd v1.6.0 至 v1.6.3
中存在 Pod CNI 网络创建和销毁方面的问题。containerd 团队报告说“这些问题已在 containerd v1.6.4 中得到解决。”
在 containerd v1.6.0 至 v1.6.3 版本中,如果你不升级 CNI 插件和/或声明 CNI 配置版本,
你可能会遇到 “Incompatible CNI versions” 或 “Failed to destroy network for sandbox” 错误。
<!--
## CSI Snapshot
_This information was added after initial publication._
[VolumeSnapshot v1beta1 CRD has been removed](https://github.com/kubernetes/enhancements/issues/177).
Volume snapshot and restore functionality for Kubernetes and the Container Storage Interface (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, moved to GA in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.20 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://git.k8s.io/enhancements/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [Volume Snapshot GA blog](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/) for more information.
-->
## CSI 快照
**此信息是在首次发布后添加的。**
[VolumeSnapshot v1beta1 CRD 已被移除](https://github.com/kubernetes/enhancements/issues/177)。
Kubernetes 和容器存储接口CSI的卷快照与恢复功能提供了标准化的 API 设计CRD并为 CSI 卷驱动程序添加了
PV 快照/恢复支持,该功能已在 v1.20 中进入 GA。VolumeSnapshot v1beta1 在 v1.20 中被弃用,现已不再受支持。
有关详细信息,请参阅 [KEP-177: CSI 快照](https://git.k8s.io/enhancements/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot)
和[卷快照 GA 博客](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/)。
<!--
## Other Updates
### Graduations to Stable
This release saw fourteen enhancements promoted to stable:
-->
## 其他更新
### 毕业至稳定版
在此版本中,有 14 项增强功能升级为稳定版:
<!--
* [Container Storage Interface (CSI) Volume Expansion](https://github.com/kubernetes/enhancements/issues/284)
* [Pod Overhead](https://github.com/kubernetes/enhancements/issues/688): Account for resources tied to the pod sandbox but not specific containers.
* [Add non-preempting option to PriorityClasses](https://github.com/kubernetes/enhancements/issues/902)
* [Storage Capacity Tracking](https://github.com/kubernetes/enhancements/issues/1472)
* [OpenStack Cinder In-Tree to CSI Driver Migration](https://github.com/kubernetes/enhancements/issues/1489)
* [Azure Disk In-Tree to CSI Driver Migration](https://github.com/kubernetes/enhancements/issues/1490)
* [Efficient Watch Resumption](https://github.com/kubernetes/enhancements/issues/1904): Watch can be efficiently resumed after kube-apiserver reboot.
* [Service Type=LoadBalancer Class Field](https://github.com/kubernetes/enhancements/issues/1959): Introduce a new Service annotation `service.kubernetes.io/load-balancer-class` that allows multiple implementations of `type: LoadBalancer` Services in the same cluster.
* [Indexed Job](https://github.com/kubernetes/enhancements/issues/2214): Add a completion index to Pods of Jobs with a fixed completion count.
* [Add Suspend Field to Jobs API](https://github.com/kubernetes/enhancements/issues/2232): Add a suspend field to the Jobs API to allow orchestrators to create jobs with more control over when pods are created.
* [Pod Affinity NamespaceSelector](https://github.com/kubernetes/enhancements/issues/2249): Add a `namespaceSelector` field for to pod affinity/anti-affinity spec.
* [Leader Migration for Controller Managers](https://github.com/kubernetes/enhancements/issues/2436): kube-controller-manager and cloud-controller-manager can apply new controller-to-controller-manager assignment in HA control plane without downtime.
* [CSR Duration](https://github.com/kubernetes/enhancements/issues/2784): Extend the CertificateSigningRequest API with a mechanism to allow clients to request a specific duration for the issued certificate.
-->
* [容器存储接口CSI卷扩展](https://github.com/kubernetes/enhancements/issues/284)
* [Pod 开销](https://github.com/kubernetes/enhancements/issues/688): 核算与 Pod 沙箱绑定的资源,但不包括特定的容器。
* [向 PriorityClass 添加非抢占选项](https://github.com/kubernetes/enhancements/issues/902)
* [存储容量跟踪](https://github.com/kubernetes/enhancements/issues/1472)
* [OpenStack Cinder In-Tree 到 CSI 驱动程序迁移](https://github.com/kubernetes/enhancements/issues/1489)
* [Azure Disk In-Tree 到 CSI 驱动程序迁移](https://github.com/kubernetes/enhancements/issues/1490)
* [高效的监视恢复](https://github.com/kubernetes/enhancements/issues/1904)
kube-apiserver 重新启动后,可以高效地恢复监视。
* [Service Type=LoadBalancer 类字段](https://github.com/kubernetes/enhancements/issues/1959)
引入新的服务注解 `service.kubernetes.io/load-balancer-class`
允许在同一个集群中提供 `type: LoadBalancer` 服务的多个实现。
* [带索引的 Job](https://github.com/kubernetes/enhancements/issues/2214):为具有固定完成计数的 Job 的 Pod 添加完成索引(见本列表后的示例)。
* [在 Job API 中增加 suspend 字段](https://github.com/kubernetes/enhancements/issues/2232)
在 Job API 中增加一个 suspend 字段,允许编排器在创建 Job 时更好地控制 Pod 的创建时机(见本列表后的示例)。
* [Pod 亲和性 NamespaceSelector](https://github.com/kubernetes/enhancements/issues/2249)
为 Pod 亲和性/反亲和性规约添加一个 `namespaceSelector` 字段。
* [控制器管理器的领导者迁移](https://github.com/kubernetes/enhancements/issues/2436)
kube-controller-manager 和 cloud-controller-manager 可以在 HA 控制平面中重新分配新的控制器到控制器管理器,而无需停机。
* [CSR 期限](https://github.com/kubernetes/enhancements/issues/2784)
用一种机制来扩展证书签名请求 API允许客户为签发的证书请求一个特定的期限。
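下面是一个同时演示带索引的 Job 与 `suspend` 字段的示意清单(名称与镜像均为假设):
`completionMode: Indexed` 让每个 Pod 通过 `JOB_COMPLETION_INDEX` 环境变量获得自己的索引,
`suspend: true` 则让 Job 在创建时先挂起,由上层编排器决定何时开始创建 Pod。

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo           # 假设的名称
spec:
  completions: 5
  parallelism: 2
  completionMode: Indexed      # 为每个 Pod 分配完成索引
  suspend: true                # 创建时挂起;将其改为 false 后才会开始创建 Pod
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.35
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```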
<!--
### Major Changes
This release saw two major changes:
* [Dockershim Removal](https://github.com/kubernetes/enhancements/issues/2221)
* [Beta APIs are off by Default](https://github.com/kubernetes/enhancements/issues/3136)
-->
### 主要变化
此版本有两个主要变化:
* [Dockershim 移除](https://github.com/kubernetes/enhancements/issues/2221)
* [Beta APIs 默认关闭](https://github.com/kubernetes/enhancements/issues/3136)
<!--
### Release Notes
Check out the full details of the Kubernetes 1.24 release in our [release notes](https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.24.md).
-->
### 发行说明
在我们的[发行说明](https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.24.md) 中查看 Kubernetes 1.24 版本的完整详细信息。
<!--
### Availability
Kubernetes 1.24 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.24.0).
To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/) or run local
Kubernetes clusters using containers as “nodes”, with [kind](https://kind.sigs.k8s.io/).
You can also easily install 1.24 using [kubeadm](/docs/setup/independent/create-cluster-kubeadm/).
-->
### 可用性
Kubernetes 1.24 可在 [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.24.0) 上下载。
要开始使用 Kubernetes请查看这些[交互式教程](/zh-cn/docs/tutorials/)
或使用 [kind](https://kind.sigs.k8s.io/) 在本地运行以容器作为“节点”的 Kubernetes 集群。
你还可以使用 [kubeadm](/zh-cn/docs/setup/independent/create-cluster-kubeadm/) 轻松安装 1.24。
<!--
### Release Team
This release would not have been possible without the combined efforts of committed individuals
comprising the Kubernetes 1.24 release team. This team came together to deliver all of the components
that go into each Kubernetes release, including code, documentation, release notes, and more.
Special thanks to James Laverack, our release lead, for guiding us through a successful release cycle,
and to all of the release team members for the time and effort they put in to deliver the v1.24
release for the Kubernetes community.
-->
### 发布团队
如果没有 Kubernetes 1.24 发布团队全体成员的共同努力,这个版本是不可能实现的。
该团队齐心协力交付每个 Kubernetes 版本中的所有组件,包括代码、文档、发行说明等。
特别感谢我们的发布负责人 James Laverack 指导我们完成了一个成功的发布周期,
并感谢所有发布团队成员投入时间和精力为 Kubernetes 社区提供 v1.24 版本。
<!--
### Release Theme and Logo
**Kubernetes 1.24: Stargazer**
{{< figure src="/images/blog/2022-05-03-kubernetes-release-1.24/kubernetes-1.24.png" alt="" class="release-logo" >}}
The theme for Kubernetes 1.24 is _Stargazer_.
-->
### 发布主题和徽标
**Kubernetes 1.24: 观星者**
{{< figure src="/images/blog/2022-05-03-kubernetes-release-1.24/kubernetes-1.24.png" alt="" class="release-logo" >}}
Kubernetes 1.24 的主题是**观星者Stargazer**。
<!--
Generations of people have looked to the stars in awe and wonder, from ancient astronomers to the
scientists who built the James Webb Space Telescope. The stars have inspired us, set our imagination
alight, and guided us through long nights on difficult seas.
With this release we gaze upwards, to what is possible when our community comes together. Kubernetes
is the work of hundreds of contributors across the globe and thousands of end-users supporting
applications that serve millions. Every one is a star in our sky, helping us chart our course.
-->
从古代天文学家到建造 James Webb 太空望远镜的科学家,几代人都怀着敬畏和惊奇的心情仰望星空。
星星启发了我们,点燃了我们的想象力,并引导我们在艰难的海上度过了漫长的夜晚。
在这个版本中,我们仰望星空,展望当社区齐心协力时所能实现的一切。
Kubernetes 是全球数百名贡献者和数千名最终用户共同努力的成果,
他们所支撑的应用程序服务着数以百万计的用户。每个人都是我们天空中的一颗星,帮助我们规划航线。
<!--
The release logo is made by [Britnee Laverack](https://www.instagram.com/artsyfie/), and depicts a telescope set upon starry skies and the
[Pleiades](https://en.wikipedia.org/wiki/Pleiades), often known in mythology as the “Seven Sisters”. The number seven is especially auspicious
for the Kubernetes project, and is a reference back to our original “Project Seven” name.
This release of Kubernetes is named for those that would look towards the night sky and wonder — for
all the stargazers out there. ✨
-->
发布标志由 [Britnee Laverack](https://www.instagram.com/artsyfie/) 制作,
描绘了一架对准星空与[昴星团](https://en.wikipedia.org/wiki/Pleiades)的望远镜,昴星团在神话中常被称为“七姐妹”。
数字 7 对 Kubernetes 项目来说特别吉祥,是对我们最初的“项目七Project Seven”名称的致意。
这个 Kubernetes 版本是为那些仰望夜空、心怀好奇的人而命名的,献给所有的观星者。✨
<!--
### User Highlights
* Check out how leading retail e-commerce company [La Redoute used Kubernetes, alongside other CNCF projects, to transform and streamline its software delivery lifecycle](https://www.cncf.io/case-studies/la-redoute/) - from development to operations.
* Trying to ensure no change to an API call would cause any breaks, [Salt Security built its microservices entirely on Kubernetes, and it communicates via gRPC while Linkerd ensures messages are encrypted](https://www.cncf.io/case-studies/salt-security/).
* In their effort to migrate from private to public cloud, [Allainz Direct engineers redesigned its CI/CD pipeline in just three months while managing to condense 200 workflows down to 10-15](https://www.cncf.io/case-studies/allianz/).
* Check out how [Bink, a UK based fintech company, updated its in-house Kubernetes distribution with Linkerd to build a cloud-agnostic platform that scales as needed whilst allowing them to keep a close eye on performance and stability](https://www.cncf.io/case-studies/bink/).
* Using Kubernetes, the Dutch organization [Stichting Open Nederland](http://www.stichtingopennederland.nl/) created a testing portal in just one-and-a-half months to help safely reopen events in the Netherlands. The [Testing for Entry (Testen voor Toegang)](https://www.testenvoortoegang.org/) platform [leveraged the performance and scalability of Kubernetes to help individuals book over 400,000 COVID-19 testing appointments per day. ](https://www.cncf.io/case-studies/true/)
* Working alongside SparkFabrik and utilizing Backstage, [Santagostino created the developer platform Samaritan to centralize services and documentation, manage the entire lifecycle of services, and simplify the work of Santagostino developers](https://www.cncf.io/case-studies/santagostino/).
-->
### 用户亮点
* 了解领先的零售电子商务公司
[La Redoute 如何使用 Kubernetes 以及其他 CNCF 项目来转变和简化](https://www.cncf.io/case-studies/la-redoute/)
其从开发到运营的软件交付生命周期。
* 为了确保对 API 调用的更改不会导致任何中断,[Salt Security 完全在 Kubernetes 上构建了它的微服务,
它通过 gRPC 进行通信,而 Linkerd 确保消息是加密的](https://www.cncf.io/case-studies/salt-security/)。
* 为了从私有云迁移到公共云,[Allianz Direct 工程师在短短三个月内重新设计了其 CI/CD 管道,
同时设法将 200 个工作流压缩到 10-15 个](https://www.cncf.io/case-studies/allianz/)。
* 了解[英国金融科技公司 Bink 如何借助 Linkerd 更新其内部 Kubernetes 发行版,构建一个与云无关、可按需扩展的平台,
同时密切关注性能和稳定性](https://www.cncf.io/case-studies/bink/)。
* 利用 Kubernetes荷兰组织 [Stichting Open Nederland](http://www.stichtingopennederland.nl/)
在短短一个半月内创建了一个测试门户网站,以帮助安全地重新开放荷兰的活动。
[入门测试 (Testen voor Toegang)](https://www.testenvoortoegang.org/)
平台[利用 Kubernetes 的性能和可扩展性来帮助个人每天预订超过 400,000 个 COVID-19 测试预约](https://www.cncf.io/case-studies/true/)。
* 与 SparkFabrik 合作并利用 Backstage[Santagostino 创建了开发人员平台 Samaritan 来集中服务和文档,
管理服务的整个生命周期,并简化 Santagostino 开发人员的工作](https://www.cncf.io/case-studies/santagostino/)。
<!--
### Ecosystem Updates
* KubeCon + CloudNativeCon Europe 2022 will take place in Valencia, Spain, from 16 20 May 2022! You can find more information about the conference and registration on the [event site](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/).
* In the [2021 Cloud Native Survey](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/), the CNCF saw record Kubernetes and container adoption. Take a look at the [results of the survey](https://www.cncf.io/reports/cncf-annual-survey-2021/).
* The [Linux Foundation](https://www.linuxfoundation.org/) and [The Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) announced the availability of a new [Cloud Native Developer Bootcamp](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322) to provide participants with the knowledge and skills to design, build, and deploy cloud native applications. Check out the [announcement](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/) to learn more.
-->
### 生态系统更新
* KubeCon + CloudNativeCon Europe 2022 将于 2022 年 5 月 16 日至 20 日在西班牙巴伦西亚举行!
你可以在[活动网站](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)上找到有关会议和注册的更多信息。
* 在 [2021 年云原生调查](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/)
CNCF 看到了创纪录的 Kubernetes 和容器采用。参阅[调查结果](https://www.cncf.io/reports/cncf-annual-survey-2021/)。
* [Linux 基金会](https://www.linuxfoundation.org/)和[云原生计算基金会](https://www.cncf.io/) (CNCF)
宣布推出新的 [云原生开发者训练营](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322)
为参与者提供设计、构建和部署云原生应用程序的知识和技能。查看[公告](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/)以了解更多信息。
<!--
### Project Velocity
The [CNCF K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1&refresh=15m) project
aggregates a number of interesting data points related to the velocity of Kubernetes and various
sub-projects. This includes everything from individual contributions to the number of companies that
are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
In the v1.24 release cycle, which [ran for 17 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24) (January 10 to May 3), we saw contributions from [1029 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions) and [1179 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All&var-repo_name=kubernetes%2Fkubernetes).
-->
### 项目速度
[CNCF K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1&refresh=15m) 项目
汇总了许多与 Kubernetes 和各种子项目的速度相关的有趣数据点。这包括从个人贡献到做出贡献的公司数量的所有内容,
并且说明了为发展这个生态系统而付出的努力的深度和广度。
在[运行 17 周](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24)
1 月 10 日至 5 月 3 日)的 v1.24 发布周期中,我们看到 [1029 家公司](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions)
和 [1179 人](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All&var-repo_name=kubernetes%2Fkubernetes) 的贡献。
<!--
## Upcoming Release Webinar
Join members of the Kubernetes 1.24 release team on Tue May 24, 2022 9:45am 11am PT to learn about
the major features of this release, as well as deprecations and removals to help plan for upgrades.
For more information and registration, visit the [event page](https://community.cncf.io/e/mck3kd/)
on the CNCF Online Programs site.
-->
## 即将发布的网络研讨会
在太平洋时间 2022 年 5 月 24 日星期二上午 9:45 至上午 11 点加入 Kubernetes 1.24 发布团队的成员,
了解此版本的主要功能以及弃用和删除,以帮助规划升级。有关更多信息和注册,
请访问 CNCF 在线计划网站上的[活动页面](https://community.cncf.io/e/mck3kd/)。
<!--
## Get Involved
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://git.k8s.io/community/sig-list.md) (SIGs) that align with your interests.
Have something youd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://git.k8s.io/community/communication), and through the channels below:
* Find out more about contributing to Kubernetes at the [Kubernetes Contributors](https://www.kubernetes.dev/) website
* Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
* Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
* Join the community on [Slack](http://slack.k8s.io/)
* Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes).
* Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
* Read more about whats happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
* Learn more about the [Kubernetes Release Team](https://git.k8s.io/sig-release/release-team)
-->
## 参与进来
参与 Kubernetes 的最简单方法是加入符合你兴趣的众多 [特别兴趣组](https://git.k8s.io/community/sig-list.md)(SIG) 之一。
你有什么想向 Kubernetes 社区广播的内容吗?
在我们的每周的[社区会议](https://git.k8s.io/community/communication)上分享你的声音,并通过以下渠道:
* 在 [Kubernetes Contributors](https://www.kubernetes.dev/) 网站上了解有关为 Kubernetes 做出贡献的更多信息
* 在 Twitter 上关注我们 [@Kubernetesio](https://twitter.com/kubernetesio) 以获取最新更新
* 加入社区讨论 [Discuss](https://discuss.kubernetes.io/)
* 加入 [Slack](http://slack.k8s.io/) 社区
* 在 [Server Fault](https://serverfault.com/questions/tagged/kubernetes) 上发布问题(或回答问题)。
* 分享你的 Kubernetes [故事](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
* 在[博客](/zh-cn/blog/)上阅读有关 Kubernetes 正在发生的事情的更多信息
* 详细了解 [Kubernetes 发布团队](https://git.k8s.io/sig-release/release-team)
View File
@ -14,7 +14,7 @@ slug: annual-report-summary-2021
<!--
**Author:** Paris Pittman (Steering Committee)
-->
**作者:**Paris Pittman指导委员会
**作者:** Paris Pittman指导委员会
<!--
Last year, we published our first [Annual Report Summary](/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/) for 2020 and it's already time for our second edition!
View File
@ -1,130 +1,187 @@
---
title: 案例研究Ygrene
linkTitle: Ygrene
case_study_styles: true
cid: caseStudies
css: /css/style_ygrene.css
logo: ygrene_featured_logo.png
featured: true
weight: 48
quote: >
我们必须改变一些实践和代码,以及构建的方式,但我们能够在一个月左右的时间内将我们的主要系统安装到 Kubernetes 上,然后在两个月内投入生产。这对于金融公司来说是非常快的。
new_case_study_styles: true
heading_background: /images/case-studies/ygrene/banner1.jpg
heading_title_logo: /images/ygrene_logo.png
subheading: >
Ygrene: 使用原生云为金融行业带来安全性和可扩展性
case_study_details:
- 公司: Ygrene
- 地点: Petaluma, Calif.
- 行业: 清洁能源融资
---
<!--
title: Ygrene Case Study
linkTitle: Ygrene
case_study_styles: true
cid: caseStudies
logo: ygrene_featured_logo.png
featured: true
weight: 48
quote: >
We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company.
<!-- <div class="banner1 desktop" style="background-image: url('/images/CaseStudy_ygrene_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/ygrene_logo.png" style="margin-bottom:-1%" class="header_logo"><br> <div class="subhead">Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
new_case_study_styles: true
heading_background: /images/case-studies/ygrene/banner1.jpg
heading_title_logo: /images/ygrene_logo.png
subheading: >
Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
case_study_details:
- Company: Ygrene
- Location: Petaluma, Calif.
- Industry: Clean energy financing
-->
</div></h1> -->
<div class="banner1">
<h1> 案例研究:<img src="/images/ygrene_logo.png" style="margin-bottom:-1%" class="header_logo"><br> <div class="subhead">Ygrene: 使用原生云为金融行业带来安全性和可扩展性
<!--
<h2>Challenges</h2>
-->
<h2>挑战</h2>
</div></h1>
<!--
<p>A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn't require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.</p>
-->
<p>作为一家 PACE清洁能源资产评估融资公司Ygrene 自 2010 年以来已经为超过 10 亿美元的贷款提供了资金。为了审批和处理这些贷款“我们有很多数据源需要汇总同时也有很多系统需要对这些数据进行处理”Ygrene 开发经理 Austin Adams 说。该公司当时使用的是大型服务器,“我们已经达到了垂直扩展的极限。我们的系统非常不稳定,仅仅是实时的后台数据处理请求就让它不堪重负。用户看到的性能很差。我们需要一个不要求对代码库进行大量重构的解决方案。”作为一家金融公司Ygrene 还需要确保安全地交付其应用程序。</p>
<!--
<h2>Solution</h2>
-->
<h2>解决方案</h2>
</div>
<!-- <div class="details">
Company &nbsp;<b>Ygrene</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Petaluma, Calif.</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Clean energy financing</b>
</div> -->
<div class="details">
公司 &nbsp;<b>Ygrene</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;位置 &nbsp;<b>佩塔卢马,加利福尼亚州</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;行业 &nbsp;<b>清洁能源融资</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>挑战</h2>
<!-- A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldnt require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely. -->
作为一家 PACE清洁能源资产评估融资公司Ygrene 自2010年以来已经为超过10亿的贷款提供资金。为了批准和处理这些贷款“我们有很多正在聚合的数据源而且我们也有许多系统需要对这些数据进行改动”Ygrene 开发经理 Austin Adams 说。该公司正在使用大量服务器“我们刚刚达到能够垂直扩展它们的极限。我们有一个非常不稳定的系统它变得不知所措要求只是做后台数据处理的实时。用户看到的性能很差。我们需要一个解决方案不需要我们对代码库进行大量重构。作为一家金融公司Ygrene 还需要确保他们安全地传输应用程序。”
<br>
<h2>解决方案</h2>
<!-- Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: <a href="https://kubernetes.io/">Kubernetes</a> to help scale out vertically and distribute workloads, <a href="https://github.com/theupdateframework/notary">Notary</a> to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and <a href="https://www.fluentd.org/">Fluentd</a> for "observing every part of our stack," all running on <a href="https://aws.amazon.com/ec2/spot/">Amazon EC2 Spot</a>. -->
从 Engine Yard 和 Amazon Elastic Beanstalk 上迁移了应用后Ygrene 团队采用云原生技术和实践:使用<a href="https://kubernetes.io/">Kubernetes</a>来帮助垂直扩展和分配工作负载,使用<a href="https://github.com/theupdateframework/notary"> Notary </a>加入构建时间控制和获取使用第三方依赖的可信赖 Docker 镜像,使用<a href="https://www.fluentd.org/">Fluentd</a>“掌握堆栈中的所有情况”,这些都运行在<a href="https://aws.amazon.com/ec2/spot/">Amazon EC2 Spot</a>上。
</div>
<div class="col2">
<!--
<p>Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: <a href="https://kubernetes.io/">Kubernetes</a> to help scale out vertically and distribute workloads, <a href="https://github.com/theupdateframework/notary">Notary</a> to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and <a href="https://www.fluentd.org/">Fluentd</a> for "observing every part of our stack," all running on <a href="https://aws.amazon.com/ec2/spot/">Amazon EC2 Spot</a>.</p>
-->
<p>从 Engine Yard 和 Amazon Elastic Beanstalk 上迁移了应用后Ygrene 团队采用云原生技术和实践:使用<a href="https://kubernetes.io/"> Kubernetes </a>来帮助垂直扩展和分配工作负载,使用<a href="https://github.com/theupdateframework/notary"> Notary </a>加入构建时间控制和获取使用第三方依赖的可信赖 Docker 镜像,使用<a href="https://www.fluentd.org/"> Fluentd </a>“掌握堆栈中的所有情况”,这些都运行在 <a href="https://aws.amazon.com/ec2/spot/"> Amazon EC2 Spot </a>上。</p>
<!--
<h2>Impact</h2>
-->
<h2>影响</h2>
<!-- Before, deployments typically took three to four hours, and two or three months worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "were able to deploy three or four times a week, with just one weeks or two days worth of work," Adams says. "Were deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—were adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldnt tell our investors and team members that we knew what was going on." -->
以前部署通常需要三到四个小时而且每周或每两周要把一些两三个月工作量的任务在系统占用低的时候进行部署。现在他们用5分钟来配置 Kubernetes然后用一个小时进行整体部署与烟雾测试。Adams 说:“我们每周可以部署三到四次,只需一周或两天的工作量。”“我们在工作周、白天的任意时间进行部署,甚至不需要停机。以前我们不得不请求企业批准,以关闭系统,因为即使在半夜,人们也可能正在访问服务。现在,我们可以部署、上传代码和迁移数据库,而无需关闭系统。公司获得新功能,而不必担心某些业务会丢失或延迟。”此外,通过使用 kops 项目Ygrene 现在可以用以前成本的十分之一使用 AWS EC2 Spot 运行其 Kubernetes 集群。Adams 说,这些云原生技术“改变了可扩展性、可观察性和安全性(我们正在添加新的非常安全的数据源)的游戏。”“没有 Kubernetes、Notary 和 Fluent我们就无法告诉投资者和团队成员我们知道刚刚发生了什么事情。”
</div>
<!--
<p>Before, deployments typically took three to four hours, and two or three months' worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "we're able to deploy three or four times a week, with just one week's or two days' worth of work," Adams says. "We're deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—we're adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn't tell our investors and team members that we knew what was going on."</p>
-->
<p>以前,部署通常需要三到四个小时,而且每隔一两周才会在低流量时段部署两三个月积累的工作。现在Kubernetes 部分只需 5 分钟,包含冒烟测试的整体部署也只需一个小时。Adams 说:“我们每周可以部署三到四次,每次只包含一两天的工作量。”“我们在工作日的白天部署,而且不需要任何停机时间。以前即使是在半夜,我们也必须申请业务审批才能停机,因为可能有人正在办理贷款。现在,我们可以部署、发布代码和迁移数据库,而无需让系统下线。公司获得新功能,而不必担心某些业务会丢失或延迟。”此外,通过使用 kops 项目Ygrene 现在能够以先前十分之一的成本在 AWS EC2 Spot 上运行其 Kubernetes 集群。Adams 说这些云原生技术“彻底改变了可扩展性、可观测性和安全性的格局我们正在添加非常安全的新数据源。”“没有 Kubernetes、Notary 和 Fluentd我们就无法告诉投资者和团队成员我们清楚系统正在发生什么。”</p>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<!-- "CNCF projects are helping Ygrene determine the security and observability standards for the entire PACE industry. Were an emerging finance industry, and without these projects, especially Kubernetes, we couldnt be the industry leader that we are today." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Austin Adams, Development Manager, Ygrene Energy Fund</span> -->
“CNCF 项目正在帮助 Ygrene 确定整个 PACE 行业的安全性和可观察性标准。我们是一个新兴的金融企业,没有这些项目,尤其是 Kubernetes我们不可能成为今天的行业领导者。”<span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Austin Adams, Ygrene 能源基金会开发经理</span>
<!--
{{< case-studies/quote author="Austin Adams, Development Manager, Ygrene Energy Fund" >}}
"CNCF projects are helping Ygrene determine the security and observability standards for the entire PACE industry. We're an emerging finance industry, and without these projects, especially Kubernetes, we couldn't be the industry leader that we are today."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote author="Austin Adams, Ygrene 能源基金会开发经理" >}}
“CNCF 项目正在帮助 Ygrene 确定整个 PACE 行业的安全性和可观察性标准。我们是一个新兴的金融企业,没有这些项目,尤其是 Kubernetes我们不可能成为今天的行业领导者。”
{{< /case-studies/quote >}}
</div>
</div>
<!--
{{< case-studies/lead >}}
In less than a decade, <a href="https://ygrene.com/">Ygrene</a> has funded more than $1 billion in loans for renewable energy projects.
{{< /case-studies/lead >}}
-->
{{< case-studies/lead >}}
在不到十年的时间里,<a href="https://ygrene.com/"> Ygrene </a>就为可再生能源项目提供了超过 10 亿美元的贷款。
{{< /case-studies/lead >}}
<section class="section2">
<div class="fullcol">
<!-- <h2>In less than a decade, <a href="https://ygrene.com/index.html" style="text-decoration:underline">Ygrene</a> has funded more than $1 billion in loans for renewable energy&nbsp;projects.</h2> A <a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs">PACE</a> (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams. <br><br> -->
<h2>在不到十年的时间里,<a href="https://ygrene.com/index.html" style="text-decoration:underline"> Ygrene </a>就为可再生能源项目提供了超过10亿美元的贷款。</h2><a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs"> PACE </a>(清洁能源物业评估)融资公司开发经理 Austin Adams 表示:“我们抵押房屋或商业建筑,用贷款来为任何可以节约电力、生产电力、节约用水或减少碳排放的项目提供资金支持。”<br><br>
<!--
<p>A <a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs">PACE</a> (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams.</p>
-->
<p><a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs"> PACE </a>(清洁能源物业评估)融资公司开发经理 Austin Adams 表示:“我们抵押房屋或商业建筑,用贷款来为任何可以节约电力、生产电力、节约用水或减少碳排放的项目提供资金支持。”</p>
<!-- In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time." <br><br> -->
为了批准这些贷款公司需要处理大量的承销数据。Adams 说:“我们必须要验证有关财产、公司或人员的问题,像这样的工作数以千计。因此,我们有很多正在聚合的数据源,并且我们也有大量系统需要实时对这些数据进行改动。”<br><br>
<!-- By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didnt solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldnt handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system." -->
到 2017 年,部署和可扩展性已成为痛点。该公司已经使用了大量服务器,“我们刚刚达到能够垂直扩展的极限,”他说。迁移到 AWS Elastic Beanstalk 并不能解决问题“Scala 服务需要来自主 Ruby on Rails 服务和不同供应商提供的大量数据,因此他们要求从我们的 Ruby 服务以一种服务器无法承受的速率获取信息。在 Elastic Beanstalk 上我们也有许多配置与应用不匹配。这仅仅是一个开始,我们也意识到我们这个系统非常不稳定。”
</div>
</section>
<div class="banner3">
<div class="banner3text">
<!-- "CNCF has been an amazing incubator for so many projects. Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. Its actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."<span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Austin Adams, Development Manager, Ygrene Energy Fund</span> -->
“CNCF 是众多项目惊人的孵化器。现在,我们定期查看其网页,了解是否有任何新的、可敬的高质量项目可以应用到我们的系统中。它实际上已成为我们了解自身需要什么样的软件以使我们的系统更加安全和具有可伸缩性的信息中心。”<span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Austin Adams, Ygrene 能源基金会开发经理</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
<!--
<p>In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time."</p>
-->
<p>为了批准这些贷款公司需要处理大量的承销数据。Adams 说:“我们必须要验证有关财产、公司或人员的问题,像这样的工作数以千计。因此,我们有很多正在聚合的数据源,并且我们也有大量系统需要实时对这些数据进行改动。”</p>
<!-- Adams along with the rest of the team set out to find a solution that would be transformational, but "wouldnt require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out vertically and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams. <br><br> -->
Adams 和其他团队一起着手寻找一种具有变革性的解决方案但“不需要我们对代码库进行巨大的重构”他说。作为一家金融公司和可伸缩性一样Ygrene 需要更好的安全性。他们通过采用云原生技术找到了答案Kubernetes 帮助纵向扩展和分配工作负载Notary 在各个级别实现可靠的安全性Fluentd 来提供可观察性。Adams 说:“ Kubernetes 是社区前进的方向,也是我们展望未来的证明。”<br><br>
<!-- With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. Thats very fast for a finance company."<br><br> -->
有了 Kubernetes该团队能够快速将 Ygrene 应用程序用 Docker 容器化。“我们必须改变一些实现和代码,以及系统的构建方式,” Adams 说,“但我们已经能够在一个月左右的时间内将主要系统引入 Kubernetes然后在两个月内投入生产。对于一家金融公司来说这已经非常快了。”<br><br>
<!-- How? Cloud native has "changed the game for scalability, observability, and security—were adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldnt tell our investors and team members that we knew what was going on." <br><br> -->
怎么样Adams 说,这些云原生技术“改变了可扩展性、可观察性和安全性(我们正在添加新的非常安全的数据源)的游戏。”“没有 Kubernetes、Notary 和 Fluent我们就无法告诉投资者和团队成员我们知道刚刚发生了什么事情。”<br><br>
<!-- Notary, in particular, "has been a godsend," says Adams. "We need to know that our attack surface on third-party dependencies is low, or at least managed. We use it as a trust system and we also use it as a separation, so production images are signed by Notary, but some development images we dont sign. That is to ensure that they cant get into the production cluster. Weve been using it in the test cluster to feel more secure about our builds." -->
Adams 说,尤其 Notary 简直就是“天赐之物”。“我们要清楚,我们针对第三方依赖项的攻击面较低,或者至少是托管的。因为我们使用 Notary 作为一个信任系统,我们也使用它作为区分,所以生产镜像由 Notary 签名,但一些开发镜像就不签署。这是为了确保未签名镜像无法进入生产集群。我们已经在测试集群中使用它,以使构建的应用更安全。”
<!--
<p>By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn't solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn't handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."</p>
-->
<p>到 2017 年,部署和可扩展性已成为痛点。该公司使用的是大型服务器,“我们已经达到了垂直扩展的极限,”他说。迁移到 AWS Elastic Beanstalk 并没有解决问题“Scala 服务需要从主要的 Ruby on Rails 服务和不同供应商那里获取大量数据,因此它们向我们的 Ruby 服务请求信息的速率超出了这些服务的承受能力。我们在 Elastic Beanstalk 上也有很多配置失误。问题最终集中爆发,我们意识到我们的系统确实非常不稳定。”</p>
<!--
{{< case-studies/quote
image="/images/case-studies/ygrene/banner3.jpg"
author="Austin Adams, Development Manager, Ygrene Energy Fund"
>}}
"CNCF has been an amazing incubator for so many projects. Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It's actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote
image="/images/case-studies/ygrene/banner3.jpg"
author="Austin Adams, Ygrene 能源基金会开发经理"
>}}
“CNCF 是众多项目惊人的孵化器。现在,我们定期查看其网页,了解是否有任何新的、可敬的高质量项目可以应用到我们的系统中。它实际上已成为我们了解自身需要什么样的软件以使我们的系统更加安全和具有可伸缩性的信息中心。”
{{< /case-studies/quote >}}
</div>
</section>
<div class="banner4">
<div class="banner4text">
<!-- "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. Thats very fast for a finance company." -->
<!--
<p>Adams along with the rest of the team set out to find a solution that would be transformational, but "wouldn't require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out vertically and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams.</p> -->
<p>Adams 和其他团队一起着手寻找一种具有变革性的解决方案但“不需要我们对代码库进行巨大的重构”他说。作为一家金融公司和可伸缩性一样Ygrene 需要更好的安全性。他们通过采用云原生技术找到了答案Kubernetes 帮助纵向扩展和分配工作负载Notary 在各个级别实现可靠的安全性Fluentd 来提供可观察性。Adams 说“Kubernetes 是社区前进的方向,也是我们展望未来的证明。”</p>
<!--
<p>With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company."</p>
-->
<p>有了 Kubernetes该团队能够快速将 Ygrene 应用程序用 Docker 容器化。“我们必须改变一些实现和代码,以及系统的构建方式,” Adams 说,“但我们已经能够在一个月左右的时间内将主要系统引入 Kubernetes然后在两个月内投入生产。对于一家金融公司来说这已经非常快了。”</p>
<!--
<p>How? Cloud native has "changed the game for scalability, observability, and security—we're adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn't tell our investors and team members that we knew what was going on."</p>
-->
<p>效果如何Adams 说,云原生“彻底改变了可扩展性、可观测性和安全性的格局,我们正在添加非常安全的新数据源。”“没有 Kubernetes、Notary 和 Fluentd我们就无法告诉投资者和团队成员我们清楚系统正在发生什么。”</p>
<!--
<p>Notary, in particular, "has been a godsend," says Adams. "We need to know that our attack surface on third-party dependencies is low, or at least managed. We use it as a trust system and we also use it as a separation, so production images are signed by Notary, but some development images we don't sign. That is to ensure that they can't get into the production cluster. We've been using it in the test cluster to feel more secure about our builds."</p>
-->
<p>Adams 说尤其是 Notary“简直就是天赐之物”。“我们需要确信我们在第三方依赖上的攻击面很小或者至少是可控的。我们把它用作信任系统也用它来做区分生产镜像由 Notary 签名,而一些开发镜像我们不签名,这是为了确保它们无法进入生产集群。我们已经在测试集群中使用它,让我们对构建更有信心。”</p>
<!--
{{< case-studies/quote image="/images/case-studies/ygrene/banner4.jpg">}}
"We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote image="/images/case-studies/ygrene/banner4.jpg">}}
“我们必须改变一些实现和代码,以及系统的构建方式,” Adams 说,“但我们已经能够在一个月左右的时间内将主要系统引入 Kubernetes然后在两个月内投入生产。对于一家金融公司来说这已经非常快了。”
</div>
</div>
{{< /case-studies/quote >}}
<section class="section4">
<div class="fullcol">
<!-- By using the kops project, Ygrene was able to move from Elastic Beanstalk to running its Kubernetes clusters on AWS EC2 Spot, at a tenth of the previous cost. "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."<br><br> -->
通过使用 kops 项目Ygrene 能够用以前成本的十分之一从 Elastic Beanstalk 迁移到 AWS EC2 Spot 上运行其 Kubernetes 群集。Adams 说:“以前为了扩展,我们需要增加实例大小,导致高成本产出低价值。现在,借助 Kubernetes 和 kops我们能够在具有多个实例组的 Spot 上水平缩放。”<br><br>
<!-- That also helped them mitigate the risk that comes with running in the public cloud. "We figured out, essentially, that if were able to select instance classes using EC2 Spot that had an extremely low likelihood of interruption and zero history of interruption, and were willing to pay a price high enough, that we could virtually get the same guarantee using Kubernetes because we have enough nodes," says Software Engineer Zach Arnold, who led the migration to Kubernetes. "Now that weve re-architected these pieces of the application to not live on the same server, we can push out to many different servers and have a more stable deployment."<br><br> -->
这也帮助他们降低了在公共云中运行所带来的风险。“我们发现,基本上,如果我们能够使用中断可能性极低、无中断历史的 EC2 Spot 选择实例类,并且我们愿意付出足够高的价格,我们几乎可以得到和使用 Kubernetes 相同的保证,因为我们有足够的节点,”软件工程师 Zach Arnold 说,他领导了向 Kubernetes 的迁移。“现在,我们已经重新架构了应用程序的这些部分,使之不再位于同一台服务器上,我们可以推送到许多不同的服务器,并实现更稳定的部署。”<br><br>
<!-- As a result, the team can now ship code any time of day. "That was risky because it could bring down your whole loan management software with it," says Arnold. "But we now can deploy safely and securely during the day." -->
因此,团队现在可以在一天中的任何时间传输代码。阿诺德说:“以前这样做是很危险的,因为它会拖慢整个贷款管理软件。”“但现在,我们可以在白天安全部署。”
<!--
<p>By using the kops project, Ygrene was able to move from Elastic Beanstalk to running its Kubernetes clusters on AWS EC2 Spot, at a tenth of the previous cost. "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."</p>
-->
<p>通过使用 kops 项目Ygrene 能够用以前成本的十分之一从 Elastic Beanstalk 迁移到 AWS EC2 Spot 上运行其 Kubernetes 集群。Adams 说:“以前为了扩展,我们需要增加实例大小,导致高成本产出低价值。现在,借助 Kubernetes 和 kops我们能够在具有多个实例组的 Spot 上水平缩放。”</p>
</div>
</section>
<div class="banner5">
<div class="banner5text">
<!-- "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups." -->
<!--
<p>That also helped them mitigate the risk that comes with running in the public cloud. "We figured out, essentially, that if we're able to select instance classes using EC2 Spot that had an extremely low likelihood of interruption and zero history of interruption, and we're willing to pay a price high enough, that we could virtually get the same guarantee using Kubernetes because we have enough nodes," says Software Engineer Zach Arnold, who led the migration to Kubernetes. "Now that we've re-architected these pieces of the application to not live on the same server, we can push out to many different servers and have a more stable deployment."</p>
-->
<p>这也帮助他们降低了在公共云中运行所带来的风险。“我们发现,基本上,如果我们能够使用中断可能性极低、无中断历史的 EC2 Spot 选择实例类,并且我们愿意付出足够高的价格,我们几乎可以得到和使用 Kubernetes 相同的保证,因为我们有足够的节点,”软件工程师 Zach Arnold 说,他领导了向 Kubernetes 的迁移。“现在,我们已经重新架构了应用程序的这些部分,使之不再位于同一台服务器上,我们可以推送到许多不同的服务器,并实现更稳定的部署。”</p>
<!--
<p>As a result, the team can now ship code any time of day. "That was risky because it could bring down your whole loan management software with it," says Arnold. "But we now can deploy safely and securely during the day."</p>
-->
<p>因此团队现在可以在一天中的任何时间发布代码。Arnold 说:“以前这样做风险很大,因为它可能会让整个贷款管理软件宕机。”“但现在,我们可以在白天安全可靠地进行部署。”</p>
<!--
{{< case-studies/quote >}}
"In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote >}}
Adams 说:“以前为了扩展,我们需要增加实例大小,导致高成本产出低价值。现在,借助 Kubernetes 和 kops我们能够在具有多个实例组的 Spot 上水平缩放。”
</div>
</div>
{{< /case-studies/quote >}}
<section class="section5">
<div class="fullcol">
<!-- Before, deployments typically took three to four hours, and two or three months worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for an overall deploy with smoke testing. And "were able to deploy three or four times a week, with just one weeks or two days worth of work," Adams says. "Were deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down for 30 minutes to an hour, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed."<br><br> -->
以前部署通常需要三到四个小时而且每周或每两周要把一些两三个月工作量的任务在系统占用低的时候进行部署。现在他们用5分钟来配置 Kubernetes然后用一个小时进行整体部署与烟雾测试。Adams 说:“我们每周可以部署三到四次,只需一周或两天的工作量。”“我们在工作周、白天的任意时间进行部署,甚至不需要停机。以前我们不得不请求企业批准,以关闭系统,因为即使在半夜,人们也可能正在访问服务。现在,我们可以部署、上传代码和迁移数据库,而无需关闭系统。公司增加新项目,而不必担心某些业务会丢失或延迟。”<br><br>
<!-- Cloud native also affected how Ygrenes 50+ developers and contractors work. Adams and Arnold spent considerable time "teaching people to think distributed out of the box," says Arnold. "We ended up picking what we call the Four Ss of Shipping: safely, securely, stably, and speedily." (For more on the security piece of it, see their <a href="https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/index.html">article</a> on their "continuous hacking" strategy.) As for the engineers, says Adams, "they have been able to advance as their software has advanced. I think that at the end of the day, the developers feel better about what theyre doing, and they also feel more connected to the modern software development community."<br><br> -->
云原生也影响了 Ygrene 的 50 多名开发人员和承包商的工作方式。Adams 和 Arnold 花了相当长的时间“教人们思考开箱即用的”Arnold 说。“我们最终选择了称之为“航运四S”安全、可靠、稳妥、快速。”有关其安全部分的更多内容请参阅他们关于"持续黑客攻击"策略的<a href="https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/index.html">文章</a>。至于工程师Adams 说,“他们已经能够跟上软件进步的步伐。我想一天结束时,开发人员会感觉更好,他们也会感觉与现代软件开发社区的联系更加紧密。”<br><br>
<!-- Looking ahead, Adams is excited to explore more CNCF projects, including SPIFFE and SPIRE. "CNCF has been an amazing incubator for so many projects," he says. "Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. Its actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable." -->
展望未来Adams 很高兴能探索更多的 CNCF 项目,包括 SPIFFE 和 SPIRE。“ CNCF 是众多项目惊人的孵化器。现在,我们定期查看其网页,了解是否有任何新的、可敬的高质量项目可以应用到我们的系统中。它实际上已成为我们了解自身需要什么样的软件以使我们的系统更加安全和具有可伸缩性的信息中心。”
<!--
<p>Before, deployments typically took three to four hours, and two or three months' worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for an overall deploy with smoke testing. And "we're able to deploy three or four times a week, with just one week's or two days' worth of work," Adams says. "We're deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down for 30 minutes to an hour, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed."</p>
-->
<p>以前,部署通常需要三到四个小时,而且每一两周才会在低流量时段部署一次,一次部署往往积压了两三个月的工作量。现在,Kubernetes 部分只需 5 分钟,包含冒烟测试的整体部署也只需一个小时。Adams 说:“我们每周可以部署三到四次,每次只包含一周或两天的工作量。”“我们在工作日的白天进行部署,而且不需要任何停机。以前即使在半夜,我们也不得不申请业务审批才能将系统停机 30 分钟到一个小时,因为可能有人正在办理贷款。现在,我们可以在不停机的情况下部署、交付代码和迁移数据库。公司可以获得新功能,而不必担心某些业务会丢失或延迟。”</p>
<!--
<p>Cloud native also affected how Ygrene's 50+ developers and contractors work. Adams and Arnold spent considerable time "teaching people to think distributed out of the box," says Arnold. "We ended up picking what we call the Four S's of Shipping: safely, securely, stably, and speedily." (For more on the security piece of it, see their <a href="https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/">article</a> on their "continuous hacking" strategy.) As for the engineers, says Adams, "they have been able to advance as their software has advanced. I think that at the end of the day, the developers feel better about what they're doing, and they also feel more connected to the modern software development community."</p>
-->
<p>云原生也影响了 Ygrene 的 50 多名开发人员和承包商的工作方式。Arnold 说,Adams 和他花了相当多的时间“教大家从一开始就以分布式的方式思考”。“我们最终总结出了所谓的‘交付四 S’:安全(safely)、可靠(securely)、稳定(stably)、快速(speedily)。”(有关其中安全部分的更多内容,请参阅他们关于“持续黑客攻击”策略的<a href="https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/">文章</a>。)至于工程师们,Adams 说:“他们已经能够随着软件的进步而进步。我想归根结底,开发人员对自己所做的事情感觉更好了,他们也感觉与现代软件开发社区的联系更紧密了。”</p>
</div>
</section>
<!--
<p>Looking ahead, Adams is excited to explore more CNCF projects, including SPIFFE and SPIRE. "CNCF has been an amazing incubator for so many projects," he says. "Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It's actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."</p>
-->
<p>展望未来,Adams 很高兴能探索更多的 CNCF 项目,包括 SPIFFE 和 SPIRE。“CNCF 一直是众多项目的出色孵化器,”他说。“现在,我们会定期查看它的网页,看看有没有新的、很棒的高质量项目可以引入我们的技术栈。它实际上已经成为我们的信息中心,让我们知道应该关注哪些软件,以使我们的系统更安全或更具可扩缩性。”</p>

View File

@ -2,131 +2,195 @@
title: 案例研究Zalando
case_study_styles: true
cid: caseStudies
css: /css/style_zalando.css
new_case_study_styles: true
heading_background: /images/case-studies/zalando/banner1.jpg
heading_title_logo: /images/zalando_logo.png
subheading: >
欧洲领先的在线时尚平台通过云原生获得突破性进展
case_study_details:
- 公司: Zalando
- 位置: Berlin Germany
- 行业: 在线时尚平台
---
<!--
title: Zalando Case Study
case_study_styles: true
cid: caseStudies
<div class="banner1">
<!-- <h1> CASE STUDY:<img src="/images/zalando_logo.png" class="header_logo"><br> <div class="subhead">Europes Leading Online Fashion Platform Gets Radical with Cloud Native
</div></h1> -->
<h1> 案例研究:<img src="/images/zalando_logo.png" class="header_logo"><br> <div class="subhead">欧洲领先的在线时尚平台通过云原生获得突破性进展
</div></h1>
new_case_study_styles: true
heading_background: /images/case-studies/zalando/banner1.jpg
heading_title_logo: /images/zalando_logo.png
subheading: >
Europe's Leading Online Fashion Platform Gets Radical with Cloud Native
case_study_details:
- Company: Zalando
- Location: Berlin, Germany
- Industry: Online Fashion
-->
<!--
<h2>Challenge</h2>
-->
<h2>挑战</h2>
<!--
<p>Zalando, Europe's leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a <a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/">radical transformation</a> resulting in autonomous self-organizing teams. This change requires an infrastructure that could scale with the growth of the engineering organization. Zalando's technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn't immediately considered, as teams migrated to <a href="https://aws.amazon.com/">Amazon Web Services</a> (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "There's still too much operational overhead for the teams and compliance. " To provide better support, cluster management was brought into play.</p>
-->
<p>Zalando 是欧洲领先的在线时尚平台,自 2008 年成立以来经历了指数级增长。2015 年,Zalando 计划进一步扩展其原有的电子商务站点,以纳入新的服务和产品,从而开始了<a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/">彻底的变革</a>,形成了自主的自组织团队。这一变化需要能够随工程组织的增长而扩展的基础架构。Zalando 的技术部门开始重写其应用程序,使其为上云做好准备,并着手将基础架构从内部数据中心迁移到云端。虽然一开始并没有考虑编排,但随着团队迁移到<a href="https://aws.amazon.com/">亚马逊网络服务</a>(AWS):“我们看到了团队在 AWS 上使用基础设施和 CloudFormation 时所经历的痛苦,”开发人员生产力主管 Henning Jacobs 说,“对于团队和合规性来说,运营开销仍然过多。”为了提供更好的支持,集群管理被引入进来。</p>
<!-- <div class="details">
Company &nbsp;<b>Zalando</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Berlin, Germany</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Online Fashion</b>
</div> -->
<div class="details">
公司 &nbsp;<b>Zalando</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;位置 &nbsp;<b>柏林, 德国</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;行业 &nbsp;<b>在线时尚平台</b>
</div>
<!--
<h2>Solution</h2>
-->
<h2>解决方案</h2>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>挑战</h2>
<!-- Zalando, Europes leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a <a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/">radical transformation</a> resulting in autonomous self-organizing teams. This change requires an infrastructure that could scale with the growth of the engineering organization. Zalandos technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasnt immediately considered, as teams migrated to <a href="https://aws.amazon.com/">Amazon Web Services</a> (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "Theres still too much operational overhead for the teams and compliance. " To provide better support, cluster management was brought into play. -->
Zalando 是欧洲领先的在线时尚平台,自 2008 年成立以来经历了指数级增长。2015 年Zalando 计划进一步扩展其原有的电子商务站点,以扩展新的服务和产品,从而开始了<a href="https://jobs.zalando.com/tech/blog/radical-agility-study-notes/">彻底的变革</a>因此形成了自主的自组织团队。这次扩展需要可以随工程组织的增长而扩展的基础架构。Zalando 的技术部门开始重写其应用程序,使其能够在云端运行,并开始将其基础架构从内部数据中心迁移到云。虽然编排没有立即被考虑,因为团队迁移到<a href="https://aws.amazon.com/">亚马逊网络服务</a>AWS“我们体验过团队在 AWS 上进行基础设施架构和使用云资源时遇到的痛苦,”开发人员生产力主管 Henning Jacobs 说,“我们遇到了一些难题。对于团队和合规性来说,运营开销仍然过多。为了提供更好的支持,项目引入了集群管理。”
<!--
<p>The company now runs its Docker containers on AWS using Kubernetes orchestration.</p>
-->
<p>该公司现在使用 Kubernetes 编排在 AWS 上运行的 Docker 容器。</p>
</div>
<div class="col2">
<h2>解决方案</h2>
<!-- The company now runs its Docker containers on AWS using Kubernetes orchestration. -->
该公司现在使用 Kubernetes 编排在 AWS 上运行的 Docker 容器。
<br>
<!--
<h2>Impact</h2>
-->
<h2>影响</h2>
<!-- With the old infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs on the Linux kernel. This makes a lot of people pretty happy. The engineers love autonomy." -->
Jacobs 说,由于旧基础设施“很难正确采用新技术,而 DevOps 团队被视为瓶颈。”“现在有了这个云基础架构它们有了这种打包格式可以包含任何在Linux内核上运行的东西。这使得很多人相当高兴。工程师喜欢自主。”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<!-- "We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Henning Jacobs, Head of Developer Productivity at Zalando</span> -->
“我们设想所有 Zalando 交付团队在 Kubernetes 提供的最先进的、可靠且可扩展的群集基础架构上运行其容器化应用程序。”<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Henning Jacobs, Zalando 开发人员生产力主管
</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<!-- <h2>When Henning Jacobs arrived at Zalando in 2010, the company was just two years old with 180 employees running an online store for European shoppers to buy fashion items.</h2> -->
<h2>当 Henning Jacobs 于2010年来到 Zalando 时该公司刚成立两年有180名员工经营一家网上商店供欧洲消费者购买时尚商品。</h2>
<!-- "It started as a PHP e-commerce site which was easy to get started with, but was not scaling with the business' needs" says Jacobs, Head of Developer Productivity at Zalando.<br><br> -->
Zalando 的开发人员生产力主管 Jacobs 说:“它最初是一个 PHP 电子商务网站,很容易上手,但无法随业务需求扩展。”<br><br>
<!-- At that time, the company began expanding beyond its German origins into other European markets. Fast-forward to today and Zalando now has more than 14,000 employees, 3.6 billion Euro in revenue for 2016 and operates across 15 countries. "With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience," he says.<br><br> -->
当时公司开始从德国以外扩展到其他欧洲市场。快进到今天Zalando 现在拥有超过 14000 名员工2016 年收入为 36 亿欧元,业务遍及 15 个国家/地区。他表示:“这种全面快速的增长和不断扩展,一生只能有一次。”<br><br>
<!-- Not to mention a unique opportunity for an infrastructure specialist like Jacobs. Just after he joined, the company began rewriting all their applications in-house. "That was generally our strategy," he says. "For example, we started with our own logistics warehouses but at first you dont know how to do logistics software, so you have some vendor software. And then we replaced it with our own because with off-the-shelf software youre not competitive. You need to optimize these processes based on your specific business needs."<br><br> -->
能拥有像 Jacobs 这样的基础设施专家的机会是非常独特的。就在他加入公司后,公司开始在内部重写他们的所有应用。“这通常是我们的策略,”他表示。“例如,我们从我们自己的物流仓库开始,但起初不知道如何做物流软件,所以得用一些供应商软件。然后我们用自己的软件替换了它,因为使用现成的软件是没有竞争力的。需要根据特定业务需求优化这些流程。”<br><br>
<!-- In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even <a href="https://www.zalon.de">your own personal online stylist</a>.<br><br> -->
在重写其应用程序的同时Zalando 设定了一个目标,即从基本电子商务扩展到提供多租户的平台、大量增加的分类和样式、当天送达,甚至<a href="https://www.zalon.de">您自己的平台个人在线造型师</a><br><br>
<!-- The need to scale ultimately led the company on a cloud-native journey. As did its embrace of a microservices-based software architecture that gives engineering teams more autonomy and ownership of projects. "This move to the cloud was necessary because in the data center you couldnt have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app," Jacobs says. -->
扩展的需求最终引领了公司踏上云原生之旅。它采用基于微服务的软件架构,使工程团队拥有更多的项目自主权和所有权。“迁移到云是必要的,因为在数据中心中,团队不可能随心所欲。使用相同的基础架构,因此只能运行 Java 或 Python 应用”Jacobs 说。
</div>
</section>
<div class="banner3">
<div class="banner3text">
<!-- "This move to the cloud was necessary because in the data center you couldnt have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app." -->
<!--
<p>With the old infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs on the Linux kernel. This makes a lot of people pretty happy. The engineers love autonomy."</p>
-->
<p>Jacobs 说,由于旧基础设施“很难正确采用新技术,而 DevOps 团队被视为瓶颈。”“现在,有了这个云基础架构,它们有了这种打包格式,可以包含任何在 Linux 内核上运行的东西。这使得很多人相当高兴。工程师喜欢自主。”</p>
<!--
{{< case-studies/quote author="Henning Jacobs, Head of Developer Productivity at Zalando" >}}
"We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote author="Henning Jacobs, Zalando 开发人员生产力主管" >}}
<p>“我们设想所有 Zalando 交付团队在 Kubernetes 提供的最先进的、可靠且可扩展的集群基础架构上运行其容器化应用程序。”</p>
{{< /case-studies/quote >}}
<!--
{{< case-studies/lead >}}
When Henning Jacobs arrived at Zalando in 2010, the company was just two years old with 180 employees running an online store for European shoppers to buy fashion items.
{{< /case-studies/lead >}}
-->
{{< case-studies/lead >}}
当 Henning Jacobs 于 2010 年来到 Zalando 时,该公司刚成立两年,有 180 名员工经营一家网上商店,供欧洲消费者购买时尚商品。
{{< /case-studies/lead >}}
<!--
<p>"It started as a PHP e-commerce site which was easy to get started with, but was not scaling with the business' needs" says Jacobs, Head of Developer Productivity at Zalando.</p>
-->
<p>Zalando 的开发人员生产力主管 Jacobs 说:“它最初是一个 PHP 电子商务网站,很容易上手,但无法随业务需求扩展。”</p>
<!--
<p>At that time, the company began expanding beyond its German origins into other European markets. Fast-forward to today and Zalando now has more than 14,000 employees, 3.6 billion Euro in revenue for 2016 and operates across 15 countries. "With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience," he says.</p>
-->
<p>当时,公司开始从德国本土扩展到其他欧洲市场。快进到今天,Zalando 现在拥有超过 14000 名员工,2016 年收入为 36 亿欧元,业务遍及 15 个国家/地区。他表示:“这种全方位的增长和持续的扩展,是一生只有一次的经历。”</p>
<!--
<p>Not to mention a unique opportunity for an infrastructure specialist like Jacobs. Just after he joined, the company began rewriting all their applications in-house. "That was generally our strategy," he says. "For example, we started with our own logistics warehouses but at first you don't know how to do logistics software, so you have some vendor software. And then we replaced it with our own because with off-the-shelf software you're not competitive. You need to optimize these processes based on your specific business needs."</p>
-->
<p>能拥有像 Jacobs 这样的基础设施专家的机会是非常独特的。就在他加入公司后,公司开始在内部重写他们的所有应用。“这通常是我们的策略,”他表示。“例如,我们从我们自己的物流仓库开始,但起初不知道如何做物流软件,所以得用一些供应商软件。然后我们用自己的软件替换了它,因为使用现成的软件是没有竞争力的。需要根据特定业务需求优化这些流程。”</p>
<!--
<p>In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even <a href="https://www.zalon.de">your own personal online stylist</a>.</p>
-->
<p>在重写其应用程序的同时,Zalando 设定了一个目标,即从基本的电子商务扩展为一个平台,提供多租户能力、大幅增加的品类和款式、当日送达,甚至<a href="https://www.zalon.de">你自己的个人在线造型师</a>。</p>
<!--
<p>The need to scale ultimately led the company on a cloud-native journey. As did its embrace of a microservices-based software architecture that gives engineering teams more autonomy and ownership of projects. "This move to the cloud was necessary because in the data center you couldn't have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app," Jacobs says.</p>
-->
<p>扩展的需求最终引领公司踏上了云原生之旅,同时公司也采用了基于微服务的软件架构,使工程团队拥有更多的项目自主权和所有权。“迁移到云是必要的,因为在数据中心里无法拥有自主的团队。大家共用相同且非常同质化的基础架构,因此只能运行 Java 或 Python 应用,”Jacobs 说。</p>
<!--
{{< case-studies/quote image="/images/case-studies/zalando/banner3.jpg" >}}
"This move to the cloud was necessary because in the data center you couldn't have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote image="/images/case-studies/zalando/banner3.jpg" >}}
“迁移到云是必要的,因为在数据中心里无法拥有自主的团队。大家共用相同且非常同质化的基础架构,因此只能运行 Java 或 Python 应用。”
{{< /case-studies/quote >}}
</div>
</div>
<section class="section3">
<div class="fullcol">
<!-- Zalando began moving its infrastructure from two on-premise data centers to the cloud, requiring the migration of older applications for cloud-readiness. "We decided to have a clean break," says Jacobs. "Our <a href="https://aws.amazon.com/">Amazon Web Services</a> infrastructure was set up like so: Every team has its own AWS account, which is completely isolated, meaning theres no lift and shift. You basically have to rewrite your application to make it cloud-ready even down to the persistence layer. We bravely went back to the drawing board and redid everything, first choosing Docker as a common containerization, then building the infrastructure from there."<br><br> -->
Zalando 开始将其基础架构从两个内部数据中心迁移到云,这需要迁移较旧的应用程序使其在云中准备就绪。“我们决定果断一些,” Jacobs 说。“我们的<a href="https://aws.amazon.com/">亚马逊网络服务</a>基础设施是这样的:每个团队都有自己的 AWS 账户,该账户是完全隔离的,这意味着没有'提升和转移'。基本上必须重写应用程序,使其在云中准备就绪,甚至到持久层也是如此。我们勇敢地回到绘图板,重做一切,首先选择 Docker 作为通用容器化,然后从那里构建基础结构。”<br><br>
<!-- The company decided to hold off on orchestration at the beginning, but as teams were migrated to AWS, "we saw the pain teams were having with infrastructure and cloud formation on AWS," says Jacobs. <br><br> -->
公司最初决定推迟编排,但随着团队迁移到 AWS“我们看到团队在 AWS 上的基础设施和使用云资源方面遇到了难题,” Jacobs 说。<br><br>
<!-- Zalandos 200+ autonomous engineering teams decide what technologies to use and could operate their own applications using their own AWS accounts. This setup proved to be a compliance challenge. Even with strict rules-of-play and automated compliance checks in place, engineering teams and IT-compliance were overburdened addressing compliance issues. "Violations appear for non-compliant behavior, which we detect when scanning the cloud infrastructure," says Jacobs. "Everything is possible and nothing enforced, so you have to live with violations (and resolve them) instead of preventing the error in the first place. This means overhead for teams—and overhead for compliance and operations. It also takes time to spin up new EC2 instances on AWS, which affects our deployment velocity." <br><br> -->
Zalando 200多人的自主工程团队研究使用哪些技术并可以使用自己的 AWS 账户操作自己的应用程序。事实证明,此设置是一项合规性挑战。即使制定了严格的游戏规则和自动化的合规性检查,工程团队和 IT 合规性在解决合规性问题方面也负担过重。Jacobs 说:"违规行为会出现,我们在扫描云基础架构时会检测到这些行为。“一切皆有可能,没有强制实施,因此您必须忍受违规行为(并解决它们),而不是一开始就防止错误。这些都会增加团队的开销,以及合规性和操作的开销。在 AWS 上启动新的 EC2 实例也需要时间,这会影响我们的部署速度。”<br><br>
<!-- The team realized they needed to "leverage the value you get from cluster management," says Jacobs. When they first looked at Platform as a Service (PaaS) options in 2015, the market was fragmented; but "now there seems to be a clear winner. It seemed like a good bet to go with Kubernetes."<br><br> -->
Jacobs 说团队意识到他们需要“利用从集群管理中获得的价值”。当他们在2015年首次将平台视为服务PaaS选项时市场是支离破碎的但“现在似乎有一个明确的赢家。使用 Kubernetes 似乎是一个很好的尝试。”<br><br>
<!-- The transition to Kubernetes started in 2016 during Zalandos <a href="https://jobs.zalando.com/tech/blog/hack-week-5-is-live/?gh_src=4n3gxh1">Hack Week</a> where participants deployed their projects to a Kubernetes cluster. From there 60 members of the tech infrastructure department were on-boarded - and then engineering teams were brought on one at a time. "We always start by talking with them and make sure everyones expectations are clear," says Jacobs. "Then we conduct some Kubernetes training, which is mostly training for our CI/CD setup, because the user interface for our users is primarily through the CI/CD system. But they have to know fundamental Kubernetes concepts and the API. This is followed by a weekly sync with each team to check their progress. Once they have something in production, we want to see if everything is fine on top of what we can improve." -->
Kubernetes 的过渡始于 2016 年 Zalando 的<a href="https://jobs.zalando.com/tech/blog/hack-week-5-is-live/?gh_src=4n3gxh1">极客周</a>期间,参与者将他们的项目部署到 Kubernetes 集群。60名技术基础设施部门的成员开始使用这项技术之后每次会加入一支工程团队。Jacobs 说:“我们总是从与他们交谈开始,确保每个人的期望都清晰。然后,我们进行一些 Kubernetes 培训,这主要是针对我们的 CI/CD 设置的培训,因为我们的用户界面主要通过 CI/CD 系统。但是他们必须知道Kubernetes的基本概念和API。之后每周与每个团队同步以检查其进度。一旦出现什么状况就可以确定我们所做的改进是否正常。”
<!--
<p>Zalando began moving its infrastructure from two on-premise data centers to the cloud, requiring the migration of older applications for cloud-readiness. "We decided to have a clean break," says Jacobs. "Our <a href="https://aws.amazon.com/">Amazon Web Services</a> infrastructure was set up like so: Every team has its own AWS account, which is completely isolated, meaning there's no 'lift and shift.' You basically have to rewrite your application to make it cloud-ready even down to the persistence layer. We bravely went back to the drawing board and redid everything, first choosing Docker as a common containerization, then building the infrastructure from there."</p>
-->
<p>Zalando 开始将其基础架构从两个内部数据中心迁移到云,这需要迁移较旧的应用程序,使其为上云做好准备。“我们决定彻底割舍过去,”Jacobs 说。“我们的<a href="https://aws.amazon.com/">亚马逊网络服务</a>基础设施是这样搭建的:每个团队都有自己的 AWS 账户,该账户是完全隔离的,这意味着没有‘直接搬迁(lift and shift)’。基本上必须重写应用程序,使其为上云做好准备,甚至要深入到持久层。我们果断推倒重来,从头设计了一切:先选择 Docker 作为通用的容器化方案,然后在此基础上构建基础架构。”</p>
</div>
</section>
<div class="banner4">
<div class="banner4text">
<!-- Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. -->
一旦Zalando开始将应用迁移到Kubernetes效果立竿见影。“Kubernetes 是我们无缝端到端开发人员体验的基石。我们能够使用单一一致且声明性的 API 将创意运送到生产中,” Jacobs 说。
</div>
</div>
<!--
<p>The company decided to hold off on orchestration at the beginning, but as teams were migrated to AWS, "we saw the pain teams were having with infrastructure and cloud formation on AWS," says Jacobs.</p>
-->
<p>公司最初决定暂缓引入编排,但随着团队迁移到 AWS,“我们看到了团队在 AWS 上使用基础设施和 CloudFormation 时所经历的痛苦,”Jacobs 说。</p>
<section class="section4">
<div class="fullcol">
<!-- At the moment, Zalando is running an initial 40 Kubernetes clusters with plans to scale for the foreseeable future.
Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. "The self-healing infrastructure provides a frictionless experience with higher-level abstractions built upon low-level best practices. We envision all Zalando delivery teams will run their containerized applications on a state-of-the-art reliable and scalable cluster infrastructure provided by Kubernetes."<br><br> -->
目前Zalando正在运行最初的40个Kubernetes集群并计划在可预见的将来进行扩展。一旦Zalando开始将申请迁移到Kubernetes效果立竿见影。“ Kubernetes 是我们无缝端到端开发人员体验的基石。我们能够使用单一一致且声明性的 API 将创意运送到生产中,” Jacobs 说。“自愈基础架构提供了无摩擦体验,基于低级最佳实践构建了更高级别的抽象。我们设想所有 Zalando 交付团队都将在 Kubernetes 提供的最先进的可靠和可扩展群集基础架构上运行其容器化应用程序。”
<!-- With the old on-premise infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs in the Linux kernel. This makes a lot of people pretty happy. The engineers love the autonomy."
There were a few challenges in Zalandos Kubernetes implementation. "We are a team of seven people providing clusters to different engineering teams, and our goal is to provide a rock-solid experience for all of them," says Jacobs. "We dont want pet clusters. We dont want to have to understand what workload they have; it should just work out of the box. With that in mind, cluster autoscaling is important. There are many different ways of doing cluster management, and this is not part of the core. So we created two components to provision clusters, have a registry for clusters, and to manage the whole cluster life cycle."<br><br> -->
Jacobs 说,使用旧的内部基础设施,“很难正确采用新技术,而 DevOps 团队被视为瓶颈。”“现在,有了这个云基础架构,它们有了这种打包格式,可以包含运行在 Linux 内核中的任何内容。这使得很多人相当高兴。工程师喜欢自主性。”在Zalando的Kubernetes实施中出现了一些挑战。Jacobs 说:“我们是一支由七人组成的团队,为不同的工程团队提供集群,我们的目标是为所有团队提供坚如磐石的体验。”“我们不想要宠物集群。我们不想了解他们的工作量;它应该只是开箱即用。考虑到这一点,集群自动缩放非常重要。执行集群管理的方法有很多种,这不是核心的一部分。因此,我们创建了两个组件来预配群集,具有集群的注册表,并管理整个集群生命周期。”<br><br>
<!-- Jacobss team also worked to improve the Kubernetes-AWS integration. "Thus you're very restricted. You need infrastructure to scale each autonomous teams idea.""<br><br> -->
Jacobs 的团队还致力于改进Kubernetes-AWS 集成。“由于许多限制条件,需要通过基础设施来扩展每个自主团队的想法。”
<!-- Plus, "there are still a lot of best practices missing," says Jacobs. The team, for example, recently solved a pod security policy issue. "There was already a concept in Kubernetes but it wasnt documented, so it was kind of tricky," he says. The large Kubernetes community was a big help to resolve the issue. To help other companies start down the same path, Jacobs compiled his teams learnings in a document called <a href="http://kubernetes-on-aws.readthedocs.io/en/latest/admin-guide/kubernetes-in-production.html">Running Kubernetes in Production</a>. -->
此外“仍然缺少很多最佳实践”Jacobs说。例如该团队最近解决了 pod 安全策略问题。“在Kubernetes已经有一个概念但没有记录所以有点棘手”他说。大型的Kubernetes社区是解决这个问题的一大帮助。为了帮助其他公司走上同样的道路Jacobs在一份名为“在生产中运行 Kubernetes ”的文件中汇编了他的团队的经验教训。
<!--
<p>Zalandos 200+ autonomous engineering teams decide what technologies to use and could operate their own applications using their own AWS accounts. This setup proved to be a compliance challenge. Even with strict rules-of-play and automated compliance checks in place, engineering teams and IT-compliance were overburdened addressing compliance issues. "Violations appear for non-compliant behavior, which we detect when scanning the cloud infrastructure," says Jacobs. "Everything is possible and nothing enforced, so you have to live with violations (and resolve them) instead of preventing the error in the first place. This means overhead for teams—and overhead for compliance and operations. It also takes time to spin up new EC2 instances on AWS, which affects our deployment velocity."</p>
-->
<p>Zalando 的 200 多个自主工程团队自行决定使用哪些技术,并可以使用自己的 AWS 账户运维自己的应用程序。事实证明,这种设置带来了合规性方面的挑战。即使制定了严格的游戏规则并实施了自动化的合规性检查,工程团队和 IT 合规部门在处理合规问题上仍然负担过重。Jacobs 说:“不合规的行为会表现为违规项,我们在扫描云基础架构时会检测到它们。一切皆有可能,没有任何强制约束,因此你必须忍受违规行为(并解决它们),而不是在一开始就防止错误。这意味着团队的开销,以及合规和运维的开销。在 AWS 上启动新的 EC2 实例也需要时间,这会影响我们的部署速度。”</p>
</div>
</section>
<!--
<p>The team realized they needed to "leverage the value you get from cluster management," says Jacobs. When they first looked at Platform as a Service (PaaS) options in 2015, the market was fragmented; but "now there seems to be a clear winner. It seemed like a good bet to go with Kubernetes."</p>
-->
<p>Jacobs 说,团队意识到他们需要“利用集群管理所带来的价值”。当他们在 2015 年首次考察平台即服务(PaaS)方案时,市场还很分散;但“现在似乎有了一个明确的赢家。选择 Kubernetes 看起来是个不错的选择。”</p>
<div class="banner5">
<div class="banner5text">
<!-- "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years... We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey." -->
“ Kubernetes API 允许我们以与云提供商无关的方式运行应用程序,这使我们能够在未来几年中自由访问 IaaS 提供商...我们期望 Kubernetes API 成为 PaaS 基础设施的全球标准,并对未来的继续旅程感到兴奋。”
</div>
</div>
<section class="section5">
<div class="fullcol">
<!-- In the end, Kubernetes made it possible for Zalando to introduce and maintain the new products the company envisioned to grow its platform. "<a href="https://www.zalon.de/">The fashion advice</a> product used Scala, and there were struggles to make this possible with our former infrastructure," says Jacobs. "It was a workaround, and that team needed more and more support from the platform team, just because they used different technologies. Now with Kubernetes, its autonomous. Whatever the workload is, that team can just go their way, and Kubernetes prevents other bottlenecks."<br><br> -->
最后Kubernetes 使 Zalando 能够引进和维护公司为发展其平台而设想的新产品。Jacobs 说:“时尚咨询产品使用 Scala而我们以前的基础设施也难以实现这一点。”这是一个解决方法该团队需要平台团队提供越来越多的支持只是因为他们使用了不同的技术。现在有了Kubernetes它就自主了。无论工作负载是什么该团队都可以走自己的路而 Kubernetes 可以防止其他瓶颈。<br><br>
<!-- Looking ahead, Jacobs sees Zalandos new infrastructure as a great enabler for other things the company has in the works, from its new logistics software, to a platform feature connecting brands, to products dreamed up by data scientists. "One vision is if you watch the next James Bond movie and see the suit hes wearing, you should be able to automatically order it, and have it delivered to you within an hour," says Jacobs. "Its about connecting the full fashion sphere. This is definitely not possible if you have a bottleneck with everyone running in the same data center and thus very restricted. You need infrastructure to scale each autonomous teams idea."<br><br> -->
展望未来Jacobs 将 Zalando 的新基础设施视为公司在进行的其他工程中的巨大推动因素从新的物流软件到连接品牌的平台功能以及数据科学家梦寐以求的产品。Jacobs说“一个愿景是如果你看下一部 James Bond 的电影,看看他穿的西装,你就应该能够自动订购,并在一小时内把它送到你身边。”“这是关于连接整个时尚领域。如果您遇到瓶颈,因为每个人都在同一个数据中心运行,因此限制很大,则这绝对是不可能的。需要基础设施来扩展每个自主团队的想法。”<br><br>
<!-- For other companies considering this technology, Jacobs says he wouldnt necessarily advise doing it exactly the same way Zalando did. "Its okay to do so if youre ready to fail at some things," he says. "You need to set the right expectations. Not everything will work. Rewriting apps and this type of organizational change can be disruptive. The first product we moved was critical. There were a lot of dependencies, and it took longer than expected. Maybe we should have started with something less complicated, less business critical, just to get our toes wet."<br><br> -->
对于考虑这项技术的其他公司Jacobs 说,他不一定建议像 Zalando 那样做。他表示:“如果你准备尝试失败,那么这样做是可以的。”“设定正确的期望是必须的。并不是一切都会起作用。重写应用和这种类型的组织更改可能会造成中断。我们移动的第一个产品至关重要。存在大量依赖关系,而且时间比预期长。也许我们应该从不那么复杂、不是业务关键的东西开始,只是为了开个好头。”<br><br>
<!-- But once they got to the other side "it was clear for everyone that theres no big alternative," Jacobs adds. "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years. Zalando Technology benefits from migrating to Kubernetes as we are able to leverage our existing knowledge to create an engineering platform offering flexibility and speed to our engineers while significantly reducing the operational overhead. We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey." -->
但是,一旦他们到了另一边,“每个人都很清楚,没有大的选择,” Jacobs 补充说。“ Kubernetes API 允许我们以与云提供商无关的方式运行应用程序,这使我们能够在未来几年中自由访问 IaaS 提供商。Zalando 受益于迁移到 Kubernetes因为我们能够利用现有知识创建工程平台为我们的工程师提供灵活性和速度同时显著降低运营开销。我们期望 Kubernetes API 成为 PaaS 基础设施的全球标准,并对未来的旅程感到兴奋。”
<!--
<p>The transition to Kubernetes started in 2016 during Zalando's <a href="https://jobs.zalando.com/tech/blog/hack-week-5-is-live/?gh_src=4n3gxh1">Hack Week</a> where participants deployed their projects to a Kubernetes cluster. From there 60 members of the tech infrastructure department were on-boarded - and then engineering teams were brought on one at a time. "We always start by talking with them and make sure everyone's expectations are clear," says Jacobs. "Then we conduct some Kubernetes training, which is mostly training for our CI/CD setup, because the user interface for our users is primarily through the CI/CD system. But they have to know fundamental Kubernetes concepts and the API. This is followed by a weekly sync with each team to check their progress. Once they have something in production, we want to see if everything is fine on top of what we can improve."</p>
-->
<p>向 Kubernetes 的过渡始于 2016 年 Zalando 的<a href="https://jobs.zalando.com/tech/blog/hack-week-5-is-live/?gh_src=4n3gxh1">极客周</a>,参与者将他们的项目部署到了一个 Kubernetes 集群。在此基础上,技术基础设施部门的 60 名成员先行上手,之后工程团队再逐一加入。Jacobs 说:“我们总是从与他们交谈开始,确保每个人的期望都清晰。然后,我们进行一些 Kubernetes 培训,主要是针对我们 CI/CD 设置的培训,因为用户的操作界面主要是 CI/CD 系统。但他们必须了解 Kubernetes 的基本概念和 API。之后,我们每周与每个团队同步一次,检查其进度。一旦他们有东西上了生产环境,我们就会查看一切是否正常,以及还有哪些可以改进的地方。”</p>
</div>
<!--
{{< case-studies/quote image="/images/case-studies/zalando/banner4.jpg" >}}
Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs.
{{< /case-studies/quote >}}
-->
{{< case-studies/quote image="/images/case-studies/zalando/banner4.jpg" >}}
一旦 Zalando 开始将应用迁移到 Kubernetes,效果立竿见影。“Kubernetes 是我们无缝端到端开发人员体验的基石。我们能够通过单一、一致且声明式的 API 将创意交付到生产环境,”Jacobs 说。
{{< /case-studies/quote >}}
</section>
<!--
<p>At the moment, Zalando is running an initial 40 Kubernetes clusters with plans to scale for the foreseeable future. Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. "The self-healing infrastructure provides a frictionless experience with higher-level abstractions built upon low-level best practices. We envision all Zalando delivery teams will run their containerized applications on a state-of-the-art reliable and scalable cluster infrastructure provided by Kubernetes."</p>
-->
<p>目前,Zalando 正在运行最初的 40 个 Kubernetes 集群,并计划在可预见的将来继续扩展。一旦 Zalando 开始将应用迁移到 Kubernetes,效果立竿见影。“Kubernetes 是我们无缝端到端开发人员体验的基石。我们能够通过单一、一致且声明式的 API 将创意交付到生产环境,”Jacobs 说。“自愈的基础架构在底层最佳实践之上构建了更高层次的抽象,提供了毫无阻力的体验。我们设想所有 Zalando 交付团队都将在 Kubernetes 提供的最先进、可靠且可扩展的集群基础架构上运行其容器化应用程序。”</p>
<!--
<p>With the old on-premise infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs in the Linux kernel. This makes a lot of people pretty happy. The engineers love the autonomy."</p>
-->
<p>Jacobs 说,使用旧的内部基础设施,“很难正确采用新技术,而 DevOps 团队被视为瓶颈。”“现在,有了这个云基础架构,它们有了这种打包格式,可以包含运行在 Linux 内核中的任何内容。这使得很多人相当高兴。工程师喜欢自主性。”</p>
<!--
<p>There were a few challenges in Zalando's Kubernetes implementation. "We are a team of seven people providing clusters to different engineering teams, and our goal is to provide a rock-solid experience for all of them," says Jacobs. "We don't want pet clusters. We don't want to have to understand what workload they have; it should just work out of the box. With that in mind, cluster autoscaling is important. There are many different ways of doing cluster management, and this is not part of the core. So we created two components to provision clusters, have a registry for clusters, and to manage the whole cluster life cycle."</p>
-->
<p>Zalando 的 Kubernetes 实施中也遇到了一些挑战。Jacobs 说:“我们是一支七人团队,为不同的工程团队提供集群,我们的目标是为所有团队提供坚如磐石的体验。”“我们不想要‘宠物’集群。我们不想必须了解他们运行的是什么工作负载;一切都应该开箱即用。考虑到这一点,集群自动扩缩容非常重要。做集群管理的方式有很多种,而这并不属于核心。因此,我们创建了两个组件,用来制备集群、维护集群注册表,并管理整个集群的生命周期。”</p>
<!--
<p>Jacobs's team also worked to improve the Kubernetes-AWS integration. "Thus you're very restricted. You need infrastructure to scale each autonomous team's idea." Plus, "there are still a lot of best practices missing," says Jacobs. The team, for example, recently solved a pod security policy issue. "There was already a concept in Kubernetes but it wasn't documented, so it was kind of tricky," he says. The large Kubernetes community was a big help to resolve the issue. To help other companies start down the same path, Jacobs compiled his team's learnings in a document called <a href="http://kubernetes-on-aws.readthedocs.io/en/latest/admin-guide/kubernetes-in-production.html">Running Kubernetes in Production</a>.</p>
-->
<p>Jacobs 的团队还致力于改进 Kubernetes 与 AWS 的集成。“因此你会受到很多限制。你需要能够扩展每个自主团队想法的基础设施。”此外,“仍然缺少很多最佳实践,”Jacobs 说。例如,该团队最近解决了一个 Pod 安全策略问题。“Kubernetes 里已经有这个概念,但没有文档,所以有点棘手,”他说。庞大的 Kubernetes 社区为解决这个问题提供了很大帮助。为了帮助其他公司走上同样的道路,Jacobs 将他的团队的经验教训汇编成一份名为<a href="http://kubernetes-on-aws.readthedocs.io/en/latest/admin-guide/kubernetes-in-production.html">在生产中运行 Kubernetes</a> 的文档。</p>
<!--
{{< case-studies/quote >}}
"The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years... We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
{{< /case-studies/quote >}}
-->
{{< case-studies/quote >}}
“Kubernetes API 允许我们以与云提供商无关的方式运行应用程序,这使我们能够在未来几年自由地重新评估 IaaS 提供商……我们期望 Kubernetes API 成为 PaaS 基础设施的全球标准,并对这段持续的旅程感到兴奋。”
{{< /case-studies/quote >}}
<!--
<p>In the end, Kubernetes made it possible for Zalando to introduce and maintain the new products the company envisioned to grow its platform. "<a href="https://www.zalon.de/">The fashion advice</a> product used Scala, and there were struggles to make this possible with our former infrastructure," says Jacobs. "It was a workaround, and that team needed more and more support from the platform team, just because they used different technologies. Now with Kubernetes, it's autonomous. Whatever the workload is, that team can just go their way, and Kubernetes prevents other bottlenecks."</p>
-->
<p>最后,Kubernetes 使 Zalando 能够引入并维护公司为发展其平台而设想的新产品。Jacobs 说:“<a href="https://www.zalon.de/">时尚建议</a>产品使用 Scala,而在我们以前的基础设施上很难做到这一点。”“那是一种变通方案,该团队需要平台团队提供越来越多的支持,只是因为他们使用了不同的技术。现在有了 Kubernetes,团队是自主的。无论工作负载是什么,团队都可以按自己的方式走,Kubernetes 也避免了其他瓶颈。”</p>
<!--
<p>Looking ahead, Jacobs sees Zalando's new infrastructure as a great enabler for other things the company has in the works, from its new logistics software, to a platform feature connecting brands, to products dreamed up by data scientists. "One vision is if you watch the next James Bond movie and see the suit he's wearing, you should be able to automatically order it, and have it delivered to you within an hour," says Jacobs. "It's about connecting the full fashion sphere. This is definitely not possible if you have a bottleneck with everyone running in the same data center and thus very restricted. You need infrastructure to scale each autonomous team's idea."</p>
-->
<p>展望未来,Jacobs 将 Zalando 的新基础设施视为公司在研工作的巨大推动因素:从新的物流软件,到连接各品牌的平台功能,再到数据科学家构想出的产品。Jacobs 说:“一个愿景是,如果你在看下一部 James Bond 电影时看中了他穿的西装,你应该能够自动下单,并在一小时内收到它。”“这是要把整个时尚领域连接起来。如果所有人都运行在同一个数据中心里、形成瓶颈并受到很大限制,这绝对是不可能的。你需要能够扩展每个自主团队想法的基础设施。”</p>
<!--
<p>For other companies considering this technology, Jacobs says he wouldn't necessarily advise doing it exactly the same way Zalando did. "It's okay to do so if you're ready to fail at some things," he says. "You need to set the right expectations. Not everything will work. Rewriting apps and this type of organizational change can be disruptive. The first product we moved was critical. There were a lot of dependencies, and it took longer than expected. Maybe we should have started with something less complicated, less business critical, just to get our toes wet."</p>
-->
<p>对于考虑这项技术的其他公司,Jacobs 说,他不一定建议完全照搬 Zalando 的做法。他表示:“如果你已经做好了在某些事情上失败的准备,这样做是可以的。”“你需要设定正确的预期。不是所有事情都会成功。重写应用以及这种类型的组织变革可能会带来冲击。我们最先迁移的产品是业务关键型的,依赖关系很多,耗时也比预期长。也许我们应该从不那么复杂、不那么关键的业务入手,先试试水。”</p>
<!--
<p>But once they got to the other side "it was clear for everyone that there's no big alternative," Jacobs adds. "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years. Zalando Technology benefits from migrating to Kubernetes as we are able to leverage our existing knowledge to create an engineering platform offering flexibility and speed to our engineers while significantly reducing the operational overhead. We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."</p>
-->
<p>但是,一旦他们完成了迁移,“每个人都清楚地看到,没有更好的替代方案,”Jacobs 补充说。“Kubernetes API 允许我们以与云提供商无关的方式运行应用程序,这使我们能够在未来几年自由地重新评估 IaaS 提供商。Zalando 的技术部门从迁移到 Kubernetes 中获益,因为我们能够利用现有知识打造一个工程平台,为我们的工程师提供灵活性和速度,同时显著降低运营开销。我们期望 Kubernetes API 成为 PaaS 基础设施的全球标准,并对这段持续的旅程感到兴奋。”</p>

View File

@ -54,7 +54,7 @@ API 服务器被配置为在一个安全的 HTTPS 端口(通常为 443
Nodes should be provisioned with the public root certificate for the cluster such that they can
connect securely to the API server along with valid client credentials. A good approach is that the
client credentials provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
for automated provisioning of kubelet client certificates.
-->
应该使用集群的公共根证书开通节点,这样它们就能够基于有效的客户端凭据安全地连接 API 服务器。
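下面是一个最小的 kubeconfig 草图(其中的集群名、服务器地址和证书路径均为假设),展示 kubelet 的客户端凭据如何以客户端证书的形式与集群公共根证书一起配置:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt   # 集群的公共根证书
    server: https://api.my-cluster.example:443           # 假设的 API 服务器地址
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client.crt  # kubelet 客户端证书
    client-key: /var/lib/kubelet/pki/kubelet-client.key
contexts:
- name: kubelet@my-cluster
  context:
    cluster: my-cluster
    user: kubelet
current-context: kubelet@my-cluster
```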

View File

@ -1170,14 +1170,13 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
<!--
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
-->
* 进一步了解节点[组件](/zh-cn/docs/concepts/overview/components/#node-components)。
* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。
* 阅读架构设计文档中有关
[Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
[Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
的章节。
* 了解[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。

View File

@ -11,15 +11,11 @@ content_type: concept
Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
-->
Add-ons 扩展了 Kubernetes 的功能。
本文列举了一些可用的 add-ons 以及到它们各自安装说明的链接。
每个 Add-ons 按字母顺序排序 - 顺序不代表任何优先地位。
<!-- body -->
<!--
@ -27,59 +23,66 @@ Add-ons 扩展了 Kubernetes 的功能。
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and security services for Kubernetes, leveraging Open vSwitch as the networking data plane.
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* Multus is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
-->
## 网和网络策略
## 联网和网络策略
* [ACI](https://www.github.com/noironetworks/aci-containers) 通过 Cisco ACI 提供集成的容器网络和安全网络。
* [Antrea](https://antrea.io/) 在第 3/4 层执行操作,为 Kubernetes
提供网络连接和安全服务。Antrea 利用 Open vSwitch 作为网络的数据面。
* [Calico](https://docs.projectcalico.org/v3.11/getting-started/kubernetes/installation/calico)
是一个安全的 L3 网络和网络策略驱动。
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) 结合 Flannel 和 Calico提供网络和网络策略。
* [Calico](https://docs.projectcalico.org/latest/introduction/) 是一个联网和网络策略供应商。
Calico 支持一套灵活的网络选项,因此你可以根据自己的情况选择最有效的选项,包括非覆盖和覆盖网络,带或不带 BGP。
Calico 使用相同的引擎为主机、Pod 和(如果使用 Istio 和 Envoy应用程序在服务网格层执行网络策略。
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) 结合 Flannel 和 Calico提供联网和网络策略。
* [Cilium](https://github.com/cilium/cilium) 是一个 L3 网络和网络策略插件,能够透明的实施 HTTP/API/L7 策略。
同时支持路由routing和覆盖/封装overlay/encapsulation模式。
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 无缝连接到一种 CNI 插件,
例如Flannel、Calico、Canal 或者 Weave。
同时支持路由routing和覆盖/封装overlay/encapsulation模式并且它可以在其他 CNI 插件之上工作。
<!--
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
-->
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) 使 Kubernetes 无缝连接到
Calico、Canal、Flannel 或 Weave 等其中一种 CNI 插件。
* [Contiv](https://contivpp.io/) 为各种用例和丰富的策略框架提供可配置的网络
(使用 BGP 的本机 L3、使用 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI
带 BGP 的原生 L3、带 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI
Contiv 项目完全[开源](https://github.com/contiv)。
[安装程序](https://github.com/contiv/install) 提供了基于 kubeadm 和非 kubeadm 的安装选项。
* 基于 [Tungsten Fabric](https://tungsten.io) 的
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)
是一个开源的多云网络虚拟化和策略管理平台Contrail 和 Tungsten Fabric 与业务流程系统
(例如 Kubernetes、OpenShift、OpenStack和Mesos集成在一起
其[安装程序](https://github.com/contiv/install) 提供了基于 kubeadm 和非 kubeadm 的安装选项。
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/) 基于
[Tungsten Fabric](https://tungsten.io),是一个开源的多云网络虚拟化和策略管理平台。
Contrail 和 Tungsten Fabric 与业务流程系统(例如 Kubernetes、OpenShift、OpenStack 和 Mesos集成在一起
为虚拟机、容器或 Pod 以及裸机工作负载提供了隔离模式。
<!--
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
-->
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)
是一个可以用于 Kubernetes 的 overlay 网络提供者。
* [Knitter](https://github.com/ZTE/Knitter/) 是为 kubernetes 提供复合网络解决方案的网络组件。
* Multus 是一个多插件,可在 Kubernetes 中提供多种网络支持,
以支持所有 CNI 插件(例如 CalicoCiliumContivFlannel
* [Knitter](https://github.com/ZTE/Knitter/) 是在一个 Kubernetes Pod 中支持多个网络接口的插件。
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) 是一个多插件
可在 Kubernetes 中提供多种网络支持,以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel
而且包含了在 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。
<!--
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based implementation of load balancing and network policy.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking.
-->
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) 是一个 Kubernetes 网络驱动,
基于 [OVNOpen Virtual Network](https://github.com/ovn-org/ovn/)实现,是从 Open vSwitch (OVS)
项目衍生出来的虚拟网络实现。
OVN-Kubernetes 为 Kubernetes 提供基于覆盖网络的网络实现,包括一个基于 OVS 实现的负载均衡器
和网络策略。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是一个基于 OVN 的 CNI
控制器插件提供基于云原生的服务功能链条Service Function ChainingSFC、多种 OVN 覆盖
网络、动态子网创建、动态虚拟网络创建、VLAN 驱动网络、直接驱动网络,并且可以
驳接其他的多网络插件,适用于基于边缘的、多集群联网的云原生工作负载。
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 容器插件NCP
项目衍生出来的虚拟网络实现。OVN-Kubernetes 为 Kubernetes 提供基于覆盖网络的网络实现,
包括一个基于 OVS 实现的负载均衡器和网络策略。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) 是一个基于 OVN 的 CNI 控制器插件,
提供基于云原生的服务功能链条Service Function ChainingSFC、多种 OVN 覆盖网络、动态子网创建、
动态虚拟网络创建、VLAN 驱动网络、直接驱动网络,并且可以驳接其他的多网络插件,
适用于基于边缘的、多集群联网的云原生工作负载。
<!--
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
-->
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) 容器插件NCP
提供了 VMware NSX-T 与容器协调器(例如 Kubernetes之间的集成以及 NSX-T 与基于容器的
CaaS / PaaS 平台例如关键容器服务PKS和 OpenShift之间的集成。
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)
@ -87,7 +90,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Romana](https://github.com/romana) 是一个 Pod 网络的第三层解决方案,并支持
[NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
提供在网络分组两端参与工作的网和网络策略,并且不需要额外的数据库。
提供在网络分组两端参与工作的网和网络策略,并且不需要额外的数据库。
<!--
## Service Discovery
@ -127,7 +130,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) 是可以让 Kubernetes
运行虚拟机的 add-ons。通常运行在裸机集群上。
* [节点问题检测器](https://github.com/kubernetes/node-problem-detector) 在 Linux 节点上运行,
并将系统问题报告为[事件](/docs/reference/kubernetes-api/cluster-resources/event-v1/)
并将系统问题报告为[事件](/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/)
或[节点状况](/zh-cn/docs/concepts/architecture/nodes/#condition)。
<!--

View File

@ -474,7 +474,7 @@ Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni-
<!--
The early design of the networking model and its rationale, and some future
plans are described in more detail in the
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
[networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md).
-->
网络模型的早期设计、运行原理以及未来的一些计划,
都在[联网设计文档](https://git.k8s.io/community/contributors/design-proposals/network/networking.md)里有更详细的描述。
都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有更详细的描述。

View File

@ -71,8 +71,8 @@ parameters are passed to the handler.
而被终止之前,此回调会被调用。
如果容器已经处于已终止或者已完成状态,则对 preStop 回调的调用将失败。
在用来停止容器的 TERM 信号被发出之前,回调必须执行结束。
Pod 的终止宽限周期在 `PreStop` 回调被执行之前即开始计数,所以无论
回调函数的执行结果如何,容器最终都会在 Pod 的终止宽限期内被终止。
Pod 的终止宽限周期在 `PreStop` 回调被执行之前即开始计数,
所以无论回调函数的执行结果如何,容器最终都会在 Pod 的终止宽限期内被终止。
没有参数会被传递给处理程序。
<!--
@ -113,7 +113,7 @@ the Kubernetes management system executes the handler according to the hook acti
### 回调处理程序执行
当调用容器生命周期管理回调时Kubernetes 管理系统根据回调动作执行其处理程序,
`httpGet``tcpSocket` 在kubelet 进程执行,而 `exec` 则由容器内执行
`httpGet``tcpSocket` kubelet 进程执行,而 `exec` 则由容器内执行。
<!--
Hook handler calls are synchronous within the context of the Pod containing the Container.
@ -190,7 +190,7 @@ It is up to the hook implementation to handle this correctly.
-->
### 回调递送保证
回调的递送应该是 *至少一次*,这意味着对于任何给定的事件,
回调的递送应该是**至少一次**,这意味着对于任何给定的事件,
例如 `PostStart``PreStop`,回调可以被调用多次。
如何正确处理被多次调用的情况,是回调实现所要考虑的问题。

View File

@ -13,9 +13,9 @@ Every Kubernetes object also has a [_UID_](#uids) that is unique across your who
For example, you can only have one Pod named `myapp-1234` within the same [namespace](/docs/concepts/overview/working-with-objects/namespaces/), but you can have one Pod and one Deployment that are each named `myapp-1234`.
-->
集群中的每一个对象都有一个[_名称_](#names)来标识在同类资源中的唯一性。
集群中的每一个对象都有一个[**名称**](#names)来标识在同类资源中的唯一性。
每个 Kubernetes 对象也有一个 [_UID_](#uids) 来标识在整个集群中的唯一性。
每个 Kubernetes 对象也有一个 [**UID**](#uids) 来标识在整个集群中的唯一性。
比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/)
中只能有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。
@ -171,9 +171,9 @@ UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。
<!--
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document.
-->
* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)
* 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md)的设计文档
* 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md)的设计文档

View File

@ -318,18 +318,18 @@ Disadvantages compared to imperative object configuration:
<!--
- [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [Managing Kubernetes Objects Using Kustomize (Declarative)](/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [Declarative Management of Kubernetes Objects Using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
-->
- [使用指令式命令管理 Kubernetes 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [使用对象配置管理 Kubernetes 对象(指令式)](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [使用对象配置管理 Kubernetes 对象(声明式)](/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [使用 Kustomize(声明式)管理 Kubernetes 对象](/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/)
- [使用配置文件对 Kubernetes 对象进行命令式管理](/zh-cn/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [使用配置文件对 Kubernetes 对象进行声明式管理](/zh-cn/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [使用 Kustomize 对 Kubernetes 对象进行声明式管理](/zh-cn/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Kubectl 命令参考](/docs/reference/generated/kubectl/kubectl-commands/)
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubectl Book](https://kubectl.docs.kubernetes.io/zh/)
- [Kubernetes API 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)

View File

@ -28,7 +28,7 @@ A _LimitRange_ provides constraints that can:
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
-->
一个 _LimitRange限制范围_ 对象提供的限制能够做到:
一个 **LimitRange(限制范围)** 对象提供的限制能够做到:
- 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。
- 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。
@ -40,13 +40,14 @@ A _LimitRange_ provides constraints that can:
LimitRange support has been enabled by default since Kubernetes 1.10.
LimitRange support is enabled by default for many Kubernetes distributions.
A LimitRange is enforced in a particular namespace when there is a
LimitRange object in that namespace.
-->
## 启用 LimitRange
对 LimitRange 的支持自 Kubernetes 1.10 版本默认启用。
LimitRange 支持在很多 Kubernetes 发行版本中也是默认启用的
当某命名空间中有一个 LimitRange 对象时,将在该命名空间中实施 LimitRange 限制
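
下面是一个最小的 LimitRange 示例清单(名称、名字空间与数值均为示意性假设),
为容器设置默认的资源请求/限制以及内存上限:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range    # 示例名称(假设)
  namespace: demo              # 示例名字空间(假设)
spec:
  limits:
  - type: Container
    defaultRequest:            # 容器未显式设置 requests 时注入的默认值
      cpu: 250m
      memory: 256Mi
    default:                   # 容器未显式设置 limits 时注入的默认值
      cpu: 500m
      memory: 512Mi
    max:                       # 允许的内存上限
      memory: 1Gi
```

这些默认值与上下限由 `LimitRanger` 准入控制器在创建 Pod 时实施。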
<!--
The name of a LimitRange object must be a valid
@ -58,7 +59,7 @@ LimitRange 的名称必须是合法的
<!--
### Overview of Limit Range
- The administrator creates one `LimitRange` in one namespace.
- The administrator creates one LimitRange in one namespace.
- Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace.
- The `LimitRanger` admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements and tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace.
- If creating or updating a resource (Pod, Container, PersistentVolumeClaim) that violates a LimitRange constraint, the request to the API server will fail with an HTTP status code `403 FORBIDDEN` and a message explaining the constraint that have been violated.
@ -106,21 +107,21 @@ Neither contention nor changes to a LimitRange will affect already created resou
## {{% heading "whatsnext" %}}
<!--
See [LimitRanger design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for more information.
-->
参阅 [LimitRanger 设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)获取更多信息。
参阅 [LimitRanger 设计文档](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md)获取更多信息。
<!--
For examples on using limits, see:
- See [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- See [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- See [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- See [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- Check [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- See a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/).
- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
-->
关于使用限值的例子,可参
关于使用限值的例子,可参阅:
- [如何配置每个命名空间最小和最大的 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。
- [如何配置每个命名空间最小和最大的内存约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。

View File

@ -33,7 +33,7 @@ be created in a namespace by type, as well as the total amount of compute resour
be consumed by resources in that namespace.
-->
资源配额,通过 `ResourceQuota` 对象来定义,对每个命名空间的资源消耗总量提供限制。
它可以限制命名空间中某种类型的对象的总数目上限,也可以限制命空间中的 Pod 可以使用的计算资源的总上限。
它可以限制命名空间中某种类型的对象的总数目上限,也可以限制命名空间中的 Pod 可以使用的计算资源的总上限。
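
例如,下面是一个简化的 ResourceQuota 清单(名称、名字空间与数值均为示意性假设),
同时限制对象个数与计算资源总量:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota             # 示例名称(假设)
  namespace: demo              # 示例名字空间(假设)
spec:
  hard:
    pods: "10"                 # 该名字空间中 Pod 总数上限
    requests.cpu: "4"          # 所有 Pod 的 CPU 请求总量上限
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```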
<!--
Resource quotas work like this:
@ -52,14 +52,15 @@ Resource quotas work like this:
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
-->
- 不同的团队可以在不同的命名空间下工作。这可以通过 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 不同的团队可以在不同的命名空间下工作。这可以通过
[RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会
跟踪集群的资源使用情况,以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会跟踪集群的资源使用情况,
以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
- 如果资源创建或者更新请求违反了配额约束,那么该请求会报错(HTTP 403 FORBIDDEN),
并在消息中给出有可能违反的约束。
- 如果命名空间下的计算资源 (如 `cpu``memory`)的配额被启用,则用户必须为
这些资源设定请求值request和约束值limit否则配额系统将拒绝 Pod 的创建。
- 如果命名空间下的计算资源 (如 `cpu``memory`)的配额被启用,
  则用户必须为这些资源设定请求值(request)和约束值(limit),否则配额系统将拒绝 Pod 的创建。
提示: 可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。
若想避免这类问题,请参考
@ -161,7 +162,7 @@ The following resource types are supported:
### Resource Quota For Extended Resources
In addition to the resources mentioned above, in release 1.10, quota support for
[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) is added.
[extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) is added.
-->
### 扩展资源的资源配额
@ -316,12 +317,9 @@ Job 而导致集群拒绝服务。
<!--
It is possible to do generic object count quota on a limited set of resources.
In addition, it is possible to further constrain quota for particular resources by their type.
The following types are supported:
-->
对有限的一组资源上实施一般性的对象数量配额也是可能的。
此外,还可以进一步按资源的类型设置其配额。
支持以下类型:
@ -466,10 +464,10 @@ one value. For example:
```
<!--
If the `operator` is `Exists` or `DoesNotExist`, the `values field must *NOT* be
If the `operator` is `Exists` or `DoesNotExist`, the `values` field must *NOT* be
specified.
-->
如果 `operator``Exists``DoesNotExist`,则*不*可以设置 `values` 字段。
如果 `operator``Exists``DoesNotExist`,则****可以设置 `values` 字段。
<!--
### Resource Quota Per PriorityClass
@ -495,8 +493,8 @@ A quota is matched and consumed only if `scopeSelector` in the quota spec select
When quota is scoped for priority class using `scopeSelector` field, quota object
is restricted to track only following resources:
-->
如果配额对象通过 `scopeSelector` 字段设置其作用域为优先级类,则配额对象只能
跟踪以下资源:
如果配额对象通过 `scopeSelector` 字段设置其作用域为优先级类,
则配额对象只能跟踪以下资源:
* `pods`
* `cpu`
@ -713,27 +711,27 @@ Operators can use `CrossNamespacePodAffinity` quota scope to limit which namespa
have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed
to set `namespaces` or `namespaceSelector` fields in pod affinity terms.
-->
集群运维人员可以使用 `CrossNamespacePodAffinity` 配额作用域来
限制哪个名字空间中可以存在包含跨名字空间亲和性规则的 Pod。
更为具体一点,此作用域用来配置哪些 Pod 可以在其 Pod 亲和性规则
中设置 `namespaces``namespaceSelector` 字段。
集群运维人员可以使用 `CrossNamespacePodAffinity`
配额作用域来限制哪个名字空间中可以存在包含跨名字空间亲和性规则的 Pod。
更为具体一点,此作用域用来配置哪些 Pod 可以在其 Pod 亲和性规则中设置
`namespaces``namespaceSelector` 字段。
<!--
Preventing users from using cross-namespace affinity terms might be desired since a pod
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
-->
禁止用户使用跨名字空间的亲和性规则可能是一种被需要的能力,因为带有
反亲和性约束的 Pod 可能会阻止所有其他名字空间的 Pod 被调度到某失效域中。
禁止用户使用跨名字空间的亲和性规则可能是一种被需要的能力,
因为带有反亲和性约束的 Pod 可能会阻止所有其他名字空间的 Pod 被调度到某失效域中。
<!--
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below)
from having pods that use cross-namespace pod affinity by creating a resource quota object in
that namespace with `CrossNamespaceAffinity` scope and hard limit of 0:
-->
使用此作用域操作符可以避免某些名字空间(例如下面例子中的 `foo-ns`)运行
特别的 Pod这类 Pod 使用跨名字空间的 Pod 亲和性约束,在该名字空间中创建
了作用域为 `CrossNamespaceAffinity` 的、硬性约束为 0 的资源配额对象。
使用此作用域操作符可以避免某些名字空间(例如下面例子中的 `foo-ns`)运行特别的 Pod
这类 Pod 使用跨名字空间的 Pod 亲和性约束,在该名字空间中创建了作用域为
`CrossNamespaceAffinity` 的、硬性约束为 0 的资源配额对象。
```yaml
apiVersion: v1
@ -752,12 +750,12 @@ spec:
<!--
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespaceAffinity`
as a limited resource by setting the kube-apiserver flag -admission-control-config-file
as a limited resource by setting the kube-apiserver flag --admission-control-config-file
to the path of the following configuration file:
-->
如果集群运维人员希望默认禁止使用 `namespaces``namespaceSelector`
仅仅允许在特定名字空间中这样做,他们可以将 `CrossNamespaceAffinity` 作为一个
被约束的资源。方法是为 `kube-apiserver` 设置标志
如果集群运维人员希望默认禁止使用 `namespaces``namespaceSelector`
仅仅允许在特定名字空间中这样做,他们可以将 `CrossNamespaceAffinity`
作为一个被约束的资源。方法是为 `kube-apiserver` 设置标志
`--admission-control-config-file`,使之指向如下的配置文件:
```yaml
@ -779,8 +777,8 @@ With the above configuration, pods can use `namespaces` and `namespaceSelector`
if the namespace where they are created have a resource quota object with
`CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
-->
基于上面的配置,只有名字空间中包含作用域为 `CrossNamespaceAffinity`
硬性约束大于或等于使用 `namespaces``namespaceSelector` 字段的 Pods
基于上面的配置,只有名字空间中包含作用域为 `CrossNamespaceAffinity`
硬性约束大于或等于使用 `namespaces``namespaceSelector` 字段的 Pod
个数时,才可以在该名字空间中继续创建在其 Pod 亲和性规则中设置 `namespaces`
`namespaceSelector` 的新 Pod。
@ -987,18 +985,18 @@ should be allowed in a namespace, if and only if, a matching quota object exists
(例如 "cluster-services")的 Pod。
<!--
With this mechanism, operators will be able to restrict usage of certain high
With this mechanism, operators are able to restrict usage of certain high
priority classes to a limited number of namespaces and not every namespace
will be able to consume these priority classes by default.
-->
通过这种机制,操作人员能够限制某些高优先级类仅出现在有限数量的命名空间中,
通过这种机制,操作人员能够限制某些高优先级类仅出现在有限数量的命名空间中,
而并非每个命名空间默认情况下都能够使用这些优先级类。
<!--
To enforce this, kube-apiserver flag `-admission-control-config-file` should be
To enforce this, `kube-apiserver` flag `--admission-control-config-file` should be
used to pass path to the following configuration file:
-->
要实现此目的,应设置 kube-apiserver 的标志 `--admission-control-config-file`
要实现此目的,应设置 `kube-apiserver` 的标志 `--admission-control-config-file`
指向如下配置文件:
```yaml
@ -1057,14 +1055,13 @@ and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
<!--
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read [Quota support for priority class design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md).
- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
-->
- 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
- 查看[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。
- 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。
了解更多信息。
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
- 参阅[资源配额设计文档](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md)。
- 参阅[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。
- 参阅[优先级类配额支持的设计文档](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md)了解更多信息。
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)。

View File

@ -31,7 +31,17 @@ so that Pods with higher Priority can schedule on Nodes. Eviction is the process
of terminating one or more Pods on Nodes.
-->
<!-- ## Scheduling -->
<!--
## Scheduling
* [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
* [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
* [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework)
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [Resource Bin Packing for Extended Resources](/docs/concepts/scheduling-eviction/resource-bin-packing/)
-->
## 调度
@ -43,10 +53,20 @@ of terminating one or more Pods on Nodes.
* [调度器的性能调试](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [扩展资源的资源装箱](/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/)
<!-- ## Pod Disruption -->
<!--
## Pod Disruption
{{<glossary_definition term_id="pod-disruption" length="all">}}
* [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
-->
## Pod 干扰
{{<glossary_definition term_id="pod-disruption" length="all">}}
* [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* [节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* [节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* [API发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)

View File

@ -18,13 +18,13 @@ weight: 20
<!--
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
There are several ways to do this, and the recommended approaches all use
{{< glossary_tooltip text="node(s)" term_id="node" >}}.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.)
(for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate pods from two different
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
-->
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
@ -172,6 +172,19 @@ define. Some of the benefits of affinity and anti-affinity include:
* 你可以使用节点上(或其他拓扑域中)运行的其他 Pod 的标签来实施调度约束,
而不是只能使用节点本身的标签。这个能力让你能够定义规则允许哪些 Pod 可以被放置在一起。
<!--
The affinity feature consists of two types of affinity:
* *Node affinity* functions like the `nodeSelector` field but is more expressive and
allows you to specify soft rules.
* *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
on other Pods.
-->
亲和性功能由两种类型的亲和性组成:
* **节点亲和性**功能类似于 `nodeSelector` 字段,但它的表达能力更强,并且允许你指定软规则。
* **Pod 间亲和性/反亲和性**允许你根据其他 Pod 的标签来约束 Pod。
<!--
### Node affinity
@ -222,15 +235,16 @@ For example, consider the following Pod spec:
<!--
In this example, the following rules apply:
* The node *must* have a label with the key `kubernetes.io/os` and
the value `linux`.
* The node *must* have a label with the key `topology.kubernetes.io/zone` and
the value of that label *must* be either `antarctica-east1` or `antarctica-west1`.
* The node *preferably* has a label with the key `another-node-label-key` and
the value `another-node-label-value`.
-->
在这一示例中,所应用的规则如下:
* 节点必须包含键名为 `kubernetes.io/os` 的标签,并且其取值为 `linux`
* 节点 **最好** 具有键名为 `another-node-label-key` 且取值为
* 节点**必须**包含一个键名为 `topology.kubernetes.io/zone` 的标签,
并且该标签的取值**必须**为 `antarctica-east1``antarctica-west1`
* 节点**最好**具有一个键名为 `another-node-label-key` 且取值为
`another-node-label-value` 的标签。
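
上述规则对应的 Pod 规约大致如下(仅为示意,名称与镜像为假设):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity                             # 示例名称(假设)
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # “必须”满足的规则
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      preferredDuringSchedulingIgnoredDuringExecution: # “最好”满足的规则
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0                   # 示例镜像(假设)
```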
<!--
@ -269,7 +283,7 @@ satisfied.
<!--
If you specify multiple `matchExpressions` associated with a single `nodeSelectorTerms`,
then the Pod can be scheduled onto a node only if all the `matchExpressions` are
satisfied.
satisfied.
-->
如果你指定了多个与同一 `nodeSelectorTerms` 关联的 `matchExpressions`
则只有当所有 `matchExpressions` 都满足时 Pod 才可以被调度到节点上。
@ -341,8 +355,8 @@ must have existing nodes with the `kubernetes.io/os=linux` label.
<!--
When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate
a profile with a Node affinity, which is useful if a profile only applies to a specific set of nodes.
To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes.
To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:
-->
在配置多个[调度方案](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)时,
@ -410,7 +424,7 @@ Inter-pod affinity and anti-affinity allow you to constrain which nodes your
Pods can be scheduled on based on the labels of **Pods** already running on that
node, instead of the node labels.
-->
### pod 间亲和性与反亲和性 {#inter-pod-affinity-and-anti-affinity}
### Pod 间亲和性与反亲和性 {#inter-pod-affinity-and-anti-affinity}
Pod 间亲和性与反亲和性使你可以基于已经在节点上运行的 **Pod** 的标签来约束
Pod 可以调度到的节点,而不是基于节点上的标签。
@ -552,9 +566,9 @@ same zone currently running Pods with the `Security=S2` Pod label.
<!--
To get yourself more familiar with the examples of Pod affinity and anti-affinity,
refer to the [design proposal](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/podaffinity.md).
refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
-->
查阅[设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/podaffinity.md)
查阅[设计文档](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)
以进一步熟悉 Pod 亲和性与反亲和性的示例。
<!--
@ -571,8 +585,7 @@ exceptions for performance and security reasons:
有一些限制:
<!--
* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both
`requiredDuringSchedulingIgnoredDuringExecution`
* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
* For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
the admission controller `LimitPodHardAntiAffinityTopology` limits
@ -634,6 +647,14 @@ Pod 间亲和性与反亲和性在与更高级别的集合(例如 ReplicaSet
Deployment 等)一起使用时,它们可能更加有用。
这些规则使得你可以配置一组工作负载,使其位于相同定义拓扑(例如,节点)中。
<!--
Take, for example, a three-node cluster running a web application with an
in-memory cache like redis. You could use inter-pod affinity and anti-affinity
to co-locate the web servers with the cache as much as possible.
-->
以一个三节点的集群为例,该集群运行一个带有 Redis 这种内存缓存的 Web 应用程序。
你可以使用 Pod 间亲和性和反亲和性来尽可能地将 Web 服务器与缓存并置。
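
下面是一个简化的 Redis 缓存 Deployment 片段(名称与镜像为示意性假设),
通过 `podAntiAffinity` 避免把多个副本调度到同一节点:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache                                 # 示例名称(假设)
spec:
  replicas: 3
  selector:
    matchLabels:
      app: store
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"   # 以节点作为拓扑域
      containers:
      - name: redis-server
        image: redis:3.2-alpine                     # 示例镜像(假设)
```

Web 服务器的 Deployment 可以再配合 `podAffinity` 选择带有 `app=store` 标签的 Pod,从而与缓存尽量并置。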
<!--
In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
@ -803,16 +824,16 @@ The above Pod will only run on the node `kube-01`.
<!--
* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) .
* Read the design docs for [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md).
* Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
* Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
resource allocation decisions.
* Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).
* Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
-->
* 进一步阅读[污点与容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)文档。
* 阅读[节点亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
和[Pod 间亲和性与反亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
* 阅读[节点亲和性](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
和[Pod 间亲和性与反亲和性](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)
的设计文档。
* 了解[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/)如何参与节点层面资源分配决定。
* 了解如何使用 [nodeSelector](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)。

View File

@ -22,7 +22,7 @@ During a node-pressure eviction, the kubelet sets the `PodPhase` for the
selected pods to `Failed`. This terminates the pods.
Node-pressure eviction is not the same as
[API-initiated eviction](/docs/reference/generated/kubernetes-api/v1.23/).
[API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/).
-->
{{<glossary_tooltip term_id="kubelet" text="kubelet">}}
监控集群节点的 CPU、内存、磁盘空间和文件系统的 inode 等资源。
@ -31,7 +31,7 @@ kubelet 可以主动地使节点上一个或者多个 Pod 失效,以回收资
在节点压力驱逐期间,kubelet 将所选 Pod 的 `PodPhase` 设置为 `Failed`。这将终止 Pod。
节点压力驱逐不同于 [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/)。
节点压力驱逐不同于 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)。
<!--
The kubelet does not respect your configured `PodDisruptionBudget` or the pod's
@ -129,7 +129,7 @@ memory is reclaimable under pressure.
`memory.available` 的值来自 cgroupfs而不是像 `free -m` 这样的工具。
这很重要,因为 `free -m` 在容器中不起作用,如果用户使用
[节点可分配资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
这一功能特性,资源不足的判定是基于 CGroup 层次结构中的用户 Pod 所处的局部及 CGroup 根节点作出的。
这一功能特性,资源不足的判定是基于 cgroup 层次结构中的用户 Pod 所处的局部及 cgroup 根节点作出的。
这个[脚本](/zh-cn/examples/admin/resource/memory-available.sh)
重现了 kubelet 为计算 `memory.available` 而执行的相同步骤。
kubelet 在其计算中排除了 inactive_file(即非活动 LRU 列表上基于文件来虚拟的内存的字节数),
@ -154,15 +154,26 @@ kubelet 支持以下文件系统分区:
kubelet 会自动发现这些文件系统并忽略其他文件系统。kubelet 不支持其他配置。
{{<note>}}
<!--
Some kubelet garbage collection features are deprecated in favor of eviction.
For a list of the deprecated features, see [kubelet garbage collection deprecation](/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation).
<!--
Some kubelet garbage collection features are deprecated in favor of eviction:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
-->
一些 kubelet 垃圾收集功能已被弃用,以支持驱逐。
有关已弃用功能的列表,请参阅
[kubelet 垃圾收集弃用](/zh-cn/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation)。
{{</note>}}
一些 kubelet 垃圾收集功能已被弃用,以鼓励使用驱逐机制。
| 现有标志 | 新的标志 | 原因 |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard``--eviction-soft` | 现有的驱逐信号可以触发镜像垃圾收集 |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 驱逐回收具有相同的行为 |
| `--maximum-dead-containers` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
| `--maximum-dead-containers-per-container` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
| `--minimum-container-ttl-duration` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
<!--
### Eviction thresholds
@ -247,7 +258,7 @@ You can use the following flags to configure soft eviction thresholds:
如果驱逐条件持续时长超过指定的宽限期,可以触发 Pod 驱逐。
* `eviction-soft-grace-period`:一组驱逐宽限期,
`memory.available=1m30s`,定义软驱逐条件在触发 Pod 驱逐之前必须保持多长时间。
* `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。
* `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。
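
作为示意,下面是一段数值为假设的 kubelet 配置(KubeletConfiguration)片段,
将上述标志以配置文件字段的形式表达出来:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionSoft:                    # 对应 --eviction-soft
  memory.available: "1Gi"
  nodefs.available: "15%"
evictionSoftGracePeriod:         # 对应 --eviction-soft-grace-period
  memory.available: "1m30s"
  nodefs.available: "2m"
evictionMaxPodGracePeriod: 60    # 对应 --eviction-max-pod-grace-period(单位:秒)
```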
<!--
#### Hard eviction thresholds {#hard-eviction-thresholds}
@ -320,7 +331,7 @@ kubelet 根据下表将驱逐信号映射为节点状况:
| 节点条件 | 驱逐信号 | 描述 |
|---------|--------|------|
| `MemoryPressure` | `memory.available` | 节点上的可用内存已满足驱逐条件 |
| `DiskPressure` | `nodefs.available`、`nodefs.inodesFree`、`imagefs.available` 或 `imagefs.inodesFree` | 节点的根文件系统或像文件系统上的可用磁盘空间和 inode 已满足驱逐条件 |
| `DiskPressure` | `nodefs.available`、`nodefs.inodesFree`、`imagefs.available` 或 `imagefs.inodesFree` | 节点的根文件系统或镜像文件系统上的可用磁盘空间和 inode 已满足驱逐条件 |
| `PIDPressure` | `pid.available` | (Linux) 节点上的可用进程标识符已低于驱逐条件 |
kubelet 根据配置的 `--node-status-update-frequency` 更新节点条件,默认为 `10s`
@ -472,7 +483,7 @@ requests.
The kubelet sorts pods differently based on whether the node has a dedicated
`imagefs` filesystem:
-->
当 kubelet 因 inode 或 PID 不足而驱逐 pod 时,
当 kubelet 因 inode 或 PID 不足而驱逐 Pod 时,
它使用优先级来确定驱逐顺序,因为 inode 和 PID 没有请求。
kubelet 根据节点是否具有专用的 `imagefs` 文件系统对 Pod 进行不同的排序:
@ -648,7 +659,7 @@ Consider the following scenario:
* 节点内存容量:`10Gi`
* 操作员希望为系统守护进程(内核、`kubelet` 等)保留 10% 的内存容量
* 操作员希望驱逐内存利用率为 95% 的Pod以减少系统 OOM 的概率。
* 操作员希望驱逐内存利用率为 95% 的 Pod,以减少系统 OOM 的概率。
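
按照这一场景,可用内存阈值为 10Gi × 5% = 500Mi;为系统守护进程保留的 1Gi 再加上该驱逐阈值,
可将系统预留内存设为 1.5Gi。一种可能的 kubelet 配置如下(仅为推算示意,
等价于命令行标志 `--system-reserved=memory=1.5Gi --eviction-hard=memory.available<500Mi`):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:                  # 为系统守护进程保留:1Gi + 500Mi 驱逐阈值
  memory: "1.5Gi"
evictionHard:                    # 可用内存低于 500Mi(即用量达到 95%)时触发硬驱逐
  memory.available: "500Mi"
```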
<!--
For this to work, the kubelet is launched as follows:
@ -757,7 +768,7 @@ You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need
to estimate or measure an optimal memory limit value for that container.
-->
更多细节请参见 [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
更多细节请参见 [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
你可以通过为可能执行 I/O 密集型活动的容器设置相同的内存限制和内存请求来应对该行为。
你将需要估计或测量该容器的最佳内存限制值。
@ -765,14 +776,14 @@ to estimate or measure an optimal memory limit value for that container.
## {{% heading "whatsnext" %}}
<!--
* Learn about [API-initiated Eviction](/docs/reference/generated/kubernetes-api/v1.23/)
* Learn about [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
* Learn about [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* Learn about [PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/)
* Learn about [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/) (QoS)
* Check out the [Eviction API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)
-->
* 了解 [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/)
* 了解 [Pod 优先级和驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* 了解 [PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/)
* 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
* 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/)
* 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)QoS
* 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)

View File

@ -639,7 +639,7 @@ exceeding its requests, it won't be evicted. Another Pod with higher priority
that exceeds its requests may be evicted.
-->
kubelet 使用优先级来确定
[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) Pod 的顺序。
[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/) Pod 的顺序。
你可以使用 QoS 类来估计 Pod 最有可能被驱逐的顺序。kubelet 根据以下因素对 Pod 进行驱逐排名:
1. 对紧俏资源的使用是否超过请求值
@ -650,8 +650,8 @@ kubelet 使用优先级来确定
[kubelet 驱逐时 Pod 的选择](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)。
当某 Pod 的资源用量未超过其请求时kubelet 节点压力驱逐不会驱逐该 Pod。
如果优先级较低的 Pod 没有超过其请求,则不会被驱逐。
另一个优先级高于其请求的 Pod 可能会被驱逐。
如果优先级较低的 Pod 的资源使用量没有超过其请求,则不会被驱逐。
另一个优先级较高且资源使用量超过其请求的 Pod 可能会被驱逐。
## {{% heading "whatsnext" %}}
@ -659,11 +659,11 @@ kubelet 使用优先级来确定
* Read about using ResourceQuotas in connection with PriorityClasses:
[limit Priority Class consumption by default](/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
* Learn about [Pod Disruption](/docs/concepts/workloads/pods/disruptions/)
* Learn about [API-initiated Eviction](/docs/reference/generated/kubernetes-api/v1.23/)
* Learn about [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
* Learn about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
-->
* 阅读有关将 ResourceQuota 与 PriorityClass 结合使用的信息:
[默认限制优先级消费](/zh-cn/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
* 了解 [Pod 干扰](/zh-cn/docs/concepts/workloads/pods/disruptions/)
* 了解 [API 发起的驱逐](/docs/reference/generated/kubernetes-api/v1.23/)
* 了解[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
* 了解[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)

View File

@ -528,6 +528,6 @@ arbitrary tolerations to DaemonSets.
* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it
* Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
-->
* 阅读[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* 阅读[节点压力驱逐](/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/)
以及如何配置其行为
* 阅读 [Pod 优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@ -40,14 +40,18 @@ following diagram:
## 传输安全 {#transport-security}
<!--
In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
By default, the Kubernetes API server listens on port 6443 on the first non-localhost network interface, protected by TLS. In a typical production Kubernetes cluster, the API serves on port 443. The port can be changed with the `--secure-port`, and the listening IP address with the `--bind-address` flag.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA.
to a generally recognized CA. The certificate and corresponding private key can be set by using the `--tls-cert-file` and `--tls-private-key-file` flags.
-->
在典型的 Kubernetes 集群中API 服务器在 443 端口上提供服务,受 TLS 保护。
API 服务器出示证书。
该证书可以使用私有证书颁发机构CA签名也可以基于链接到公认的 CA 的公钥基础架构签名。
默认情况下,Kubernetes API 服务器在第一个非 localhost 网络接口的 6443 端口上进行监听,
受 TLS 保护。在一个典型的 Kubernetes 生产集群中,API 使用 443 端口。
该端口可以通过 `--secure-port` 进行变更,监听 IP 地址可以通过 `--bind-address` 标志进行变更。
API 服务器出示证书。该证书可以使用私有证书颁发机构(CA)签名,也可以基于链接到公认的 CA 的公钥基础架构签名。
该证书和相应的私钥可以通过使用 `--tls-cert-file` 和 `--tls-private-key-file` 标志进行设置。
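
下面是一个示意性的 kube-apiserver 静态 Pod 命令行片段(证书路径采用常见的 kubeadm 默认路径,仅作假设),
展示上述端口与证书相关标志的用法:

```yaml
# kube-apiserver 静态 Pod 清单中 command 字段的相关片段(示意)
command:
- kube-apiserver
- --secure-port=6443                                        # 监听端口
- --bind-address=0.0.0.0                                    # 监听 IP 地址
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt         # 服务证书(路径为假设)
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key  # 对应私钥(路径为假设)
```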
<!--
If your cluster uses a private certificate authority, you need a copy of that CA
@ -243,63 +247,7 @@ For more information, see [Auditing](/docs/tasks/debug/debug-cluster/audit/).
Kubernetes 审计提供了一套与安全相关的、按时间顺序排列的记录,其中记录了集群中的操作序列。
集群对用户、使用 Kubernetes API 的应用程序以及控制平面本身产生的活动进行审计。
更多信息请参考 [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/).
<!-- ## API server ports and IPs -->
## API 服务器端口和 IP {#api-server-ports-and-ips}
<!--
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default, the Kubernetes API server serves HTTP on 2 ports:
-->
前面的讨论适用于发送到 API 服务器的安全端口的请求(典型情况)。 API 服务器实际上可以在 2 个端口上提供服务:
默认情况下Kubernetes API 服务器在 2 个端口上提供 HTTP 服务:
<!--
1. `localhost` port:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access
2. “Secure port”:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authorization modules run.
-->
1. `localhost` 端口:
- 用于测试和引导,以及主控节点上的其他组件(调度器,控制器管理器)与 API 通信
- 没有 TLS
- 默认为端口 8080
- 默认 IP 为 localhost使用 `--insecure-bind-address` 进行更改
- 请求 **绕过** 身份认证和鉴权模块
- 由准入控制模块处理的请求
- 受需要访问主机的保护
2. “安全端口”:
- 尽可能使用
- 使用 TLS。 用 `--tls-cert-file` 设置证书,用 `--tls-private-key-file` 设置密钥
- 默认端口 6443使用 `--secure-port` 更改
- 默认 IP 是第一个非本地网络接口,使用 `--bind-address` 更改
- 请求须经身份认证和鉴权组件处理
- 请求须经准入控制模块处理
- 身份认证和鉴权模块运行
更多信息请参考[审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)。
## {{% heading "whatsnext" %}}
@ -348,4 +296,4 @@ You can learn about:
你可以了解
- Pod 如何使用
[Secrets](/zh-cn/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
获取 API 凭证.
获取 API 凭证

View File

@ -43,8 +43,6 @@ Pod 安全性标准定义了三种不同的 **策略Policy**,以广泛
<!--
## Profile Details
### Privileged
-->
## Profile 细节 {#profile-details}
@ -98,10 +96,9 @@ fail validation.
<td>控制(Control)</td>
<td>策略(Policy)</td>
</tr>
<tr>
<!-- <td style="white-space: nowrap">HostProcess</td> -->
<td style="white-space: nowrap">HostProcess</td>
<td>
<tr>
<td style="white-space: nowrap">HostProcess</td>
<td>
<p><!--Windows pods offer the ability to run <a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess containers</a> which enables privileged access to the Windows node. Privileged access to the host is disallowed in the baseline policy. -->
Windows Pod 提供了运行 <a href="/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess 容器</a> 的能力,这使得对 Windows 节点的特权访问成为可能。Baseline 策略中禁止对宿主的特权访问。{{< feature-state for_k8s_version="v1.23" state="beta" >}}
</p>
@ -121,7 +118,7 @@ fail validation.
</tr>
<tr>
<td style="white-space: nowrap"><!--Host Namespaces-->宿主名字空间</td>
<td>
<td>
<p><!--Sharing the host namespaces must be disallowed.-->必须禁止共享宿主上的名字空间。</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
<ul>
@ -195,7 +192,6 @@ fail validation.
<li><!--Undefined/nil-->未定义、nil</li>
</ul>
</td>
<td>
</tr>
<tr>
<td style="white-space: nowrap"><!--Host Ports-->宿主端口</td>
@ -284,7 +280,7 @@ fail validation.
</ul>
</td>
</tr>
<tr>
<tr>
<td>Seccomp</td>
<td>
<p><!--Seccomp profile must not be explicitly set to <code>Unconfined</code>.-->Seccomp 配置必须不能显式设置为 <code>Unconfined</code></p>
@ -304,7 +300,7 @@ fail validation.
</td>
</tr>
<tr>
<td>Sysctls</td>
<td style="white-space: nowrap">Sysctls</td>
<td>
<p><!--Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node.-->Sysctls 可以禁用安全机制或影响宿主上所有容器,因此除了若干“安全”的子集之外,应该被禁止。如果某 sysctl 是受容器或 Pod 的名字空间限制,且与节点上其他 Pod 或进程相隔离,可认为是安全的。</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
@ -360,7 +356,7 @@ fail validation.
<td colspan="2"><em><!--Everything from the baseline profile.-->Baseline 策略的所有要求。</em></td>
</tr>
<tr>
<td style="white-space: nowrap"><!--Volume Types-->卷类型</td>
<td style="white-space: nowrap"><!--Volume Types-->卷类型</td>
<td>
<p><!--In addition to restricting HostPath volumes, the restricted policy limits usage of non-core volume types to those defined through PersistentVolumes.-->除了限制 HostPath 卷之外,此类策略还限制可以通过 PersistentVolumes 定义的非核心卷类型。</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
@ -382,7 +378,7 @@ fail validation.
</td>
</tr>
<tr>
<td style="white-space: nowrap"><!--Privilege Escalation (v1.8+)-->特权提升v1.8+</td>
<td style="white-space: nowrap"><!--Privilege Escalation (v1.8+)-->特权提升v1.8+</td>
<td>
<p><!--Privilege escalation (such as via set-user-ID or set-group-ID file mode) should not be allowed.-->禁止(通过 SetUID 或 SetGID 文件模式)获得特权提升。</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
@ -398,8 +394,8 @@ fail validation.
</td>
</tr>
<tr>
<td style="white-space: nowrap"><!--Running as Non-root-->以非 root 账号运行</td>
<td>
<td style="white-space: nowrap"><!--Running as Non-root-->以非 root 账号运行</td>
<td>
<p><!--Containers must be required to run as non-root users.-->容器必须以非 root 账号运行。</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
<ul>
@ -453,15 +449,29 @@ fail validation.
<li><code>Localhost</code></li>
</ul>
<small>
<!--The container fields may be undefined/<code>nil</code> if the pod-level <code>spec.securityContext.seccompProfile.type</code> field is set appropriately. Conversely, the pod-level field may be undefined/<code>nil</code> if _all_ container- level fields are set.-->如果 Pod 级别的 <code>spec.securityContext.seccompProfile.type</code> 已设置得当,容器级别的安全上下文字段可以为 未定义/<code>nil</code>。反而言之,如果 <bold>所有的</bold> 容器级别的安全上下文字段已设置,则 Pod 级别的字段可为 未定义/<code>nil</code>
<!--
The container fields may be undefined/<code>nil</code> if the pod-level
<code>spec.securityContext.seccompProfile.type</code> field is set appropriately.
Conversely, the pod-level field may be undefined/<code>nil</code> if _all_ container-
level fields are set.
-->
如果 Pod 级别的 <code>spec.securityContext.seccompProfile.type</code>
已设置得当,容器级别的安全上下文字段可以为未定义/<code>nil</code>
反之如果 <bold>所有的</bold> 容器级别的安全上下文字段已设置,
则 Pod 级别的字段可为 未定义/<code>nil</code>
</small>
</td>
</tr>
<tr>
</tr>
<tr>
<td style="white-space: nowrap"><!--Capabilities (v1.22+) -->权能v1.22+</td>
<td>
<td>
<p>
<!--Containers must drop <code>ALL</code> capabilities, and are only permitted to add back the <code>NET_BIND_SERVICE</code> capability.-->容器必须弃用 <code>ALL</code> 权能,并且只允许添加 <code>NET_BIND_SERVICE</code> 权能。
<!--
Containers must drop <code>ALL</code> capabilities, and are only permitted to add back
the <code>NET_BIND_SERVICE</code> capability.
-->
容器必须弃用 <code>ALL</code> 权能,并且只允许添加
<code>NET_BIND_SERVICE</code> 权能。
</p>
<p><strong><!--Restricted Fields-->限制的字段</strong></p>
<ul>
@ -568,13 +578,13 @@ SIG Auth may reconsider this position in the future, should a clear need for oth
SIG Auth 可能会在将来考虑这个范围的框架,前提是有对其他框架的需求。
<!--
### What's the difference between a security policy and a security context?
### What's the difference between a security profile and a security context?
[Security Contexts](/docs/tasks/configure-pod-container/security-context/) configure Pods and
Containers at runtime. Security contexts are defined as part of the Pod and container specifications
in the Pod manifest, and represent parameters to the container runtime.
-->
### 安全策略与安全上下文的区别是什么?
### 安全配置与安全上下文的区别是什么?
[安全上下文](/zh-cn/docs/tasks/configure-pod-container/security-context/)在运行时配置 Pod
和容器。安全上下文是在 Pod 清单中作为 Pod 和容器规约的一部分来定义的,
@ -595,18 +605,18 @@ built-in [Pod Security Admission Controller](/docs/concepts/security/pod-securit
### What profiles should I apply to my Windows Pods?
Windows in Kubernetes has some limitations and differentiators from standard Linux-based
workloads. Specifically, the Pod SecurityContext fields [have no effect on
Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As
such, no standardized Pod Security profiles currently exists.
workloads. Specifically, many of the Pod SecurityContext fields
[have no effect on Windows](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext).
As such, no standardized Pod Security profiles currently exist.
-->
### 我应该为我的 Windows Pod 实施哪种框架?
Kubernetes 中的 Windows 负载与标准的基于 Linux 的负载相比有一些局限性和区别。
尤其是 Pod SecurityContext
字段[对 Windows 不起作用](/zh-cn/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext)。
字段[对 Windows 不起作用](/zh-cn/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext)。
因此,目前没有对应的标准 Pod 安全性框架。
<!--
<!--
If you apply the restricted profile for a Windows pod, this **may** have an impact on the pod
at runtime. The restricted profile requires enforcing Linux-specific restrictions (such as seccomp
profile, and disallowing privilege escalation). If the kubelet and / or its container runtime ignore
@ -620,7 +630,9 @@ Restricted 策略需要强制执行 Linux 特有的限制(如 seccomp Profile
然而,对于使用 Windows 容器的 Pod 来说,缺乏强制执行意味着相比于 Restricted 策略,没有任何额外的限制。
<!--
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy. Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies, so any HostProcess pod should be considered privileged.
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy.
Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies,
so any HostProcess pod should be considered privileged.
-->
你应该只在 Privileged 策略下使用 HostProcess 标志来创建 HostProcess Pod。
在 Baseline 和 Restricted 策略下,创建 Windows HostProcess Pod 是被禁止的,
@ -645,11 +657,11 @@ restrict privileged permissions is lessened when the workload is isolated from t
kernel. This allows for workloads requiring heightened permissions to still be isolated.
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single recommended policy is recommended for all sandboxed workloads.
sandboxing. As such, no single recommended profile is recommended for all sandboxed workloads.
-->
沙箱化负载所需要的保护可能彼此各不相同。例如,当负载与下层内核直接隔离开来时,
限制特权化操作的许可就不那么重要。这使得那些需要更多许可权限的负载仍能被有效隔离。
此外,沙箱化负载的保护高度依赖于沙箱化的实现方法。
因此,现在还没有针对所有沙箱化负载的建议策略
因此,现在还没有针对所有沙箱化负载的建议配置

View File

@ -31,7 +31,7 @@ weight: 20
<!--
This document describes _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
-->
本文描述 Kubernetes 中的 _持久卷Persistent Volume_
本文描述 Kubernetes 中的**持久卷(Persistent Volume)**。
建议先熟悉[卷(Volume)](/zh-cn/docs/concepts/storage/volumes/)的概念。
<!-- body -->
@ -43,24 +43,25 @@ Managing storage is a distinct problem from managing compute instances. The Pers
-->
## 介绍 {#introduction}
存储的管理是一个与计算实例的管理完全不同的问题。PersistentVolume 子系统为用户
和管理员提供了一组 API将存储如何供应的细节从其如何被使用中抽象出来。
存储的管理是一个与计算实例的管理完全不同的问题。
PersistentVolume 子系统为用户和管理员提供了一组 API
将存储如何制备的细节从其如何被使用中抽象出来。
为了实现这点,我们引入了两个新的 API 资源PersistentVolume 和
PersistentVolumeClaim。
<!--
A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
-->
持久卷PersistentVolumePV是集群中的一块存储可以由管理员事先供应,或者
使用[存储类Storage Class](/zh-cn/docs/concepts/storage/storage-classes/)来动态供应
持久卷是集群资源就像节点也是集群资源一样。PV 持久卷和普通的 Volume 一样,也是使用
卷插件来实现的,只是它们拥有独立于任何使用 PV 的 Pod 的生命周期。
**持久卷(PersistentVolume,PV)** 是集群中的一块存储,可以由管理员事先制备,
或者使用[存储类(Storage Class)](/zh-cn/docs/concepts/storage/storage-classes/)来动态制备。
持久卷是集群资源就像节点也是集群资源一样。PV 持久卷和普通的 Volume 一样,
也是使用卷插件来实现的,只是它们拥有独立于任何使用 PV 的 Pod 的生命周期。
此 API 对象中记述了存储的实现细节,无论其背后是 NFS、iSCSI 还是特定于云平台的存储系统。
<!--
A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)).
-->
持久卷申领PersistentVolumeClaimPVC表达的是用户对存储的请求。概念上与 Pod 类似。
**持久卷申领(PersistentVolumeClaim,PVC)** 表达的是用户对存储的请求。概念上与 Pod 类似。
Pod 会耗用节点资源,而 PVC 申领会耗用 PV 资源。Pod 可以请求特定数量的资源(CPU
和内存);同样 PVC 申领也可以请求特定的大小和访问模式
(例如,可以要求 PV 卷能够以 ReadWriteOnce、ReadOnlyMany 或 ReadWriteMany
@ -71,11 +72,11 @@ While PersistentVolumeClaims allow a user to consume abstract storage resources,
See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
-->
尽管 PersistentVolumeClaim 允许用户消耗抽象的存储资源,常见的情况是针对不同的
问题用户需要的是具有不同属性(如,性能)的 PersistentVolume 卷。
集群管理员需要能够提供不同性质的 PersistentVolume并且这些 PV 卷之间的差别不
仅限于卷大小和访问模式,同时又不能将卷是如何实现的这些细节暴露给用户。
为了满足这类需求,就有了 _存储类StorageClass_ 资源。
尽管 PersistentVolumeClaim 允许用户消耗抽象的存储资源,
常见的情况是针对不同的问题用户需要的是具有不同属性(如,性能)的 PersistentVolume 卷。
集群管理员需要能够提供不同性质的 PersistentVolume
并且这些 PV 卷之间的差别不仅限于卷大小和访问模式,同时又不能将卷是如何实现的这些细节暴露给用户。
为了满足这类需求,就有了 **存储类(StorageClass)** 资源。
参见[基于运行示例的详细演练](/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)。
@ -93,19 +94,19 @@ There are two ways PVs may be provisioned: statically or dynamically.
PV 卷是集群中的资源。PVC 申领是对这些资源的请求,也被用来执行对资源的申领检查。
PV 卷和 PVC 申领之间的互动遵循如下生命周期:
### 供应 {#provisioning}
### 制备 {#provisioning}
PV 卷的供应有两种方式:静态供应或动态供应
PV 卷的制备有两种方式:静态制备或动态制备
<!--
#### Static
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
-->
#### 静态供应 {#static}
#### 静态制备 {#static}
集群管理员创建若干 PV 卷。这些卷对象带有真实存储的细节信息,并且对集群
用户可用可见。PV 卷对象存在于 Kubernetes API 中,可供用户消费(使用)。
集群管理员创建若干 PV 卷。这些卷对象带有真实存储的细节信息,
并且对集群用户可用可见。PV 卷对象存在于 Kubernetes API 中,可供用户消费(使用)。
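
例如,管理员可以静态制备一个如下的 PV 卷(名称、容量与 NFS 服务器地址均为示意性假设):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001                     # 示例名称(假设)
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                             # 以 NFS 为例(服务器地址与路径为假设)
    server: 172.17.0.2
    path: /exports/data
```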
<!--
#### Dynamic
@ -118,28 +119,28 @@ the administrator must have created and configured that class for dynamic
provisioning to occur. Claims that request the class `""` effectively disable
dynamic provisioning for themselves.
-->
#### 动态供应 {#dynamic}
#### 动态制备 {#dynamic}
如果管理员所创建的所有静态 PV 卷都无法与用户的 PersistentVolumeClaim 匹配,
集群可以尝试为该 PVC 申领动态供应一个存储卷。
这一供应操作是基于 StorageClass 来实现的PVC 申领必须请求某个
[存储类](/zh-cn/docs/concepts/storage/storage-classes/)同时集群管理员必须
已经创建并配置了该类,这样动态供应卷的动作才会发生。
如果 PVC 申领指定存储类为 `""`,则相当于为自身禁止使用动态供应的卷。
集群可以尝试为该 PVC 申领动态制备一个存储卷。
这一制备操作是基于 StorageClass 来实现的PVC 申领必须请求某个
[存储类](/zh-cn/docs/concepts/storage/storage-classes/)
同时集群管理员必须已经创建并配置了该类,这样动态制备卷的动作才会发生。
如果 PVC 申领指定存储类为 `""`,则相当于为自身禁止使用动态制备的卷。
<!--
To enable dynamic storage provisioning based on storage class, the cluster administrator
needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
among the comma-delimited, ordered list of values for the `-enable-admission-plugins` flag of
among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
the API server component. For more information on API server command-line flags,
check [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
-->
为了基于存储类完成动态的存储供应,集群管理员需要在 API 服务器上启用
为了基于存储类完成动态的存储制备,集群管理员需要在 API 服务器上启用
`DefaultStorageClass` [准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)。
举例而言,可以通过保证 `DefaultStorageClass` 出现在 API 服务器组件的
`--enable-admission-plugins` 标志值中实现这点;该标志的值可以是逗号
分隔的有序列表。关于 API 服务器标志的更多信息,可以参考
`--enable-admission-plugins` 标志值中实现这点;该标志的值可以是逗号分隔的有序列表。
关于 API 服务器标志的更多信息,可以参考
[kube-apiserver](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)
文档。
@ -147,30 +148,28 @@ check [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
### Binding
A user creates, or in the case of dynamic provisioning, has already created, a PersistentVolumeClaim with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
-->
### 绑定 {#binding}
用户创建一个带有特定存储容量和特定访问模式需求的 PersistentVolumeClaim 对象;
在动态供应场景下,这个 PVC 对象可能已经创建完毕。
在动态制备场景下,这个 PVC 对象可能已经创建完毕。
主控节点中的控制回路监测新的 PVC 对象,寻找与之匹配的 PV 卷(如果可能的话),
并将二者绑定到一起。
如果为了新的 PVC 申领动态供应了 PV 卷,则控制回路总是将该 PV 卷绑定到这一 PVC 申领。
如果为了新的 PVC 申领动态制备了 PV 卷,则控制回路总是将该 PV 卷绑定到这一 PVC 申领。
否则,用户总是能够获得他们所请求的资源,只是所获得的 PV 卷可能会超出所请求的配置。
一旦绑定关系建立,则 PersistentVolumeClaim 绑定就是排他性的,无论该 PVC 申领是
如何与 PV 卷建立的绑定关系。
PVC 申领与 PV 卷之间的绑定是一种一对一的映射,实现上使用 ClaimRef 来记述 PV 卷
与 PVC 申领间的双向绑定关系。
一旦绑定关系建立,则 PersistentVolumeClaim 绑定就是排他性的,
无论该 PVC 申领是如何与 PV 卷建立的绑定关系。
PVC 申领与 PV 卷之间的绑定是一种一对一的映射,实现上使用 ClaimRef 来记述
PV 卷与 PVC 申领间的双向绑定关系。
<!--
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
-->
如果找不到匹配的 PV 卷,PVC 申领会无限期地处于未绑定状态。
当与之匹配的 PV 卷可用时,PVC 申领会被绑定。
例如,即使某集群上供应了很多 50 Gi 大小的 PV 卷,也无法与请求
100 Gi 大小的存储的 PVC 匹配。当新的 100 Gi PV 卷被加入到集群时,
PVC 才有可能被绑定。
例如,即使某集群上制备了很多 50 Gi 大小的 PV 卷,也无法与请求
100 Gi 大小的存储的 PVC 匹配。当新的 100 Gi PV 卷被加入到集群时,
PVC 才有可能被绑定。
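
与之配套,用户可以创建类似下面的 PVC 申领(名称与数值为示意性假设),等待与满足条件的 PV 卷绑定:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                    # 示例名称(假设)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # 请求的存储大小(假设)
```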
<!--
### Using
@ -179,15 +178,16 @@ Pods use claims as volumes. The cluster inspects the claim to find the bound vol
-->
### 使用 {#using}
Pod 将 PVC 申领当做存储卷来使用。集群会检视 PVC 申领,找到所绑定的卷,
为 Pod 挂载该卷。对于支持多种访问模式的卷,用户要在 Pod 中以卷的形式使用申领
时指定期望的访问模式。
Pod 将 PVC 申领当做存储卷来使用。集群会检视 PVC 申领,找到所绑定的卷,
为 Pod 挂载该卷。对于支持多种访问模式的卷,
用户要在 Pod 中以卷的形式使用申领时指定期望的访问模式。
<!--
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` section in a Pod's `volumes` block. See [Claims As Volumes](#claims-as-volumes) for more details on this.
-->
一旦用户有了申领对象并且该申领已经被绑定,则所绑定的 PV 卷在用户仍然需要它期间
一直属于该用户。用户通过在 Pod 的 `volumes` 块中包含 `persistentVolumeClaim`
一旦用户有了申领对象并且该申领已经被绑定,
则所绑定的 PV 卷在用户仍然需要它期间一直属于该用户。
用户通过在 Pod 的 `volumes` 块中包含 `persistentVolumeClaim`
节区来调度 Pod访问所申领的 PV 卷。
相关细节可参阅[使用申领作为卷](#claims-as-volumes)。
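下面是一个最简的示意 Pod 清单(其中 `myclaim` 为假设的、已绑定的 PVC 名称,镜像与挂载路径也仅作演示),
展示如何在 `volumes` 块中通过 `persistentVolumeClaim` 使用申领:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-using-pvc            # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: nginx                 # 任意镜像,仅作示例
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim         # 假设的 PVC 名称,需已存在并完成绑定
```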
@ -198,9 +198,9 @@ The purpose of the Storage Object in Use Protection feature is to ensure that Pe
-->
### 保护使用中的存储对象 {#storage-object-in-use-protection}
保护使用中的存储对象Storage Object in Use Protection这一功能特性的目的
是确保仍被 Pod 使用的 PersistentVolumeClaimPVC对象及其所绑定的
PersistentVolumePV对象在系统中不会被删除因为这样做可能会引起数据丢失。
保护使用中的存储对象Storage Object in Use Protection
这一功能特性的目的是确保仍被 Pod 使用的 PersistentVolumeClaimPVC
对象及其所绑定的 PersistentVolumePV对象在系统中不会被删除因为这样做可能会引起数据丢失。
<!--
PVC is in active use by a Pod when a Pod object exists that is using the PVC.
@ -271,11 +271,11 @@ Events: <none>
When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.
-->
### 回收 {#reclaiming}
### 回收Reclaiming {#reclaiming}
当用户不再使用其存储卷时,他们可以从 API 中将 PVC 对象删除,从而允许
该资源被回收再利用。PersistentVolume 对象的回收策略告诉集群,当其被
从申领中释放时如何处理该数据卷。
当用户不再使用其存储卷时,他们可以从 API 中将 PVC 对象删除,
从而允许该资源被回收再利用。PersistentVolume 对象的回收策略告诉集群,
当其被从申领中释放时如何处理该数据卷。
目前,数据卷可以被 Retained保留、Recycled回收或 Deleted删除
<!--
@ -285,8 +285,8 @@ The `Retain` reclaim policy allows for manual reclamation of the resource. When
-->
#### 保留Retain {#retain}
回收策略 `Retain` 使得用户可以手动回收资源。当 PersistentVolumeClaim 对象
被删除时PersistentVolume 卷仍然存在,对应的数据卷被视为"已释放released"。
回收策略 `Retain` 使得用户可以手动回收资源。当 PersistentVolumeClaim
对象被删除时PersistentVolume 卷仍然存在,对应的数据卷被视为"已释放released"。
由于卷上仍然存在着前一申领人的数据,该卷还不能用于其他申领。
管理员可以通过下面的步骤来手动回收该卷:
@ -314,10 +314,10 @@ For volume plugins that support the `Delete` reclaim policy, deletion removes bo
对于支持 `Delete` 回收策略的卷插件,删除动作会将 PersistentVolume 对象从
Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS、GCE PD、Azure Disk 或
Cinder 卷)中移除所关联的存储资产。
动态供应的卷会继承[其 StorageClass 中设置的回收策略](#reclaim-policy)该策略默认
`Delete`
管理员需要根据用户的期望来配置 StorageClass否则 PV 卷被创建之后必须要被
编辑或者修补。参阅[更改 PV 卷的回收策略](/zh-cn/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
动态制备的卷会继承[其 StorageClass 中设置的回收策略](#reclaim-policy)
该策略默认为 `Delete`。管理员需要根据用户的期望来配置 StorageClass
否则 PV 卷被创建之后必须要被编辑或者修补。
参阅[更改 PV 卷的回收策略](/zh-cn/docs/tasks/administer-cluster/change-pv-reclaim-policy/)。
<!--
#### Recycle
@ -329,11 +329,11 @@ If supported by the underlying volume plugin, the `Recycle` reclaim policy perfo
#### 回收Recycle {#recycle}
{{< warning >}}
回收策略 `Recycle` 已被废弃。取而代之的建议方案是使用动态供应
回收策略 `Recycle` 已被废弃。取而代之的建议方案是使用动态制备
{{< /warning >}}
如果下层的卷插件支持,回收策略 `Recycle` 会在卷上执行一些基本的
擦除`rm -rf /thevolume/*`)操作,之后允许该卷用于新的 PVC 申领。
如果下层的卷插件支持,回收策略 `Recycle` 会在卷上执行一些基本的擦除
`rm -rf /thevolume/*`)操作,之后允许该卷用于新的 PVC 申领。
<!--
However, an administrator can configure a custom recycler Pod template using
@ -371,8 +371,7 @@ spec:
<!--
However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled.
-->
定制回收器 Pod 模板中在 `volumes` 部分所指定的特定路径要替换为
正被回收的卷的路径。
定制回收器 Pod 模板中在 `volumes` 部分所指定的特定路径要替换为正被回收的卷的路径。
<!--
### PersistentVolume deletion protection finalizer
@ -383,8 +382,8 @@ having `Delete` reclaim policy are deleted only after the backing storage are de
### PersistentVolume 删除保护 finalizer {#persistentvolume-deletion-protection-finalizer}
{{< feature-state for_k8s_version="v1.23" state="alpha" >}}
可以在 PersistentVolume 上添加终结器Finalizers以确保只有在删除对应的存储后才删除具有
`Delete` 回收策略的 PersistentVolume。
可以在 PersistentVolume 上添加终结器Finalizer
以确保只有在删除对应的存储后才删除具有 `Delete` 回收策略的 PersistentVolume。
<!--
The newly introduced finalizers `kubernetes.io/pv-controller` and `external-provisioner.volume.kubernetes.io/finalizer`
@ -483,9 +482,8 @@ The binding happens regardless of some volume matching criteria, including node
The control plane still checks that [storage class](/docs/concepts/storage/storage-classes/), access modes, and requested storage size are valid.
-->
绑定操作不会考虑某些卷匹配条件是否满足,包括节点亲和性等等。
控制面仍然会检查
[存储类](/zh-cn/docs/concepts/storage/storage-classes/)、访问模式和所请求的
存储尺寸都是合法的。
控制面仍然会检查[存储类](/zh-cn/docs/concepts/storage/storage-classes/)、
访问模式和所请求的存储尺寸都是合法的。
```yaml
apiVersion: v1
@ -503,9 +501,9 @@ spec:
This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the `claimRef` field of the PV so that other PVCs can not bind to it.
-->
此方法无法对 PersistentVolume 的绑定特权做出任何形式的保证。
如果有其他 PersistentVolumeClaim 可以使用你所指定的 PV则你应该首先预留
该存储卷。你可以将 PV 的 `claimRef` 字段设置为相关的 PersistentVolumeClaim
以确保其他 PVC 不会绑定到该 PV 卷。
如果有其他 PersistentVolumeClaim 可以使用你所指定的 PV
则你应该首先预留该存储卷。你可以将 PV 的 `claimRef` 字段设置为相关的
PersistentVolumeClaim 以确保其他 PVC 不会绑定到该 PV 卷。
```yaml
apiVersion: v1
@ -524,8 +522,8 @@ spec:
This is useful if you want to consume PersistentVolumes that have their `claimPolicy` set
to `Retain`, including cases where you are reusing an existing PV.
-->
如果你想要使用 `claimPolicy` 属性设置为 `Retain` 的 PersistentVolume 卷
时,包括你希望复用现有的 PV 卷时,这点是很有用的
如果你想要使用 `claimPolicy` 属性设置为 `Retain` 的 PersistentVolume 卷时,
包括你希望复用现有的 PV 卷时,这点是很有用的
<!--
### Expanding Persistent Volumes Claims
@ -594,8 +592,8 @@ increased and that no resize is necessary.
如果对 PersistentVolume 的容量进行编辑,然后又将其所对应的
PersistentVolumeClaim 的 `.spec` 进行编辑,使该 PersistentVolumeClaim
的大小匹配 PersistentVolume 的话,则不会发生存储大小的调整。
Kubernetes 控制平面将看到两个资源的所需状态匹配,并认为其后备卷的大小
已被手动增加,无需调整。
Kubernetes 控制平面将看到两个资源的所需状态匹配,
并认为其后备卷的大小已被手动增加,无需调整。
{{< /warning >}}
<!--
@ -608,8 +606,8 @@ Kubernetes 控制平面将看到两个资源的所需状态匹配,并认为其
<!--
Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information.
-->
对 CSI 卷的扩充能力默认是被启用的,不过扩充 CSI 卷要求 CSI 驱动支持
卷扩充操作。可参阅特定 CSI 驱动的文档了解更多信息。
对 CSI 卷的扩充能力默认是被启用的,不过扩充 CSI 卷要求 CSI
驱动支持卷扩充操作。可参阅特定 CSI 驱动的文档了解更多信息。
<!--
#### Resizing a volume containing a file system
@ -628,13 +626,12 @@ or when a Pod is running and the underlying file system supports online expansio
FlexVolumes (deprecated since Kubernetes v1.23) allow resize if the driver is configured with the
`RequiresFSResize` capability to `true`. The FlexVolume can be resized on Pod restart.
-->
当卷中包含文件系统时,只有在 Pod 使用 `ReadWrite` 模式来使用 PVC 申领的
情况下才能重设其文件系统的大小。
文件系统扩充的操作或者是在 Pod 启动期间完成,或者在下层文件系统支持在线
扩充的前提下在 Pod 运行期间完成。
当卷中包含文件系统时,只有在 Pod 使用 `ReadWrite` 模式来使用 PVC
申领的情况下才能重设其文件系统的大小。文件系统扩充的操作或者是在 Pod
启动期间完成,或者在下层文件系统支持在线扩充的前提下在 Pod 运行期间完成。
如果 FlexVolumes 的驱动将 `RequiresFSResize` 能力设置为 `true`则该
FlexVolume 卷(于 Kubernetes v1.23 弃用)可以在 Pod 重启期间调整大小。
如果 FlexVolumes 的驱动将 `RequiresFSResize` 能力设置为 `true`
则该 FlexVolume 卷(于 Kubernetes v1.23 弃用)可以在 Pod 重启期间调整大小。
<!--
#### Resizing an in-use PersistentVolumeClaim
@ -691,9 +688,9 @@ If a user specifies a new size that is too big to be satisfied by underlying sto
<!--
If expanding underlying storage fails, the cluster administrator can manually recover the Persistent Volume Claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention.
-->
如果扩充下层存储的操作失败,集群管理员可以手动地恢复 PVC 申领的状态并
取消重设大小的请求。否则,在没有管理员干预的情况下,控制器会反复重试
重设大小的操作。
如果扩充下层存储的操作失败,集群管理员可以手动地恢复 PVC
申领的状态并取消重设大小的请求。否则,在没有管理员干预的情况下,
控制器会反复重试重设大小的操作。
<!--
1. Mark the PersistentVolume(PV) that is bound to the PersistentVolumeClaim(PVC) with `Retain` reclaim policy.
@ -702,7 +699,7 @@ If expanding underlying storage fails, the cluster administrator can manually re
4. Re-create the PVC with smaller size than PV and set `volumeName` field of the PVC to the name of the PV. This should bind new PVC to existing PV.
5. Don't forget to restore the reclaim policy of the PV.
-->
1. 将绑定到 PVC 申领的 PV 卷标记为 `Retain` 回收策略
1. 将绑定到 PVC 申领的 PV 卷标记为 `Retain` 回收策略
2. 删除 PVC 对象。由于 PV 的回收策略为 `Retain`,我们不会在重建 PVC 时丢失数据。
3. 删除 PV 规约中的 `claimRef` 项,这样新的 PVC 可以绑定到该卷。
这一操作会使得 PV 卷变为 "可用Available"。
@ -736,12 +733,12 @@ size that is within the capacity limits of underlying storage provider. You can
-->
如果集群中的特性门控 `RecoverVolumeExpansionFailure`
已启用,在 PVC 的扩展发生失败时,你可以使用比先前请求的值更小的尺寸来重试扩展。
要使用一个更小的尺寸尝试请求新的扩展,请编辑该 PVC 的 `.spec.resources` 并选择
一个比你之前所尝试的值更小的值。
要使用一个更小的尺寸尝试请求新的扩展,请编辑该 PVC 的 `.spec.resources`
并选择一个比你之前所尝试的值更小的值。
如果由于容量限制而无法成功扩展至更高的值,这将很有用。
如果发生了这种情况,或者你怀疑可能发生了这种情况,你可以通过指定一个在底层存储供应容量
限制内的尺寸来重试扩展。你可以通过查看 `.status.resizeStatus` 以及 PVC 上的事件
来监控调整大小操作的状态。
如果发生了这种情况,或者你怀疑可能发生了这种情况,
你可以通过指定一个在底层存储制备容量限制内的尺寸来重试扩展。
你可以通过查看 `.status.resizeStatus` 以及 PVC 上的事件来监控调整大小操作的状态。
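作为示意,假设名为 `example-pvc` 的申领原始大小为 10Gi之前请求扩展到 100Gi 失败;
可以把 `.spec.resources.requests.storage` 改为一个更小、但仍大于原始大小的值并重新应用
(下面的名称、存储类与数值均为假设):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc              # 假设的 PVC 名称
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard     # 假设的存储类
  resources:
    requests:
      # 之前请求 100Gi 失败;这里改为请求 20Gi
      # 以便在底层存储的容量限制内重试扩展
      storage: 20Gi
```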
<!--
Note that,
@ -796,8 +793,7 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘
* [`glusterfs`](/zh-cn/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷
* [`hostPath`](/zh-cn/docs/concepts/storage/volumes/#hostpath) - HostPath 卷
(仅供单节点测试使用;不适用于多节点集群;
请尝试使用 `local` 卷作为替代)
(仅供单节点测试使用;不适用于多节点集群;请尝试使用 `local` 卷作为替代)
* [`iscsi`](/zh-cn/docs/concepts/storage/volumes/#iscsi) - iSCSI (SCSI over IP) 存储
* [`local`](/zh-cn/docs/concepts/storage/volumes/#local) - 节点上挂载的本地存储设备
* [`nfs`](/zh-cn/docs/concepts/storage/volumes/#nfs) - 网络文件系统 (NFS) 存储
@ -889,7 +885,7 @@ Helper programs relating to the volume type may be required for consumption of a
<!--
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. Read the glossary term [Quantity](/docs/reference/glossary/?all=true#term-quantity) to understand the units expected by `capacity`.
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. Read the glossary term [Quantity](/docs/reference/glossary/?all=true#term-quantity) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
-->
@ -897,8 +893,7 @@ Currently, storage size is the only resource that can be set or requested. Futu
一般而言,每个 PV 卷都有确定的存储容量。
容量属性是使用 PV 对象的 `capacity` 属性来设置的。
参考词汇表中的
[量纲Quantity](/zh-cn/docs/reference/glossary/?all=true#term-quantity)
参考词汇表中的[量纲Quantity](/zh-cn/docs/reference/glossary/?all=true#term-quantity)
词条,了解 `capacity` 字段可以接受的单位。
目前,存储大小是可以设置和请求的唯一资源。
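例如,下面这个示意性的 PV 片段使用量纲单位 `Gi` 来声明容量(名称与 `hostPath` 路径均为假设,
仅用于演示 `capacity` 字段的写法):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-capacity-demo         # 假设的 PV 名称
spec:
  capacity:
    storage: 5Gi                 # 量纲Quantity写法也可以使用 500Mi、1Ti 等单位
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/demo              # 仅供单节点测试使用的假设路径
```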
@ -954,11 +949,9 @@ A PersistentVolume can be mounted on a host in any way supported by the resource
### 访问模式 {#access-modes}
PersistentVolume 卷可以用资源提供者所支持的任何方式挂载到宿主系统上。
如下表所示,提供者(驱动)的能力不同,每个 PV 卷的访问模式都会设置为
对应卷所支持的模式值。
例如NFS 可以支持多个读写客户,但是某个特定的 NFS PV 卷可能在服务器
上以只读的方式导出。每个 PV 卷都会获得自身的访问模式集合,描述的是
特定 PV 卷的能力。
如下表所示,提供者(驱动)的能力不同,每个 PV 卷的访问模式都会设置为对应卷所支持的模式值。
例如NFS 可以支持多个读写客户,但是某个特定的 NFS PV 卷可能在服务器上以只读的方式导出。
每个 PV 卷都会获得自身的访问模式集合,描述的是特定 PV 卷的能力。
<!--
The access modes are:
@ -992,7 +985,7 @@ ReadWriteOnce 访问模式也允许运行在同一节点上的多个 Pod 访问
`ReadWriteOncePod`
: 卷可以被单个 Pod 以读写方式挂载。
如果你想确保整个集群中只有一个 Pod 可以读取或写入该 PVC
请使用ReadWriteOncePod 访问模式。这只支持 CSI 卷以及需要 Kubernetes 1.22 以上版本。
请使用 ReadWriteOncePod 访问模式。这只支持 CSI 卷以及需要 Kubernetes 1.22 以上版本。
这篇博客文章 [Introducing Single Pod Access Mode for PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/)
描述了更详细的内容。
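下面是一个示意性的 PVC 清单(名称与存储类均为假设),演示如何请求 ReadWriteOncePod 访问模式;
注意这需要支持该特性的 CSI 驱动:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-pvc        # 假设的名称
spec:
  accessModes:
  - ReadWriteOncePod             # 整个集群中只允许一个 Pod 读写此卷
  storageClassName: my-csi-sc    # 假设的 CSI 存储类
  resources:
    requests:
      storage: 1Gi
```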
@ -1032,10 +1025,9 @@ Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVo
<!--
> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
-->
> __重要提醒__ 每个卷同一时刻只能以一种访问模式挂载,即使该卷能够支持
> 多种访问模式。例如,一个 GCEPersistentDisk 卷可以被某节点以 ReadWriteOnce
> 模式挂载,或者被多个节点以 ReadOnlyMany 模式挂载,但不可以同时以两种模式
> 挂载。
> **重要提醒!** 每个卷同一时刻只能以一种访问模式挂载,即使该卷能够支持多种访问模式。
> 例如,一个 GCEPersistentDisk 卷可以被某节点以 ReadWriteOnce
> 模式挂载,或者被多个节点以 ReadOnlyMany 模式挂载,但不可以同时以两种模式挂载。
<!--
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany|
@ -1078,8 +1070,7 @@ to PVCs that request no particular class.
每个 PV 可以属于某个类Class通过将其 `storageClassName` 属性设置为某个
[StorageClass](/zh-cn/docs/concepts/storage/storage-classes/) 的名称来指定。
特定类的 PV 卷只能绑定到请求该类存储卷的 PVC 申领。
未设置 `storageClassName` 的 PV 卷没有类设定,只能绑定到那些没有指定特定
存储类的 PVC 申领。
未设置 `storageClassName` 的 PV 卷没有类设定,只能绑定到那些没有指定特定存储类的 PVC 申领。
<!--
In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
@ -1167,6 +1158,17 @@ it will become fully deprecated in a future Kubernetes release.
-->
### 节点亲和性 {#node-affinity}
<!--
For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
-->
{{< note >}}
对大多数类型的卷而言,你不需要设置节点亲和性字段。
[AWS EBS](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore)、
[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 和
[Azure Disk](/zh-cn/docs/concepts/storage/volumes/#azuredisk) 卷类型都能自动设置相关字段。
你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置此属性。
{{< /note >}}
<!--
A PV can specify node affinity to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity. To specify node affinity, set `nodeAffinity` in the `.spec` of a PV. The [PersistentVolume](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) API reference has more details on this field.
-->
@ -1176,19 +1178,6 @@ A PV can specify node affinity to define constraints that limit what nodes this
[持久卷](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec)
API 参考关于该字段的更多细节。
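下面是一个示意性的 `local` 卷 PV 清单(名称、路径、存储类与节点名均为假设),
展示如何通过 `nodeAffinity` 将卷限制在特定节点上:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv             # 假设的名称
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage    # 假设的存储类
  local:
    path: /mnt/disks/ssd1            # 假设的节点本地路径
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                   # 假设的节点名称
```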
<!--
For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
-->
{{< note >}}
对大多数类型的卷而言,你不需要设置节点亲和性字段。
[AWS EBS](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore)、
[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 和
[Azure Disk](/zh-cn/docs/concepts/storage/volumes/#azuredisk) 卷类型都能
自动设置相关字段。
你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置
此属性。
{{< /note >}}
<!--
### Phase
@ -1223,7 +1212,6 @@ The name of a PersistentVolumeClaim object must be a valid
PersistentVolumeClaim 对象的名称必须是合法的
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
@ -1265,13 +1253,13 @@ Claims use [the same convention as volumes](#volume-mode) to indicate the consum
<!--
### Resources
Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims.
Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/design-proposals-archive/scheduling/resources.md) applies to both volumes and claims.
-->
### 资源 {#resources}
申领和 Pod 一样,也可以请求特定数量的资源。在这个上下文中,请求的资源是存储。
卷和申领都使用相同的
[资源模型](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)。
[资源模型](https://git.k8s.io/design-proposals-archive/scheduling/resources.md)。
<!--
### Selector
@ -1323,10 +1311,10 @@ by the cluster, depending on whether the
is turned on.
-->
PVC 申领不必一定要请求某个类。如果 PVC 的 `storageClassName` 属性值设置为 `""`
则被视为要请求的是没有设置存储类的 PV 卷,因此这一 PVC 申领只能绑定到未设置
存储类的 PV 卷(未设置注解或者注解值为 `""` 的 PersistentVolumePV对象在系统中不会被删除因为这样做可能会引起数据丢失。
未设置 `storageClassName` 的 PVC 与此大不相同,也会被集群作不同处理。
具体筛查方式取决于
则被视为要请求的是没有设置存储类的 PV 卷,因此这一 PVC 申领只能绑定到未设置存储类的
PV 卷(未设置注解或者注解值为 `""` 的 PersistentVolumePV对象
未设置 `storageClassName` 的 PVC 与此大不相同,
也会被集群作不同处理。具体筛查方式取决于
[`DefaultStorageClass` 准入控制器插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
是否被启用。
@ -1348,11 +1336,11 @@ PVC 申领不必一定要请求某个类。如果 PVC 的 `storageClassName` 属
所有未设置 `storageClassName` 的 PVC 都只能绑定到隶属于默认存储类的 PV 卷。
设置默认 StorageClass 的工作是通过将对应 StorageClass 对象的注解
`storageclass.kubernetes.io/is-default-class` 赋值为 `true` 来完成的。
如果管理员未设置默认存储类,集群对 PVC 创建的处理方式与未启用准入控制器插件
时相同。如果设定的默认存储类不止一个,准入控制插件会禁止所有创建 PVC 操作。
如果管理员未设置默认存储类,集群对 PVC 创建的处理方式与未启用准入控制器插件时相同。
如果设定的默认存储类不止一个,准入控制插件会禁止所有创建 PVC 操作。
* 如果准入控制器插件被关闭,则不存在默认 StorageClass 的说法。
所有未设置 `storageClassName` 的 PVC 都只能绑定到未设置存储类的 PV 卷。
在这种情况下,未设置 `storageClassName` 的 PVC 与 `storageClassName` 设置
在这种情况下,未设置 `storageClassName` 的 PVC 与 `storageClassName` 设置为
`""` 的 PVC 的处理方式相同。
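上面提到,默认 StorageClass 是通过 `storageclass.kubernetes.io/is-default-class`
注解来标记的;下面是一个示意性的清单(名称与制备器均为假设):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                                  # 假设的名称
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # 将其标记为默认存储类
provisioner: example.com/external-provisioner     # 假设的制备器,取决于实际使用的存储
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```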
<!--
@ -1366,14 +1354,14 @@ the requested labels may be bound to the PVC.
取决于安装方法,默认的 StorageClass 可能在集群安装期间由插件管理器Addon
Manager部署到集群中。
当某 PVC 除了请求 StorageClass 之外还设置了 `selector`,则这两种需求会按
逻辑与关系处理:只有隶属于所请求类且带有所请求标签的 PV 才能绑定到 PVC。
当某 PVC 除了请求 StorageClass 之外还设置了 `selector`,则这两种需求会按逻辑与关系处理:
只有隶属于所请求类且带有所请求标签的 PV 才能绑定到 PVC。
<!--
Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it.
-->
{{< note >}}
目前,设置了非空 `selector` 的 PVC 对象无法让集群为其动态供应 PV 卷。
目前,设置了非空 `selector` 的 PVC 对象无法让集群为其动态制备 PV 卷。
{{< /note >}}
<!--
@ -1448,7 +1436,7 @@ See [an example of `hostPath` typed volume](/docs/tasks/configure-pod-container/
The following volume plugins support raw block volumes, including dynamic provisioning where
applicable:
-->
以下卷插件支持原始块卷,包括其动态供应(如果支持的话)的卷:
以下卷插件支持原始块卷,包括其动态制备(如果支持的话)的卷:
* AWSElasticBlockStore
* AzureDisk
@ -1502,9 +1490,158 @@ spec:
requests:
storage: 10Gi
```
<!--
### Pod specification adding Raw Block Device path in container
-->
### 在容器中添加原始块设备路径的 Pod 规约 {#pod-spec-adding-raw-block-device-path-in-container}
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-with-block-volume
spec:
containers:
- name: fc-container
image: fedora:26
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
volumeDevices:
- name: data
devicePath: /dev/xvda
volumes:
- name: data
persistentVolumeClaim:
claimName: block-pvc
```
<!--
When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.
-->
{{< note >}}
向 Pod 中添加原始块设备时,你要在容器内设置设备路径而不是挂载路径。
{{< /note >}}
<!--
### Binding Block Volumes
If a user requests a raw block volume by indicating this using the `volumeMode` field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec.
Listed is a table of possible combinations the user and admin might specify for requesting a raw block device. The table indicates if the volume will be bound or not given the combinations:
Volume binding matrix for statically provisioned volumes:
-->
### 绑定块卷 {#binding-block-volumes}
如果用户通过 PersistentVolumeClaim 规约的 `volumeMode` 字段来表明对原始块设备的请求,
绑定规则与之前版本中未在规约中考虑此模式的实现略有不同。
下面列举的表格是用户和管理员可以为请求原始块设备所作设置的组合。
此表格表明在不同的组合下卷是否会被绑定。
静态制备卷的卷绑定矩阵:
<!--
| PV volumeMode | PVC volumeMode | Result |
| --------------|:---------------:| ----------------:|
| unspecified | unspecified | BIND |
| unspecified | Block | NO BIND |
| unspecified | Filesystem | BIND |
| Block | unspecified | NO BIND |
| Block | Block | BIND |
| Block | Filesystem | NO BIND |
| Filesystem | Filesystem | BIND |
| Filesystem | Block | NO BIND |
| Filesystem | unspecified | BIND |
-->
| PV volumeMode | PVC volumeMode | Result |
| --------------|:---------------:| ----------------:|
| 未指定 | 未指定 | 绑定 |
| 未指定 | Block | 不绑定 |
| 未指定 | Filesystem | 绑定 |
| Block | 未指定 | 不绑定 |
| Block | Block | 绑定 |
| Block | Filesystem | 不绑定 |
| Filesystem | Filesystem | 绑定 |
| Filesystem | Block | 不绑定 |
| Filesystem | 未指定 | 绑定 |
<!--
Only statically provisioned volumes are supported for alpha release. Administrators should take care to consider these values when working with raw block devices.
-->
{{< note >}}
Alpha 发行版本中仅支持静态制备的卷。
管理员在使用原始块设备时,需要谨慎考虑这些取值。
{{< /note >}}
<!--
## Volume Snapshot and Restore Volume from Snapshot Support
-->
## 对卷快照及从卷快照中恢复卷的支持 {#volume-snapshot-and-restore-volume-from-snapshot-support}
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
<!--
Volume snapshots only support the out-of-tree CSI volume plugins. For details, see [Volume Snapshots](/docs/concepts/storage/volume-snapshots/).
In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md).
-->
卷快照Volume Snapshot仅支持树外 CSI 卷插件。
有关细节可参阅[卷快照](/zh-cn/docs/concepts/storage/volume-snapshots/)文档。
树内卷插件被弃用。你可以查阅[卷插件 FAQ](https://git.k8s.io/community/sig-storage/volume-plugin-faq.md)
了解已弃用的卷插件。
<!--
### Create a PersistentVolumeClaim from a Volume Snapshot {#create-persistent-volume-claim-from-volume-snapshot}
-->
### 基于卷快照创建 PVC 申领 {#create-persistent-volume-claim-from-volume-snapshot}
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: restore-pvc
spec:
storageClassName: csi-hostpath-sc
dataSource:
name: new-snapshot-test
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
<!--
## Volume Cloning
[Volume Cloning](/docs/concepts/storage/volume-pvc-datasource/) only available for CSI volume plugins.
-->
## 卷克隆 {#volume-cloning}
[卷克隆](/zh-cn/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于 CSI 卷插件。
<!--
### Create PersistentVolumeClaim from an existing PVC {#create-persistent-volume-claim-from-an-existing-pvc}
-->
### 基于现有 PVC 创建新的 PVC 申领 {#create-persistent-volume-claim-from-an-existing-pvc}
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cloned-pvc
spec:
storageClassName: my-csi-plugin
dataSource:
name: existing-src-pvc-name
kind: PersistentVolumeClaim
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
<!--
## Volume populators and data sources
Kubernetes supports custom volume populators.
@ -1523,11 +1660,9 @@ gate enabled, use of the `dataSourceRef` is preferred over `dataSource`.
{{< feature-state for_k8s_version="v1.24" state="beta" >}}
{{< note >}}
Kubernetes 支持自定义的卷填充器;要使用自定义的卷填充器,你必须为
Kubernetes 支持自定义的卷填充器。要使用自定义的卷填充器,你必须为
kube-apiserver 和 kube-controller-manager 启用 `AnyVolumeDataSource`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
{{< /note >}}
卷填充器利用了 PVC 规约字段 `dataSourceRef`
不像 `dataSource` 字段只能包含对另一个持久卷申领或卷快照的引用,
@ -1554,6 +1689,7 @@ contents.
<!--
There are two differences between the `dataSourceRef` field and the `dataSource` field that
users should be aware of:
* The `dataSource` field ignores invalid values (as if the field was blank) while the
`dataSourceRef` field never ignores values and will cause an error if an invalid value is
used. Invalid values are any core object (objects with no apiGroup) except for PVCs.
@ -1567,6 +1703,7 @@ backwards compatibility. In particular, a mixture of older and newer controllers
interoperate because the fields are the same.
-->
`dataSourceRef` 字段和 `dataSource` 字段之间有两个用户应该注意的区别:
* `dataSource` 字段会忽略无效的值(如同是空值),
`dataSourceRef` 字段永远不会忽略值,并且若填入一个无效的值,会导致错误。
无效值指的是 PVC 之外的核心对象(没有 apiGroup 的对象)。
@ -1627,161 +1764,6 @@ the process.
如果没有填充器处理该数据源的情况下,该控制器会在 PVC 上产生警告事件。
当一个合适的填充器被安装到 PVC 上时,该控制器的职责是上报与卷创建有关的事件,以及在该过程中发生的问题。
<!--
### Pod specification adding Raw Block Device path in container
-->
### 在容器中添加原始块设备路径的 Pod 规约
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-with-block-volume
spec:
containers:
- name: fc-container
image: fedora:26
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
volumeDevices:
- name: data
devicePath: /dev/xvda
volumes:
- name: data
persistentVolumeClaim:
claimName: block-pvc
```
<!--
When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.
-->
{{< note >}}
向 Pod 中添加原始块设备时,你要在容器内设置设备路径而不是挂载路径。
{{< /note >}}
<!--
### Binding Block Volumes
If a user requests a raw block volume by indicating this using the `volumeMode` field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec.
Listed is a table of possible combinations the user and admin might specify for requesting a raw block device. The table indicates if the volume will be bound or not given the combinations:
Volume binding matrix for statically provisioned volumes:
-->
### 绑定块卷 {#binding-block-volumes}
如果用户通过 PersistentVolumeClaim 规约的 `volumeMode` 字段来表明对原始
块设备的请求,绑定规则与之前版本中未在规约中考虑此模式的实现略有不同。
下面列举的表格是用户和管理员可以为请求原始块设备所作设置的组合。
此表格表明在不同的组合下卷是否会被绑定。
静态供应卷的卷绑定矩阵:
<!--
| PV volumeMode | PVC volumeMode | Result |
| --------------|:---------------:| ----------------:|
| unspecified | unspecified | BIND |
| unspecified | Block | NO BIND |
| unspecified | Filesystem | BIND |
| Block | unspecified | NO BIND |
| Block | Block | BIND |
| Block | Filesystem | NO BIND |
| Filesystem | Filesystem | BIND |
| Filesystem | Block | NO BIND |
| Filesystem | unspecified | BIND |
-->
| PV volumeMode | PVC volumeMode | Result |
| --------------|:---------------:| ----------------:|
| 未指定 | 未指定 | 绑定 |
| 未指定 | Block | 不绑定 |
| 未指定 | Filesystem | 绑定 |
| Block | 未指定 | 不绑定 |
| Block | Block | 绑定 |
| Block | Filesystem | 不绑定 |
| Filesystem | Filesystem | 绑定 |
| Filesystem | Block | 不绑定 |
| Filesystem | 未指定 | 绑定 |
<!--
Only statically provisioned volumes are supported for alpha release. Administrators should take care to consider these values when working with raw block devices.
-->
{{< note >}}
Alpha 发行版本中仅支持静态供应的卷。
管理员需要在处理原始块设备时小心处理这些值。
{{< /note >}}
<!--
## Volume Snapshot and Restore Volume from Snapshot Support
-->
## 对卷快照及从卷快照中恢复卷的支持
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
<!--
Volume snapshot feature was added to support CSI Volume Plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/).
To enable support for restoring a volume from a volume snapshot data source, enable the
`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager.
-->
卷快照Volume Snapshot特性的添加仅是为了支持 CSI 卷插件。
有关细节可参阅[卷快照](/zh-cn/docs/concepts/storage/volume-snapshots/)文档。
要启用从卷快照数据源恢复数据卷的支持,可在 API 服务器和控制器管理器上启用
`VolumeSnapshotDataSource` 特性门控。
<!--
### Create a PersistentVolumeClaim from a Volume Snapshot {#create-persistent-volume-claim-from-volume-snapshot}
-->
### 基于卷快照创建 PVC 申领 {#create-persistent-volume-claim-from-volume-snapshot}
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: restore-pvc
spec:
storageClassName: csi-hostpath-sc
dataSource:
name: new-snapshot-test
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
<!--
## Volume Cloning
[Volume Cloning](/docs/concepts/storage/volume-pvc-datasource/) only available for CSI volume plugins.
-->
## 卷克隆 {#volume-cloning}
[卷克隆](/zh-cn/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于
CSI 卷插件。
<!--
### Create PersistentVolumeClaim from an existing PVC {#create-persistent-volume-claim-from-an-existing-pvc}
-->
### 基于现有 PVC 创建新的 PVC 申领 {#create-persistent-volume-claim-from-an-existing-pvc}
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cloned-pvc
spec:
storageClassName: my-csi-plugin
dataSource:
name: existing-src-pvc-name
kind: PersistentVolumeClaim
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
<!--
## Writing Portable Configuration
@ -1802,6 +1784,7 @@ and need persistent storage, it is recommended that you use the following patter
以及 ConfigMap 等放在一起。
- 不要在配置中包含 PersistentVolume 对象,因为对配置进行实例化的用户很可能
没有创建 PersistentVolume 的权限。
<!--
- Give the user the option of providing a storage class name when instantiating
the template.
@ -1819,9 +1802,10 @@ and need persistent storage, it is recommended that you use the following patter
- 仍按用户提供存储类名称,将该名称放到 `persistentVolumeClaim.storageClassName` 字段中。
这样会使得 PVC 在集群被管理员启用了存储类支持时能够匹配到正确的存储类,
- 如果用户未指定存储类名称,将 `persistentVolumeClaim.storageClassName` 留空nil
这样,集群会使用默认 `StorageClass` 为用户自动供应一个存储卷。
这样,集群会使用默认 `StorageClass` 为用户自动制备一个存储卷。
很多集群环境都配置了默认的 `StorageClass`,或者管理员也可以自行创建默认的
`StorageClass`
<!--
- In your tooling, watch for PVCs that are not getting bound after some time
and surface this to the user, as this may indicate that the cluster has no

View File

@ -140,9 +140,9 @@ volume mount will not receive updates for those volume sources.
## 与 SecurityContext 间的关系 {#securitycontext-interactions}
<!--
The [proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the the correct owner permissions set.
The [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.
-->
关于在投射的服务账号卷中处理文件访问权限的[提案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal)
关于在投射的服务账号卷中处理文件访问权限的[提案](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal)
介绍了如何使得所投射的文件具有合适的属主访问权限。
### Linux

View File

@ -540,11 +540,11 @@ It mounts a directory and writes the requested data in plain text files.
这种卷类型挂载一个目录并在纯文本文件中写入所请求的数据。
<!--
A Container using Downward API as a [subPath](#using-subpath) volume mount will not
receive Downward API updates.
A container using the downward API as a [`subPath`](#using-subpath) volume mount does not
receive updates when field values change.
-->
{{< note >}}
容器以 [subPath](#using-subpath) 卷挂载方式使用 downwardAPI 时,将不能接收到它的更新。
容器以 [subPath](#using-subpath) 卷挂载方式使用 downward API 时,在字段值更改时将不能接收到它的更新。
{{< /note >}}
<!--

View File

@ -151,14 +151,14 @@ visit [Configuration](/docs/concepts/configuration/).
<!--
There are two supporting concepts that provide backgrounds about how Kubernetes manages pods
for applications:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
* [Garbage collection](/docs/concepts/architecture/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
-->
关于 Kubernetes 如何为应用管理 Pods还有两个支撑概念能够提供相关背景信息
* [垃圾收集](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/)机制负责在
* [垃圾收集](/zh-cn/docs/concepts/architecture/garbage-collection/)机制负责在
对象的 _属主资源_ 被删除时在集群中清理这些对象。
* [_Time-to-Live_ 控制器](/zh-cn/docs/concepts/workloads/controllers/ttlafterfinished/)
会在 Job 结束之后的指定时间间隔之后删除它们。

View File

@ -42,22 +42,22 @@ ReplicaSet 是通过一组字段来定义的,包括一个用来识别可获得
进而实现其存在价值。当 ReplicaSet 需要创建新的 Pod 时,会使用所提供的 Pod 模板。
<!--
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/architecture/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows of the state of the Pods it is maintaining and plans accordingly.
-->
ReplicaSet 通过 Pod 上的
[metadata.ownerReferences](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
[metadata.ownerReferences](/zh-cn/docs/concepts/architecture/garbage-collection/#owners-and-dependents)
字段连接到附属 Pod该字段给出当前对象的属主资源。
ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 ReplicaSet
的标识信息。正是通过这一连接ReplicaSet 知道它所维护的 Pod 集合的状态,
并据此计划其操作行为。
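下面是一个示意性的片段Pod 名称与 UID 均为虚构),展示被名为 `frontend` 的 ReplicaSet
所获得的 Pod 在 `metadata.ownerReferences` 中记录的属主信息大致是什么样子:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv                            # 虚构的 Pod 名称
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                                # 属主 ReplicaSet 的名称
    uid: d9607e19-f88f-11e6-a518-42010a800195     # 虚构的 UID
    controller: true
    blockOwnerDeletion: true
```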
<!--
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said
ReplicaSet.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no
OwnerReference or the OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it
matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.
-->
ReplicaSet 使用其选择算符来辨识要获得的 Pod 集合。如果某个 Pod 没有
OwnerReference 或者其 OwnerReference 不是一个{{< glossary_tooltip text="控制器" term_id="controller" >}}
@ -408,7 +408,9 @@ matchLabels:
{{< note >}}
<!--
For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
For 2 ReplicaSets specifying the same `.spec.selector` but different
`.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the
Pods created by the other ReplicaSet.
-->
对于设置了相同的 `.spec.selector`,但
`.spec.template.metadata.labels``.spec.template.spec` 字段不同的两个
@ -435,11 +437,13 @@ ReplicaSet 创建、删除 Pod 以与此值匹配。
### Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default.
To delete a ReplicaSet and all of its Pods, use
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The
[Garbage collector](/docs/concepts/architecture/garbage-collection/) automatically deletes all of
the dependent Pods by default.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in
the -d option.
For example:
When using the REST API or the `client-go` library, you must set `propagationPolicy` to
`Background` or `Foreground` in the `-d` option. For example:
-->
## 使用 ReplicaSet {#working-with-replicasets}
@ -447,7 +451,7 @@ For example:
要删除 ReplicaSet 和它的所有 Pod使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh-cn/docs/concepts/workloads/controllers/garbage-collection/)
默认情况下,[垃圾收集器](/zh-cn/docs/concepts/architecture/garbage-collection/)
自动删除所有依赖的 Pod。
当使用 REST API 或 `client-go` 库时,你必须在 `-d` 选项中将 `propagationPolicy`
@ -463,7 +467,9 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
<!--
### Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
You can delete a ReplicaSet without affecting any of its Pods using
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)
with the `--cascade=orphan` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
-->
@ -488,7 +494,8 @@ Once the original is deleted, you can create a new ReplicaSet to replace it. As
as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods.
However, it will not make any effort to make existing Pods match a new, different pod template.
To update Pods to a new spec in a controlled way, use a
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly.
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as
ReplicaSets do not support a rolling update directly.
-->
一旦删除了原来的 ReplicaSet就可以创建一个新的来替换它。
由于新旧 ReplicaSet 的 `.spec.selector` 是相同的,新的 ReplicaSet 将接管老的 Pod。
@ -529,13 +536,13 @@ prioritize scaling down pods based on the following general algorithm:
其一般性算法如下:
<!--
1. Pending (and unschedulable) pods are scaled down first
2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently
comes before the older pod (the creation times are bucketed on an integer log scale
when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
1. Pending (and unschedulable) pods are scaled down first
1. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
1. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
1. If the pods' creation times differ, the pod that was created more recently
comes before the older pod (the creation times are bucketed on an integer log scale
when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
-->
1. 首先选择剔除悬决Pending且不可调度的各个 Pod
2. 如果设置了 `controller.kubernetes.io/pod-deletion-cost` 注解,则注解值较小的优先被裁减掉
@ -677,7 +684,12 @@ ReplicaSetDeployment 拥有并管理其 ReplicaSet。
<!--
### Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as kubelet.
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or
terminated for any reason, such as in the case of node failure or disruptive node maintenance,
such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your
application requires only a single Pod. Think of it similarly to a process supervisor, only it
supervises multiple Pods across multiple nodes instead of individual processes on a single node. A
ReplicaSet delegates local container restarts to some agent on the node such as Kubelet.
-->
### 裸 Pod {#bare-pods}
@ -691,8 +703,8 @@ ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理(
<!--
### Job
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are
expected to terminate on their own (that is, batch jobs).
-->
### Job
@ -718,7 +730,7 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
### ReplicationController
<!--
ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
ReplicaSets are the successors to [ReplicationControllers](/docs/concepts/workloads/controllers/replicationcontroller/).
The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers

View File

@ -26,10 +26,10 @@ weight: 90
<!-- overview -->
{{< note >}}
<!--
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
-->
{{< note >}}
现在推荐使用配置 [`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) 的
[`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 来建立副本管理机制。
{{< /note >}}
@ -56,7 +56,7 @@ only a single pod. A ReplicationController is similar to a process supervisor,
but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods
across multiple nodes.
-->
## ReplicationController 如何工作
## ReplicationController 如何工作 {#how-a-replicationcontroller-works}
当 Pod 数量过多时ReplicationController 会终止多余的 Pod。当 Pod 数量太少时ReplicationController 将会启动新的 Pod。
与手动创建的 Pod 不同,由 ReplicationController 创建的 Pod 在失败、被删除或被终止时会被自动替换。
@ -82,7 +82,7 @@ service, such as web servers.
This example ReplicationController config runs three copies of the nginx web server.
-->
## 运行一个示例 ReplicationController
## 运行一个示例 ReplicationController {#running-an-example-replicationcontroller}
这个示例 ReplicationController 配置运行 nginx Web 服务器的三个副本。
@ -187,14 +187,18 @@ specifies an expression with the name from each pod in the returned list.
## Writing a ReplicationController Spec
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
The name of a ReplicationController object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
For general information about working with configuration files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
-->
## 编写一个 ReplicationController 规约
## 编写一个 ReplicationController 规约 {#writing-a-replicationcontroller-spec}
与所有其它 Kubernetes 配置一样ReplicationController 需要 `apiVersion`
`kind``metadata` 字段。
ReplicationController 对象的名称必须是有效的
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
有关使用配置文件的常规信息,参考
[对象管理](/zh-cn/docs/concepts/overview/working-with-objects/object-management/)。
@ -205,14 +209,14 @@ ReplicationController 也需要一个 [`.spec` 部分](https://git.k8s.io/commun
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
-->
### Pod 模板 {#pod-template}
`.spec.template``.spec` 的唯一必需字段。
`.spec.template` 是一个 [Pod 模板](/zh-cn/docs/concepts/workloads/pods/#pod-templates)。
它的模式与 [Pod](/zh-cn/docs/concepts/workloads/pods/) 完全相同,只是它是嵌套的,没有 `apiVersion``kind` 属性。
它的模式与 {{< glossary_tooltip text="Pod" term_id="pod" >}} 完全相同,只是它是嵌套的,没有 `apiVersion``kind` 属性。
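作为示意(镜像、标签等均为假设),下面的片段展示了嵌套在 ReplicationController `.spec` 中的 Pod 模板:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:                      # Pod 模板:嵌套的 Pod 规约,没有 apiVersion 和 kind
    metadata:
      labels:
        app: nginx               # 需要与上面的 selector 匹配
    spec:
      restartPolicy: Always      # ReplicationController 只允许 Always这也是默认值
      containers:
      - name: nginx
        image: nginx:1.14.2      # 镜像版本仅作示例
        ports:
        - containerPort: 80
```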
<!--
In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
@ -221,7 +225,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
For local container restarts, ReplicationControllers delegate to an agent on the node,
for example the [Kubelet](/docs/admin/kubelet/) or Docker.
for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker.
-->
除了 Pod 所需的字段外ReplicationController 中的 Pod 模板必须指定适当的标签和适当的重新启动策略。
对于标签,请确保不与其他控制器重叠。参考 [Pod 选择算符](#pod-selector)。
@ -368,7 +372,7 @@ Pods may be removed from a ReplicationController's target set by changing their
<!--
## Common usage patterns
-->
## 常见的使用模式
## 常见的使用模式 {#common-usage-patterns}
<!--
### Rescheduling
@ -393,7 +397,7 @@ The ReplicationController enables scaling the number of replicas up or down, eit
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
As explained in [#1353](https://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
-->
### 滚动更新 {#rolling-updates}
@ -407,17 +411,11 @@ ReplicationController 的设计目的是通过逐个替换 Pod 以方便滚动
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
Rolling update is implemented in the client tool
[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
-->
理想情况下,滚动更新控制器将考虑应用程序的就绪情况,并确保在任何给定时间都有足够数量的 Pod 有效地提供服务。
这两个 ReplicationController 将需要创建至少具有一个不同标签的 Pod比如 Pod 主要容器的镜像标签,因为通常是镜像更新触发滚动更新。
滚动更新是在客户端工具 [`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update)
中实现的。访问 [`kubectl rolling-update` 任务](/zh-cn/docs/tasks/run-application/rolling-update-replication-controller/)以获得更多的具体示例。
<!--
### Multiple release tracks
@ -462,7 +460,7 @@ A ReplicationController will never terminate on its own, but it isn't expected t
Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.
-->
## 编写多副本的应用
## 编写多副本的应用 {#writing-programs-for-replication}
由 ReplicationController 创建的 Pod 是可替换的,语义上是相同的,
尽管随着时间的推移,它们的配置可能会变得异构。
@ -477,9 +475,9 @@ Pods created by a ReplicationController are intended to be fungible and semantic
<!--
## Responsibilities of the ReplicationController
The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
-->
## ReplicationController 的职责
## ReplicationController 的职责 {#responsibilities-of-the-replicationcontroller}
ReplicationController 仅确保所需的 Pod 数量与其标签选择算符匹配,并且是可操作的。
目前,它的计数中只排除终止的 Pod。
@ -488,7 +486,7 @@ ReplicationController 仅确保所需的 Pod 数量与其标签选择算符匹
我们计划发出事件,这些事件可以被外部客户端用来实现任意复杂的替换和/或缩减策略。
<!--
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
-->
ReplicationController 永远被限制在这个狭隘的职责范围内。
它本身既不执行就绪态探测,也不执行活跃性探测。
@ -501,7 +499,7 @@ ReplicationController 永远被限制在这个狭隘的职责范围内。
我们甚至计划考虑批量创建 Pod 的机制(查阅 [#170](https://issue.k8s.io/170))。
<!--
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://netflixtechblog.com/asgard-web-based-cloud-management-and-deployment-2c9fc4e4d3a1) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://netflixtechblog.com/asgard-web-based-cloud-management-and-deployment-2c9fc4e4d3a1) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
-->
ReplicationController 旨在成为可组合的构建基元。
我们希望在它和其他补充原语的基础上构建更高级别的 API 或者工具,以便于将来的用户使用。
@ -516,7 +514,7 @@ Replication controller is a top-level resource in the Kubernetes REST API. More
API object can be found at:
[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).
-->
## API 对象
## API 对象 {#api-object}
在 Kubernetes REST API 中 Replication controller 是顶级资源。
更多关于 API 对象的详细信息可以在
@ -524,13 +522,14 @@ API object can be found at:
<!--
## Alternatives to ReplicationController
### ReplicaSet
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
Its mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate Pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or dont require updates at all.
It's mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
-->
## ReplicationController 的替代方案
## ReplicationController 的替代方案 {#alternatives-to-replicationcontroller}
### ReplicaSet
@ -569,7 +568,7 @@ ReplicationController 将本地容器重启委托给节点上的某个代理(例
<!--
### Job
Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for Pods that are expected to terminate on their own
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicationController for pods that are expected to terminate on their own
(that is, batch jobs).
-->
### Job

File diff suppressed because it is too large

View File

@ -158,6 +158,14 @@ PUT | update
PATCH | patch
DELETE | delete针对单个资源、deletecollection针对集合
{{< caution >}}
<!--
The `get`, `list` and `watch` verbs can all return the full details of a resource. In terms of the returned data they are equivalent. For example, `list` on `secrets` will still reveal the `data` attributes of any returned resources.
-->
`get`、`list` 和 `watch` 动作都可以返回一个资源的完整详细信息。就返回的数据而言,它们是等价的。
例如,对 `secrets` 使用 `list` 仍然会显示所有已返回资源的 `data` 属性。
{{< /caution >}}
<!--
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:

View File

@ -2,21 +2,8 @@
title: kube-controller-manager
content_type: tool-reference
weight: 30
auto_generated: true
---
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference conent, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
## {{% heading "synopsis" %}}
<!--
@ -71,7 +58,7 @@ The map from metric-label to value allow-list of this label. The key's format is
-->
从度量值标签到准许值列表的映射。键名的格式为&lt;MetricName&gt;,&lt;LabelName&gt;
准许值的格式为&lt;allowed_value&gt;,&lt;allowed_value&gt;...。
例如,<code>metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3'
例如,<code>metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3',
metric2,label='v1,v2,v3'</code>
</p>
</td>
@ -86,7 +73,7 @@ metric2,label='v1,v2,v3'</code>。
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
-->
协调器reconciler在相邻两次对存储卷进行挂载和解除挂载操作之间的等待时间。
此时长必须长于 1 秒钟。此值设置为大于默认值时,可能导致存储卷无法与 Pods 匹配。
此时长必须长于 1 秒钟。此值设置为大于默认值时,可能导致存储卷无法与 Pod 匹配。
</td>
</tr>
@ -98,9 +85,9 @@ The reconciler sync wait time between volume attach detach. This duration must b
<!--
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
-->
此标志值为一个 kubeconfig 文件的路径名。该文件中包含与某 Kubernetes “核心”
此标志值为一个 kubeconfig 文件的路径名。该文件中包含与某 Kubernetes “核心”
服务器相关的信息,并支持足够的权限以创建 tokenreviews.authentication.k8s.io。
此选项是可选的。如果设置为空值,所有令牌请求都会被认作匿名请求,
此选项是可选的。如果设置为空值,所有令牌请求都会被认作匿名请求,
Kubernetes 也不再在集群中查找客户端的 CA 证书信息。
</td>
</tr>
@ -113,8 +100,8 @@ Kubernetes 也不再在集群中查找客户端的 CA 证书信息。
<!--
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
-->
此值为 false 时,通过 authentication-kubeconfig 参数所指定的文件会被用来
检索集群中缺失的身份认证配置信息。
此值为 false 时,通过 authentication-kubeconfig
参数所指定的文件会被用来检索集群中缺失的身份认证配置信息。
</td>
</tr>
@ -256,8 +243,8 @@ Type of CIDR allocator to use
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
-->
如果设置了此标志,对于所有能够提供客户端证书的请求,若该证书由
<code>--client-ca-file</code> 中所给机构之一签署,则该请求会被
成功认证为客户端证书中 CommonName 所标识的实体。
<code>--client-ca-file</code> 中所给机构之一签署,
则该请求会被成功认证为客户端证书中 CommonName 所标识的实体。
</td>
</tr>
@ -293,7 +280,7 @@ The provider for cloud services. Empty string for no provider.
<!--
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
-->
集群中 Pods 的 CIDR 范围。要求 <code>--allocate-node-cidrs</code> 标志为 true。
集群中 Pod 的 CIDR 范围。要求 <code>--allocate-node-cidrs</code> 标志为 true。
</td>
</tr>
@ -318,7 +305,7 @@ The instance prefix for the cluster.
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified.
-->
包含 PEM 编码格式的 X509 CA 证书的文件名。该证书用来发放集群范围的证书。
如果设置了此标志,则不能指定更具体的<code>--cluster-signing-*</code> 标志。
如果设置了此标志,则不能指定更具体的 <code>--cluster-signing-*</code> 标志。
</td>
</tr>
@ -331,7 +318,7 @@ Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scop
The max length of duration signed certificates will be given.
Individual CSRs may request shorter certs by setting spec.expirationSeconds.
-->
所签名证书的有效期限。每个 CSR 可以通过设置 spec.expirationSeconds 来请求更短的证书。
所签名证书的有效期限。每个 CSR 可以通过设置 <code>spec.expirationSeconds</code> 来请求更短的证书。
</td>
</tr>
@ -358,7 +345,7 @@ If specified, no more specific --cluster-signing-* flag may be specified.
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
-->
包含 PEM 编码的 X509 CA 证书的文件名,
该证书用于为 kubernetes.io/kube-apiserver-client 签署者颁发证书。
该证书用于为 kubernetes.io/kube-apiserver-client 签署者颁发证书。
如果指定,则不得设置 <code>--cluster-signing-{cert,key}-file</code>
</td>
</tr>
@ -372,7 +359,7 @@ Filename containing a PEM-encoded X509 CA certificate used to issue certificates
Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io/kube-apiserver-client signer. If specified, --cluster-signing-{cert,key}-file must not be set.
-->
包含 PEM 编码的 RSA 或 ECDSA 私钥的文件名,
该私钥用于为 kubernetes.io/kube-apiserver-client 签署者签名证书。
该私钥用于为 kubernetes.io/kube-apiserver-client 签署者签名证书。
如果指定,则不得设置 <code>--cluster-signing-{cert,key}-file</code>
</td>
</tr>
@ -386,7 +373,7 @@ Filename containing a PEM-encoded RSA or ECDSA private key used to sign certific
Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer. If specified, --cluster-signing-{cert,key}-file must not be set.
-->
包含 PEM 编码的 X509 CA 证书的文件名,
该证书用于为 kubernetes.io/kube-apiserver-client-kubelet 签署者颁发证书。
该证书用于为 kubernetes.io/kube-apiserver-client-kubelet 签署者颁发证书。
如果指定,则不得设置 <code>--cluster-signing-{cert,key}-file</code>
</td>
</tr>
@ -672,7 +659,7 @@ Interval between starting controller managers.
<!--
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.<br/>All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished<br/>Disabled-by-default controllers: bootstrapsigner, tokencleaner
-->
要启用的控制器列表。<code>\*</code> 表示启用所有默认启用的控制器;
要启用的控制器列表。<code>*</code> 表示启用所有默认启用的控制器;
<code>foo</code> 启用名为 foo 的控制器;
<code>-foo</code> 表示禁用名为 foo 的控制器。<br/>
控制器的全集attachdetach、bootstrapsigner、cloud-node-lifecycle、clusterrole-aggregation、cronjob、csrapproving、csrcleaner、csrsigning、daemonset、deployment、disruption、endpoint、endpointslice、endpointslicemirroring、ephemeral-volume、garbagecollector、horizontalpodautoscaling、job、namespace、nodeipam、nodelifecycle、persistentvolume-binder、persistentvolume-expander、podgc、pv-protection、pvc-protection、replicaset、replicationcontroller、resourcequota、root-ca-cert-publisher、route、service、serviceaccount、serviceaccount-token、statefulset、tokencleaner、ttl、ttl-after-finished<br/>
@ -700,13 +687,11 @@ Disable volume attach detach reconciler sync. Disabling this may cause volumes t
<!--
This flag provides an escape hatch for misbehaving metrics. You must provide the fully qualified metric name in order to disable it. Disclaimer: disabling metrics is higher in precedence than showing hidden metrics.
-->
此标志提供对行为异常的度量值的防控措施。你必须提供度量值的
完全限定名称才能将其禁用。<B>声明</B>:禁用度量值的操作比显示隐藏度量值
的操作优先级高。
此标志提供对行为异常的度量值的防控措施。你必须提供度量值的完全限定名称才能将其禁用。
<B>声明</B>:禁用度量值的操作比显示隐藏度量值的操作优先级高。
</p></td>
</tr>
<tr>
<td colspan="2">--enable-dynamic-provisioning&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<!--Default:-->默认值true</td>
</tr>
@ -741,7 +726,7 @@ Enable HostPath PV provisioning when running without a cloud provider. This allo
-->
在没有云驱动程序的情况下,启用 HostPath 持久卷的制备。
此参数便于对卷供应功能进行开发和测试。HostPath 卷的制备并非受支持的功能特性,
在多节点的集群中也无法工作,因此除了开发和测试环境中不应使用。
在多节点的集群中也无法工作,因此除了开发和测试环境中不应使用 HostPath 卷的制备
</td>
</tr>
@ -757,7 +742,6 @@ Whether to enable controller leader migration.
</p></td>
</tr>
<tr>
<td colspan="2">--enable-taint-manager&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<!--Default:-->默认值true</td>
</tr>
@ -766,8 +750,8 @@ Whether to enable controller leader migration.
<!--
WARNING: Beta feature. If set to true enables NoExecute Taints and will evict all not-tolerating Pod running on Nodes tainted with this kind of Taints.
-->
警告此为Beta 阶段特性。设置为 true 时会启用 NoExecute 污点,
并在所有标记了此污点的节点上逐出所有无法忍受该污点的 Pods
警告:此为 Beta 阶段特性。设置为 true 时会启用 NoExecute 污点,
并在所有标记了此污点的节点上驱逐所有无法忍受该污点的 Pod。
</td>
</tr>
@ -779,7 +763,7 @@ WARNING: Beta feature. If set to true enables NoExecute Taints and will evict al
<!--
The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
-->
端点Endpoint批量更新周期时长。对 Pods 变更的处理会被延迟,
端点Endpoint批量更新周期时长。对 Pod 变更的处理会被延迟,
以便将其与即将到来的更新操作合并,从而减少端点更新操作次数。
较大的数值意味着端点更新的迟滞时间会增长,也意味着所生成的端点版本个数会变少。
</td>
@ -793,7 +777,7 @@ The length of endpoint updates batching period. Processing of pod changes will b
<!--
The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated
-->
端点片段Endpoint Slice批量更新周期时长。对 Pods 变更的处理会被延迟,
端点片段Endpoint Slice批量更新周期时长。对 Pod 变更的处理会被延迟,
以便将其与即将到来的更新操作合并,从而减少端点更新操作次数。
较大的数值意味着端点更新的迟滞时间会增长,也意味着所生成的端点版本个数会变少。
</td>
@ -808,8 +792,8 @@ The length of endpoint slice updates batching period. Processing of pod changes
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
-->
当云驱动程序设置为 external 时要使用的插件名称。此字符串可以为空。
只能在云驱动程序为 external 时设置。目前用来保证节点控制器和卷控制器能够
在树内in-tree云驱动上正常工作。
只能在云驱动程序为 external 时设置。
目前用来保证节点控制器和卷控制器能够在树内in-tree云驱动上正常工作。
</td>
</tr>
@ -922,7 +906,7 @@ WinDSR=true|false (ALPHA - default=false)<br/>
WinOverlay=true|false (BETA - default=true)<br/>
WindowsHostProcessContainers=true|false (BETA - default=true)
-->
一组 key=value 对,用来描述测试性/试验性功能的特性门控Feature Gate。可选项有
一组 key=value 对,用来描述测试性/试验性功能的特性门控Feature Gate。可选项有<br/>
APIListChunking=true|false (BETA - 默认值=true)<br/>
APIPriorityAndFairness=true|false (BETA - 默认值=true)<br/>
APIResponseCompression=true|false (BETA - 默认值=true)<br/>
@ -1071,8 +1055,8 @@ Pod 启动之后可以忽略 CPU 采样值的时长。
<!--
The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.
-->
自动扩缩程序的回溯时长。自动扩缩器不会基于在给定的时长内所建议的规模
对负载执行规模缩小的操作。
自动扩缩程序的回溯时长。
自动扩缩程序不会基于在给定的时长内所建议的规模对负载执行缩容操作。
</td>
</tr>
@ -1096,7 +1080,7 @@ Pod 启动之后,在此值所给定的时长内,就绪状态的变化都不
<!--
The period for syncing the number of pods in horizontal pod autoscaler.
-->
水平 Pod 扩缩器对 Pods 数目执行同步操作的周期。
水平 Pod 扩缩器对 Pod 数目执行同步操作的周期。
</td>
</tr>
@ -1182,8 +1166,9 @@ Path to kubeconfig file with authorization and master location information.
<!--
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
-->
节点控制器在执行 Pod 逐出操作逻辑时,基于此标志所设置的节点个数阈值来判断
所在集群是否为大规模集群。当集群规模小于等于此规模时,
节点控制器在执行 Pod 驱逐操作逻辑时,
基于此标志所设置的节点个数阈值来判断所在集群是否为大规模集群。
当集群规模小于等于此规模时,
<code>--secondary-node-eviction-rate</code> 会被隐式重设为 0。
</td>
</tr>
@ -1209,10 +1194,11 @@ Start a leader election client and gain leadership before executing the main loo
<!--
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
-->
对于未获得领导者身份的节点,在探测到领导者身份需要更迭时需要等待
此标志所设置的时长,才能尝试去获得曾经是领导者但尚未续约的席位。
本质上,这个时长也是现有领导者节点在被其他候选节点替代之前可以停止
的最长时长。只有集群启用了领导者选举机制时,此标志才起作用。
对于未获得领导者身份的节点,
在探测到领导者身份需要更迭时需要等待此标志所设置的时长,
才能尝试去获得曾经是领导者但尚未续约的席位。本质上,
这个时长也是现有领导者节点在被其他候选节点替代之前可以停止的最长时长。
只有集群启用了领导者选举机制时,此标志才起作用。
</td>
</tr>
@ -1240,7 +1226,7 @@ The interval between attempts by the acting master to renew a leadership slot be
The type of resource object that is used for locking during leader election. Supported options are 'leases', 'endpointsleases' and 'configmapsleases'.
-->
在领导者选举期间用于锁定的资源对象的类型。 支持的选项为
"leases"、"endpointsleases" 和 "configmapsleases"
<code>leases</code><code>endpointsleases</code><code>configmapsleases</code>
</td>
</tr>
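
A hedged sketch of how the leader-election flags described above are usually combined on a kube-controller-manager command line; the durations shown are illustrative values, not recommendations taken from this page:

```shell
kube-controller-manager \
  --leader-elect=true \
  --leader-elect-resource-lock=leases \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s
```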
@ -1465,8 +1451,8 @@ Number of nodes per second on which pods are deleted in case of node failure whe
-->
当某区域健康时,在节点故障的情况下每秒删除 Pods 的节点数。
请参阅 <code>--unhealthy-zone-threshold</code>
以了解“健康”的判定标准。这里的区域zone在集群并不跨多个区域时
指的是整个集群。
以了解“健康”的判定标准。
这里的区域zone在集群并不跨多个区域时指的是整个集群。
</td>
</tr>
@ -1545,7 +1531,7 @@ If true, SO_REUSEPORT will be used when binding the port, which allows more than
<!--
The grace period for deleting pods on failed nodes.
-->
在失效的节点上删除 Pods 时为其预留的宽限期。
在失效的节点上删除 Pod 时为其预留的宽限期。
</td>
</tr>
@ -1569,8 +1555,8 @@ Enable profiling via web interface host:port/debug/pprof/
<!--
the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod
-->
NFS 清洗 Pod 在清洗用过的卷时,根据此标志所设置的秒数,为每清洗 1 GiB 数据
增加对应超时时长,作为 activeDeadlineSeconds。
NFS 清洗 Pod 在清洗用过的卷时,根据此标志所设置的秒数,
为每清洗 1 GiB 数据增加对应超时时长,作为 activeDeadlineSeconds。
</td>
</tr>
@ -1607,7 +1593,7 @@ NFS 回收器 Pod 要使用的 activeDeadlineSeconds 参数下限。
<!--
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
-->
对 HostPath 持久卷进行回收利用时,用作模版的 Pod 定义文件所在路径。
对 HostPath 持久卷进行回收利用时,用作模板的 Pod 定义文件所在路径。
此标志仅用于开发和测试目的,不适合多节点集群中使用。
</td>
</tr>
@ -1620,7 +1606,7 @@ The file path to a pod definition used as a template for HostPath persistent vol
<!--
The file path to a pod definition used as a template for NFS persistent volume recycling
-->
对 NFS 卷执行回收利用时,用作模版的 Pod 定义文件所在路径。
对 NFS 卷执行回收利用时,用作模板的 Pod 定义文件所在路径。
</td>
</tr>
@ -1759,7 +1745,7 @@ The period for reconciling routes created for Nodes by cloud provider.
<!--
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
-->
区域不健康,节点失效时,每秒钟从此标志所给的节点个数上删除 Pods
一个区域不健康造成节点失效时,每秒钟从此标志所给的节点上删除 Pod 的节点个数
参见 <code>--unhealthy-zone-threshold</code> 以了解“健康与否”的判定标准。
在只有一个区域的集群中,区域指的是整个集群。如果集群规模小于
<code>--large-cluster-size-threshold</code> 所设置的节点个数时,
@ -1826,8 +1812,8 @@ The previous version for which you want to show hidden metrics. Only the previou
<!--
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If &lt;= 0, the terminated pod garbage collector is disabled.
-->
在已终止 Pods 垃圾收集器删除已终止 Pods 之前,可以保留的已删除
Pods 的个数上限。若此值小于等于 0则相当于禁止垃圾回收已终止的 Pods
在已终止 Pod 垃圾收集器删除已终止 Pod 之前,可以保留的已终止 Pod 的个数上限。
若此值小于等于 0则相当于禁止垃圾回收已终止的 Pod。
</td>
</tr>
@ -1896,8 +1882,9 @@ A pair of x509 certificate and private key file paths, optionally suffixed with
-->
X509 证书和私钥文件路径的耦对。作为可选项,可以添加域名模式的列表,
其中每个域名模式都是可以带通配片段前缀的全限定域名FQDN
域名模式也可以使用 IP 地址字符串,不过只有 API 服务器在所给 IP 地址上
对客户端可见时才可以使用 IP 地址。在未提供域名模式时,从证书中提取域名。
域名模式也可以使用 IP 地址字符串,
不过只有 API 服务器在所给 IP 地址上对客户端可见时才可以使用 IP 地址。
在未提供域名模式时,从证书中提取域名。
如果有非通配方式的匹配,则优先于通配方式的匹配;显式的域名模式优先于提取的域名。
当存在多个密钥/证书耦对时,可以多次使用 <code>--tls-sni-cert-key</code> 标志。
例如:<code>example.crt,example.key</code><code>foo.crt,foo.key:\*.foo.com,foo.com</code>

View File

@ -971,7 +971,7 @@ run those in addition to the pods specified by static pod files, and exit.
Default: false
-->
<p><code>runOnce</code>字段被设置时kubelet 会咨询 API 服务器一次并获得 Pod 列表,
运行在静态 Pod 文件中指定的 Pod 及这里所获得的 Pod然后退出。</p>
运行在静态 Pod 文件中指定的 Pod 及这里所获得的 Pod然后退出。</p>
<p>默认值false</p>
</td>
</tr>
@ -1467,13 +1467,13 @@ Default: &quot;&quot;
<td>
<!--systemReservedCgroup helps the kubelet identify absolute name of top level CGroup used
to enforce <code>systemReserved</code> compute resource reservation for OS system daemons.
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
Refer to <a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
doc for more information.
Default: &quot;&quot;
-->
<p><code>systemReservedCgroup</code>帮助 kubelet 识别用来为 OS 系统级守护进程实施
<code>systemReserved</code>计算资源预留时使用的顶级控制组CGroup
参考 <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
参考 <a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
以了解详细信息。</p>
<p>默认值:&quot;&quot;</p>
</td>
@ -1486,13 +1486,13 @@ Default: &quot;&quot;
<td>
<!--kubeReservedCgroup helps the kubelet identify absolute name of top level CGroup used
to enforce `KubeReserved` compute resource reservation for Kubernetes node system daemons.
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
Refer to <a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
doc for more information.
Default: ""
-->
<p><code>kubeReservedCgroup</code> 帮助 kubelet 识别用来为 Kubernetes 节点系统级守护进程实施
<code>kubeReserved</code>计算资源预留时使用的顶级控制组CGroup
参阅 <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
参阅 <a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
了解进一步的信息。</p>
<p>默认值:&quot;&quot;</p>
</td>
@ -1509,7 +1509,7 @@ If <code>none</code> is specified, no other options may be specified.
When <code>system-reserved</code> is in the list, systemReservedCgroup must be specified.
When <code>kube-reserved</code> is in the list, kubeReservedCgroup must be specified.
This field is supported only when <code>cgroupsPerQOS</code> is set to true.
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
Refer to <a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
for more information.
Default: [&quot;pods&quot;]
-->
@ -1520,7 +1520,7 @@ Default: [&quot;pods&quot;]
<p>如果列表中包含<code>system-reserved</code>,则必须设置<code>systemReservedCgroup</code></p>
<p>如果列表中包含<code>kube-reserved</code>,则必须设置<code>kubeReservedCgroup</code></p>
<p>这个字段只有在<code>cgroupsPerQOS</code>被设置为<code>true</code>才被支持。</p>
<p>参阅<a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
<p>参阅<a href="https://git.k8s.io/design-proposals-archive/node/node-allocatable.md">Node Allocatable</a>
了解进一步的信息。</p>
<p>默认值:[&quot;pods&quot;]</p>
</td>

View File

@ -334,6 +334,9 @@ kubectl get pods --selector=app=cassandra -o \
kubectl get configmap myconfig \
-o jsonpath='{.data.ca\.crt}'
# Retrieve a base64 encoded value with dashes instead of underscores.
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'
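# (Illustrative follow-up, not part of the original cheat sheet: decode the
#  base64 value retrieved above, reusing the same Secret name and key.)
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}' | base64 --decode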
# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/control-plane')
kubectl get node --selector='!node-role.kubernetes.io/control-plane'
@ -417,6 +420,9 @@ kubectl get pods --selector=app=cassandra -o \
kubectl get configmap myconfig \
-o jsonpath='{.data.ca\.crt}'
# 检索一个 base64 编码的值,其中的键名应该包含减号而不是下划线。
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'
# 获取所有工作节点(使用选择器以排除标签名称为 'node-role.kubernetes.io/control-plane' 的结果)
kubectl get node --selector='!node-role.kubernetes.io/control-plane'

View File

@ -45,8 +45,8 @@ Self 是一个特殊情况,因为用户应始终能够检查自己是否可以
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的列表元数据。
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
标准的列表元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../authorization-resources/self-subject-access-review-v1#SelfSubjectAccessReviewSpec" >}}">SelfSubjectAccessReviewSpec</a>),必需
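
As a hedged aside, not part of the generated API reference: the usual way this resource is exercised from the command line is `kubectl auth can-i`, which builds and submits a SelfSubjectAccessReview for the current user.

```shell
# Ask the API server whether the current user may perform a given action;
# kubectl submits a SelfSubjectAccessReview and prints the answer.
kubectl auth can-i create deployments --namespace default
```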

View File

@ -30,7 +30,7 @@ SelfSubjectRulesReview 枚举当前用户可以在某命名空间内执行的操
返回的操作列表可能不完整,具体取决于服务器的鉴权模式以及评估过程中遇到的任何错误。
SelfSubjectRulesReview 应由 UI 用于显示/隐藏操作,或让最终用户尽快理解自己的权限。
SelfSubjectRulesReview 不得被外部系统使用以驱动鉴权决策,
因为这会引起混淆代理人confused deputy、缓存有效期/吊销cache lifetime/revocation和正确性问题。
因为这会引起混淆代理人Confused deputy、缓存有效期/吊销Cache lifetime/revocation和正确性问题。
SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决策的正确方式。
<hr>
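
A similar hedged note for this resource: `kubectl auth can-i --list` submits a SelfSubjectRulesReview for the current user and prints the returned rule set.

```shell
# Enumerate the actions the current user may perform in a namespace;
# the output is backed by a SelfSubjectRulesReview response.
kubectl auth can-i --list --namespace default
```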
@ -49,8 +49,8 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的列表元数据。
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
标准的列表元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../authorization-resources/self-subject-rules-review-v1#SelfSubjectRulesReviewSpec" >}}">SelfSubjectRulesReviewSpec</a>),必需
@ -60,12 +60,6 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
Status is filled in by the server and indicates the set of actions a user can perform.
<a name="SubjectRulesReviewStatus"></a>
*SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete.*
- **status.incomplete** (boolean), required
Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation.
- **status.nonResourceRules** ([]NonResourceRule), required
NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.
<a name="NonResourceRule"></a>
*NonResourceRule holds information that describes a rule for the non-resource*
-->
- **status** (SubjectRulesReviewStatus)
@ -76,6 +70,15 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
此检查可能不完整,具体取决于服务器配置的 Authorizer 的集合以及评估期间遇到的任何错误。
由于鉴权规则是叠加的,所以如果某个规则出现在列表中,即使该列表不完整,也可以安全地假定该主体拥有该权限。**
<!--
- **status.incomplete** (boolean), required
Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation.
- **status.nonResourceRules** ([]NonResourceRule), required
NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.
<a name="NonResourceRule"></a>
*NonResourceRule holds information that describes a rule for the non-resource*
-->
- **status.incomplete** (boolean),必需
当此调用返回的规则不完整时incomplete 结果为 true。
@ -88,28 +91,33 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
<a name="NonResourceRule"></a>
**nonResourceRule 包含描述非资源路径的规则的信息。**
<!--
<!--
- **status.nonResourceRules.verbs** ([]string), required
Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all.
- **status.nonResourceRules.nonResourceURLs** ([]string)
NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. "*" means all.
NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. "*" means all.
-->
- **status.nonResourceRules.verbs** ([]string),必需
verb 是 kubernetes 非资源 API 动作的列表,例如 get、post、put、delete、patch、head、options。
`*` 表示所有动作。
- **status.nonResourceRules.nonResourceURLs** ([]string)
nonResourceURLs 是用户应有权访问的一组部分 URL。
允许使用 `*`,但仅能作为路径中最后一段且必须用于完整的一段。
`*` 表示全部。
<!--
- **status.resourceRules** ([]ResourceRule), required
ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.
<a name="ResourceRule"></a>
*ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.*
- **status.resourceRules.verbs** ([]string), required
Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all.
-->
- **status.nonResourceRules.verbs** ([]string),必需
verb 是 kubernetes 非资源 API 动作的列表,例如 get、post、put、delete、patch、head、options。
"*" 表示所有动作。
- **status.nonResourceRules.nonResourceURLs** ([]string)
nonResourceURLs 是用户应有权访问的一组部分 URL。
允许使用 "*",但仅能作为路径中最后一段且必须用于完整的一段。
"*" 表示全部。
Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all.
-->
- **status.resourceRules** ([]ResourceRule),必需
@ -122,8 +130,9 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
- **status.resourceRules.verbs** ([]string),必需
verb 是 kubernetes 资源 API 动作的列表,例如 get、list、watch、create、update、delete、proxy。
"*" 表示所有动作。
<!--
`*` 表示所有动作。
<!--
- **status.resourceRules.apiGroups** ([]string)
APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all.
- **status.resourceRules.resourceNames** ([]string)
@ -131,26 +140,30 @@ SubjectAccessReview 和 LocalAccessReview 是遵从 API 服务器所做鉴权决
- **status.resourceRules.resources** ([]string)
Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups.
"*/foo" represents the subresource 'foo' for all resources in the specified apiGroups.
- **status.evaluationError** (string)
EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete.
-->
-->
- **status.resourceRules.apiGroups** ([]string)
apiGroups 是包含资源的 APIGroup 的名称。
如果指定了多个 API 组,则允许对任何 API 组中枚举的资源之一请求任何操作。
"*" 表示所有 APIGroup。
`*` 表示所有 APIGroup。
- **status.resourceRules.resourceNames** ([]string)
resourceNames 是此规则所适用的资源名称白名单,可选。
空集合意味着允许所有资源。
"*" 表示所有资源。
`*` 表示所有资源。
- **status.resourceRules.resources** ([]string)
resources 是此规则所适用的资源的列表。
"*" 表示指定 APIGroup 中的所有资源。
"*/foo" 表示指定 APIGroup 中所有资源的子资源 "foo"。
`*` 表示指定 APIGroup 中的所有资源。
`*/foo` 表示指定 APIGroup 中所有资源的子资源 "foo"。
<!--
- **status.evaluationError** (string)
EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete.
-->
- **status.evaluationError** (string)

Some files were not shown because too many files have changed in this diff