Merge remote-tracking branch 'upstream/main' into dev-1.29

This commit is contained in:
drewhagen 2023-11-29 15:33:49 -06:00
commit deaf1b920a
21 changed files with 599 additions and 368 deletions

View File

@ -35,7 +35,7 @@ Firstly, let's discuss bin-packing and cluster resource allocation. There's
When configuring fixed-fraction headroom limits, a proportional amount of this will be available to the pods. If the percentage of unallocated resources in the cluster is lower than the constant we use for setting fixed-fraction headroom limits (see the figure, line 2), all the pods together are able to theoretically use up all the nodes' resources; otherwise there are some resources that will inevitably be wasted (see the figure, line 1). In order to eliminate the inevitable resource waste, the percentage for fixed-fraction headroom limits should be configured so that it's at least equal to the expected percentage of unallocated resources.
{{<figure alt="Chart displaying various requests/limits configurations" width="40%" src="requests-limits-configurations.svg">}}
{{<figure alt="Chart displaying various requests/limits configurations" class="diagram-medium" src="requests-limits-configurations.svg">}}
For requests = limits (see the figure, line 3), this does not hold: Unless we're able to allocate all nodes' resources, there's going to be some inevitably wasted resources. Without any knobs to turn on the requests/limits side, the only suitable approach here is to ensure efficient bin-packing on the nodes by configuring correct machine profiles. This can be done either manually or by using a variety of cloud service provider tooling, for example [Karpenter](https://karpenter.sh/) for EKS or [GKE Node auto provisioning](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning).
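As a concrete sketch of the fixed-fraction headroom approach (the numbers below are illustrative, not a recommendation), limits are set a constant percentage above requests:
```yaml
# Illustrative only: a 25% fixed-fraction headroom between requests and limits
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 250m      # 200m * 1.25
    memory: 320Mi  # 256Mi * 1.25
```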

View File

@ -54,7 +54,7 @@ started using it. For E2E testing, that was
[Ginkgo+Gomega](https://github.com/onsi/ginkgo). Some hacks were
necessary, for example around cleanup after a test run and for
categorising tests. Eventually this led to Ginkgo v2 and [revised best
practices for E2E testing](/blog/2023/04/12/e2e-testing-best-practices-reloaded/).
practices for E2E testing](https://www.kubernetes.dev/blog/2023/04/12/e2e-testing-best-practices-reloaded/).
Regarding unit testing opinions are pretty diverse: some maintainers
prefer to use just the Go standard library with hand-written
checks. Others use helper packages like stretchr/testify. That

View File

@ -0,0 +1,259 @@
---
layout: blog
title: "New Experimental Features in Gateway API v1.0"
date: 2023-11-28T10:00:00-08:00
slug: gateway-api-ga
---
***Authors:*** Candace Holman (Red Hat), Dave Protasowski (VMware), Gaurav K Ghildiyal (Google), John Howard (Google), Simone Rodigari (IBM)
Recently, the [Gateway API](https://gateway-api.sigs.k8s.io/) [announced its v1.0 GA release](/blog/2023/10/31/gateway-api-ga/), marking a huge milestone for the project.
Along with stabilizing some of the core functionality in the API, a number of exciting new *experimental* features have been added.
## Backend TLS Policy
`BackendTLSPolicy` is a new Gateway API type used for specifying the TLS configuration of the connection from the Gateway to backend Pods via the Service API object.
It is specified as a [Direct PolicyAttachment](https://gateway-api.sigs.k8s.io/geps/gep-713/#direct-policy-attachment) without defaults or overrides, applied to a Service that accesses a backend, where the BackendTLSPolicy resides in the same namespace as the Service to which it is applied.
All Gateway API Routes that point to a referenced Service should respect a configured `BackendTLSPolicy`.
While there were existing ways provided for [TLS to be configured for edge and passthrough termination](https://gateway-api.sigs.k8s.io/guides/tls/#tls-configuration), this new API object specifically addresses the configuration of TLS in order to convey HTTPS from the Gateway dataplane to the backend.
This is referred to as "backend TLS termination" and enables the Gateway to know how to connect to a backend Pod that has its own certificate.
![Termination Types](https://gateway-api.sigs.k8s.io/geps/images/1897-TLStermtypes.png)
The specification of a `BackendTLSPolicy` consists of:
- `targetRef` - Defines the targeted API object of the policy. Only Service is allowed.
- `tls` - Defines the configuration for TLS, including `hostname`, `caCertRefs`, and `wellKnownCACerts`. Either `caCertRefs` or `wellKnownCACerts` may be specified, but not both.
- `hostname` - Defines the Server Name Indication (SNI) that the Gateway uses to connect to the backend. The certificate served by the backend must match this SNI.
- `caCertRefs` - Defines one or more references to objects that contain PEM-encoded TLS certificates, which are used to establish a TLS handshake between the Gateway and backend.
- `wellKnownCACerts` - Specifies whether or not system CA certificates may be used in the TLS handshake between the Gateway and backend.
### Examples
#### Using System Certificates
In this example, the `BackendTLSPolicy` is configured to use system certificates for the TLS-encrypted upstream connection, where the Pods backing the `dev` Service are expected to serve a valid certificate for `dev.example.com`.
```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendTLSPolicy
metadata:
name: tls-upstream-dev
spec:
targetRef:
kind: Service
name: dev-service
group: ""
tls:
wellKnownCACerts: "System"
hostname: dev.example.com
```
#### Using Explicit CA Certificates
In this example, the `BackendTLSPolicy` is configured to use the certificates defined in the `auth-cert` ConfigMap for the TLS-encrypted upstream connection, where the Pods backing the `auth` Service are expected to serve a valid certificate for `auth.example.com`.
```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendTLSPolicy
metadata:
name: tls-upstream-auth
spec:
targetRef:
kind: Service
name: auth-service
group: ""
tls:
caCertRefs:
- kind: ConfigMapReference
name: auth-cert
group: ""
hostname: auth.example.com
```
The following illustrates a BackendTLSPolicy that configures TLS for a Service serving a backend:
{{< mermaid >}}
flowchart LR
client(["client"])
gateway["Gateway"]
style gateway fill:#02f,color:#fff
httproute["HTTP<BR>Route"]
style httproute fill:#02f,color:#fff
service["Service"]
style service fill:#02f,color:#fff
pod1["Pod"]
style pod1 fill:#02f,color:#fff
pod2["Pod"]
style pod2 fill:#02f,color:#fff
client -.->|HTTP <br> request| gateway
gateway --> httproute
httproute -.->|BackendTLSPolicy|service
service --> pod1 & pod2
{{</ mermaid >}}
For more information, refer to the [documentation for TLS](https://gateway-api.sigs.k8s.io/guides/tls).
## HTTPRoute Timeouts
A key enhancement in Gateway API's latest release (v1.0) is the introduction of the `timeouts` field within HTTPRoute Rules. This feature offers a dynamic way to manage timeouts for incoming HTTP requests, adding precision and reliability to your gateway setups.
With Timeouts, developers can fine-tune their Gateway API's behavior in two fundamental ways:
1. **Request Timeout**:
The request timeout is the duration within which the Gateway API implementation must send a response to a client's HTTP request.
It allows flexibility in specifying when this timeout starts, either before or after the entire client request stream is received, making it implementation-specific.
This timeout efficiently covers the entire request-response transaction, enhancing the responsiveness of your services.
1. **Backend Request Timeout**:
   The `backendRequest` timeout gives you precise control over the Gateway's communication with backends.
It sets a timeout for a single request sent from the Gateway to a backend service.
This timeout spans from the initiation of the request to the reception of the full response from the backend.
This feature is particularly helpful in scenarios where the Gateway needs to retry connections to a backend, ensuring smooth communication under various conditions.
Notably, the `request` timeout encompasses the `backendRequest` timeout. Hence, the value of `backendRequest` should never exceed the value of the `request` timeout.
The ability to configure these timeouts adds a new layer of reliability to your Kubernetes services.
Whether it's ensuring client requests are processed within a specified timeframe or managing backend service communications, Gateway API's Timeouts offer the control and predictability you need.
To get started, you can define timeouts in your HTTPRoute rules using the `timeouts` field, with each value specified as a Duration.
A zero-valued timeout (`0s`) disables the timeout, while a valid non-zero-valued timeout should be at least 1ms.
Here's an example of setting request and backendRequest timeouts in an HTTPRoute:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: timeout-example
spec:
parentRefs:
- name: example-gateway
rules:
- matches:
- path:
type: PathPrefix
value: /timeout
timeouts:
request: 10s
backendRequest: 2s
backendRefs:
- name: timeout-svc
port: 8080
```
In this example, a `request` timeout of 10 seconds is defined, ensuring that client requests are processed within that timeframe.
Additionally, a 2-second `backendRequest` timeout is set for individual requests from the Gateway to a backend service called `timeout-svc`.
These new HTTPRoute Timeouts provide Kubernetes users with more control and flexibility in managing network communications, helping ensure a smoother and more predictable experience for both clients and backends.
For additional details and examples, refer to the [official timeouts API documentation](https://gateway-api.sigs.k8s.io/api-types/httproute/#timeouts-optional).
## Gateway Infrastructure Labels
While Gateway API provides a common API for different implementations, each implementation creates different resources under the hood to apply users' intent.
This could be configuring cloud load balancers, creating in-cluster Pods and Services, or more.
While the API has always provided an extension point -- `parametersRef` in `GatewayClass` -- to customize implementation-specific things, there was no core way to express common infrastructure customizations.
Gateway API v1.0 paves the way for this with a new `infrastructure` field on the `Gateway` object, allowing customization of the underlying infrastructure.
For now, this starts small with two critical fields: labels and annotations.
When these are set, any generated infrastructure will have the provided labels and annotations set on them.
For example, I may want to group all my resources for one application together:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: hello-world
spec:
infrastructure:
labels:
app.kubernetes.io/name: hello-world
```
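Annotations are set the same way; here is a sketch combining both (the annotation key below is purely illustrative):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: hello-world
spec:
  infrastructure:
    labels:
      app.kubernetes.io/name: hello-world
    annotations:
      # hypothetical annotation consumed by your own tooling
      example.com/owning-team: platform
```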
In the future, we are looking into more common infrastructure configurations, such as resource sizing.
For more information, refer to the [documentation](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GatewayInfrastructure) for this feature.
## Support for WebSockets, HTTP/2 and more!
Not all implementations of Gateway API support automatic protocol selection.
In some cases, protocols are disabled without an explicit opt-in.
When a Route's backend references a Kubernetes Service, application developers can specify the protocol using the `ServicePort` [`appProtocol`][appProtocol] field.
For example, the following `store` Kubernetes Service indicates that port `8080` supports HTTP/2 Prior Knowledge.
```yaml
apiVersion: v1
kind: Service
metadata:
name: store
spec:
selector:
app: store
ports:
- protocol: TCP
appProtocol: kubernetes.io/h2c
port: 8080
targetPort: 8080
```
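A Route can then reference this Service as a backend in the usual way; here is a minimal sketch (the Route name and the `example-gateway` parent are assumptions), with the implementation using the Service's `appProtocol` to speak h2c to the backing Pods:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-route        # hypothetical name
spec:
  parentRefs:
  - name: example-gateway  # assumed to already exist
  rules:
  - backendRefs:
    - name: store
      port: 8080
```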
Currently, Gateway API has conformance testing for:
- `kubernetes.io/h2c` - HTTP/2 Prior Knowledge
- `kubernetes.io/ws` - WebSocket over HTTP
For more information, refer to the documentation for [Backend Protocol Selection](https://gateway-api.sigs.k8s.io/guides/backend-protocol).
[appProtocol]: https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol
## `gwctl`, our new Gateway API command line tool
`gwctl` is a command line tool that aims to be a `kubectl` replacement for viewing Gateway API resources.
The initial release of `gwctl`, bundled with the Gateway API v1.0 release, includes helpful features for managing Gateway API Policies.
Gateway API Policies serve as powerful extension mechanisms for modifying the behavior of Gateway resources.
One challenge with using policies is that it may be hard to discover which policies are affecting which Gateway resources.
`gwctl` helps bridge this gap by answering questions like:
* Which policies are available for use in the Kubernetes cluster?
* Which policies are attached to a particular Gateway, HTTPRoute, etc?
* If policies are applied to multiple resources in the Gateway resource hierarchy, what is the effective policy that is affecting a particular resource? (For example, if an HTTP request timeout policy is applied to both an HTTPRoute and its parent Gateway, what is the effective timeout for the HTTPRoute?)
`gwctl` is still in the very early phases of development and hence may be a bit rough around the edges.
Follow the instructions in [the repository](https://github.com/kubernetes-sigs/gateway-api/tree/main/gwctl#try-it-out) to install and try out `gwctl`.
### Examples
Here are some examples of how `gwctl` can be used:
```bash
# List all policies in the cluster. This will also give the resource they bind
# to.
gwctl get policies -A
# List all available policy types.
gwctl get policycrds
# Describe all HTTPRoutes in namespace ns2. (Output includes effective policies)
gwctl describe httproutes -n ns2
# Describe a single HTTPRoute in the default namespace. (Output includes
# effective policies)
gwctl describe httproutes my-httproute-1
# Describe all Gateways across all namespaces. (Output includes effective
# policies)
gwctl describe gateways -A
# Describe a single GatewayClass. (Output includes effective policies)
gwctl describe gatewayclasses foo-com-external-gateway-class
```
## Get involved
These features, and many more, continue to be improved in Gateway API.
There are lots of opportunities to get involved and help define the future of Kubernetes routing APIs for both Ingress and Mesh.
If this is interesting to you, please [join us in the community](https://gateway-api.sigs.k8s.io/contributing/) and help us build the future of Gateway API together!

View File

@ -185,8 +185,8 @@ and the volume is considered "released". But it is not yet available for
another claim because the previous claimant's data remains on the volume.
An administrator can manually reclaim the volume with the following steps.
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
(such as an AWS EBS or GCE PD volume) still exists after the PV is deleted.
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset.
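Alternatively, if the administrator wants to reuse the same storage asset rather than delete it, they can create a new PersistentVolume that points at it; a rough sketch (the CSI driver name and volume handle are placeholders):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reclaimed-pv                 # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: example.csi.vendor.com   # placeholder CSI driver
    volumeHandle: existing-volume-id # the pre-existing storage asset
```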
@ -197,7 +197,7 @@ the same storage asset definition.
For volume plugins that support the `Delete` reclaim policy, deletion removes
both the PersistentVolume object from Kubernetes, as well as the associated
storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume. Volumes that were dynamically provisioned
storage asset in the external infrastructure. Volumes that were dynamically provisioned
inherit the [reclaim policy of their StorageClass](#reclaim-policy), which
defaults to `Delete`. The administrator should configure the StorageClass
according to users' expectations; otherwise, the PV must be edited or
@ -371,7 +371,6 @@ the following types of volumes:
* azureFile (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume (deprecated)
* gcePersistentDisk (deprecated)
* rbd (deprecated)
* portworxVolume (deprecated)
@ -439,11 +438,6 @@ Similar to other volume types - FlexVolume volumes can also be expanded when in-
FlexVolume resize is possible only when the underlying driver supports resize.
{{< /note >}}
{{< note >}}
Expanding EBS volumes is a time-consuming operation.
Also, there is a per-volume quota of one modification every 6 hours.
{{< /note >}}
#### Recovering from Failure when Expanding Volumes
If a user specifies a new size that is too big to be satisfied by underlying
@ -519,8 +513,6 @@ This means that support is still available but will be removed in a future Kuber
(**deprecated** in v1.21)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
(**deprecated** in v1.23)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
(**deprecated** in v1.17)
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx volume
(**deprecated** in v1.25)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume
@ -672,8 +664,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
{{< /note >}}
> __Important!__ A volume can only be mounted using one access mode at a time,
> even if it supports many. For example, a GCEPersistentDisk can be mounted as
> ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
> even if it supports many.
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
| :--- | :---: | :---: | :---: | - |
@ -682,8 +673,6 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
| FC | &#x2713; | &#x2713; | - | - |
| FlexVolume | &#x2713; | &#x2713; | depends on the driver | - |
| GCEPersistentDisk | &#x2713; | &#x2713; | - | - |
| Glusterfs | &#x2713; | &#x2713; | &#x2713; | - |
| HostPath | &#x2713; | - | - | - |
| iSCSI | &#x2713; | &#x2713; | - | - |
| NFS | &#x2713; | &#x2713; | &#x2713; | - |
@ -710,9 +699,9 @@ Current reclaim policies are:
* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset such as AWS EBS or GCE PD volume is deleted
* Delete -- associated storage asset
Currently, only NFS and HostPath support recycling. AWS EBS and GCE PD volumes support deletion.
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.
### Mount Options
@ -728,7 +717,6 @@ The following volume types support mount options:
* `azureFile`
* `cephfs` (**deprecated** in v1.28)
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk` (**deprecated** in v1.28)
* `iscsi`
* `nfs`
* `rbd` (**deprecated** in v1.28)
@ -743,8 +731,7 @@ it will become fully deprecated in a future Kubernetes release.
### Node Affinity
{{< note >}}
For most volume types, you do not need to set this field. It is automatically
populated for [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) volume block types.
For most volume types, you do not need to set this field.
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}
@ -965,7 +952,6 @@ applicable:
* CSI
* FC (Fibre Channel)
* GCEPersistentDisk (deprecated)
* iSCSI
* Local volume
* OpenStack Cinder

View File

@ -76,7 +76,6 @@ for provisioning PVs. This field must be specified.
| CephFS | - | - |
| FC | - | - |
| FlexVolume | - | - |
| GCEPersistentDisk | &#x2713; | [GCE PD](#gce-pd) |
| iSCSI | - | - |
| NFS | - | [NFS](#nfs) |
| RBD | &#x2713; | [Ceph RBD](#ceph-rbd) |
@ -123,7 +122,6 @@ StorageClass has the field `allowVolumeExpansion` set to true.
| Volume type | Required Kubernetes version |
| :------------------- | :-------------------------- |
| gcePersistentDisk | 1.11 |
| rbd | 1.11 |
| Azure File | 1.11 |
| Portworx | 1.11 |
@ -167,13 +165,7 @@ requirements](/docs/concepts/configuration/manage-resources-containers/),
anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
- [GCEPersistentDisk](#gce-pd)
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
- All of the above
- [Local](#local)
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
@ -292,55 +284,6 @@ parameters:
[allowedTopologies](#allowed-topologies)
{{< /note >}}
### GCE PD
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
fstype: ext4
replication-type: none
```
- `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
- `zone` (Deprecated): GCE zone. If neither `zone` nor `zones` is specified, volumes are
generally round-robin-ed across all active zones where Kubernetes cluster has
a node. `zone` and `zones` parameters must not be used at the same time.
- `zones` (Deprecated): A comma separated list of GCE zone(s). If neither `zone` nor `zones`
is specified, volumes are generally round-robin-ed across all active zones
where Kubernetes cluster has a node. `zone` and `zones` parameters must not
be used at the same time.
- `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system.
- `replication-type`: `none` or `regional-pd`. Default: `none`.
If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned.
If `replication-type` is set to `regional-pd`, a
[Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)
will be provisioned. It's highly recommended to have
`volumeBindingMode: WaitForFirstConsumer` set, in which case when you create
a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a
Regional Persistent Disk is provisioned with two zones. One zone is the same
as the zone that the Pod is scheduled in. The other zone is randomly picked
from the zones available to the cluster. Disk zones can be further constrained
using `allowedTopologies`.
{{< note >}}
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies). When
[GCE CSI Migration](/docs/concepts/storage/volumes/#gce-csi-migration) is
enabled, a GCE PD volume can be provisioned in a topology that does not match
any nodes, but any pod trying to use that volume will fail to schedule. With
legacy pre-migration GCE PD, in this case an error will be produced
instead at provisioning time. GCE CSI Migration is enabled by default beginning
from the Kubernetes 1.23 release.
{{< /note >}}
### NFS
```yaml

View File

@ -295,127 +295,15 @@ beforehand so that Kubernetes hosts can access them.
See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)
for more details.
### gcePersistentDisk (deprecated) {#gcepersistentdisk}
### gcePersistentDisk (removed) {#gcepersistentdisk}
{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}
Kubernetes {{< skew currentVersion >}} does not include a `gcePersistentDisk` volume type.
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE)
[persistent disk](https://cloud.google.com/compute/docs/disks) (PD) into your Pod.
Unlike `emptyDir`, which is erased when a pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be shared between pods.
The `gcePersistentDisk` in-tree storage driver was deprecated in the Kubernetes v1.17 release
and then removed entirely in the v1.28 release.
{{< note >}}
You must create a PD using `gcloud` or the GCE API or UI before you can use it.
{{< /note >}}
There are some restrictions when using a `gcePersistentDisk`:
* the nodes on which Pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the persistent disk
One feature of GCE persistent disk is concurrent read-only access to a persistent disk.
A `gcePersistentDisk` volume permits multiple consumers to simultaneously
mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many Pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode. Simultaneous
writers are not allowed.
Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless
the PD is read-only or the replica count is 0 or 1.
#### Creating a GCE persistent disk {#gce-create-persistent-disk}
Before you can use a GCE persistent disk with a Pod, you need to create it.
```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```
#### GCE persistent disk configuration example
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: registry.k8s.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
# This GCE PD must already exist.
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
```
#### Regional persistent disks
The [Regional persistent disks](https://cloud.google.com/compute/docs/disks/#repds)
feature allows the creation of persistent disks that are available in two zones
within the same region. In order to use this feature, the volume must be provisioned
as a PersistentVolume; referencing the volume directly from a pod is not supported.
#### Manually provisioning a Regional PD PersistentVolume
Dynamic provisioning is possible using a
[StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce-pd).
Before creating a PersistentVolume, you must create the persistent disk:
```shell
gcloud compute disks create --size=500GB my-data-disk
--region us-central1
--replica-zones us-central1-a,us-central1-b
```
#### Regional persistent disk configuration example
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
spec:
capacity:
storage: 400Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
# failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
- key: topology.kubernetes.io/zone
operator: In
values:
- us-central1-a
- us-central1-b
```
#### GCE CSI migration
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
The `CSIMigration` feature for GCE PD, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `pd.csi.storage.gke.io` Container
Storage Interface (CSI) Driver. In order to use this feature, the [GCE PD CSI
Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
must be installed on the cluster.
#### GCE CSI migration complete
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
To disable the `gcePersistentDisk` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`.
The Kubernetes project suggests that you use the [Google Compute Engine Persistent Disk CSI](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
third party storage driver instead.
### gitRepo (deprecated) {#gitrepo}
@ -704,8 +592,8 @@ for an example of mounting NFS volumes with PersistentVolumes.
A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
are a way for users to "claim" durable storage (such as an iSCSI volume)
without knowing the details of the particular cloud environment.
See the information about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) for more
details.

View File

@ -66,5 +66,4 @@ The following broad classes of Kubernetes volume plugins are supported on Window
The following in-tree plugins support persistent storage on Windows nodes:
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume)

View File

@ -61,7 +61,7 @@ localization you want to help out with is inside `content/<two-letter-code>`.
### Suggest changes
Create or update your chosen localized page based on the English original. See
[translating content](#translating-content) for more details.
[localize content](#localize-content) for more details.
If you notice a technical inaccuracy or other problem with the upstream
(English) documentation, you should fix the upstream documentation first and

View File

@ -1176,6 +1176,10 @@ protocol specific logic, then returns opaque credentials to use. Almost all cred
use cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication)
to interpret the credential format produced by the client plugin.
{{< note >}}
Earlier versions of `kubectl` included built-in support for authenticating to AKS and GKE, but this is no longer present.
{{< /note >}}
### Example use case
In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials

View File

@ -487,6 +487,26 @@ startupProbe:
value: ""
```
{{< note >}}
When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect
is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful
and a related Event is created:
```none
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29m default-scheduler Successfully assigned default/httpbin-7b8bc9cb85-bjzwn to daocloud
Normal Pulling 29m kubelet Pulling image "docker.io/kennethreitz/httpbin"
Normal Pulled 24m kubelet Successfully pulled image "docker.io/kennethreitz/httpbin" in 5m12.402735213s
Normal Created 24m kubelet Created container httpbin
Normal Started 24m kubelet Started container httpbin
Warning ProbeWarning 4m11s (x1197 over 24m) kubelet Readiness probe warning: Probe terminated redirects
```
If the kubelet receives a redirect where the hostname is different from the request, the outcome of the probe is treated as successful and the kubelet creates an Event to report the redirect failure.
{{< /note >}}
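The following manifest is illustrative only (the Pod name and probe path are hypothetical; `/redirect-to` is an endpoint of the httpbin image from the Events above that responds with a redirect), showing a probe that can trigger this behavior:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpbin-redirect-example   # hypothetical name
spec:
  containers:
  - name: httpbin
    image: docker.io/kennethreitz/httpbin
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        # Redirects to a different host, so the kubelet treats the probe as
        # successful and records a ProbeWarning Event.
        path: "/redirect-to?url=https://example.com/"
        port: 80
      periodSeconds: 10
```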
### TCP probes
For a TCP probe, the kubelet makes the probe connection at the node, not in the Pod, which

View File

@ -35,4 +35,14 @@ If kubectl cluster-info returns the url response but you can't access your clust
```shell
kubectl cluster-info dump
```
### Troubleshooting the 'No Auth Provider Found' error message {#no-auth-provider-found}
In Kubernetes 1.26, kubectl removed the built-in authentication for the following cloud
providers' managed Kubernetes offerings. These providers have released kubectl plugins to provide the cloud-specific authentication. For instructions, refer to the following provider documentation:
* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
(There could also be other reasons to see the same error message, unrelated to that change.)
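For reference, after installing the provider's plugin, the kubeconfig user entry typically delegates authentication to it through an `exec` credential plugin. A rough sketch for GKE (names and values vary by setup):
```yaml
users:
- name: my-gke-cluster   # hypothetical user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
      installHint: >-
        Install gke-gcloud-auth-plugin by following the Google Kubernetes Engine
        documentation linked above.
```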

View File

@ -78,16 +78,11 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| November 2023 | N/A | N/A |
| December 2023 | 2023-12-01 | 2023-12-06 |
| December 2023 | 2023-12-08 | 2023-12-13 |
| January 2024 | 2024-01-12 | 2024-01-17 |
| February 2024 | 2024-02-09 | 2024-02-14 |
| March 2024 | 2024-03-08 | 2024-03-13 |
**Note:** Due to overlap with KubeCon NA 2023 and the resulting lack of
availability of Release Managers, it has been decided to skip patch releases
in November. Instead, we'll have patch releases early in December.
## Detailed Release History for Active Branches
{{< release-branches >}}

View File

@ -64,12 +64,12 @@ Kubernetes - проект з відкритим вихідним кодом. В
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Переглянути відео</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Відвідайте KubeCon + CloudNativeCon в Європі, 18-21 квітня 2023 року</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Відвідайте KubeCon + CloudNativeCon у Північній Америці, 6-9 листопада 2023 року</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Відвідайте KubeCon + CloudNativeCon в Європі, 19-22 березня 2024 року</a>
</div>
<div id="videoPlayer">

View File

@ -3,15 +3,12 @@ title: 调度框架
content_type: concept
weight: 60
---
<!--
---
reviewers:
- ahg-g
title: Scheduling Framework
content_type: concept
weight: 60
---
-->
<!-- overview -->
@ -25,7 +22,6 @@ scheduling "core" lightweight and maintainable. Refer to the [design proposal of
scheduling framework][kep] for more technical information on the design of the
framework.
-->
调度框架是面向 Kubernetes 调度器的一种插件架构,
它为现有的调度器添加了一组新的“插件” API。插件会被编译到调度器之中。
这些 API 允许大多数调度功能以插件的形式实现,同时使调度“核心”保持简单且可维护。
@ -39,7 +35,7 @@ framework.
<!--
# Framework workflow
-->
# 框架工作流程
# 框架工作流程 {#framework-workflow}
<!--
The Scheduling Framework defines a few extension points. Scheduler plugins
@ -58,7 +54,7 @@ cycle** and the **binding cycle**.
<!--
## Scheduling Cycle & Binding Cycle
-->
## 调度周期和绑定周期
## 调度周期和绑定周期 {#scheduling-cycle-and-binding-cycle}
<!--
The scheduling cycle selects a node for the Pod, and the binding cycle applies
@ -82,23 +78,27 @@ the queue and retried.
Pod 将返回队列并重试。
<!--
## Extension points
## Interfaces
-->
## 扩展点
## 接口 {#interfaces}
<!--
The following picture shows the scheduling context of a Pod and the extension
points that the scheduling framework exposes. In this picture "Filter" is
equivalent to "Predicate" and "Scoring" is equivalent to "Priority function".
The following picture shows the scheduling context of a Pod and the interfaces
that the scheduling framework exposes.
-->
下图显示了一个 Pod 的调度上下文以及调度框架公开的扩展点。
在此图片中,“过滤器”等同于“断言”,“评分”相当于“优先级函数”。
下图显示了一个 Pod 的调度上下文以及调度框架公开的接口。
<!--
One plugin may register at multiple extension points to perform more complex or
One plugin may implement multiple interfaces to perform more complex or
stateful tasks.
-->
一个插件可以在多个扩展点处注册,以执行更复杂或有状态的任务。
一个插件可能实现多个接口,以执行更为复杂或有状态的任务。
<!--
Some interfaces match the scheduler extension points which can be configured through
[Scheduler Configuration](/docs/reference/scheduling/config/#extension-points).
-->
某些接口与可以通过[调度器配置](/zh-cn/docs/reference/scheduling/config/#extension-points)来设置的调度器扩展点匹配。
<!--
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="scheduling framework extension points" class="diagram-large">}}
@ -125,7 +125,45 @@ For more details about how internal scheduler queues work, read
只有当所有 PreEnqueue 插件返回 `Success`Pod 才允许进入活动队列。
否则,它将被放置在内部无法调度的 Pod 列表中,并且不会获得 `Unschedulable` 状态。
要了解有关内部调度器队列如何工作的更多详细信息,请阅读 [kube-scheduler 调度队列](https://github.com/kubernetes/community/blob/f03b6d5692bd979f07dd472e7b6836b2dad0fd9b/contributors/devel/sig-scheduling/scheduler_queues.md)。
要了解有关内部调度器队列如何工作的更多详细信息,请阅读
[kube-scheduler 调度队列](https://github.com/kubernetes/community/blob/f03b6d5692bd979f07dd472e7b6836b2dad0fd9b/contributors/devel/sig-scheduling/scheduler_queues.md)。
### EnqueueExtension
<!--
EnqueueExtension is the interface where the plugin can control
whether to retry scheduling of Pods rejected by this plugin, based on changes in the cluster.
Plugins that implement PreEnqueue, PreFilter, Filter, Reserve or Permit should implement this interface.
-->
EnqueueExtension 作为一个接口,插件可以在此接口之上根据集群中的变化来控制是否重新尝试调度被此插件拒绝的 Pod。
实现 PreEnqueue、PreFilter、Filter、Reserve 或 Permit 的插件应实现此接口。
#### QueueingHint
{{< feature-state for_k8s_version="v1.28" state="beta" >}}
<!--
QueueingHint is a callback function for deciding whether a Pod can be requeued to the active queue or backoff queue.
It's executed every time a certain kind of event or change happens in the cluster.
When the QueueingHint finds that the event might make the Pod schedulable,
the Pod is put into the active queue or the backoff queue
so that the scheduler will retry the scheduling of the Pod.
-->
QueueingHint 作为一个回调函数,用于决定是否将 Pod 重新排队到活跃队列或回退队列。
每当集群中发生某种事件或变化时,此函数就会被执行。
当 QueueingHint 发现事件可能使 Pod 可调度时Pod 将被放入活跃队列或回退队列,
以便调度器可以重新尝试调度 Pod。
{{< note >}}
<!--
QueueingHint evaluation during scheduling is a beta-level feature and is enabled by default in 1.28.
You can disable it via the
`SchedulerQueueingHints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
-->
在调度过程中对 QueueingHint 求值是一个 Beta 级别的特性,在 1.28 中默认被启用。
你可以通过 `SchedulerQueueingHints`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来禁用它。
{{< /note >}}
<!--
### QueueSort {#queue-sort}
@ -188,20 +226,20 @@ tries to make the pod schedulable by preempting other Pods.
<!--
### PreScore {#pre-score}
-->
-->
### PreScore {#pre-score}
<!--
These plugins are used to perform "pre-scoring" work, which generates a sharable
state for Score plugins to use. If a PreScore plugin returns an error, the
scheduling cycle is aborted.
-->
-->
这些插件用于执行 “前置评分pre-scoring” 工作,即生成一个可共享状态供 Score 插件使用。
如果 PreScore 插件返回错误,则调度周期将终止。
<!--
### Score {#scoring}
-->
-->
### Score {#scoring}
<!--
@ -213,8 +251,8 @@ scores from all plugins according to the configured plugin weights.
-->
这些插件用于对通过过滤阶段的节点进行排序。调度器将为每个节点调用每个评分插件。
将有一个定义明确的整数范围,代表最小和最大分数。
在[标准化评分](#normalize-scoring)阶段之后,调度器将根据配置的插件权重
合并所有插件的节点分数。
在[标准化评分](#normalize-scoring)阶段之后,
调度器将根据配置的插件权重合并所有插件的节点分数。
<!--
### NormalizeScore {#normalize-scoring}
@ -280,13 +318,13 @@ NormalizeScore extension point.
### Reserve {#reserve}
<!--
A plugin that implements the Reserve extension has two methods, namely `Reserve`
A plugin that implements the Reserve interface has two methods, namely `Reserve`
and `Unreserve`, that back two informational scheduling phases called Reserve
and Unreserve, respectively. Plugins which maintain runtime state (aka "stateful
plugins") should use these phases to be notified by the scheduler when resources
on a node are being reserved and unreserved for a given Pod.
-->
实现了 Reserve 扩展的插件,拥有两个方法,即 `Reserve``Unreserve`
实现了 Reserve 接口的插件,拥有两个方法,即 `Reserve``Unreserve`
他们分别支持两个名为 Reserve 和 Unreserve 的信息处理性质的调度阶段。
维护运行时状态的插件(又称 "有状态插件")应该使用这两个阶段,
以便在节点上的资源被保留和未保留给特定的 Pod 时得到调度器的通知。
@ -360,7 +398,7 @@ _Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟
If any Permit plugin denies a Pod, it is returned to the scheduling queue.
This will trigger the Unreserve phase in [Reserve plugins](#reserve).
-->
1. **拒绝** \
2. **拒绝** \
如果任何 Permit 插件拒绝 Pod则该 Pod 将被返回到调度队列。
这将触发 [Reserve 插件](#reserve)中的 Unreserve 阶段。
@ -372,7 +410,7 @@ _Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟
and the Pod is returned to the scheduling queue, triggering the
Unreserve phase in [Reserve plugins](#reserve).
-->
1. **等待**(带有超时) \
3. **等待**(带有超时)\
如果一个 Permit 插件返回 “等待” 结果,则 Pod 将保持在一个内部的 “等待中”
的 Pod 列表,同时该 Pod 的绑定周期启动时即直接阻塞直到得到批准。
如果超时发生,**等待** 变成 **拒绝**,并且 Pod
@ -384,7 +422,7 @@ While any plugin can access the list of "waiting" Pods and approve them
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
-->
-->
尽管任何插件可以访问 “等待中” 状态的 Pod 列表并批准它们
(查看 [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle))。
我们期望只有允许插件可以批准处于 “等待中” 状态的预留 Pod 的绑定。
@ -402,15 +440,14 @@ example, a pre-bind plugin may provision a network volume and mount it on the
target node before allowing the Pod to run there.
-->
这些插件用于执行 Pod 绑定前所需的所有工作。
例如,一个 PreBind 插件可能需要制备网络卷并且在允许 Pod 运行在该节点之前
将其挂载到目标节点上。
例如,一个 PreBind 插件可能需要制备网络卷并且在允许 Pod
运行在该节点之前将其挂载到目标节点上。
<!--
If any PreBind plugin returns an error, the Pod is [rejected](#reserve) and
returned to the scheduling queue.
-->
如果任何 PreBind 插件返回错误,则 Pod 将被 [拒绝](#reserve) 并且
退回到调度队列中。
如果任何 PreBind 插件返回错误,则 Pod 将被[拒绝](#reserve)并且退回到调度队列中。
<!--
### Bind
@ -434,11 +471,11 @@ Bind 插件用于将 Pod 绑定到节点上。直到所有的 PreBind 插件都
### PostBind {#post-bind}
<!--
This is an informational extension point. Post-bind plugins are called after a
This is an informational interface. Post-bind plugins are called after a
Pod is successfully bound. This is the end of a binding cycle, and can be used
to clean up associated resources.
-->
这是个信息性的扩展点
这是个信息性的接口
PostBind 插件在 Pod 成功绑定后被调用。这是绑定周期的结尾,可用于清理相关的资源。
<!--
@ -464,7 +501,7 @@ Plugins that use this extension point usually should also use
<!--
## Plugin API
-->
## 插件 API
## 插件 API {#plugin-api}
<!--
There are two steps to the plugin API. First, plugins must register and get
@ -495,24 +532,24 @@ type PreFilterPlugin interface {
<!--
## Plugin configuration
-->
## 插件配置
## 插件配置 {#plugin-configuration}
<!--
You can enable or disable plugins in the scheduler configuration. If you are using
Kubernetes v1.18 or later, most scheduling
[plugins](/docs/reference/scheduling/config/#scheduling-plugins) are in use and
enabled by default.
-->
-->
你可以在调度器配置中启用或禁用插件。
如果你在使用 Kubernetes v1.18 或更高版本,大部分调度
[插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)
如果你在使用 Kubernetes v1.18 或更高版本,
大部分调度[插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)
都在使用中且默认启用。
<!--
In addition to default plugins, you can also implement your own scheduling
plugins and get them configured along with default plugins. You can visit
[scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins) for more details.
-->
-->
除了默认的插件,你还可以实现自己的调度插件并且将它们与默认插件一起配置。
你可以访问 [scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins)
了解更多信息。
@ -521,7 +558,7 @@ plugins and get them configured along with default plugins. You can visit
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as
a scheduler profile and then define multiple profiles to fit various kinds of workload.
Learn more at [multiple profiles](/docs/reference/scheduling/config/#multiple-profiles).
-->
如果你正在使用 Kubernetes v1.18 或更高版本,你可以将一组插件设置为
一个调度器配置文件,然后定义不同的配置文件来满足各类工作负载。
-->
如果你正在使用 Kubernetes v1.18 或更高版本,你可以将一组插件设置为一个调度器配置文件,
然后定义不同的配置文件来满足各类工作负载。
了解更多关于[多配置文件](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)。

View File

@ -636,8 +636,8 @@ Some Kubernetes resources define an additional runtime cost budget that bounds
the execution of multiple expressions. If the sum total of the cost of
expressions exceed the budget, execution of the expressions will be halted, and
an error will result. For example the validation of a custom resource has a
_per-validation_ runtime cost budget for all [Validation
Rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
_per-validation_ runtime cost budget for all
[Validation Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
evaluated to validate the custom resource.
-->
一些 Kubernetes 资源定义了额外的运行时成本预算,用于限制多个表达式的执行。

View File

@ -66,9 +66,9 @@ The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and Prior
### v1.27
<!--
The **v1.27** release will stop serving the following deprecated API versions:
The **v1.27** release stopped serving the following deprecated API versions:
-->
**v1.27** 发行版本中将去除以下已弃用的 API 版本:
**v1.27** 发行版本停止支持以下已弃用的 API 版本:
#### CSIStorageCapacity {#csistoragecapacity-v127}

View File

@ -124,9 +124,13 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
<!--
### Preparing the hosts
#### Component installation
-->
### 主机准备 {#preparing-the-hosts}
#### 安装组件 {#component-installation}
<!--
Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
@ -152,6 +156,110 @@ After you initialize your control-plane, the kubelet runs normally.
初始化控制平面后kubelet 将正常运行。
{{< /note >}}
<!--
#### Network setup
kubeadm similarly to other Kubernetes components tries to find a usable IP on
the network interface associated with the default gateway on a host. Such
an IP is then used for the advertising and/or listening performed by a component.
-->
#### 网络设置 {#network-setup}
kubeadm 与其他 Kubernetes 组件类似,会尝试在与主机默认网关关联的网络接口上找到可用的 IP 地址。
这个 IP 地址随后用于由某组件执行的公告和/或监听。
<!--
To find out what this IP is on a Linux host you can use:
```shell
ip route show # Look for a line starting with "default via"
```
-->
要在 Linux 主机上获得此 IP 地址,你可以使用以下命令:
```shell
ip route show # 查找以 "default via" 开头的行
```
<!--
Kubernetes components do not accept custom network interface as an option,
therefore a custom IP address must be passed as a flag to all components instances
that need such a custom configuration.
To configure the API server advertise address for control plane nodes created with both
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3)
as `InitConfiguration.localAPIEndpoint` and `JoinConfiguration.controlPlane.localAPIEndpoint`.
-->
Kubernetes 组件不接受自定义网络接口作为选项,因此必须将自定义 IP
地址作为标志传递给所有需要此自定义配置的组件实例。
要为使用 `init``join` 创建的控制平面节点配置 API 服务器的公告地址,
你可以使用 `--apiserver-advertise-address` 标志。
最好在 [kubeadm API](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3)中使用
`InitConfiguration.localAPIEndpoint``JoinConfiguration.controlPlane.localAPIEndpoint`
来设置此选项。
<!--
For kubelets on all nodes, the `--node-ip` option can be passed in
`.nodeRegistration.kubeletExtraArgs` inside a kubeadm configuration file
(`InitConfiguration` or `JoinConfiguration`).
For dual-stack see
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support).
-->
对于所有节点上的 kubelet`--node-ip` 选项可以在 kubeadm 配置文件
`InitConfiguration` 或 `JoinConfiguration`)的 `.nodeRegistration.kubeletExtraArgs`
中设置。
有关双协议栈细节参见[使用 kubeadm 支持双协议栈](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support)。
{{< note >}}
<!--
IP addresses become part of certificates SAN fields. Changing these IP addresses would require
signing new certificates and restarting the affected components, so that the change in
certificate files is reflected. See
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
for more details on this topic.
-->
IP 地址成为证书 SAN 字段的一部分。更改这些 IP 地址将需要签署新的证书并重启受影响的组件,
以便反映证书文件中的变化。有关此主题的更多细节参见
[手动续期证书](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)。
{{</ note >}}
{{< warning >}}
<!--
The Kubernetes project recommends against this approach (configuring all component instances
with custom IP addresses). Instead, the Kubernetes maintainers recommend to setup the host network,
so that the default gateway IP is the one that Kubernetes components auto-detect and use.
On Linux nodes, you can use commands such as `ip route` to configure networking; your operating
system might also provide higher level network management tools. If your node's default gateway
is a public IP address, you should configure packet filtering or other security measures that
protect the nodes and your cluster.
-->
Kubernetes 项目不推荐此方法(使用自定义 IP 地址配置所有组件实例)。
Kubernetes 维护者建议设置主机网络,使默认网关 IP 成为 Kubernetes 组件自动检测和使用的 IP。
对于 Linux 节点,你可以使用诸如 `ip route` 的命令来配置网络;
你的操作系统可能还提供更高级的网络管理工具。
如果节点的默认网关是公共 IP 地址,你应配置数据包过滤或其他保护节点和集群的安全措施。
{{< /warning >}}
{{< note >}}
<!--
If the host does not have a default gateway, it is recommended to setup one. Otherwise,
without passing a custom IP address to a Kubernetes component, the component
will exit with an error. If two or more default gateways are present on the host,
a Kubernetes component will try to use the first one it encounters that has a suitable
global unicast IP address. While making this choice, the exact ordering of gateways
might vary between different operating systems and kernel versions.
-->
如果主机没有默认网关,则建议设置一个默认网关。
否则,在不传递自定义 IP 地址给 Kubernetes 组件的情况下,此组件将退出并报错。
如果主机上存在两个或多个默认网关,则 Kubernetes
组件将尝试使用所遇到的第一个具有合适全局单播 IP 地址的网关。
在做出此选择时,网关的确切顺序可能因不同的操作系统和内核版本而有所差异。
{{< /note >}}
<!--
### Preparing the required container images
-->
@ -209,7 +317,7 @@ a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (推荐)如果计划将单个控制平面 kubeadm 集群升级成高可用,
你应该指定 `--control-plane-endpoint` 为所有控制平面节点设置共享端点。
端点可以是负载均衡器的 DNS 名称或 IP 地址。
1. 选择一个 Pod 网络插件,并验证是否需要为 `kubeadm init` 传递参数。
2. 选择一个 Pod 网络插件,并验证是否需要为 `kubeadm init` 传递参数。
根据你选择的第三方网络插件,你可能需要设置 `--pod-network-cidr` 的值。
请参阅[安装 Pod 网络附加组件](#pod-network)。
@ -218,19 +326,10 @@ a provider-specific value. See [Installing a Pod network add-on](#pod-network).
known endpoints. To use different container runtime or if there are more than one installed
on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
[Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
with the default gateway to set the advertise address for this particular control-plane node's API server.
To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
must specify an IPv6 address, for example `--apiserver-advertise-address=2001:db8::101`
-->
1. (可选)`kubeadm` 试图通过使用已知的端点列表来检测容器运行时。
3. (可选)`kubeadm` 试图通过使用已知的端点列表来检测容器运行时。
使用不同的容器运行时或在预配置的节点上安装了多个容器运行时,请为 `kubeadm init` 指定 `--cri-socket` 参数。
请参阅[安装运行时](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime)。
1. (可选)除非另有说明,否则 `kubeadm` 使用与默认网关关联的网络接口来设置此控制平面节点 API server 的广播地址。
要使用其他网络接口,请为 `kubeadm init` 设置 `--apiserver-advertise-address=<ip-address>` 参数。
要部署使用 IPv6 地址的 Kubernetes 集群,
必须指定一个 IPv6 地址,例如 `--apiserver-advertise-address=2001:db8::101`
<!--
To initialize the control-plane node run:

View File

@ -3,7 +3,6 @@ title: 访问集群
weight: 20
content_type: concept
---
<!--
title: Accessing Clusters
weight: 20
@ -28,7 +27,7 @@ When accessing the Kubernetes API for the first time, we suggest using the
Kubernetes CLI, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set-up when you work through
to access it. Typically, this is automatically set-up when you work through
a [Getting started guide](/docs/setup/),
or someone else set up the cluster and provided you with credentials and a location.
@ -38,9 +37,9 @@ Check the location and credentials that kubectl knows about with this command:
当你第一次访问 Kubernetes API 的时候,我们建议你使用 Kubernetes CLI 工具 `kubectl`
访问集群时,你需要知道集群的地址并且拥有访问的凭证。通常,这些在你通过
[启动安装](/zh-cn/docs/setup/)安装集群时都是自动安装好的,或者其他人安装时
也应该提供了凭证和集群地址。
访问集群时,你需要知道集群的地址并且拥有访问的凭证。通常,
这些在你通过[启动安装](/zh-cn/docs/setup/)安装集群时都是自动安装好的,
或者其他人安装时也应该提供了凭证和集群地址。
通过以下命令检查 kubectl 是否知道集群地址及凭证:
@ -63,22 +62,22 @@ Kubectl handles locating and authenticating to the apiserver.
If you want to directly access the REST API with an http client like
curl or wget, or a browser, there are several ways to locate and authenticate:
- Run kubectl in proxy mode.
- Recommended approach.
- Uses stored apiserver location.
- Verifies identity of apiserver using self-signed cert. No MITM possible.
- Authenticates to apiserver.
- In future, may do intelligent client-side load-balancing and failover.
- Provide the location and credentials directly to the http client.
- Alternate approach.
- Works with some types of client code that are confused by using a proxy.
- Need to import a root cert into your browser to protect against MITM.
- Run kubectl in proxy mode.
- Recommended approach.
- Uses stored apiserver location.
- Verifies identity of apiserver using self-signed cert. No MITM possible.
- Authenticates to apiserver.
- In future, may do intelligent client-side load-balancing and failover.
- Provide the location and credentials directly to the http client.
- Alternate approach.
- Works with some types of client code that are confused by using a proxy.
- Need to import a root cert into your browser to protect against MITM.
-->
## 直接访问 REST API {#directly-accessing-the-rest-api}
Kubectl 处理 apiserver 的定位和身份验证。
如果要使用 curl 或 wget 等 http 客户端或浏览器直接访问 REST API可以通过
多种方式查找和验证:
如果要使用 curl 或 wget 等 http 客户端或浏览器直接访问 REST API
可以通过多种方式查找和验证:
- 以代理模式运行 kubectl。
- 推荐此方式。
@ -86,7 +85,7 @@ Kubectl 处理 apiserver 的定位和身份验证。
- 使用自签名的证书来验证 apiserver 的身份。杜绝 MITM 攻击。
- 对 apiserver 进行身份验证。
- 未来可能会实现智能化的客户端负载均衡和故障恢复。
- 直接向 http 客户端提供位置和凭
- 直接向 http 客户端提供位置和凭
- 可选的方案。
- 适用于代理可能引起混淆的某些客户端类型。
- 需要引入根证书到你的浏览器以防止 MITM 攻击。
@ -94,7 +93,7 @@ Kubectl 处理 apiserver 的定位和身份验证。
<!--
### Using kubectl proxy
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
locating the apiserver and authenticating.
Run it like this:
-->
@ -149,9 +148,7 @@ The output is similar to this:
Use `kubectl apply` and `kubectl describe secret...` to create a token for the default service account with grep/cut:
First, create the Secret, requesting a token for the default ServiceAccount:
-->
### 不使用 kubectl proxy {#without-kubectl-proxy}
使用 `kubectl apply``kubectl describe secret ...` 及 grep 和剪切操作来为 default 服务帐户创建令牌,如下所示:
@ -245,16 +242,16 @@ The output is similar to this:
```
<!--
The above examples use the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
The above examples use the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
`~/.kube` directory). Since cluster certificates are typically self-signed, it
may take special configuration to get your http client to use root
certificate.
On some clusters, the apiserver does not require authentication; it may serve
on localhost, or be protected by a firewall. There is not a standard
for this. [Controlling Access to the API](/docs/concepts/security/controlling-access)
on localhost, or be protected by a firewall. There is not a standard
for this. [Controlling Access to the API](/docs/concepts/security/controlling-access)
describes how a cluster admin can configure this.
-->
上面的例子使用了 `--insecure` 参数,这使得它很容易受到 MITM 攻击。
@ -275,11 +272,18 @@ client libraries.
### Go client
* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-<kubernetes-version-number>`, see [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md#for-the-casual-user) for detailed installation instructions. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go#compatibility-matrix) to see which versions are supported.
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/kubernetes"` is correct.
* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-<kubernetes-version-number>`,
see [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md#for-the-casual-user)
for detailed installation instructions. See
[https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go#compatibility-matrix)
to see which versions are supported.
* Write an application atop of the client-go clients. Note that client-go defines its own API objects,
so if needed, please import API definitions from client-go rather than from the main repository,
e.g., `import "k8s.io/client-go/kubernetes"` is correct.
The Go client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
as the kubectl CLI does to locate and authenticate to the apiserver. See this
[example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
-->
@ -307,10 +311,13 @@ The Go client can, like the kubectl CLI, use the same
<!--
### Python client
To use [Python client](https://github.com/kubernetes-client/python), run the following command:
`pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python)
for more installation options.
The Python client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this
[example](https://github.com/kubernetes-client/python/tree/master/examples).
### Other languages
@ -359,36 +366,34 @@ The previous section describes how to connect to the Kubernetes API server.
For information about connecting to other services running on a Kubernetes cluster, see
[Access Cluster Services](/docs/tasks/access-application-cluster/access-cluster-services/).
-->
## Accessing services running on the cluster {#accessing-services-running-on-the-cluster}
The previous section describes how to connect to the Kubernetes API server.
For information about connecting to other services running on a Kubernetes cluster,
see [Access Cluster Services](/zh-cn/docs/tasks/access-application-cluster/access-cluster-services/).
<!--
## Requesting redirects
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
-->
## Requesting redirects {#requesting-redirects}
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
<!--
## So Many Proxies
## So many proxies
There are several different proxies you may encounter when using Kubernetes:
1. The [kubectl proxy](#directly-accessing-the-rest-api):
- runs on a user's desktop or in a pod
- proxies from a localhost address to the Kubernetes apiserver
- client to proxy uses HTTP
- proxy to apiserver uses HTTPS
- locates apiserver
- adds authentication headers
-->
## So many proxies {#so-many-proxies}
@ -404,15 +409,15 @@ There are several different proxies you may encounter when using Kubernetes:
- adds authentication headers
<!--
1. The [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services):
- is a bastion built into the apiserver
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
- runs in the apiserver processes
- client to proxy uses HTTPS (or http if apiserver so configured)
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
- can be used to reach a Node, Pod, or Service
- does load balancing when used to reach a Service
-->
2. The [apiserver proxy](/zh-cn/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services):
@ -425,13 +430,13 @@ There are several different proxies you may encounter when using Kubernetes:
- does load balancing when used to reach a Service (see the sketch after this list)
<!--
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
- runs on each node
- proxies UDP and TCP
- does not understand HTTP
- provides load balancing
- is only used to reach services
-->
3. [kube proxy](/zh-cn/docs/concepts/services-networking/service/#ips-and-vips)
@ -442,11 +447,11 @@ There are several different proxies you may encounter when using Kubernetes:
- is only used to reach services
<!--
1. A Proxy/Load-balancer in front of apiserver(s):
- existence and implementation varies from cluster to cluster (e.g. nginx)
- sits between all clients and one or more apiservers
- acts as load balancer if there are several apiservers.
-->
4. A Proxy/Load-balancer in front of the apiserver(s):
@ -455,14 +460,14 @@ There are several different proxies you may encounter when using Kubernetes:
- acts as a load balancer if there are several apiservers
<!--
1. Cloud Load Balancers on external services:
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
- are created automatically when the Kubernetes service has type `LoadBalancer`
- use UDP/TCP only
- implementation varies by cloud provider.
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are set up correctly.
-->
5. Cloud load balancers on external services:
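Tying back to the apiserver proxy item above, a rough sketch of reaching an in-cluster Service through that bastion could look like the following; the namespace, Service name, and port name are illustrative assumptions, and `/tmp/ca.crt`, `$TOKEN`, and `$APISERVER` come from the earlier sketches:

```shell
# Reach a ClusterIP Service from outside the cluster via the apiserver proxy sub-path.
curl --cacert /tmp/ca.crt --header "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/default/services/my-service:http/proxy/"
```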

View File

@ -166,7 +166,7 @@ For example, this is how to start a simple web server as a static Pod:
`--pod-manifest-path=/etc/kubernetes/manifests/` argument.
On Fedora, edit `/etc/kubernetes/kubelet` to include this line:
-->
3. Configure the kubelet on that node to run with the `--pod-manifest-path=/etc/kubelet.d/` argument.
3. Configure the kubelet on that node to run with the `--pod-manifest-path=/etc/kubernetes/manifests/` argument.
On Fedora, edit `/etc/kubernetes/kubelet` to include this line:
```
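# A sketch of the kind of line the step above refers to; the variable name and any
# flags other than --pod-manifest-path are assumptions, not taken from this page.
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests/"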

View File

@ -434,25 +434,6 @@ following:
-->
7. Create a Pod in the default namespace:
```
cat <<EOF > /tmp/pss/nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
EOF
```
<!--
1. Create the Pod in the cluster:
-->
8. Create the Pod in the cluster:
```shell
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
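As a hedged follow-up (the Pod name `nginx` mirrors the manifest removed above and is assumed to match the linked example; the `default` namespace comes from the step description), you can confirm that the Pod was admitted:

```shell
# Verify the Pod was created, which means it passed the namespace's Pod Security checks.
kubectl get pod nginx --namespace default
```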

View File

@ -62,7 +62,7 @@ body.cid-community .community-section:first-child {
body.cid-community #navigation-items {
padding: 0.25em;
width: 100vw;
width: 100%;
max-width: initial;
margin-top: 2.5em;
@ -117,7 +117,7 @@ body.cid-community .community-section#introduction > p {
body.cid-community #gallery {
display: flex;
max-width: 100vw;
max-width: 100%;
gap: 0.75rem;
justify-content: center;
margin-left: auto;
@ -140,7 +140,7 @@ body.cid-community #gallery img.community-gallery-mobile {
body.cid-community .community-section#events {
width: 100vw;
width: 100%;
max-width: initial;
margin-bottom: 0;
@ -154,7 +154,7 @@ body.cid-community .community-section#events {
}
body.cid-community .community-section#values {
width: 100vw;
width: 100%;
max-width: initial;
background-image: url('/images/community/event-bg.jpg');
color: #fff;
@ -167,7 +167,7 @@ body.cid-community .community-section#values {
}
body.cid-community .community-section#meetups {
width: 100vw;
width: 100%;
max-width: initial;
margin-top: 0;
@ -176,8 +176,6 @@ body.cid-community .community-section#meetups {
background-repeat: no-repeat, repeat;
background-size: auto 100%, cover;
color: #fff;
width: 100vw;
/* fallback in case calc() fails */
padding: 5vw;
padding-bottom: 1em;
@ -231,7 +229,7 @@ body.cid-community .fullbutton {
}
body.cid-community #videos {
width: 100vw;
width: 100%;
max-width: initial;
padding: 0.5em 5vw 5% 5vw; /* fallback in case calc() fails */
background-color: #eeeeee;
@ -325,7 +323,7 @@ body.cid-community .resourcebox {
body.cid-community .community-section.community-frame {
width: 100vw;
width: 100%;
}
body.cid-community .community-section.community-frame .twittercol1 {
@ -431,4 +429,11 @@ body.cid-community #cncf-code-of-conduct h2:after {
body.cid-community .community-section#meetups p:last-of-type {
margin-bottom: 6em; /* extra space for background */
}
}
@media only screen and (max-width: 767px) {
body.cid-community .community-section h2:before,
body.cid-community .community-section h2:after {
display: none;
}
}