Merge remote-tracking branch 'upstream/main' into dev-1.23

This commit is contained in:
Jesse Butler 2021-09-22 10:51:02 -04:00
commit b940dce478
45 changed files with 2160 additions and 184 deletions

View File

@ -72,12 +72,12 @@ aliases:
- anthonydahanne
- feloy
sig-docs-hi-owners: # Admins for Hindi content
- avidLearnerInProgress
- daminisatya
- anubha-v-ardhan
- divya-mohan0209
- mittalyashu
sig-docs-hi-reviews: # PR reviews for Hindi content
- avidLearnerInProgress
- daminisatya
- anubha-v-ardhan
- divya-mohan0209
- mittalyashu
sig-docs-id-owners: # Admins for Indonesian content
- ariscahyadi

View File

@ -50,6 +50,8 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [seccomp](/docs/tutorials/clusters/seccomp/)
## Services
* [Using Source IP](/docs/tutorials/services/source-ip/)

View File

@ -190,9 +190,9 @@ kubectl get configmap
No resources found in default namespace.
```
To sum things up, when there's an override owner reference from a child to a parent, deleting the parent deletes the children automatically. This is called `cascade`. The default for cascade is `true`, however, you can use the --cascade=false option for `kubectl delete` to delete an object and orphan its children.
To sum things up, when there's an override owner reference from a child to a parent, deleting the parent deletes the children automatically. This is called `cascade`. The default for cascade is `true`, however, you can use the --cascade=orphan option for `kubectl delete` to delete an object and orphan its children.
In the following example, there is a parent and a child. Notice the owner references are still included. If I delete the parent using --cascade=false, the parent is deleted but the child still exists:
In the following example, there is a parent and a child. Notice the owner references are still included. If I delete the parent using --cascade=orphan, the parent is deleted but the child still exists:
```
kubectl get configmap
@ -200,7 +200,7 @@ NAME DATA AGE
mymap-child 0 13m8s
mymap-parent 0 13m8s
kubectl delete --cascade=false configmap/mymap-parent
kubectl delete --cascade=orphan configmap/mymap-parent
configmap "mymap-parent" deleted
kubectl get configmap

View File

@ -0,0 +1,287 @@
---
layout: blog
title: "Introducing Single Pod Access Mode for PersistentVolumes"
date: 2021-09-13
slug: read-write-once-pod-access-mode-alpha
---
**Author:** Chris Henzie (Google)
Last month's release of Kubernetes v1.22 introduced a new ReadWriteOncePod access mode for [PersistentVolumes](/docs/concepts/storage/persistent-volumes/#persistent-volumes) and [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
With this alpha feature, Kubernetes allows you to restrict volume access to a single pod in the cluster.
## What are access modes and why are they important?
When using storage, there are different ways to model how that storage is consumed.
For example, a storage system like a network file share can have many users all reading and writing data simultaneously.
In other cases maybe everyone is allowed to read data but not write it.
For highly sensitive data, maybe only one user is allowed to read and write data but nobody else.
In the world of Kubernetes, [access modes](/docs/concepts/storage/persistent-volumes/#access-modes) are the way you can define how durable storage is consumed.
These access modes are a part of the spec for PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs).
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: shared-cache
spec:
accessModes:
- ReadWriteMany # Allow many pods to access shared-cache simultaneously.
resources:
requests:
storage: 1Gi
```
Before v1.22, Kubernetes offered three access modes for PVs and PVCs:
- ReadWriteOnce – the volume can be mounted as read-write by a single node
- ReadOnlyMany – the volume can be mounted read-only by many nodes
- ReadWriteMany – the volume can be mounted as read-write by many nodes
These access modes are enforced by Kubernetes components like the `kube-controller-manager` and `kubelet` to ensure only certain pods are allowed to access a given PersistentVolume.
## What is this new access mode and how does it work?
Kubernetes v1.22 introduced a fourth access mode for PVs and PVCs, which you can use for CSI volumes:
- ReadWriteOncePod – the volume can be mounted as read-write by a single pod
If you create a pod with a PVC that uses the ReadWriteOncePod access mode, Kubernetes ensures that pod is the only pod across your whole cluster that can read that PVC or write to it.
If you create another pod that references the same PVC with this access mode, the pod will fail to start because the PVC is already in use by another pod.
For example:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1s default-scheduler 0/1 nodes are available: 1 node has pod using PersistentVolumeClaim with the same name and ReadWriteOncePod access mode.
```
### How is this different than the ReadWriteOnce access mode?
The ReadWriteOnce access mode restricts volume access to a single *node*, which means it is possible for multiple pods on the same node to read from and write to the same volume.
This could potentially be a major problem for some applications, especially if they require at most one writer for data safety guarantees.
With ReadWriteOncePod these issues go away.
Set the access mode on your PVC, and Kubernetes guarantees that only a single pod has access.
## How do I use it?
The ReadWriteOncePod access mode is in alpha for Kubernetes v1.22 and is only supported for CSI volumes.
As a first step you need to enable the ReadWriteOncePod [feature gate](/docs/reference/command-line-tools-reference/feature-gates) for `kube-apiserver`, `kube-scheduler`, and `kubelet`.
You can enable the feature by setting command line arguments:
```
--feature-gates="...,ReadWriteOncePod=true"
```
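If you manage the kubelet through its configuration file, the same gate can also be set there; a minimal sketch, assuming a kubelet configured via `KubeletConfiguration` (note that `kube-apiserver` and `kube-scheduler` still take the command line flag shown above):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Enable the alpha ReadWriteOncePod feature gate on this kubelet.
featureGates:
  ReadWriteOncePod: true
```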
You also need to update the following CSI sidecars to these versions or greater:
- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
- [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
- [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
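In practice this usually means bumping the sidecar image tags in your CSI driver's controller and node manifests; a hedged sketch of such a containers list (the driver container name and image registry are assumptions):
```yaml
# Hypothetical excerpt from a CSI driver controller Deployment.
containers:
  - name: csi-provisioner
    image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
  - name: csi-attacher
    image: k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
  - name: csi-resizer
    image: k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
  - name: my-csi-driver                  # assumed driver container
    image: example.com/my-csi-driver:v1.0.0
```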
### Creating a PersistentVolumeClaim
In order to use the ReadWriteOncePod access mode for your PVs and PVCs, you will need to create a new PVC with the access mode:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: single-writer-only
spec:
accessModes:
- ReadWriteOncePod # Allow only a single pod to access single-writer-only.
resources:
requests:
storage: 1Gi
```
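A pod then references this claim like any other PVC; a minimal sketch (the pod name, image, and mount path are assumptions):
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: single-writer            # assumed name
spec:
  containers:
    - name: writer
      image: busybox             # assumed image
      command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: single-writer-only   # the PVC defined above
```
Any second pod that mounts `single-writer-only` will fail to schedule while this one is running.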
If your storage plugin supports [dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/), new PersistentVolumes will be created with the ReadWriteOncePod access mode applied.
#### Migrating existing PersistentVolumes
If you have existing PersistentVolumes, they can be migrated to use ReadWriteOncePod.
In this example, we already have a "cat-pictures-pvc" PersistentVolumeClaim that is bound to a "cat-pictures-pv" PersistentVolume, and a "cat-pictures-writer" Deployment that uses this PersistentVolumeClaim.
As a first step, you need to edit your PersistentVolume's `spec.persistentVolumeReclaimPolicy` and set it to `Retain`.
This ensures your PersistentVolume will not be deleted when we delete the corresponding PersistentVolumeClaim:
```shell
kubectl patch pv cat-pictures-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
Next you need to stop any workloads that are using the PersistentVolumeClaim bound to the PersistentVolume you want to migrate, and then delete the PersistentVolumeClaim.
Once that is done, you need to clear your PersistentVolume's `spec.claimRef.uid` to ensure PersistentVolumeClaims can bind to it upon recreation:
```shell
kubectl scale --replicas=0 deployment cat-pictures-writer
kubectl delete pvc cat-pictures-pvc
kubectl patch pv cat-pictures-pv -p '{"spec":{"claimRef":{"uid":""}}}'
```
After that you need to replace the PersistentVolume's access modes with ReadWriteOncePod:
```shell
kubectl patch pv cat-pictures-pv -p '{"spec":{"accessModes":["ReadWriteOncePod"]}}'
```
{{< note >}}
The ReadWriteOncePod access mode cannot be combined with other access modes.
Make sure ReadWriteOncePod is the only access mode on the PersistentVolume when updating, otherwise the request will fail.
{{< /note >}}
Next you need to modify your PersistentVolumeClaim to set ReadWriteOncePod as the only access mode.
You should also set your PersistentVolumeClaim's `spec.volumeName` to the name of your PersistentVolume.
Once this is done, you can recreate your PersistentVolumeClaim and start up your workloads:
```shell
# IMPORTANT: Make sure to edit your PVC in cat-pictures-pvc.yaml before applying. You need to:
# - Set ReadWriteOncePod as the only access mode
# - Set spec.volumeName to "cat-pictures-pv"
kubectl apply -f cat-pictures-pvc.yaml
kubectl apply -f cat-pictures-writer-deployment.yaml
```
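For reference, the edited `cat-pictures-pvc.yaml` might then look like this (the requested storage size is an assumption; it should match the capacity of `cat-pictures-pv`):
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cat-pictures-pvc
spec:
  accessModes:
    - ReadWriteOncePod        # the only access mode, as required
  volumeName: cat-pictures-pv
  resources:
    requests:
      storage: 1Gi            # assumed size
```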
Lastly, you may edit your PersistentVolume's `spec.persistentVolumeReclaimPolicy` and set it back to `Delete` if you previously changed it.
```shell
kubectl patch pv cat-pictures-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```
You can read [Configure a Pod to Use a PersistentVolume for Storage](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/) for more details on working with PersistentVolumes and PersistentVolumeClaims.
## What volume plugins support this?
The only volume plugins that support this are CSI drivers.
SIG Storage does not plan to support this for in-tree plugins because they are being deprecated as part of [CSI migration](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#what-is-the-timeline-status).
Support may be considered for beta for users that prefer to use the legacy in-tree volume APIs with CSI migration enabled.
## As a storage vendor, how do I add support for this access mode to my CSI driver?
The ReadWriteOncePod access mode will work out of the box without any required updates to CSI drivers, but [does require updates to CSI sidecars](#update-your-csi-sidecars).
With that being said, if you would like to stay up to date with the latest changes to the CSI specification (v1.5.0+), read on.
Two new access modes were introduced to the CSI specification in order to disambiguate the legacy [`SINGLE_NODE_WRITER`](https://github.com/container-storage-interface/spec/blob/v1.5.0/csi.proto#L418-L420) access mode.
They are [`SINGLE_NODE_SINGLE_WRITER` and `SINGLE_NODE_MULTI_WRITER`](https://github.com/container-storage-interface/spec/blob/v1.5.0/csi.proto#L437-L447).
In order to communicate to sidecars (like the [external-provisioner](https://github.com/kubernetes-csi/external-provisioner)) that your driver understands and accepts these two new CSI access modes, your driver will also need to advertise the `SINGLE_NODE_MULTI_WRITER` capability for the [controller service](https://github.com/container-storage-interface/spec/blob/v1.5.0/csi.proto#L1073-L1081) and [node service](https://github.com/container-storage-interface/spec/blob/v1.5.0/csi.proto#L1515-L1524).
If you'd like to read up on the motivation for these access modes and capability bits, you can also read the [CSI Specification Changes, Volume Capabilities](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md#csi-specification-changes-volume-capabilities) section of KEP-2485 (ReadWriteOncePod PersistentVolume Access Mode).
### Update your CSI driver to use the new interface
As a first step you will need to update your driver's `container-storage-interface` dependency to v1.5.0+, which contains support for these new access modes and capabilities.
### Accept new CSI access modes
If your CSI driver contains logic for validating CSI access modes for requests, it may need updating.
If it currently accepts `SINGLE_NODE_WRITER`, it should be updated to also accept `SINGLE_NODE_SINGLE_WRITER` and `SINGLE_NODE_MULTI_WRITER`.
Using the [GCP PD CSI driver validation logic](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/v1.2.2/pkg/gce-pd-csi-driver/utils.go#L116-L130) as an example, here is how it can be extended:
```diff
diff --git a/pkg/gce-pd-csi-driver/utils.go b/pkg/gce-pd-csi-driver/utils.go
index 281242c..b6c5229 100644
--- a/pkg/gce-pd-csi-driver/utils.go
+++ b/pkg/gce-pd-csi-driver/utils.go
@@ -123,6 +123,8 @@ func validateAccessMode(am *csi.VolumeCapability_AccessMode) error {
case csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY:
case csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY:
case csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER:
+ case csi.VolumeCapability_AccessMode_SINGLE_NODE_SINGLE_WRITER:
+ case csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER:
default:
return fmt.Errorf("%v access mode is not supported for for PD", am.GetMode())
}
```
### Advertise new CSI controller and node service capabilities
Your CSI driver will also need to return the new `SINGLE_NODE_MULTI_WRITER` capability as part of the `ControllerGetCapabilities` and `NodeGetCapabilities` RPCs.
Using the [GCP PD CSI driver capability advertisement logic](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/v1.2.2/pkg/gce-pd-csi-driver/gce-pd-driver.go#L54-L77) as an example, here is how it can be extended:
```diff
diff --git a/pkg/gce-pd-csi-driver/gce-pd-driver.go b/pkg/gce-pd-csi-driver/gce-pd-driver.go
index 45903f3..0d7ea26 100644
--- a/pkg/gce-pd-csi-driver/gce-pd-driver.go
+++ b/pkg/gce-pd-csi-driver/gce-pd-driver.go
@@ -56,6 +56,8 @@ func (gceDriver *GCEDriver) SetupGCEDriver(name, vendorVersion string, extraVolu
csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
+ csi.VolumeCapability_AccessMode_SINGLE_NODE_SINGLE_WRITER,
+ csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
}
gceDriver.AddVolumeCapabilityAccessModes(vcam)
csc := []csi.ControllerServiceCapability_RPC_Type{
@@ -67,12 +69,14 @@ func (gceDriver *GCEDriver) SetupGCEDriver(name, vendorVersion string, extraVolu
csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
csi.ControllerServiceCapability_RPC_LIST_VOLUMES,
csi.ControllerServiceCapability_RPC_LIST_VOLUMES_PUBLISHED_NODES,
+ csi.ControllerServiceCapability_RPC_SINGLE_NODE_MULTI_WRITER,
}
gceDriver.AddControllerServiceCapabilities(csc)
ns := []csi.NodeServiceCapability_RPC_Type{
csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
csi.NodeServiceCapability_RPC_EXPAND_VOLUME,
csi.NodeServiceCapability_RPC_GET_VOLUME_STATS,
+ csi.NodeServiceCapability_RPC_SINGLE_NODE_MULTI_WRITER,
}
gceDriver.AddNodeServiceCapabilities(ns)
```
### Implement `NodePublishVolume` behavior
The CSI spec outlines expected behavior for the `NodePublishVolume` RPC when called more than once for the same volume but with different arguments (like the target path).
Please refer to [the second table in the NodePublishVolume section of the CSI spec](https://github.com/container-storage-interface/spec/blob/v1.5.0/spec.md#nodepublishvolume) for more details on expected behavior when implementing in your driver.
### Update your CSI sidecars
When deploying your CSI drivers, you must update the following CSI sidecars to versions that depend on CSI spec v1.5.0+ and the Kubernetes v1.22 API.
The minimum required versions are:
- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
- [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
- [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
## What's next?
As part of the beta graduation for this feature, SIG Storage plans to update the Kubernetes scheduler to support pod preemption in relation to ReadWriteOncePod storage.
This means that if two pods request a PersistentVolumeClaim with ReadWriteOncePod, the pod with the highest priority will gain access to the PersistentVolumeClaim, and any pod with lower priority will be preempted from the node and unable to access the PersistentVolumeClaim.
## How can I learn more?
Please see [KEP-2485](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md) for more details on the ReadWriteOncePod access mode and motivations for CSI spec changes.
## How do I get involved?
The [Kubernetes #csi Slack channel](https://kubernetes.slack.com/messages/csi) and any of the [standard SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact) are great ways to reach the SIG Storage and CSI teams.
Special thanks to the following people for their insightful reviews and design considerations:
* Abdullah Gharaibeh (ahg-g)
* Aldo Culquicondor (alculquicondor)
* Ben Swartzlander (bswartz)
* Deep Debroy (ddebroy)
* Hemant Kumar (gnufied)
* Humble Devassy Chirammal (humblec)
* James DeFelice (jdef)
* Jan Šafránek (jsafrane)
* Jing Xu (jingxu97)
* Jordan Liggitt (liggitt)
* Michelle Au (msau42)
* Saad Ali (saad-ali)
* Tim Hockin (thockin)
* Xing Yang (xing-yang)
If you're interested in getting involved with the design and development of CSI or any part of the Kubernetes storage system, join the [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
We're rapidly growing and always welcome new contributors.

View File

@ -91,18 +91,6 @@ imply any preferential status.
Project [Antrea](https://github.com/vmware-tanzu/antrea) is an open-source Kubernetes networking solution intended to be Kubernetes native. It leverages Open vSwitch as the networking data plane. Open vSwitch is a high-performance programmable virtual switch that supports both Linux and Windows. Open vSwitch enables Antrea to implement Kubernetes Network Policies in a high-performance and efficient manner.
Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to implement an extensive set of networking and security features and services on top of Open vSwitch.
### AOS from Apstra
[AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
AOS has a rich set of REST API endpoints that enable Kubernetes to quickly change the network policy based on application requirements. Further enhancements will integrate the AOS Graph model used for the network design with the workload provisioning, enabling an end to end management system for both private and public clouds.
AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.
Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/
### AWS VPC CNI for Kubernetes
The [AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters. This CNI plugin offers high throughput and availability, low latency, and minimal network jitter. Additionally, users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters. This includes the ability to use VPC flow logs, VPC routing policies, and security groups for network traffic isolation.
@ -116,15 +104,6 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
### Big Cloud Fabric from Big Switch Networks
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
### Calico
[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with [cloud provider CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.

View File

@ -11,7 +11,7 @@ weight: 20
<!-- overview -->
The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs.
The additional APIs can either be ready-made solutions such as [service-catalog](/docs/concepts/extend-kubernetes/service-catalog/), or APIs that you develop yourself.
The additional APIs can either be ready-made solutions such as a [metrics server](https://github.com/kubernetes-sigs/metrics-server), or APIs that you develop yourself.
The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object.
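Concretely, an extension API server registers itself with the aggregation layer through an APIService object; a minimal sketch, assuming a metrics server running as a Service named `metrics-server` in `kube-system`:
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    name: metrics-server       # assumed Service name
    namespace: kube-system
  insecureSkipTLSVerify: true  # for illustration only; production setups should use caBundle
```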

View File

@ -32,7 +32,7 @@ The application can access the message queue as a service.
Service Catalog uses the [Open service broker API](https://github.com/openservicebrokerapi/servicebroker) to communicate with service brokers, acting as an intermediary for the Kubernetes API Server to negotiate the initial provisioning and retrieve the credentials necessary for the application to use a managed service.
It is implemented as an extension API server and a controller, using etcd for storage. It also uses the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) available in Kubernetes 1.7+ to present its API.
It is implemented using a [CRDs-based](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) architecture.
<br>

View File

@ -444,8 +444,7 @@ variables and DNS.
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It supports both [Docker links
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
[makeLinkVariables](https://releases.k8s.io/{{< param "fullversion" >}}/pkg/kubelet/envvars/envvars.go#L49))
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.

View File

@ -185,7 +185,7 @@ You can modify the Pods that a DaemonSet creates. However, Pods do not allow al
fields to be updated. Also, the DaemonSet controller will use the original template the next
time a node (even with the same name) is created.
You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the Pods
You can delete a DaemonSet. If you specify `--cascade=orphan` with `kubectl`, then the Pods
will be left on the nodes. If you subsequently create a new DaemonSet with the same selector,
the new DaemonSet adopts the existing Pods. If any Pods need replacing the DaemonSet replaces
them according to its `updateStrategy`.

View File

@ -523,7 +523,7 @@ to keep running, but you want the rest of the Pods it creates
to use a different pod template and for the Job to have a new name.
You cannot update the Job because these fields are not updatable.
Therefore, you delete Job `old` but _leave its pods
running_, using `kubectl delete jobs/old --cascade=false`.
running_, using `kubectl delete jobs/old --cascade=orphan`.
Before deleting it, you make a note of what selector it uses:
```shell

View File

@ -192,7 +192,7 @@ When using the REST API or Go client library, you need to do the steps explicitl
You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
Using kubectl, specify the `--cascade=orphan` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
When using the REST API or Go client library, you can delete the ReplicationController object.

View File

@ -173,7 +173,7 @@ Cluster Domain will be set to `cluster.local` unless
### Stable Storage
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Podreceives a single PersistentVolume with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that, the PersistentVolumes associated with the
@ -301,4 +301,3 @@ Please note that this field only works if you enable the `StatefulSetMinReadySec
* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set/).
* Follow an example of [deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/).
* Follow an example of [running a replicated stateful application](/docs/tasks/run-application/run-replicated-stateful-application/).

View File

@ -230,20 +230,9 @@ If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" sta
To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`.
### Conventions
### Interaction With Node Affinity and Node Selectors
There are some implicit conventions worth noting here:
- Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
- Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
- Be aware of what will happen if the incomingPod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels.
- If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.
The scheduler will skip the non-matching nodes from the skew calculations if the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined.
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
@ -283,6 +272,21 @@ There are some implicit conventions worth noting here:
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them.
### Other Noticeable Semantics
There are some implicit conventions worth noting here:
- Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
2. the incoming Pod has no chance of being scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend that the workload's `topologySpreadConstraints[*].labelSelector` match its own labels.
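To illustrate that last convention, here is a sketch of a Pod whose constraint's `labelSelector` matches its own labels (the container image is an assumption):
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar             # matches the Pod's own labels, so the Pod counts toward the skew
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.5   # assumed image
```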
### Cluster-level default constraints
It is possible to set default topology spread constraints for a cluster. Default

View File

@ -83,6 +83,33 @@ You can also include a full definition:
which renders as:
{{< glossary_definition term_id="cluster" length="all" >}}
## Links to API Reference
You can link to a page of the Kubernetes API reference using the `api-reference` shortcode, for example to the {{< api-reference page="workload-resources/pod-v1" >}} reference:
```
{{</* api-reference page="workload-resources/pod-v1" */>}}
```
The content of the `page` parameter is the suffix of the URL of the API reference page.
You can link to a specific place in a page by specifying an `anchor` parameter, for example to the {{< api-reference page="workload-resources/pod-v1" anchor="PodSpec" >}} reference or the {{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" >}} section of the page:
```
{{</* api-reference page="workload-resources/pod-v1" anchor="PodSpec" */>}}
{{</* api-reference page="workload-resources/pod-v1" anchor="environment-variables" */>}}
```
You can change the text of the link by specifying a `text` parameter, for example by linking to the {{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variables">}} section of the page:
```
{{</* api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variable" */>}}
```
## Table captions
You can make tables more accessible to screen readers by adding a table caption. To add a [caption](https://www.w3schools.com/tags/tag_caption.asp) to a table, enclose the table with a `table` shortcode and specify the caption with the `caption` parameter.

File diff suppressed because one or more lines are too long


View File

@ -64,6 +64,8 @@ client libraries:
* [Scheduler Policies](/docs/reference/scheduling/policies)
* [Scheduler Profiles](/docs/reference/scheduling/config#profiles)
* List of [ports and protocols](/docs/reference/ports-and-protocols/) that
should be open on control plane and worker nodes
## Config APIs
This section hosts the documentation for "unpublished" APIs which are used to

View File

@ -72,7 +72,7 @@ The ServiceAccount admission controller will add the following projected volume
defaultMode: 420 # 0644
sources:
- serviceAccountToken:
expirationSeconds: 3600
expirationSeconds: 3607
path: token
- configMap:
items:

View File

@ -75,7 +75,7 @@ If you need help, run `kubectl help` from the terminal window.
By default, `kubectl` will first determine if it is running within a pod, and thus in a cluster. It starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables and the existence of a service account token file at `/var/run/secrets/kubernetes.io/serviceaccount/token`. If all three are found, in-cluster authentication is assumed.
To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set during in-cluster authentication it will override the default namespace from the from the service account token. Any manifests or tools relying on namespace defaulting will be affected by this.
To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set during in-cluster authentication it will override the default namespace from the service account token. Any manifests or tools relying on namespace defaulting will be affected by this.
**`POD_NAMESPACE` environment variable**

View File

@ -0,0 +1,40 @@
---
title: Ports and Protocols
content_type: reference
weight: 50
---
When running Kubernetes in an environment with strict network boundaries, such
as an on-premises datacenter with physical network firewalls or virtual
networks in a public cloud, it is useful to be aware of the ports and protocols
used by Kubernetes components.
## Control plane
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|-------------------------|---------------------------|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |
Although the etcd ports are included in the control plane section, you can also host your own
etcd cluster externally or on custom ports.
## Worker node(s) {#node}
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|-------------|-----------------------|-------------------------|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
† Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
All default port numbers can be overridden. When custom ports are used, those
ports need to be open instead of the defaults mentioned here.
One common example is the API server port, which is sometimes switched
to 443. Alternatively, the default port is kept as-is and the API server is put
behind a load balancer that listens on 443 and routes requests to the API server
on the default port.
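For instance, with kubeadm the API server port could be overridden at cluster creation time; a hedged sketch (the advertise address is a placeholder):
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.10"   # placeholder address
  bindPort: 443                   # overrides the default 6443
```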

View File

@ -67,31 +67,9 @@ sudo sysctl --system
For more details please see the [Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements) page.
## Check required ports
### Control-plane node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|-------------------------|---------------------------|
| TCP | Inbound | 6443\* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
### Worker node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|-------------|-----------------------|-------------------------|
| TCP | Inbound | 10250 | kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
† Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
Any port numbers marked with * are overridable, so you will need to ensure any
custom ports you provide are also open.
Although etcd ports are included in control-plane nodes, you can also host your own
etcd cluster externally or on custom ports.
These
[required ports](/docs/reference/ports-and-protocols/)
need to be open in order for Kubernetes components to communicate with each other.
The pod network plugin you use (see below) may also require certain ports to be
open. Since this differs with each pod network plugin, please see the

View File

@ -44,6 +44,46 @@ This page shows you how to set up a simple Ingress which routes requests to Serv
1. Verify that the NGINX Ingress controller is running
{{< tabs name="tab_with_md" >}}
{{% tab name="minikube v1.19 or later" %}}
```shell
kubectl get pods -n ingress-nginx
```
{{< note >}}This can take up to a minute.{{< /note >}}
Output:
```
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-g9g49 0/1 Completed 0 11m
ingress-nginx-admission-patch-rqp78 0/1 Completed 1 11m
ingress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m
```
{{% /tab %}}
{{% tab name="minikube v1.18.1 or earlier" %}}
```shell
kubectl get pods -n kube-system
```
{{< note >}}This can take up to a minute.{{< /note >}}
Output:
```
NAME READY STATUS RESTARTS AGE
default-http-backend-59868b7dd6-xb8tq 1/1 Running 0 1m
kube-addon-manager-minikube 1/1 Running 0 3m
kube-dns-6dcb57bcc8-n4xd4 3/3 Running 0 2m
kubernetes-dashboard-5498ccf677-b8p5h 1/1 Running 0 2m
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m
storage-provisioner 1/1 Running 0 2m
```
{{% /tab %}}
{{< /tabs >}}
```shell
kubectl get pods -n ingress-nginx
```
@ -59,6 +99,7 @@ This page shows you how to set up a simple Ingress which routes requests to Serv
ingress-nginx-controller-59b45fb494-lzmw2 1/1 Running 0 3m28s
```
## Deploy a hello, world app
1. Create a Deployment using the following command:

View File

@ -133,10 +133,18 @@ the [etcd administration guide](https://etcd.io/docs/v2.3/admin_guide/#member-mi
## Implementation notes
![ha-master-gce](/images/docs/ha-master-gce.png)
![ha-control-plane](/docs/images/ha-control-plane.svg)
### Overview
The figure above illustrates three control plane nodes and their components in a highly available cluster. The components on the control plane nodes employ the following methods:
- etcd: instances are clustered together using consensus.
- Controllers, scheduler and cluster auto-scaler: only one instance of each will be active in a cluster using a lease mechanism.
- Add-on manager: each works independently to keep add-ons in sync.
In addition, a load balancer operating in front of the API servers routes external and internal traffic to the control plane nodes.
Each of the control plane nodes will run the following components in the following mode:
* etcd instance: all instances will be clustered together using consensus;
@ -215,4 +223,3 @@ server coordination (for example, the `StorageVersionAPI` feature gate).
[Automated HA master deployment - design doc](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/ha_master.md)

View File

@ -9,17 +9,17 @@ weight: 20
<!-- overview -->
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
{{< skew latestVersionAddMinor -1 >}}.x to version {{< skew latestVersion >}}.x, and from version
{{< skew latestVersion >}}.x to {{< skew latestVersion >}}.y (where `y > x`). Skipping MINOR versions
{{< skew currentVersionAddMinor -1 >}}.x to version {{< skew currentVersion >}}.x, and from version
{{< skew currentVersion >}}.x to {{< skew currentVersion >}}.y (where `y > x`). Skipping MINOR versions
when upgrading is unsupported.
To see information about upgrading clusters created using older versions of kubeadm,
please refer to following pages instead:
- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -2 >}} to {{< skew latestVersionAddMinor -1 >}}](https://v{{< skew latestVersionAddMinor -1 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -3 >}} to {{< skew latestVersionAddMinor -2 >}}](https://v{{< skew latestVersionAddMinor -2 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -4 >}} to {{< skew latestVersionAddMinor -3 >}}](https://v{{< skew latestVersionAddMinor -3 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -5 >}} to {{< skew latestVersionAddMinor -4 >}}](https://v{{< skew latestVersionAddMinor -4 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -2 >}} to {{< skew currentVersionAddMinor -1 >}}](https://v{{< skew currentVersionAddMinor -1 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -3 >}} to {{< skew currentVersionAddMinor -2 >}}](https://v{{< skew currentVersionAddMinor -2 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -4 >}} to {{< skew currentVersionAddMinor -3 >}}](https://v{{< skew currentVersionAddMinor -3 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -5 >}} to {{< skew currentVersionAddMinor -4 >}}](https://v{{< skew currentVersionAddMinor -4 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
The upgrade workflow at high level is the following:
@ -45,19 +45,19 @@ The upgrade workflow at high level is the following:
## Determine which version to upgrade to
Find the latest stable {{< skew latestVersion >}} version using the OS package manager:
Find the latest patch release for Kubernetes {{< skew currentVersion >}} using the OS package manager:
{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
apt update
apt-cache madison kubeadm
# find the latest {{< skew latestVersion >}} version in the list
# it should look like {{< skew latestVersion >}}.x-00, where x is the latest patch
# find the latest {{< skew currentVersion >}} version in the list
# it should look like {{< skew currentVersion >}}.x-00, where x is the latest patch
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest {{< skew latestVersion >}} version in the list
# it should look like {{< skew latestVersion >}}.x-0, where x is the latest patch
# find the latest {{< skew currentVersion >}} version in the list
# it should look like {{< skew currentVersion >}}.x-0, where x is the latest patch
{{% /tab %}}
{{< /tabs >}}
@ -74,18 +74,18 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
{{< tabs name="k8s_install_kubeadm_first_cp" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in {{< skew latestVersion >}}.x-00 with the latest patch version
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm={{< skew latestVersion >}}.x-00 && \
apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubeadm
-
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm={{< skew latestVersion >}}.x-00
apt-get install -y --allow-change-held-packages kubeadm={{< skew currentVersion >}}.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in {{< skew latestVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
@ -120,13 +120,13 @@ Failing to do so will cause `kubeadm upgrade apply` to exit with an error and no
```shell
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v{{< skew latestVersion >}}.x
sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x
```
Once the command finishes you should see:
```
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew latestVersion >}}.x". Enjoy!
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew currentVersion >}}.x". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
@ -171,20 +171,20 @@ Also calling `kubeadm upgrade plan` and upgrading the CNI provider plugin is no
{{< tabs name="k8s_install_kubelet" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" >}}
<pre>
# replace x in {{< skew latestVersion >}}.x-00 with the latest patch version
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00 && \
apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubelet kubectl
-
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00
apt-get install -y --allow-change-held-packages kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00
</pre>
{{< /tab >}}
{{< tab name="CentOS, RHEL or Fedora" >}}
<pre>
# replace x in {{< skew latestVersion >}}.x-0 with the latest patch version
yum install -y kubelet-{{< skew latestVersion >}}.x-0 kubectl-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
</pre>
{{< /tab >}}
{{< /tabs >}}
@ -216,18 +216,18 @@ without compromising the minimum required capacity for running your workloads.
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in {{< skew latestVersion >}}.x-00 with the latest patch version
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm={{< skew latestVersion >}}.x-00 && \
apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubeadm
-
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm={{< skew latestVersion >}}.x-00
apt-get install -y --allow-change-held-packages kubeadm={{< skew currentVersion >}}.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in {{< skew latestVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
@ -254,18 +254,18 @@ without compromising the minimum required capacity for running your workloads.
{{< tabs name="k8s_kubelet_and_kubectl" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in {{< skew latestVersion >}}.x-00 with the latest patch version
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00 && \
apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubelet kubectl
-
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00
apt-get install -y --allow-change-held-packages kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in {{< skew latestVersion >}}.x-0 with the latest patch version
yum install -y kubelet-{{< skew latestVersion >}}.x-0 kubectl-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}

View File

@ -34,14 +34,21 @@ If you are just looking for how to run a pod as a non-root user, see [SecurityCo
## Running Kubernetes inside Rootless Docker/Podman
[kind](https://kind.sigs.k8s.io/) supports running Kubernetes inside a Rootless Docker or Rootless Podman.
### kind
[kind](https://kind.sigs.k8s.io/) supports running Kubernetes inside Rootless Docker or Rootless Podman.
See [Running kind with Rootless Docker](https://kind.sigs.k8s.io/docs/user/rootless/).
<!--
[minikube](https://minikube.sigs.k8s.io/docs/) also plans to support Rootless Docker/Podman drivers.
See [minikube issue #10836](https://github.com/kubernetes/minikube/issues/10836) to track the progress.
-->
### minikube
[minikube](https://minikube.sigs.k8s.io/) also supports running Kubernetes inside Rootless Docker.
See the page about the [docker](https://minikube.sigs.k8s.io/docs/drivers/docker/) driver in the Minikube documentation.
Rootless Podman is not supported.
<!-- Supporting rootless podman is discussed in https://github.com/kubernetes/minikube/issues/8719 -->
## Running Rootless Kubernetes directly on a host

View File

@ -302,7 +302,7 @@ For details, read the [documentation for your Kubernetes version](/docs/home/sup
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=false
kubectl delete deployment nginx-deployment --cascade=orphan
```
**Using the Kubernetes API**

View File

@ -9,7 +9,7 @@ min-kubernetes-server-version: v1.22
As of v1.22, Kubernetes provides a built-in [admission controller](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)
to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards).
You can configure this admission controller to set cluster-wide defaults and [exemptions](#exemptions).
You can configure this admission controller to set cluster-wide defaults and [exemptions](/docs/concepts/security/pod-security-admission/#exemptions).
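A hedged sketch of such an admission configuration file for v1.22, with the defaults and exempted namespaces chosen purely for illustration:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1alpha1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "baseline"
        enforce-version: "latest"
        warn: "restricted"
        warn-version: "latest"
        audit: "restricted"
        audit-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: ["kube-system"]   # example exemption
```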
## {{% heading "prerequisites" %}}

View File

@ -88,10 +88,10 @@ GitHub Mentions: [@kubernetes/release-engineering](https://github.com/orgs/kuber
- Adolfo García Veytia ([@puerco](https://github.com/puerco))
- Carlos Panato ([@cpanato](https://github.com/cpanato))
- Daniel Mangum ([@hasheddan](https://github.com/hasheddan))
- Marko Mudrinić ([@xmudrii](https://github.com/xmudrii))
- Sascha Grunert ([@saschagrunert](https://github.com/saschagrunert))
- Stephen Augustus ([@justaugustus](https://github.com/justaugustus))
- Verónica López ([@verolop](https://github.com/verolop))
### Becoming a Release Manager
@ -138,7 +138,6 @@ GitHub Mentions: @kubernetes/release-engineering
- Nabarun Pal ([@palnabarun](https://github.com/palnabarun))
- Seth McCombs ([@sethmccombs](https://github.com/sethmccombs))
- Taylor Dolezal ([@onlydole](https://github.com/onlydole))
- Verónica López ([@verolop](https://github.com/verolop))
- Wilson Husin ([@wilsonehusin](https://github.com/wilsonehusin))
### Becoming a Release Manager Associate
@ -199,7 +198,6 @@ GitHub team: [@kubernetes/sig-release-leads](https://github.com/orgs/kubernetes/
- Adolfo García Veytia ([@puerco](https://github.com/puerco))
- Carlos Panato ([@cpanato](https://github.com/cpanato))
- Daniel Mangum ([@hasheddan](https://github.com/hasheddan))
- Jeremy Rickard ([@jeremyrickard](https://github.com/jeremyrickard))
---

View File

@ -134,7 +134,7 @@ of containerd at `/etc/containerd/config.toml`. Valid `handlers` are
configured in the runtimes section:
```
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
See the containerd configuration for more details:

View File

@ -0,0 +1,152 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Volume Snapshots
content_type: concept
weight: 20
---
<!-- overview -->
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/).
<!-- body -->
## Introduction
Just as the `PersistentVolume` and `PersistentVolumeClaim` API resources are used to provision volumes for users and administrators, `VolumeSnapshotContent` and `VolumeSnapshot` are provided to create volume snapshots for users and administrators.
A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster just as a PersistentVolume is a cluster resource.
A `VolumeSnapshot` is a request for a snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.
`VolumeSnapshotClass` allows you to specify different attributes belonging to a `VolumeSnapshot`. These attributes may differ among snapshots taken from the same volume on the storage system, and therefore cannot be expressed by using the same `StorageClass` of a `PersistentVolumeClaim`.
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to back up databases before performing edit or delete modifications.
Users need to be aware of the following when using this feature:
* The `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` API objects are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, and are not part of the core API.
* `VolumeSnapshot` support is only available for CSI drivers.
* As part of the `VolumeSnapshot` deployment process, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of the `VolumeSnapshotContent` object. The csi-snapshotter sidecar watches `VolumeSnapshotContent` objects and triggers `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint.
* There is also a validating webhook server which provides tightened validation on snapshot objects. This should be installed by the Kubernetes distributions together with the snapshot controller and the CRDs, not the CSI drivers. It should be installed in all Kubernetes clusters that have the snapshot feature enabled.
* CSI drivers may or may not have implemented the volume snapshot functionality. The CSI drivers that have provided support for volume snapshots will likely use csi-snapshotter. See the [CSI Driver documentation](https://kubernetes-csi.github.io/docs/) for details.
* The CRDs and the snapshot controller installations are the responsibility of the Kubernetes distribution.
## Ciclo de vida de un Snapshot de volumen y el contenido de un Snapshot de volumen
`VolumeSnapshotContents` son recursos en el clúster. `VolumeSnapshots` son solicitudes de esos recursos. La interacción entre `VolumeSnapshotContents` y `VolumeSnapshots` sigue este ciclo de vida:
### Aprovisionamiento de Snapshots de volumen
Hay dos formas de aprovisionar los Snapshots: aprovisionados previamente o aprovisionados dinámicamente.
#### Pre-aprovisionado {#static}
Un administrador de clúster crea una serie de `VolumeSnapshotContents`. Llevan los detalles del Snapshot del volumen real en el sistema de almacenamiento que está disponible para que lo utilicen los usuarios del clúster. Existen en la API de Kubernetes y están disponibles para su consumo.
#### Dinámico
En lugar de utilizar un Snapshot preexistente, puede solicitar que se tome un Snapshot dinámicamente de un PersistentVolumeClaim. El [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/) especifica los parámetros específicos del proveedor de almacenamiento para usar al tomar un Snapshot.
### Vinculación
El controlador de Snapshots maneja el enlace de un objeto `VolumeSnapshot` con un objeto `VolumeSnapshotContent` apropiado, tanto en escenarios de aprovisionamiento previo como de aprovisionamiento dinámico. El enlace es un mapeo uno a uno.
En el caso de un enlace aprovisionado previamente, el VolumeSnapshot permanecerá sin enlazar hasta que se cree el objeto VolumeSnapshotContent solicitado.
### Protección del PersistentVolumeClaim usado como fuente del Snapshot
El propósito de esta protección es garantizar que los objetos de la API
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
en uso, no se eliminen del sistema mientras se toma un Snapshot (ya que esto puede resultar en la pérdida de datos).
Mientras se toma un Snapshot de un PersistentVolumeClaim, ese PersistentVolumeClaim está en uso. Si elimina un objeto de la API PersistentVolumeClaim en uso activo como fuente de Snapshot, el objeto PersistentVolumeClaim no se elimina de inmediato. En cambio, la eliminación del objeto PersistentVolumeClaim se pospone hasta que el Snapshot esté readyToUse o se cancele.
### Borrar
La eliminación se activa al eliminar el objeto `VolumeSnapshot`, y se seguirá la `DeletionPolicy`. Si `DeletionPolicy` es `Delete`, entonces el Snapshot de almacenamiento subyacente se eliminará junto con el objeto `VolumeSnapshotContent`. Si `DeletionPolicy` es `Retain`, tanto el Snapshot subyacente como el `VolumeSnapshotContent` permanecen.
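A modo de esbozo ilustrativo, la `DeletionPolicy` se declara en la `VolumeSnapshotClass`; los valores `csi-hostpath-snapclass` y `hostpath.csi.k8s.io` son los mismos que se usan en los demás ejemplos de este documento:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # controlador CSI que crea los Snapshots
deletionPolicy: Delete        # use Retain para conservar el Snapshot subyacente
```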
## VolumeSnapshots
Cada VolumeSnapshot contiene una especificación y un estado.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: new-snapshot-test
spec:
volumeSnapshotClassName: csi-hostpath-snapclass
source:
persistentVolumeClaimName: pvc-test
```
`persistentVolumeClaimName` es el nombre de la fuente de datos PersistentVolumeClaim para el Snapshot. Este campo es obligatorio para aprovisionar dinámicamente un Snapshot.
Un Snapshot de volumen puede solicitar una clase particular especificando el nombre de un [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
utilizando el atributo `volumeSnapshotClassName`. Si no se establece nada, se usa la clase predeterminada si está disponible.
Para los Snapshots aprovisionados previamente, debe especificar un `volumeSnapshotContentName` como el origen del Snapshot, como se muestra en el siguiente ejemplo. El campo de origen `volumeSnapshotContentName` es obligatorio para los Snapshots aprovisionados previamente.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: test-snapshot
spec:
source:
volumeSnapshotContentName: test-content
```
## Contenido del Snapshot de volumen
Cada VolumeSnapshotContent contiene una especificación y un estado. En el aprovisionamiento dinámico, el controlador común de Snapshots crea objetos `VolumeSnapshotContent`. Aquí hay un ejemplo:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455
spec:
deletionPolicy: Delete
driver: hostpath.csi.k8s.io
source:
volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002
volumeSnapshotClassName: csi-hostpath-snapclass
volumeSnapshotRef:
name: new-snapshot-test
namespace: default
uid: 72d9a349-aacd-42d2-a240-d775650d2455
```
`volumeHandle` es el identificador único del volumen creado en el backend de almacenamiento y devuelto por el controlador CSI durante la creación del volumen. Este campo es obligatorio para aprovisionar dinámicamente un Snapshot. Especifica el origen del volumen del Snapshot.
Para los Snapshots aprovisionados previamente, usted (como administrador del clúster) es responsable de crear el objeto `VolumeSnapshotContent` de la siguiente manera.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
name: new-snapshot-content-test
spec:
deletionPolicy: Delete
driver: hostpath.csi.k8s.io
source:
snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
volumeSnapshotRef:
name: new-snapshot-test
namespace: default
```
`snapshotHandle` es el identificador único del Snapshot de volumen creado en el backend de almacenamiento. Este campo es obligatorio para los Snapshots aprovisionados previamente. Especifica el ID del Snapshot CSI en el sistema de almacenamiento que representa el `VolumeSnapshotContent`.
## Aprovisionamiento de Volúmenes a partir de Snapshots
Puede aprovisionar un nuevo volumen, rellenado previamente con datos de un Snapshot, mediante el campo *dataSource* en el objeto `PersistentVolumeClaim`.
Para obtener más detalles, consulte
[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
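Como esbozo ilustrativo (el nombre del PVC y la `storageClassName` son hipotéticos; el Snapshot es el de los ejemplos anteriores), la solicitud podría tener esta forma:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restaurado                 # nombre hipotético
spec:
  storageClassName: csi-hostpath-sc    # clase de almacenamiento hipotética
  dataSource:
    name: new-snapshot-test            # VolumeSnapshot de origen
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```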

File diff suppressed because it is too large Load Diff

View File

@ -12,23 +12,14 @@ Al eliminar un DaemonSet se limpian todos los Pods que han sido creados.
Algunos casos de uso típicos de un DaemonSet son:
- ejecutar un proceso de almacenamiento en el clúster, como `glusterd`, `ceph`, en cada nodo.
- ejecutar un proceso de recolección de logs en cada nodo, como `fluentd` o `logstash`.
- ejecutar un proceso de monitorización de nodos en cada nodo, como [Prometheus Node Exporter](
https://github.com/prometheus/node_exporter), [Sysdig Agent](https://sysdigdocs.atlassian.net/wiki/spaces/Platform), `collectd`,
[Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/),
[AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes),
[Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/),
[New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration),
Ganglia `gmond` o un agente de Instana.
- Ejecutar un proceso de almacenamiento en el clúster.
- Ejecutar un proceso de recolección de logs en cada nodo.
- Ejecutar un proceso de monitorización de nodos en cada nodo.
De forma básica, se debería usar un DaemonSet, cubriendo todos los nodos, por cada tipo de proceso.
En configuraciones más complejas se podrían usar múltiples DaemonSets para un único tipo de proceso,
pero con diferentes parámetros y/o diferentes peticiones de CPU y memoria según el tipo de hardware.
<!-- body -->
## Escribir una especificación de DaemonSet
@ -46,7 +37,7 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
### Campos requeridos
Como con cualquier otra configuración de Kubernetes, un DaemonSet requiere los campos `apiVersion`, `kind`, y `metadata`.
Para información general acerca de cómo trabajar con ficheros de configuración, ver los documentos [desplegar aplicaciones](/docs/user-guide/deploying-applications/),
[configurar contenedores](/docs/tasks/), y [gestión de objetos usando kubectl](/docs/concepts/overview/object-management-kubectl/overview/).
@ -56,10 +47,10 @@ Un DaemonSet también necesita un sección [`.spec`](https://git.k8s.io/communit
El campo `.spec.template` es uno de los campos obligatorios de la sección `.spec`.
El campo `.spec.template` es una [plantilla Pod](/docs/concepts/workloads/pods/pod-overview/#pod-templates). Tiene exactamente el mismo esquema que un [Pod](/docs/concepts/workloads/pods/pod/),
excepto por el hecho de que está anidado y no tiene los campos `apiVersion` o `kind`.
Además de los campos obligatorios de un Pod, la plantilla Pod para un DaemonSet debe especificar
las etiquetas apropiadas (ver [selector de pod](#pod-selector)).
Una plantilla Pod para un DaemonSet debe tener una [`RestartPolicy`](/docs/user-guide/pod-states)
@ -67,7 +58,7 @@ Una plantilla Pod para un DaemonSet debe tener una [`RestartPolicy`](/docs/user-
### Selector de Pod
El campo `.spec.selector` es un selector de pod. Funciona igual que el campo `.spec.selector`
de un [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/).
A partir de Kubernetes 1.8, se debe configurar un selector de pod que coincida con las
@ -86,15 +77,15 @@ Cuando se configura ambos campos, el resultado es conjuntivo (AND).
Si se especifica el campo `.spec.selector`, entonces debe coincidir con el campo `.spec.template.metadata.labels`. Aquellas configuraciones que no coinciden, son rechazadas por la API.
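Como esbozo ilustrativo (nombre e imagen hipotéticos), la coincidencia entre el selector y las etiquetas de la plantilla se vería así:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: agente-ejemplo              # nombre hipotético
spec:
  selector:
    matchLabels:
      name: agente-ejemplo
  template:
    metadata:
      labels:
        name: agente-ejemplo        # debe coincidir con .spec.selector
    spec:
      containers:
      - name: agente
        image: k8s.gcr.io/pause:3.2 # imagen de ejemplo
```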
Además, normalmente no se debería crear ningún Pod con etiquetas que coincidan con el selector, bien sea de forma directa, via otro
DaemonSet, o via otro controlador como un ReplicaSet. De ser así, el controlador del DaemonSet
pensará que dichos Pods fueron en realidad creados por él mismo. Kubernetes, en cualquier caso, no te impide realizar esta
operación. Un caso donde puede que necesites hacer esto es cuando quieres crear manualmente un Pod con un valor diferente en un nodo para pruebas.
### Ejecutar Pods sólo en algunos Nodos
### Ejecutar Pods sólo en Nodos seleccionados
Si se configura un `.spec.template.spec.nodeSelector`, entonces el controlador del DaemonSet
creará los Pods en aquellos nodos que coincidan con el [selector de nodo](/docs/concepts/configuration/assign-pod-node/) indicado.
De forma similar, si se configura una `.spec.template.spec.affinity`,
entonces el controlador del DaemonSet creará los Pods en aquellos nodos que coincidan con la [afinidad de nodo](/docs/concepts/configuration/assign-pod-node/) indicada.
Si no se configura ninguno de los dos, entonces el controlador del DaemonSet creará los Pods en todos los nodos.
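Por ejemplo, un fragmento de la especificación que limita el DaemonSet a nodos con una etiqueta hipotética `disktype=ssd`:
```yaml
# Fragmento de la especificación de un DaemonSet; la etiqueta de nodo es hipotética
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```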
@ -115,13 +106,13 @@ se indica el campo `.spec.nodeName`, y por ello el planificador los ignora). Por
{{< feature-state state="beta" for-kubernetes-version="1.12" >}}
Un DaemonSet garantiza que todos los nodos elegibles ejecuten una copia de un Pod.
Normalmente, es el planificador de Kubernetes quien determina el nodo donde se ejecuta un Pod. Sin embargo,
los pods del DaemonSet son creados y planificados por el mismo controlador del DaemonSet.
Esto introduce los siguientes inconvenientes:
* Comportamiento inconsistente de los Pods: Los Pods normales que están esperando
a ser creados, se encuentran en estado `Pending`, pero los pods del DaemonSet no pasan por el estado `Pending`.
Esto confunde a los usuarios.
* La [prioridad y el comportamiento de apropiación de Pods](/docs/concepts/configuration/pod-priority-preemption/)
se maneja por el planificador por defecto. Cuando se habilita la contaminación, el controlador del DaemonSet
@ -130,7 +121,7 @@ Esto introduce los siguientes inconvenientes:
`ScheduleDaemonSetPods` permite planificar DaemonSets usando el planificador por defecto
en vez del controlador del DaemonSet, añadiendo la condición `NodeAffinity`
a los pods del DaemonSet, en vez de la condición `.spec.nodeName`. El planificador por defecto
se usa entonces para asociar el pod a su servidor destino. Si la afinidad de nodo del
pod del DaemonSet ya existe, se sustituye. El controlador del DaemonSet sólo realiza
estas operaciones cuando crea o modifica los pods del DaemonSet, y no se realizan cambios
al `spec.template` del DaemonSet.
@ -146,7 +137,7 @@ nodeAffinity:
- target-host-name
```
Adicionalmente, se añade de forma automática la tolerancia `node.kubernetes.io/unschedulable:NoSchedule`
a los Pods del DaemonSet. Así, el planificador por defecto ignora los nodos
`unschedulable` cuando planifica los Pods del DaemonSet.
@ -158,23 +149,23 @@ A pesar de que los Pods de proceso respetan las
la siguientes tolerancias son añadidas a los Pods del DaemonSet de forma automática
según las siguientes características:
| Clave de tolerancia | Efecto | Versión | Descripción |
| ---------------------------------------- | ---------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | Los pods del DaemonSet no son expulsados cuando hay problemas de nodo como una partición de red. |
| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | Los pods del DaemonSet no son expulsados cuando hay problemas de nodo como una partición de red. |
| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | Los pods del DaemonSet no son expulsados cuando hay problemas de nodo como la falta de espacio en disco. |
| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | Los pods del DaemonSet no son expulsados cuando hay problemas de nodo como la falta de memoria. |
| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | Los pods del DaemonSet toleran los atributos unschedulable del planificador por defecto. |
| `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | Los pods del DaemonSet, que usan la red del servidor anfitrión, toleran los atributos network-unavailable del planificador por defecto. |
## Comunicarse con los Pods de los DaemonSets
Algunos patrones posibles para la comunicación con los Pods de un DaemonSet son:
- **Push**: Los Pods del DaemonSet se configuran para enviar actualizaciones a otro servicio,
como una base de datos de estadísticas. No tienen clientes.
- **NodeIP y Known Port**: Los Pods del DaemonSet pueden usar un `hostPort`, de forma que se les puede alcanzar via las IPs del nodo. Los clientes conocen la lista de IPs del nodo de algún modo,
y conocen el puerto acordado.
- **DNS**: Se crea un [servicio headless](/docs/concepts/services-networking/service/#headless-services) con el mismo selector de pod,
y entonces se descubre a los DaemonSets usando los recursos `endpoints` o mediante múltiples registros de tipo A en el DNS.
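Como esbozo del patrón **DNS** (nombre, etiqueta y puerto hipotéticos), el servicio headless correspondiente podría ser:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: agente-ejemplo       # nombre hipotético
spec:
  clusterIP: None            # servicio headless
  selector:
    name: agente-ejemplo     # debe coincidir con las etiquetas de los Pods del DaemonSet
  ports:
  - port: 9100               # puerto hipotético
```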
@ -185,10 +176,10 @@ y conocen el puerto acordado.
Si se cambian las etiquetas de nodo, el DaemonSet comenzará de forma inmediata a añadir Pods a los nuevos nodos que coincidan y a eliminar
los Pods de aquellos nuevos nodos donde no coincidan.
Puedes modificar los Pods que crea un DaemonSet. Sin embargo, no se permite actualizar todos los campos de los Pods.
Además, el controlador del DaemonSet utilizará la plantilla original la próxima vez que se cree un nodo (incluso con el mismo nombre).
Puedes eliminar un DaemonSet. Si indicas el parámetro `--cascade=false` al usar `kubectl`,
entonces los Pods continuarán ejecutándose en los nodos. Así, puedes crear entonces un nuevo DaemonSet con una plantilla diferente.
El nuevo DaemonSet con la plantilla diferente reconocerá a todos los Pods existentes que tengan etiquetas coincidentes y
no modificará o eliminará ningún Pod aunque la plantilla no coincida con los Pods desplegados.
@ -198,14 +189,14 @@ A partir de las versión 1.6 de Kubernetes, puedes [llevar a cabo una actualizac
## Alternativas al DaemonSet
### Secuencias de comandos de inicialización
Aunque es perfectamente posible ejecutar procesos arrancándolos directamente en un nodo (ej. usando
`init`, `upstartd`, o `systemd`), existen numerosas ventajas si se realiza via un DaemonSet:
- Capacidad de monitorizar y gestionar los logs de los procesos del mismo modo que para las aplicaciones.
- Mismo lenguaje y herramientas de configuración (ej. plantillas de Pod, `kubectl`) tanto para los procesos como para las aplicaciones.
- Los procesos que se ejecutan en contenedores con limitaciones de recursos aumentan el aislamiento entre dichos procesos y el resto de contenedores de aplicaciones.
Sin embargo, esto también se podría conseguir ejecutando los procesos en un contenedor en vez de un Pod
(ej. arrancarlos directamente via Docker).
@ -231,8 +222,6 @@ ambos crean Pods, y que dichos Pods tienen procesos que no se espera que termine
servidores de almacenamiento).
Utiliza un Deployment para definir servicios sin estado, como las interfaces de usuario, donde el escalado vertical y horizontal
del número de réplicas y las actualizaciones continuas son mucho más importantes que el control exacto del servidor donde se ejecuta el Pod.
Utiliza un DaemonSet cuando es importante que una copia de un Pod siempre se ejecute en cada uno de los nodos,
y cuando se necesite que arranque antes que el resto de Pods.

View File

@ -141,6 +141,7 @@ Sinon, le contrôleur de nœud supprime le nœud de sa liste de nœuds.
La troisième est la surveillance de la santé des nœuds.
Le contrôleur de noeud est responsable de la mise à jour de la condition NodeReady de NodeStatus vers ConditionUnknown lorsqu'un noeud devient inaccessible (le contrôleur de noeud cesse de recevoir des heartbeats pour une raison quelconque, par exemple en raison d'une panne du noeud), puis de l'éviction ultérieure de tous les pods du noeud (en utilisant une terminaison propre) si le nœud continue d'être inaccessible.
(Les délais d'attente par défaut sont de 40 secondes pour commencer à signaler ConditionUnknown et de 5 minutes après cela pour commencer à expulser les pods.)
Le contrôleur de nœud vérifie l'état de chaque nœud toutes les `--node-monitor-period` secondes.
Dans les versions de Kubernetes antérieures à 1.13, NodeStatus correspond au heartbeat du nœud.
@ -157,6 +158,7 @@ Dans la plupart des cas, le contrôleur de noeud limite le taux dexpulsion à
Le comportement d'éviction de noeud change lorsqu'un noeud d'une zone de disponibilité donnée devient défaillant.
Le contrôleur de nœud vérifie quel pourcentage de nœuds de la zone est défaillant (la condition NodeReady est ConditionUnknown ou ConditionFalse) en même temps.
Si la fraction de nœuds défaillants est au moins `--unhealthy-zone-threshold` (valeur par défaut de 0,55), le taux d'expulsion est réduit: si le cluster est petit (c'est-à-dire inférieur ou égal à `--large-cluster-size-threshold` noeuds - valeur par défaut 50), alors les expulsions sont arrêtées, sinon le taux d'expulsion est réduit à `--secondary-node-eviction-rate` (valeur par défaut de 0,01) par seconde.
Ces stratégies sont implémentées par zone de disponibilité car une zone de disponibilité peut être partitionnée à partir du master, tandis que les autres restent connectées.
Si votre cluster ne s'étend pas sur plusieurs zones de disponibilité de fournisseur de cloud, il n'existe qu'une seule zone de disponibilité (la totalité du cluster).

View File

@ -111,7 +111,7 @@ _Handler runtime_ diatur melalui konfigurasi containerd pada `/etc/containerd/co
_Handler_ yang valid dapat dikonfigurasi pada bagian _runtime_:
```
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
Lihat dokumentasi konfigurasi containerd untuk lebih detail:

View File

@ -112,7 +112,7 @@ Agentes de execução são configurados através da configuração do containerd
`/etc/containerd/config.toml`. Agentes válidos são configurados sob a seção de `runtimes`:
```
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
Veja a documentação de configuração do containerd para maiores detalhes:

View File

@ -64,7 +64,7 @@ Na maioria dos shells, a forma mais fácil de escapar as senhas é usar aspas si
Por exemplo, se sua senha atual é `S!B\*d$zDsb=`, você precisa executar o comando dessa forma:
```shell
kubectl create secret generic dev-db-secret \
kubectl create secret generic db-user-pass \
--from-literal=username=devuser \
--from-literal=password='S!B\*d$zDsb='
```

View File

@ -0,0 +1,70 @@
---
title: Администрирование кластера
reviewers:
- davidopp
- lavalamp
weight: 100
content_type: concept
description: >
Lower-level detail relevant to creating or administering a Kubernetes cluster.
no_list: true
---
<!-- overview -->
Обзор администрирования кластера предназначен для всех, кто создает или администрирует кластер Kubernetes. Это предполагает некоторое знакомство с основными [концепциями](/docs/concepts/) Kubernetes.
<!-- body -->
## Планирование кластера
См. Руководства в разделе [настройка](/docs/setup/) для получения примеров того, как планировать, устанавливать и настраивать кластеры Kubernetes. Решения, перечисленные в этой статье, называются *distros*.
{{< note >}}
Не все дистрибутивы активно поддерживаются. Выбирайте дистрибутивы, протестированные с последней версией Kubernetes.
{{< /note >}}
Прежде чем выбрать руководство, вот некоторые соображения:
- Вы хотите опробовать Kubernetes на вашем компьютере или собрать многоузловой кластер высокой доступности? Выбирайте дистрибутивы, наиболее подходящие для ваших нужд.
- будете ли вы использовать **размещенный кластер Kubernetes**, такой как [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) или **разместите собственный кластер**?
- Будет ли ваш кластер **в помещении** или **в облаке (IaaS)**? Kubernetes не поддерживает напрямую гибридные кластеры. Вместо этого вы можете настроить несколько кластеров.
- **Если вы будете настраивать Kubernetes в помещении (локально)**, подумайте, какая [сетевая модель](/docs/concepts/cluster-administration/networking/) подходит лучше всего.
- Будете ли вы запускать Kubernetes на **оборудовании "bare metal"** или на **виртуальных машинах (VMs)**?
- Вы хотите **запустить кластер** или планируете **активно разворачивать код проекта Kubernetes**? В последнем случае выберите активно разрабатываемый дистрибутив. Некоторые дистрибутивы используют только двоичные выпуски, но предлагают более широкий выбор.
- Ознакомьтесь с [компонентами](/docs/concepts/overview/components/), необходимыми для запуска кластера.
## Управление кластером
* Узнайте как [управлять узлами](/docs/concepts/architecture/nodes/).
* Узнайте как настроить и управлять [квотами ресурсов](/docs/concepts/policy/resource-quotas/) для общих кластеров.
## Обеспечение безопасности кластера
* [Сгенерировать сертификаты](/docs/tasks/administer-cluster/certificates/) описывает шаги по созданию сертификатов с использованием различных цепочек инструментов.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) описывает среду для управляемых контейнеров Kubelet на узле Kubernetes.
* [Управление доступом к Kubernetes API](/docs/concepts/security/controlling-access) описывает как Kubernetes реализует контроль доступа для своего собственного API.
* [Аутентификация](/docs/reference/access-authn-authz/authentication/) объясняет аутентификацию в Kubernetes, включая различные варианты аутентификации.
* [Авторизация](/docs/reference/access-authn-authz/authorization/) отделена от аутентификации и контролирует обработку HTTP-вызовов.
* [Использование контроллеров допуска](/docs/reference/access-authn-authz/admission-controllers/) описывает плагины, которые перехватывают запросы к API-серверу Kubernetes после аутентификации и авторизации.
* [Использование Sysctls в кластере Kubernetes](/docs/tasks/administer-cluster/sysctl-cluster/) описывает администратору, как использовать инструмент командной строки sysctl для установки параметров ядра.
* [Аудит](/docs/tasks/debug-application-cluster/audit/) описывает, как взаимодействовать с журналами аудита Kubernetes.
### Обеспечение безопасности kubelet
* [Связь между плоскостью управления и узлом](/docs/concepts/architecture/control-plane-node-communication/)
* [Загрузка TLS](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Аутентификация/авторизация Kubelet](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
## Дополнительные кластерные услуги
* [Интеграция DNS](/docs/concepts/services-networking/dns-pod-service/) описывает, как разрешать DNS-имя непосредственно в службу Kubernetes.
* [Ведение журнала и мониторинг активности кластера](/docs/concepts/cluster-administration/logging/) объясняет, как работает ведение журнала в Kubernetes и как его реализовать.

View File

@ -0,0 +1,53 @@
---
title: Установка дополнений
content_type: concept
---
<!-- overview -->
{{% thirdparty-content %}}
Надстройки расширяют функциональность Kubernetes.
На этой странице перечислены некоторые из доступных надстроек и ссылки на соответствующие инструкции по установке.
<!-- body -->
## Сеть и сетевая политика
* [ACI](https://www.github.com/noironetworks/aci-containers) обеспечивает интегрированную сеть контейнеров и сетевую безопасность с помощью Cisco ACI.
* [Antrea](https://antrea.io/) работает на уровне 3, обеспечивая сетевые службы и службы безопасности для Kubernetes, используя Open vSwitch в качестве уровня сетевых данных.
* [Calico](https://docs.projectcalico.org/latest/introduction/) поддерживает гибкий набор сетевых опций, поэтому вы можете выбрать наиболее эффективный вариант для вашей ситуации, включая сети без оверлея и оверлейные сети, с или без BGP. Calico использует тот же механизм для обеспечения соблюдения сетевой политики для хостов, модулей и (при использовании Istio и Envoy) приложений на уровне сервисной сети (mesh layer).
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику.
* [Cilium](https://github.com/cilium/cilium) - это плагин сети L3 и сетевой политики, который может прозрачно применять политики HTTP/API/L7. Поддерживаются как режим маршрутизации, так и режим наложения/инкапсуляции, и он может работать поверх других подключаемых модулей CNI.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave.
* [Contiv](https://contiv.github.io) предоставляет настраиваемую сеть (собственный L3 с использованием BGP, оверлей с использованием vxlan, классический L2 и Cisco-SDN/ACI) для различных вариантов использования и обширную структуру политик. Проект Contiv имеет полностью [открытый исходный код](https://github.com/contiv). [Установка](https://github.com/contiv/install) предлагает варианты как на основе kubeadm, так и без kubeadm.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), основанный на [Tungsten Fabric](https://tungsten.io), представляет собой платформу для виртуализации мультиоблачных сетей с открытым исходным кодом и управления политиками. Contrail и Tungsten Fabric интегрированы с системами оркестровки, такими как Kubernetes, OpenShift, OpenStack и Mesos, и обеспечивают режимы изоляции для виртуальных машин, контейнеров/pod-ов и рабочих нагрузок без операционной системы.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) - это поставщик оверлейной сети, который можно использовать с Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) - это плагин для поддержки нескольких сетевых интерфейсов Kubernetes pod-ов.
* [Multus](https://github.com/Intel-Corp/multus-cni) - это плагин Multi для поддержки нескольких сетей в Kubernetes и всех CNI-плагинов (например: Calico, Cilium, Contiv, Flannel), в дополнение к рабочим нагрузкам, основанным на SRIOV, DPDK, OVS-DPDK и VPP в Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) - это сетевой провайдер для Kubernetes, основанный на [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), реализации виртуальной сети, появившейся в результате проекта Open vSwitch (OVS). OVN-Kubernetes обеспечивает сетевую реализацию на основе наложения для Kubernetes, включая реализацию балансировки нагрузки и сетевой политики на основе OVS.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) - это подключаемый модуль контроллера CNI на основе OVN для обеспечения облачной цепочки сервисных функций (SFC), нескольких наложенных сетей OVN, динамического создания подсетей, динамического создания виртуальных сетей, сети поставщика VLAN и сети прямого поставщика; он совместим с другими плагинами Multi-network и идеально подходит для облачных рабочих нагрузок на периферии в сетях с несколькими кластерами.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) плагин для контейнеров (NCP) обеспечивает интеграцию между VMware NSX-T и оркестраторами контейнеров, такими как Kubernetes, а также интеграцию между NSX-T и платформами CaaS/PaaS на основе контейнеров, такими как Pivotal Container Service (PKS) и OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - это платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Pod-ами Kubernetes и окружениями вне Kubernetes, с отображением и мониторингом безопасности.
* [Romana](https://romana.io) - это сетевое решение уровня 3 для pod сетей, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и сетевую политику, продолжает работать при разделении сети и не требует внешней базы данных.
## Обнаружение служб
* [CoreDNS](https://coredns.io) - это гибкий, расширяемый DNS-сервер, который может быть [установлен](https://github.com/coredns/deployment/tree/master/kubernetes) в качестве внутрикластерного DNS для pod-ов.
## Визуализация и контроль
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) - это веб-интерфейс панели инструментов для Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) - это инструмент для графической визуализации ваших контейнеров, pod-ов, сервисов и т.д. Используйте его вместе с [учетной записью Weave Cloud](https://cloud.weave.works/) или разместите пользовательский интерфейс самостоятельно.
## Инфраструктура
* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) - это дополнение для запуска виртуальных машин в Kubernetes. Обычно работает на bare-metal кластерах.
## Legacy Add-ons
В устаревшем каталоге [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) задокументировано несколько других дополнений.
Ссылки на дополнения в хорошем состоянии должны быть здесь. PR приветствуются!

View File

@ -29,7 +29,7 @@ The Kubernetes API server provides 3 API endpoints (`healthz`, `livez` and `read
The `healthz` endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific `livez` and `readyz` endpoints instead.
The `livez` endpoint can be used with the `--livez-grace-period` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) to specify the startup duration.
For a graceful shutdown you can specify the `--shutdown-delay-duration` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) with the `/readyz` endpoint.
Machines that check the `health`/`livez`/`readyz` of the API server should rely on the HTTP status code.
Machines that check the `healthz`/`livez`/`readyz` of the API server should rely on the HTTP status code.
A status code `200` indicates the API server is `healthy`/`live`/`ready`, depending of the called endpoint.
The more verbose options shown below are intended to be used by human operators to debug their cluster or specially the state of the API server.
-->
@ -37,7 +37,7 @@ Kubernetes API 服务器提供 3 个 API 端点(`healthz`、`livez` 和 `ready
`healthz` 端点已被弃用(自 Kubernetes v1.16 起),你应该使用更为明确的 `livez` 和 `readyz` 端点。
`livez` 端点可与 `--livez-grace-period` [标志](/zh/docs/reference/command-line-tools-reference/kube-apiserver)一起使用,来指定启动持续时间。
为了正常关机,你可以使用 `/readyz` 端点并指定 `--shutdown-delay-duration` [标志](/zh/docs/reference/command-line-tools-reference/kube-apiserver)。
检查 API 服务器的 `health`/`livez`/`readyz` 端点的机器应依赖于 HTTP 状态代码。
检查 API 服务器的 `healthz`/`livez`/`readyz` 端点的机器应依赖于 HTTP 状态代码。
状态码 `200` 表示 API 服务器是 `healthy`、`live` 还是 `ready`,具体取决于所调用的端点。
以下更详细的选项供操作人员使用,用来调试其集群或专门调试 API 服务器的状态。
@ -46,7 +46,7 @@ Kubernetes API 服务器提供 3 个 API 端点(`healthz`、`livez` 和 `ready
<!--
For all endpoints you can use the `verbose` parameter to print out the checks and their status.
This can be useful for a human operator to debug the current status of the Api server, it is not intended to be consumed by a machine:
This can be useful for a human operator to debug the current status of the API server, it is not intended to be consumed by a machine:
-->
对于所有端点,都可以使用 `verbose` 参数来打印检查项以及检查状态。
这对于操作人员调试 API 服务器的当前状态很有用,这些不打算给机器使用:
@ -130,12 +130,12 @@ healthz check passed
{{< feature-state state="alpha" >}}
<!--
Each individual health check exposes an http endpoint and could can be checked individually.
Each individual health check exposes an HTTP endpoint and can be checked individually.
The schema for the individual health checks is `/livez/<healthcheck-name>` where `livez` and `readyz` can be used to indicate if you want to check the liveness or the readiness of the API server.
The `<healthcheck-name>` path can be discovered using the `verbose` flag from above and take the path between `[+]` and `ok`.
These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:
-->
每个单独的健康检查都会公开一个 http 端点,并且可以单独检查。
每个单独的健康检查都会公开一个 HTTP 端点,并且可以单独检查。
单个运行状况检查的模式为 `/livez/<healthcheck-name>`,其中 `livez` 和 `readyz` 表明你要检查的是 API 服务器是否存活或就绪。
`<healthcheck-name>` 的路径可以通过上面的 `verbose` 参数发现 ,并采用 `[+]``ok` 之间的路径。
这些单独的健康检查不应由机器使用,但对于操作人员调试系统而言,是有帮助的:

View File

@ -174,7 +174,7 @@ If you are using the sample manifest from the previous point, this will require
* If kube-proxy is running in IPVS mode:
``` bash
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
```
In this mode, node-local-dns pods listen only on `<node-local-address>`. The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses this address.
`__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.

View File

@ -550,10 +550,10 @@ APIService 对象的名称必须是合法的
<!--
#### Contacting the extension apiserver
Once the Kubernetes apiserver has determined a request should be sent to a extension apiserver,
Once the Kubernetes apiserver has determined a request should be sent to an extension apiserver,
it needs to know how to contact it.
The `service` stanza is a reference to the service for a extension apiserver.
The `service` stanza is a reference to the service for an extension apiserver.
The service namespace and name are required. The port is optional and defaults to 443.
The path is optional and defaults to "/".

View File

@ -154,10 +154,6 @@ term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=false`)
`proc/sys/vm/hugetlb_shm_group` 匹配的补充组下。
- 通过 ResourceQuota 资源,可以使用 `hugepages-<size>` 标记控制每个命名空间下的巨页使用量,
类似于使用 `cpu``memory` 来控制其他计算资源。
- 多种尺寸的巨页的支持需要特性门控配置。它可以通过 `HugePageStorageMediumSize` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)在 {{<
glossary_tooltip text="kubelet" term_id="kubelet" >}} 和 {{<
glossary_tooltip text="kube-apiserver"
term_id="kube-apiserver" >}} 中开启(`--feature-gates=HugePageStorageMediumSize=false`)。

View File

@ -41,7 +41,7 @@ weight: 10
<p> Kubernetes 中的服务(Service)是一种抽象概念,它定义了 Pod 的逻辑集和访问 Pod 的协议。Service 使从属 Pod 之间的松耦合成为可能。 和其他 Kubernetes 对象一样, Service 用 YAML <a href="/zh/docs/concepts/configuration/overview/#general-configuration-tips">(更推荐)</a> 或者 JSON 来定义. Service 下的一组 Pod 通常由 <i>LabelSelector</i> (请参阅下面的说明为什么您可能想要一个 spec 中不包含<code>selector</code>的服务)来标记。</p>
<!-- <p>Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a <code>type</code> in the ServiceSpec:</p>-->
<p>尽管每个 Pod 都有一个唯一的 IP 地址,但是如果没有 Service ,这些 IP 不会暴露在集群外部。Service 允许您的应用程序接收流量。Service 也可以用在 ServiceSpec 标记<code>type</code>的方式暴露</p>
<ul>
<!-- <li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>-->
<!-- <li><i>NodePort</i> - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>-->

View File

@ -0,0 +1,7 @@
{{ $base := "docs/reference/kubernetes-api" }}
{{ $pageArg := .Get "page" }}
{{ $anchorArg := .Get "anchor" }}
{{ $textArg := .Get "text" }}
{{ $page := site.GetPage "page" (printf "%s/%s" $base $pageArg) }}
{{ $metadata := $page.Params.api_metadata }}
<a href="{{ $page.URL }}{{if $anchorArg}}#{{ $anchorArg }}{{end}}">{{if $textArg}}{{ $textArg }}{{else if $anchorArg}}{{ $anchorArg }}{{else}}{{ $metadata.kind }}{{end}}</a>

View File

@ -54,11 +54,36 @@
{{- $latestVersionAddMinor = printf "%s%s%d" (index $versionArray 0) $seperator $latestVersionAddMinor -}}
{{- $latestVersionAddMinor -}}
{{- end -}}
{{- $currentVersion := site.Params.version -}}
{{- $currentVersion := (replace $currentVersion "v" "") -}}
{{- $currentVersionArray := split $currentVersion "." -}}
{{- $currentMinorVersion := int (index $currentVersionArray 1) -}}
<!-- output latestVersion based on captured arg -->
{{- if eq $version "currentVersion" -}}
{{- $currentVersion -}}
{{- end -}}
<!-- output currentVersionAddMinor based on captured args -->
{{- if eq $version "currentVersionAddMinor" -}}
{{- $seperator := .Get 2 -}}
{{- if eq $seperator "" -}}
{{- $seperator = "." -}}
{{- end -}}
{{- $currentVersionAddMinor := int (.Get 1) -}}
{{- $currentVersionAddMinor = add $currentMinorVersion $currentVersionAddMinor -}}
{{- $currentVersionAddMinor = printf "%s%s%d" (index $versionArray 0) $seperator $currentVersionAddMinor -}}
{{- $currentVersionAddMinor -}}
{{- end -}}
<!--
example shortcode use:
- skew nextMinorVersion
- skew latestVersion
- skew currentVersion
- skew prevMinorVersion
- skew oldestMinorVersion
- skew latestVersionAddMinor -1 "-"
- skew currentVersionAddMinor -1 "-"
-->

View File

@ -35,6 +35,8 @@
# + /docs/bar : is a redirect entry, or
# + /docs/bar : is something we don't understand
#
# + {{ < api-reference page="" anchor="" ... > }}
# + {{ < api-reference page="" > }}
import argparse
import glob
@ -72,7 +74,8 @@ ARGS = None
RESULT = {}
# Cached redirect entries
REDIRECTS = {}
# Cached anchors in target pages
ANCHORS = {}
def new_record(level, message, target):
"""Create new checking record.
@ -330,6 +333,44 @@ def check_target(page, anchor, target):
msg = "Link may be wrong for the anchor [%s]" % anchor
return new_record("WARNING", msg, target)
def check_anchor(target_page, anchor):
"""Check if an anchor is defined in the target page
:param target_page: The target page to check
:param anchor: Anchor string to find in the target page
"""
if target_page not in ANCHORS:
try:
with open(target_page, "r") as f:
data = f.readlines()
except Exception as ex:
print("[Error] failed in reading markdown file: " + str(ex))
return
content = "\n".join(strip_comments(data))
anchor_pattern1 = r"<a name=\"(.*?)\""
regex1 = re.compile(anchor_pattern1)
anchor_pattern2 = r"{#(.*?)}"
regex2 = re.compile(anchor_pattern2)
ANCHORS[target_page] = regex1.findall(content) + regex2.findall(content)
return anchor in ANCHORS[target_page]
def check_apiref_target(target, anchor):
"""Check a link to an API reference page.
:param target: The link target string to check
:param anchor: Anchor string from the content page
"""
base = os.path.join(ROOT, "content", "en", "docs", "reference", "kubernetes-api")
ok = check_file_exists(base + "/", target)
if not ok:
return new_record("ERROR", "API reference page not found", target)
if anchor is None:
return
target_page = os.path.join(base, target)+".md"
if not check_anchor(target_page, anchor):
return new_record("ERROR", "Anchor not found in API reference page", target+"#"+anchor)
def validate_links(page):
"""Find and validate links on a content page.
@ -355,6 +396,27 @@ def validate_links(page):
r = check_target(page, m[0], m[1])
if r:
records.append(r)
# searches for pattern: {{< api-reference page="" anchor=""
apiref_pattern = r"{{ *< *api-reference page=\"([^\"]*?)\" *anchor=\"(.*?)\""
regex = re.compile(apiref_pattern)
matches = regex.findall(content)
for m in matches:
r = check_apiref_target(m[0], m[1])
if r:
records.append(r)
# searches for pattern: {{< api-reference page=""
apiref_pattern = r"{{ *< *api-reference page=\"([^\"]*?)\""
regex = re.compile(apiref_pattern)
matches = regex.findall(content)
for m in matches:
r = check_apiref_target(m, None)
if r:
records.append(r)
if len(records):
RESULT[page] = records