Replace reference to redirect entries (1)
We reference the redirect entries in many places in the docs. These references should be updated to point to the correct content pages.
parent cd9523dbca
commit 0bdcd44e6b
@@ -46,7 +46,7 @@ These connections terminate at the kubelet's HTTPS endpoint. By default, the api
 
 To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.
 
-If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
+If that is not possible, use [SSH tunneling](#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
 untrusted or public network.
 
 Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.
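For context on the flag mentioned in the hunk above: `--kubelet-certificate-authority` is passed to the API server binary. A minimal sketch, assuming an illustrative CA bundle path (not from this commit):

```bash
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt
```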
@@ -23,8 +23,6 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
 {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the
 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
 
-
-
 <!-- body -->
 
 ## Management
@@ -195,7 +193,7 @@ The node lifecycle controller automatically creates
 The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
 Pods can also have tolerations which let them tolerate a Node's taints.
 
-See [Taint Nodes by Condition](/docs/concepts/configuration/taint-and-toleration/#taint-nodes-by-condition)
+See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
 for more details.
 
 ### Capacity and Allocatable {#capacity}
@@ -339,6 +337,6 @@ for more information.
 * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
 * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
   section of the architecture design document.
-* Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
+* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
 * Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).
 
@@ -752,4 +752,4 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
 
 * Read the [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) API reference
 
-* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
@@ -73,7 +73,7 @@ A desired state of an object is described by a Deployment, and if changes to tha
 
 ## Container Images
 
-The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image.
+The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image.
 
 - `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.
 
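The `imagePullPolicy` behavior referenced in this hunk can be sketched in a Pod manifest; the Pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.19      # pinned tag; with IfNotPresent a local copy is reused
    imagePullPolicy: IfNotPresent
```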
|
@ -11,7 +11,7 @@ weight: 70
|
|||
|
||||
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
|
||||
|
||||
[Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the
|
||||
[Pods](/docs/concepts/workloads/pods/pod/) can have _priority_. Priority indicates the
|
||||
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
|
||||
scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
|
||||
pending Pod possible.
|
||||
|
|
|
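The priority described above is assigned through a PriorityClass; a hedged sketch, with name, value, and description illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority      # illustrative name
value: 1000000             # larger value = higher priority
globalDefault: false
description: "For workloads that may preempt lower-priority Pods."
```

A Pod opts in by setting `priorityClassName: high-priority` in its spec.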
@@ -15,14 +15,14 @@ Kubernetes is highly configurable and extensible. As a result,
 there is rarely a need to fork or submit patches to the Kubernetes
 project code.
 
-This guide describes the options for customizing a Kubernetes
-cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to
-understand how to adapt their Kubernetes cluster to the needs of
-their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it
-useful as an introduction to what extension points and patterns
-exist, and their trade-offs and limitations.
-
-
+This guide describes the options for customizing a Kubernetes cluster. It is
+aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}}
+who want to understand how to adapt their
+Kubernetes cluster to the needs of their work environment. Developers who are prospective
+{{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}}
+or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}}
+will also find it useful as an introduction to what extension points and
+patterns exist, and their trade-offs and limitations.
 
 <!-- body -->
 
@@ -35,14 +35,14 @@ Customization approaches can be broadly divided into *configuration*, which only
 
 *Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:
 
-* [kubelet](/docs/admin/kubelet/)
-* [kube-apiserver](/docs/admin/kube-apiserver/)
-* [kube-controller-manager](/docs/admin/kube-controller-manager/)
-* [kube-scheduler](/docs/admin/kube-scheduler/).
+* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
+* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)
+* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
+* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
 
 Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
 
-*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
+*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
 
 ## Extensions
 
@@ -73,10 +73,9 @@ failure.
 
 In the webhook model, Kubernetes makes a network request to a remote service.
 In the *Binary Plugin* model, Kubernetes executes a binary (program).
-Binary plugins are used by the kubelet (e.g. [Flex Volume
-Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
-and [Network
-Plugins](/docs/concepts/cluster-administration/network-plugins/))
+Binary plugins are used by the kubelet (e.g.
+[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume)
+and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
 and by kubectl.
 
 Below is a diagram showing how the extension points interact with the
@@ -95,13 +94,13 @@ This diagram shows the extension points in a Kubernetes system.
 
 <!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
 
-1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
-2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section.
-3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions.
-4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
-5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
-6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
-7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
+1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
+2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
+3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions.
+4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
+5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
+6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of pod networking.
+7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/extend-kubernetes/#storage-plugins).
 
 If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
 
@@ -117,7 +116,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro
 
 Do not use a Custom Resource as data storage for application, user, or monitoring data.
 
-For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/).
+For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
 
 ### Combining New APIs with Automation
 
|
|||
|
||||
### Storage Plugins
|
||||
|
||||
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
|
||||
) allow users to mount volume types without built-in support by having the
|
||||
[Flex Volumes](/docs/concepts/storage/volumes/#flexVolume)
|
||||
allow users to mount volume types without built-in support by having the
|
||||
Kubelet call a Binary Plugin to mount the volume.
|
||||
|
||||
|
||||
### Device Plugins
|
||||
|
||||
Device plugins allow a node to discover new Node resources (in addition to the
|
||||
builtin ones like cpu and memory) via a [Device
|
||||
Plugin](/docs/concepts/cluster-administration/device-plugins/).
|
||||
|
||||
builtin ones like cpu and memory) via a
|
||||
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
|
||||
|
||||
### Network Plugins
|
||||
|
||||
Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
|
||||
Different networking fabrics can be supported via node-level
|
||||
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
|
||||
|
||||
### Scheduler Extensions
|
||||
|
||||
The scheduler is a special type of controller that watches pods, and assigns
|
||||
pods to nodes. The default scheduler can be replaced entirely, while
|
||||
continuing to use other Kubernetes components, or [multiple
|
||||
schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
|
||||
continuing to use other Kubernetes components, or
|
||||
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
|
||||
can run at the same time.
|
||||
|
||||
This is a significant undertaking, and almost all Kubernetes users find they
|
||||
|
@@ -195,16 +194,13 @@ that permits a webhook backend (scheduler extension) to filter and prioritize
 the nodes chosen for a pod.
 
-
-
 ## {{% heading "whatsnext" %}}
 
-
-* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
+* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 * Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
 * Learn more about Infrastructure extensions
-  * [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
-  * [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
+  * [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
+  * [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
 * Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
 * Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
 
@@ -10,8 +10,6 @@ In Kubernetes, _scheduling_ refers to making sure that {{< glossary_tooltip text
 are matched to {{< glossary_tooltip text="Nodes" term_id="node" >}} so that
 {{< glossary_tooltip term_id="kubelet" >}} can run them.
 
-
-
 <!-- body -->
 
 ## Scheduling overview {#scheduling}
@@ -92,6 +90,6 @@ of the scheduler:
 * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
 * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
 * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
-* Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
+* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
 * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
 * Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/)
@@ -175,7 +175,7 @@ toleration to pods that use the special hardware. As in the dedicated nodes use
 it is probably easiest to apply the tolerations using a custom
 [admission controller](/docs/reference/access-authn-authz/admission-controllers/).
 For example, it is recommended to use [Extended
-Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)
+Resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources)
 to represent the special hardware, taint your special hardware nodes with the
 extended resource name and run the
 [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration)
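To illustrate the pattern the hunk describes (taint special nodes with the extended resource name, tolerate it in Pods), a sketch using a hypothetical resource name:

```yaml
tolerations:
- key: "example.com/fpga"  # hypothetical extended resource name
  operator: "Exists"
  effect: "NoSchedule"
```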
@@ -133,7 +133,7 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip
 
 Kubernetes supports 2 primary modes of finding a Service - environment variables
 and DNS. The former works out of the box while the latter requires the
-[CoreDNS cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
+[CoreDNS cluster addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
 {{< note >}}
 If the service environment variables are not desired (because possible clashing with expected program ones,
 too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks`
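The `enableServiceLinks` switch mentioned in the note is a Pod `spec` field; a minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links   # illustrative name
spec:
  enableServiceLinks: false
  containers:
  - name: app
    image: busybox:1.31
```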
@@ -26,7 +26,7 @@ Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.i
 * [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress
   controller with [community](https://www.getambassador.io/docs) or
   [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/).
-* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager).
+* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](https://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager).
 * [AWS ALB Ingress Controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) enables ingress using the [AWS Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/).
 * [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller
   provided and supported by VMware.
@@ -432,7 +432,7 @@ a Service.
 
 It's also worth noting that even though health checks are not exposed directly
 through the Ingress, there exist parallel concepts in Kubernetes such as
-[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
+[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
 that allow you to achieve the same end result. Please review the controller
 specific documentation to see how they handle health checks (
 [nginx](https://git.k8s.io/ingress-nginx/README.md),
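For reference, the readiness probes mentioned above are declared per container; a hedged sketch, with path and port illustrative:

```yaml
readinessProbe:
  httpGet:
    path: /healthz         # illustrative endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```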
@@ -20,8 +20,6 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s
 Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods,
 and can load-balance across them.
 
-
-
 <!-- body -->
 
 ## Motivation
@@ -390,7 +388,7 @@ variables and DNS.
 When a Pod is run on a Node, the kubelet adds a set of environment variables
 for each active Service. It supports both [Docker links
 compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
-[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
+[makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
 and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
 where the Service name is upper-cased and dashes are converted to underscores.
 
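For a hypothetical Service named `redis-master` exposing port 6379 on cluster IP 10.0.0.11, the variables described in this hunk would look like:

```
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
```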
@@ -754,7 +752,7 @@ In the above example, if the Service contained three ports, `80`, `443`, and
 `8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
 be proxied HTTP.
 
-From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
+From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
 To see which policies are available for use, you can use the `aws` command line tool:
 
 ```bash
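The `bash` block above is truncated by the hunk boundary. As a hedged sketch of what such a listing looks like with the AWS CLI (for Classic Load Balancers):

```bash
aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
```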
@@ -889,7 +887,7 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet
 ```
 
 {{< note >}}
-NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
+NLB only works with certain instance classes; see the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
 on Elastic Load Balancing for a list of supported instance types.
 {{< /note >}}
 
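A sketch of the NLB annotation in context; the Service name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nlb        # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```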
@@ -1046,9 +1044,9 @@ spec:
 ## Shortcomings
 
 Using the userspace proxy for VIPs, work at small to medium scale, but will
-not scale to very large clusters with thousands of Services. The [original
-design proposal for portals](http://issue.k8s.io/1107) has more details on
-this.
+not scale to very large clusters with thousands of Services. The
+[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
+has more details on this.
 
 Using the userspace proxy obscures the source IP address of a packet accessing
 a Service.
|
@ -19,9 +19,6 @@ weight: 20
|
|||
|
||||
This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introduction
|
||||
|
@@ -148,7 +145,11 @@ The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is
 
 If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
 
-However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below:
+However, an administrator can configure a custom recycler Pod template using
+the Kubernetes controller manager command line arguments as described in the
+[reference](/docs/reference/command-line-tools-reference/kube-controller-manager/).
+The custom recycler Pod template must contain a `volumes` specification, as
+shown in the example below:
 
 ```yaml
 apiVersion: v1
|
@ -15,8 +15,6 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity
|
|||
with [volumes](/docs/concepts/storage/volumes/) and
|
||||
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introduction
|
||||
|
@@ -168,11 +166,11 @@ A cluster administrator can address this issue by specifying the `WaitForFirstCo
 will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created.
 PersistentVolumes will be selected or provisioned conforming to the topology that is
 specified by the Pod's scheduling constraints. These include, but are not limited to, [resource
-requirements](/docs/concepts/configuration/manage-compute-resources-container),
+requirements](/docs/concepts/configuration/manage-resources-containers/),
 [node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector),
 [pod affinity and
 anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
-and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration).
+and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).
 
 The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
 
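The `WaitForFirstConsumer` mode discussed above is set on the StorageClass; a minimal sketch (class name illustrative, provisioner chosen as an example):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware     # illustrative name
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
```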
@@ -244,7 +242,7 @@ parameters:
 ```
 
 * `type`: `io1`, `gp2`, `sc1`, `st1`. See
-  [AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
+  [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
   for details. Default: `gp2`.
 * `zone` (Deprecated): AWS zone. If neither `zone` nor `zones` is specified, volumes are
   generally round-robin-ed across all active zones where Kubernetes cluster
@@ -256,7 +254,7 @@ parameters:
 * `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS
   volume plugin multiplies this with size of requested volume to compute IOPS
   of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see
-  [AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html).
+  [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html).
   A string is expected here, i.e. `"10"`, not `10`.
 * `fsType`: fsType that is supported by kubernetes. Default: `"ext4"`.
 * `encrypted`: denotes whether the EBS volume should be encrypted or not.
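Combining the parameters documented in the two hunks above into one sketch (the class name is illustrative; values follow the linked AWS docs):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-io            # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"          # a string, not a number
  fsType: ext4
  encrypted: "true"
```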
|
|
|
@ -18,10 +18,7 @@ Container starts with a clean state. Second, when running Containers together
|
|||
in a `Pod` it is often necessary to share files between those Containers. The
|
||||
Kubernetes `Volume` abstraction solves both of these problems.
|
||||
|
||||
Familiarity with [Pods](/docs/user-guide/pods) is suggested.
|
||||
|
||||
|
||||
|
||||
Familiarity with [Pods](/docs/concepts/workloads/pods/pod/) is suggested.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@@ -100,7 +97,7 @@ We welcome additional contributions.
 ### awsElasticBlockStore {#awselasticblockstore}
 
 An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
-Volume](http://aws.amazon.com/ebs/) into your Pod. Unlike
+Volume](https://aws.amazon.com/ebs/) into your Pod. Unlike
 `emptyDir`, which is erased when a Pod is removed, the contents of an EBS
 volume are preserved and the volume is merely unmounted. This means that an
 EBS volume can be pre-populated with data, and that data can be "handed off"
@@ -401,8 +398,8 @@ See the [Flocker example](https://github.com/kubernetes/examples/tree/{{< param
 
 ### gcePersistentDisk {#gcepersistentdisk}
 
-A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
-Disk](http://cloud.google.com/compute/docs/disks) into your Pod. Unlike
+A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE)
+[Persistent Disk](https://cloud.google.com/compute/docs/disks) into your Pod. Unlike
 `emptyDir`, which is erased when a Pod is removed, the contents of a PD are
 preserved and the volume is merely unmounted. This means that a PD can be
 pre-populated with data, and that data can be "handed off" between Pods.
@@ -537,7 +534,7 @@ spec:
 
 ### glusterfs {#glusterfs}
 
-A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open
+A `glusterfs` volume allows a [Glusterfs](https://www.gluster.org) (an open
 source networked filesystem) volume to be mounted into your Pod. Unlike
 `emptyDir`, which is erased when a Pod is removed, the contents of a
 `glusterfs` volume are preserved and the volume is merely unmounted. This
@@ -589,7 +586,7 @@ Watch out when using this type of volume, because:
   able to account for resources used by a `hostPath`
 * the files or directories created on the underlying hosts are only writable by root. You
   either need to run your process as root in a
-  [privileged Container](/docs/user-guide/security-context) or modify the file
+  [privileged Container](/docs/tasks/configure-pod-container/security-context/) or modify the file
   permissions on the host to be able to write to a `hostPath` volume
 
 #### Example Pod
@@ -952,7 +949,7 @@ More details and examples can be found [here](https://github.com/kubernetes/exam
 
 ### quobyte {#quobyte}
 
-A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to
+A `quobyte` volume allows an existing [Quobyte](https://www.quobyte.com) volume to
 be mounted into your Pod.
 
 {{< caution >}}
@@ -966,8 +963,8 @@ GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte
 
 ### rbd {#rbd}
 
-An `rbd` volume allows a [Rados Block
-Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
+An `rbd` volume allows a
+[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
 Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of
 a `rbd` volume are preserved and the volume is merely unmounted. This
 means that a RBD volume can be pre-populated with data, and that data can
@@ -1044,7 +1041,7 @@ A Container using a Secret as a [subPath](#using-subpath) volume mount will not
 receive Secret updates.
 {{< /note >}}
 
-Secrets are described in more detail [here](/docs/user-guide/secrets).
+Secrets are described in more detail [here](/docs/concepts/configuration/secret/).
 
 ### storageOS {#storageos}
 
@@ -1244,11 +1241,12 @@ medium of the filesystem holding the kubelet root dir (typically
 Pods.
 
 In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
-request a certain amount of space using a [resource](/docs/user-guide/compute-resources)
+request a certain amount of space using a [resource](/docs/concepts/configuration/manage-resources-containers/)
 specification, and to select the type of media to use, for clusters that have
 several media types.
 
 ## Out-of-Tree Volume Plugins
 
 The Out-of-tree volume plugins include the Container Storage Interface (CSI)
+and FlexVolume. They enable storage vendors to create custom storage plugins
 without adding them to the Kubernetes repository.
|
@ -26,9 +26,6 @@ In a simple case, one DaemonSet, covering all nodes, would be used for each type
|
|||
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
|
||||
different flags and/or different memory and cpu requests for different hardware types.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Writing a DaemonSet Spec
|
||||
|
@@ -48,7 +45,8 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
 ### Required Fields
 
 As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
-general information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/),
+general information about working with config files, see
+[running stateless applications](/docs/tasks/run-application/run-stateless-application-deployment/),
 [configuring containers](/docs/tasks/), and [object management using kubectl](/docs/concepts/overview/working-with-objects/object-management/) documents.
 
 The name of a DaemonSet object must be a valid
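A skeleton showing the required `apiVersion`, `kind`, and `metadata` fields named in this hunk; all names and the image are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon     # illustrative name
spec:
  selector:
    matchLabels:
      name: example-daemon
  template:
    metadata:
      labels:
        name: example-daemon   # must match .spec.selector
    spec:
      containers:
      - name: agent
        image: busybox:1.31
        command: ["sleep", "3600"]
```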
@@ -71,7 +69,7 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl
 ### Pod Selector
 
 The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
-a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/).
+a [Job](/docs/concepts/workloads/controllers/job/).
 
 As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the
 `.spec.template`. The pod selector will no longer be defaulted when left empty. Selector
@@ -147,7 +145,7 @@ automatically to DaemonSet Pods. The default scheduler ignores
 ### Taints and Tolerations
 
 Although Daemon Pods respect
-[taints and tolerations](/docs/concepts/configuration/taint-and-toleration),
+[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/),
 the following tolerations are added to DaemonSet Pods automatically according to
 the related features.
 
@@ -213,7 +211,7 @@ use a DaemonSet rather than creating individual Pods.
 ### Static Pods
 
 It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These
-are called [static pods](/docs/concepts/cluster-administration/static-pod/).
+are called [static pods](/docs/tasks/configure-pod-container/static-pod/).
 Unlike DaemonSet, static Pods cannot be managed with kubectl
 or other Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful
 in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
|
@ -22,7 +22,6 @@ You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_
|
|||
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Use Case
|
||||
|
@@ -1040,7 +1039,8 @@ can create multiple Deployments, one for each release, following the canary patt
 ## Writing a Deployment Spec
 
 As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`, and `.metadata` fields.
-For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
+For general information about working with config files, see
+[deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/),
 configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
 The name of a Deployment object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
@ -24,9 +24,6 @@ due to a node hardware failure or a node reboot).
|
|||
|
||||
You can also use a Job to run multiple Pods in parallel.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Running an example Job
|
||||
|
@@ -122,6 +119,7 @@ A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/d
 
 The `.spec.template` is the only required field of the `.spec`.
 
+
 The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
 
 In addition to required fields for a Pod, a pod template in a Job must specify appropriate
@@ -450,7 +448,7 @@ requires only a single Pod.
 
 ### Replication Controller
 
-Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
+Jobs are complementary to [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/).
 A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
 manages Pods that are expected to terminate (e.g. batch tasks).
 
@@ -23,9 +23,6 @@ A _ReplicationController_ ensures that a specified number of pod replicas are ru
 time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
 always up and available.
 
-
-
-
 <!-- body -->
 
 ## How a ReplicationController Works
@@ -134,7 +131,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w
 Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
 
 For local container restarts, ReplicationControllers delegate to an agent on the node,
-for example the [Kubelet](/docs/admin/kubelet/) or Docker.
+for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker.
 
 ### Labels on the ReplicationController
 
@@ -214,7 +211,7 @@ The ReplicationController makes it easy to scale the number of replicas up or do
 
 The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
 
-As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
+As explained in [#1353](https://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
 
 Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
 
@@ -239,11 +236,11 @@ Pods created by a ReplicationController are intended to be fungible and semantic
 
 ## Responsibilities of the ReplicationController
 
-The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
 
-The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
+The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
 
-The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
 
 
 ## API Object
@@ -271,7 +268,7 @@ Unlike in the case where a user directly created pods, a ReplicationController r
 
 ### Job
 
-Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own
+Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicationController for pods that are expected to terminate on their own
 (that is, batch jobs).
 
 ### DaemonSet
@@ -283,6 +280,6 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
 
 ## For more information
 
-Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
+Read [Run Stateless Application Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
 
@@ -200,7 +200,7 @@ The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of
 
 When the nginx example above is created, three Pods will be deployed in the order
 web-0, web-1, web-2. web-1 will not be deployed before web-0 is
-[Running and Ready](/docs/user-guide/pod-states/), and web-2 will not be deployed until
+[Running and Ready](/docs/concepts/workloads/pods/pod-lifecycle/), and web-2 will not be deployed until
 web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
 web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
 becomes Running and Ready.
@@ -20,12 +20,6 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with both
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 `TTLAfterFinished`.
 
-
-
-
-
-
-
 <!-- body -->
 
 ## TTL Controller
@@ -82,9 +76,7 @@ very small. Please be aware of this risk when setting a non-zero TTL.
 
 ## {{% heading "whatsnext" %}}
 
-[Clean up Jobs automatically](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically)
-
-[Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
-
+* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
+* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
 
 
@@ -16,7 +16,6 @@ what types of disruptions can happen to Pods.
 It is also for cluster administrators who want to perform automated
 cluster actions, like upgrading and autoscaling clusters.
 
-
 <!-- body -->
 
 ## Voluntary and involuntary disruptions
|
|||
|
||||
Here are some ways to mitigate involuntary disruptions:
|
||||
|
||||
- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-cpu-ram-container) it needs.
|
||||
- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-memory-resource) it needs.
|
||||
- Replicate your application if you need higher availability. (Learn about running replicated
|
||||
[stateless](/docs/tasks/run-application/run-stateless-application-deployment/)
|
||||
and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
|
||||
[stateless](/docs/tasks/run-application/run-stateless-application-deployment/)
|
||||
and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
|
||||
- For even higher availability when running replicated applications,
|
||||
spread applications across racks (using
|
||||
[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature))
|
||||
or across zones (if using a
|
||||
[multi-zone cluster](/docs/setup/multiple-zones).)
|
||||
spread applications across racks (using
|
||||
[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature))
|
||||
or across zones (if using a
|
||||
[multi-zone cluster](/docs/setup/multiple-zones).)
|
||||
|
||||
The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
|
||||
no voluntary disruptions at all. However, your cluster administrator or hosting provider
|
||||
|
|
|
@ -20,9 +20,6 @@ managed in Kubernetes.
|
|||
|
||||
_Pod_ 是可以在 Kubernetes 中创建和管理的、最小的可部署的计算单元。
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
|
@@ -260,7 +257,7 @@ always use controllers even for singletons, for example,
 [Deployments](/docs/concepts/workloads/controllers/deployment/).
 Controllers provide self-healing with a cluster scope, as well as replication
 and rollout management.
-Controllers like [StatefulSet](/docs/concepts/workloads/controllers/statefulset.md)
+Controllers like [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
 can also provide support to stateful Pods.
 -->
 一般来说,用户不需要直接创建 Pod。他们几乎都是使用控制器进行创建,即使对于单例的 Pod 创建也一样使用控制器,例如 [Deployments](/docs/concepts/workloads/controllers/deployment/)。
@@ -270,7 +267,6 @@ can also provide support to stateful Pods.
 <!--
 The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema), and [Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
 -->
-
 在集群调度系统中,使用 API 合集作为面向用户的主要原语是比较常见的,包括 [Borg](https://research.google.com/pubs/pub43438.html)、[Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)、[Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)、和 [Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997)。
 
 <!--