Merge pull request #5253 from rootsongjc/official
k8smeetup-rootsongjc-pr-2017-08-15
This commit is contained in:
commit dcc8970fe1
@ -0,0 +1,18 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
spec:
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        daemon: prom-node-exp
    spec:
      containers:
      - name: c
        image: prom/prometheus
        ports:
        - containerPort: 9090
          hostPort: 9090
          name: serverport
@ -0,0 +1,87 @@
---
approvers:
- liggitt
title: Kubelet authentication/authorization
---

* TOC
{:toc}

## Overview

A kubelet's HTTPS endpoint exposes APIs which give access to data of varying sensitivity,
and allow you to perform operations with varying levels of power on the node and within containers.

This document describes how to authenticate and authorize access to the kubelet's HTTPS endpoint.

## Kubelet authentication

By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured
authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
and a group of `system:unauthenticated`.

To disable anonymous access and send `401 Unauthorized` responses to unauthenticated requests:

* start the kubelet with the `--anonymous-auth=false` flag

To enable X509 client certificate authentication to the kubelet's HTTPS endpoint:

* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
* start the apiserver with `--kubelet-client-certificate` and `--kubelet-client-key` flags
* see the [apiserver authentication documentation](/docs/admin/authentication/#x509-client-certs) for more details

To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint:

* ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens

## Kubelet authorization

Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests.

There are many possible reasons to subdivide access to the kubelet API:

* anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited
* bearer token auth is enabled, but arbitrary API users' (like service accounts) ability to call the kubelet API should be limited
* client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API

To subdivide access to the kubelet API, delegate authorization to the API server:

* ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
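
Putting the authentication and authorization settings above together, a kubelet command line might include flags like the following. This is a minimal sketch; the file paths are illustrative assumptions, not values prescribed by this document:

```
--anonymous-auth=false
--client-ca-file=/var/lib/kubernetes/ca.pem
--authentication-token-webhook
--authorization-mode=Webhook
--kubeconfig=/var/lib/kubelet/kubeconfig
--require-kubeconfig
```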

The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver.

The verb is determined from the incoming request's HTTP verb:

HTTP verb | request verb
----------|---------------
POST      | create
GET, HEAD | get
PUT       | update
PATCH     | patch
DELETE    | delete

The resource and subresource are determined from the incoming request's path:

Kubelet API  | resource | subresource
-------------|----------|------------
/stats/\*    | nodes    | stats
/metrics/\*  | nodes    | metrics
/logs/\*     | nodes    | log
/spec/\*     | nodes    | spec
*all others* | nodes    | proxy

The namespace and API group attributes are always an empty string, and
the resource name is always the name of the kubelet's `Node` API object.

When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
flags passed to the apiserver is authorized for the following attributes:

* verb=\*, resource=nodes, subresource=proxy
* verb=\*, resource=nodes, subresource=stats
* verb=\*, resource=nodes, subresource=log
* verb=\*, resource=nodes, subresource=spec
* verb=\*, resource=nodes, subresource=metrics
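
As a sketch, with the RBAC authorizer those attributes could be granted through a `ClusterRole` and `ClusterRoleBinding` such as the following. The user name is an assumption for illustration; it must match the identity presented by the apiserver's kubelet client certificate:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kubelet-api-access
rules:
- apiGroups: [""]
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api-access
subjects:
- kind: User
  name: kube-apiserver-kubelet-client   # illustrative; use the apiserver's client certificate identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kubelet-api-access
  apiGroup: rbac.authorization.k8s.io
```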
@ -0,0 +1,216 @@
---
approvers:
- ericchiang
- mikedanese
- jcbsmpsn
title: TLS bootstrapping
---

* TOC
{:toc}

## Overview

This document describes how to set up TLS client certificate bootstrapping for kubelets.
Kubernetes 1.4 introduced an API for requesting certificates from a cluster-level Certificate Authority (CA). The original intent of this API is to enable provisioning of TLS client certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
and progress on the feature is being tracked as [feature #43](https://github.com/kubernetes/features/issues/43).

## kube-apiserver configuration

The API server should be configured with an [authenticator](/docs/admin/authentication/) that can authenticate tokens as a user in the `system:bootstrappers` group.

This group will later be used in the controller-manager configuration to scope approvals in the default approval
controller. As this feature matures, you should ensure tokens are bound to a Role-Based Access Control (RBAC) policy which limits requests
(using the bootstrap token) strictly to client requests related to certificate provisioning. With RBAC in place, scoping the tokens to a group allows for great flexibility (e.g. you could disable a particular bootstrap group's access when you are done provisioning the nodes).

While any authentication strategy can be used for the kubelet's initial bootstrap credentials, the following two authenticators are recommended for ease of provisioning.

1. [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) - __alpha__
2. [Token authentication file](#token-authentication-file)

Using bootstrap tokens is currently __alpha__ and will simplify the management of bootstrap tokens, especially in an HA scenario.

### Token authentication file
Tokens are arbitrary but should represent at least 128 bits of entropy derived from a secure random number
generator (such as /dev/urandom on most modern systems). There are multiple ways you can generate a token. For example:

`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`.

The token file should look like the following example, where the first three values can be anything and the quoted group
name should be as depicted:

```
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
```

Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file.
See docs [here](/docs/admin/authentication/#static-token-file) for further details.

### Client certificate CA bundle

Add the `--client-ca-file=FILENAME` flag to the kube-apiserver command to enable client certificate authentication,
referencing a certificate authority bundle containing the signing certificate (e.g. `--client-ca-file=/var/lib/kubernetes/ca.pem`).
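
Taken together, the kube-apiserver flags relevant to bootstrapping might look like the following sketch (the file paths are illustrative assumptions):

```
--token-auth-file=/var/lib/kubernetes/token.csv   # contains the bootstrap token entry shown above
--client-ca-file=/var/lib/kubernetes/ca.pem       # CA bundle that verifies kubelet client certificates
```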

## kube-controller-manager configuration
The API for requesting certificates adds a certificate-issuing control loop to the Kubernetes Controller Manager. This takes the form of a
[cfssl](https://blog.cloudflare.com/introducing-cfssl/) local signer using assets on disk. Currently, all certificates issued have one year validity and a default set of key usages.

### Signing assets
You must provide a Certificate Authority in order to provide the cryptographic materials necessary to issue certificates.
This CA should be trusted by kube-apiserver for authentication with the `--client-ca-file=FILENAME` flag. The management
of the CA is beyond the scope of this document but it is recommended that you generate a dedicated CA for Kubernetes.
Both certificate and key are assumed to be PEM-encoded.

The kube-controller-manager flags are:

```
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
```

### Approval controller

In 1.7 the experimental "group auto approver" controller is dropped in favor of the new `csrapproving` controller
that ships as part of [kube-controller-manager](/docs/admin/kube-controller-manager/) and is enabled by default.
The controller uses the [`SubjectAccessReview` API](/docs/admin/authorization/#checking-api-access) to determine
if a given user is authorized to request a CSR, then approves based on the authorization outcome. To prevent
conflicts with other approvers, the built-in approver doesn't explicitly deny CSRs; it only ignores unauthorized requests.

The controller categorizes CSRs into three subresources:

1. `nodeclient` - a request by a user for a client certificate with `O=system:nodes` and `CN=system:node:(node name)`.
2. `selfnodeclient` - a node renewing a client certificate with the same `O` and `CN`.
3. `selfnodeserver` - a node renewing a serving certificate. (ALPHA, requires feature gate)

The check to determine whether a CSR is a `selfnodeserver` request is currently tied to the kubelet's credential rotation
implementation, an __alpha__ feature. As such, the definition of `selfnodeserver` will likely change in a future release and
requires the `RotateKubeletServerCertificate` feature gate on the controller manager. The feature progress can be
tracked at [kubernetes/features#267](https://github.com/kubernetes/features/issues/267).

```
--feature-gates=RotateKubeletServerCertificate=true
```

The following RBAC `ClusterRoles` represent the `nodeclient`, `selfnodeclient`, and `selfnodeserver` capabilities. Similar roles
may be automatically created in future releases.

```yml
# A ClusterRole which instructs the CSR approver to approve a user requesting
# node client credentials.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: approve-node-client-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]
---
# A ClusterRole which instructs the CSR approver to approve a node renewing its
# own client credentials.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: approve-node-client-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeclient"]
  verbs: ["create"]
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
```

These powers can be granted to credentials, such as bootstrapping tokens. For example, to replicate the behavior
of the removed auto-approval flag, which approved all CSRs from a single group:

```
# REMOVED: This flag no longer works as of 1.7.
--insecure-experimental-approve-all-kubelet-csrs-for-group="system:bootstrappers"
```

an admin would create a `ClusterRoleBinding` targeting that group.

```yml
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-client-csr
  apiGroup: rbac.authorization.k8s.io
```

To let a node renew its own credentials, an admin can construct a `ClusterRoleBinding` targeting
that node's credentials:

```yml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: node1-client-cert-renewal
subjects:
- kind: User
  name: system:node:node-1 # Let "node-1" renew its client certificate.
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-client-renewal-csr
  apiGroup: rbac.authorization.k8s.io
```

Deleting the binding will prevent the node from renewing its client credentials, effectively
removing it from the cluster once its certificate expires.

## kubelet configuration
To request a client certificate from kube-apiserver, the kubelet first needs a path to a kubeconfig file that contains the
bootstrap authentication token. You can use `kubectl config set-cluster`, `set-credentials`, and `set-context` to build this kubeconfig. Provide the name `kubelet-bootstrap` to `kubectl config set-credentials` and include `--token=<token-value>` as follows:

```
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
```
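
The remaining kubeconfig entries can be built the same way. The following sketch shows one possible sequence; the server address, CA path, and entry names are illustrative assumptions, not values prescribed by this document:

```
kubectl config set-cluster bootstrap \
  --kubeconfig=bootstrap.kubeconfig \
  --server=https://<apiserver-address>:6443 \
  --certificate-authority=/var/lib/kubernetes/ca.pem \
  --embed-certs=true
kubectl config set-context bootstrap \
  --kubeconfig=bootstrap.kubeconfig \
  --cluster=bootstrap \
  --user=kubelet-bootstrap
kubectl config use-context bootstrap --kubeconfig=bootstrap.kubeconfig
```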

When starting the kubelet, if the file specified by `--kubeconfig` does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. Once the certificate request is approved and the certificate is retrieved by the kubelet, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by `--kubeconfig`. The certificate and key file will be placed in the directory specified by `--cert-dir`.

**Note:** The following flags are required to enable this bootstrapping when starting the kubelet:

```
--require-kubeconfig
--bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig"
```

Additionally, in 1.7 the kubelet implements __alpha__ features for rotating both its client and serving certs.
These can be enabled through the respective `RotateKubeletClientCertificate` and `RotateKubeletServerCertificate` feature
flags on the kubelet, but may change in backward incompatible ways in future releases.

```
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
```

`RotateKubeletClientCertificate` causes the kubelet to rotate its client certificates by creating new CSRs as its existing
credentials expire. `RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after
bootstrapping its client credentials and rotate the certificate. The serving cert currently does not request DNS or IP
SANs.

## kubectl approval
The signing controller does not immediately sign all certificate requests. Instead, it waits until they have been flagged with an
"Approved" status by an appropriately-privileged user. This is intended to eventually be an automated process handled by an external
approval controller, but for the alpha version of the API it can be done manually by a cluster administrator using kubectl.
An administrator can list CSRs with `kubectl get csr` and describe one in detail with `kubectl describe csr <name>`. Before the 1.6 release there were
[no direct approve/deny commands](https://github.com/kubernetes/kubernetes/issues/30163) so an approver had to update
the Status field directly ([rough how-to](https://github.com/gtank/csrctl)). Later versions of Kubernetes offer `kubectl certificate approve <name>` and `kubectl certificate deny <name>` commands.
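
On a 1.6+ cluster, the manual flow might look like the following sketch; the CSR name and output shape are illustrative:

```
$ kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-8b2tq   1m        kubelet-bootstrap   Pending

$ kubectl certificate approve csr-8b2tq
```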
@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
@ -0,0 +1,93 @@
---
approvers:
- mikedanese
title: Configuration Best Practices
---

{% capture overview %}
This document highlights and consolidates configuration best practices that are introduced throughout the user-guide, getting-started documentation, and examples.

This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
{% endcapture %}

{% capture body %}
## General Config Tips

- When defining configurations, specify the latest stable API version (currently v1).

- Configuration files should be stored in version control before being pushed to the cluster. This allows quick roll-back of a configuration if needed. It also aids with cluster re-creation and restoration if necessary.

- Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.

- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.

  Note also that many `kubectl` commands can be called on a directory, so you can also call `kubectl create` on a directory of config files. See below for more details.

- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce error. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/frontend-deployment.yaml) of this.

- Put an object description in an annotation to allow better introspection.


## "Naked" Pods vs Replication Controllers and Jobs

- If there is a viable alternative to naked pods (in other words: pods not bound to a [replication controller](/docs/user-guide/replication-controller)), go with the alternative. Naked pods will not be rescheduled in the event of node failure.

  Replication controllers are almost always preferable to creating pods, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) object (currently in Beta) may also be appropriate.


## Services

- It's typically best to create a [service](/docs/concepts/services-networking/service/) before corresponding [replication controllers](/docs/concepts/workloads/controllers/replicationcontroller/). This lets the scheduler spread the pods that comprise the service.

  You can also use this process to ensure that at least one replica works before creating lots of them, as sketched below:

  1. Create a replication controller without specifying replicas (this will set replicas=1);
  2. Create a service;
  3. Then scale up the replication controller.
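
  A minimal sketch of that sequence (the object and file names are illustrative):

  ```shell
  kubectl create -f ./my-app-rc.yaml        # replicas defaults to 1
  kubectl expose rc my-app --port=80        # create the service
  kubectl scale rc my-app --replicas=3      # then scale up
  ```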

- Don't use `hostPort` unless it is absolutely necessary (for example: for a node daemon). It specifies the port number to expose on the host. When you bind a Pod to a `hostPort`, there are a limited number of places to schedule a pod due to port conflicts: you can only schedule as many such Pods as there are nodes in your Kubernetes cluster.

  If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) or [kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
  You can use a [Service](/docs/concepts/services-networking/service/) object for external service access.

  If you explicitly need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type-nodeport) service before resorting to `hostPort`.

- Avoid using `hostNetwork`, for the same reasons as `hostPort`.

- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/docs/user-guide/services/#headless-services).

## Using Labels

- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (for example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context: for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) app for an example of this approach.

  A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.

- To facilitate rolling updates, include version info in replication controller names, for example as a suffix to the name. It is useful to set a 'version' label as well. The rolling update creates a new controller as opposed to modifying the existing controller, so there will be issues with version-agnostic controller names. See the [documentation](/docs/tasks/run-application/rolling-update-replication-controller/) on the rolling-update command for more detail.

  Note that the [Deployment](/docs/concepts/workloads/controllers/deployment/) object obviates the need to manage replication controller 'version names'. A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate. (Deployment objects are currently part of the [`extensions` API Group](/docs/concepts/overview/kubernetes-api/#api-groups).)

- You can manipulate labels for debugging. Because Kubernetes replication controllers and services match to pods using labels, this allows you to remove a pod from being considered by a controller, or served traffic by a service, by removing the relevant selector labels. If you remove the labels of an existing pod, its controller will create a new pod to take its place. This is a useful way to debug a previously "live" pod in a quarantine environment. See the [`kubectl label`](/docs/concepts/overview/working-with-objects/labels/) command.

## Container Images

- The [default container image pull policy](/docs/concepts/containers/images/) is `IfNotPresent`, which causes the [Kubelet](/docs/admin/kubelet/) to not pull an image if it already exists. If you would like to always force a pull, you must specify a pull image policy of `Always` in your .yaml file (`imagePullPolicy: Always`) or specify a `:latest` tag on your image.

  That is, if you're specifying an image with other than the `:latest` tag, for example `myimage:v1`, and there is an image update to that same tag, the Kubelet won't pull the updated image. You can address this by ensuring that any updates to an image bump the image tag as well (for example, `myimage:v2`), and ensuring that your configs point to the correct version.

  **Note:** You should avoid using the `:latest` tag when deploying containers in production, because this makes it hard to track which version of the image is running and hard to roll back.

- To work only with a specific version of an image, you can specify an image with its digest (SHA256). This approach guarantees that the image will never update. For detailed information about working with image digests, see [the Docker documentation](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier).
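
  A minimal sketch of a container spec pinned by digest (the image name and digest value below are illustrative placeholders, not real artifacts):

  ```yaml
  spec:
    containers:
    - name: myapp
      # Pinning by digest makes the reference immutable even if tags move.
      image: registry.example.com/myimage@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
      imagePullPolicy: IfNotPresent
  ```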

## Using kubectl

- Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to `create`.

- Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated.

- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively).

- Use `kubectl run` and `expose` to quickly create and expose single container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example.
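
  For instance, a quick single-container Deployment plus Service might be created like this (the names and image are illustrative):

  ```shell
  kubectl run my-nginx --image=nginx --port=80
  kubectl expose deployment my-nginx --port=80 --type=NodePort
  ```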

{% endcapture %}

{% include templates/concept.md %}
@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0
@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0
@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
@ -0,0 +1,25 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: curl-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: curlpod
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: curlpod
        command:
        - sh
        - -c
        - while true; do sleep 1; done
        image: radial/busyboxplus:curl
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
@ -0,0 +1,298 @@
---
approvers:
- bprashanth
title: Ingress Resources
---

* TOC
{:toc}

__Terminology__

Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere, which might cause confusion. This section attempts to clarify them.

* Node: A single virtual or physical machine in a Kubernetes cluster.
* Cluster: A group of nodes firewalled from the internet; these are the primary compute resources managed by Kubernetes.
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/concepts/cluster-administration/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](/docs/admin/ovs-networking/).
* Service: A Kubernetes [Service](/docs/concepts/services-networking/service/) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.

## What is Ingress?

Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:

```
    internet
        |
  ------------
  [ Services ]
```

An Ingress is a collection of rules that allow inbound connections to reach the cluster services.

```
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
```

It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting etc. Users request ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.

## Prerequisites

Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource will have no effect.

GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://git.k8s.io/ingress/controllers/nginx#running-multiple-ingress-controllers) and [here](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).

Make sure you review the [beta limitations](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://git.k8s.io/ingress/controllers) as a pod.

## The Ingress Resource

A minimal Ingress might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```

*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*

__Lines 1-6__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/) and [ingress configuration rewrite](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md#rewrite).

__Lines 7-9__: Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.

__Lines 10-11__: Each http rule contains the following information: a host (e.g.: foo.bar.com, defaults to * in this example), and a list of paths (e.g.: /testpath), each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.

__Lines 12-14__: A backend is a service:port combination as described in the [services doc](/docs/concepts/services-networking/service/). Ingress traffic is typically sent directly to the endpoints matching a backend.

__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters; see the [API reference](https://releases.k8s.io/{{page.githubbranch}}/staging/src/k8s.io/api/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend, in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller.

## Ingress controllers

In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://git.k8s.io/ingress/controllers).

## Before you begin

The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://git.k8s.io/ingress/controllers/gce/README.md) and [here](https://git.k8s.io/ingress/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.

## Types of Ingress

### Single Service Ingress

There are existing Kubernetes concepts that allow you to expose a single service (see [alternatives](#alternatives)), however you can do so through an Ingress as well, by specifying a *default backend* with no rules.

{% include code.html language="yaml" file="ingress.yaml" ghlink="/docs/concepts/services-networking/ingress.yaml" %}

If you create it using `kubectl create -f` you should see:

```shell
$ kubectl get ing
NAME           RULE      BACKEND      ADDRESS
test-ingress   -         testsvc:80   107.178.254.228
```

Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.

### Simple fanout

As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available loadbalancer. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:

```shell
foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80
```

would require an Ingress such as:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
```

When you create the Ingress with `kubectl create -f`:

```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
```
The Ingress controller will provision an implementation-specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.

### Name based virtual hosting

Name-based virtual hosts use multiple host names for the same IP address.

```
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
```

The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```

__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
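
A sketch of that combination, assuming a `default-404` Service exists to serve the error page (the service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: with-default-backend
spec:
  backend:
    serviceName: default-404   # catches traffic that matches no rule
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
```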

### TLS

You can secure an Ingress by specifying a [secret](/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:

```yaml
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```

Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
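
One way to create such a secret from existing certificate files is `kubectl create secret tls` (the file paths are illustrative; note that this produces a secret of type `kubernetes.io/tls` rather than `Opaque`):

```shell
kubectl create secret tls testsecret --cert=path/to/tls.crt --key=path/to/tls.key
```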

Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress/controllers/nginx/README.md#https), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.

### Loadbalancing

An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.

It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://git.k8s.io/ingress/controllers/nginx/README.md), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#health-checks)).

## Updating an Ingress

Say you'd like to add a new host to an existing Ingress. You can update it by editing the resource:

```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
```

This should pop up an editor with the existing yaml. Modify it to include the new host:

```yaml
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /foo
..
```

Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.

```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
```

You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.

## Failing across availability zones

Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the relevant Ingress controller for details. Please refer to the federation [doc](/docs/concepts/cluster-administration/federation/) for details on deploying Ingress in a federated cluster.

## Future Work

* Various modes of HTTPS/TLS support (e.g.: SNI, re-encryption)
* Requesting an IP or Hostname via claims
* Combining L4 and L7 Ingress
* More Ingress controllers

Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the evolution of various Ingress controllers.

## Alternatives

You can expose a Service in multiple ways that don't directly involve the Ingress resource:

* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport)
* Use a [Port Proxy](https://git.k8s.io/contrib/for-demos/proxy-to-service)
* Deploy the [Service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
@ -0,0 +1,9 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
@ -0,0 +1,43 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
@ -0,0 +1,17 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
@ -0,0 +1,18 @@
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
@ -0,0 +1,36 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
@ -0,0 +1,935 @@
---
approvers:
- bgrant0607
- janetkuo
title: Deployments
---

{% capture overview %}

A _Deployment_ controller provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).

You describe a _desired state_ in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

**Note:** You should not manage ReplicaSets owned by a Deployment. All the use cases should be covered by manipulating the Deployment object. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{: .note}

{% endcapture %}


{% capture body %}

## Use Case

The following are typical use cases for Deployments:

* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
* [Scale up the Deployment to facilitate more load.](#scaling-a-deployment)
* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has gotten stuck.
* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore.


## Creating a Deployment

Here is an example Deployment. It creates a ReplicaSet to bring up three nginx Pods.

{% include code.html language="yaml" file="nginx-deployment.yaml" ghlink="/docs/concepts/workloads/controllers/nginx-deployment.yaml" %}

Run the example by downloading the example file and then running this command:

```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created
```

Setting the kubectl flag `--record` to `true` allows you to record the current command in the annotations of
the resources being created or updated. It is useful for future introspection: for example, to see the
commands executed in each Deployment revision.
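
For instance, once a rollout has been recorded this way, the per-revision commands can be inspected with `kubectl rollout history` (the exact output shape shown here is illustrative):

```shell
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
```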

Then running `get` immediately will give:

```shell
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s
```

This indicates that the Deployment's number of desired replicas is 3 (according to deployment's `.spec.replicas`),
the number of current replicas (`.status.replicas`) is 0, the number of up-to-date replicas (`.status.updatedReplicas`)
is 0, and the number of available replicas (`.status.availableReplicas`) is also 0.

To see the Deployment rollout status, run:

```shell
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
```

Running the `get` again a few seconds later should give:

```shell
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s
```

This indicates that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the
latest pod template) and available (pod status is ready for at least the Deployment's `.spec.minReadySeconds`). Running
`kubectl get rs` and `kubectl get pods` will show the ReplicaSet (RS) and Pods created.

```shell
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-2035384211   3         3         3       18s
```

You may notice that the name of the ReplicaSet is always `<the name of the Deployment>-<hash value of the pod template>`.

```shell
$ kubectl get pods --show-labels
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-2035384211-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
```

The created ReplicaSet ensures that there are three nginx Pods at all times.

**Note:** You must specify an appropriate selector and pod template labels in a Deployment (in this case,
`app = nginx`). That is, the selector must not overlap with those of other controllers (including other Deployments, ReplicaSets,
StatefulSets, etc.). Kubernetes doesn't stop you from overlapping, and if multiple
controllers have overlapping selectors, those controllers may fight with each other and won't behave
correctly.
{: .note}

### Pod-template-hash label

**Note:** Do not change this label.
{: .note}

Note the pod-template-hash label in the example output in the pod labels above. This label is added by the
Deployment controller to every ReplicaSet that a Deployment creates or adopts. Its purpose is to make sure that child
ReplicaSets of a Deployment do not overlap. It is computed by hashing the PodTemplate of the ReplicaSet
and using the resulting hash as the label value that will be added in the ReplicaSet selector, pod template labels,
and in any existing Pods that the ReplicaSet may have.
|
||||
|
||||
## Updating a Deployment
|
||||
|
||||
**Note:** A Deployment's rollout is triggered if and only if the Deployment's pod template (that is, `.spec.template`)
|
||||
is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
|
||||
{: .note}
|
||||
|
||||
Suppose that we now want to update the nginx Pods to use the `nginx:1.9.1` image
|
||||
instead of the `nginx:1.7.9` image.
|
||||
|
||||
```shell
|
||||
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
|
||||
deployment "nginx-deployment" image updated
|
||||
```
|
||||
|
||||
Alternatively, we can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
|
||||
|
||||
```shell
|
||||
$ kubectl edit deployment/nginx-deployment
|
||||
deployment "nginx-deployment" edited
|
||||
```
|
||||
|
||||
To see the rollout status, run:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout status deployment/nginx-deployment
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
deployment "nginx-deployment" successfully rolled out
|
||||
```
|
||||
|
||||
After the rollout succeeds, you may want to `get` the Deployment:
|
||||
|
||||
```shell
|
||||
$ kubectl get deployments
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3 3 3 3 36s
|
||||
```
|
||||
|
||||
The number of up-to-date replicas indicates that the Deployment has updated the replicas to the latest configuration.
The number of current replicas indicates the total number of replicas this Deployment manages, and the number of
available replicas indicates how many of the current replicas are available.
|
||||
|
||||
We can run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
|
||||
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
|
||||
|
||||
```shell
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1564180365 3 3 3 6s
|
||||
nginx-deployment-2035384211 0 0 0 36s
|
||||
```
|
||||
|
||||
Running `get pods` should now show only the new Pods:
|
||||
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-deployment-1564180365-khku8 1/1 Running 0 14s
|
||||
nginx-deployment-1564180365-nacti 1/1 Running 0 14s
|
||||
nginx-deployment-1564180365-z9gth 1/1 Running 0 14s
|
||||
```
|
||||
|
||||
Next time we want to update these Pods, we only need to update the Deployment's pod template again.
|
||||
|
||||
A Deployment can ensure that only a certain number of Pods may be down while they are being updated. By
default, it ensures that at least 1 fewer than the desired number of Pods are up (1 max unavailable).

A Deployment can also ensure that only a certain number of Pods may be created above the desired number of
Pods. By default, it ensures that at most 1 more than the desired number of Pods are up (1 max surge).

In a future version of Kubernetes, these defaults will change from 1-1 to 25%-25%.
|
||||
|
||||
For example, if you look at the above Deployment closely, you will see that it first created a new Pod,
then deleted some old Pods and created new ones. It does not kill old Pods until a sufficient number of
new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
It makes sure that the number of available Pods is at least 2 and the total number of Pods is at most 4.
|
||||
|
||||
```shell
|
||||
$ kubectl describe deployments
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 15 Mar 2016 12:01:06 -0700
|
||||
Labels: app=nginx
|
||||
Selector: app=nginx
|
||||
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 1 max unavailable, 1 max surge
|
||||
OldReplicaSets: <none>
|
||||
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
36s 36s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
|
||||
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
|
||||
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
|
||||
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
|
||||
```
|
||||
|
||||
Here we see that when we first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
|
||||
and scaled it up to 3 replicas directly. When we updated the Deployment, it created a new ReplicaSet
|
||||
(nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at
|
||||
least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down
|
||||
the new and the old ReplicaSet, with the same rolling update strategy. Finally, we'll have 3 available replicas
|
||||
in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
|
||||
|
||||
### Rollover (aka multiple updates in-flight)
|
||||
|
||||
Each time a new Deployment object is observed by the Deployment controller, a ReplicaSet is created to bring up
the desired Pods if no existing ReplicaSet is doing so. Existing ReplicaSets controlling Pods whose labels
match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new
ReplicaSet will be scaled to `.spec.replicas` and all old ReplicaSets will be scaled to 0.
|
||||
|
||||
If you update a Deployment while an existing rollout is in progress, the Deployment will create a new ReplicaSet
|
||||
as per the update and start scaling that up, and will roll over the ReplicaSet that it was scaling up previously
|
||||
-- it will add it to its list of old ReplicaSets and will start scaling it down.
|
||||
|
||||
For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`,
but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, the Deployment will immediately start
killing the 3 `nginx:1.7.9` Pods that it had created, and will start creating
`nginx:1.9.1` Pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created
before changing course.
|
||||
|
||||
### Label selector updates
|
||||
|
||||
It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front.
|
||||
In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped
|
||||
all of the implications.
|
||||
|
||||
* Selector additions require the pod template labels in the Deployment spec to be updated with the new label too,
otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does
not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and
creating a new ReplicaSet (see the sketch after this list).
|
||||
* Selector updates -- that is, changing the existing value in a selector key -- result in the same behavior as additions.
* Selector removals -- that is, removing an existing key from the Deployment selector -- do not require any changes in the
pod template labels. No existing ReplicaSet is orphaned, and a new ReplicaSet is not created, but note that the
removed label still exists in any existing Pods and ReplicaSets.
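
For example, a minimal sketch of a selector addition (the `environment` key and its value are hypothetical): the new
key has to appear in both the selector and the pod template labels, otherwise the API server returns a validation
error.

```yaml
spec:
  selector:
    matchLabels:
      app: nginx
      environment: production    # newly added selector key (hypothetical)
  template:
    metadata:
      labels:
        app: nginx
        environment: production  # must be added to the pod template labels too
```

The same requirement applies to selector updates, since they behave like additions.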
|
||||
|
||||
## Rolling Back a Deployment
|
||||
|
||||
Sometimes you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping.
By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want
(you can change that by modifying the revision history limit).
|
||||
|
||||
**Note:** A Deployment's revision is created when a Deployment's rollout is triggered. This means that a
new revision is created if and only if the Deployment's pod template (`.spec.template`) is changed,
for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment,
do not create a Deployment revision, in order to facilitate simultaneous manual or automatic scaling.
This means that when you roll back to an earlier revision, only the Deployment's pod template part is
rolled back.
|
||||
{: .note}
|
||||
|
||||
Suppose that we made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
|
||||
|
||||
```shell
|
||||
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.91
|
||||
deployment "nginx-deployment" image updated
|
||||
```
|
||||
|
||||
The rollout will get stuck.
|
||||
|
||||
```shell
|
||||
$ kubectl rollout status deployments nginx-deployment
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
```
|
||||
|
||||
Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
|
||||
[read more here](#deployment-status).
|
||||
|
||||
You will also see that both the number of old replicas (nginx-deployment-1564180365 and
|
||||
nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.
|
||||
|
||||
```shell
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1564180365 2 2 0 25s
|
||||
nginx-deployment-2035384211 0 0 0 36s
|
||||
nginx-deployment-3066724191 2 2 2 6s
|
||||
```
|
||||
|
||||
Looking at the Pods created, you will see that the 2 Pods created by the new ReplicaSet are stuck in an image pull loop.
|
||||
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
|
||||
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
|
||||
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
|
||||
nginx-deployment-3066724191-eocby 0/1 ImagePullBackOff 0 6s
|
||||
```
|
||||
|
||||
**Note:** The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new
ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified.
Kubernetes sets `maxUnavailable` to 1 by default, and `.spec.replicas` also defaults to 1, so if you haven't set
those parameters explicitly, your Deployment can be 100% unavailable by default! This will be fixed in a future
version of Kubernetes.
|
||||
{: .note}
|
||||
|
||||
```shell
|
||||
$ kubectl describe deployment
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
|
||||
Labels: app=nginx
|
||||
Selector: app=nginx
|
||||
Replicas: 2 updated | 3 total | 2 available | 2 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 1 max unavailable, 1 max surge
|
||||
OldReplicaSets: nginx-deployment-1564180365 (2/2 replicas created)
|
||||
NewReplicaSet: nginx-deployment-3066724191 (2/2 replicas created)
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-1564180365 to 2
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 2
|
||||
```
|
||||
|
||||
To fix this, we need to roll back to a previous revision of the Deployment that is stable.
|
||||
|
||||
### Checking Rollout History of a Deployment
|
||||
|
||||
First, check the revisions of this deployment:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout history deployment/nginx-deployment
|
||||
deployments "nginx-deployment"
|
||||
REVISION CHANGE-CAUSE
|
||||
1 kubectl create -f docs/user-guide/nginx-deployment.yaml --record
|
||||
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
|
||||
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
|
||||
```
|
||||
|
||||
Because we recorded the command while creating this Deployment using `--record`, we can easily see
|
||||
the changes we made in each revision.
|
||||
|
||||
To further see the details of each revision, run:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout history deployment/nginx-deployment --revision=2
|
||||
deployments "nginx-deployment" revision 2
|
||||
Labels: app=nginx
|
||||
pod-template-hash=1159050644
|
||||
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.9.1
|
||||
Port: 80/TCP
|
||||
QoS Tier:
|
||||
cpu: BestEffort
|
||||
memory: BestEffort
|
||||
Environment Variables: <none>
|
||||
No volumes.
|
||||
```
|
||||
|
||||
### Rolling Back to a Previous Revision
|
||||
|
||||
Now we've decided to undo the current rollout and roll back to the previous revision:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout undo deployment/nginx-deployment
|
||||
deployment "nginx-deployment" rolled back
|
||||
```
|
||||
|
||||
Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
|
||||
deployment "nginx-deployment" rolled back
|
||||
```
|
||||
|
||||
For more details about rollout related commands, read [`kubectl rollout`](/docs/user-guide/kubectl/{{page.version}}/#rollout).
|
||||
|
||||
The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
for rolling back to revision 2 is generated by the Deployment controller.
|
||||
|
||||
```shell
|
||||
$ kubectl get deployment
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3 3 3 3 30m
|
||||
|
||||
$ kubectl describe deployment
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
|
||||
Labels: app=nginx
|
||||
Selector: app=nginx
|
||||
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 1 max unavailable, 1 max surge
|
||||
OldReplicaSets: <none>
|
||||
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
30m 30m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 2
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
|
||||
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-1564180365 to 2
|
||||
2m 2m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-3066724191 to 0
|
||||
2m 2m 1 {deployment-controller } Normal DeploymentRollback Rolled back deployment "nginx-deployment" to revision 2
|
||||
29m 2m 2 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
|
||||
```
|
||||
|
||||
## Scaling a Deployment
|
||||
|
||||
You can scale a Deployment by using the following command:
|
||||
|
||||
```shell
|
||||
$ kubectl scale deployment nginx-deployment --replicas=10
|
||||
deployment "nginx-deployment" scaled
|
||||
```
|
||||
|
||||
Assuming [horizontal pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled
|
||||
in your cluster, you can setup an autoscaler for your Deployment and choose the minimum and maximum number of
|
||||
Pods you want to run based on the CPU utilization of your existing Pods.
|
||||
|
||||
```shell
|
||||
$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
|
||||
deployment "nginx-deployment" autoscaled
|
||||
```
|
||||
|
||||
### Proportional scaling
|
||||
|
||||
RollingUpdate Deployments support running multiple versions of an application at the same time. When you
|
||||
or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
|
||||
or paused), then the Deployment controller will balance the additional replicas in the existing active
|
||||
ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
|
||||
|
||||
For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 10 10 10 10 50s
|
||||
```
|
||||
|
||||
You update to a new image which happens to be unresolvable from inside the cluster.
|
||||
|
||||
```shell
|
||||
$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
|
||||
deployment "nginx-deployment" image updated
|
||||
```
|
||||
|
||||
The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the
|
||||
maxUnavailable requirement that we mentioned above.
|
||||
|
||||
```shell
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1989198191 5 5 0 9s
|
||||
nginx-deployment-618515232 8 8 8 1m
|
||||
```
|
||||
|
||||
Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
|
||||
to 15. The Deployment controller needs to decide where to add these new 5 replicas. If we weren't using
|
||||
proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, we
|
||||
spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
|
||||
most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
|
||||
ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.
|
||||
|
||||
In our example above, 3 replicas will be added to the old ReplicaSet and 2 replicas will be added to the
|
||||
new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
|
||||
the new replicas become healthy.
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 15 18 7 8 7m
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1989198191 7 7 0 7m
|
||||
nginx-deployment-618515232 11 11 11 7m
|
||||
```
|
||||
|
||||
## Pausing and Resuming a Deployment
|
||||
|
||||
You can pause a Deployment before triggering one or more updates and then resume it. This will allow you to
|
||||
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
|
||||
|
||||
For example, with a Deployment that was just created:
|
||||
|
||||
```shell
|
||||
$ kubectl get deploy
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx 3 3 3 3 1m
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-2142116321 3 3 3 1m
|
||||
```
|
||||
|
||||
Pause by running the following command:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout pause deployment/nginx-deployment
|
||||
deployment "nginx-deployment" paused
|
||||
```
|
||||
|
||||
Then update the image of the Deployment:
|
||||
|
||||
```shell
|
||||
$ kubectl set image deploy/nginx-deployment nginx=nginx:1.9.1
|
||||
deployment "nginx-deployment" image updated
|
||||
```
|
||||
|
||||
Notice that no new rollout started:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout history deploy/nginx-deployment
|
||||
deployments "nginx"
|
||||
REVISION CHANGE-CAUSE
|
||||
1 <none>
|
||||
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-2142116321 3 3 3 2m
|
||||
```
|
||||
|
||||
You can make as many updates as you wish, for example, update the resources that will be used:
|
||||
|
||||
```shell
|
||||
$ kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi
|
||||
deployment "nginx" resource requirements updated
|
||||
```
|
||||
|
||||
The initial state of the Deployment prior to pausing it will continue to function, but new updates to
the Deployment will not have any effect as long as the Deployment is paused.
|
||||
|
||||
Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
|
||||
|
||||
```shell
|
||||
$ kubectl rollout resume deploy/nginx-deployment
|
||||
deployment "nginx" resumed
|
||||
$ kubectl get rs -w
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-2142116321 2 2 2 2m
|
||||
nginx-3926361531 2 2 0 6s
|
||||
nginx-3926361531 2 2 1 18s
|
||||
nginx-2142116321 1 2 2 2m
|
||||
nginx-2142116321 1 2 2 2m
|
||||
nginx-3926361531 3 2 1 18s
|
||||
nginx-3926361531 3 2 1 18s
|
||||
nginx-2142116321 1 1 1 2m
|
||||
nginx-3926361531 3 3 1 18s
|
||||
nginx-3926361531 3 3 2 19s
|
||||
nginx-2142116321 0 1 1 2m
|
||||
nginx-2142116321 0 1 1 2m
|
||||
nginx-2142116321 0 0 0 2m
|
||||
nginx-3926361531 3 3 3 20s
|
||||
^C
|
||||
$ kubectl get rs
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-2142116321 0 0 0 2m
|
||||
nginx-3926361531 3 3 3 28s
|
||||
```
|
||||
|
||||
**Note:** You cannot roll back a paused Deployment until you resume it.
|
||||
{: .note}
|
||||
|
||||
## Deployment status
|
||||
|
||||
A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while
|
||||
rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment).
|
||||
|
||||
### Progressing Deployment
|
||||
|
||||
Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed:
|
||||
|
||||
* The Deployment creates a new ReplicaSet.
|
||||
* The Deployment is scaling up its newest ReplicaSet.
|
||||
* The Deployment is scaling down its older ReplicaSet(s).
|
||||
* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).
|
||||
|
||||
You can monitor the progress for a Deployment by using `kubectl rollout status`.
|
||||
|
||||
### Complete Deployment
|
||||
|
||||
Kubernetes marks a Deployment as _complete_ when it has the following characteristics:
|
||||
|
||||
* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
|
||||
updates you've requested have been completed.
|
||||
* All of the replicas associated with the Deployment are available.
|
||||
* No old replicas for the Deployment are running.
|
||||
|
||||
You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
|
||||
successfully, `kubectl rollout status` returns a zero exit code.
|
||||
|
||||
```shell
|
||||
$ kubectl rollout status deploy/nginx-deployment
|
||||
Waiting for rollout to finish: 2 of 3 updated replicas are available...
|
||||
deployment "nginx" successfully rolled out
|
||||
$ echo $?
|
||||
0
|
||||
```
|
||||
|
||||
### Failed Deployment
|
||||
|
||||
Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur
|
||||
due to some of the following factors:
|
||||
|
||||
* Insufficient quota
|
||||
* Readiness probe failures
|
||||
* Image pull errors
|
||||
* Insufficient permissions
|
||||
* Limit ranges
|
||||
* Application runtime misconfiguration
|
||||
|
||||
One way you can detect this condition is to specify a deadline parameter in your Deployment spec,
[`spec.progressDeadlineSeconds`](#progress-deadline-seconds). `spec.progressDeadlineSeconds` denotes the
number of seconds the Deployment controller waits before indicating (in the Deployment status) that the
Deployment progress has stalled.
|
||||
|
||||
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
|
||||
lack of progress for a Deployment after 10 minutes:
|
||||
|
||||
```shell
|
||||
$ kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
|
||||
"nginx-deployment" patched
|
||||
```
|
||||
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
|
||||
attributes to the Deployment's `status.conditions`:
|
||||
|
||||
* Type=Progressing
|
||||
* Status=False
|
||||
* Reason=ProgressDeadlineExceeded
|
||||
|
||||
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md#typical-status-properties) for more information on status conditions.
|
||||
|
||||
**Note:** Kubernetes will take no action on a stalled Deployment other than to report a status condition with
|
||||
`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
|
||||
example, rollback the Deployment to its previous version.
|
||||
{: .note}
|
||||
|
||||
**Note:** If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can
|
||||
safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the
|
||||
deadline.
|
||||
{: .note}
|
||||
|
||||
You may experience transient errors with your Deployments, either due to a low timeout that you have set or
|
||||
due to any other kind of error that can be treated as transient. For example, let's suppose you have
|
||||
insufficient quota. If you describe the Deployment you will notice the following section:
|
||||
|
||||
```shell
|
||||
$ kubectl describe deployment nginx-deployment
|
||||
<...>
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True ReplicaSetUpdated
|
||||
ReplicaFailure True FailedCreate
|
||||
<...>
|
||||
```
|
||||
|
||||
If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status might look like this:
|
||||
|
||||
```
|
||||
status:
|
||||
availableReplicas: 2
|
||||
conditions:
|
||||
- lastTransitionTime: 2016-10-04T12:25:39Z
|
||||
lastUpdateTime: 2016-10-04T12:25:39Z
|
||||
message: Replica set "nginx-deployment-4262182780" is progressing.
|
||||
reason: ReplicaSetUpdated
|
||||
status: "True"
|
||||
type: Progressing
|
||||
- lastTransitionTime: 2016-10-04T12:25:42Z
|
||||
lastUpdateTime: 2016-10-04T12:25:42Z
|
||||
message: Deployment has minimum availability.
|
||||
reason: MinimumReplicasAvailable
|
||||
status: "True"
|
||||
type: Available
|
||||
- lastTransitionTime: 2016-10-04T12:25:39Z
|
||||
lastUpdateTime: 2016-10-04T12:25:39Z
|
||||
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
|
||||
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
|
||||
reason: FailedCreate
|
||||
status: "True"
|
||||
type: ReplicaFailure
|
||||
observedGeneration: 3
|
||||
replicas: 2
|
||||
unavailableReplicas: 2
|
||||
```
|
||||
|
||||
Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the
|
||||
reason for the Progressing condition:
|
||||
|
||||
```
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing False ProgressDeadlineExceeded
|
||||
ReplicaFailure True FailedCreate
|
||||
```
|
||||
|
||||
You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
|
||||
controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
|
||||
conditions and the Deployment controller then completes the Deployment rollout, you'll see the
|
||||
Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).
|
||||
|
||||
```
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True NewReplicaSetAvailable
|
||||
```
|
||||
|
||||
`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
|
||||
by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
|
||||
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
|
||||
required new replicas are available (see the Reason of the condition for the particulars - in our case
|
||||
`Reason=NewReplicaSetAvailable` means that the Deployment is complete).
|
||||
|
||||
You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
|
||||
returns a non-zero exit code if the Deployment has exceeded the progression deadline.
|
||||
|
||||
```shell
|
||||
$ kubectl rollout status deploy/nginx-deployment
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
error: deployment "nginx" exceeded its progress deadline
|
||||
$ echo $?
|
||||
1
|
||||
```
|
||||
|
||||
### Operating on a failed deployment
|
||||
|
||||
All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
|
||||
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment pod template.
|
||||
|
||||
## Clean up Policy
|
||||
|
||||
You can set the `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
all revision history is kept; in a future version, the default will change to 2.
|
||||
|
||||
**Note:** Explicitly setting this field to 0 will result in cleaning up all of your Deployment's history,
so the Deployment will not be able to roll back.
|
||||
{: .note}
|
||||
|
||||
## Use Cases
|
||||
|
||||
### Canary Deployment
|
||||
|
||||
If you want to roll out releases to a subset of users or servers using the Deployment, you
|
||||
can create multiple Deployments, one for each release, following the canary pattern described in
|
||||
[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments).
|
||||
|
||||
## Writing a Deployment Spec
|
||||
|
||||
As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields.
|
||||
For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
|
||||
configuring containers, and [using kubectl to manage resources](/docs/tutorials/object-management-kubectl/object-management/) documents.
|
||||
|
||||
A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).
|
||||
|
||||
### Pod Template
|
||||
|
||||
The `.spec.template` is the only required field of the `.spec`.
|
||||
|
||||
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
|
||||
`apiVersion` or `kind`.
|
||||
|
||||
In addition to the required fields for a Pod, a pod template in a Deployment must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).

Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/) equal to `Always` is
allowed, which is the default if not specified.
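
For illustration, a minimal sketch of the pod template portion of the nginx Deployment used in this page, with the
implicit restart policy spelled out (you would normally omit `restartPolicy` and rely on the default):

```yaml
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always   # the default, and the only value allowed in a Deployment
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```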
|
||||
|
||||
### Replicas
|
||||
|
||||
`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1.
|
||||
|
||||
### Selector
|
||||
|
||||
`.spec.selector` is an optional field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/)
|
||||
for the Pods targeted by this deployment.
|
||||
|
||||
If specified, `.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by
|
||||
the API. If `.spec.selector` is unspecified, `.spec.selector.matchLabels` defaults to
|
||||
`.spec.template.metadata.labels`.
|
||||
|
||||
A Deployment may terminate Pods whose labels match the selector if their template is different
|
||||
from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new
|
||||
Pods with `.spec.template` if the number of Pods is less than the desired number.
|
||||
|
||||
**Note:** You should not create other pods whose labels match this selector, either directly, by creating
|
||||
another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
|
||||
do so, the first Deployment thinks that it created these other pods. Kubernetes does not stop you from doing this.
|
||||
{: .note}
|
||||
|
||||
If you have multiple controllers that have overlapping selectors, the controllers will fight with each
|
||||
other and won't behave correctly.
|
||||
|
||||
### Strategy
|
||||
|
||||
`.spec.strategy` specifies the strategy used to replace old Pods by new ones.
|
||||
`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
|
||||
the default value.
|
||||
|
||||
#### Recreate Deployment
|
||||
|
||||
All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
|
||||
|
||||
#### Rolling Update Deployment
|
||||
|
||||
The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
|
||||
fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
|
||||
the rolling update process.
|
||||
|
||||
##### Max Unavailable
|
||||
|
||||
`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by
rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
|
||||
|
||||
For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
|
||||
Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled
|
||||
down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
|
||||
at all times during the update is at least 70% of the desired Pods.
|
||||
|
||||
##### Max Surge
|
||||
|
||||
`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
percentage of desired Pods (for example, 10%). The value cannot be 0 if `maxUnavailable` is 0. The absolute number
is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
total number of Pods running at any time during the update is at most 130% of desired Pods.
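
Putting the two fields together, a strategy section matching the 30% examples above might look like this sketch (the
percentages are illustrative; omit the section entirely to use the defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%
      maxSurge: 30%
```

With 10 desired replicas, for instance, this allows at most 3 Pods above the desired count and at most 3 unavailable
Pods at any point during the update.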
|
||||
|
||||
### Progress Deadline Seconds
|
||||
|
||||
`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
to wait for your Deployment to progress before the system reports back that the Deployment has
[failed progressing](#failed-deployment), surfaced as a condition with `Type=Progressing`, `Status=False`,
and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
controller will roll back a Deployment as soon as it observes such a condition.
|
||||
|
||||
If specified, this field needs to be greater than `.spec.minReadySeconds`.
|
||||
|
||||
### Min Ready Seconds
|
||||
|
||||
`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
created Pod should be ready without any of its containers crashing, for it to be considered available.
This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
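
A small sketch combining this field with the progress deadline discussed above (the values are illustrative; remember
that the deadline must be greater than `minReadySeconds`):

```yaml
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600
```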
|
||||
|
||||
### Rollback To
|
||||
|
||||
`.spec.rollbackTo` is an optional field with the configuration the Deployment
|
||||
should roll back to. Setting this field triggers a rollback, and this field will
|
||||
be cleared by the server after a rollback is done.
|
||||
|
||||
Because this field will be cleared by the server, it should not be used
|
||||
declaratively. For example, you should not perform `kubectl apply` with a
|
||||
manifest with `.spec.rollbackTo` field set.
|
||||
|
||||
#### Revision
|
||||
|
||||
`.spec.rollbackTo.revision` is an optional field specifying the revision to roll
back to. Setting it to 0 means rolling back to the last revision in history;
otherwise, it means rolling back to the specified revision. This defaults to 0 when
[`spec.rollbackTo`](#rollback-to) is set.
|
||||
|
||||
### Revision History Limit
|
||||
|
||||
A Deployment's revision history is stored in the ReplicaSets it controls.

`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain
to allow rollback. Its ideal value depends on the frequency and stability of new Deployments. If this field
is not set, all old ReplicaSets will be kept by default, consuming resources in `etcd` and crowding the output
of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore,
once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
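
For example, a sketch that keeps only the ten most recent old ReplicaSets (the value 10 is illustrative):

```yaml
spec:
  revisionHistoryLimit: 10
```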
|
||||
|
||||
### Paused
|
||||
|
||||
`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between
a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused
Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
it is created.
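
A sketch of a Deployment created in a paused state, which is equivalent to running `kubectl rollout pause` right after
creating it:

```yaml
spec:
  paused: true
```

When you later resume it, any pod template changes accumulated while it was paused roll out at once.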
|
||||
|
||||
## Alternative to Deployments
|
||||
|
||||
### kubectl rolling update
|
||||
|
||||
[Kubectl rolling update](/docs/user-guide/kubectl/{{page.version}}/#rolling-update) updates Pods and ReplicationControllers
|
||||
in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have
|
||||
additional features, such as rolling back to any previous revision even after the rolling update is done.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/concept.md %}
|
|
@ -0,0 +1,45 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: ReplicaSet
|
||||
metadata:
|
||||
name: frontend
|
||||
# these labels can be applied automatically
|
||||
# from the labels in the pod template if not set
|
||||
# labels:
|
||||
# app: guestbook
|
||||
# tier: frontend
|
||||
spec:
|
||||
# this replicas value is default
|
||||
# modify it according to your case
|
||||
replicas: 3
|
||||
# selector can be applied automatically
|
||||
# from the labels in the pod template if not set,
|
||||
# but we are specifying the selector here to
|
||||
# demonstrate its usage.
|
||||
selector:
|
||||
matchLabels:
|
||||
tier: frontend
|
||||
matchExpressions:
|
||||
- {key: tier, operator: In, values: [frontend]}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
spec:
|
||||
containers:
|
||||
- name: php-redis
|
||||
image: gcr.io/google_samples/gb-frontend:v3
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
env:
|
||||
- name: GET_HOSTS_FROM
|
||||
value: dns
|
||||
# If your cluster config does not include a dns service, then to
|
||||
# instead access environment variables to find service host
|
||||
# info, comment out the 'value: dns' line above, and uncomment the
|
||||
# line below.
|
||||
# value: env
|
||||
ports:
|
||||
- containerPort: 80
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: autoscaling/v1
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
name: frontend-scaler
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
kind: ReplicaSet
|
||||
name: frontend
|
||||
minReplicas: 3
|
||||
maxReplicas: 10
|
||||
targetCPUUtilizationPercentage: 50
|
|
@ -0,0 +1,15 @@
|
|||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: pi
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
name: pi
|
||||
spec:
|
||||
containers:
|
||||
- name: pi
|
||||
image: perl
|
||||
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
|
||||
restartPolicy: Never
|
||||
|
|
@ -0,0 +1,17 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: ReplicaSet
|
||||
metadata:
|
||||
name: my-repset
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
matchLabels:
|
||||
pod-is-for: garbage-collection-example
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
pod-is-for: garbage-collection-example
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
|
@ -0,0 +1,16 @@
|
|||
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
spec:
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.7.9
|
||||
ports:
|
||||
- containerPort: 80
|
|
@ -0,0 +1,51 @@
|
|||
# A headless service to create DNS records
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
name: web
|
||||
# *.nginx.default.svc.cluster.local
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: apps/v1alpha1
|
||||
kind: PetSet
|
||||
metadata:
|
||||
name: web
|
||||
spec:
|
||||
serviceName: "nginx"
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
annotations:
|
||||
pod.alpha.kubernetes.io/initialized: "true"
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 0
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx-slim:0.8
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
volumeMounts:
|
||||
- name: www
|
||||
mountPath: /usr/share/nginx/html
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: www
|
||||
annotations:
|
||||
volume.alpha.kubernetes.io/storage-class: anything
|
||||
spec:
|
||||
accessModes: [ "ReadWriteOnce" ]
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
|
|
@ -0,0 +1,19 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
|
@ -0,0 +1,231 @@
|
|||
---
|
||||
approvers:
|
||||
- enisoc
|
||||
- erictune
|
||||
- foxish
|
||||
- janetkuo
|
||||
- kow3ns
|
||||
- smarterclayton
|
||||
title: StatefulSets
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
**StatefulSets are a beta feature in 1.7. This feature replaces the
|
||||
PetSets feature from 1.4. Users of PetSets are referred to the 1.5
|
||||
[Upgrade Guide](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/)
|
||||
for further information on how to upgrade existing PetSets to StatefulSets.**
|
||||
|
||||
{% include templates/glossary/snippet.md term="statefulset" length="long" %}
|
||||
{% endcapture %}
|
||||
|
||||
{% capture body %}
|
||||
|
||||
## Using StatefulSets
|
||||
|
||||
StatefulSets are valuable for applications that require one or more of the
|
||||
following.
|
||||
|
||||
* Stable, unique network identifiers.
|
||||
* Stable, persistent storage.
|
||||
* Ordered, graceful deployment and scaling.
|
||||
* Ordered, graceful deletion and termination.
|
||||
* Ordered, automated rolling updates.
|
||||
|
||||
In the above, stable is synonymous with persistence across Pod (re)scheduling.
|
||||
If an application doesn't require any stable identifiers or ordered deployment,
|
||||
deletion, or scaling, you should deploy your application with a controller that
|
||||
provides a set of stateless replicas. Controllers such as
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/) or
|
||||
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) may be better suited to your stateless needs.
|
||||
|
||||
## Limitations
|
||||
|
||||
* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
|
||||
* As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver.
|
||||
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
|
||||
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
|
||||
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
|
||||
|
||||
## Components
|
||||
The example below demonstrates the components of a StatefulSet.
|
||||
|
||||
* A Headless Service, named nginx, is used to control the network domain.
|
||||
* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
|
||||
* The volumeClaimTemplates will provide stable storage using [PersistentVolumes](/docs/concepts/storage/volumes/) provisioned by a
|
||||
PersistentVolume Provisioner.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
name: web
|
||||
clusterIP: None
|
||||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: web
|
||||
spec:
|
||||
serviceName: "nginx"
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx-slim:0.8
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
volumeMounts:
|
||||
- name: www
|
||||
mountPath: /usr/share/nginx/html
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: www
|
||||
spec:
|
||||
accessModes: [ "ReadWriteOnce" ]
|
||||
storageClassName: my-storage-class
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
```
|
||||
|
||||
## Pod Identity
|
||||
StatefulSet Pods have a unique identity that is composed of an ordinal, a
|
||||
stable network identity, and stable storage. The identity sticks to the Pod,
|
||||
regardless of which node it's (re)scheduled on.
|
||||
|
||||
### Ordinal Index
|
||||
|
||||
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
|
||||
assigned an integer ordinal, in the range [0,N), that is unique over the Set.
|
||||
|
||||
### Stable Network ID
|
||||
|
||||
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet
|
||||
and the ordinal of the Pod. The pattern for the constructed hostname
|
||||
is `$(statefulset name)-$(ordinal)`. The example above will create three Pods
|
||||
named `web-0,web-1,web-2`.
|
||||
A StatefulSet can use a [Headless Service](/docs/concepts/services-networking/service/#headless-services)
|
||||
to control the domain of its Pods. The domain managed by this Service takes the form:
|
||||
`$(service name).$(namespace).svc.cluster.local`, where "cluster.local"
|
||||
is the [cluster domain](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
|
||||
As each Pod is created, it gets a matching DNS subdomain, taking the form:
|
||||
`$(podname).$(governing service domain)`, where the governing service is defined
|
||||
by the `serviceName` field on the StatefulSet.
|
||||
|
||||
Here are some examples of choices for Cluster Domain, Service name,
|
||||
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
|
||||
|
||||
Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname |
|
||||
-------------- | ----------------- | ----------------- | -------------- | ------- | ------------ |
|
||||
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
|
||||
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
|
||||
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |
|
||||
|
||||
Note that Cluster Domain will be set to `cluster.local` unless
|
||||
[otherwise configured](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
|
||||
|
||||
### Stable Storage
|
||||
|
||||
Kubernetes creates one [PersistentVolume](/docs/concepts/storage/volumes/) for each
|
||||
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolumeClaims. Note that the PersistentVolumes associated with the
Pods' PersistentVolumeClaims are not deleted when the Pods or the StatefulSet are deleted.
This must be done manually.
|
||||
|
||||
## Deployment and Scaling Guarantees
|
||||
|
||||
* For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
|
||||
* When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
|
||||
* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
|
||||
* Before a Pod is terminated, all of its successors must be completely shut down.
|
||||
|
||||
The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
|
||||
|
||||
When the nginx example above is created, three Pods will be deployed in the order
|
||||
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
|
||||
[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until
|
||||
web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
|
||||
web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
|
||||
becomes Running and Ready.
|
||||
|
||||
If a user were to scale the deployed example by patching the StatefulSet such that
`replicas=1`, web-2 would be terminated first. web-1 would not be terminated until web-2
is fully shut down and deleted. If web-0 were to fail after web-2 has been terminated and
is completely shut down, but prior to web-1's termination, web-1 would not be terminated
until web-0 is Running and Ready.
|
||||
|
||||
### Pod Management Policies
|
||||
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while
|
||||
preserving its uniqueness and identity guarantees via its `.spec.podManagementPolicy` field.
|
||||
|
||||
#### OrderedReady Pod Management
|
||||
|
||||
`OrderedReady` pod management is the default for StatefulSets. It implements the behavior
|
||||
described [above](#deployment-and-scaling-guarantees).
|
||||
|
||||
#### Parallel Pod Management
|
||||
|
||||
`Parallel` pod management tells the StatefulSet controller to launch or
|
||||
terminate all Pods in parallel, and to not wait for Pods to become Running
|
||||
and Ready or completely terminated prior to launching or terminating another
|
||||
Pod.
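
For example, a sketch of relaxing the ordering guarantees for the `web` StatefulSet shown above:

```yaml
spec:
  podManagementPolicy: Parallel
```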
|
||||
|
||||
## Update Strategies
|
||||
|
||||
In Kubernetes 1.7 and later, StatefulSet's `.spec.updateStrategy` field allows you to configure
or disable automated rolling updates for the containers, labels, resource requests/limits, and
annotations of the Pods in a StatefulSet.
|
||||
|
||||
### On Delete
|
||||
|
||||
The `OnDelete` update strategy implements the legacy (1.6 and prior) behavior. It is the default
|
||||
strategy when `spec.updateStrategy` is left unspecified. When a StatefulSet's
|
||||
`.spec.updateStrategy.type` is set to `OnDelete`, the StatefulSet controller will not automatically
|
||||
update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to
|
||||
create new Pods that reflect modifications made to a StatefulSet's `.spec.template`.
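
A sketch of selecting this legacy behavior explicitly:

```yaml
spec:
  updateStrategy:
    type: OnDelete
```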
|
||||
|
||||
### Rolling Updates
|
||||
|
||||
The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a
|
||||
StatefulSet. When a StatefulSet's `.spec.updateStrategy.type` is set to `RollingUpdate`, the
|
||||
StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed
|
||||
in the same order as Pod termination (from the largest ordinal to the smallest), updating
|
||||
each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to
|
||||
updating its predecessor.
|
||||
|
||||
#### Partitions
|
||||
|
||||
The `RollingUpdate` update strategy can be partitioned, by specifying a
|
||||
`.spec.updateStrategy.rollingUpdate.partition`. If a partition is specified, all Pods with an
|
||||
ordinal that is greater than or equal to the partition will be updated when the StatefulSet's
|
||||
`.spec.template` is updated. All Pods with an ordinal that is less than the partition will not
|
||||
be updated, and, even if they are deleted, they will be recreated at the previous version. If a
|
||||
StatefulSet's `.spec.updateStrategy.rollingUpdate.partition` is greater than its `.spec.replicas`,
|
||||
updates to its `.spec.template` will not be propagated to its Pods.
|
||||
In most cases you will not need to use a partition, but they are useful if you want to stage an
|
||||
update, roll out a canary, or perform a phased roll out.
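For example, a canary-style rollout could be staged roughly like this (a sketch; `web` has three replicas as in the example above):

```shell
# Only Pods with an ordinal >= 2 will be updated when .spec.template changes
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

# Once the canary (web-2) looks healthy, lower the partition to roll out to the rest
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
```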
|
||||
|
||||
{% endcapture %}
|
||||
{% capture whatsnext %}
|
||||
|
||||
* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set).
|
||||
|
||||
{% endcapture %}
|
||||
{% include templates/concept.md %}
|
|
@ -0,0 +1,318 @@
|
|||
---
|
||||
title: Accessing Clusters
|
||||
---
|
||||
|
||||
* TOC
|
||||
{:toc}
|
||||
|
||||
## Accessing the cluster API
|
||||
|
||||
### Accessing for the first time with kubectl
|
||||
|
||||
When accessing the Kubernetes API for the first time, we suggest using the
|
||||
Kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
|
||||
to access it. Typically, this is set up automatically when you work through
|
||||
a [Getting started guide](/docs/getting-started-guides/),
|
||||
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
Check the location and credentials that kubectl knows about with this command:
|
||||
|
||||
```shell
|
||||
$ kubectl config view
|
||||
```
|
||||
|
||||
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
|
||||
kubectl, and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
|
||||
|
||||
### Directly accessing the REST API
|
||||
|
||||
Kubectl handles locating and authenticating to the apiserver.
|
||||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
#### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
```shell
|
||||
$ kubectl proxy --port=8080 &
|
||||
```
|
||||
|
||||
See [kubectl proxy](/docs/user-guide/kubectl/v1.6/#proxy) for more details.
|
||||
|
||||
Then you can explore the API with curl, wget, or a browser, like so:
|
||||
|
||||
```shell
|
||||
$ curl http://localhost:8080/api/
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (before v1.3.x)
|
||||
|
||||
It is possible to avoid using kubectl proxy by passing an authentication token
|
||||
directly to the apiserver, like this:
|
||||
|
||||
```shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Without kubectl proxy (post v1.3.x)
|
||||
|
||||
In Kubernetes version 1.3 or later, `kubectl config view` no longer displays the token. Use `kubectl describe secret...` to get the token for the default service account, like this:
|
||||
|
||||
``` shell
|
||||
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
|
||||
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
|
||||
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||
{
|
||||
"kind": "APIVersions",
|
||||
"versions": [
|
||||
"v1"
|
||||
],
|
||||
"serverAddressByClientCIDRs": [
|
||||
{
|
||||
"clientCIDR": "0.0.0.0/0",
|
||||
"serverAddress": "10.0.1.149:443"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The above examples use the `--insecure` flag. This leaves the connection subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
may take special configuration to get your http client to use the root
|
||||
certificate.
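For example, instead of `--insecure` you might point curl at the cluster's CA certificate. The path below is only an assumption; use wherever your cluster's CA bundle actually lives:

```shell
$ curl --cacert ~/.kube/ca.crt $APISERVER/api --header "Authorization: Bearer $TOKEN"
```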
|
||||
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is no standard
|
||||
for this. [Configuring Access to the API](/docs/admin/accessing-the-api)
|
||||
describes how a cluster admin can configure this. Such approaches may conflict
|
||||
with future high-availability support.
|
||||
|
||||
### Programmatic access to the API
|
||||
|
||||
Kubernetes officially supports [Go](#go-client) and [Python](#python-client)
|
||||
client libraries.
|
||||
|
||||
#### Go client
|
||||
|
||||
* To get the library, run the following command: `go get k8s.io/client-go/<version number>/kubernetes`. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) to find out which versions are supported.
|
||||
* Write an application atop the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
|
||||
|
||||
The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
|
||||
|
||||
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
|
||||
|
||||
#### Python client
|
||||
|
||||
To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options.
|
||||
|
||||
The Python client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py).
|
||||
|
||||
#### Other languages
|
||||
|
||||
There are [client libraries](/docs/reference/client-libraries/) for accessing the API from other languages.
|
||||
See documentation for other libraries for how they authenticate.
|
||||
|
||||
### Accessing the API from a Pod
|
||||
|
||||
When accessing the API from a pod, locating and authenticating
|
||||
to the apiserver are somewhat different.
|
||||
|
||||
The recommended way to locate the apiserver within the pod is with
|
||||
the `kubernetes` DNS name, which resolves to a Service IP which in turn
|
||||
will be routed to an apiserver.
|
||||
|
||||
The recommended way to authenticate to the apiserver is with a
|
||||
[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a pod
|
||||
is associated with a service account, and a credential (token) for that
|
||||
service account is placed into the filesystem tree of each container in that pod,
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
|
||||
|
||||
If available, a certificate bundle is placed into the filesystem tree of each
|
||||
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
|
||||
used to verify the serving certificate of the apiserver.
|
||||
|
||||
Finally, the default namespace to be used for namespaced API operations is placed in a file
|
||||
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
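Putting those pieces together, a container in the pod could reach the API directly with something like the following sketch (here `kubernetes.default.svc` is the fully qualified form of the `kubernetes` DNS name mentioned above; this is an illustration, not one of the recommended approaches listed below):

```shell
# Run inside a container of the pod
APISERVER=https://kubernetes.default.svc
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     --header "Authorization: Bearer $TOKEN" \
     $APISERVER/api
```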
|
||||
|
||||
From within a pod, the recommended ways to connect to the API are:
|
||||
|
||||
- run a kubectl proxy as one of the containers in the pod, or as a background
|
||||
process within a container. This proxies the
|
||||
Kubernetes API to the localhost interface of the pod, so that other processes
|
||||
in any container of the pod can access it. See this [example of using kubectl proxy
|
||||
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
|
||||
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
|
||||
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
|
||||
|
||||
In each case, the credentials of the pod are used to communicate securely with the apiserver.
|
||||
|
||||
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section was about connecting to the Kubernetes API server. This section is about
|
||||
connecting to other services running on a Kubernetes cluster. In Kubernetes, the
|
||||
[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have
|
||||
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
|
||||
routable, so they will not be reachable from a machine outside the cluster,
|
||||
such as your desktop machine.
|
||||
|
||||
### Ways to connect
|
||||
|
||||
You have several options for connecting to nodes, pods and services from outside the cluster:
|
||||
|
||||
- Access services through public IPs.
|
||||
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
|
||||
the cluster. See the [services](/docs/user-guide/services) and
|
||||
[kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation.
|
||||
- Depending on your cluster environment, this may just expose the service to your corporate network,
|
||||
or it may expose it to the internet. Think about whether the service being exposed is secure.
|
||||
Does it do its own authentication?
|
||||
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
|
||||
place a unique label on the pod and create a new service which selects this label (see the sketch after this list).
|
||||
- In most cases, it should not be necessary for an application developer to directly access
|
||||
nodes via their nodeIPs.
|
||||
- Access services, nodes, or pods using the Proxy Verb.
|
||||
- Does apiserver authentication and authorization prior to accessing the remote service.
|
||||
Use this if the services are not secure enough to expose to the internet, or to gain
|
||||
access to ports on the node IP, or for debugging.
|
||||
- Proxies may cause problems for some web applications.
|
||||
- Only works for HTTP/HTTPS.
|
||||
- Described [here](#manually-constructing-apiserver-proxy-urls).
|
||||
- Access from a node or pod in the cluster.
|
||||
- Run a pod, and then connect to a shell in it using [kubectl exec](/docs/user-guide/kubectl/v1.6/#exec).
|
||||
Connect to other nodes, pods, and services from that shell.
|
||||
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
|
||||
access cluster services. This is a non-standard method, and will work on some clusters but
|
||||
not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.
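As a sketch of the "unique label" technique mentioned above (the pod name, label, and ports here are assumptions for illustration):

```shell
# Mark one replica and expose it through its own Service for debugging
kubectl label pod my-app-1234 debug=true
kubectl expose pod my-app-1234 --name=my-app-debug --port=80 --target-port=8080
```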
|
||||
|
||||
### Discovering builtin services
|
||||
|
||||
Typically, there are several services which are started on a cluster in the kube-system namespace. Get a list of these
|
||||
with the `kubectl cluster-info` command:
|
||||
|
||||
```shell
|
||||
$ kubectl cluster-info
|
||||
|
||||
Kubernetes master is running at https://104.197.5.247
|
||||
elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
|
||||
kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy
|
||||
kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy
|
||||
grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
|
||||
heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy
|
||||
```
|
||||
|
||||
This shows the proxy-verb URL for accessing each service.
|
||||
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
|
||||
at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed. Logging can also be reached through a kubectl proxy, for example at:
|
||||
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
|
||||
(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
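For instance, with `kubectl proxy --port=8080` running as shown earlier, the same Elasticsearch endpoint can be queried without handling credentials yourself:

```shell
$ curl http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true
```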
|
||||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
||||
##### Examples
|
||||
|
||||
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy`
|
||||
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true`
|
||||
|
||||
```json
|
||||
{
|
||||
"cluster_name" : "kubernetes_logging",
|
||||
"status" : "yellow",
|
||||
"timed_out" : false,
|
||||
"number_of_nodes" : 1,
|
||||
"number_of_data_nodes" : 1,
|
||||
"active_primary_shards" : 5,
|
||||
"active_shards" : 5,
|
||||
"relocating_shards" : 0,
|
||||
"initializing_shards" : 0,
|
||||
"unassigned_shards" : 5
|
||||
}
|
||||
```
|
||||
|
||||
#### Using web browsers to access services running on the cluster
|
||||
|
||||
You may be able to put an apiserver proxy URL into the address bar of a browser. However:
|
||||
|
||||
- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. The apiserver can be configured to accept basic auth,
|
||||
but your cluster may not have it enabled.
|
||||
- Some web apps may not work, particularly those with client-side JavaScript that constructs URLs in a
|
||||
way that is unaware of the proxy path prefix.
|
||||
|
||||
## Requesting redirects
|
||||
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
1. The [kube proxy](/docs/user-guide/services/#ips-and-vips):
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as a load balancer if there are several apiservers.
|
||||
1. Cloud Load Balancers on external services:
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
will typically ensure that the latter types are set up correctly.
|
|
@ -0,0 +1,34 @@
|
|||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
selector:
|
||||
app: hello
|
||||
tier: frontend
|
||||
ports:
|
||||
- protocol: "TCP"
|
||||
port: 80
|
||||
targetPort: 80
|
||||
type: LoadBalancer
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hello
|
||||
tier: frontend
|
||||
track: stable
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: "gcr.io/google-samples/hello-frontend:1.0"
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command: ["/usr/sbin/nginx","-s","quit"]
|
|
@ -0,0 +1,12 @@
|
|||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: hello
|
||||
spec:
|
||||
selector:
|
||||
app: hello
|
||||
tier: backend
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: http
|
|
@ -0,0 +1,19 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: hello
|
||||
spec:
|
||||
replicas: 7
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hello
|
||||
tier: backend
|
||||
track: stable
|
||||
spec:
|
||||
containers:
|
||||
- name: hello
|
||||
image: "gcr.io/google-samples/hello-go-gke:1.0"
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
|
@ -0,0 +1,94 @@
|
|||
---
|
||||
title: Use Port Forwarding to Access Applications in a Cluster
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to use `kubectl port-forward` to connect to a Redis
|
||||
server running in a Kubernetes cluster. This type of connection can be useful
|
||||
for database debugging.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
|
||||
* Install [redis-cli](http://redis.io/topics/rediscli).
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Creating a pod to run a Redis server
|
||||
|
||||
1. Create a pod:
|
||||
|
||||
kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/redis-master.yaml
|
||||
|
||||
The output of a successful command verifies that the pod was created:
|
||||
|
||||
pod "redis-master" created
|
||||
|
||||
1. Check to see whether the pod is running and ready:
|
||||
|
||||
kubectl get pods
|
||||
|
||||
When the pod is ready, the output displays a STATUS of Running:
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-master 2/2 Running 0 41s
|
||||
|
||||
1. Verify that the Redis server is running in the pod and listening on port 6379:
|
||||
|
||||
{% raw %}
|
||||
kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
|
||||
{% endraw %}
|
||||
|
||||
The output displays the port:
|
||||
|
||||
6379
|
||||
|
||||
## Forward a local port to a port on the pod
|
||||
|
||||
1. Forward port 6379 on the local workstation to port 6379 of the redis-master pod:
|
||||
|
||||
kubectl port-forward redis-master 6379:6379
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
|
||||
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
|
||||
|
||||
1. Start the Redis command line interface:
|
||||
|
||||
redis-cli
|
||||
|
||||
1. At the Redis command line prompt, enter the `ping` command:
|
||||
|
||||
127.0.0.1:6379>ping
|
||||
|
||||
A successful ping request returns PONG.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
## Discussion
|
||||
|
||||
Connections made to local port 6379 are forwarded to port 6379 of the pod that
|
||||
is running the Redis server. With this connection in place you can use your
|
||||
local workstation to debug the database that is running in the pod.
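If local port 6379 is already in use, you could forward a different local port instead (a variation on the steps above, not required by them):

```shell
kubectl port-forward redis-master 7000:6379 &
redis-cli -p 7000 ping
```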
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture whatsnext %}
|
||||
Learn more about [kubectl port-forward](/docs/user-guide/kubectl/v1.6/#port-forward).
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,33 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
name: redis
|
||||
redis-sentinel: "true"
|
||||
role: master
|
||||
name: redis-master
|
||||
spec:
|
||||
containers:
|
||||
- name: master
|
||||
image: gcr.io/google_containers/redis:v1
|
||||
env:
|
||||
- name: MASTER
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
resources:
|
||||
limits:
|
||||
cpu: "0.1"
|
||||
volumeMounts:
|
||||
- mountPath: /redis-master-data
|
||||
name: data
|
||||
- name: sentinel
|
||||
image: kubernetes/redis:v1
|
||||
env:
|
||||
- name: SENTINEL
|
||||
value: "true"
|
||||
ports:
|
||||
- containerPort: 26379
|
||||
volumes:
|
||||
- name: data
|
||||
emptyDir: {}
|
|
@ -0,0 +1,147 @@
|
|||
---
|
||||
title: Use a Service to Access an Application in a Cluster
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows how to create a Kubernetes Service object that external
|
||||
clients can use to access an application running in a cluster. The Service
|
||||
provides load balancing for an application that has two running instances.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture objectives %}
|
||||
|
||||
* Run two instances of a Hello World application.
|
||||
* Create a Service object that exposes a node port.
|
||||
* Use the Service object to access the running application.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture lessoncontent %}
|
||||
|
||||
## Creating a service for an application running in two pods
|
||||
|
||||
1. Run a Hello World application in your cluster:
|
||||
|
||||
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
|
||||
|
||||
The preceding command creates a
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
object and an associated
|
||||
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
|
||||
object. The ReplicaSet has two
|
||||
[Pods](/docs/concepts/workloads/pods/pod/),
|
||||
each of which runs the Hello World application.
|
||||
|
||||
1. Display information about the Deployment:
|
||||
|
||||
kubectl get deployments hello-world
|
||||
kubectl describe deployments hello-world
|
||||
|
||||
1. Display information about your ReplicaSet objects:
|
||||
|
||||
kubectl get replicasets
|
||||
kubectl describe replicasets
|
||||
|
||||
1. Create a Service object that exposes the deployment:
|
||||
|
||||
kubectl expose deployment hello-world --type=NodePort --name=example-service
|
||||
|
||||
1. Display information about the Service:
|
||||
|
||||
kubectl describe services example-service
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
Name: example-service
|
||||
Namespace: default
|
||||
Labels: run=load-balancer-example
|
||||
Selector: run=load-balancer-example
|
||||
Type: NodePort
|
||||
IP: 10.32.0.16
|
||||
Port: <unset> 8080/TCP
|
||||
NodePort: <unset> 31496/TCP
|
||||
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
|
||||
Session Affinity: None
|
||||
No events.
|
||||
|
||||
Make a note of the NodePort value for the service. For example,
|
||||
in the preceding output, the NodePort value is 31496.
|
||||
|
||||
1. List the pods that are running the Hello World application:
|
||||
|
||||
kubectl get pods --selector="run=load-balancer-example" --output=wide
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME READY STATUS ... IP NODE
|
||||
hello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
|
||||
hello-world-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
|
||||
|
||||
1. Get the public IP address of one of your nodes that is running
|
||||
a Hello World pod. How you get this address depends on how you set
|
||||
up your cluster. For example, if you are using Minikube, you can
|
||||
see the node address by running `kubectl cluster-info`. If you are
|
||||
using Google Compute Engine instances, you can use the
|
||||
`gcloud compute instances list` command to see the public addresses of your
|
||||
nodes. For more information about this command, see the [GCE documentation](https://cloud.google.com/sdk/gcloud/reference/compute/instances/list).
|
||||
|
||||
1. On your chosen node, create a firewall rule that allows TCP traffic
|
||||
on your node port. For example, if your Service has a NodePort value of
|
||||
31568, create a firewall rule that allows TCP traffic on port 31568. Different
|
||||
cloud providers offer different ways of configuring firewall rules. See [the
|
||||
GCE documentation on firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls),
|
||||
for example.
|
||||
|
||||
1. Use the node address and node port to access the Hello World application:
|
||||
|
||||
curl http://<public-node-ip>:<node-port>
|
||||
|
||||
where `<public-node-ip>` is the public IP address of your node,
|
||||
and `<node-port>` is the NodePort value for your service.
|
||||
|
||||
The response to a successful request is a hello message:
|
||||
|
||||
Hello Kubernetes!
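If you prefer not to copy the NodePort out of the `describe` output by hand, a JSONPath query can fetch it (a convenience sketch, not part of the steps above):

```shell
NODE_PORT=$(kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<public-node-ip>:$NODE_PORT
```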
|
||||
|
||||
## Using a service configuration file
|
||||
|
||||
As an alternative to using `kubectl expose`, you can use a
|
||||
[service configuration file](/docs/user-guide/services/operations)
|
||||
to create a Service.
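A minimal sketch of such a configuration file, roughly equivalent to the `kubectl expose` command used above (the field values are assumptions based on this tutorial):

```shell
cat <<EOF | kubectl create -f -
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    run: load-balancer-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
EOF
```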
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture cleanup %}
|
||||
|
||||
To delete the Service, enter this command:
|
||||
|
||||
kubectl delete services example-service
|
||||
|
||||
To delete the Deployment, the ReplicaSet, and the Pods that are running
|
||||
the Hello World application, enter this command:
|
||||
|
||||
kubectl delete deployment hello-world
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture whatsnext %}
|
||||
|
||||
Learn more about
|
||||
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/tutorial.md %}
|
||||
|
|
@ -0,0 +1,27 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: two-containers
|
||||
spec:
|
||||
|
||||
restartPolicy: Never
|
||||
|
||||
volumes:
|
||||
- name: shared-data
|
||||
emptyDir: {}
|
||||
|
||||
containers:
|
||||
|
||||
- name: nginx-container
|
||||
image: nginx
|
||||
volumeMounts:
|
||||
- name: shared-data
|
||||
mountPath: /usr/share/nginx/html
|
||||
|
||||
- name: debian-container
|
||||
image: debian
|
||||
volumeMounts:
|
||||
- name: shared-data
|
||||
mountPath: /pod-data
|
||||
command: ["/bin/sh"]
|
||||
args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1.5"
|
||||
requests:
|
||||
cpu: "500m"
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "100m"
|
|
@ -0,0 +1,8 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-4-ctr
|
||||
image: vish/stress
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-cpu-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "800m"
|
||||
requests:
|
||||
cpu: "500m"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: cpu-min-max-demo-lr
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
cpu: "800m"
|
||||
min:
|
||||
cpu: "200m"
|
||||
type: Container
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
cpu: "0.75"
|
|
@ -0,0 +1,8 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: default-cpu-demo-ctr
|
||||
image: nginx
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: cpu-limit-range
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
cpu: 1
|
||||
defaultRequest:
|
||||
cpu: 0.5
|
||||
type: Container
|
|
@ -0,0 +1,30 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: kube-dns-autoscaler
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kube-dns-autoscaler
|
||||
spec:
|
||||
containers:
|
||||
- name: autoscaler
|
||||
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
|
||||
resources:
|
||||
requests:
|
||||
cpu: "20m"
|
||||
memory: "10Mi"
|
||||
command:
|
||||
- /cluster-proportional-autoscaler
|
||||
- --namespace=kube-system
|
||||
- --configmap=kube-dns-autoscaler
|
||||
- --target=<SCALE_TARGET>
|
||||
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
|
||||
# If using small nodes, "nodesPerReplica" should dominate.
|
||||
- --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
|
||||
- --logtostderr=true
|
||||
- --v=2
|
|
@ -0,0 +1,112 @@
|
|||
---
|
||||
title: IP Masquerade Agent User Guide
|
||||
---
|
||||
|
||||
{% capture overview %}
|
||||
This page shows how to configure and enable the ip-masq-agent.
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
{% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
## IP Masquerade Agent User Guide
|
||||
|
||||
The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
|
||||
|
||||
### **Key Terms**
|
||||
|
||||
* **NAT (Network Address Translation)**
|
||||
Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
|
||||
* **Masquerading**
|
||||
A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
|
||||
* **CIDR (Classless Inter-Domain Routing)**
|
||||
Based on variable-length subnet masking, CIDR allows specifying arbitrary-length prefixes. CIDR introduced a new method of representing IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
|
||||
* **Link Local**
|
||||
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
|
||||
|
||||
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in GKE, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
|
||||
|
||||

|
||||
|
||||
The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys:
|
||||
|
||||
* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
|
||||
* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
|
||||
* **resyncInterval:** The interval at which the agent attempts to reload its config from disk, e.g. '30s', where 's' is seconds and 'ms' is milliseconds.
|
||||
|
||||
Traffic to the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 ranges will NOT be masqueraded. Any other traffic (assumed to be internet traffic) will be masqueraded. An example of a local destination from a pod could be its node's IP address, another node's address, or one of the IP addresses in the cluster's IP range. The entries below show the default set of rules that are applied by the ip-masq-agent:
|
||||
|
||||
```
|
||||
iptables -t nat -L IP-MASQ-AGENT
|
||||
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
|
||||
|
||||
```
|
||||
|
||||
By default, in GCE/GKE starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to your cluster:
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Create an ip-masq-agent
|
||||
To create an ip-masq-agent, run the following kubectl command:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-agent/master/ip-masq-agent.yaml
|
||||
```
|
||||
|
||||
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
|
||||
|
||||
```shell
|
||||
kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true
|
||||
```
|
||||
|
||||
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-incubator/ip-masq-agent).
|
||||
|
||||
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configmap/) in a file called "config".
|
||||
**Note:** It is important that the file is called config since, by default, that will be used as the key for lookup by the ip-masq-agent:
|
||||
|
||||
```
|
||||
nonMasqueradeCIDRs:
|
||||
- 10.0.0.0/8
|
||||
resyncInterval: 60s
|
||||
|
||||
```
|
||||
|
||||
Run the following command to add the config map to your cluster:
|
||||
|
||||
```
|
||||
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
|
||||
```
|
||||
|
||||
This will update the file located at */etc/config/ip-masq-agent*, which is checked every *resyncInterval* and applied to the cluster node.
|
||||
After the resync interval has expired, you should see the iptables rules reflect your changes:
|
||||
|
||||
```
|
||||
iptables -t nat -L IP-MASQ-AGENT
|
||||
Chain IP-MASQ-AGENT (1 references)
|
||||
target prot opt source destination
|
||||
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local
|
||||
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
|
||||
```
|
||||
|
||||
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set *masqLinkLocal* to true in the config map.
|
||||
|
||||
```
|
||||
nonMasqueradeCIDRs:
|
||||
- 10.0.0.0/8
|
||||
resyncInterval: 60s
|
||||
masqLinkLocal: true
|
||||
```
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "1.5Gi"
|
||||
requests:
|
||||
memory: "800Mi"
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
requests:
|
||||
memory: "100Mi"
|
|
@ -0,0 +1,9 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo-4
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-4-ctr
|
||||
image: nginx
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: constraints-mem-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: constraints-mem-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
requests:
|
||||
memory: "600Mi"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mem-min-max-demo-lr
|
||||
spec:
|
||||
limits:
|
||||
- max:
|
||||
memory: 1Gi
|
||||
min:
|
||||
memory: 500Mi
|
||||
type: Container
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: default-mem-demo-2-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "1Gi"
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo-3
|
||||
spec:
|
||||
containers:
|
||||
- name: default-mem-demo-3-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "128Mi"
|
|
@ -0,0 +1,8 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: default-mem-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: default-mem-demo-ctr
|
||||
image: nginx
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: LimitRange
|
||||
metadata:
|
||||
name: mem-limit-range
|
||||
spec:
|
||||
limits:
|
||||
- default:
|
||||
memory: 512Mi
|
||||
defaultRequest:
|
||||
memory: 256Mi
|
||||
type: Container
|
|
@ -0,0 +1,43 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
component: scheduler
|
||||
tier: control-plane
|
||||
name: my-scheduler
|
||||
namespace: kube-system
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
component: scheduler
|
||||
tier: control-plane
|
||||
version: second
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
- /usr/local/bin/kube-scheduler
|
||||
- --address=0.0.0.0
|
||||
- --leader-elect=false
|
||||
- --scheduler-name=my-scheduler
|
||||
image: gcr.io/my-gcp-project/my-kube-scheduler:1.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10251
|
||||
initialDelaySeconds: 15
|
||||
name: kube-second-scheduler
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10251
|
||||
resources:
|
||||
requests:
|
||||
cpu: '0.1'
|
||||
securityContext:
|
||||
privileged: false
|
||||
volumeMounts: []
|
||||
hostNetwork: false
|
||||
hostPID: false
|
||||
volumes: []
|
|
@ -0,0 +1,10 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: no-annotation
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
containers:
|
||||
- name: pod-with-no-annotation-container
|
||||
image: gcr.io/google_containers/pause:2.0
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: annotation-default-scheduler
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
schedulerName: default-scheduler
|
||||
containers:
|
||||
- name: pod-with-default-annotation-container
|
||||
image: gcr.io/google_containers/pause:2.0
|
|
@ -0,0 +1,11 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: annotation-second-scheduler
|
||||
labels:
|
||||
name: multischeduler-example
|
||||
spec:
|
||||
schedulerName: my-scheduler
|
||||
containers:
|
||||
- name: pod-with-second-annotation-container
|
||||
image: gcr.io/google_containers/pause:2.0
|
|
@ -0,0 +1,16 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: quota-mem-cpu-demo-2
|
||||
spec:
|
||||
containers:
|
||||
- name: quota-mem-cpu-demo-2-ctr
|
||||
image: redis
|
||||
resources:
|
||||
limits:
|
||||
memory: "1Gi"
|
||||
cpu: "800m"
|
||||
requests:
|
||||
memory: "700Mi"
|
||||
cpu: "400m"
|
||||
|
|
@ -0,0 +1,16 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: quota-mem-cpu-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: quota-mem-cpu-demo-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
memory: "800Mi"
|
||||
cpu: "800m"
|
||||
requests:
|
||||
memory: "600Mi"
|
||||
cpu: "400m"
|
||||
|
|
@ -0,0 +1,10 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: mem-cpu-demo
|
||||
spec:
|
||||
hard:
|
||||
requests.cpu: "1"
|
||||
requests.memory: 1Gi
|
||||
limits.cpu: "2"
|
||||
limits.memory: 2Gi
|
|
@ -0,0 +1,11 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo-2
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 4Gi
|
|
@ -0,0 +1,11 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 3Gi
|
|
@ -0,0 +1,9 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: object-quota-demo
|
||||
spec:
|
||||
hard:
|
||||
persistentvolumeclaims: "1"
|
||||
services.loadbalancers: "2"
|
||||
services.nodeports: "0"
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: pod-quota-demo
|
||||
spec:
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
purpose: quota-demo
|
||||
spec:
|
||||
containers:
|
||||
- name: pod-quota-demo
|
||||
image: nginx
|
|
@ -0,0 +1,7 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: pod-demo
|
||||
spec:
|
||||
hard:
|
||||
pods: "2"
|
|
@ -0,0 +1,11 @@
|
|||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: pvc-quota-demo-2
|
||||
spec:
|
||||
storageClassName: manual
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 4Gi
|
|
@ -0,0 +1,256 @@
|
|||
---
|
||||
approvers:
|
||||
- eparis
|
||||
- pmorie
|
||||
title: Configure Containers Using a ConfigMap
|
||||
---
|
||||
|
||||
|
||||
{% capture overview %}
|
||||
|
||||
This page shows you how to configure an application using a ConfigMap. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture prerequisites %}
|
||||
|
||||
* {% include task-tutorial-prereqs.md %}
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
|
||||
{% capture steps %}
|
||||
|
||||
## Use kubectl to create a ConfigMap
|
||||
|
||||
Use the `kubectl create configmap` command to create configmaps from [directories](#creating-configmaps-from-directories), [files](#creating-configmaps-from-files), or [literal values](#creating-configmaps-from-literal-values):
|
||||
|
||||
```shell
|
||||
kubectl create configmap <map-name> <data-source>
|
||||
```
|
||||
|
||||
where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the directory, file, or literal value to draw the data from.
|
||||
|
||||
The data source corresponds to a key-value pair in the ConfigMap, where
|
||||
|
||||
* key = the file name or the key you provided on the command line, and
|
||||
* value = the file contents or the literal value you provided on the command line.
|
||||
|
||||
You can use [`kubectl describe`](/docs/user-guide/kubectl/v1.6/#describe) or [`kubectl get`](/docs/user-guide/kubectl/v1.6/#get) to retrieve information about a ConfigMap. The former shows a summary of the ConfigMap, while the latter returns the full contents of the ConfigMap.
|
||||
|
||||
### Create ConfigMaps from directories
|
||||
|
||||
You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
|
||||
```
|
||||
|
||||
combines the contents of the `docs/user-guide/configmap/kubectl/` directory
|
||||
|
||||
```shell
|
||||
ls docs/user-guide/configmap/kubectl/
|
||||
game.properties
|
||||
ui.properties
|
||||
```
|
||||
|
||||
into the following ConfigMap:
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config
|
||||
Name: game-config
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
ui.properties: 83 bytes
|
||||
```
|
||||
|
||||
The `game.properties` and `ui.properties` files in the `docs/user-guide/configmap/kubectl/` directory are represented in the `data` section of the ConfigMap.
|
||||
|
||||
```shell
|
||||
kubectl get configmaps game-config -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game.properties: |
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
ui.properties: |
|
||||
color.good=purple
|
||||
color.bad=yellow
|
||||
allow.textmode=true
|
||||
how.nice.to.look=fairlyNice
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:52:05Z
|
||||
name: game-config
|
||||
namespace: default
|
||||
resourceVersion: "516"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-2
|
||||
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Create ConfigMaps from files
|
||||
|
||||
You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from multiple files.
|
||||
|
||||
For example,
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties
|
||||
```
|
||||
|
||||
would produce the following ConfigMap:
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config-2
|
||||
Name: game-config-2
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
```
|
||||
|
||||
You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources.
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl describe configmaps game-config-2
|
||||
Name: game-config-2
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Data
|
||||
====
|
||||
game.properties: 158 bytes
|
||||
ui.properties: 83 bytes
|
||||
```
|
||||
|
||||
#### Define the key to use when creating a ConfigMap from a file
|
||||
|
||||
You can define a key other than the file name to use in the `data` section of your ConfigMap when using the `--from-file` argument:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>
|
||||
```
|
||||
|
||||
where `<my-key-name>` is the key you want to use in the ConfigMap and `<path-to-file>` is the location of the data source file you want the key to represent.
|
||||
|
||||
For example:
|
||||
|
||||
```shell
|
||||
kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
|
||||
|
||||
kubectl get configmaps game-config-3 -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
game-special-key: |
|
||||
enemies=aliens
|
||||
lives=3
|
||||
enemies.cheat=true
|
||||
enemies.cheat.level=noGoodRotten
|
||||
secret.code.passphrase=UUDDLRLRBABAS
|
||||
secret.code.allowed=true
|
||||
secret.code.lives=30
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T18:54:22Z
|
||||
name: game-config-3
|
||||
namespace: default
|
||||
resourceVersion: "530"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/game-config-3
|
||||
uid: 05f8da22-d671-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
### Create ConfigMaps from literal values
|
||||
|
||||
You can use `kubectl create configmap` with the `--from-literal` argument to define a literal value from the command line:
|
||||
|
||||
```shell
|
||||
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
|
||||
```
|
||||
|
||||
You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the `data` section of the ConfigMap.
|
||||
|
||||
```shell
|
||||
kubectl get configmaps special-config -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
special.how: very
|
||||
special.type: charm
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: special-config
|
||||
namespace: default
|
||||
resourceVersion: "651"
|
||||
selfLink: /api/v1/namespaces/default/configmaps/special-config
|
||||
uid: dadce046-d673-11e5-8cd0-68f728db1985
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture discussion %}
|
||||
|
||||
## Understanding ConfigMaps
|
||||
|
||||
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
|
||||
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to [Secrets](/docs/concepts/configuration/secret/), but provides a means of working with strings that don't contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
|
||||
|
||||
**Note:** ConfigMaps should reference properties files, not replace them. Think of the ConfigMap as representing something similar to the Linux `/etc` directory and its contents. For example, if you create a [Kubernetes Volume](/docs/concepts/storage/volumes/) from a ConfigMap, each data item in the ConfigMap is represented by an individual file in the volume.
|
||||
{: .note}
|
||||
|
||||
The ConfigMap's `data` field contains the configuration data. As shown in the example below, this can be simple -- like individual properties defined using `--from-literal` -- or complex -- like configuration files or JSON blobs defined using `--from-file`.
|
||||
|
||||
```yaml
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
creationTimestamp: 2016-02-18T19:14:38Z
|
||||
name: example-config
|
||||
namespace: default
|
||||
data:
|
||||
# example of a simple property defined using --from-literal
|
||||
example.property.1: hello
|
||||
example.property.2: world
|
||||
# example of a complex property defined using --from-file
|
||||
example.property.file: |-
|
||||
property.1=value-1
|
||||
property.2=value-2
|
||||
property.3=value-3
|
||||
```
|
||||
|
||||
{% endcapture %}
|
||||
|
||||
{% capture whatsnext %}
|
||||
* See [Using ConfigMap Data in Pods](/docs/tasks/configure-pod-container/configure-pod-configmap).
|
||||
* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).
|
||||
{% endcapture %}
|
||||
|
||||
{% include templates/task.md %}
|
|
@ -0,0 +1,301 @@
|
|||
---
title: Configure Liveness and Readiness Probes
---

{% capture overview %}

This page shows how to configure liveness and readiness probes for Containers.

The [kubelet](/docs/admin/kubelet/) uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

{% endcapture %}

{% capture prerequisites %}

{% include task-tutorial-prereqs.md %}

{% endcapture %}

{% capture steps %}

## Define a liveness command

Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

In this exercise, you create a Pod that runs a Container based on the `gcr.io/google_containers/busybox` image. Here is the configuration file for the Pod:

{% include code.html language="yaml" file="exec-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/exec-liveness.yaml" %}

In the configuration file, you can see that the Pod has a single Container. The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 5 seconds. The `initialDelaySeconds` field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command `cat /tmp/healthy` in the Container. If the command succeeds, it returns 0, and the kubelet considers the Container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the Container and restarts it.

When the Container starts, it executes this command:

```shell
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
```

For the first 30 seconds of the Container's life, there is a `/tmp/healthy` file. So during the first 30 seconds, the command `cat /tmp/healthy` returns a success code. After 30 seconds, `cat /tmp/healthy` returns a failure code.

Create the Pod:

```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
```

Within 30 seconds, view the Pod events:

```shell
kubectl describe pod liveness-exec
```

The output indicates that no liveness probes have failed yet:

```shell
FirstSeen LastSeen Count From              SubobjectPath               Type    Reason    Message
--------- -------- ----- ----              -------------               ------- ------    -------
24s       24s      1     {default-scheduler }                          Normal  Scheduled Successfully assigned liveness-exec to worker0
23s       23s      1     {kubelet worker0} spec.containers{liveness}   Normal  Pulling   pulling image "gcr.io/google_containers/busybox"
23s       23s      1     {kubelet worker0} spec.containers{liveness}   Normal  Pulled    Successfully pulled image "gcr.io/google_containers/busybox"
23s       23s      1     {kubelet worker0} spec.containers{liveness}   Normal  Created   Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s       23s      1     {kubelet worker0} spec.containers{liveness}   Normal  Started   Started container with docker id 86849c15382e
```

After 35 seconds, view the Pod events again:

```shell
kubectl describe pod liveness-exec
```

At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.

```shell
FirstSeen LastSeen Count From              SubobjectPath               Type    Reason    Message
--------- -------- ----- ----              -------------               ------- ------    -------
37s       37s      1     {default-scheduler }                          Normal  Scheduled Successfully assigned liveness-exec to worker0
36s       36s      1     {kubelet worker0} spec.containers{liveness}   Normal  Pulling   pulling image "gcr.io/google_containers/busybox"
36s       36s      1     {kubelet worker0} spec.containers{liveness}   Normal  Pulled    Successfully pulled image "gcr.io/google_containers/busybox"
36s       36s      1     {kubelet worker0} spec.containers{liveness}   Normal  Created   Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s       36s      1     {kubelet worker0} spec.containers{liveness}   Normal  Started   Started container with docker id 86849c15382e
2s        2s       1     {kubelet worker0} spec.containers{liveness}   Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
```

Wait another 30 seconds, and verify that the Container has been restarted:

```shell
kubectl get pod liveness-exec
```

The output shows that `RESTARTS` has been incremented:

```shell
NAME            READY     STATUS    RESTARTS   AGE
liveness-exec   1/1       Running   1          1m
```

## Define a liveness HTTP request

Another kind of liveness probe uses an HTTP GET request. Here is the configuration file for a Pod that runs a container based on the `gcr.io/google_containers/liveness` image.

{% include code.html language="yaml" file="http-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/http-liveness.yaml" %}

In the configuration file, you can see that the Pod has a single Container. The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 3 seconds. The `initialDelaySeconds` field tells the kubelet that it should wait 3 seconds before performing the first probe. To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the Container and listening on port 8080. If the handler for the server's `/healthz` path returns a success code, the kubelet considers the Container to be alive and healthy. If the handler returns a failure code, the kubelet kills the Container and restarts it.

Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.

You can see the source code for the server in [server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/liveness/server.go).

For the first 10 seconds that the Container is alive, the `/healthz` handler returns a status of 200. After that, the handler returns a status of 500.

```go
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    duration := time.Now().Sub(started)
    if duration.Seconds() > 10 {
        w.WriteHeader(500)
        w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
    } else {
        w.WriteHeader(200)
        w.Write([]byte("ok"))
    }
})
```

The kubelet starts performing health checks 3 seconds after the Container starts. So the first couple of health checks will succeed. But after 10 seconds, the health checks will fail, and the kubelet will kill and restart the Container.

To try the HTTP liveness check, create a Pod:

```shell
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/http-liveness.yaml
```

After 10 seconds, view Pod events to verify that liveness probes have failed and the Container has been restarted:

```shell
kubectl describe pod liveness-http
```

## Define a TCP liveness probe

A third type of liveness probe uses a TCP socket. With this configuration, the kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, the probe is considered a failure.

{% include code.html language="yaml" file="tcp-liveness-readiness.yaml" ghlink="/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml" %}

As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will send the first readiness probe 5 seconds after the container starts. This will attempt to connect to the `goproxy` container on port 8080. If the probe succeeds, the pod will be marked as ready. The kubelet will continue to run this check every 10 seconds.

In addition to the readiness probe, this configuration includes a liveness probe. The kubelet will run the first liveness probe 15 seconds after the container starts. Just like the readiness probe, this will attempt to connect to the `goproxy` container on port 8080. If the liveness probe fails, the container will be restarted.

## Use a named port

You can use a named [ContainerPort](/docs/api-reference/{{page.version}}/#containerport-v1-core) for HTTP or TCP liveness checks:

```yaml
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
```

## Define readiness probes

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don't want to kill the application, but you don't want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

Readiness probes are configured similarly to liveness probes. The only difference is that you use the `readinessProbe` field instead of the `livenessProbe` field.

```yaml
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
```

Configuration for HTTP and TCP readiness probes also remains identical to liveness probes, as the sketch below shows.
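
Here is a minimal sketch of an HTTP readiness probe; the path, port, and timing values are illustrative assumptions and would be adjusted for your application:

```yaml
readinessProbe:
  httpGet:
    path: /healthz    # illustrative health endpoint
    port: 8080        # illustrative container port
  initialDelaySeconds: 5
  periodSeconds: 5
```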

Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.

## Configure Probes

{% comment %}
Eventually, some of this section could be moved to a concept topic.
{% endcomment %}

[Probes](/docs/api-reference/{{page.version}}/#probe-v1-core) have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks; a sketch showing where these fields sit follows the list:

* `initialDelaySeconds`: Number of seconds after the container has started before liveness probes are initiated.
* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
* `successThreshold`: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
* `failureThreshold`: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
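
As a sketch, all of these fields sit directly under the probe definition; the values below are illustrative, not recommendations:

```yaml
livenessProbe:
  httpGet:
    path: /healthz    # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3
```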

[HTTP probes](/docs/api-reference/{{page.version}}/#httpgetaction-v1-core) have additional fields that can be set on `httpGet`:

* `host`: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
* `scheme`: Scheme to use for connecting to the host. Defaults to HTTP.
* `path`: Path to access on the HTTP server.
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
* `port`: Name or number of the port to access on the container. Number must be in the range 1 to 65535.

For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the container's IP address, unless the address is overridden by the optional `host` field in `httpGet`. In most scenarios, you do not want to set the `host` field. Here's one scenario where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should be set to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.
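
For the virtual-host case, a sketch would set the header rather than `host`; the hostname `www.example.com` is purely illustrative:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: www.example.com   # illustrative virtual host
```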

{% endcapture %}

{% capture whatsnext %}

* Learn more about [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

### Reference

* [Pod](/docs/api-reference/{{page.version}}/#pod-v1-core)
* [Container](/docs/api-reference/{{page.version}}/#container-v1-core)
* [Probe](/docs/api-reference/{{page.version}}/#probe-v1-core)

{% endcapture %}

{% include templates/task.md %}

@@ -0,0 +1,231 @@
---
approvers:
- bprashanth
- liggitt
- thockin
title: Configure Service Accounts for Pods
---

A service account provides an identity for processes that run in a Pod.

*This is a user introduction to Service Accounts. See also the [Cluster Admin Guide to Service Accounts](/docs/admin/service-accounts-admin).*

**Note:** This document describes how service accounts behave in a cluster set up as recommended by the Kubernetes project. Your cluster administrator may have customized the behavior in your cluster, in which case this documentation may not apply.
{: .note}

When you (a human) access the cluster (e.g. using `kubectl`), you are authenticated by the apiserver as a particular User Account (currently this is usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (e.g. `default`).

## Use the Default Service Account to access the API server.

When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. If you get the raw json or yaml for a pod you have created (e.g. `kubectl get pods/podname -o yaml`), you can see the `spec.serviceAccountName` field has been [automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified).

You can access the API from inside a pod using automatically mounted service account credentials, as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod). The API permissions a service account has depend on the [authorization plugin and policy](/docs/admin/authorization/#a-quick-note-on-service-accounts) in use.

In version 1.6+, you can opt out of automounting API credentials for a service account by setting `automountServiceAccountToken: false` on the service account:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
```

In version 1.6+, you can also opt out of automounting API credentials for a particular pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
```

The pod spec takes precedence over the service account if both specify an `automountServiceAccountToken` value.

## Use Multiple Service Accounts.

Every namespace has a default service account resource called `default`. You can list this and any other serviceAccount resources in the namespace with this command:

```shell
$ kubectl get serviceAccounts
NAME      SECRETS    AGE
default   1          1d
```

You can create additional ServiceAccount objects like this:

```shell
$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccount "build-robot" created
```

If you get a complete dump of the service account object, like this:

```shell
$ kubectl get serviceaccounts/build-robot -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-06-16T00:12:59Z
  name: build-robot
  namespace: default
  resourceVersion: "272500"
  selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
  uid: 721ab723-13bc-11e5-aec2-42010af0021e
secrets:
- name: build-robot-token-bvbk5
```

then you will see that a token has automatically been created and is referenced by the service account.

You may use authorization plugins to [set permissions on service accounts](/docs/admin/authorization/#a-quick-note-on-service-accounts).

To use a non-default service account, set the `spec.serviceAccountName` field of a pod to the name of the service account you wish to use.

The service account has to exist at the time the pod is created, or it will be rejected.

You cannot update the service account of an already created pod.
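
For example, a minimal sketch of a pod that runs as the `build-robot` service account created above (the pod name and image are illustrative, not part of this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod   # illustrative name
spec:
  serviceAccountName: build-robot
  containers:
  - name: main
    image: nginx           # illustrative image
```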

You can clean up the service account from this example like this:

```shell
$ kubectl delete serviceaccount/build-robot
```

## Manually create a service account API token.

Suppose we have an existing service account named "build-robot" as mentioned above, and we create a new secret manually.

```shell
$ cat > /tmp/build-robot-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
$ kubectl create -f /tmp/build-robot-secret.yaml
secret "build-robot-secret" created
```

Now you can confirm that the newly built secret is populated with an API token for the "build-robot" service account.

Any tokens for non-existent service accounts will be cleaned up by the token controller.

```shell
$ kubectl describe secrets/build-robot-secret
Name:           build-robot-secret
Namespace:      default
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=build-robot,kubernetes.io/service-account.uid=870ef2a5-35cf-11e5-8d06-005056b45392

Type:   kubernetes.io/service-account-token

Data
====
ca.crt:         1220 bytes
token:          ...
namespace:      7 bytes
```

**Note:** The content of `token` is elided here.
{: .note}

## Add ImagePullSecrets to a service account

First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). Next, verify it has been created. For example:

```shell
$ kubectl get secrets myregistrykey
NAME             TYPE                              DATA    AGE
myregistrykey    kubernetes.io/.dockerconfigjson   1       1d
```

Next, modify the default service account for the namespace to use this secret as an imagePullSecret.

```shell
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```

Interactive version requiring manual edit:

```shell
$ kubectl get serviceaccounts default -o yaml > ./sa.yaml
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  resourceVersion: "243024"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
$ vi sa.yaml
[editor session not shown]
[delete line with key "resourceVersion"]
[add lines with "imagePullSecrets:"]
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
$ kubectl replace serviceaccount default -f ./sa.yaml
serviceaccounts/default
```

Now, any new pods created in the current namespace will have this added to their spec:

```yaml
spec:
  imagePullSecrets:
  - name: myregistrykey
```

<!--## Adding Secrets to a service account.

TODO: Test and explain how to use additional non-K8s secrets with an existing service account.
-->
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
spec:
  containers:
  - name: cpu-demo-ctr-2
    image: vish/stress
    resources:
      limits:
        cpu: "100"
      requests:
        cpu: "100"
    args:
    - -cpus
    - "2"

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"

@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    image: gcr.io/google_containers/busybox
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

@@ -0,0 +1,25 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    args:
    - /server
    image: gcr.io/google_containers/liveness
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
@@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}

@@ -0,0 +1,33 @@
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]

@@ -0,0 +1,11 @@
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
spec:
  containers:
  - name: memory-demo-2-ctr
    image: vish/stress
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: "100Mi"
    args:
    - -mem-total
    - 250Mi
    - -mem-alloc-size
    - 10Mi
    - -mem-alloc-sleep
    - 1s
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
spec:
  containers:
  - name: memory-demo-3-ctr
    image: vish/stress
    resources:
      limits:
        memory: "1000Gi"
      requests:
        memory: "1000Gi"
    args:
    - -mem-total
    - 150Mi
    - -mem-alloc-size
    - 10Mi
    - -mem-alloc-sleep
    - 1s

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo-ctr
    image: vish/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    args:
    - -mem-total
    - 150Mi
    - -mem-alloc-size
    - 10Mi
    - -mem-alloc-sleep
    - 1s

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: oir-demo-2
spec:
  containers:
  - name: oir-demo-2-ctr
    image: nginx
    resources:
      requests:
        pod.alpha.kubernetes.io/opaque-int-resource-dongle: 2

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: oir-demo
spec:
  containers:
  - name: oir-demo-ctr
    image: nginx
    resources:
      requests:
        pod.alpha.kubernetes.io/opaque-int-resource-dongle: 3

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
  volumes:
  - name: redis-storage
    emptyDir: {}

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regsecret

@@ -0,0 +1,23 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
  - name: test-projected-volume
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: user
      - secret:
          name: pass

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-2
spec:
  containers:
  - name: qos-demo-2-ctr
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-3
spec:
  containers:
  - name: qos-demo-3-ctr
    image: nginx

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-4
spec:
  containers:
  - name: qos-demo-4-ctr-1
    image: nginx
    resources:
      requests:
        memory: "200Mi"
  - name: qos-demo-4-ctr-2
    image: redis

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
@@ -0,0 +1,11 @@
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 2000

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-3
spec:
  containers:
  - name: sec-ctx-3
    image: gcr.io/google-samples/node-hello:1.0

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo

@@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
@@ -0,0 +1,22 @@
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage

@@ -0,0 +1,14 @@
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: gcr.io/google_containers/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

@@ -0,0 +1,14 @@
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: patch-demo-ctr
        image: nginx