Consolidate YAML files [part-13] (#9377)

This PR deals with YAML files referenced by the following two topics:
- concepts:object management
- concepts:service networking

While scanning for references to the YAML files being moved, some trivial
edits were applied to the markdown files. None of these edits should
break the build or otherwise harm the docs.
Qiming 2018-07-10 23:56:25 +08:00 committed by k8s-ci-robot
parent f9901e7a8f
commit 99a77ff368
17 changed files with 159 additions and 133 deletions

View File

@ -65,18 +65,18 @@ configuration file that was used to create the object.
Here's an example of an object configuration file:
{{< code file="simple_deployment.yaml" >}}
{{< codenew file="application/simple_deployment.yaml" >}}
Create the object using `kubectl apply`:
```shell
kubectl apply -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
```
Print the live configuration using `kubectl get`:
```shell
kubectl get -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml -o yaml
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
```
The output shows that the `kubectl.kubernetes.io/last-applied-configuration` annotation
@ -139,12 +139,12 @@ kubectl apply -f <directory>/
Here's an example configuration file:
{{< code file="simple_deployment.yaml" >}}
{{< codenew file="application/simple_deployment.yaml" >}}
Create the object using `kubectl apply`:
```shell
kubectl apply -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
```
{{< note >}}
@ -155,7 +155,7 @@ configuration file instead of a directory.
Print the live configuration using `kubectl get`:
```shell
kubectl get -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml -o yaml
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
```
The output shows that the `kubectl.kubernetes.io/last-applied-configuration` annotation
@ -210,7 +210,7 @@ kubectl scale deployment/nginx-deployment --replicas=2
Print the live configuration using `kubectl get`:
```shell
kubectl get -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml -o yaml
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
```
The output shows that the `replicas` field has been set to 2, and the `last-applied-configuration`
@ -257,18 +257,18 @@ spec:
Update the `simple_deployment.yaml` configuration file to change the image from
`nginx:1.7.9` to `nginx:1.11.9`, and delete the `minReadySeconds` field:
{{< code file="update_deployment.yaml" >}}
{{< codenew file="application/update_deployment.yaml" >}}
Apply the changes made to the configuration file:
```shell
kubectl apply -f https://k8s.io/docs/concepts/overview/object-management-kubectl/update_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/update_deployment.yaml
```
Print the live configuration using `kubectl get`:
```
kubectl get -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml -o yaml
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
```
The output shows the following changes to the live configuration:
@ -417,7 +417,7 @@ to calculate which fields should be deleted or set:
Here's an example. Suppose this is the configuration file for a Deployment object:
{{< code file="update_deployment.yaml" >}}
{{< codenew file="application/update_deployment.yaml" >}}
Also, suppose this is the live configuration for the same Deployment object:
@ -463,7 +463,10 @@ Here are the merge calculations that would be performed by `kubectl apply`:
1. Calculate the fields to delete by reading values from
`last-applied-configuration` and comparing them to values in the
configuration file. In this example, `minReadySeconds` appears in the
configuration file.
Clear fields explicitly set to null in the local object configuration file
regardless of whether they appear in the `last-applied-configuration`.
In this example, `minReadySeconds` appears in the
`last-applied-configuration` annotation, but does not appear in the configuration file.
**Action:** Clear `minReadySeconds` from the live configuration.
2. Calculate the fields to set by reading values from the configuration
@ -517,12 +520,6 @@ spec:
# ...
```
{{< comment >}}
TODO(1.6): For 1.6, add the following bullet point to 1.
- clear fields explicitly set to null in the local object configuration file regardless of whether they appear in the last-applied-configuration
{{< /comment >}}
### How different types of fields are merged
How a particular field in a configuration file is merged with
@ -716,18 +713,18 @@ not specified when the object is created.
Here's a configuration file for a Deployment. The file does not specify `strategy`:
{{< code file="simple_deployment.yaml" >}}
{{< codenew file="application/simple_deployment.yaml" >}}
Create the object using `kubectl apply`:
```shell
kubectl apply -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
```
Print the live configuration using `kubectl get`:
```shell
kubectl get -f https://k8s.io/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml -o yaml
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
```
The output shows that the API server set several fields to default values in the live
@ -871,31 +868,10 @@ Recommendation: These fields should be explicitly defined in the object configur
### How to clear server-defaulted fields or fields set by other writers
As of Kubernetes 1.5, fields that do not appear in the configuration file cannot be
cleared by a merge operation. Here are some workarounds:
Option 1: Remove the field by directly modifying the live object.
{{< note >}}
**Note:** As of Kubernetes 1.5, `kubectl edit` does not work with `kubectl apply`.
Using these together will cause unexpected behavior.
{{< /note >}}
Option 2: Remove the field through the configuration file.
1. Add the field to the configuration file to match the live object.
1. Apply the configuration file; this updates the annotation to include the field.
1. Delete the field from the configuration file.
1. Apply the configuration file; this deletes the field from the live object and annotation.
{{< comment >}}
TODO(1.6): Update this with the following for 1.6
Fields that do not appear in the configuration file can be cleared by
setting their values to `null` and then applying the configuration file.
For fields defaulted by the server, this triggers re-defaulting
the values.
{{< /comment >}}
## How to change ownership of a field between the configuration file and direct imperative writers
@ -994,13 +970,6 @@ template:
controller-selector: "extensions/v1beta1/deployment/nginx"
```
## Known Issues
* Prior to Kubernetes 1.6, `kubectl apply` did not support operating on objects stored in a
[custom resource](/docs/concepts/api-extension/custom-resources/).
For these cluster versions, you should instead use [imperative object configuration](/docs/concepts/overview/object-management-kubectl/imperative-config/).
{{% /capture %}}
{{% capture whatsnext %}}
- [Managing Kubernetes Objects Using Imperative Commands](/docs/concepts/overview/object-management-kubectl/imperative-command/)
- [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/concepts/overview/object-management-kubectl/imperative-config/)

View File

@ -36,12 +36,14 @@ When you create an object in Kubernetes, you must provide the object spec that d
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:
{{< code file="nginx-deployment.yaml" >}}
{{< codenew file="application/deployment.yaml" >}}
One way to create a Deployment using a `.yaml` file like the one above is to use the [`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create) command in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
One way to create a Deployment using a `.yaml` file like the one above is to use the
[`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create) command
in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
```shell
$ kubectl create -f https://k8s.io/docs/concepts/overview/working-with-objects/nginx-deployment.yaml --record
$ kubectl create -f https://k8s.io/examples/application/deployment.yaml --record
```
The output is similar to this:

View File

@ -1,19 +0,0 @@
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80

View File

@ -44,13 +44,17 @@ fe00::2 ip6-allrouters
10.200.0.4 nginx
```
by default, the hosts file only includes ipv4 and ipv6 boilerplates like `localhost` and its own hostname.
By default, the `hosts` file only includes IPv4 and IPv6 boilerplates like
`localhost` and its own hostname.
## Adding Additional Entries with HostAliases
In addition to the default boilerplate, we can add additional entries to the hosts file to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`, `bar.remote` to `10.1.2.3`, we can by adding HostAliases to the Pod under `.spec.hostAliases`:
In addition to the default boilerplate, we can add additional entries to the
`hosts` file. For example, to resolve `foo.local` and `bar.local` to `127.0.0.1`
and `foo.remote` and `bar.remote` to `10.1.2.3`, add HostAliases to the Pod under
`.spec.hostAliases`:
{{< code file="hostaliases-pod.yaml" >}}
{{< codenew file="service/networking/hostaliases-pod.yaml" >}}
This Pod can be started with the following commands:
@ -63,7 +67,7 @@ NAME READY STATUS RESTARTS AGE IP
hostaliases-pod 0/1 Completed 0 6s 10.244.135.10 node3
```
The hosts file content would look like this:
The `hosts` file content would look like this:
```shell
$ kubectl logs hostaliases-pod
@ -83,22 +87,17 @@ fe00::2 ip6-allrouters
With the additional entries specified at the bottom.
## Limitations
HostAlias is only supported in 1.7+.
HostAlias support in 1.7 is limited to non-hostNetwork Pods because kubelet only manages the hosts file for non-hostNetwork Pods.
In 1.8, HostAlias is supported for all Pods regardless of network configuration.
## Why Does Kubelet Manage the Hosts File?
Kubelet [manages](https://github.com/kubernetes/kubernetes/issues/14633) the hosts file for each container of the Pod to prevent Docker from [modifying](https://github.com/moby/moby/issues/17190) the file after the containers have already been started.
Kubelet [manages](https://github.com/kubernetes/kubernetes/issues/14633) the
`hosts` file for each container of the Pod to prevent Docker from
[modifying](https://github.com/moby/moby/issues/17190) the file after the
containers have already been started.
Because of the managed-nature of the file, any user-written content will be overwritten whenever the hosts file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, it is not suggested to modify the contents of the file.
Because of the managed nature of the file, any user-written content will be
overwritten whenever the `hosts` file is remounted by the kubelet in the event of
a container restart or a Pod reschedule. Thus, it is not recommended to modify
the contents of the file.
{{% /capture %}}
{{% capture whatsnext %}}
{{% /capture %}}

View File

@ -28,11 +28,12 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same
## Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
We did this in a previous example, but let's do it once again and focus on the networking perspective.
Create an nginx Pod, and note that it has a container port specification:
{{< code file="run-my-nginx.yaml" >}}
{{< codenew file="service/networking/run-my-nginx.yaml" >}}
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:
```shell
$ kubectl create -f ./run-my-nginx.yaml
@ -69,9 +70,15 @@ service "my-nginx" exposed
This is equivalent to `kubectl create -f` the following yaml:
{{< code file="nginx-svc.yaml" >}}
{{< codenew file="service/networking/nginx-svc.yaml" >}}
This specification will create a Service which targets TCP port 80 on any Pod with the `run: my-nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) to see the list of supported fields in service definition.
This specification will create a Service which targets TCP port 80 on any Pod
with the `run: my-nginx` label, and expose it on an abstracted Service port
(`targetPort` is the port the container accepts traffic on; `port` is the
abstracted Service port, which can be any port other Pods use to access the
Service).
See the [Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core)
API object for the list of supported fields in a Service definition.
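As an illustration of the `port`/`targetPort` distinction, a Service manifest along the lines of the one referenced above might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  selector:
    run: my-nginx    # any Pod with this label backs the Service
  ports:
  - port: 80         # the abstracted Service port
    targetPort: 80   # the port the container accepts traffic on
    protocol: TCP
```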
Check your Service:
```shell
@ -80,7 +87,13 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx 10.0.162.149 <none> 80/TCP 21s
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `my-nginx`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
As mentioned previously, a Service is backed by a group of Pods. These Pods are
exposed through `endpoints`. The Service's selector will be evaluated continuously
and the results will be POSTed to an Endpoints object also named `my-nginx`.
When a Pod dies, it is automatically removed from the endpoints, and new Pods
matching the Service's selector will automatically get added to the endpoints.
Check the endpoints, and note that the IPs are the same as the Pods created in
the first step:
```shell
$ kubectl describe svc my-nginx
@ -101,15 +114,22 @@ NAME ENDPOINTS AGE
my-nginx 10.244.2.5:80,10.244.3.4:80 1m
```
You should now be able to curl the nginx Service on `<CLUSTER-IP>:<PORT>` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies).
You should now be able to curl the nginx Service on `<CLUSTER-IP>:<PORT>` from
any node in your cluster. Note that the Service IP is completely virtual, it
never hits the wire. If you're curious about how this works you can read more
about the [service proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies).
## Accessing the Service
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md).
Kubernetes supports two primary modes of finding a Service: environment variables
and DNS. The former works out of the box while the latter requires the
[kube-dns cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md).
### Environment Variables
When a Pod runs on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods (your pod name will be different):
When a Pod runs on a Node, the kubelet adds a set of environment variables for
each active Service. This introduces an ordering problem. To see why, inspect
the environment of your running nginx Pods (your Pod name will be different):
```shell
$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
@ -118,7 +138,14 @@ KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 pods and waiting for the Deployment to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
Note there's no mention of your Service. This is because you created the replicas
before the Service. Another disadvantage of doing this is that the scheduler might
put both Pods on the same machine, which will take your entire Service down if
that machine dies. We can do this the right way by killing the 2 Pods and waiting for the
Deployment to recreate them. This time around the Service exists *before* the
replicas. This will give you scheduler-level Service spreading of your Pods
(provided all your nodes have equal capacity), as well as the right environment
variables:
```shell
$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
@ -150,7 +177,11 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 8m
```
If it isn't running, you can [enable it](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a dns server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's run another curl application to test this:
If it isn't running, you can [enable it](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/README.md#how-do-i-configure-it).
The rest of this section will assume you have a Service with a long-lived IP
(my-nginx), and a DNS server that has assigned a name to that IP (the kube-dns
cluster addon), so you can talk to the Service from any Pod in your cluster using
standard methods (e.g. gethostbyname). Let's run another curl application to test this:
```shell
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty
@ -221,13 +252,16 @@ nginxsecret Opaque 2 1m
Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
{{< code file="nginx-secure-app.yaml" >}}
{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
Noteworthy points about the nginx-secure-app manifest:
- It contains both Deployment and Service specification in the same file.
- The [nginx server](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is setup *before* the nginx server is started.
- The [nginx server](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf)
serves HTTP traffic on port 80 and HTTPS traffic on 443, and nginx Service
exposes both ports.
- Each container has access to the keys through a volume mounted at `/etc/nginx/ssl`.
This is set up *before* the nginx server is started.
```shell
$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
@ -247,7 +281,7 @@ Note how we supplied the `-k` parameter to curl in the last step, this is becaus
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a Pod (the same secret is being reused for simplicity; the Pod only needs nginx.crt to access the Service):
{{< code file="curlpod.yaml" >}}
{{< codenew file="service/networking/curlpod.yaml" >}}
```shell
$ kubectl create -f ./curlpod.yaml
@ -262,7 +296,11 @@ $ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacer
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
For some parts of your applications you may want to expose a Service onto an
external IP address. Kubernetes supports two ways of doing this: NodePorts and
LoadBalancers. The Service created in the last section already used `NodePort`,
so your nginx HTTPS replica is ready to serve traffic on the internet if your
node has a public IP.
```shell
$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5

View File

@ -233,7 +233,7 @@ Below are the properties a user can specify in the `dnsConfig` field:
The following is an example Pod with custom DNS settings:
{{< code file="custom-dns.yaml" >}}
{{< codenew file="service/networking/custom-dns.yaml" >}}
When the Pod above is created, the container `test` gets the following contents
in its `/etc/resolv.conf` file:

View File

@ -111,9 +111,11 @@ Make sure you review your controller's specific docs so you understand the cavea
### Single Service Ingress
There are existing Kubernetes concepts that allow you to expose a single service (see [alternatives](#alternatives)), however you can do so through an Ingress as well, by specifying a *default backend* with no rules.
There are existing Kubernetes concepts that allow you to expose a single Service
(see [alternatives](#alternatives)), however you can do so through an Ingress
as well, by specifying a *default backend* with no rules.
{{< code file="ingress.yaml" >}}
{{< codenew file="service/networking/ingress.yaml" >}}
If you create it using `kubectl create -f` you should see:
@ -123,11 +125,17 @@ NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
```
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy
this Ingress. The `RULE` column shows that all traffic sent to the IP is
directed to the Kubernetes Service listed under `BACKEND`.
### Simple fanout
As described previously, pods within kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available loadbalancer. An Ingress allows you to keep the number of loadbalancers down to a minimum, for example, a setup like:
As described previously, Pods within Kubernetes have IPs only visible on the
cluster network, so we need something at the edge accepting ingress traffic and
proxying it to the right endpoints. This component is usually a highly available
loadbalancer. An Ingress allows you to keep the number of loadbalancers down
to a minimum. For example, a setup like:
```shell
foo.bar.com -> 178.91.123.132 -> / foo s1:80
@ -168,7 +176,10 @@ test -
/foo s1:80
/bar s2:80
```
The Ingress controller will provision an implementation specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer under the last column of the Ingress.
The Ingress controller will provision an implementation specific loadbalancer
that satisfies the Ingress, as long as the services (`s1`, `s2`) exist.
When it has done so, you will see the address of the loadbalancer under the
last column of the Ingress.
### Name based virtual hosting
@ -180,7 +191,8 @@ foo.bar.com --| |-> foo.bar.com s1:80
bar.foo.com --| |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
The following Ingress tells the backing loadbalancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
apiVersion: extensions/v1beta1
@ -203,11 +215,23 @@ spec:
servicePort: 80
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
__Default Backends__: An Ingress with no rules, like the one shown in the previous
section, sends all traffic to a single default backend. You can use the same
technique to tell a loadbalancer where to find your website's 404 page, by
specifying a set of rules *and* a default backend. Traffic is routed to your
default backend if none of the Hosts in your Ingress match the Host in the
request header, and/or none of the paths match the URL of the request.
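For instance, a sketch combining host rules with a default backend (the Service names here are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: named-hosts-with-default
spec:
  backend:
    serviceName: default-http-backend   # receives any request that matches no rule
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```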
### TLS
You can secure an Ingress by specifying a [secret](/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:
You can secure an Ingress by specifying a [secret](/docs/concepts/configuration/secret)
that contains a TLS private key and certificate. Currently the Ingress only
supports a single TLS port, 443, and assumes TLS termination. If the TLS
configuration section in an Ingress specifies different hosts, they will be
multiplexed on the same port according to the hostname specified through the
SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret
must contain keys named `tls.crt` and `tls.key` that contain the certificate
and private key to use for TLS, e.g.:
```yaml
apiVersion: v1
@ -221,7 +245,8 @@ metadata:
type: Opaque
```
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
Referencing this secret in an Ingress will tell the Ingress controller to
secure the channel from the client to the loadbalancer using TLS:
```yaml
apiVersion: extensions/v1beta1
@ -236,13 +261,30 @@ spec:
servicePort: 80
```
Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress-nginx/README.md#https), [GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https), or any other platform specific Ingress controller to understand how TLS works in your environment.
Note that there is a gap between TLS features supported by various Ingress
controllers. Please refer to documentation on
[nginx](https://git.k8s.io/ingress-nginx/README.md#https),
[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https), or any other
platform specific Ingress controller to understand how TLS works in your environment.
### Loadbalancing
An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/ingress-nginx/blob/master/docs/ingress-controller-catalog.md). With time, we plan to distill load balancing patterns that are applicable cross platform into the Ingress resource.
An Ingress controller is bootstrapped with some load balancing policy settings
that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others. More advanced load balancing concepts
(e.g. persistent sessions, dynamic weights) are not yet exposed through the
Ingress. You can still get these features through the
[service loadbalancer](https://github.com/kubernetes/ingress-nginx/blob/master/docs/ingress-controller-catalog.md).
With time, we plan to distill load balancing patterns that are applicable
cross platform into the Ingress resource.
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://git.k8s.io/ingress-nginx/README.md), [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks)).
It's also worth noting that even though health checks are not exposed directly
through the Ingress, there exist parallel concepts in Kubernetes such as
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
which allow you to achieve the same end result. Please review the controller-specific
docs to see how they handle health checks
([nginx](https://git.k8s.io/ingress-nginx/README.md),
[GCE](https://git.k8s.io/ingress-gce/README.md#health-checks)).
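For example, a readiness probe on a backend Pod's container might be sketched as follows (the path and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-readiness
  labels:
    run: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:            # endpoints (and thus the Ingress backend) include the Pod only once this passes
      httpGet:
        path: /healthz         # illustrative health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```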
## Updating an Ingress

View File

@ -306,28 +306,12 @@ func TestExampleObjectSchemas(t *testing.T) {
"nginx-deployment": {&extensions.Deployment{}},
"nginx-svc": {&api.Service{}},
},
"docs/concepts/overview/working-with-objects": {
"nginx-deployment": {&extensions.Deployment{}},
},
"docs/concepts/services-networking": {
"curlpod": {&extensions.Deployment{}},
"custom-dns": {&api.Pod{}},
"hostaliases-pod": {&api.Pod{}},
"ingress": {&extensions.Ingress{}},
"nginx-secure-app": {&api.Service{}, &extensions.Deployment{}},
"nginx-svc": {&api.Service{}},
"run-my-nginx": {&extensions.Deployment{}},
},
"docs/tutorials/clusters": {
"hello-apparmor-pod": {&api.Pod{}},
},
"docs/tutorials/configuration/configmap/redis": {
"redis-pod": {&api.Pod{}},
},
"docs/concepts/overview/object-management-kubectl": {
"simple_deployment": {&extensions.Deployment{}},
"update_deployment": {&extensions.Deployment{}},
},
"examples/admin": {
"namespace-dev": {&api.Namespace{}},
"namespace-prod": {&api.Namespace{}},
@ -381,6 +365,8 @@ func TestExampleObjectSchemas(t *testing.T) {
"deployment-update": {&extensions.Deployment{}},
"nginx-with-request": {&extensions.Deployment{}},
"shell-demo": {&api.Pod{}},
"simple_deployment": {&extensions.Deployment{}},
"update_deployment": {&extensions.Deployment{}},
},
"examples/application/cassandra": {
"cassandra-service": {&api.Service{}},
@ -529,6 +515,15 @@ func TestExampleObjectSchemas(t *testing.T) {
"hello-service": {&api.Service{}},
"hello": {&extensions.Deployment{}},
},
"examples/service/networking": {
"curlpod": {&extensions.Deployment{}},
"custom-dns": {&api.Pod{}},
"hostaliases-pod": {&api.Pod{}},
"ingress": {&extensions.Ingress{}},
"nginx-secure-app": {&api.Service{}, &extensions.Deployment{}},
"nginx-svc": {&api.Service{}},
"run-my-nginx": {&extensions.Deployment{}},
},
"examples/windows": {
"configmap-pod": {&api.ConfigMap{}, &api.Pod{}},
"daemonset": {&extensions.DaemonSet{}},