Kubernetes Ingress Docs (#1228)

This commit is contained in:
Olly P 2019-06-20 12:11:16 +01:00 committed by paigehargrave
parent 1c281f8f56
commit 9199a58f5d
11 changed files with 867 additions and 28 deletions

View File

@ -1518,7 +1518,7 @@ manuals:
- title: Securing services with TLS
path: /ee/ucp/interlock/usage/tls/
- title: Configuring websockets
path: /ee/ucp/interlock/usage/websockets/
path: /ee/ucp/interlock/usage/websockets/
- sectiontitle: Deploy apps with Kubernetes
section:
- title: Access Kubernetes Resources
@ -1529,8 +1529,6 @@ manuals:
path: /ee/ucp/kubernetes/deploy-with-compose/
- title: Using Pod Security Policies
path: /ee/ucp/kubernetes/pod-security-policies/
- title: Deploy an ingress controller
path: /ee/ucp/kubernetes/layer-7-routing/
- title: Create a service account for a Kubernetes app
path: /ee/ucp/kubernetes/create-service-account/
- title: Install an unmanaged CNI plugin
@ -1551,6 +1549,18 @@ manuals:
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
- title: Configure iSCSI
path: /ee/ucp/kubernetes/storage/use-iscsi/
- sectiontitle: Cluster Ingress
section:
- title: Overview
path: /ee/ucp/kubernetes/cluster-ingress/
- title: Install Ingress
path: /ee/ucp/kubernetes/cluster-ingress/install/
- title: Deploy Simple Application
path: /ee/ucp/kubernetes/cluster-ingress/ingress/
- title: Deploy a Canary Deployment
path: /ee/ucp/kubernetes/cluster-ingress/canary/
- title: Implementing Persistent (sticky) Sessions
path: /ee/ucp/kubernetes/cluster-ingress/sticky/
- title: API reference
path: /reference/ucp/3.2/api/
nosync: true

View File

@ -0,0 +1,114 @@
---
title: Deploy a Sample Application with a Canary release (Experimental)
description: Stage a canary release using weight-based load balancing between multiple backend applications.
keywords: ucp, cluster, ingress, kubernetes
---
{% include experimental-feature.md %}
# Deploy a Sample Application with a Canary release
This example stages a canary release using weight-based load balancing between
multiple backend applications.

> **Note**: This guide assumes you have followed the [Deploy Sample Application](./ingress/)
> tutorial and that its artifacts are still running on the cluster. If they are
> not, go back and complete that tutorial first.

The following routing scheme is used in this tutorial:
- 80% of client traffic is sent to the production v1 service.
- 20% of client traffic is sent to the staging v2 service.
- All test traffic using the header `stage=dev` is sent to the v3 service.
A new Kubernetes manifest file with the updated ingress rules can be found
[here](./yaml/ingress-weighted.yaml); the relevant `VirtualService` section is
shown in the condensed excerpt below.
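This condensed excerpt from that manifest shows the header match that sends
`stage=dev` traffic to v3 and the 80/20 weighted split between v1 and v2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - cluster-gateway
  http:
  # Test traffic carrying the header stage=dev is routed to v3.
  - match:
    - headers:
        stage:
          exact: dev
    route:
    - destination:
        host: demo-service
        subset: v3
        port:
          number: 8080
  # All other traffic is split 80/20 between v1 and v2.
  - route:
    - destination:
        host: demo-service
        subset: v1
        port:
          number: 8080
      weight: 80
    - destination:
        host: demo-service
        subset: v2
        port:
          number: 8080
      weight: 20
```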
1) Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a
cluster with Cluster Ingress installed.
2) Download the sample Kubernetes manifest file
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-weighted.yaml
```
3) Deploy the Kubernetes manifest file
```bash
$ kubectl apply -f ingress-weighted.yaml
$ kubectl describe vs
Hosts:
  demo.example.com
Http:
  Match:
    Headers:
      Stage:
        Exact:  dev
  Route:
    Destination:
      Host:  demo-service
      Port:
        Number:  8080
      Subset:  v3
  Route:
    Destination:
      Host:  demo-service
      Port:
        Number:  8080
      Subset:  v1
    Weight:  80
    Destination:
      Host:  demo-service
      Port:
        Number:  8080
      Subset:  v2
    Weight:  20
```
This virtual service performs the following actions:
- Receives all traffic with host=demo.example.com.
- If an exact match for HTTP header `stage=dev` is found, traffic is routed
to v3.
- All other traffic is routed to v1 and v2 in an 80:20 ratio.
Now we can send traffic to the application to see these load-balancing rules in
action.
```bash
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
# Node Port
$ PORT=$(kubectl get service demo-service --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
$ for i in {1..5}; do curl -H "Host: demo.example.com" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v1-7797b7c7c8-5vts2","version":"v1","metadata":"production","request_id":"d0671d32-48e7-41f7-a358-ddd7b47bba5f"}
{"instance":"demo-v2-6c5b4c6f76-c6zhm","version":"v2","metadata":"staging","request_id":"ba6dcfd6-f62a-4c68-9dd2-b242179959e0"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"d87601c0-7935-4cfc-842c-37910e6cd573"}
{"instance":"demo-v1-7797b7c7c8-5vts2","version":"v1","metadata":"production","request_id":"4c71ffab-8657-4d99-87b3-7a6933258990"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"c404471c-cc85-497e-9e5e-7bb666f4f309"}
```
The split between v1 and v2 matches the configured weights. Within the
v1 subset, requests are load balanced across its three backend replicas. No
requests are routed to v3.
To send traffic to the third version, add the HTTP header `stage=dev`.
```bash
$ for i in {1..5}; do curl -H "Host: demo.example.com" -H "Stage: dev" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"52d7afe7-befb-4e17-a49c-ee63b96d0daf"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"b2e664d2-5224-44b1-98d9-90b090578423"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"5446c78e-8a77-4f7e-bf6a-63184db5350f"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"657553c5-bc73-4a13-b320-f78f7e6c7457"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"bae52f09-0510-42d9-aec0-ca6bbbaae168"}
```
In this case, 100% of the traffic with the stage=dev header is sent to the v3
service.
## Where to go next
- [Deploy the Sample Application with Sticky Sessions](./sticky/)

View File

@ -0,0 +1,36 @@
---
title: Kubernetes Cluster Ingress (Experimental)
description: Learn about Ingress host and path routing for Kubernetes applications.
keywords: ucp, cluster, ingress, kubernetes
redirect_from:
- /ee/ucp/kubernetes/layer-7-routing/
---
{% include experimental-feature.md %}
## Cluster Ingress capabilities
Cluster Ingress provides layer 7 services for traffic entering a Docker Enterprise cluster, supporting use cases such as application resilience, security, and observability. It provides dynamic control of L7 routing in a highly available and highly performant architecture.

UCP's Ingress for Kubernetes is based on the [Istio](https://istio.io/) control-plane and is a simplified deployment focused on providing just the ingress services with minimal complexity. This includes features such as:
- L7 host and path routing
- Complex path matching and redirection rules
- Weight-based load balancing
- TLS termination
- Persistent L7 sessions
- Hot config reloads
- Redundant and highly available design
For a detailed look at Istio Ingress architecture, refer to the [Istio Ingress docs](https://istio.io/docs/tasks/traffic-management/ingress/).
To get started with UCP Ingress, the following help topics are provided:
- [Install Cluster Ingress on to a UCP Cluster](./install/)
- [Deploy a Sample Application with Ingress Rules](./ingress)
- [Deploy a Sample Application with a Canary release](./canary/)
- [Deploy a Sample Application with Sticky Sessions](./sticky/)
## Where to go next
- [Install Cluster Ingress on to a UCP Cluster](./install/)

View File

@ -0,0 +1,168 @@
---
title: Deploy a Sample Application with Ingress (Experimental)
description: Learn how to deploy Ingress rules for Kubernetes applications.
keywords: ucp, cluster, ingress, kubernetes
---
{% include experimental-feature.md %}
# Deploy a Sample Application with Ingress
Cluster Ingress is capable of routing based on many HTTP attributes, but most
commonly the HTTP host and path. The following example shows the basics of
deploying Ingress rules for a Kubernetes application. An example application is
deployed from this [deployment manifest](./yaml/demo-app.yaml) and L7 Ingress
rules are applied.
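For illustration, a `VirtualService` that routes on both the HTTP host and the
URI path could look like the following sketch. The service names and paths here
are placeholders and are not part of the sample application deployed below:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-host-path-vs
spec:
  hosts:
  - "demo.example.com"        # match on the HTTP host header
  gateways:
  - cluster-gateway
  http:
  - match:
    - uri:
        prefix: /api          # requests under /api go to a hypothetical API service
    route:
    - destination:
        host: api-service
        port:
          number: 8080
  - route:                    # everything else goes to a hypothetical web frontend
    - destination:
        host: web-service
        port:
          number: 8080
```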
## Deploy a Sample Application
In this example, three different versions of the docker-demo application are
deployed. The docker-demo application can display the container hostname,
environment variables, or labels in its HTTP responses, which makes it a good
sample application for an Ingress controller.
The three versions of the sample application are:
- v1: a production version with 3 replicas running.
- v2: a staging version with a single replica running.
- v3: a development version also with a single replica.
An example Kubernetes manifest file containing all three deployments can be found [here](./yaml/demo-app.yaml); a condensed excerpt is shown below.
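The excerpt below shows the v1 deployment from that manifest. The `VERSION` and
`METADATA` environment variables are what the application echoes back in its
responses, and the `version` label is what the Ingress rules later use to
select subsets:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-v1
  labels:
    app: demo
    version: v1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
        version: v1          # the DestinationRule subsets select on this label
    spec:
      containers:
      - name: webserver
        image: ehazlett/docker-demo
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
        - name: METADATA
          value: "production"
```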
1) Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a
cluster with Cluster Ingress installed.
2) Download the sample Kubernetes manifest file
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/demo-app.yaml
```
3) Deploy the sample Kubernetes manifest file
```bash
$ kubectl apply -f demo-app.yaml
```
4) Verify the sample applications are running
```bash
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
demo-v1-7797b7c7c8-5vts2 1/1 Running 0 3h
demo-v1-7797b7c7c8-gfwzj 1/1 Running 0 3h
demo-v1-7797b7c7c8-kw6gp 1/1 Running 0 3h
demo-v2-6c5b4c6f76-c6zhm 1/1 Running 0 3h
demo-v3-d88dddb74-9k7qg 1/1 Running 0 3h
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
demo-service NodePort 10.96.97.215 <none> 8080:33383/TCP 3h app=demo
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d <none>
```
This first part of the tutorial deployed the pods and a Kubernetes service.
Using Kubernetes NodePorts, these pods can be accessed outside of the Cluster
Ingress. This illustrates the standard L4 load balancing that a Kubernetes
service applies.
```bash
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
# Node Port
$ PORT=$(kubectl get service demo-service --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
# Send traffic directly to the NodePort to bypass L7 Ingress
$ for i in {1..5}; do curl http://$IPADDR:$PORT/ping; done
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev"}
{"instance":"demo-v2-6c5b4c6f76-c6zhm","version":"v2","metadata":"staging"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production"}
```
The L4 load balancing distributes requests across the replicas of the service,
so the traffic split simply follows the number of replicas behind each version.
More complex scenarios need load-balancing logic that is decoupled from the
number of backend instances, which is what the L7 rules of the Cluster Ingress
provide.
## Apply Ingress rules to Sample Application
To route traffic to the sample application through the Cluster Ingress, three custom resource types need to be deployed:
- Gateway
- VirtualService
- DestinationRule
For the sample application, an example manifest file with all three objects defined is [here](./yaml/ingress-simple.yaml); a condensed version is shown below.
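The condensed manifest below shows how the three objects fit together: the
`Gateway` exposes port 80 on the Istio ingress gateway, the `VirtualService`
binds the host `demo.example.com` to that gateway and routes to the v1 subset,
and the `DestinationRule` defines the v1 subset by label:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-gateway
spec:
  selector:
    istio: ingressgateway    # use the Istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - cluster-gateway
  http:
  - route:
    - destination:
        host: demo-service
        subset: v1
        port:
          number: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1
```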
1) Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a
cluster with Cluster Ingress installed.
2) Download the sample Kubernetes manifest file
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-simple.yaml
```
3) Deploy the sample Kubernetes manifest file
```bash
$ kubectl apply -f ingress-simple.yaml
$ kubectl describe virtualservice demo-vs
...
Spec:
  Gateways:
    cluster-gateway
  Hosts:
    demo.example.com
  Http:
    Match:  <nil>
    Route:
      Destination:
        Host:  demo-service
        Port:
          Number:  8080
        Subset:  v1
```
This configuration matches all traffic with `demo.example.com` and sends it to
the backend version=v1 deployment, regardless of the number of replicas in
the backend.
Curl the service again using the port of the Ingress gateway. Because DNS is
not set up, use the `--header` flag from curl to manually set the host header.
```bash
# Find the Cluster Ingress Node Port
$ PORT=$(kubectl get service -n istio-system istio-ingressgateway --output jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
$ for i in {1..5}; do curl --header "Host: demo.example.com" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v1-7797b7c7c8-5vts2","version":"v1","metadata":"production","request_id":"2558fdd1-0cbd-4ba9-b104-0d4d0b1cef85"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"59f865f5-15fb-4f49-900e-40ab0c44c9e4"}
{"instance":"demo-v1-7797b7c7c8-5vts2","version":"v1","metadata":"production","request_id":"fe233ca3-838b-4670-b6a0-3a02cdb91624"}
{"instance":"demo-v1-7797b7c7c8-5vts2","version":"v1","metadata":"production","request_id":"842b8d03-8f8a-4b4b-b7f4-543f080c3097"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"197cbb1d-5381-4e40-bc6f-cccec22eccbc"}
```
To have SNI (Server Name Indication) work with TLS services, use curl's
`--resolve` flag.
```bash
$ curl --resolve demo.example.com:$IPADDR:$PORT http://demo.example.com/ping
```
In this instance, the three backend v1 replicas are load balanced and no
requests are sent to the other versions.
## Where to go next
- [Deploy a Sample Application with a Canary release](./canary/)
- [Deploy a Sample Application with Sticky Sessions](./sticky/)

View File

@ -0,0 +1,132 @@
---
title: Install Cluster Ingress (Experimental)
description: Learn how to deploy ingress rules using Kubernetes manifests.
keywords: ucp, cluster, ingress, kubernetes
---
{% include experimental-feature.md %}
# Install Cluster Ingress
Cluster Ingress for Kubernetes is currently deployed manually outside of UCP.
Future plans for UCP include managing the full lifecycle of the Ingress
components themselves. This guide describes how to manually deploy Ingress using
Kubernetes deployment manifests.
## Offline Installation
If you are installing Cluster Ingress on a UCP cluster that does not have access
to the Docker Hub, you will need to pre-pull the Ingress container images. If
your cluster has access to the Docker Hub, you can move on to [deploying cluster
ingress](#deploy-cluster-ingress).
Without access to the Docker Hub, you will need to download the container images
on a workstation with internet access. The container images are distributed as a
`.tar.gz` archive, which can be downloaded from the following
[URL](https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.tgz).
Once the archive has been downloaded, it needs to be copied onto each host in
your UCP cluster and then sideloaded into Docker.
Images can be sideloaded with:
```bash
$ docker load -i istio-ingress-1.1.2.tgz
```
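To repeat this on every node, a loop like the following could be used. This is
only a sketch that assumes SSH access to the nodes; the hostnames are
placeholders for your own managers and workers:

```bash
# Copy the downloaded archive to each node and sideload it into Docker.
# manager-01, worker-01, ... are placeholder hostnames for this sketch.
for NODE in manager-01 worker-01 worker-02; do
  scp istio-ingress-1.1.2.tgz "${NODE}:/tmp/"
  ssh "${NODE}" "docker load -i /tmp/istio-ingress-1.1.2.tgz"
done
```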
These images should now be present on your nodes:
```bash
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker/node-agent-k8s 1.1.2 4ddd06d05d5d 6 days ago 243MB
docker/proxy_init 1.1.2 ff9628f32621 6 days ago 145MB
docker/proxyv2 1.1.2 bebabbe114a4 6 days ago 360MB
docker/pilot 1.1.2 58b6e18f3545 6 days ago 299MB
```
## Deploy Cluster Ingress
This step deploys the Ingress controller components `istio-pilot` and
`istio-ingressgateway`. Together, these components act as the control-plane and
data-plane for ingress traffic. These components are a simplified deployment of
Istio cluster Ingress functionality. A number of Kubernetes custom resource
definitions (CRDs) are also created to support the Ingress functionality.
> **Note**: This does not deploy the service mesh capabilities of Istio as its
> function in UCP is for Ingress.
As Cluster Ingress is not built into UCP in this release, a Cluster Admin will
need to manually download and apply the following Kubernetes Manifest
[file](https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.yaml).
1) Download the Kubernetes manifest yaml
```bash
$ wget https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.yaml
```
2) Source a [UCP Client Bundle](/ee/ucp/user-access/cli/)
3) Deploy the Kubernetes manifest file
```bash
$ kubectl apply -f istio-ingress-1.1.2.yaml
```
4) Check that the installation has completed successfully. It may take a minute
or two for all pods to become ready.
```bash
$ kubectl get pods -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
istio-ingressgateway-747bc6b4cb-fkt6k 2/2 Running 0 44s 172.0.1.23 manager-02 <none> <none>
istio-ingressgateway-747bc6b4cb-gr8f7 2/2 Running 0 61s 172.0.1.25 manager-02 <none> <none>
istio-pilot-7b74c7568b-ntbjd 1/1 Running 0 61s 172.0.1.22 manager-02 <none> <none>
istio-pilot-7b74c7568b-p5skc 1/1 Running 0 44s 172.0.1.24 manager-02 <none> <none>
$ kubectl get services -n istio-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-ingressgateway NodePort 10.96.32.197 <none> 80:33000/TCP,443:33001/TCP,31400:33002/TCP,15030:34420/TCP,15443:34368/TCP,15020:34300/TCP 86s app=istio-ingressgateway,istio=ingressgateway,release=istio
istio-pilot ClusterIP 10.96.199.152 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 85s istio=pilot
```
5) Test the Ingress Deployment
To test that the Envoy proxy is working correctly in the Istio Gateway pods,
a status port is configured on internal port 15020. From the output above, we
can see that port 15020 is exposed as a Kubernetes NodePort; in this example
the NodePort is 34300, but it could be different in each environment.
The Envoy proxy exposes a health endpoint at `/healthz/ready`.
```bash
# Node Port
$ PORT=$(kubectl get service -n istio-system istio-ingressgateway --output jsonpath='{.spec.ports[?(@.name=="status-port")].nodePort}')
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
# Use Curl to check the status port is available
$ curl -vvv http://$IPADDR:$PORT/healthz/ready
* Trying 51.141.127.241...
* TCP_NODELAY set
* Connected to 51.141.127.241 (51.141.127.241) port 34300 (#0)
> GET /healthz/ready HTTP/1.1
> Host: 51.141.127.241:34300
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 19 Jun 2019 13:31:53 GMT
< Content-Length: 0
<
* Connection #0 to host 51.141.127.241 left intact
```
If the output contains `HTTP/1.1 200 OK`, Envoy is running correctly and ready
to serve applications.
## Where to go next
- [Deploy a Sample Application](./ingress/)

View File

@ -0,0 +1,102 @@
---
title: Deploy a Sample Application with Sticky Sessions (Experimental)
description: Learn how to use cookies with Ingress host and path routing.
keywords: ucp, cluster, ingress, kubernetes
---
{% include experimental-feature.md %}
# Deploy a Sample Application with Sticky Sessions
With persistent sessions, the Ingress controller can use a predetermined header
or dynamically generate an HTTP cookie for a client session, so that a client's
requests are always sent to the same backend.

> **Note**: This guide assumes you have followed the [Deploy Sample Application](./ingress/)
> tutorial and that its artifacts are still running on the cluster. If they are
> not, go back and complete that tutorial first.

This is specified within the Istio object `DestinationRule` via a
`trafficPolicy` for a given host. In the following example configuration,
`consistentHash` is chosen as the load balancing method and a cookie named
`session` is used to determine the consistent hash. If incoming requests do not
have the `session` cookie set, the Ingress proxy sets it for use in future
requests.
A Kubernetes manifest file with the updated `DestinationRule` can be found [here](./yaml/ingress-sticky.yaml); the relevant section is shown below.
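The excerpt below shows the v1 subset from that manifest with the
consistent-hash cookie policy applied:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      loadBalancer:
        consistentHash:
          httpCookie:
            name: session    # cookie used to compute the consistent hash
            ttl: 60s         # the proxy sets this cookie with a 60 second TTL
```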
1) Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a
cluster with Cluster Ingress installed.
2) Download the sample Kubernetes manifest file
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-sticky.yaml
```
3) Deploy the Kubernetes manifest file with the new `DestinationRule`, which
has the `consistentHash` load balancer policy set.
```bash
$ kubectl apply -f ingress-sticky.yaml
```
4) Curl the service to view how requests are load balanced without using
cookies. In this example, requests bounce between the different v1 replicas.
```bash
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
# Node Port
$ PORT=$(kubectl get service demo-service --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
$ for i in {1..5}; do curl -H "Host: demo.example.com" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"b40a0294-2629-413b-b876-76b59d72189b"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"721fe4ba-a785-484a-bba0-627ee6e47188"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"77ed801b-81aa-4c02-8cc9-7e3bd3244807"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"36d8aaed-fcdf-4489-a85e-76ea96949d6c"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"4693b6ad-286b-4470-9eea-c8656f6801ae"}
```
5) Curl again and inspect the headers returned from the proxy.
```bash
$ curl -i -H "Host: demo.example.com" http://$IPADDR:$PORT/ping
HTTP/1.1 200 OK
set-cookie: session=1555389679134464956; Path=/; Expires=Wed, 17 Apr 2019 04:41:19 GMT; Max-Age=86400
date: Tue, 16 Apr 2019 04:41:18 GMT
content-length: 131
content-type: text/plain; charset=utf-8
x-envoy-upstream-service-time: 0
set-cookie: session="d7227d32eeb0524b"; Max-Age=60; HttpOnly
server: istio-envoy
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"011d5fdf-2285-4ce7-8644-c2df6481c584"}
```
The Ingress proxy set a cookie named `session` with a 60 second TTL on this
HTTP response. A browser or other client application can send that value with
future requests.
6) Curl the service again, using curl's flags to save and send cookies across
requests. The output shows that the session cookie is set and persisted across
requests, and that for a given session cookie, the responses come from the
same backend.
```bash
$ for i in {1..5}; do curl -c cookie.txt -b cookie.txt -H "Host: demo.example.com" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"72b35296-d6bd-462a-9e62-0bd0249923d7"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"c8872f6c-f77c-4411-aed2-d7aa6d1d92e9"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"0e7b8725-c550-4923-acea-db94df1eb0e4"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"9996fe77-8260-4225-89df-0eaf7581e961"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"d35c380e-31d6-44ce-a5d0-f9f6179715ab"}
```
When the HTTP client uses the cookie set by the Ingress proxy, all requests
are sent to the same backend, demo-v1-7797b7c7c8-kw6gp.
## Where to go next
- [Cluster Ingress Overview](./)

View File

@ -0,0 +1,106 @@
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  labels:
    app: demo
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
  selector:
    app: demo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-v1
  labels:
    app: demo
    version: v1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
        version: v1
    spec:
      containers:
      - name: webserver
        image: ehazlett/docker-demo
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
        - name: METADATA
          value: "production"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-v2
  labels:
    app: demo
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        version: v2
    spec:
      containers:
      - name: webserver
        image: ehazlett/docker-demo
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v2"
        - name: METADATA
          value: "staging"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-v3
  labels:
    app: demo
    version: v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        version: v3
    spec:
      containers:
      - name: webserver
        image: ehazlett/docker-demo
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v3"
        - name: METADATA
          value: "dev"

View File

@ -0,0 +1,47 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - cluster-gateway
  http:
  - match:
    route:
    - destination:
        host: demo-service
        subset: v1
        port:
          number: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1

View File

@ -0,0 +1,77 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - cluster-gateway
  http:
  - match:
    - headers:
        stage:
          exact: dev
    route:
    - destination:
        host: demo-service
        subset: v3
        port:
          number: 8080
  - match:
    route:
    - destination:
        host: demo-service
        subset: v1
        port:
          number: 8080
      weight: 100
    - destination:
        host: demo-service
        subset: v2
        port:
          number: 8080
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      loadBalancer:
        consistentHash:
          httpCookie:
            name: session
            ttl: 60s
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

View File

@ -0,0 +1,72 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - cluster-gateway
  http:
  - match:
    - headers:
        stage:
          exact: dev
    route:
    - destination:
        host: demo-service
        subset: v3
        port:
          number: 8080
  - match:
    route:
    - destination:
        host: demo-service
        subset: v1
        port:
          number: 8080
      weight: 80
    - destination:
        host: demo-service
        subset: v2
        port:
          number: 8080
      weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

View File

@ -1,25 +0,0 @@
---
title: Layer 7 routing
description: Learn how to route traffic to your Kubernetes workloads in Docker Enterprise Edition.
keywords: UCP, Kubernetes, ingress, routing
redirect_from:
- /ee/ucp/kubernetes/deploy-ingress-controller/
---
When you deploy a Kubernetes application, you may want to make it accessible
to users using hostnames instead of IP addresses.
Kubernetes provides **ingress controllers** for this. This functionality is
specific to Kubernetes. If you're trying to route traffic to Swarm-based
applications, check [layer 7 routing with Swarm](../interlock/index.md).
Use an ingress controller when you want to:
* Give your Kubernetes app an externally-reachable URL.
* Load-balance traffic to your app.
A popular ingress controller within the Kubernetes community is the [NGINX controller](https://github.com/kubernetes/ingress-nginx), which can be used in Docker Enterprise Edition but is not directly supported by Docker, Inc.
Learn about [ingress in Kubernetes](https://v1-11.docs.kubernetes.io/docs/concepts/services-networking/ingress/).
For an example of a YAML NGINX kube ingress deployment, refer to <https://success.docker.com/article/how-to-configure-a-default-tls-certificate-for-the-kubernetes-nginx-ingress-controller>.