Fix formatting, headings, typos

Traci Morrison 2019-10-08 15:51:31 -04:00
parent a6b751346f
commit 63fd394988
6 changed files with 121 additions and 155 deletions

View File

@@ -1559,7 +1559,7 @@ manuals:
path: /ee/ucp/kubernetes/cluster-ingress/
- title: Install Ingress
path: /ee/ucp/kubernetes/cluster-ingress/install/
- title: Deploy a Sample Application
path: /ee/ucp/kubernetes/cluster-ingress/ingress/
- title: Deploy a Canary Deployment
path: /ee/ucp/kubernetes/cluster-ingress/canary/

View File

@@ -1,18 +1,20 @@
---
title: Deploy a Sample Application with a Canary release (Experimental)
description: Stage a canary release using weight-based load balancing between multiple back-end applications.
keywords: ucp, cluster, ingress, kubernetes
---
{% include experimental-feature.md %}
# Deploy a Sample Application with a Canary release
## Overview
This example stages a canary release using weight-based load balancing between
multiple back-end applications.
> Note
>
> This guide assumes the [Deploy Sample Application](./ingress/)
> tutorial was followed, with the artifacts still running on the cluster. If
> they are not, please go back and follow this guide.
The following schema is used for this tutorial:
@@ -20,21 +22,16 @@ The following schema is used for this tutorial:
- 20% of client traffic is sent to the staging v2 service.
- All test traffic using the header `stage=dev` is sent to the v3 service.
A new Kubernetes manifest file with updated ingress rules can be found [here](./yaml/ingress-weighted.yaml).
1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-weighted.yaml
```
3. Deploy the Kubernetes manifest file.
```bash
$ kubectl apply -f ingress-weighted.yaml
$ kubectl describe vs
@@ -64,14 +61,12 @@ A new Kubernetes manifest file with updated ingress rules can be found
Number: 8080
Subset: v2
Weight: 20
```
This virtual service performs the following actions:
- Receives all traffic with host=demo.example.com.
- If an exact match for HTTP header `stage=dev` is found, traffic is routed to v3.
- All other traffic is routed to v1 and v2 in an 80:20 ratio.
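Expressed as a VirtualService manifest, the rules above would look roughly like the following. This is a sketch reconstructed from the `kubectl describe` output in this tutorial; the gateway name is an assumption, and the service host and subsets mirror the demo application used throughout.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
  - demo.example.com
  gateways:
  - demo-gateway          # assumed gateway name
  http:
  # Test traffic with an exact header match goes straight to v3
  - match:
    - headers:
        stage:
          exact: dev
    route:
    - destination:
        host: demo-service
        port:
          number: 8080
        subset: v3
  # Everything else is split 80:20 between v1 and v2
  - route:
    - destination:
        host: demo-service
        port:
          number: 8080
        subset: v1
      weight: 80
    - destination:
        host: demo-service
        port:
          number: 8080
        subset: v2
      weight: 20
```

The weights must sum to 100; traffic that matches no explicit rule falls through to the weighted route.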
Now we can send traffic to the application to view the applied load balancing
algorithms.
@@ -92,10 +87,10 @@ $ for i in {1..5}; do curl -H "Host: demo.example.com" http://$IPADDR:$PORT/ping
```
The split between v1 and v2 corresponds to the specified criteria. Within the
v1 service, requests are load-balanced across the three back-end replicas. v3 does
not appear in the requests.
To send traffic to the third service, add the HTTP header `stage=dev`.
```bash
for i in {1..5}; do curl -H "Host: demo.example.com" -H "Stage: dev" http://$IPADDR:$PORT/ping; done
@@ -106,8 +101,7 @@ for i in {1..5}; do curl -H "Host: demo.example.com" -H "Stage: dev" http://$IPA
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev","request_id":"bae52f09-0510-42d9-aec0-ca6bbbaae168"}
```
In this case, 100% of the traffic with the `stage=dev` header is sent to the v3 service.
## Where to go next

View File

@@ -10,7 +10,7 @@ redirect_from:
## Cluster Ingress capabilities
Cluster Ingress provides L7 services to traffic entering a Docker Enterprise cluster for a variety of different use cases that help provide application resilience, security, and observability. Ingress provides dynamic control of L7 routing in a highly available architecture that is also highly performant.
UCP's Ingress for Kubernetes is based on the [Istio](https://istio.io/) control-plane and is a simplified deployment focused on just providing ingress services with minimal complexity. This includes features such as:
@@ -26,9 +26,9 @@ For a detailed look at Istio Ingress architecture, refer to the [Istio Ingress d
To get started with UCP Ingress, the following help topics are provided:
- [Install Cluster Ingress on a UCP Cluster](./install/)
- [Deploy a Sample Application with Ingress Rules](./ingress)
- [Deploy a Sample Application with a Canary Release](./canary/)
- [Deploy a Sample Application with Sticky Sessions](./sticky/)
## Where to go next

View File

@@ -6,7 +6,7 @@ keywords: ucp, cluster, ingress, kubernetes
{% include experimental-feature.md %}
# Deploy a Sample Application with Ingress
## Overview
Cluster Ingress is capable of routing based on many HTTP attributes, but most
commonly the HTTP host and path. The following example shows the basics of
@@ -21,32 +21,28 @@ deployed. The docker-demo application is able to display the container hostname,
environment variables, or labels in its HTTP responses, and is therefore a good
sample application for an Ingress controller.
The three versions of the sample application are:
- v1: a production version with three replicas running.
- v2: a staging version with a single replica running.
- v3: a development version also with a single replica.
> Note
>
> An example Kubernetes manifest file containing all three deployments can be found [here](./yaml/demo-app.yaml).
1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.
```bash
$ wget https://raw.githubusercontent.com/docker/docker.github.io/master/ee/ucp/kubernetes/cluster-ingress/yaml/demo-app.yaml
```
3. Deploy the sample Kubernetes manifest file.
```bash
$ kubectl apply -f demo-app.yaml
```
4. Verify that the sample applications are running.
```bash
$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
demo-v1-7797b7c7c8-5vts2 1/1 Running 0 3h
@@ -59,11 +55,11 @@ An example Kubernetes manifest file container all 3 deployments can be found [he
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
demo-service NodePort 10.96.97.215 <none> 8080:33383/TCP 3h app=demo
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d <none>
```
This first part of the tutorial deployed the pods and a Kubernetes service.
Using Kubernetes NodePorts, these pods can be accessed outside of the Cluster
Ingress. This illustrates the standard L4 load balancing that a Kubernetes
service applies.
```bash
@@ -74,8 +70,6 @@ $ IPADDR=51.141.127.241
$ PORT=$(kubectl get service demo-service --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
# Send traffic directly to the NodePort to bypass L7 Ingress
$ for i in {1..5}; do curl http://$IPADDR:$PORT/ping; done
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev"}
{"instance":"demo-v3-d88dddb74-9k7qg","version":"v3","metadata":"dev"}
```
@@ -86,31 +80,28 @@ $ for i in {1..5}; do curl http://$IPADDR:$PORT/ping; done
The L4 load balancing is applied to the number of replicas that exist for each
service. Different scenarios require more complex load-balancing logic. Make
sure to decouple the number of back-end instances from the load-balancing
algorithms used by the Ingress.
## Apply Ingress Rules to the Sample Application
To leverage the Cluster Ingress for the sample application, there are three custom resource types that need to be deployed:

- Gateway
- Virtual Service
- DestinationRule

> Note
>
> For the sample application, an example manifest file with all three objects defined can be found [here](./yaml/ingress-simple.yaml).
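As a sketch of how the Gateway and DestinationRule halves of this trio fit together, consider the following. The object names, port, and the `version: v1` pod label are illustrative assumptions; `istio: ingressgateway` is the conventional selector label for the Istio ingress gateway.

```yaml
# Gateway: binds the host to the Istio ingress gateway proxy
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway          # illustrative name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - demo.example.com
---
# DestinationRule: defines the v1 subset that a virtual service can route to
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-dr               # illustrative name
spec:
  host: demo-service
  subsets:
  - name: v1
    labels:
      version: v1
```

A VirtualService then ties the two together by referencing the gateway and routing matched traffic to a named subset.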
1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.
```bash
$ wget https://raw.githubusercontent.com/docker/docker.github.io/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-simple.yaml
```
3. Deploy the sample Kubernetes manifest file.
```bash
$ kubectl apply -f ingress-simple.yaml
$ kubectl describe virtualservice demo-vs
@@ -128,11 +119,11 @@ For the sample application, an example manifest file with all 3 objects defined
Port:
Number: 8080
Subset: v1
```
This configuration matches all traffic with `demo.example.com` and sends it to
the back-end version=v1 deployment, regardless of the number of replicas in
the back end.
Curl the service again using the port of the Ingress gateway. Because DNS is
not set up, use the `--header` flag from curl to manually set the host header.
@@ -152,14 +143,13 @@ $ for i in {1..5}; do curl --header "Host: demo.example.com" http://$IPADDR:$POR
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"197cbb1d-5381-4e40-bc6f-cccec22eccbc"}
```
To have Server Name Indication (SNI) work with TLS services, use curl's `--resolve` flag.
```bash
$ curl --resolve demo.example.com:$IPADDR:$PORT http://demo.example.com/ping
```
In this instance, the three back-end v1 replicas are load balanced and no
requests are sent to the other versions.
## Where to go next

View File

@@ -6,7 +6,7 @@ keywords: ucp, cluster, ingress, kubernetes
{% include experimental-feature.md %}
# Install Cluster Ingress
## Overview
Cluster Ingress for Kubernetes is currently deployed manually outside of UCP.
Future plans for UCP include managing the full lifecycle of the Ingress
@@ -18,12 +18,12 @@ Kubernetes deployment manifests.
If you are installing Cluster Ingress on a UCP cluster that does not have access
to the Docker Hub, you will need to pre-pull the Ingress container images. If
your cluster has access to the Docker Hub, you can move on to [deploying cluster
ingress](#deploy-cluster-ingress).
Without access to the Docker Hub, you will need to download the container images
on a workstation with access to the internet. Container images are distributed
in a `.tar.gz` and can be downloaded from
[here](https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.tgz).
Once the container images have been downloaded, copy them onto the hosts in
your UCP cluster and side-load them into Docker.
@@ -49,34 +49,32 @@ docker/pilot 1.1.2 58b6e18f3545 6 days ago
This step deploys the Ingress controller components `istio-pilot` and
`istio-ingressgateway`. Together, these components act as the control-plane and
data-plane for ingress traffic. These components are a simplified deployment of
Istio cluster Ingress functionality. Many other custom Kubernetes resource definitions (CRDs) are
also created that aid in the Ingress functionality.
> Note
>
> This does not deploy the service mesh capabilities of Istio, as its
> function in UCP is for Ingress.
> Note
>
> As Cluster Ingress is not built into UCP in this release, a Cluster Admin will
> need to manually download and apply the following Kubernetes Manifest [file](https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.yaml).
1. Download the Kubernetes manifest file.
```bash
$ wget https://s3.amazonaws.com/docker-istio/istio-ingress-1.1.2.yaml
```
2. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/).
3. Deploy the Kubernetes manifest file.
```bash
$ kubectl apply -f istio-ingress-1.1.2.yaml
```
4. Verify that the installation was successful. It may take 1-2 minutes for all pods to become ready.
```bash
$ kubectl get pods -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
istio-ingressgateway-747bc6b4cb-fkt6k 2/2 Running 0 44s 172.0.1.23 manager-02 <none> <none>
istio-ingressgateway-747bc6b4cb-gr8f7 2/2 Running 0 61s 172.0.1.25 manager-02 <none> <none>
@@ -89,16 +87,10 @@ istio-ingressgateway NodePort 10.96.32.197 <none> 80:33000/TCP,44
istio-pilot ClusterIP 10.96.199.152 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 85s istio=pilot
```
Now you can test the Ingress deployment. To test that the Envoy proxy is working correctly in the Istio Gateway pods, there is a status port configured on an internal port 15020. From the above output, we can see that port 15020 is exposed as a Kubernetes NodePort. In the output above, the NodePort is 34300, but this could be different in each
environment.
To check the Envoy proxy's status, there is a health endpoint at `/healthz/ready`.
```bash
# Node Port
@@ -124,8 +116,7 @@ $ curl -vvv http://$IPADDR:$PORT/healthz/ready
* Connection #0 to host 51.141.127.241 left intact
```
If the output is `HTTP/1.1 200 OK`, Envoy is running correctly and is ready to serve applications.
## Where to go next

View File

@@ -6,46 +6,41 @@ keywords: ucp, cluster, ingress, kubernetes
{% include experimental-feature.md %}
# Deploy a Sample Application with Sticky Sessions
## Overview
With persistent sessions, the Ingress controller can use a predetermined header
or dynamically generate an HTTP cookie for a client session to use, so
that a client's requests are sent to the same back end.
> Note
>
> This guide assumes the [Deploy Sample Application](./ingress/)
> tutorial was followed, with the artifacts still running on the cluster. If
> they are not, please go back and follow this guide.
This is specified within the Istio `DestinationRule` object via a
`TrafficPolicy` for a given host. In the following example configuration,
consistentHash is chosen as the load balancing method and a cookie named
`session` is used to determine the consistent hash. If incoming requests do not
have the `session` cookie set, the Ingress proxy sets it for use in future
requests.
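A minimal sketch of such a `DestinationRule` traffic policy follows. The object name and TTL value are illustrative assumptions; the host matches the `demo-service` used throughout this tutorial.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-dr          # illustrative name
spec:
  host: demo-service
  trafficPolicy:
    loadBalancer:
      consistentHash:
        # Hash on an HTTP cookie named "session"; if the client does not
        # send one, the Ingress proxy generates it with the given TTL.
        httpCookie:
          name: session
          ttl: 60s
```

All requests carrying the same `session` cookie hash to the same back-end pod, which is what produces the sticky behavior shown below.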
> Note
>
> A Kubernetes manifest file with an updated DestinationRule can be found [here](./yaml/ingress-sticky.yaml).
1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.
```bash
$ wget https://github.com/docker/docker.github.io/tree/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-sticky.yaml
```
3. Deploy the Kubernetes manifest file with the new DestinationRule. This file includes the consistentHash loadBalancer policy.
```bash
$ kubectl apply -f ingress-sticky.yaml
```
4. Curl the service to view how requests are load balanced without using cookies. In this example, requests are bounced between different v1 services.
```bash
# Public IP Address of a Worker or Manager VM in the Cluster
$ IPADDR=51.141.127.241
@@ -58,11 +53,11 @@ A Kubernetes manifest file with an updated DestinationRule can be found [here](.
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"77ed801b-81aa-4c02-8cc9-7e3bd3244807"}
{"instance":"demo-v1-7797b7c7c8-gfwzj","version":"v1","metadata":"production","request_id":"36d8aaed-fcdf-4489-a85e-76ea96949d6c"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"4693b6ad-286b-4470-9eea-c8656f6801ae"}
```
Now curl again and inspect the headers returned from the proxy.
```bash
$ curl -i -H "Host: demo.example.com" http://$IPADDR:$PORT/ping
HTTP/1.1 200 OK
set-cookie: session=1555389679134464956; Path=/; Expires=Wed, 17 Apr 2019 04:41:19 GMT; Max-Age=86400
@@ -74,28 +69,24 @@ A Kubernetes manifest file with an updated DestinationRule can be found [here](.
server: istio-envoy
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"011d5fdf-2285-4ce7-8644-c2df6481c584"}
```
The Ingress proxy sets a 60 second TTL cookie named `session` on this HTTP
request. A browser or other client application can use that value in future
requests.
Now curl the service again using the flags that save cookies persistently across sessions. The header information shows the session is being set,
persisted across requests, and that for a given session header, the responses are coming from the same back end.
```bash
$ for i in {1..5}; do curl -c cookie.txt -b cookie.txt -H "Host: demo.example.com" http://$IPADDR:$PORT/ping; done
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"72b35296-d6bd-462a-9e62-0bd0249923d7"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"c8872f6c-f77c-4411-aed2-d7aa6d1d92e9"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"0e7b8725-c550-4923-acea-db94df1eb0e4"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"9996fe77-8260-4225-89df-0eaf7581e961"}
{"instance":"demo-v1-7797b7c7c8-kw6gp","version":"v1","metadata":"production","request_id":"d35c380e-31d6-44ce-a5d0-f9f6179715ab"}
```
When the HTTP client uses the cookie that is set by the Ingress proxy, all requests are sent to the same back end, `demo-v1-7797b7c7c8-kw6gp`.
## Where to go next