[WIP] Update traffic routing tasks to use v1alpha3 config (#1067)

* use v1alpha3 route rules

* circuit breaking task updated to v1alpha3

* convert mirroring task to v1alpha3

* convert egress task to v1alpha3

* Egress task corrections and clarifications

* use simpler rule names

* move new tasks to separate folder (keep old versions around for now)

* update example outputs

* egress tcp task

* fix broken refs

* more broken refs

* improve wording

* add missing include home.html

* remove ingress task - will create a replacement in followup PR
Frank Budinsky 2018-03-23 09:11:07 -04:00 committed by GitHub
parent 8d8a44c665
commit b12506c88d
9 changed files with 1557 additions and 0 deletions

View File

@@ -0,0 +1,252 @@
---
title: Circuit Breaking
overview: This task demonstrates the circuit-breaking capability for resilient applications
order: 50
layout: docs
type: markdown
---
{% include home.html %}
This task demonstrates the circuit-breaking capability for resilient applications. Circuit breaking allows developers to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities. This task will show how to configure circuit breaking for connections, requests, and outlier detection.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) sample,
which will be used as the backend service for our task:
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/httpbin.yaml)
```
## Circuit breaker
Let's set up a scenario to demonstrate the circuit-breaking capabilities of Istio. We should have the `httpbin` service running from the previous section.
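Before creating any rules, you can sanity-check that the pod is up (a quick check, assuming the sample's `app=httpbin` label):
```bash
kubectl get pods -l app=httpbin
```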
1. Create a [destination rule]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#DestinationRule) to specify our circuit breaking settings when calling the `httpbin` service:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  name: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      http:
        consecutiveErrors: 1
        interval: 1s
        baseEjectionTime: 3m
        maxEjectionPercent: 100
EOF
```
2. Verify our destination rule was created correctly:
```bash
istioctl get destinationrule httpbin -o yaml
```
```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  ...
spec:
  name: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
      tcp:
        maxConnections: 1
    outlierDetection:
      http:
        baseEjectionTime: 180.000s
        consecutiveErrors: 1
        interval: 1.000s
        maxEjectionPercent: 100
```
### Setting up our client
Now that we've set up rules for calling the `httpbin` service, let's create a client we can use to send traffic to our service and see whether we can trip the circuit breaking policies. We're going to use a simple load-testing client called [fortio](https://github.com/istio/fortio). With this client we can control the number of connections, concurrency, and delays of outgoing HTTP calls. In this step, we'll set up a client that is injected with the Istio sidecar proxy so our network interactions are governed by Istio:
```bash
kubectl apply -f <(istioctl kube-inject --debug -f samples/httpbin/sample-client/fortio-deploy.yaml)
```
Now we should be able to log into that client pod and use the simple fortio tool to call `httpbin`. We'll pass in `-curl` to indicate we just want to make one call:
```bash
FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
```
```
HTTP/1.1 200 OK
server: envoy
date: Tue, 16 Jan 2018 23:47:00 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 445
x-envoy-upstream-service-time: 36
{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-0.6.2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "824fbd828d809bf4",
    "X-B3-Traceid": "824fbd828d809bf4",
    "X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
    "X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
```
You can see the request succeeded! Now, let's break something.
### Tripping the circuit breaker
In the circuit-breaking settings, we specified `maxConnections: 1` and `http1MaxPendingRequests: 1`. This means that if we exceed one connection and one pending request concurrently, the istio-proxy should open the circuit for further requests/connections. Let's try it with two concurrent connections (`-c 2`) and 20 requests (`-n 20`):
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
```
```
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
23:51:10 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 106.474079ms : 20 calls. qps=187.84
Aggregated Function Time : count 20 avg 0.010215375 +/- 0.003604 min 0.005172024 max 0.019434859 sum 0.204307492
# range, mid point, percentile, count
>= 0.00517202 <= 0.006 , 0.00558601 , 5.00, 1
> 0.006 <= 0.007 , 0.0065 , 20.00, 3
> 0.007 <= 0.008 , 0.0075 , 30.00, 2
> 0.008 <= 0.009 , 0.0085 , 40.00, 2
> 0.009 <= 0.01 , 0.0095 , 60.00, 4
> 0.01 <= 0.011 , 0.0105 , 70.00, 2
> 0.011 <= 0.012 , 0.0115 , 75.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 3
> 0.016 <= 0.018 , 0.017 , 95.00, 1
> 0.018 <= 0.0194349 , 0.0187174 , 100.00, 1
# target 50% 0.0095
# target 75% 0.012
# target 99% 0.0191479
# target 99.9% 0.0194062
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
Response Header Sizes : count 20 avg 218.85 +/- 50.21 min 0 max 231 sum 4377
Response Body/Total Sizes : count 20 avg 652.45 +/- 99.9 min 217 max 676 sum 13049
All done 20 calls (plus 0 warmup) 10.215 ms avg, 187.8 qps
```
We see almost all requests made it through!
```
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
```
The istio-proxy does allow for some leeway. Let's bring the number of concurrent connections up to 3 and the number of requests to 30:
```bash
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
```
```
```
Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 71.05365ms : 30 calls. qps=422.22
Aggregated Function Time : count 30 avg 0.0053360199 +/- 0.004219 min 0.000487853 max 0.018906468 sum 0.160080597
# range, mid point, percentile, count
>= 0.000487853 <= 0.001 , 0.000743926 , 10.00, 3
> 0.001 <= 0.002 , 0.0015 , 30.00, 6
> 0.002 <= 0.003 , 0.0025 , 33.33, 1
> 0.003 <= 0.004 , 0.0035 , 40.00, 2
> 0.004 <= 0.005 , 0.0045 , 46.67, 2
> 0.005 <= 0.006 , 0.0055 , 60.00, 4
> 0.006 <= 0.007 , 0.0065 , 73.33, 4
> 0.007 <= 0.008 , 0.0075 , 80.00, 2
> 0.008 <= 0.009 , 0.0085 , 86.67, 2
> 0.009 <= 0.01 , 0.0095 , 93.33, 2
> 0.014 <= 0.016 , 0.015 , 96.67, 1
> 0.018 <= 0.0189065 , 0.0184532 , 100.00, 1
# target 50% 0.00525
# target 75% 0.00725
# target 99% 0.0186345
# target 99.9% 0.0188793
Code 200 : 19 (63.3 %)
Code 503 : 11 (36.7 %)
Response Header Sizes : count 30 avg 145.73333 +/- 110.9 min 0 max 231 sum 4372
Response Body/Total Sizes : count 30 avg 507.13333 +/- 220.8 min 217 max 676 sum 15214
All done 30 calls (plus 0 warmup) 5.336 ms avg, 422.2 qps
```
Now we start to see the circuit breaking behavior we expect.
```
Code 200 : 19 (63.3 %)
Code 503 : 11 (36.7 %)
```
Only 63.3% of the requests made it through and the rest were trapped by circuit breaking. We can query the istio-proxy stats to see more:
```bash
kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
```
```
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_active: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_failure_eject: 0
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_overflow: 12
cluster.out.httpbin.springistio.svc.cluster.local|http|version=v1.upstream_rq_pending_total: 39
```
We see `12` for the `upstream_rq_pending_overflow` value, which means `12` calls so far have been flagged for circuit breaking.
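The pending-request counters above correspond to the `http1MaxPendingRequests` limit. To see whether the connection limit was hit as well, a similar query against the connection-level counters should work (a sketch; the `upstream_cx` stat prefix is an assumption about the Envoy version in use):
```bash
kubectl exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep upstream_cx
```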
## Cleaning up
1. Remove the rules.
```bash
istioctl delete destinationrule httpbin
```
1. Shut down the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) service and client.
```bash
kubectl delete deploy httpbin fortio-deploy
kubectl delete svc httpbin
```
## What's next
Check out the [destination rule]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#DestinationRule) reference section for more circuit breaker settings.

View File

@@ -0,0 +1,111 @@
---
title: Control Egress TCP Traffic
overview: Describes how to configure Istio to route TCP traffic from services in the mesh to external services.
order: 41
layout: docs
type: markdown
---
{% include home.html %}
The [Control Egress Traffic]({{home}}/docs/tasks/traffic-management-v1alpha3/egress.html) task demonstrated how external (outside the Kubernetes cluster) HTTP and HTTPS services can be accessed from applications inside the mesh. A quick reminder: by default, Istio-enabled applications are unable to access URLs outside the cluster. To enable such access, an [external service]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#ExternalService) must be defined, or, alternatively, [direct access to external services]({{home}}/docs/tasks/traffic-management-v1alpha3/egress.html#calling-external-services-directly) must be configured.
This task describes how to configure Istio to expose external TCP services to applications inside the Istio service mesh.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) sample application, which will be used as a test source for external calls:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
```
**Note**: any pod that you can execute `curl` from is good enough.
## Using Istio external services for external TCP traffic
In this task we access `wikipedia.org` over HTTPS, with TLS originated by the application. This task demonstrates the use case where an application cannot use HTTP with TLS origination by the sidecar proxy. Using HTTP with TLS origination by the sidecar proxy is described in the [Control Egress Traffic]({{home}}/docs/tasks/traffic-management-v1alpha3/egress.html) task. In that task, `https://google.com` was accessed by issuing HTTP requests to `http://www.google.com:443`.
The HTTPS traffic originated by the application will be treated by Istio as _opaque_ TCP. To enable such traffic, we define a TCP external service on port 443. In TCP external services, as opposed to HTTP-based external services, the destinations are specified by IPs or by blocks of IPs in [CIDR notation](https://tools.ietf.org/html/rfc2317).
Let's assume for the sake of this example that we want to access `wikipedia.org` by its domain name. This means that we have to specify all the IPs of `wikipedia.org` in our TCP external service. Fortunately, the IPs of `wikipedia.org` are published [here](https://www.mediawiki.org/wiki/Wikipedia_Zero/IP_Addresses). It is a list of IP blocks in [CIDR notation](https://tools.ietf.org/html/rfc2317): `91.198.174.192/27`, `103.102.166.224/27`, and more.
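Note that published IP blocks can change over time, so it's worth double-checking the addresses currently in use before creating the rule. One way to do that, assuming `dig` is available on your workstation:
```bash
dig +short wikipedia.org
```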
## Creating an external service
Let's create an external service to enable TCP access to `wikipedia.org`:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: ExternalService
metadata:
  name: wikipedia-ext
spec:
  hosts:
  - 91.198.174.192/27
  - 103.102.166.224/27
  - 198.35.26.96/27
  - 208.80.153.224/27
  - 208.80.154.224/27
  ports:
  - number: 443
    name: tcp
    protocol: TCP
  discovery: NONE
EOF
```
This command instructs the Istio proxy to forward requests arriving on port 443 for any of the `wikipedia.org` IP addresses to that same (original destination) IP address.
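You can confirm the external service was created, using the same `istioctl get` pattern as elsewhere in these tasks (a sketch, assuming your `istioctl` version supports getting this resource type):
```bash
istioctl get externalservice wikipedia-ext -o yaml
```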
## Access wikipedia.org by HTTPS
1. `kubectl exec` into the pod to be used as the test source. If you are using the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) application, run the following command:
```bash
kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep bash
```
2. Make a request and verify that we can access https://www.wikipedia.org successfully:
```bash
curl -o /dev/null -s -w "%{http_code}\n" https://www.wikipedia.org
```
```bash
200
```
We should see `200` printed as the output, which is the HTTP _OK_ status code.
3. Now let's fetch the current number of articles available on Wikipedia in the English language:
```bash
curl -s https://en.wikipedia.org/wiki/Main_Page | grep articlecount | grep 'Special:Statistics'
```
The output should be similar to:
```bash
<div id="articlecount" style="font-size:85%;"><a href="/wiki/Special:Statistics" title="Special:Statistics">5,563,121</a> articles in <a href="/wiki/English_language" title="English language">English</a></div>
```
This means there were 5,563,121 articles in the English Wikipedia when this task was written.
## Cleanup
1. Remove the external service we created.
```bash
istioctl delete externalservice wikipedia-ext
```
1. Shut down the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) application.
```bash
kubectl delete -f samples/sleep/sleep.yaml
```
## What's next
* The [External Services]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#ExternalService) reference.
* The [Control Egress Traffic]({{home}}/docs/tasks/traffic-management-v1alpha3/egress.html) task, for HTTP and HTTPS.

View File

@@ -0,0 +1,297 @@
---
title: Control Egress Traffic
overview: Describes how to configure Istio to route traffic from services in the mesh to external services.
order: 40
layout: docs
type: markdown
---
{% include home.html %}
By default, Istio-enabled services are unable to access URLs outside of the cluster because
iptables is used in the pod to transparently redirect all outbound traffic to the sidecar proxy,
which only handles intra-cluster destinations.
This task describes how to configure Istio to expose external services to Istio-enabled clients.
You'll learn how to enable access to external services by defining `ExternalService` configurations,
or alternatively, to simply bypass the Istio proxy for a specific range of IPs.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) sample,
which will be used as a test source for external calls.
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
```
Note that any pod that you can `exec` and `curl` from would do.
## Configuring Istio external services
Using Istio `ExternalService` configurations, you can access any publicly accessible service
from within your Istio cluster. In this task we will use
[httpbin.org](http://httpbin.org) and [www.google.com](http://www.google.com) as examples.
### Configuring the external services
1. Create an `ExternalService` to allow access to an external HTTP service:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: ExternalService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: http
EOF
```
2. Create an `ExternalService` to allow access to an external HTTPS service:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: ExternalService
metadata:
  name: google-ext
spec:
  hosts:
  - www.google.com
  ports:
  - number: 443
    name: https
    protocol: http
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: google-ext
spec:
  name: www.google.com
  trafficPolicy:
    tls:
      mode: SIMPLE # initiates HTTPS when talking to www.google.com
EOF
```
Notice that we also create a corresponding `DestinationRule` to
initiate TLS for connections to the HTTPS service.
Callers must access this service using HTTP on port 443 and Istio will upgrade
the connection to HTTPS.
### Make requests to the external services
1. Exec into the pod being used as the test source. For example,
if you are using the sleep service, run the following commands:
```bash
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep bash
```
2. Make a request to the external HTTP service:
```bash
curl http://httpbin.org/headers
```
3. Make a request to the external HTTPS service.
External services of type HTTPS must be accessed over HTTP with the port specified in the request:
```bash
curl http://www.google.com:443
```
### Setting route rules on an external service
Similar to requests between services inside the mesh, Istio
[routing rules]({{home}}/docs/concepts/traffic-management/rules-configuration.html)
can also be set for external services that are accessed using `ExternalService` configurations.
To illustrate we will use [istioctl]({{home}}/docs/reference/commands/istioctl.html)
to set a timeout rule on calls to the httpbin.org service.
1. From inside the pod being used as the test source, invoke the `/delay` endpoint of the httpbin.org external service:
```bash
kubectl exec -it $SOURCE_POD -c sleep bash
time curl -o /dev/null -s -w "%{http_code}\n" http://httpbin.org/delay/5
```
```bash
200
real 0m5.024s
user 0m0.003s
sys 0m0.003s
```
The request should return 200 (OK) in approximately 5 seconds.
1. Exit the source pod and use `istioctl` to set a 3s timeout on calls to the httpbin.org external service:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s
EOF
```
1. Wait a few seconds, then issue the _curl_ request again:
```bash
kubectl exec -it $SOURCE_POD -c sleep bash
time curl -o /dev/null -s -w "%{http_code}\n" http://httpbin.org/delay/5
```
```bash
504
real 0m3.149s
user 0m0.004s
sys 0m0.004s
```
This time a 504 (Gateway Timeout) appears after 3 seconds.
Although httpbin.org was waiting 5 seconds, Istio cut off the request at 3 seconds.
## Calling external services directly
The Istio `ExternalService` currently only supports HTTP/HTTPS requests.
If you want to access services with other protocols (e.g., mongodb://host/database),
or if you want to completely bypass Istio for a specific IP range,
you will need to configure the source service's Envoy sidecar to prevent it from
[intercepting]({{home}}/docs/concepts/traffic-management/request-routing.html#communication-between-services)
the external requests. This can be done using the `--includeIPRanges` option of
[istioctl kube-inject]({{home}}/docs/reference/commands/istioctl.html#istioctl-kube-inject)
when starting the service.
The simplest way to use the `--includeIPRanges` option is to pass it the IP range(s)
used for internal cluster services, thereby excluding external IPs from being redirected
to the sidecar proxy.
The values used for internal IP range(s), however, depend on where your cluster is running.
For example, with Minikube the range is 10.0.0.1/24, so you would start the sleep service like this:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.0.0.1/24)
```
On IBM Cloud Private:
1. Get the `service_cluster_ip_range` from the IBM Cloud Private configuration file under `cluster/config.yaml`:
```bash
cat cluster/config.yaml | grep service_cluster_ip_range
```
Sample output:
```
service_cluster_ip_range: 10.0.0.1/24
```
1. Pass the `service_cluster_ip_range` to `istioctl kube-inject` via `--includeIPRanges` to limit Istio's traffic interception to the service cluster IP range.
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.0.0.1/24)
```
On IBM Cloud Container Service, use:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=172.30.0.0/16,172.20.0.0/16,10.10.10.0/24)
```
On Google Container Engine (GKE) the ranges are not fixed, so you will
need to run the `gcloud container clusters describe` command to determine the ranges to use. For example:
```bash
gcloud container clusters describe XXXXXXX --zone=XXXXXX | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
```
```
clusterIpv4Cidr: 10.4.0.0/14
servicesIpv4Cidr: 10.7.240.0/20
```
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.4.0.0/14,10.7.240.0/20)
```
On Azure Container Service (ACS), use:
```bash
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml --includeIPRanges=10.244.0.0/16,10.240.0.0/16)
```
After starting your service this way, the Istio sidecar will only intercept and manage internal requests
within the cluster. Any external request will simply bypass the sidecar and go straight to its intended
destination. For example, from the source pod:
```bash
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep curl http://httpbin.org/headers
```
## Understanding what happened
In this task we looked at two ways to call external services from within an Istio cluster:
1. Using an `ExternalService` (recommended)
2. Configuring the Istio sidecar to exclude external IPs from its remapped IP table
The first approach (`ExternalService`) only supports HTTP(S) requests, but allows
you to use all of the same Istio service mesh features for calls to services within or outside
of the cluster. We demonstrated this by setting a timeout rule for calls to an external service.
The second approach bypasses the Istio sidecar proxy, giving your services direct access to any
external URL. However, configuring the proxy this way does require
cloud provider specific knowledge and configuration.
## Cleanup
1. Remove the rules.
```bash
istioctl delete externalservice httpbin-ext google-ext
istioctl delete destinationrule google-ext
istioctl delete virtualservice httpbin-ext
```
1. Shut down the [sleep](https://github.com/istio/istio/tree/master/samples/sleep) service.
```bash
kubectl delete -f samples/sleep/sleep.yaml
```
## ExternalService and Access Control
Note that Istio `ExternalService` is **not a security feature**. It enables access to external (outside the service mesh) services. It is up to the user to deploy appropriate security mechanisms, such as firewalls, to prevent unauthorized access to external services. We are working on adding access control support for external services.
## What's next
* Read more about [external services]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#ExternalService).
* Learn how to set up
[timeouts]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#HTTPRoute.timeout),
[retries]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#HTTPRoute.retries),
and [circuit breakers]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#OutlierDetection) for egress traffic.

View File

@@ -0,0 +1,174 @@
---
title: Fault Injection
overview: This task shows how to inject delays and test the resiliency of your application.
order: 20
layout: docs
type: markdown
---
{% include home.html %}
This task shows how to inject delays and test the resiliency of your application.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
* Initialize the application version routing by either first doing the
[request routing](./request-routing.html) task or by running the following
commands:
```bash
istioctl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
istioctl replace -f samples/bookinfo/routing/route-rule-reviews-test-v2.yaml
```
## Fault injection using HTTP delay
To test our Bookinfo application microservices for resiliency, we will _inject a 7s delay_
between the reviews:v2 and ratings microservices, for user "jason". Since the _reviews:v2_ service has a
10s timeout for its calls to the ratings service, we expect the end-to-end flow to
continue without any errors.
1. Create a fault injection rule to delay traffic coming from user "jason" (our test user):
```bash
istioctl replace -f samples/bookinfo/routing/route-rule-ratings-test-delay.yaml
```
Confirm the rule is created:
```bash
istioctl get virtualservice ratings -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  ...
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        fixedDelay: 7s
        percent: 100
    match:
    - headers:
        cookie:
          regex: ^(.*?;)?(user=jason)(;.*)?$
    route:
    - destination:
        name: ratings
        subset: v1
  - route:
    - destination:
        name: ratings
        subset: v1
```
Allow several seconds to account for rule propagation delay to all pods.
1. Observe application behavior
Log in as user "jason". If the application's front page was set to correctly handle delays, we expect it
to load within approximately 7 seconds. To see the web page response times, open the
*Developer Tools* menu in IE, Chrome or Firefox (typically, key combination _Ctrl+Shift+I_
or _Alt+Cmd+I_), select the Network tab, and reload the `productpage` web page.
You will see that the webpage loads in about 6 seconds. The reviews section will show
*Sorry, product reviews are currently unavailable for this book*.
## Understanding what happened
The entire reviews service failed because our Bookinfo application
has a bug. The timeout between the productpage and reviews service is shorter (3s + 1 retry = 6s total)
than the timeout between the reviews and ratings service (10s). These kinds of bugs can occur in
typical enterprise applications where different teams develop different microservices
independently. Istio's fault injection rules help you identify such anomalies without
impacting end users.
> Notice that we are restricting the failure impact to user "jason" only. If you log in
> as any other user, you will not experience any delays.
**Fixing the bug:** At this point we would normally fix the problem by increasing the
productpage timeout or decreasing the reviews-to-ratings service timeout,
terminating and restarting the fixed microservice, and then confirming that the `productpage`
returns its response without any errors.
However, we already have this fix running in v3 of the reviews service, so we can simply
fix the problem by migrating all
traffic to `reviews:v3` as described in the
[traffic shifting]({{home}}/docs/tasks/traffic-management/traffic-shifting.html) task.
(Left as an exercise for the reader: change the delay rule to
use a 2.8 second delay and then run it against the v3 version of reviews; a sketch of the modified rule follows.)
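As a starting point for that exercise, here is a sketch of what the modified fault section of the rule could look like (only the delay value changes; migrating to `reviews:v3` is done separately, as described in the traffic shifting task):
```yaml
http:
- fault:
    delay:
      fixedDelay: 2.8s
      percent: 100
  match:
  - headers:
      cookie:
        regex: ^(.*?;)?(user=jason)(;.*)?$
  route:
  - destination:
      name: ratings
      subset: v1
```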
## Fault injection using HTTP Abort
As another test of resiliency, we will introduce an HTTP abort to the ratings microservice for the user "jason".
We expect the page to load immediately, unlike the delay example, and display the "product ratings not available"
message.
1. Create a fault injection rule to send an HTTP abort for user "jason":
```bash
istioctl replace -f samples/bookinfo/routing/route-rule-ratings-test-abort.yaml
```
Confirm the rule is created:
```bash
istioctl get virtualservice ratings -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  ...
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        httpStatus: 500
        percent: 100
    match:
    - headers:
        cookie:
          regex: ^(.*?;)?(user=jason)(;.*)?$
  - route:
    - destination:
        name: ratings
        subset: v1
```
1. Observe application behavior
Log in as user "jason". If the rule propagated successfully to all pods, you should see the page load
immediately with the "product ratings not available" message. Log out from user "jason" and the
`productpage` should load successfully, since all other users are routed to `reviews:v1`, which does not call the ratings service.
## Cleanup
* Remove the application routing rules:
```bash
istioctl delete -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
## What's next
* Learn more about [fault injection]({{home}}/docs/concepts/traffic-management/fault-injection.html).

View File

@@ -0,0 +1,12 @@
---
title: Traffic Management (v1alpha3)
overview: WIP - Describes tasks that demonstrate traffic routing features of Istio service mesh.
order: 15
layout: docs
type: markdown
toc: false
---
{% include section-index.html docs=site.docs %}

View File

@@ -0,0 +1,268 @@
---
title: Mirroring
overview: This task demonstrates the traffic shadowing/mirroring capabilities of Istio
order: 60
layout: docs
type: markdown
---
{% include home.html %}
This task demonstrates the traffic shadowing/mirroring capabilities of Istio. Traffic mirroring is a powerful concept that allows feature teams to bring changes to production with as little risk as possible. Mirroring brings a copy of live traffic to a mirrored service and happens out of band of the critical request path for the primary service.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Start two versions of the `httpbin` service that have access logging enabled.
httpbin-v1:
```bash
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:8080", "httpbin:app"]
        ports:
        - containerPort: 8080
EOF
```
httpbin-v2:
```bash
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:8080", "httpbin:app"]
        ports:
        - containerPort: 8080
EOF
```
httpbin Kubernetes service:
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: httpbin
EOF
```
* Start the `sleep` service so we can use `curl` to provide load.
sleep service:
```bash
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF
```
## Mirroring
Let's set up a scenario to demonstrate the traffic-mirroring capabilities of Istio. We have two versions of our `httpbin` service. By default Kubernetes will load balance across both versions of the service. We'll use Istio to force all traffic to v1 of the `httpbin` service.
### Creating default routing policy
1. Create a default route rule to route all traffic to `v1` of our `httpbin` service:
```bash
cat <<EOF | istioctl create -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        name: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  name: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
```
Now all traffic should go to the `httpbin v1` service. Let's try sending in some traffic:
```bash
export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers'
```
```
{
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8080",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "eca3d7ed8f2e6a0a",
    "X-B3-Traceid": "eca3d7ed8f2e6a0a",
    "X-Ot-Span-Context": "eca3d7ed8f2e6a0a;eca3d7ed8f2e6a0a;0000000000000000"
  }
}
```
If we check the logs for `v1` and `v2` of our `httpbin` pods, we should see access log entries for only `v1`:
```bash
export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
kubectl logs -f $V1_POD -c httpbin
```
```
127.0.0.1 - - [07/Mar/2018:19:02:43 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
```
```bash
export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
kubectl logs -f $V2_POD -c httpbin
```
```
<none>
```
2. Change the route rule to mirror traffic to v2:
```bash
cat <<EOF | istioctl replace -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        name: httpbin
        subset: v1
      weight: 100
    mirror:
      name: httpbin
      subset: v2
EOF
```
This route rule specifies that we route 100% of the traffic to v1. The last stanza specifies that we want to mirror to the `httpbin v2` service. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority header appended with *-shadow*. For example, *cluster-1* becomes *cluster-1-shadow*. It is also important to realize that these requests are mirrored as "fire and forget", i.e., the responses are discarded.
Now if we send in traffic:
```bash
kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers'
```
We should see access log entries for both `v1` and `v2`. The access log entries in `v2` are for the mirrored copies of the requests that are actually served by `v1`.
```bash
kubectl logs -f $V1_POD -c httpbin
```
```
127.0.0.1 - - [07/Mar/2018:19:02:43 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
127.0.0.1 - - [07/Mar/2018:19:26:44 +0000] "GET /headers HTTP/1.1" 200 321 "-" "curl/7.35.0"
```
```bash
kubectl logs -f $V2_POD -c httpbin
```
```
127.0.0.1 - - [07/Mar/2018:19:26:44 +0000] "GET /headers HTTP/1.1" 200 361 "-" "curl/7.35.0"
```
## Cleaning up
1. Remove the rules.
```bash
istioctl delete virtualservice httpbin
istioctl delete destinationrule httpbin
```
1. Shut down the [httpbin](https://github.com/istio/istio/tree/master/samples/httpbin) service and client.
```bash
kubectl delete deploy httpbin-v1 httpbin-v2 sleep
kubectl delete svc httpbin
```
## What's next
Check out the [Mirroring configuration]({{home}}/docs/reference/config/istio.networking.v1alpha3.html#HTTPRoute.mirror) reference documentation.

View File

@@ -0,0 +1,186 @@
---
title: Configuring Request Routing
overview: This task shows you how to configure dynamic request routing based on weights and HTTP headers.
order: 10
layout: docs
type: markdown
---
{% include home.html %}
This task shows you how to configure dynamic request routing based on weights and HTTP headers.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
## Content-based routing
Because the Bookinfo sample deploys 3 versions of the reviews microservice,
we need to set a default route.
Otherwise, if you access the application several times, you'll notice that sometimes the output contains
star ratings.
This is because without an explicit default version set, Istio will
route requests to all available versions of a service in a random fashion.
> Note: This task assumes you don't have any routes set yet. If you've already created conflicting route rules for the sample,
you'll need to use `replace` rather than `create` in the following command.
1. Set the default version for all microservices to v1.
```bash
istioctl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
> Note: In a Kubernetes deployment of Istio, you can replace `istioctl`
> with `kubectl` in the above, and for all other CLI commands.
> Note, however, that `kubectl` currently does not provide input validation.
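For example, a kubectl equivalent of the command above (a sketch; kubectl accepts the same file but, as noted, won't validate it):
```bash
kubectl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
```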
You can display the routes that are defined with the following command:
```bash
istioctl get virtualservices -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
  ...
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        name: details
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
  ...
spec:
  gateways:
  - bookinfo-gateway
  - mesh
  hosts:
  - productpage
  http:
  - route:
    - destination:
        name: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  ...
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        name: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  ...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        name: reviews
        subset: v1
---
```
> Note: The corresponding `subset` definitions can be displayed using `istioctl get destinationrules -o yaml`.
Since rule propagation to the proxies is asynchronous, you should wait a few seconds for the rules
to propagate to all pods before attempting to access the application.
1. Open the Bookinfo URL (http://$GATEWAY_URL/productpage) in your browser
You should see the Bookinfo application productpage displayed.
Notice that the `productpage` is displayed with no rating stars since `reviews:v1` does not access the ratings service.
1. Route a specific user to `reviews:v2`
Let's enable the ratings service for test user "jason" by routing productpage traffic to
`reviews:v2` instances.
```bash
istioctl replace -f samples/bookinfo/routing/route-rule-reviews-test-v2.yaml
```
Confirm the rule is created:
```bash
istioctl get virtualservice reviews -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  ...
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        cookie:
          regex: ^(.*?;)?(user=jason)(;.*)?$
    route:
    - destination:
        name: reviews
        subset: v2
  - route:
    - destination:
        name: reviews
        subset: v1
```
1. Log in as user "jason" at the `productpage` web page.
You should now see ratings (1-5 stars) next to each review. Notice that if you log in as
any other user, you will continue to see `reviews:v1`.
## Understanding what happened
In this task, you used Istio to send 100% of the traffic to the v1 version of each of the Bookinfo
services. You then set a rule to selectively send traffic to version v2 of the reviews service based
on a header (i.e., a user cookie) in a request.
Once the v2 version has been tested to our satisfaction, we could use Istio to send traffic from
all users to v2, optionally in a gradual fashion. We'll explore this in a separate task.
## Cleanup
* Remove the application routing rules.
```bash
istioctl delete -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
## What's next
* Learn more about [request routing]({{home}}/docs/concepts/traffic-management/rules-configuration.html).

View File

@@ -0,0 +1,149 @@
---
title: Setting Request Timeouts
overview: This task shows you how to set up request timeouts in Envoy using Istio.
order: 28
layout: docs
type: markdown
---
{% include home.html %}
This task shows you how to set up request timeouts in Envoy using Istio.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
* Initialize the application version routing by running the following command:
```bash
istioctl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
## Request timeouts
A timeout for HTTP requests can be specified using the *timeout* field of the route rule.
By default, the timeout is 15 seconds, but in this task we'll override the `reviews` service
timeout to 1 second.
To see its effect, however, we'll also introduce an artificial 2 second delay in calls
to the `ratings` service.
1. Route requests to v2 of the `reviews` service, i.e., a version that calls the `ratings` service:
```bash
cat <<EOF | istioctl replace -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        name: reviews
        subset: v2
EOF
```
1. Add a 2 second delay to calls to the `ratings` service:
```bash
cat <<EOF | istioctl replace -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        name: ratings
        subset: v1
EOF
```
1. Open the Bookinfo URL (http://$GATEWAY_URL/productpage) in your browser
You should see the Bookinfo application working normally (with ratings stars displayed),
but there is a 2 second delay whenever you refresh the page.
1. Now add a 1 second request timeout for calls to the `reviews` service:
```bash
cat <<EOF | istioctl replace -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        name: reviews
        subset: v2
    timeout: 1s
EOF
```
1. Refresh the Bookinfo web page
You should now see that it returns in 1 second (instead of 2), but the reviews are unavailable.
## Understanding what happened
In this task, you used Istio to set the request timeout for calls to the `reviews`
microservice to 1 second (instead of the default 15 seconds).
Since the `reviews` service subsequently calls the `ratings` service when handling requests,
you used Istio to inject a 2 second delay in calls to `ratings`, so that you would cause the
`reviews` service to take longer than 1 second to complete and consequently you could see the
timeout in action.
You observed that, instead of displaying reviews, the Bookinfo productpage (which calls the `reviews`
service to populate the page) displayed
the message: *Sorry, product reviews are currently unavailable for this book*.
This was the result of it receiving the timeout error from the `reviews` service.
If you check out the [fault injection task](./fault-injection.html), you'll find out that the `productpage`
microservice also has its own application-level timeout (3 seconds) for calls to the `reviews` microservice.
Notice that in this task we used an Istio route rule to set the timeout to 1 second.
Had you instead set the timeout to something greater than 3 seconds (e.g., 4 seconds), the timeout
would have had no effect, since the more restrictive of the two takes precedence.
More details can be found [here]({{home}}/docs/concepts/traffic-management/handling-failures.html#faq).
One more thing to note about timeouts in Istio is that, in addition to overriding them in route rules
as you did in this task, they can also be overridden on a per-request basis if the application adds
an `x-envoy-upstream-rq-timeout-ms` header on outbound requests. In this header,
the timeout is specified in milliseconds (instead of seconds).
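For example, a sketch of such a per-request override, issued from any sidecar-injected pod that has `curl` (the Bookinfo `ratings` service and port are used here purely for illustration):
```bash
curl -H "x-envoy-upstream-rq-timeout-ms: 4000" http://ratings:9080/ratings/0
```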
## Cleanup
* Remove the application routing rules.
```bash
istioctl delete -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
## What's next
* Learn more about [failure handling]({{home}}/docs/concepts/traffic-management/handling-failures.html).
* Learn more about [routing rules]({{home}}/docs/concepts/traffic-management/rules-configuration.html).

View File

@@ -0,0 +1,108 @@
---
title: Traffic Shifting
overview: This task shows you how to migrate traffic from an old to a new version of a service.
order: 25
layout: docs
type: markdown
---
{% include home.html %}
This task shows you how to gradually migrate traffic from an old to a new version of a service.
With Istio, we can migrate the traffic in a gradual fashion by using a sequence of rules
with weights less than 100 to migrate traffic in steps, for example 10, 20, 30, ... 100%.
For simplicity this task will migrate the traffic from `reviews:v1` to `reviews:v3` in just
two steps: 50%, 100%.
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [Bookinfo]({{home}}/docs/guides/bookinfo.html) sample application.
## Weight-based version routing
1. Set the default version for all microservices to v1.
```bash
istioctl create -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
1. Confirm v1 is the active version of the `reviews` service by opening http://$GATEWAY_URL/productpage in your browser.
You should see the Bookinfo application productpage displayed.
Notice that the `productpage` is displayed with no rating stars since `reviews:v1` does not access the ratings service.
1. First, transfer 50% of the traffic from `reviews:v1` to `reviews:v3` with the following command:
```bash
istioctl replace -f samples/bookinfo/routing/route-rule-reviews-50-v3.yaml
```
Confirm the rule was replaced:
```bash
istioctl get virtualservice reviews -o yaml
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  ...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        name: reviews
        subset: v1
      weight: 50
    - destination:
        name: reviews
        subset: v3
      weight: 50
```
1. Refresh the `productpage` in your browser and you should now see *red* colored star ratings approximately 50% of the time.
> Note: With the current Envoy sidecar implementation, you may need to refresh the `productpage` many times
> to see the proper distribution. It may require 15 refreshes or more before you see any change. You can modify the rules to route 90% of the traffic to v3 to see red stars more often, as sketched below.
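A sketch of that modification (the same VirtualService as above, with adjusted weights):
```yaml
http:
- route:
  - destination:
      name: reviews
      subset: v1
    weight: 10
  - destination:
      name: reviews
      subset: v3
    weight: 90
```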
1. When version v3 of the `reviews` microservice is considered stable, we can route 100% of the traffic to `reviews:v3`:
```bash
istioctl replace -f samples/bookinfo/routing/route-rule-reviews-v3.yaml
```
You can now log into the `productpage` as any user and you should always see book reviews
with *red* colored star ratings for each review.
## Understanding what happened
In this task we migrated traffic from an old to a new version of the `reviews` service using Istio's
weighted routing feature. Note that this is very different from version migration using deployment features
of container orchestration platforms, which use instance scaling to manage the traffic.
With Istio, we can allow the two versions of the `reviews` service to scale up and down independently,
without affecting the traffic distribution between them.
For more about version routing with autoscaling, check out [Canary Deployments using Istio]({{home}}/blog/canary-deployments-using-istio.html).
## Cleanup
* Remove the application routing rules.
```bash
istioctl delete -f samples/bookinfo/routing/route-rule-all-v1.yaml
```
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup]({{home}}/docs/guides/bookinfo.html#cleanup) instructions
to shut down the application.
## What's next
* Learn more about [request routing]({{home}}/docs/concepts/traffic-management/rules-configuration.html).