mirror of https://github.com/istio/istio.io.git
Update circuit breaker task for 1.5 (#6599)
* Document starting fortio with automatic injection enabled
* Update fortio output
This commit is contained in:
parent 10821e3a17
commit 54430797b9
@@ -90,7 +90,15 @@ delays for outgoing HTTP calls. You will use this client to "trip" the circuit breaker
 policies you set in the `DestinationRule`.
 
 1.  Inject the client with the Istio sidecar proxy so network interactions are
-    governed by Istio:
+    governed by Istio.
+
+    If you have enabled [automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection), deploy the `fortio` service:
+
+    {{< text bash >}}
+    $ kubectl apply -f @samples/httpbin/sample-client/fortio-deploy.yaml@
+    {{< /text >}}
+
+    Otherwise, you have to manually inject the sidecar before deploying the `fortio` application:
 
     {{< text bash >}}
    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/sample-client/fortio-deploy.yaml@)
@@ -101,14 +109,14 @@ Pass in `-curl` to indicate that you just want to make one call:
 
     {{< text bash >}}
     $ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
     $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
     HTTP/1.1 200 OK
     server: envoy
-    date: Tue, 16 Jan 2018 23:47:00 GMT
+    date: Tue, 25 Feb 2020 20:25:52 GMT
     content-type: application/json
-    content-length: 586
     access-control-allow-origin: *
     access-control-allow-credentials: true
+    content-length: 445
     x-envoy-upstream-service-time: 36
 
     {
@@ -116,12 +124,12 @@ Pass in `-curl` to indicate that you just want to make one call:
       "headers": {
         "Content-Length": "0",
         "Host": "httpbin:8000",
-        "User-Agent": "istio/fortio-0.6.2",
+        "User-Agent": "fortio.org/fortio-1.3.1",
+        "X-B3-Parentspanid": "8fc453fb1dec2c22",
         "X-B3-Sampled": "1",
-        "X-B3-Spanid": "824fbd828d809bf4",
-        "X-B3-Traceid": "824fbd828d809bf4",
-        "X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
-        "X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
+        "X-B3-Spanid": "071d7f06bc94943c",
+        "X-B3-Traceid": "86a929a0e76cda378fc453fb1dec2c22",
+        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=68bbaedefe01ef4cb99e17358ff63e92d04a4ce831a35ab9a31d3c8e06adb038;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
       },
       "origin": "127.0.0.1",
       "url": "http://httpbin:8000/get"
@@ -142,103 +150,121 @@ one connection and request concurrently, you should see some failures when the
 
     {{< text bash >}}
     $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
-    Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
-    Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
-    23:51:10 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    Ended after 106.474079ms : 20 calls. qps=187.84
-    Aggregated Function Time : count 20 avg 0.010215375 +/- 0.003604 min 0.005172024 max 0.019434859 sum 0.204307492
+    20:33:46 I logger.go:97> Log level is now 3 Warning (was 2 Info)
+    Fortio 1.3.1 running at 0 queries per second, 6->6 procs, for 20 calls: http://httpbin:8000/get
+    Starting at max qps with 2 thread(s) [gomax 6] for exactly 20 calls (10 per thread + 0)
+    20:33:46 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:33:47 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:33:47 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    Ended after 59.8524ms : 20 calls. qps=334.16
+    Aggregated Function Time : count 20 avg 0.0056869 +/- 0.003869 min 0.000499 max 0.0144329 sum 0.113738
     # range, mid point, percentile, count
-    >= 0.00517202 <= 0.006 , 0.00558601 , 5.00, 1
-    > 0.006 <= 0.007 , 0.0065 , 20.00, 3
-    > 0.007 <= 0.008 , 0.0075 , 30.00, 2
-    > 0.008 <= 0.009 , 0.0085 , 40.00, 2
-    > 0.009 <= 0.01 , 0.0095 , 60.00, 4
-    > 0.01 <= 0.011 , 0.0105 , 70.00, 2
-    > 0.011 <= 0.012 , 0.0115 , 75.00, 1
-    > 0.012 <= 0.014 , 0.013 , 90.00, 3
-    > 0.016 <= 0.018 , 0.017 , 95.00, 1
-    > 0.018 <= 0.0194349 , 0.0187174 , 100.00, 1
-    # target 50% 0.0095
-    # target 75% 0.012
-    # target 99% 0.0191479
-    # target 99.9% 0.0194062
-    Code 200 : 19 (95.0 %)
-    Code 503 : 1 (5.0 %)
-    Response Header Sizes : count 20 avg 218.85 +/- 50.21 min 0 max 231 sum 4377
-    Response Body/Total Sizes : count 20 avg 652.45 +/- 99.9 min 217 max 676 sum 13049
-    All done 20 calls (plus 0 warmup) 10.215 ms avg, 187.8 qps
+    >= 0.000499 <= 0.001 , 0.0007495 , 10.00, 2
+    > 0.001 <= 0.002 , 0.0015 , 15.00, 1
+    > 0.003 <= 0.004 , 0.0035 , 45.00, 6
+    > 0.004 <= 0.005 , 0.0045 , 55.00, 2
+    > 0.005 <= 0.006 , 0.0055 , 60.00, 1
+    > 0.006 <= 0.007 , 0.0065 , 70.00, 2
+    > 0.007 <= 0.008 , 0.0075 , 80.00, 2
+    > 0.008 <= 0.009 , 0.0085 , 85.00, 1
+    > 0.011 <= 0.012 , 0.0115 , 90.00, 1
+    > 0.012 <= 0.014 , 0.013 , 95.00, 1
+    > 0.014 <= 0.0144329 , 0.0142165 , 100.00, 1
+    # target 50% 0.0045
+    # target 75% 0.0075
+    # target 90% 0.012
+    # target 99% 0.0143463
+    # target 99.9% 0.0144242
+    Sockets used: 4 (for perfect keepalive, would be 2)
+    Code 200 : 17 (85.0 %)
+    Code 503 : 3 (15.0 %)
+    Response Header Sizes : count 20 avg 195.65 +/- 82.19 min 0 max 231 sum 3913
+    Response Body/Total Sizes : count 20 avg 729.9 +/- 205.4 min 241 max 817 sum 14598
+    All done 20 calls (plus 0 warmup) 5.687 ms avg, 334.2 qps
     {{< /text >}}
 
     It's interesting to see that almost all requests made it through! The `istio-proxy`
     does allow for some leeway.
 
     {{< text plain >}}
-    Code 200 : 19 (95.0 %)
-    Code 503 : 1 (5.0 %)
+    Code 200 : 17 (85.0 %)
+    Code 503 : 3 (15.0 %)
     {{< /text >}}
 
 1.  Bring the number of concurrent connections up to 3:
 
     {{< text bash >}}
     $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
-    Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
-    Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
-    Ended after 71.05365ms : 30 calls. qps=422.22
-    Aggregated Function Time : count 30 avg 0.0053360199 +/- 0.004219 min 0.000487853 max 0.018906468 sum 0.160080597
+    20:32:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
+    Fortio 1.3.1 running at 0 queries per second, 6->6 procs, for 30 calls: http://httpbin:8000/get
+    Starting at max qps with 3 thread(s) [gomax 6] for exactly 30 calls (10 per thread + 0)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    20:32:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
+    Ended after 51.9946ms : 30 calls. qps=576.98
+    Aggregated Function Time : count 30 avg 0.0040001633 +/- 0.003447 min 0.0004298 max 0.015943 sum 0.1200049
     # range, mid point, percentile, count
-    >= 0.000487853 <= 0.001 , 0.000743926 , 10.00, 3
-    > 0.001 <= 0.002 , 0.0015 , 30.00, 6
-    > 0.002 <= 0.003 , 0.0025 , 33.33, 1
-    > 0.003 <= 0.004 , 0.0035 , 40.00, 2
-    > 0.004 <= 0.005 , 0.0045 , 46.67, 2
-    > 0.005 <= 0.006 , 0.0055 , 60.00, 4
-    > 0.006 <= 0.007 , 0.0065 , 73.33, 4
-    > 0.007 <= 0.008 , 0.0075 , 80.00, 2
-    > 0.008 <= 0.009 , 0.0085 , 86.67, 2
-    > 0.009 <= 0.01 , 0.0095 , 93.33, 2
-    > 0.014 <= 0.016 , 0.015 , 96.67, 1
-    > 0.018 <= 0.0189065 , 0.0184532 , 100.00, 1
-    # target 50% 0.00525
-    # target 75% 0.00725
-    # target 99% 0.0186345
-    # target 99.9% 0.0188793
-    Code 200 : 19 (63.3 %)
-    Code 503 : 11 (36.7 %)
-    Response Header Sizes : count 30 avg 145.73333 +/- 110.9 min 0 max 231 sum 4372
-    Response Body/Total Sizes : count 30 avg 507.13333 +/- 220.8 min 217 max 676 sum 15214
-    All done 30 calls (plus 0 warmup) 5.336 ms avg, 422.2 qps
+    >= 0.0004298 <= 0.001 , 0.0007149 , 16.67, 5
+    > 0.001 <= 0.002 , 0.0015 , 36.67, 6
+    > 0.002 <= 0.003 , 0.0025 , 50.00, 4
+    > 0.003 <= 0.004 , 0.0035 , 60.00, 3
+    > 0.004 <= 0.005 , 0.0045 , 66.67, 2
+    > 0.005 <= 0.006 , 0.0055 , 76.67, 3
+    > 0.006 <= 0.007 , 0.0065 , 83.33, 2
+    > 0.007 <= 0.008 , 0.0075 , 86.67, 1
+    > 0.008 <= 0.009 , 0.0085 , 90.00, 1
+    > 0.009 <= 0.01 , 0.0095 , 96.67, 2
+    > 0.014 <= 0.015943 , 0.0149715 , 100.00, 1
+    # target 50% 0.003
+    # target 75% 0.00583333
+    # target 90% 0.009
+    # target 99% 0.0153601
+    # target 99.9% 0.0158847
+    Sockets used: 20 (for perfect keepalive, would be 3)
+    Code 200 : 11 (36.7 %)
+    Code 503 : 19 (63.3 %)
+    Response Header Sizes : count 30 avg 84.366667 +/- 110.9 min 0 max 231 sum 2531
+    Response Body/Total Sizes : count 30 avg 451.86667 +/- 277.1 min 241 max 817 sum 13556
+    All done 30 calls (plus 0 warmup) 4.000 ms avg, 577.0 qps
     {{< /text >}}
 
-    Now you start to see the expected circuit breaking behavior. Only 63.3% of the
+    Now you start to see the expected circuit breaking behavior. Only 36.7% of the
     requests succeeded and the rest were trapped by circuit breaking:
 
     {{< text plain >}}
-    Code 200 : 19 (63.3 %)
-    Code 503 : 11 (36.7 %)
+    Code 200 : 11 (36.7 %)
+    Code 503 : 19 (63.3 %)
     {{< /text >}}
 
 1.  Query the `istio-proxy` stats to see more:
 
     {{< text bash >}}
     $ kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
-    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_active: 0
-    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_failure_eject: 0
-    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_overflow: 12
-    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_total: 39
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 21
+    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 29
     {{< /text >}}
 
-    You can see `12` for the `upstream_rq_pending_overflow` value which means `12`
+    You can see `21` for the `upstream_rq_pending_overflow` value which means `21`
     calls so far have been flagged for circuit breaking.
 
 ## Cleaning up
 
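The behavior in the two runs above follows directly from the connection-pool limits in this task (`maxConnections: 1`, `http1MaxPendingRequests: 1`): roughly one active plus one pending request fit at a time, and the rest overflow with a 503. A toy shell sketch of that admission check (an illustration, not Envoy's actual implementation):

```shell
#!/bin/sh
# Toy model of the pool limits used in this task (assumption: a simplified
# view of Envoy's admission logic, not its real code).
max_connections=1
max_pending=1
capacity=$((max_connections + max_pending))   # one active + one pending slot

for workers in 2 3; do
  # Requests beyond capacity have nowhere to queue and are rejected with 503;
  # this is what upstream_rq_pending_overflow counts.
  accepted=$(( workers < capacity ? workers : capacity ))
  overflow=$(( workers - accepted ))
  echo "concurrency=$workers -> fits=$accepted overflows=$overflow"
done
```

With `-c 2` the two threads mostly squeeze into the active and pending slots (only a few 503s), while with `-c 3` at least one request at a time overflows, which matches the jump in 503s between the runs.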
@@ -253,5 +279,5 @@ one connection and request concurrently, you should see some failures when the
 
     {{< text bash >}}
     $ kubectl delete deploy httpbin fortio-deploy
-    $ kubectl delete svc httpbin
+    $ kubectl delete svc httpbin fortio
     {{< /text >}}