mirror of https://github.com/istio/istio.io.git
Update performance numbers to 90th percentile (#3781)
* Update performance numbers to 90th percentile
* spell checker
This commit is contained in:

parent 3accc1d69b
commit d266753179
@@ -24,7 +24,15 @@
 4s
 5000qps
 50Mb
+1ms
+2ms
+3ms
+4ms
+5ms
 6ms
+7ms
+8ms
+9ms
 6s
 72.96ms
 7Mb
@@ -32,7 +32,7 @@ After running the tests using Istio {{< istio_release_name >}}, we get the following results:
 - The Envoy proxy uses **0.6 vCPU** and **50 MB memory** per 1000 requests per second going through the proxy.
 - The `istio-telemetry` service uses **0.6 vCPU** per 1000 **mesh-wide** requests per second.
 - Pilot uses **1 vCPU** and 1.5 GB of memory.
-- The Envoy proxy adds 10ms to the 99th percentile latency.
+- The Envoy proxy adds 8ms to the 90th percentile latency.
 
 ## Control plane performance
 
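To make the per-1000-rps figures above easier to apply, here is a small back-of-the-envelope sketch. It is illustrative only: it merely scales the numbers quoted in the hunk, and the function names and the 3000 rps example are invented for this sketch, not part of the Istio docs or tooling.

```python
# Illustrative sketch: scales the per-1000-rps figures quoted above.
# All names and the 3000 rps example are invented for this example.

def envoy_sidecar_estimate(rps_through_proxy):
    """Rough (vCPU, MB) for one Envoy sidecar: 0.6 vCPU and 50 MB
    per 1000 requests per second going through that proxy."""
    scale = rps_through_proxy / 1000.0
    return 0.6 * scale, 50.0 * scale

def telemetry_estimate(mesh_wide_rps):
    """Rough vCPU for istio-telemetry: 0.6 vCPU per 1000 mesh-wide rps."""
    return 0.6 * mesh_wide_rps / 1000.0

cpu, mem = envoy_sidecar_estimate(3000)
print(f"one sidecar at 3000 rps: ~{cpu:.1f} vCPU, ~{mem:.0f} MB")
print(f"istio-telemetry at 3000 mesh-wide rps: ~{telemetry_estimate(3000):.1f} vCPU")
```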
@@ -101,12 +101,12 @@ immediately. This process adds to the queue wait time of the next request and affects
 average and tail latencies. The actual tail latency depends on the traffic pattern.
 
 Inside the mesh, a request traverses the client-side proxy and then the server-side
-proxy. This two proxies on the data path add about 10ms to the 99th percentile latency at 1000 requests per second.
-The server-side proxy alone adds 6ms to the 99th percentile latency.
+proxy. This two proxies on the data path add about 8ms to the 90th percentile latency at 1000 requests per second.
+The server-side proxy alone adds 2ms to the 90th percentile latency.
 
 ### Latency for Istio {{< istio_release_name >}}
 
-The default configuration of Istio 1.1 adds 10ms to the 99th percentile latency of the data plane over the baseline.
+The default configuration of Istio 1.1 adds 8ms to the 90th percentile latency of the data plane over the baseline.
 We obtained these results using the [Istio benchmarks](https://github.com/istio/tools/tree/master/perf/benchmark)
 for the `http/1.1` protocol, with a 1 kB payload at 1000 requests per second using 16 client connections, 2 proxy workers and mutual TLS enabled.
 
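Since the change above is about which percentile gets reported, a short sketch of how a 90th versus 99th percentile latency is read from raw samples may help. The latency samples below are synthetic and the helper is a plain nearest-rank percentile; this is not the Istio benchmark code or its output.

```python
# Illustrative only: reads P90 and P99 from a set of latency samples.
# The samples are synthetic; this is not output of the Istio benchmarks.
import random

random.seed(0)
samples = [random.lognormvariate(1.0, 0.5) for _ in range(10_000)]  # fake latencies in ms

def percentile(values, pct):
    """Nearest-rank percentile: the value below which roughly pct% of samples fall."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

print(f"P90: {percentile(samples, 90):.2f} ms")  # the figure now quoted in the doc
print(f"P99: {percentile(samples, 99):.2f} ms")  # the old figure; always >= P90
```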
@@ -115,8 +115,8 @@ This will decrease the amount data flowing through the system, which will in turn
 
 {{< image width="90%" ratio="75%"
     link="latency.svg?sanitize=true"
-    alt="P99 latency vs client connections"
-    caption="P99 latency vs client connections"
+    alt="P90 latency vs client connections"
+    caption="P90 latency vs client connections"
     >}}
 
 - `baseline` Client pod directly calls the server pod, no sidecars are present.
File diff suppressed because one or more lines are too long.
Before: 208 KiB | After: 186 KiB