diff --git a/.spelling b/.spelling
index 166bdb81ab..915eb6e6ea 100644
--- a/.spelling
+++ b/.spelling
@@ -24,7 +24,15 @@
4s
5000qps
50Mb
+1ms
+2ms
+3ms
+4ms
+5ms
6ms
+7ms
+8ms
+9ms
6s
72.96ms
7Mb
diff --git a/content/docs/concepts/performance-and-scalability/index.md b/content/docs/concepts/performance-and-scalability/index.md
index 98657a4e8e..fd9b3a2ec1 100644
--- a/content/docs/concepts/performance-and-scalability/index.md
+++ b/content/docs/concepts/performance-and-scalability/index.md
@@ -32,7 +32,9 @@ After running the tests using Istio {{< istio_release_name >}}, we get the follo
- The Envoy proxy uses **0.6 vCPU** and **50 MB memory** per 1000 requests per second going through the proxy.
- The `istio-telemetry` service uses **0.6 vCPU** per 1000 **mesh-wide** requests per second.
- Pilot uses **1 vCPU** and 1.5 GB of memory.
-- The Envoy proxy adds 10ms to the 99th percentile latency.
+- The Envoy proxy adds 8ms to the 90th percentile latency.
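+
+For example, assuming roughly linear scaling, a sidecar handling 2000 requests per second would need about 1.2 vCPU and 100 MB of memory.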
## Control plane performance
@@ -101,12 +103,19 @@ immediately. This process adds to the queue wait time of the next request and af
average and tail latencies. The actual tail latency depends on the traffic pattern.
Inside the mesh, a request traverses the client-side proxy and then the server-side
-proxy. This two proxies on the data path add about 10ms to the 99th percentile latency at 1000 requests per second.
-The server-side proxy alone adds 6ms to the 99th percentile latency.
+proxy. These two proxies on the data path add about 8ms to the 90th percentile latency at 1000 requests per second.
+The server-side proxy alone adds 2ms to the 90th percentile latency.
### Latency for Istio {{< istio_release_name >}}
-The default configuration of Istio 1.1 adds 10ms to the 99th percentile latency of the data plane over the baseline.
+The default configuration of Istio {{< istio_release_name >}} adds 8ms to the 90th percentile latency of the data plane over the baseline.
We obtained these results using the [Istio benchmarks](https://github.com/istio/tools/tree/master/perf/benchmark)
for the `http/1.1` protocol, with a 1 kB payload at 1000 requests per second using 16 client connections, 2 proxy workers and mutual TLS enabled.
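+
+As a rough sketch, a comparable load can be generated with [Fortio](https://fortio.org/), which the benchmark scripts use; the service URL below is a placeholder:
+
+{{< text bash >}}
+$ # 1000 qps, 16 connections, 1 KiB payload, 60 second run; URL is a placeholder
+$ fortio load -qps 1000 -c 16 -t 60s -payload-size 1024 http://fortioserver:8080/echo
+{{< /text >}}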
@@ -115,8 +124,8 @@ This will decrease the amount data flowing through the system, which will in tur
{{< image width="90%" ratio="75%"
link="latency.svg?sanitize=true"
- alt="P99 latency vs client connections"
- caption="P99 latency vs client connections"
+ alt="P90 latency vs client connections"
+ caption="P90 latency vs client connections"
>}}
- `baseline` Client pod directly calls the server pod, no sidecars are present.
diff --git a/content/docs/concepts/performance-and-scalability/latency.svg b/content/docs/concepts/performance-and-scalability/latency.svg
index fe4de89717..e036beeb46 100644
--- a/content/docs/concepts/performance-and-scalability/latency.svg
+++ b/content/docs/concepts/performance-and-scalability/latency.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file