diff --git a/sig-scalability/README.md b/sig-scalability/README.md
index d29eefdca..b054e3f82 100644
--- a/sig-scalability/README.md
+++ b/sig-scalability/README.md
@@ -8,8 +8,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener
--->
# Scalability Special Interest Group
-SIG Scalability is responsible for defining and driving scalability goals for Kubernetes. We also coordinate and contribute to general system-wide scalability and performance improvements (not falling into the charter of other individual SIGs) by driving large architectural changes and finding bottlenecks, as well as provide guidance and consultations about any scalability and performance related aspects of Kubernetes.
-We are actively working on finding and removing various scalability bottlenecks which should lead us towards pushing system's scalability higher. This may include going beyond 5k nodes in the future - although that's not our priority as of now, this is very deeply in our area of interest and we are happy to guide and collaborate on any efforts towards that goal as long as they are not sacrificing on overall Kubernetes architecture (by making it non-maintainable, non-understandable, etc.).
+SIG Scalability is responsible for defining and driving scalability goals for Kubernetes. We also coordinate and contribute to general system-wide scalability and performance improvements (not falling into the charter of other individual SIGs) by driving large architectural changes and finding bottlenecks, as well as provide guidance and consultations about any scalability and performance related aspects of Kubernetes.
We are actively working on finding and removing various scalability bottlenecks which should lead us towards pushing system's scalability higher. This may include going beyond 5k nodes in the future - although that's not our priority as of now, this is very deeply in our area of interest and we are happy to guide and collaborate on any efforts towards that goal as long as they are not sacrificing on overall Kubernetes architecture (by making it non-maintainable, non-understandable, etc.).
The [charter](charter.md) defines the scope and governance of the Scalability Special Interest Group.
diff --git a/sig-scalability/slos/dns_latency.md b/sig-scalability/slos/dns_latency.md
index 3293fd8d6..db4731b45 100644
--- a/sig-scalability/slos/dns_latency.md
+++ b/sig-scalability/slos/dns_latency.md
@@ -4,9 +4,9 @@
| Status | SLI | SLO |
| --- | --- | --- |
-| __WIP__ | In-cluster dns latency from a single prober pod, measured as latency of per second DNS lookup[1](#footnote) for "null service" from that pod, measured as 99th percentile over last 5 minutes. | In default Kubernetes installataion with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day <= X |
+| __WIP__ | In-cluster DNS latency from a single prober pod, measured as latency of per-second DNS lookup[1](#footnote1) for "null service" from that pod, measured as 99th percentile over last 5 minutes. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day <= X |
-\[1\] In fact two DNS lookups: (1) to nameserver IP from
+\[1\] In fact two DNS lookups: (1) to nameserver IP from
/etc/resolv.conf (2) to kube-system/kube-dns service IP and track them as two
separate SLIs.
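+
+As an illustration only (not part of the SLI definition), a prober along these
+lines could issue both lookups once per second and record their latencies as the
+two separate SLIs. The helper `lookupVia`, the resolver IPs, and the probed name
+below are placeholders: a real prober would read the nameserver from
+/etc/resolv.conf and discover the kube-system/kube-dns service IP from the cluster.
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"time"
+)
+
+// lookupVia resolves name against a specific DNS server IP (port 53) and
+// returns the observed lookup latency.
+func lookupVia(serverIP, name string) (time.Duration, error) {
+	r := &net.Resolver{
+		PreferGo: true,
+		// Ignore the address Go would normally dial and always target serverIP.
+		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
+			d := net.Dialer{Timeout: 2 * time.Second}
+			return d.DialContext(ctx, network, net.JoinHostPort(serverIP, "53"))
+		},
+	}
+	start := time.Now()
+	_, err := r.LookupHost(context.Background(), name)
+	return time.Since(start), err
+}
+
+func main() {
+	// Placeholder values for illustration only.
+	resolvConfNameserver := "10.96.0.10" // nameserver from /etc/resolv.conf
+	kubeDNSServiceIP := "10.96.0.10"     // kube-system/kube-dns service IP
+	target := "null-service.default.svc.cluster.local"
+
+	for range time.Tick(time.Second) {
+		for label, ip := range map[string]string{
+			"resolv.conf nameserver": resolvConfNameserver,
+			"kube-dns service IP":    kubeDNSServiceIP,
+		} {
+			// Each label would feed its own SLI time series.
+			latency, err := lookupVia(ip, target)
+			fmt.Printf("%s: latency=%v err=%v\n", label, latency, err)
+		}
+	}
+}
+```
+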
diff --git a/sig-scalability/slos/slos.md b/sig-scalability/slos/slos.md
index 9924d9488..49cc9ce19 100644
--- a/sig-scalability/slos/slos.md
+++ b/sig-scalability/slos/slos.md
@@ -107,16 +107,13 @@ Prerequisite: Kubernetes cluster is available and serving.
| __Official__ | Latency of non-streaming read-only API calls for every (resource, scope pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, scope) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day[1](#footnote1) (a) <= 1s if `scope=resource` (b) <= 5s if `scope=namespace` (c) <= 30s if `scope=cluster` | [Details](./api_call_latency.md) |
| __Official__ | Startup latency of stateless and schedulable pods, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile per cluster-day[1](#footnote1) <= 5s | [Details](./pod_startup_latency.md) |
| __WIP__ | Latency of programming a single (e.g. iptables on a given node) in-cluster load balancing mechanism, measured from when service spec or list of its `Ready` pods change to when it is reflected in load balancing mechanism, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all programmers (e.g. iptables)) per cluster-day[1](#footnote1) <= X | [Details](./network_programming_latency.md) |
-| __WIP__ | Latency of programming a single in-cluster dns instance, measured from when service spec or list of its `Ready` pods change to when it is reflected in that dns instance, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all dns instances) per cluster-day <= X | [Details](./dns_programming_latency.md) |
-| __WIP__ | In-cluster network latency from a single prober pod, measured as latency of per second ping from that pod to "null service", measured as 99th percentile over last 5 minutes. | In default Kubernetes installataion with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day <= X | [Details](./network_latency.md) |
-| __WIP__ | In-cluster dns latency from a single prober pod, measured as latency of per second DNS lookup[1](#footnote2) for "null service" from that pod, measured as 99th percentile over last 5 minutes. | In default Kubernetes installataion with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day <= X |
+| __WIP__ | Latency of programming a single in-cluster DNS instance, measured from when service spec or list of its `Ready` pods change to when it is reflected in that DNS instance, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all DNS instances) per cluster-day[1](#footnote1) <= X | [Details](./dns_programming_latency.md) |
+| __WIP__ | In-cluster network latency from a single prober pod, measured as latency of per-second ping from that pod to "null service", measured as 99th percentile over last 5 minutes. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day[1](#footnote1) <= X | [Details](./network_latency.md) |
+| __WIP__ | In-cluster DNS latency from a single prober pod, measured as latency of per-second DNS lookup for "null service" from that pod, measured as 99th percentile over last 5 minutes. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day[1](#footnote1) <= X | [Details](./dns_latency.md) |
\[1\] For the purpose of visualization it will be a
sliding window. However, for the purpose of reporting the SLO, it means one
point per day (whether SLO was satisfied on a given day or not).
-\[2\] In fact two DNS lookups: (1) to nameserver IP from
-/etc/resolv.conf (2) to kube-system/kube-dns service IP and track them as two
-separate SLIs.
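+
+The two-level aggregation used in the WIP rows above ("99th percentile of (99th
+percentiles across all prober pods)") can be sketched as follows. The sample
+data and the nearest-rank percentile method are assumptions for illustration,
+not part of the SLO definitions.
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+	"sort"
+)
+
+// percentile returns the p-th percentile (0-100) of values using the
+// nearest-rank method (the choice of method here is illustrative).
+func percentile(values []float64, p float64) float64 {
+	sorted := append([]float64(nil), values...)
+	sort.Float64s(sorted)
+	rank := int(math.Ceil(p/100*float64(len(sorted)))) - 1
+	if rank < 0 {
+		rank = 0
+	}
+	return sorted[rank]
+}
+
+func main() {
+	// Made-up per-prober-pod latency samples in milliseconds.
+	perPodSamples := map[string][]float64{
+		"prober-a": {1.1, 1.4, 2.0, 9.5},
+		"prober-b": {0.9, 1.2, 1.8, 3.2},
+	}
+	// Step 1: 99th percentile within each prober pod.
+	var perPodP99 []float64
+	for _, samples := range perPodSamples {
+		perPodP99 = append(perPodP99, percentile(samples, 99))
+	}
+	// Step 2: 99th percentile across the per-pod values; this is the number
+	// compared against X for the cluster-day.
+	fmt.Printf("p99 of per-pod p99s: %.1fms\n", percentile(perPodP99, 99))
+}
+```
+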
### Burst SLIs/SLOs
diff --git a/sigs.yaml b/sigs.yaml
index c288ac133..6ef2035a8 100644
--- a/sigs.yaml
+++ b/sigs.yaml
@@ -1585,8 +1585,7 @@ sigs:
scalability and performance improvements (not falling into the charter of
other individual SIGs) by driving large architectural changes and finding
bottlenecks, as well as provide guidance and consultations about any
- scalability and performance related aspects of Kubernetes.
-
+ scalability and performance related aspects of Kubernetes.
We are actively working on finding and removing various scalability
bottlenecks which should lead us towards pushing system's scalability
higher. This may include going beyond 5k nodes in the future - although