From 6a724a42ed5c31e5b3b0d95f9227ebd00293f348 Mon Sep 17 00:00:00 2001
From: Wen Gao
Date: Thu, 3 May 2018 14:52:42 +0800
Subject: [PATCH] Word error

---
 sig-scalability/blogs/scalability-regressions-case-studies.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sig-scalability/blogs/scalability-regressions-case-studies.md b/sig-scalability/blogs/scalability-regressions-case-studies.md
index 686a2bf84..31a757df5 100644
--- a/sig-scalability/blogs/scalability-regressions-case-studies.md
+++ b/sig-scalability/blogs/scalability-regressions-case-studies.md
@@ -37,4 +37,4 @@ This document is a compilation of some interesting scalability/performance regre
 - On many occasions our scalability tests caught critical/risky bugs which were missed by most other tests. If not caught, those could've seriously jeopardized production-readiness of k8s.
 - SIG-Scalability has caught/fixed several important issues that span across various components, features and SIGs.
 - Around 60% of times (possibly even more), we catch scalability regressions with just our medium-scale (and fast) tests, i.e gce-100 and kubemark-500. Making them run as presubmits should act as a strong shield against regressions.
-- Majority of the remaining ones are caught by our large-scale (and slow) tests, i.e kubemark-5k and gce-2k. Making them as post-submit blokcers (given they're "usually" quite healthy) should act as a second layer of protection against regressions.
+- Majority of the remaining ones are caught by our large-scale (and slow) tests, i.e kubemark-5k and gce-2k. Making them as post-submit blockers (given they're "usually" quite healthy) should act as a second layer of protection against regressions.