diff --git a/autoscaling.md b/autoscaling.md
index c1d1578bf..a28387439 100644
--- a/autoscaling.md
+++ b/autoscaling.md
@@ -252,3 +252,6 @@ to prevent this, deployment orchestration should notify the auto-scaler that a d
 temporarily disable negative decrement thresholds until the deployment process
 is completed. It is more important for an auto-scaler to be able to grow
 capacity during a deployment than to shrink the number of instances precisely.
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/autoscaling.md?pixel)]()
diff --git a/federation.md b/federation.md
index e261833e5..a2d30017e 100644
--- a/federation.md
+++ b/federation.md
@@ -429,3 +429,6 @@ does the zookeeper config look like for N=3 across 3 AZs -- and how does each
 replica find the other replicas and how do clients find their primary
 zookeeper replica? And now how do I do a shared, highly available redis
 database?
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/federation.md?pixel)]()
diff --git a/high-availability.md b/high-availability.md
index 647c95621..909903a2c 100644
--- a/high-availability.md
+++ b/high-availability.md
@@ -44,3 +44,6 @@ There is a short window after a new master acquires the lease, during which data
 
 ## Open Questions:
 * Is there a desire to keep track of all nodes for a specific component type?
+
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/high-availability.md?pixel)]()