From 70b554a5cbef78041e81ec8489eaf6beebd5f951 Mon Sep 17 00:00:00 2001
From: AdamDang
Date: Thu, 22 Mar 2018 17:27:22 +0800
Subject: [PATCH] Update networking.md

"Kubernetes" appears 12 times in the main text of this doc (excluding links): 9 capitalized and 3 lowercase. It's better to be consistent.
---
 docs/networking.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/networking.md b/docs/networking.md
index 4ab5f66253..1d3b8415a1 100644
--- a/docs/networking.md
+++ b/docs/networking.md
@@ -2,9 +2,9 @@ Kubernetes Operations (kops) currently supports 4 networking modes:
 
-* `kubenet` kubernetes native networking via a CNI plugin. This is the default.
+* `kubenet` Kubernetes native networking via a CNI plugin. This is the default.
 * `cni` Container Network Interface(CNI) style networking, often installed via a Daemonset.
-* `classic` kubernetes native networking, done in-process.
+* `classic` Kubernetes native networking, done in-process.
 * `external` networking is done via a Daemonset. This is used in some custom implementations.
 
 ### kops Default Networking
@@ -17,7 +17,7 @@ One important limitation when using `kubenet` networking is that an AWS routing
 50 entries, which sets a limit of 50 nodes per cluster. AWS support will sometimes raise the limit
 to 100, but their documentation notes that routing tables over 50 may take a performance hit.
 
-Because k8s modifies the AWS routing table, this means that realistically kubernetes needs to own the
+Because k8s modifies the AWS routing table, this means that realistically Kubernetes needs to own the
 routing table, and thus it requires its own subnet. It is theoretically possible to share a routing table
 with other infrastructure (but not a second cluster!), but this is not really recommended. Certain
 `cni` networking solutions claim to address these problems.