Change the default networking provider to Cilium

John Gardiner Myers 2022-11-10 08:23:59 -08:00
parent eb675ce17f
commit 7a4ac14d8d
4 changed files with 12 additions and 15 deletions

@@ -97,7 +97,7 @@ kops create cluster [CLUSTER] [flags]
       --master-zones strings            Zones in which to run masters (must be an odd number)
       --network-cidr string             Network CIDR to use
       --network-id string               Shared Network or VPC to use
-      --networking string               Networking mode. kubenet, external, weave, flannel-vxlan (or flannel), flannel-udp, calico, canal, kube-router, amazonvpc, cilium, cilium-etcd, cni. (default "kubenet")
+      --networking string               Networking mode. kubenet, external, weave, flannel-vxlan (or flannel), flannel-udp, calico, canal, kube-router, amazonvpc, cilium, cilium-etcd, cni. (default "cilium")
       --node-count int32                Total number of worker nodes. Defaults to one node per zone
       --node-image string               Machine image for worker nodes. Takes precedence over --image
       --node-security-groups strings    Additional precreated security groups to add to worker nodes.
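For illustration, a minimal invocation under the new default might look like the following; the cluster name, zone, and state store are placeholders, and --networking only needs to be passed when a provider other than Cilium is wanted.

# Assumes KOPS_STATE_STORE points at your state store; the name and zone below are placeholders.
kops create cluster example.k8s.local --zones us-east-1a
# Omitting --networking now selects cilium; pass it explicitly to keep the previous default.
kops create cluster example.k8s.local --zones us-east-1a --networking kubenet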

@@ -12,6 +12,8 @@ listed below, are available which implement and manage this abstraction.
 The following table provides the support status for various networking providers with regards to kOps version:
+As of kOps 1.26, the default network provider is Cilium. Prior to that, the default was Kubenet.
 | Network provider | Experimental | Stable | Deprecated | Removed         |
 |------------------|-------------:|-------:|-----------:|----------------:|
 | AWS VPC          | 1.9          | 1.21   | -          | -               |
@@ -27,27 +29,20 @@ The following table provides the support status for various networking providers
 | Romana           | 1.8          | -      | 1.18       | 1.19            |
 | Weave            | 1.5          | -      | 1.23       | Kubernetes 1.23 |
-### Which networking provider should you use?
-kOps maintainers have no bias over the CNI provider that you run, we only aim to be flexible and provide a working setup of the CNIs.
-We do recommended something other than `kubenet` for production clusters due to `kubenet`'s limitations, as explained [below](#kubenet-default).
 ### Specifying network option for cluster creation
 You can specify the network provider via the `--networking` command line switch. However, this will only give a default configuration of the provider. Typically you would often modify the `spec.networking` section of the cluster spec to configure the provider further.
-### Kubenet (default)
+### Kubenet
-Kubernetes Operations (kOps) uses `kubenet` networking by default. This sets up networking on AWS using VPC
-networking, where the master allocates a /24 CIDR to each Node, drawing from the Node network.
-Using `kubenet` mode routes for each node are then configured in the AWS VPC routing tables.
+The "kubenet" option has the control plane allocate a /24 CIDR to each Node, drawing from the Node network.
+Routes for each node are then configured in the cloud provider network's routing tables.
-One important limitation when using `kubenet` networking is that an AWS routing table cannot have more than
+One important limitation when using `kubenet` networking on AWS is that an AWS routing table cannot have more than
 50 entries, which sets a limit of 50 nodes per cluster. AWS support will sometimes raise the limit to 100,
 but their documentation notes that routing tables over 50 may take a performance hit.
-Because k8s modifies the AWS routing table, this means that realistically Kubernetes needs to own the
+Because kubernetes modifies the AWS routing table, this means that, realistically, Kubernetes needs to own the
 routing table, and thus it requires its own subnet. It is theoretically possible to share a routing table
 with other infrastructure (but not a second cluster!), but this is not really recommended. Certain
 `cni` networking solutions claim to address these problems.
@@ -56,7 +51,7 @@ Users running `--topology private` will not be able to choose `kubenet` networki
 requires a single routing table. These advanced users are usually running in multiple availability zones
 and as NAT gateways are single AZ, multiple route tables are needed to use each NAT gateway.
-Kubenet is the default networking option because of its simplicity, however, it should not be used in
+Kubenet is simple, however, it should not be used in
 production clusters which expect a gradual increase in traffic and/or workload over time. Such clusters
 will eventually "out-grow" the `kubenet` networking provider.
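As a sketch of the spec.networking note above (the cluster name is a placeholder), further provider configuration is typically done by editing the cluster spec after creation and then applying the change:

# Open the cluster spec in an editor; provider-specific settings live under spec.networking.
kops edit cluster example.k8s.local
# Apply the updated spec to the cluster's desired state.
kops update cluster example.k8s.local --yes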

@@ -6,6 +6,8 @@ This is a document to gather the release notes prior to the release.
 # Significant changes
+* The default networking provider for new clusters is now Cilium.
 ## AWS only
 * Bastions are now fronted by a Network Load Balancer.

@@ -164,7 +164,7 @@ func (o *NewClusterOptions) InitDefaults() {
 	o.Channel = api.DefaultChannel
 	o.Authorization = AuthorizationFlagRBAC
 	o.AdminAccess = []string{"0.0.0.0/0", "::/0"}
-	o.Networking = "kubenet"
+	o.Networking = "cilium"
 	o.Topology = api.TopologyPublic
 	o.InstanceManager = "cloudgroups"
 }
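To preview what the changed default produces without creating any cloud resources, one option (again with a placeholder name and zone) is to render the generated cluster spec and inspect its networking section:

# Dry-run rendering of the cluster spec; the networking section should now reflect cilium.
kops create cluster example.k8s.local --zones us-east-1a --dry-run -o yaml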