This change tightens the master's state store bucket permissions so that it can only access the contents of the top-level folder named after the cluster.
Fixes #365
The master is now registered as a Node. It is marked Unschedulable,
so normal pods will not run on it, but DaemonSets will, and it is
surprising that they don't work unless hostNetwork=true is set.
The default is now what seems to be expected:
* we allocate the master a real CIDR on the pod network
* kube-proxy runs on the master, so it can talk to pods
* we run kubelet on the master with enable-debugging-handlers, so that
`kubectl logs` etc. work
To get the old behaviour, edit the cluster spec and set
`isolateMasters: true`
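For example, a minimal cluster spec fragment (the surrounding structure is assumed; only the `isolateMasters` field comes from this change):

```yaml
# Restore the old isolated-master behaviour (sketch; other fields omitted)
spec:
  isolateMasters: true
```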
By reusing the subnet & security groups, we are able to skip the ELB
steps of the upgrade procedure. The new cluster also has the same
identity as the old cluster for security groups, so we don't need to
reconfigure ELB etc.
Fixes #175
Fixes #174
We separate out the `create cluster` operation from the `update cluster`
operation. Now `create cluster` only creates the spec (unless you pass
`--yes`), and is only for new clusters.
`update cluster` works on new or existing clusters, and should be called
to apply changes.
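A sketch of the resulting workflow (the binary name, the `--name` flag, and the placeholder cluster name are assumptions; only the subcommands and `--yes` come from this change):

```sh
# Create the spec for a new cluster; without --yes nothing is built yet
kops create cluster --name=mycluster.example.com

# Apply changes to a new or existing cluster
kops update cluster --name=mycluster.example.com --yes
```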
`update` is not the best name, because it means something different in
kubectl, but I think it's a good start.
We now output a ClusterName property into the launch configuration, even
though we don't technically need it, but it lets us detect the cluster
more easily, and it generally seems like a good idea.
Also rename to 'autoscaling-config' and clean up the cluster name
detection logic.
Fix #96
It's not currently used, and we hadn't updated it to match the better
pattern.
`k8s.io/role=master` allows an instance to be in only one role;
`k8s.io/role/master=1` allows an instance to carry multiple roles.
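For illustration (the keys come from this change; pairing the master and node roles on one instance is a hypothetical example):

```
# old scheme: the role is the value, so only one role fits per instance
k8s.io/role=master

# new scheme: one key per role, so an instance can carry several
k8s.io/role/master=1
k8s.io/role/node=1
```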
This allows for a larger EBS root volume (and we now default to 20GB,
just like kube-up did).
We remove the BlockDeviceMappings support because it wasn't used and
made things a lot more complicated. We always map the ephemeral
devices.
Issue #24
This is a weird edge case, because it can't be determined in advance.
We carve out a special well-known name, `@aws`, which nodeup/protokube
will expand to the local-hostname from the AWS metadata service when it
is found in the HostnameOverride value.
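A rough sketch of what that expansion amounts to (function name and error handling are illustrative, not the actual nodeup/protokube code; the metadata path is the standard EC2 one):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// expandHostnameOverride replaces the well-known "@aws" value with the
// local-hostname reported by the EC2 metadata service, and leaves any
// other value untouched.
func expandHostnameOverride(hostnameOverride string) (string, error) {
	if hostnameOverride != "@aws" {
		return hostnameOverride, nil
	}
	resp, err := http.Get("http://169.254.169.254/latest/meta-data/local-hostname")
	if err != nil {
		return "", fmt.Errorf("error querying AWS metadata service: %v", err)
	}
	defer resp.Body.Close()
	hostname, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", fmt.Errorf("error reading AWS metadata response: %v", err)
	}
	return string(hostname), nil
}

func main() {
	hostname, err := expandHostnameOverride("@aws")
	if err != nil {
		fmt.Println("expansion failed:", err)
		return
	}
	fmt.Println("kubelet hostname-override:", hostname)
}
```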
Ideally we wouldn't do this at all now that we have DNS integration, but
we first want to get into the tested & working configuration!
Fixes #19
This is needed so that we can have encrypted storage and complex keys
(e.g. multiple CA certs). Multiple CA certs are needed for an in-place
upgrade from kube-up v1.
A lot of work had to happen here:
* Better reuse of config
* Ability to mark VPC & InternetGateway as shared
* Find models relative to the executable, to run from a dir-per-cluster
Fixes #95
We now allow `--zones` and `--master-zones` to be specified separately, but we
validate for common errors (using a region where you meant a zone,
duplicating a zone, spanning regions, entering an invalid AZ, etc.).
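A sketch of how the two flags combine (the binary name, `--name` flag, cluster name, and specific zones are illustrative; only `--zones` and `--master-zones` come from this change):

```sh
# Nodes spread across two zones, master kept in a single zone
kops create cluster \
  --zones=us-east-1a,us-east-1b \
  --master-zones=us-east-1a \
  --name=mycluster.example.com
```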
* GCE support only
* Key and secret generation
* "Direct mode" makes API calls
* "Dry run mode" previews the changes
* Terraform output (though key generation is not working for the master IP)
* cloud-init output (though the Debian image does not ship with cloud-init)