Users reported kubelet flags not being passed; moved more of the flag
handling into code.
Also found & fixed the likely root cause: we had two copies of the
cluster spec and were not always precise about which one we wanted to
use.
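Purely for illustration (hypothetical types and names, not the actual kops code), the class of bug looks roughly like this:

```go
// Illustration only: two copies of the spec exist, the kubelet flags are set
// on one copy, but the stale copy is what gets used downstream.
package main

import "fmt"

// ClusterSpec is a stand-in for the real spec type.
type ClusterSpec struct {
	KubeletFlags []string
}

func main() {
	original := &ClusterSpec{}
	working := *original // a second copy of the spec, taken early

	// The kubelet flags are set on the original...
	original.KubeletFlags = append(original.KubeletFlags, "--babysit-daemons=true")

	// ...but the earlier copy is what ends up being rendered, so the flags are lost.
	fmt.Println("rendered kubelet flags:", working.KubeletFlags) // prints []
}
```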
So the flow is that we recommend (or strongly recommend) a new kops
version when one is required for a new k8s version, and then the new
kops version will recommend (or strongly recommend) a new k8s version.
We don't have a notion of multiple recommended k8s versions per kops
version; that is what channels are for.
Users are always free to disregard updates, even "required" ones, by
setting a flag.
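A minimal sketch of the kind of check this implies; the flag name, the versions, and the use of the blang/semver library are illustrative assumptions, not the actual kops implementation:

```go
// Hypothetical version-skew check: a required vs recommended kops version for
// the target k8s version, with a flag letting users proceed anyway.
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/blang/semver"
)

func main() {
	ignoreSkew := flag.Bool("ignore-version-skew", false,
		"proceed even when a newer kops version is required")
	flag.Parse()

	running := semver.MustParse("1.4.1")     // version of this kops binary
	recommended := semver.MustParse("1.4.4") // recommended for the target k8s version
	required := semver.MustParse("1.4.0")    // minimum required for the target k8s version

	switch {
	case running.LT(required) && !*ignoreSkew:
		fmt.Fprintf(os.Stderr, "this k8s version requires kops >= %s (running %s)\n", required, running)
		os.Exit(1)
	case running.LT(recommended):
		fmt.Printf("kops >= %s is recommended for this k8s version (running %s)\n", recommended, running)
	}
}
```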
This helps us treat protokube as being paired with nodeup, and is a
step towards registry-less (and isolated) deployments, along with
moving away from our deprecated gcr.io usage.
* Zones are now subnets
* Utility subnet is no longer part of Zone
* Bastion InstanceGroup type added instead
* Etcd clusters defined in terms of InstanceGroups, not zones
* AdminAccess split into SSHAccess & APIAccess
* Dropped unused Multizone flag
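A rough Go sketch of the reshaped spec described above; only the names mentioned in the bullets are taken from the change, the remaining field and type names are assumptions:

```go
// Sketch of the reshaped cluster spec (illustrative, not the exact API types).
package kops

type ClusterSpec struct {
	// Zones are now subnets.
	Subnets []ClusterSubnetSpec `json:"subnets,omitempty"`

	// AdminAccess was split into SSHAccess and APIAccess.
	SSHAccess []string `json:"sshAccess,omitempty"`
	APIAccess []string `json:"apiAccess,omitempty"`

	// Etcd clusters are defined in terms of InstanceGroups, not zones.
	EtcdClusters []*EtcdClusterSpec `json:"etcdClusters,omitempty"`
}

type ClusterSubnetSpec struct {
	Name string `json:"name,omitempty"`
	Zone string `json:"zone,omitempty"`
	CIDR string `json:"cidr,omitempty"`
}

type EtcdClusterSpec struct {
	Name    string            `json:"name,omitempty"`
	Members []*EtcdMemberSpec `json:"members,omitempty"`
}

type EtcdMemberSpec struct {
	Name          string `json:"name,omitempty"`
	InstanceGroup string `json:"instanceGroup,omitempty"`
}

// The InstanceGroup role gains a Bastion value alongside Master and Node.
type InstanceGroupRole string

const (
	InstanceGroupRoleMaster  InstanceGroupRole = "Master"
	InstanceGroupRoleNode    InstanceGroupRole = "Node"
	InstanceGroupRoleBastion InstanceGroupRole = "Bastion"
)
```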
* kopsrepo/master:
gcs-upload: Use a no-clobber copy instead
gcs-upload: Fix cache-control on other files as well
changes from code review
doc updates
unit tests with fakes
it is working in alpha
working on the start of validate
Starting work on node lookup and validation
starting porting node code
Fix retries for AutoScalingGroup pending delete
Apply gofmt to pkg directory
Avoid tests hitting kubernetes stable.txt HTTP file
Fix printing of max size on instance group
Prevent kubelet from starting until after volume mounts
Fix Cluster parsing error message
bumping stable channel to k8s 1.4.6
support more zones (cn-north-1a/b) for cloud provider guessing
This:
- reworks how retries are handled in fi/executor.go to a time-based scheme
- changes the single-task limit to 10m (from about 30s of no-progress)
- eliminates the inner IAM propagation retry for LaunchConfigurations,
because the task itself will just be redriven for a while. This also
eliminates any long-pole delay caused by this error (since task Run()
should be 'fast').
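A minimal sketch of the time-based scheme, with illustrative names rather than the actual fi/executor.go code:

```go
// Illustrative time-based retry: re-run a task until it succeeds or a
// per-task deadline elapses, instead of giving up after a fixed number of
// no-progress attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

// runWithDeadline is a hypothetical helper; the real executor also
// interleaves other tasks between attempts.
func runWithDeadline(task func() error, deadline, interval time.Duration) error {
	start := time.Now()
	for {
		err := task()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("task did not succeed within %v: %w", deadline, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	// Example task that fails a few times (e.g. waiting for IAM to propagate)
	// before succeeding; the executor just keeps redriving it.
	task := func() error {
		attempts++
		if attempts < 3 {
			return errors.New("IAM instance profile not yet propagated")
		}
		return nil
	}

	if err := runWithDeadline(task, 10*time.Minute, 5*time.Second); err != nil {
		fmt.Println("failed:", err)
		return
	}
	fmt.Printf("succeeded after %d attempts\n", attempts)
}
```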
This isn't ideal, because it isn't versioned, but it contains an
important bugfix: otherwise pods can be allocated a .255 IP, which is
reserved for broadcast.
Issue #724
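For context, the .255 address of a /24 is the subnet broadcast address; a small illustrative check in Go (not the actual fix):

```go
// Illustration: the last IP of an IPv4 CIDR (e.g. x.y.z.255 in a /24) is the
// broadcast address and must not be handed out to a pod.
package main

import (
	"fmt"
	"net"
)

// broadcastAddr returns the last (broadcast) address of an IPv4 network.
func broadcastAddr(n *net.IPNet) net.IP {
	ip := n.IP.To4()
	out := make(net.IP, len(ip))
	for i := range ip {
		out[i] = ip[i] | ^n.Mask[i]
	}
	return out
}

func main() {
	_, podCIDR, _ := net.ParseCIDR("10.244.1.0/24")
	candidate := net.ParseIP("10.244.1.255")

	if candidate.Equal(broadcastAddr(podCIDR)) {
		fmt.Println(candidate, "is the broadcast address; skip it")
	}
}
```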
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
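In an importing file, the rewrite looks like this (the `api.Cluster` reference is just for illustration):

```go
package main

import (
	// Old paths, pre-move:
	//   "k8s.io/kops/upup/pkg/api"
	//   "k8s.io/kops/upup/pkg/api/registry"
	api "k8s.io/kops/pkg/apis/kops"
	_ "k8s.io/kops/pkg/apis/kops/registry" // blank import, just to show the new path
)

func main() {
	_ = &api.Cluster{}
}
```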
A managed file is templated kops-side, but then stored in the S3
bucket (aka the state store).
This will be used to pass the channel containing the core addons.
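A hypothetical sketch of the flow, with illustrative names, paths and content (not the actual kops templates):

```go
// Render a template kops-side, then store the result in the state store
// (here we just print it instead of uploading to S3).
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	// The template is evaluated kops-side...
	tmpl := template.Must(template.New("channel").Parse(
		"kind: Addons\nspec:\n  addons:\n  - manifest: {{ .Manifest }}\n"))

	var rendered bytes.Buffer
	err := tmpl.Execute(&rendered, struct{ Manifest string }{Manifest: "kube-dns/v1.4.0.yaml"})
	if err != nil {
		panic(err)
	}

	// ...and the rendered content is what would be written under the state
	// store, e.g. s3://<state-store>/<cluster>/addons/bootstrap-channel.yaml
	fmt.Print(rendered.String())
}
```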
Beginnings of a mock for the AWSCloud, so that hopefully we aren't
calling out to AWS at all in the tests. We will likely start mocking
the actual EC2 APIs in future, but this seems a good starting point.
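A hypothetical sketch of the approach (interface and method names are illustrative, not the real kops types):

```go
// Depend on a narrow interface rather than the concrete AWS clients, and
// substitute a fake in tests so nothing calls out to AWS.
package cloud

import "fmt"

// VPCInfo is a minimal stand-in for what a task might need to know about a VPC.
type VPCInfo struct {
	ID   string
	CIDR string
}

// Cloud is the narrow interface tasks talk to.
type Cloud interface {
	FindVPC(id string) (*VPCInfo, error)
}

// MockCloud satisfies Cloud without making any AWS API calls.
type MockCloud struct {
	VPCs map[string]*VPCInfo
}

func (m *MockCloud) FindVPC(id string) (*VPCInfo, error) {
	vpc, ok := m.VPCs[id]
	if !ok {
		return nil, fmt.Errorf("vpc %q not found", id)
	}
	return vpc, nil
}
```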
Fix #425
This means it only needs to be specified during `kops create`. We
remove the option from `kops update` for consistency.
This will shortly be manageable using the secrets functionality.
Fix #221
The master is now registered as a Node. It is marked as Unschedulable,
so normal pods will not run on it. But DaemonSets will, and it is
surprising when they don't work unless hostNetwork=true.
The default is now what seems to be expected:
* we allocate the master a real CIDR on the pod network
* kube-proxy runs on the master, so it can talk to pods
* we run kubelet on the master with enable-debugging-handlers, so
`kubectl logs` etc. works
To get the old behaviour, edit the cluster spec and set
`isolateMasters: true`
We separate out the `create cluster` operation from the `update cluster`
operation. Now create cluster only creates the spec (unless you pass
--yes), and is only for new clusters.
`update cluster` works on new or existing clusters, and should be called
to apply changes.
`update` is not the best name, because it means something different in
kubectl, but I think it's a good start.