* kopsrepo/master:
gcs-upload: Use a no-clobber copy instead
gcs-upload: Fix cache-control on other files as well
changes from code review
doc updates
unit tests with fakes
it is working in alpha
working on the start of validate
Starting work on node lookup and validation
starting porting node code
Fix retries for AutoScalingGroup pending delete
Apply gofmt to pkg directory
Avoid tests hitting kubernetes stable.txt HTTP file
Fix printing of max size on instance group
Disable kubelet from starting until after volume mounts
Fix Cluster parsing error message
bumping stable channel to k8s 1.4.6
support more zones (cn-north-1a/b) for cloud provider guess
Without this fix, the last generated private subnet falls outside the main CIDR range provided via the --network-cidr= option, which causes an error.
For example, before the change, with --network-cidr=10.0.0.0/22, the subnets generated by the code were:
```
I1117 07:34:24.720380 47964 cluster.go:503] Assigned CIDR 10.0.1.128/25 to zone us-east-1c
I1117 07:34:24.720397 47964 cluster.go:514] Assigned Private CIDR 10.0.3.0/25 to zone us-east-1c
I1117 07:34:24.720404 47964 cluster.go:503] Assigned CIDR 10.0.2.0/25 to zone us-east-1d
I1117 07:34:24.720409 47964 cluster.go:514] Assigned Private CIDR 10.0.3.128/25 to zone us-east-1d
I1117 07:34:24.720415 47964 cluster.go:503] Assigned CIDR 10.0.2.128/25 to zone us-east-1e
I1117 07:34:24.720420 47964 cluster.go:514] Assigned Private CIDR 10.0.4.0/25 to zone us-east-1e
```
The last CIDR, 10.0.4.0/25, lies beyond the boundary of 10.0.0.0/22 (which only covers 10.0.0.0-10.0.3.255), so it causes the error:
```
W1117 07:39:29.240474 48009 executor.go:100] error running task "subnet/private-us-east-1e.kubpriv.pink-ptdevcloud.com": error creating subnet: InvalidSubnet.Range: The CIDR '10.0.4.0/25' is invalid.
status code: 400, request id: b195c64b-0a35-413c-b6ec-d7ee40d49adb
```
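As a quick illustration of why AWS rejects that subnet, here is a minimal Go sketch (not kops code) that checks the offending CIDR from the logs against the network CIDR using the standard net package; the lastIP helper exists only for this example:
```go
package main

import (
	"fmt"
	"net"
)

// lastIP returns the last address in an IPv4 subnet (helper for this example only).
func lastIP(n *net.IPNet) net.IP {
	ip := n.IP.To4()
	out := make(net.IP, len(ip))
	for i := range ip {
		out[i] = ip[i] | ^n.Mask[i]
	}
	return out
}

func main() {
	// The --network-cidr from the example above.
	_, network, _ := net.ParseCIDR("10.0.0.0/22")

	// The last private subnet the old code produced.
	first, subnet, _ := net.ParseCIDR("10.0.4.0/25")

	// 10.0.0.0/22 covers 10.0.0.0-10.0.3.255, so neither end of 10.0.4.0/25 is inside it.
	fmt.Println(network.Contains(first), network.Contains(lastIP(subnet))) // false false
}
```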
With the code fix, the subnets are generated correctly:
```
I1118 07:22:31.466899 55710 cluster.go:503] Assigned CIDR 10.0.1.0/25 to zone us-east-1c
I1118 07:22:31.466908 55710 cluster.go:514] Assigned Private CIDR 10.0.2.128/25 to zone us-east-1c
I1118 07:22:31.466913 55710 cluster.go:503] Assigned CIDR 10.0.1.128/25 to zone us-east-1d
I1118 07:22:31.466917 55710 cluster.go:514] Assigned Private CIDR 10.0.3.0/25 to zone us-east-1d
I1118 07:22:31.466922 55710 cluster.go:503] Assigned CIDR 10.0.2.0/25 to zone us-east-1e
I1118 07:22:31.466925 55710 cluster.go:514] Assigned Private CIDR 10.0.3.128/25 to zone us-east-1e
```
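For reference, the corrected behaviour can be sketched as carving consecutive /25 blocks out of the network CIDR and failing fast if any of them would fall outside it. This is a simplified illustration under my own assumptions (allocateSubnets and addOffset are hypothetical helpers, not the actual code in cluster.go), but it reproduces the CIDR values shown above:
```go
package main

import (
	"fmt"
	"net"
)

// addOffset returns ip + n for an IPv4 address (helper for this sketch only).
func addOffset(ip net.IP, n int) net.IP {
	v := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	v += uint32(n)
	return net.IP{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)}
}

// allocateSubnets carves consecutive /25 subnets out of networkCIDR, one public
// and one private block per zone, and errors out if any subnet falls outside
// the network CIDR. Simplified sketch, not the kops implementation.
func allocateSubnets(networkCIDR string, zones []string) error {
	_, network, err := net.ParseCIDR(networkCIDR)
	if err != nil {
		return err
	}

	// Start allocating at the second /25, matching the example above
	// (the first /25 is left unused).
	next := addOffset(network.IP.To4(), 2*128)

	for _, kind := range []string{"CIDR", "Private CIDR"} {
		for _, zone := range zones {
			subnet := &net.IPNet{IP: next, Mask: net.CIDRMask(25, 32)}
			if !network.Contains(subnet.IP) || !network.Contains(addOffset(next, 127)) {
				return fmt.Errorf("subnet %v falls outside network CIDR %v", subnet, network)
			}
			fmt.Printf("Assigned %s %v to zone %s\n", kind, subnet, zone)
			next = addOffset(next, 128)
		}
	}
	return nil
}

func main() {
	zones := []string{"us-east-1c", "us-east-1d", "us-east-1e"}
	if err := allocateSubnets("10.0.0.0/22", zones); err != nil {
		fmt.Println("error:", err)
	}
}
```
Run against the same 10.0.0.0/22 and three zones, this prints the six subnets shown above and stays entirely within the network CIDR.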
I believe S3 eventual consistency doesn't really guarantee much here,
so a delete by one kops instance and a list by another could easily
generate this.
Fixes #917