NatGateway creation in AWS is a slow operation: it can take up to 10 minutes for a NatGateway to go from the Pending to the Available state. We have to use the WaitUntilNatGatewayAvailable function to make sure the NatGateway is fully up before trying to use it.
Without this change, all my attempts to create or update (add nodes to) a Kubernetes cluster with private topology in the us-east-1 region failed with this error:
```
W1117 12:14:08.719010 51863 executor.go:100] error running task "route/private-us-east-1c.kubpriv.pink-ptdevcloud.com": error creating Route: InvalidNatGatewayID.NotFound: The natGateway ID 'nat-08be6e70ddffd44d4' does not exist
status code: 400, request id: 5adf5c0a-c12f-4d6b-8dfd-186c51efff9f
```
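For reference, here is a minimal sketch of the waiter call using the AWS SDK for Go; the region is illustrative, and kops invokes the waiter from its task code rather than a standalone program like this:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	// ID returned by CreateNatGateway (value taken from the log above).
	natGatewayID := "nat-08be6e70ddffd44d4"

	// WaitUntilNatGatewayAvailable polls DescribeNatGateways until the
	// gateway reaches the Available state (or the waiter times out), so
	// dependent resources such as routes are only created once the ID
	// actually resolves.
	err := svc.WaitUntilNatGatewayAvailable(&ec2.DescribeNatGatewaysInput{
		NatGatewayIds: []*string{aws.String(natGatewayID)},
	})
	if err != nil {
		log.Fatalf("NAT gateway %s did not become Available: %v", natGatewayID, err)
	}
	fmt.Printf("NAT gateway %s is Available\n", natGatewayID)
}
```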
Without this fix, the last generated private subnet falls outside the main CIDR range provided via the --network-cidr= option, which causes an error.
For example, before the change, with --network-cidr=10.0.0.0/22 the code generated this list of subnets:
```
I1117 07:34:24.720380 47964 cluster.go:503] Assigned CIDR 10.0.1.128/25 to zone us-east-1c
I1117 07:34:24.720397 47964 cluster.go:514] Assigned Private CIDR 10.0.3.0/25 to zone us-east-1c
I1117 07:34:24.720404 47964 cluster.go:503] Assigned CIDR 10.0.2.0/25 to zone us-east-1d
I1117 07:34:24.720409 47964 cluster.go:514] Assigned Private CIDR 10.0.3.128/25 to zone us-east-1d
I1117 07:34:24.720415 47964 cluster.go:503] Assigned CIDR 10.0.2.128/25 to zone us-east-1e
I1117 07:34:24.720420 47964 cluster.go:514] Assigned Private CIDR 10.0.4.0/25 to zone us-east-1e
```
The last CIDR, 10.0.4.0/25, is beyond the 10.0.0.0/22 boundary (that range only covers 10.0.0.0–10.0.3.255), which causes the error:
```
W1117 07:39:29.240474 48009 executor.go:100] error running task "subnet/private-us-east-1e.kubpriv.pink-ptdevcloud.com": error creating subnet: InvalidSubnet.Range: The CIDR '10.0.4.0/25' is invalid.
status code: 400, request id: b195c64b-0a35-413c-b6ec-d7ee40d49adb
```
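A /22 holds 1024 addresses, i.e. exactly eight aligned /25 blocks, so any ninth /25 necessarily falls outside the VPC range. Below is a minimal, self-contained sketch of that boundary check; it is not the actual kops allocation code, and nth25 is a hypothetical helper:

```go
package main

import (
	"fmt"
	"net"
)

// nth25 returns the n-th aligned /25 inside parent (counting from 0), or
// an error if it falls outside the parent range. Checking only the base
// address suffices here because every /25 is aligned within the /22.
func nth25(parent *net.IPNet, n int) (*net.IPNet, error) {
	ip := parent.IP.To4()
	base := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	start := base + uint32(n)*128 // a /25 holds 128 addresses
	subnet := &net.IPNet{
		IP:   net.IPv4(byte(start>>24), byte(start>>16), byte(start>>8), byte(start)),
		Mask: net.CIDRMask(25, 32),
	}
	if !parent.Contains(subnet.IP) {
		return nil, fmt.Errorf("subnet %v is outside %v", subnet, parent)
	}
	return subnet, nil
}

func main() {
	_, parent, _ := net.ParseCIDR("10.0.0.0/22")
	for i := 0; i < 9; i++ {
		s, err := nth25(parent, i)
		if err != nil {
			// The ninth /25 is 10.0.4.0/25 -- the same out-of-range
			// subnet reported in the error above.
			fmt.Println(err)
			continue
		}
		fmt.Println(s)
	}
}
```

Comparing the two logs below, the old code started allocation one /25 too far in (10.0.1.128 instead of 10.0.1.0), which pushed the last private subnet to 10.0.4.0/25.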
With the fix, the subnets are generated correctly:
```
I1118 07:22:31.466899 55710 cluster.go:503] Assigned CIDR 10.0.1.0/25 to zone us-east-1c
I1118 07:22:31.466908 55710 cluster.go:514] Assigned Private CIDR 10.0.2.128/25 to zone us-east-1c
I1118 07:22:31.466913 55710 cluster.go:503] Assigned CIDR 10.0.1.128/25 to zone us-east-1d
I1118 07:22:31.466917 55710 cluster.go:514] Assigned Private CIDR 10.0.3.0/25 to zone us-east-1d
I1118 07:22:31.466922 55710 cluster.go:503] Assigned CIDR 10.0.2.0/25 to zone us-east-1e
I1118 07:22:31.466925 55710 cluster.go:514] Assigned Private CIDR 10.0.3.128/25 to zone us-east-1e
```
I believe S3's eventual consistency doesn't really guarantee much here, so a delete by one kops instance followed by a list from another could easily produce this.
Fixes #917