kops defaults to a network named "default" and has issues with that network's mode.
Apparently there is a "default" network within the projects that boskos issues,
causing `kops create cluster` to fail some cloudup validation.
By specifying a cluster-specific network instead, kops will create that new network with non-deprecated settings.
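A minimal sketch of the idea, assuming the network name is derived from the cluster name; whether kops takes the GCE network via `--vpc` is an assumption here, not something this change confirms:

```bash
# Hypothetical: derive a per-cluster network name and let kops create it.
NETWORK_NAME="${CLUSTER_NAME//./-}"
kops create cluster \
  --name="${CLUSTER_NAME}" \
  --cloud=gce \
  --zones=us-central1-a \
  --vpc="${NETWORK_NAME}"
```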
Originally I had thought we were relying on SSH keys mounted from a secret,
but it turns out kubetest 1 generated the keys indirectly through gcloud.
This runs the same command as kubetest 1, creating and uploading the SSH keys.
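The equivalent effect, sketched with plain ssh-keygen and gcloud (the exact gcloud invocation kubetest 1 used isn't reproduced here; the key path, username, and metadata layout are assumptions):

```bash
# Generate the key pair gcloud expects, if it doesn't exist yet.
ssh-keygen -t rsa -f "${HOME}/.ssh/google_compute_engine" -N ""
# Upload the public key as project-level SSH metadata.
echo "prow:$(cat "${HOME}/.ssh/google_compute_engine.pub")" > /tmp/ssh-keys.txt
gcloud compute project-info add-metadata \
  --metadata-from-file=ssh-keys=/tmp/ssh-keys.txt
```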
This adds zones that have been released since this list was updated.
This also re-enables some that were disabled due to capacity issues with c4 instance types; we use c5 now, so hopefully they'll have sufficient capacity.
We can disable them if we continue to run into availability issues.
Long term, I could see us using ec2.DescribeAvailabilityZones and ec2.DescribeInstanceTypeOfferings to pick random zone(s) in a random region while guaranteeing that they offer the needed instance types.
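A rough sketch of that idea with the AWS SDK for Go; the region, instance type, and zone count are placeholders, and a real implementation would also use DescribeAvailabilityZones to pick the region and skip impaired zones:

```go
package main

import (
	"fmt"
	"math/rand"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// pickZones returns up to count random zones in the client's region that
// offer the given instance type.
func pickZones(svc *ec2.EC2, instanceType string, count int) ([]string, error) {
	offerings, err := svc.DescribeInstanceTypeOfferings(&ec2.DescribeInstanceTypeOfferingsInput{
		LocationType: aws.String("availability-zone"),
		Filters: []*ec2.Filter{
			{Name: aws.String("instance-type"), Values: []*string{aws.String(instanceType)}},
		},
	})
	if err != nil {
		return nil, err
	}
	var zones []string
	for _, o := range offerings.InstanceTypeOfferings {
		zones = append(zones, aws.StringValue(o.Location))
	}
	rand.Shuffle(len(zones), func(i, j int) { zones[i], zones[j] = zones[j], zones[i] })
	if len(zones) > count {
		zones = zones[:count]
	}
	return zones, nil
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-2")))
	zones, err := pickZones(ec2.New(sess), "c5.large", 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(zones)
}
```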
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to the control plane as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag (see the sketch after this list)
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
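A sketch of the feature-flag guard, modeled on kops's pkg/featureflag; the role constant and error message are illustrative, not copied from the actual change:

```go
package validation

import (
	"fmt"

	"k8s.io/kops/pkg/apis/kops"
	"k8s.io/kops/pkg/featureflag"
)

// APIServerNodes gates the dedicated apiserver role.
var APIServerNodes = featureflag.New("APIServerNodes", featureflag.Bool(false))

// validateRole rejects the APIServer role unless the flag is enabled.
func validateRole(role kops.InstanceGroupRole) error {
	if role == kops.InstanceGroupRoleAPIServer && !APIServerNodes.Enabled() {
		return fmt.Errorf("the APIServer role requires the APIServerNodes feature flag")
	}
	return nil
}
```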
Some of the "beforeSuite" tests are failing because the e2e.test binary isn't resolving the API DNS.
This extends the validation time and also adds a sleep to wait for any negative TTLs to expire.
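A sketch of the shape of the fix; the timeouts here are assumptions, not the values in the actual change:

```go
package e2e

import (
	"os/exec"
	"time"
)

// validateThenWaitForDNS validates the cluster with an extended window, then
// sleeps past typical negative-cache TTLs so stale NXDOMAIN answers for the
// API DNS name expire before e2e.test starts resolving it.
func validateThenWaitForDNS() error {
	cmd := exec.Command("kops", "validate", "cluster", "--wait", "15m")
	if err := cmd.Run(); err != nil {
		return err
	}
	time.Sleep(5 * time.Minute)
	return nil
}
```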
Generate and upload keys.json + discovery.json to public store (example document after this list)
Don't enable anonymous auth on publicjwks
Remove tests that won't work using FS VFS anymore
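For reference, a minimal OIDC discovery document has this shape; the issuer URL is a placeholder, and keys.json is the JWKS it points at:

```json
{
  "issuer": "https://example.com/cluster",
  "jwks_uri": "https://example.com/cluster/keys.json",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
```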
There were too many issues with downloading kops from a version marker with this setup.
We'll need to move this logic into kubetest2 itself, since it has sufficient knowledge of e.g. KOPS_BASE_URL and where the kops binary was downloaded.
This ensures that the same binary is used for kubetest2-kops commands as well as the kops commands invoked directly in the scenario script.
Periodic jobs will create a temp file that will be used to save the kops binary from the provided version marker.
Non-periodic jobs (local invocations) will use the bazel-built binary, preserving the original behavior but using this same binary for kops commands rather than relying on PATH.
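A sketch of that selection logic; the env var name, marker contents, and bazel output path are assumptions based on the description above:

```bash
if [[ -n "${KOPS_VERSION_MARKER:-}" ]]; then
  # Periodic job: resolve the marker and save the kops binary to a temp file.
  KOPS_BIN="$(mktemp -t kops.XXXXXX)"
  KOPS_BASE_URL="$(curl -fsSL "${KOPS_VERSION_MARKER}")"
  curl -fsSL -o "${KOPS_BIN}" "${KOPS_BASE_URL}/linux/amd64/kops"
  chmod +x "${KOPS_BIN}"
else
  # Local invocation: build with bazel and use that binary everywhere.
  bazel build //cmd/kops
  KOPS_BIN="bazel-bin/cmd/kops/kops"   # output path is illustrative
fi
"${KOPS_BIN}" version
```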
The API design for using existing instance profiles must have changed during its PR, and I never removed the old field from the integration test.
grep shows that this field doesn't exist anywhere else in the codebase.
This will unblock the remaining periodic e2e jobs that haven't been migrated yet.
They run a test with the kops version from "latest-ci.txt" as published by the "postsubmit-push-to-staging" postsubmit job,
and if the tests succeed, that version is published to "latest-ci-updown-green.txt", which is what all of the other periodic jobs rely on.
example job that uses this functionality: 37b80c5e3b/config/jobs/kubernetes/kops/kops-pipeline.yaml (L46-L48)
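The promotion flow, roughly; the bucket path is illustrative and run_updown_test is a hypothetical helper standing in for the e2e job:

```bash
MARKER_BASE="gs://kops-ci/bin"
# Test against the kops build that the staging marker points at.
KOPS_BASE_URL="$(gsutil cat "${MARKER_BASE}/latest-ci.txt")"
export KOPS_BASE_URL
if run_updown_test; then
  # Promote the same marker contents to the "green" marker.
  gsutil cp "${MARKER_BASE}/latest-ci.txt" "${MARKER_BASE}/latest-ci-updown-green.txt"
fi
```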
This updates the upgrade scenario script to support building kops when run locally, or using the version markers when run in a periodic Prow job.
hoping to fix the upgrade tests here: https://testgrid.k8s.io/kops-kubetest2#kops-aws-upgrade
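A sketch of the upgrade flow itself; how ${OLD_KOPS}/${NEW_KOPS} resolve is exactly what the build-vs-marker logic above decides, KOPS_STATE_STORE is assumed to be exported, and the timeouts are illustrative:

```bash
# Bring the cluster up with the old kops...
"${OLD_KOPS}" create cluster --name "${CLUSTER_NAME}" --zones "${ZONES}" --yes
"${OLD_KOPS}" validate cluster --name "${CLUSTER_NAME}" --wait 15m
# ...then apply and roll with the new kops.
"${NEW_KOPS}" update cluster --name "${CLUSTER_NAME}" --yes
"${NEW_KOPS}" rolling-update cluster --name "${CLUSTER_NAME}" --yes
"${NEW_KOPS}" validate cluster --name "${CLUSTER_NAME}" --wait 15m
```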
Previously we would incorrectly append create cluster arguments even when they had already been specified, if they used `--foo=bar` notation.
This resulted in arguments being specified multiple times, causing undesired behavior.
We now check for both `--foo bar` and `--foo=bar` when attempting to add a `--foo` argument.
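The check now looks roughly like this (the helper names are mine, not the actual function names):

```go
package e2e

import "strings"

// hasFlag reports whether args already contains flag, in either
// "--foo bar" or "--foo=bar" notation.
func hasFlag(args []string, flag string) bool {
	for _, arg := range args {
		if arg == flag || strings.HasPrefix(arg, flag+"=") {
			return true
		}
	}
	return false
}

// appendArg adds "--foo=bar" only when --foo isn't already specified.
func appendArg(args []string, flag, value string) []string {
	if hasFlag(args, flag) {
		return args
	}
	return append(args, flag+"="+value)
}
```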
The public API URL cannot be used by pods and nodes if access is restricted, so by default we need to use the internal one.
This should finally pass the OIDC e2e test
For public access, the API server must be publicly available and anonymous
auth must be enabled.
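The relevant knobs in the cluster spec, sketched; the values are illustrative, and kubernetesApiAccess/anonymousAuth are the kops spec fields I'd expect this to involve:

```yaml
spec:
  kubernetesApiAccess:
  - 0.0.0.0/0            # API server reachable publicly
  kubeAPIServer:
    anonymousAuth: true  # allow unauthenticated JWKS/discovery fetches
```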