This allows us to invoke methods that accept interfaces; arguably the
old logic was incorrect, though it happened to work in the case where
we wanted exact type matches.
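
A sketch of the distinction (hypothetical types; the point is
reflect's AssignableTo versus exact type equality):

    package main

    import (
        "fmt"
        "reflect"
    )

    // Target is a hypothetical interface that a method parameter accepts.
    type Target interface {
        Name() string
    }

    type dryRunTarget struct{}

    func (d *dryRunTarget) Name() string { return "dry-run" }

    func main() {
        paramType := reflect.TypeOf((*Target)(nil)).Elem() // the interface type
        argType := reflect.TypeOf(&dryRunTarget{})         // the concrete type

        // Exact equality only matches concrete-to-concrete...
        fmt.Println("exact match:", argType == paramType) // false

        // ...whereas assignability also accepts interface parameters,
        // which is what method invocation actually requires.
        fmt.Println("assignable:", argType.AssignableTo(paramType)) // true
    }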
Deletion processing is not really a function of the target; it is more
a function of our mode of execution (dry-run vs pre-rolling-update vs
post-rolling-update). We want to introduce that post-rolling-update
phase, so introduce the DeletionProcessingMode enum and move it from
the target to the context.
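
A sketch of the enum and its new home (the mode names and Context
fields here are illustrative; only DeletionProcessingMode comes from
this change):

    package fi

    // DeletionProcessingMode describes how deletions are processed in a run.
    type DeletionProcessingMode string

    const (
        // Ignore: don't process deletions (e.g. a plain dry-run preview).
        DeletionProcessingModeIgnore DeletionProcessingMode = "Ignore"
        // Pre-rolling-update: process only non-deferred deletions.
        DeletionProcessingModeDeleteIfNotDeferred DeletionProcessingMode = "IfNotDeferred"
        // Post-rolling-update: also process deletions that were deferred
        // until after the rolling update.
        DeletionProcessingModeDeleteIncludingDeferred DeletionProcessingMode = "IncludingDeferred"
    )

    // The mode now lives on the execution context rather than the target.
    type Context struct {
        DeletionProcessingMode DeletionProcessingMode
        // ...
    }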
We initially support capturing to a file (in our own format, as no
suitable format appears to exist). This means we don't need a server
to capture the traces, and we can start capturing through prow without
a lot of infrastructure changes.
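
A minimal sketch of file-backed capture, assuming a hypothetical span
record (the actual on-disk format is our own and may differ):

    package tracing

    import (
        "encoding/json"
        "os"
        "time"
    )

    // span is a hypothetical trace record; one record is appended per
    // line, so no collector server is needed.
    type span struct {
        Name  string    `json:"name"`
        Start time.Time `json:"start"`
        End   time.Time `json:"end"`
    }

    type fileCapturer struct {
        f   *os.File
        enc *json.Encoder
    }

    func newFileCapturer(path string) (*fileCapturer, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            return nil, err
        }
        return &fileCapturer{f: f, enc: json.NewEncoder(f)}, nil
    }

    func (c *fileCapturer) Capture(s span) error { return c.enc.Encode(s) }
    func (c *fileCapturer) Close() error         { return c.f.Close() }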
Co-authored-by: Peter Rifel <rifelpet@users.noreply.github.com>
The rule of thumb is that we shouldn't embed a context.Context, but it
is reasonable when the lifetimes are similar and when the refactor
would otherwise be unacceptably large.
This is a minimal way to introduce it, based on adding the support
needed in GCS for serviceAccountIssuerDiscovery. We will need to plumb
the context through in many more places over time.
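
A sketch of the pattern (the type here is hypothetical; embedding
means the struct itself satisfies context.Context):

    package vfs

    import (
        "context"
        "net/http"
    )

    // vfsContext is illustrative: its lifetime matches one operation,
    // which is when embedding a context.Context can be acceptable.
    type vfsContext struct {
        context.Context // embedded: avoids re-plumbing ctx through every call site
    }

    // fetch can pass c anywhere a context.Context is expected.
    func (c *vfsContext) fetch(url string) (*http.Response, error) {
        req, err := http.NewRequestWithContext(c, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }
        return http.DefaultClient.Do(req)
    }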
The io/ioutil package has been deprecated as of Go 1.16; see
https://golang.org/doc/go1.16#ioutil. This commit replaces the existing
io/ioutil functions with their new definitions in the io and os
packages.
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
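
For reference, the replacements are mechanical; representative
mappings from the Go 1.16 release notes:

    // ioutil.ReadFile(path)           -> os.ReadFile(path)
    // ioutil.WriteFile(path, b, 0644) -> os.WriteFile(path, b, 0644)
    // ioutil.ReadAll(r)               -> io.ReadAll(r)
    // ioutil.TempDir("", "kops")      -> os.MkdirTemp("", "kops")
    // ioutil.TempFile("", "kops")     -> os.CreateTemp("", "kops")
    // ioutil.NopCloser(r)             -> io.NopCloser(r)
    // ioutil.Discard                  -> io.Discard
    //
    // Note: ioutil.ReadDir -> os.ReadDir also changes the return type,
    // from []os.FileInfo to []os.DirEntry.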
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
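
As a sketch, once glog is gone the initialization would look something
like this (assuming the standard k8s.io/klog API):

    package main

    import (
        "flag"

        "k8s.io/klog"
    )

    func main() {
        // klog.InitFlags registers -v, -logtostderr, etc.; calling it
        // while glog is still linked in would redefine those flags and
        // panic at startup.
        klog.InitFlags(nil) // nil means register on flag.CommandLine
        flag.Parse()

        klog.Info("logging initialized")
    }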
The launch configuration test exposed that our integration tests don't
retry for very long, and that they wait a long time between retries.
Create a RunTasksOptions type to hold the parameters, in particular the
maximum task time and the amount of time we wait when all tasks have
failed.
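
A sketch of the shape of that type (field names and default values are
illustrative):

    package fi

    import "time"

    // RunTasksOptions holds the retry parameters for a run of tasks.
    type RunTasksOptions struct {
        // MaxTaskDuration is how long a single task may keep failing
        // before we give up on it.
        MaxTaskDuration time.Duration

        // WaitAfterAllTasksFailed is how long we pause before retrying
        // when every remaining task failed in a pass.
        WaitAfterAllTasksFailed time.Duration
    }

    // InitDefaults fills in defaults suitable for real runs; the
    // integration tests can override these to retry quickly.
    func (o *RunTasksOptions) InitDefaults() {
        o.MaxTaskDuration = 10 * time.Minute
        o.WaitAfterAllTasksFailed = 10 * time.Second
    }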
This PR implements a new custom error that is returned when a task's
lifecycle is set to fi.LifecycleExistsAndWarnIfChanges. This allows a
task to fail validation while still being marked as completed, with the
error cleared.
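
A sketch of what such an error might look like (the type name and
constructor are illustrative):

    package fi

    // ExistsAndWarnIfChangesError signals that validation found drift on
    // a task whose lifecycle is ExistsAndWarnIfChanges: callers should
    // log a warning, mark the task completed, and clear the error.
    type ExistsAndWarnIfChangesError struct {
        msg string
    }

    func NewExistsAndWarnIfChangesError(msg string) *ExistsAndWarnIfChangesError {
        return &ExistsAndWarnIfChangesError{msg: msg}
    }

    func (e *ExistsAndWarnIfChangesError) Error() string {
        return e.msg
    }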
This lets us configure cross-project permissions while needing only
minimal permissions ourselves, but it also gives us a nice hook for
future lockdown of object-level permissions.
We move everything to the models. We feature-flag it, because we
probably want to change the names, etc., and we aren't going to be able
to offer smooth upgrades until that is done.
This:
- reworks how retries are handled in fi/executor.go to a time-based scheme
- changes the single-task limit to 10m (from about 30s of no progress)
- eliminates the inner IAM propagation retry for LaunchConfigurations,
because the task itself will just be redriven for a while. This also
eliminates any long-pole delay caused by this error (since task Run()
should be 'fast').
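
A sketch of the time-based scheme (not the actual fi/executor.go code;
names and values are illustrative):

    package fi

    import (
        "fmt"
        "time"
    )

    // runWithDeadline redrives a task until it succeeds or the per-task
    // time budget (e.g. 10m) is exhausted, instead of counting attempts.
    func runWithDeadline(task func() error, maxDuration time.Duration) error {
        deadline := time.Now().Add(maxDuration)
        var lastErr error
        for time.Now().Before(deadline) {
            if lastErr = task(); lastErr == nil {
                return nil
            }
            time.Sleep(5 * time.Second) // brief pause between redrives
        }
        return fmt.Errorf("task did not succeed within %v: %w", maxDuration, lastErr)
    }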
A managed file is templated kops-side, but then stored in the S3 bucket
(aka the state store).
This will be used to pass the channel containing the core addons.
This is needed so that we can have encrypted storage and complex keys
(e.g. multiple CA certs). Multiple CA certs are needed for an in-place
upgrade from kube-up v1.
If there is an error performing a task, we will reattempt it as long as
forward progress is still being made (i.e. at least one other task
completed successfully).
This makes everything more reliable (though we should still fix these
problems), but it also lays the groundwork for parallel execution.
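
A sequential sketch of the forward-progress rule (illustrative only;
the real executor is more involved, and parallel execution comes
later):

    package fi

    import "fmt"

    // runTasks retries failed tasks for as long as each pass completes
    // at least one task; it stops once a full pass makes no forward
    // progress.
    func runTasks(tasks map[string]func() error) error {
        remaining := tasks
        for len(remaining) > 0 {
            failed := map[string]func() error{}
            progress := false
            for name, run := range remaining {
                if err := run(); err != nil {
                    failed[name] = run
                } else {
                    progress = true
                }
            }
            if !progress {
                return fmt.Errorf("%d task(s) failed with no forward progress", len(failed))
            }
            remaining = failed
        }
        return nil
    }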
We call the Render methods on Tasks by reflection, and some of them
don't care about the Target, but do care about the Context (e.g. the
PKI tasks, which only care about the CAStore).
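
A sketch of that dispatch (illustrative; the real code differs):

    package fi

    import (
        "fmt"
        "reflect"
    )

    // invokeRender builds Render's argument list by matching each
    // parameter type against the values we have on hand (context and
    // target), so a task that only declares a context parameter never
    // sees the target.
    func invokeRender(task, context, target interface{}) error {
        m := reflect.ValueOf(task).MethodByName("Render")
        if !m.IsValid() {
            return fmt.Errorf("task has no Render method")
        }
        var args []reflect.Value
        for i := 0; i < m.Type().NumIn(); i++ {
            paramType := m.Type().In(i)
            switch {
            case reflect.TypeOf(context).AssignableTo(paramType):
                args = append(args, reflect.ValueOf(context))
            case reflect.TypeOf(target).AssignableTo(paramType):
                args = append(args, reflect.ValueOf(target))
            default:
                return fmt.Errorf("unhandled Render parameter type %v", paramType)
            }
        }
        out := m.Call(args)
        if len(out) > 0 {
            if err, ok := out[len(out)-1].Interface().(error); ok {
                return err
            }
        }
        return nil
    }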
* GCE support only
* Key and secret generation
* "Direct mode" makes API calls
* "Dry run mode" previews the changes
* Terraform output (though key generation is not yet working for the master IP)
* cloud-init output (though the Debian image does not ship with cloud-init)