Add the `status` subresource to the VPA CRD yaml and a new ClusterRole rule
for `system:vpa-actor` to patch the `/status` subresource.
With the status subresource enabled, `metadata.generation` only increases on
VPA spec updates.
Fix e2e tests for patching and creating VPAs.
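A minimal sketch of the two yaml pieces this refers to, assuming the upstream
VPA API group `autoscaling.k8s.io` and resource `verticalpodautoscalers`; the
exact manifests in this change may differ:

```yaml
# Sketch only: enable the status subresource on the VPA CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: verticalpodautoscalers.autoscaling.k8s.io
spec:
  # ... group, names, schema ...
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}  # with this, status updates no longer bump metadata.generation
---
# Sketch only: let the vpa-actor ClusterRole patch the /status subresource.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:vpa-actor
rules:
- apiGroups: ["autoscaling.k8s.io"]
  resources: ["verticalpodautoscalers/status"]
  verbs: ["get", "patch"]
```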
Containers in the recommendation can be different from containers in the pod:
- A new container can be added to a pod. At first there will be no
recommendation for the container.
- A container can be removed from a pod. For some time the recommendation
will still contain an entry for the old container.
- A container can be renamed. Then there will be a recommendation for the
container under its old name.
Add tests for what VPA does in those situations when a limit range exists.
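For context, a container-level limit range of the kind these tests combine
with mismatched recommendations could look like the sketch below (the values
are illustrative assumptions, not taken from the tests):

```yaml
# Illustrative only: a Container-scoped LimitRange; min/max values are made up.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    min:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: "1"
      memory: 1Gi
```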
Containers in the recommendation can be different from containers in the pod:
- A new container can be added to a pod. At first there will be no
recommendation for the container.
- A container can be removed from a pod. For some time the recommendation
will still contain an entry for the old container.
- A container can be renamed. Then there will be a recommendation for the
container under its old name.
Add tests for what VPA does in those situations.
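To illustrate the mismatch (names and values are made up, not taken from the
tests), a VPA status can carry a recommendation for a container that is no
longer in the pod:

```yaml
# Illustrative only: the pod now has a single container "app", but the VPA
# status still holds a recommendation for a removed or renamed "old-sidecar".
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
status:
  recommendation:
    containerRecommendations:
    - containerName: app
      target:
        cpu: 200m
        memory: 256Mi
    - containerName: old-sidecar  # no longer present in the pod spec
      target:
        cpu: 100m
        memory: 128Mi
```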
We allowed privileged pods before (it was the default) but now we need to
allow them explicitly:
https://groups.google.com/a/kubernetes.io/g/dev/c/BZlDyz9FK1U/m/57PgQlA4BgAJ
Long term I want to run the pods without privileges but that requires:
- https://github.com/kubernetes/kubernetes/pull/110779 to merge
- Syncing e2e dependencies to include the merged change
- Changing tests to run pods without privileges
To keep tests passing through the removal of PodSecurityPolicy in 1.25 I want
to merge this change first and reduce pod privileges later.
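For reference, the underlying Pod Security admission mechanism for allowing
privileged pods in a namespace looks like the sketch below; whether the e2e
framework sets this via raw labels or its own helpers is an assumption here:

```yaml
# Illustrative only: Pod Security admission label that lets a namespace run
# privileged pods (namespace name is made up).
apiVersion: v1
kind: Namespace
metadata:
  name: vpa-e2e-tests
  labels:
    pod-security.kubernetes.io/enforce: privileged
```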
- Change default K8s to 1.23.5
- Change Go to 1.17 (the json dependency no longer compiles with Go 1.16)
- Drop references to bindata
- Update some calls
Use the `gcr.io/google-containers/stress:v1` image and pass flags to the
command via `Command`. Before we used the `gcr.io/jbartosik-gke-dev/stress:0.10`
image, which had the flags baked into it.
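A rough sketch of what passing the flags through the command looks like in a
pod spec; the tests set the equivalent `Command` field in Go, and the specific
flags and values here are assumptions:

```yaml
# Illustrative only: flags go on the container command instead of being baked
# into the image (flag names/values are assumptions).
containers:
- name: stress
  image: gcr.io/google-containers/stress:v1
  command: ["/stress", "-cpus=1"]
  resources:
    requests:
      cpu: 500m
```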
Tests are flaky, with VPA sometimes generating recommendations higher than
1000 mCPU.
I think this is reasonable behavior: we're asking the resource consumer to use
1800 mCPU spread across 3 pods, so if the load is unevenly distributed we can
end up with some pods using 1000 mCPU (e.g. a 1000 + 400 + 400 mCPU split).