Currently count includes keys from other resources whenever the specified
resource/key is a prefix of those resources' keys.
Consider the following keys:
A: <storage-prefix>//foo.bar.io/machines
B: <storage-prefix>//foo.bar.io/machinesets
If we ask for the count of key A, the result also includes the keys under
key B, since key A is a prefix of key B.
Append a separator to mark the end of the key; this excludes keys of other
resources for which the specified key is a prefix.
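For illustration, a minimal sketch of the prefix problem and the
trailing-separator fix; the keys and the countWithPrefix helper below are
made up, not the actual etcd3 store code:

```go
// Sketch only: why a trailing separator is needed when counting keys under
// a resource prefix. Keys and helper are illustrative.
package main

import (
	"fmt"
	"strings"
)

// countWithPrefix counts keys that start with the given prefix.
func countWithPrefix(keys []string, prefix string) int {
	n := 0
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			n++
		}
	}
	return n
}

func main() {
	keys := []string{
		"/registry/foo.bar.io/machines/m1",
		"/registry/foo.bar.io/machinesets/ms1",
	}
	// Without a trailing separator, "machinesets" keys are counted too.
	fmt.Println(countWithPrefix(keys, "/registry/foo.bar.io/machines"))  // 2
	// Appending the separator restricts the count to the intended resource.
	fmt.Println(countWithPrefix(keys, "/registry/foo.bar.io/machines/")) // 1
}
```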
Kubernetes-commit: 7e445867aa4d37a67591faf6e5508abaea69d216
Retry deleteValidation when the delete runs into a conflict.
Add a unit test verifying that deleteValidation is retried.
Add an e2e test verifying that the webhook can intercept configmap and custom
resource deletion, and that the existing object is sent via
admissionreview.OldObject.
Update the admission integration test to verify that the existing object is
passed to the deletion admission webhook as oldObject, both in case of an
immediate deletion and in case of an update-on-delete.
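A rough sketch of the retry shape being tested; readLive, deleteValidation,
tryDelete and errConflict are illustrative stand-ins, not the actual registry
code:

```go
// Sketch only: retry deleteValidation against a freshly read object when
// the delete hits a conflict. All names are illustrative.
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("conflict: object was modified")

type object struct{ name, rv string }

// deleteWithRetry runs deleteValidation against the current object and, if
// the delete itself hits a conflict, re-reads the object and validates again
// so the admission webhook always sees the object that is actually deleted.
func deleteWithRetry(readLive func() (object, error),
	deleteValidation func(object) error,
	tryDelete func(object) error) error {

	obj, err := readLive()
	if err != nil {
		return err
	}
	for {
		if err := deleteValidation(obj); err != nil {
			return err
		}
		if err := tryDelete(obj); !errors.Is(err, errConflict) {
			return err // nil on success, or a non-conflict failure
		}
		if obj, err = readLive(); err != nil {
			return err
		}
	}
}

func main() {
	attempts := 0
	err := deleteWithRetry(
		func() (object, error) { return object{"cm", fmt.Sprint(attempts)}, nil },
		func(object) error { return nil }, // webhook allows the deletion
		func(object) error {
			attempts++
			if attempts == 1 {
				return errConflict // first attempt races with an update
			}
			return nil
		},
	)
	fmt.Println(err, attempts) // <nil> 2
}
```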
Kubernetes-commit: 7bb4a3bace048cb9cd93d0221a7bf7c4accbf6be
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other vendored repositories that made a similar change from
  glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicitly calling InitFlags in their init() functions
  (see the sketch below)
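A minimal sketch of the explicit InitFlags wiring; klog.InitFlags is the real
k8s.io/klog call, the surrounding program is only an example:

```go
// Sketch only: explicit klog flag registration after the glog migration.
package main

import (
	"flag"

	"k8s.io/klog"
)

func init() {
	// Unlike glog, klog does not register its flags for us, so callers
	// (and tests) register them explicitly on the flag set they use.
	klog.InitFlags(flag.CommandLine)
}

func main() {
	flag.Parse()
	klog.Info("flags such as -v and -logtostderr are now available")
	klog.Flush()
}
```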
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Kubernetes-commit: 954996e231074dc7429f7be1256a579bedd8344c
Like we do everywhere else, use TransformFromStorage. The current behavior is
causing all service account tokens to be regenerated, invalidating old
service account tokens and unrecoverably breaking apps that are using
InClusterConfig or exported service account tokens.
If we are going to break stuff, let's just break the Lists so that
misconfiguration of the encryption config or checkpoint corruption is
obvious.
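A rough sketch of running list items through the transformer; the
transformer interface below is a simplified stand-in, not the real apiserver
storage value package or its actual signature:

```go
// Sketch only: decode stored list items through the storage transformer, as
// is done for single-object reads; a failure fails the whole list instead of
// silently rewriting individual objects.
package main

import "fmt"

type transformer interface {
	// TransformFromStorage turns on-disk bytes (possibly encrypted) back
	// into plaintext, reporting whether the stored form is stale.
	TransformFromStorage(data []byte) (out []byte, stale bool, err error)
}

type identity struct{}

func (identity) TransformFromStorage(data []byte) ([]byte, bool, error) {
	return data, false, nil
}

func decodeList(t transformer, raw [][]byte) ([][]byte, error) {
	out := make([][]byte, 0, len(raw))
	for _, r := range raw {
		data, _, err := t.TransformFromStorage(r)
		if err != nil {
			return nil, fmt.Errorf("storage transform failed: %w", err)
		}
		out = append(out, data)
	}
	return out, nil
}

func main() {
	items, err := decodeList(identity{}, [][]byte{[]byte("a"), []byte("b")})
	fmt.Println(len(items), err) // 2 <nil>
}
```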
Kubernetes-commit: e7bda4431da05b55b4e8f66ed308d4ed90efd2df
Reuse leases for keys within a time window to reduce the overhead on etcd
caused by using a massive number of leases.
Fixes #47532
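A rough sketch of the reuse-within-a-window idea; leaseManager, leaseID and
the grant function are illustrative stand-ins for the etcd client calls:

```go
// Sketch only: reuse one lease for all keys whose desired expiry falls
// inside the same time window, instead of granting a lease per key.
package main

import (
	"fmt"
	"sync"
	"time"
)

type leaseID int64

type leaseManager struct {
	mu          sync.Mutex
	reuseWindow time.Duration
	prevID      leaseID
	prevExpiry  time.Time
	grant       func(ttl time.Duration) leaseID // stand-in for the etcd lease grant call
}

// leaseFor returns a lease that lives at least ttl from now, reusing the
// previously granted lease while its expiry stays within the reuse window.
func (m *leaseManager) leaseFor(ttl time.Duration) leaseID {
	m.mu.Lock()
	defer m.mu.Unlock()
	now := time.Now()
	if m.prevID != 0 && m.prevExpiry.After(now.Add(ttl)) &&
		m.prevExpiry.Before(now.Add(ttl+m.reuseWindow)) {
		return m.prevID // still inside the window: no new lease needed
	}
	// Grant a slightly longer lease so later keys can share it.
	m.prevID = m.grant(ttl + m.reuseWindow)
	m.prevExpiry = now.Add(ttl + m.reuseWindow)
	return m.prevID
}

func main() {
	next := leaseID(0)
	m := &leaseManager{
		reuseWindow: time.Minute,
		grant:       func(time.Duration) leaseID { next++; return next },
	}
	fmt.Println(m.leaseFor(time.Hour), m.leaseFor(time.Hour)) // 1 1: lease reused
}
```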
Kubernetes-commit: 163529bc202054d991f0ce2e21738cc18ffd6022
Pick a reasonable middle ground between allocating larger chunks of memory
(2048 * ~500b for pod slices) and having many small allocations as the list
is resized, by preallocating capacity based on the expected list size. At
worst, we'll allocate a 1M slice for pods and only add a single pod to it
(if the selector is very specific).
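A small sketch of the capped preallocation idea; the helper name and the
numbers are illustrative, not the values used by the store:

```go
// Sketch only: preallocate list capacity from the expected result size,
// capped so a very selective selector does not force a huge allocation.
package main

import "fmt"

// preallocCap picks an initial slice capacity: large enough to avoid many
// incremental growths for big lists, but never more than maxPrealloc items.
func preallocCap(expected, maxPrealloc int) int {
	if expected <= 0 {
		return 0
	}
	if expected > maxPrealloc {
		return maxPrealloc
	}
	return expected
}

func main() {
	// e.g. ~2048 pods expected in storage for this prefix.
	items := make([]string, 0, preallocCap(2048, 100000))
	fmt.Println(cap(items)) // 2048
}
```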
Kubernetes-commit: ce0dc76901bd1ce36ca20c5cf96b89088d0e95a2
The etcd3 storage now attempts to fill partial pages to prevent clients from
having to make more round trips (latency from the server to etcd is lower
than from the client to the server). The server makes repeated requests to
etcd of the current page size, applying the filter function to drop items
that don't match. After this change the apiserver will always return full
pages, but we leave the language in place that clients must tolerate partial
pages.
Reduces tail latency of large filtered lists, such as viewing pods
assigned to a node.
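A rough sketch of the page-filling loop; fetchPage, match and the
continuation handling are simplified stand-ins for the real etcd3 store:

```go
// Sketch only: keep fetching pages from storage until the client's page is
// full or the keys run out, filtering between fetches.
package main

import "fmt"

// fillPage returns up to limit matching items, issuing as many storage
// requests of size limit as needed so the client sees a full page.
func fillPage(limit int,
	fetchPage func(cont string, limit int) (items []string, next string),
	match func(string) bool) ([]string, string) {

	out := make([]string, 0, limit)
	cont := ""
	for {
		items, next := fetchPage(cont, limit)
		for _, it := range items {
			if match(it) {
				out = append(out, it)
				if len(out) == limit {
					return out, next
				}
			}
		}
		if next == "" {
			return out, "" // no more keys in storage
		}
		cont = next
	}
}

func main() {
	data := []string{"a1", "b1", "a2", "b2", "a3", "b3"}
	fetch := func(cont string, limit int) ([]string, string) {
		start := 0
		fmt.Sscan(cont, &start)
		end := start + limit
		if end >= len(data) {
			return data[start:], ""
		}
		return data[start:end], fmt.Sprint(end)
	}
	isA := func(s string) bool { return s[0] == 'a' }
	page, _ := fillPage(2, fetch, isA)
	fmt.Println(page) // [a1 a2]: a full page despite filtering
}
```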
Kubernetes-commit: da7124e5e5c0385dd5bcfc72ef035effc7708913
In GuaranteedUpdate, if it was called with a suggestion (e.g. via the
watch cache), and the suggested object is stale, perform a live lookup
and then retry the update.
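A rough sketch of the suggestion-then-live-lookup shape; the types and the
single retry here are a simplification of the real GuaranteedUpdate loop:

```go
// Sketch only: try the update against the suggested (cached) object first;
// if storage rejects it as stale, fall back to a live read and retry.
package main

import (
	"errors"
	"fmt"
)

type object struct {
	name string
	rv   int // resourceVersion
}

var errStale = errors.New("conflict: suggested object was stale")

func guaranteedUpdate(suggestion *object,
	liveGet func() object,
	tryUpdate func(object) error) error {

	if suggestion != nil {
		if err := tryUpdate(*suggestion); !errors.Is(err, errStale) {
			return err // nil on success, or a real failure
		}
		// Suggestion was stale (e.g. the watch cache lagged): read live.
	}
	return tryUpdate(liveGet())
}

func main() {
	stored := object{name: "cm", rv: 7}
	stale := object{name: "cm", rv: 5}
	err := guaranteedUpdate(&stale,
		func() object { return stored },
		func(o object) error {
			if o.rv != stored.rv {
				return errStale
			}
			return nil
		},
	)
	fmt.Println(err) // <nil>: retried with the live object
}
```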
Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
Kubernetes-commit: bf33df16b52508974ddedacd814010cfe0fb79f0