Extract getCurrentState as a separate method that can be reused.
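A minimal sketch of what extracting such a helper might look like; the type, field, and signature below are hypothetical illustrations, not the actual watcher code:

    package example

    import (
        "context"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // watcher is a hypothetical stand-in for the type that owns the etcd client.
    type watcher struct {
        client *clientv3.Client
    }

    // getCurrentState is pulled out of the larger function it used to live in,
    // so that other code paths can reuse the same "read the current state" logic.
    func (w *watcher) getCurrentState(ctx context.Context, key string) (*clientv3.GetResponse, error) {
        return w.client.Get(ctx, key, clientv3.WithPrefix())
    }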
Signed-off-by: Siyuan Zhang <sizhang@google.com>
Kubernetes-commit: ebca5d438d9cb2c82d0b99dbcb0aeca8879db441
At first glance, it seems that the fields storage.ListOptions.ProgressNotify and storage.ListOptions.Predicate.AllowWatchBookmarks
are the same. Unfortunately, this is not the case.
This PR documents the differences between these fields and the motivation for keeping them distinct.
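A rough illustration of the distinction, assuming the k8s.io/apiserver storage package; the comments paraphrase the difference and are not the canonical field documentation:

    package example

    import "k8s.io/apiserver/pkg/storage"

    // watchOptions sketches how the two flags serve different audiences:
    // AllowWatchBookmarks is driven by the user's request and controls bookmark
    // events delivered to clients, while ProgressNotify is an internal knob that
    // asks the storage layer (etcd) for progress-notify events, e.g. so the
    // watch cache can keep its resource version fresh.
    func watchOptions(userWantsBookmarks bool) storage.ListOptions {
        opts := storage.ListOptions{
            Recursive:      true,
            ProgressNotify: true, // internal: request progress events from etcd
        }
        opts.Predicate.AllowWatchBookmarks = userWantsBookmarks // user-facing bookmarks
        return opts
    }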
Kubernetes-commit: 6058540f3d0edc405a1f1b8a96bd82ceca99c240
Having local variables gives the false impression that they are overwritten
later in the function block.
Kubernetes-commit: e01bd641447a315e28fab8148e99ac6afba9bcd7
From the API's perspective, WithRange and WithPrefix work the same: they just set the range end.
The difference is how the range end is provided:
* WithRange(end) requires the caller to provide the end explicitly.
* WithPrefix() calculates the end from the key passed to Get.
For example, these are equivalent:
* client.Get(ctx, "/pods/", WithPrefix())
* client.Get(ctx, "/pods/", WithRange(GetPrefixRangeEnd("/pods/")))
As keyPrefix is equal to preparedKey, there should not be a difference.
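A minimal sketch of that equivalence using the etcd clientv3 API:

    package example

    import (
        "context"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // Both calls return the same keys: WithPrefix derives the range end from the
    // key argument, while WithRange takes the precomputed end explicitly.
    func equivalentGets(ctx context.Context, client *clientv3.Client) error {
        if _, err := client.Get(ctx, "/pods/", clientv3.WithPrefix()); err != nil {
            return err
        }
        _, err := client.Get(ctx, "/pods/", clientv3.WithRange(clientv3.GetPrefixRangeEnd("/pods/")))
        return err
    }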
Kubernetes-commit: 1f4f2a5d6014dc8f98b25a9484d4a6064a6ae18e
returnRV was equal to withRev, but the two were updated at different times.
When preparing the request they are set equal to each other.
The only difference was inside the for loop:
returnRV was always set regardless of whether pagination was enabled, while withRev was set only when paginating.
Kubernetes-commit: be4692864bb983e94e8d7b6b6aa1a9c22fe23bce
7a63997c8a1a9ba1 added a global variable which gets set multiple times by
different goroutines in integration tests, leading to a data race:
WARNING: DATA RACE
Write at 0x00000a626928 by goroutine 87080:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics.SetStorageMonitorGetter()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go:231 +0x104
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*EtcdOptions).ApplyWithStorageFactoryTo()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/etcd.go:242 +0xbd
k8s.io/kubernetes/pkg/controlplane/apiserver.BuildGenericConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controlplane/apiserver/config.go:124 +0x1c3d
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateKubeAPIServerConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:218 +0xeb
k8s.io/kubernetes/cmd/kube-apiserver/app.NewConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/config.go:74 +0xd5
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:299 +0x2e97
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServerOrDie()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:423 +0xb2
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func3()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:486 +0x1dd
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func7()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:488 +0x47
Previous write at 0x00000a626928 by goroutine 87079:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics.SetStorageMonitorGetter()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go:231 +0x104
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*EtcdOptions).ApplyWithStorageFactoryTo()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/etcd.go:242 +0xbd
k8s.io/kubernetes/pkg/controlplane/apiserver.BuildGenericConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controlplane/apiserver/config.go:124 +0x1c3d
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateKubeAPIServerConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:218 +0xeb
k8s.io/kubernetes/cmd/kube-apiserver/app.NewConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/config.go:74 +0xd5
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:299 +0x2e97
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServerOrDie()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:423 +0xb2
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func3()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:486 +0x1dd
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func7()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:488 +0x47
Mutex locking avoids the data race. Whether this variable really can be used
safely by those concurrent (?) tests is a different question...
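A minimal sketch of the kind of locking described here; the variable and getter types are simplified placeholders, not the actual metrics package API:

    package example

    import "sync"

    // StorageMonitor is a placeholder for whatever the real getter returns.
    type StorageMonitor interface{}

    var (
        storageMonitorMux sync.Mutex
        storageMonitor    func() ([]StorageMonitor, error)
    )

    // SetStorageMonitorGetter can now be called from multiple goroutines (e.g.
    // several test apiservers starting in parallel) without a data race.
    func SetStorageMonitorGetter(getter func() ([]StorageMonitor, error)) {
        storageMonitorMux.Lock()
        defer storageMonitorMux.Unlock()
        storageMonitor = getter
    }

    // Readers take the same lock before using the variable.
    func getStorageMonitors() ([]StorageMonitor, error) {
        storageMonitorMux.Lock()
        defer storageMonitorMux.Unlock()
        if storageMonitor == nil {
            return nil, nil
        }
        return storageMonitor()
    }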
Kubernetes-commit: 13a8ad12b8296c0360afe3f66218027dae6c1805
// AnnotateInitialEventsEndBookmark adds a special annotation to the given object
// which indicates that the initial events have been sent.
//
// Note that this function assumes that the obj's annotation
// field is a reference type (i.e. a map).
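A sketch of what such a helper might look like, using the generic meta accessor; the annotation key here is an assumption (the key used by the watch-list feature), and this is not necessarily the exact implementation:

    package example

    import (
        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/apimachinery/pkg/runtime"
    )

    // initialEventsEndAnnotationKey is assumed here; it marks the bookmark that
    // ends the stream of initial events.
    const initialEventsEndAnnotationKey = "k8s.io/initial-events-end"

    // annotateInitialEventsEndBookmark marks the object so clients can tell that
    // the initial events have all been delivered.
    func annotateInitialEventsEndBookmark(obj runtime.Object) error {
        objMeta, err := meta.Accessor(obj)
        if err != nil {
            return err
        }
        annotations := objMeta.GetAnnotations()
        if annotations == nil {
            annotations = map[string]string{}
        }
        annotations[initialEventsEndAnnotationKey] = "true"
        objMeta.SetAnnotations(annotations)
        return nil
    }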
Kubernetes-commit: 47d9a47a08856613e2e6ae6aa8a1bdeb1e281f97
Fix a segfault when collecting the storage size metrics if the getters
used to collect the data from etcd haven't been initialized properly. This
happens when the EtcdOptions are not applied, which is the case for
aggregated apiservers that don't care about storage.
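A minimal sketch of the guard described here, with placeholder names rather than the real metrics types:

    package example

    import "errors"

    // errNoStorageMonitor is a placeholder sentinel for "metrics not wired up".
    var errNoStorageMonitor = errors.New("storage monitor getter is not initialized")

    // storageMonitorGetter stays nil when EtcdOptions were never applied.
    var storageMonitorGetter func() (int64, error)

    // storageSize collects the metric only if the getter was initialized,
    // instead of calling a nil function and crashing.
    func storageSize() (int64, error) {
        if storageMonitorGetter == nil {
            return 0, errNoStorageMonitor
        }
        return storageMonitorGetter()
    }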
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Kubernetes-commit: c6efaf16c1ed07ce37485b7a272628f653cbf06f
Instead of enumerating all the etcd endpoints known by the apiserver, we will
group them by purpose. `etcd-0` will be the default etcd, `etcd-1` will
be the first resource override, `etcd-2` will be the second override and
so on.
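A sketch of that naming scheme, with hypothetical configuration inputs:

    package example

    import "fmt"

    // serverLabels assigns stable labels by purpose: etcd-0 is the default
    // storage backend, etcd-1 the first resource override, etcd-2 the second,
    // and so on, instead of enumerating every individual endpoint.
    func serverLabels(defaultServers []string, overrides [][]string) map[string][]string {
        labels := map[string][]string{"etcd-0": defaultServers}
        for i, servers := range overrides {
            labels[fmt.Sprintf("etcd-%d", i+1)] = servers
        }
        return labels
    }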
Kubernetes-commit: 03aad1f823cb719fa6e6b6d33fefa2a2140cc760
Change the metric name to make it compliant with Prometheus guidelines.
Calculate it on demand instead of periodically to comply with Prometheus standards.
Replace the "endpoint" label with "server" to make it semantically consistent with the storage factory.
Kubernetes-commit: 7a63997c8a1a9ba14f2bdc478fdf33cf88f48d80
Request a bookmark every 100ms when there is at least one request blocked on a revision not yet present in the watch cache.
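A rough sketch of the polling loop described here, assuming an etcd clientv3 watcher and a hypothetical counter of blocked requests:

    package example

    import (
        "context"
        "sync/atomic"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // requestProgressLoop asks etcd for a progress (bookmark) event every 100ms,
    // but only while at least one request is waiting for a revision that the
    // watch cache has not observed yet.
    func requestProgressLoop(ctx context.Context, w clientv3.Watcher, blockedRequests *atomic.Int64) {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                if blockedRequests.Load() > 0 {
                    // Progress notifications arrive as bookmark events on the watch
                    // channel and let the cache advance its resource version.
                    _ = w.RequestProgress(ctx)
                }
            }
        }
    }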
Kubernetes-commit: 39bb8f4bb1d013937aceac6c387563ffe13545c5