Commit 7a63997c8a1a9ba1 added a global variable that gets set multiple times by
different goroutines in integration tests, leading to a data race:
WARNING: DATA RACE
Write at 0x00000a626928 by goroutine 87080:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics.SetStorageMonitorGetter()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go:231 +0x104
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*EtcdOptions).ApplyWithStorageFactoryTo()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/etcd.go:242 +0xbd
k8s.io/kubernetes/pkg/controlplane/apiserver.BuildGenericConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controlplane/apiserver/config.go:124 +0x1c3d
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateKubeAPIServerConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:218 +0xeb
k8s.io/kubernetes/cmd/kube-apiserver/app.NewConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/config.go:74 +0xd5
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:299 +0x2e97
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServerOrDie()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:423 +0xb2
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func3()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:486 +0x1dd
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func7()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:488 +0x47
Previous write at 0x00000a626928 by goroutine 87079:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics.SetStorageMonitorGetter()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go:231 +0x104
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*EtcdOptions).ApplyWithStorageFactoryTo()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/etcd.go:242 +0xbd
k8s.io/kubernetes/pkg/controlplane/apiserver.BuildGenericConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/controlplane/apiserver/config.go:124 +0x1c3d
k8s.io/kubernetes/cmd/kube-apiserver/app.CreateKubeAPIServerConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:218 +0xeb
k8s.io/kubernetes/cmd/kube-apiserver/app.NewConfig()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/config.go:74 +0xd5
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:299 +0x2e97
k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServerOrDie()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:423 +0xb2
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func3()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:486 +0x1dd
k8s.io/kubernetes/test/integration/controlplane.testReconcilersAPIServerLease.func7()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/controlplane/kube_apiserver_test.go:488 +0x47
Locking a mutex around the write avoids the data race. Whether this variable
really can be used safely by those (presumably concurrent) tests is a different
question...
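
For illustration, a minimal sketch of a mutex-guarded setter. Only the function
name SetStorageMonitorGetter comes from the race report above; the type
storageSizeGetter and the variable names are assumptions, not the actual
identifiers in metrics.go:

    package metrics

    import "sync"

    // storageSizeGetter is a hypothetical stand-in for the real getter
    // type; the actual signature in metrics.go may differ.
    type storageSizeGetter func() (int64, error)

    var (
        monitorMux    sync.Mutex // guards monitorGetter
        monitorGetter storageSizeGetter
    )

    // SetStorageMonitorGetter serializes writes to the package-level
    // getter, so concurrent test servers no longer race on it.
    func SetStorageMonitorGetter(g storageSizeGetter) {
        monitorMux.Lock()
        defer monitorMux.Unlock()
        monitorGetter = g
    }

The lock makes the write safe for the race detector; it does not by itself
answer the question above of whether last-writer-wins semantics are correct
for the tests.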
Kubernetes-commit: 13a8ad12b8296c0360afe3f66218027dae6c1805
Fix a segfault when collecting the storage size metrics while the getters
used to collect the data from etcd haven't been initialized properly. This
happens when the EtcdOptions are not applied, which is the case for
aggregated apiservers that don't care about storage.
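
A minimal sketch of such a guard, with hypothetical names (sizeGetter,
storageSizeMonitor, and databaseSize are illustrative, not the real
identifiers in the metrics package):

    package metrics

    import "errors"

    // sizeGetter is a hypothetical getter type for illustration.
    type sizeGetter func() (int64, error)

    // storageSizeMonitor stays nil when EtcdOptions are never applied,
    // e.g. in an aggregated apiserver that has no storage.
    var storageSizeMonitor sizeGetter

    // databaseSize checks for an uninitialized getter before invoking
    // it, avoiding the nil-call panic this commit fixes.
    func databaseSize() (int64, error) {
        if storageSizeMonitor == nil {
            return 0, errors.New("storage monitor not initialized")
        }
        return storageSizeMonitor()
    }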
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Kubernetes-commit: c6efaf16c1ed07ce37485b7a272628f653cbf06f
Instead of enumerating all the etcd endpoints known to the apiserver, we will
group them by purpose. `etcd-0` will be the default etcd, `etcd-1` will
be the first resource override, `etcd-2` will be the second override, and
so on.
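
A toy sketch of the numbering scheme (the override list below is made up for
illustration; the real storage factory code differs):

    package main

    import "fmt"

    func main() {
        // Hypothetical resource overrides, for illustration only.
        overrides := []string{"events.k8s.io/events", "core/pods"}

        // etcd-0 is always the default etcd; overrides count up from 1.
        fmt.Println("etcd-0 => default etcd")
        for i, resource := range overrides {
            fmt.Printf("etcd-%d => override for %s\n", i+1, resource)
        }
    }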
Kubernetes-commit: 03aad1f823cb719fa6e6b6d33fefa2a2140cc760
Change the metric name to make it compliant with the Prometheus naming guidelines.
Calculate it on demand instead of periodically to comply with Prometheus standards.
Replace the "endpoint" label with "server" to make it semantically consistent with the storage factory.
Kubernetes-commit: 7a63997c8a1a9ba14f2bdc478fdf33cf88f48d80
Make the buckets for the apiserver_request_duration_seconds and
etcd_request_duration_seconds histograms similar so that the results are
more comparable side by side.
etcd_request_duration_seconds uses the default buckets provided by the
Prometheus client library:
DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10}
apiserver_request_duration_seconds, on the other hand, uses more fine-grained
buckets, and its maximum bucket size is 60s. Both histograms
should use similar bucket sizes so they are more comparable side by side.
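
As a sketch with the Go Prometheus client, the etcd histogram can declare
explicit buckets extending to 60s instead of relying on DefBuckets, which
stops at 10s. The exact boundaries and labels below are illustrative, not
necessarily the committed list:

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    // etcdRequestLatency mirrors apiserver_request_duration_seconds by
    // using fine-grained buckets up to 60s rather than DefBuckets.
    var etcdRequestLatency = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "etcd_request_duration_seconds",
            Help: "Etcd request latency in seconds for each operation and object type.",
            Buckets: []float64{0.005, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6,
                0.8, 1.0, 1.25, 1.5, 2, 3, 4, 5, 6, 8, 10, 15, 20, 30, 45, 60},
        },
        []string{"operation", "type"},
    )

With matching boundaries, a query such as comparing the two histograms'
quantiles bucket by bucket no longer mixes differently shaped distributions.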
Kubernetes-commit: 9d8441f17d90c46eca6390a522e8771bed10e0ba