With this commit, `build.WithGenerateStoreFunc()` needs to be used to
configure the `Builder` properly, like any other `With...` method (see
the sketch below).
* rename `WithCustomGenerateStoreFunc` to `WithGenerateStoreFunc`.
* remove `buildStoreFunc` defaulting in the `NewBuilder()` function
* add a `DefaultGenerateStoreFunc()` method to `BuilderInterface`
* update the `Builder` initialisation in `main.go`
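A minimal sketch of what the `main.go` initialisation could look like
after this change; the `With...` method names come from this commit,
while the package path and the `newStoreBuilder` wrapper are
illustrative assumptions:
```
package app

import "k8s.io/kube-state-metrics/pkg/builder"

func newStoreBuilder() *builder.Builder {
	b := builder.NewBuilder()
	// NewBuilder() no longer defaults the store-generation function, so it
	// has to be configured explicitly, like any other With... option.
	b.WithGenerateStoreFunc(b.DefaultGenerateStoreFunc())
	return b
}
```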
Signed-off-by: cedric lamoriniere <cedric.lamoriniere@datadoghq.com>
* Allow extending the internal `Builder`
* All logic stays internal
* Add the metric family `Type` (counter, gauge) to `metric.Family`
  (see the sketch below)
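A minimal sketch of the shape `metric.Family` could take with the new
`Type`; field and constant names are illustrative, not necessarily the
exact kube-state-metrics definitions:
```
// Hedged sketch of metric.Family carrying the metric family Type.
package metric

// Type is the metric family's type, as written in the # TYPE exposition line.
type Type string

const (
	Gauge   Type = "gauge"
	Counter Type = "counter"
)

// Metric is a single sample: label names, label values, and a value.
type Metric struct {
	LabelKeys   []string
	LabelValues []string
	Value       float64
}

// Family groups all metrics sharing one name and one Type.
type Family struct {
	Name    string
	Type    Type
	Metrics []*Metric
}
```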
Signed-off-by: cedric lamoriniere <cedric.lamoriniere@datadoghq.com>
- To allow other, external Stores, remove the `FamilyByteSlicer`
interface and give direct access to `metric.Family`.
- Move the functions from `pkg/metric/generator.go` to a dedicated
`generator` package in `pkg/metric_generator/generator.go` (see the
sketch after this list).
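A minimal sketch of the kind of generator the new `pkg/metric_generator`
package could expose; the field names and import path are illustrative
assumptions:
```
// Hedged sketch of a metric-family generator living in the new package.
package generator

import "k8s.io/kube-state-metrics/pkg/metric"

// FamilyGenerator turns one Kubernetes object into a metric.Family and
// carries the family's name, help text, and type.
type FamilyGenerator struct {
	Name         string
	Help         string
	Type         metric.Type
	GenerateFunc func(obj interface{}) *metric.Family
}
```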
Signed-off-by: cedric lamoriniere <cedric.lamoriniere@datadoghq.com>
- Use the `cache.Store` interface for genericity and extensibility.
- Add a `WithCustomGenerateStoreFunc` method to the `Builder` struct to
allow custom metrics Store generation.
Signed-off-by: cedric lamoriniere <cedric.lamoriniere@datadoghq.com>
Previously kube-state-metrics created multiple ListerWatchers, with
multiple reflectors feeding a single store. As a result, on "replace"
events the entire store was replaced with the content of a single
reflector instead of the union of all reflectors acting on that store,
as illustrated below.
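A minimal, self-contained illustration (hypothetical, not the
kube-state-metrics code) of why several reflectors sharing one
`cache.Store` clash on relist:
```
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

func main() {
	// A single store shared by two (conceptual) reflectors.
	store := cache.NewStore(func(obj interface{}) (string, error) {
		return obj.(string), nil
	})

	// Reflector A relists and replaces the store with its objects.
	_ = store.Replace([]interface{}{"pod/a", "pod/b"}, "1")

	// Reflector B relists: its Replace wipes reflector A's objects,
	// leaving only reflector B's view in the shared store.
	_ = store.Replace([]interface{}{"service/x"}, "2")

	fmt.Println(store.ListKeys()) // only "service/x" survives
}
```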
This patch changes the architecture to a single reflector, fed by a
MultiListerWatcher, acting on a single store, which fixes the replace
behaviour.
Kubernetes has a new resource type: `VolumeAttachments`. They provide
helpful information on where a volume is attached and allow alerting on
unexpected attachment status (for example, differences between
information scraped from node-exporter and kube-state-metrics).
The collector adds a handful of new metrics. Each VolumeAttachment
(i.e., each CSI-attached volume) gets one of each, so we do not overly
pollute the metrics space. Most metrics are rather unsurprising.
- `kube_volumeattachment_status_attachment_metadata`: provides a
label-like export of the attachment metadata map. Slightly generalizing
the label-conversion function helps provide this metric.
- `kube_volumeattachment_created`: as VolumeAttachments are
automatically created and we already suffered from duplicate
`VolumeAttachments`, this can be invaluable for debugging
misattachments.
- `kube_volumeattachment_spec_source_persistentvolume`: is only
generated when the volume source is of `PersistentVolume` type (see the
sketch after this list). The other type, `inlineVolumeSpec`, is still
alpha-level and hard to map to metrics.
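A minimal sketch (not the actual collector code) of the conditional in
the last bullet: labels for
`kube_volumeattachment_spec_source_persistentvolume` are only produced
when the source is a `PersistentVolume`; the helper name and label set
are illustrative:
```
package volumeattachment

import storagev1 "k8s.io/api/storage/v1"

// persistentVolumeSourceLabels returns the labels for
// kube_volumeattachment_spec_source_persistentvolume, or ok=false when the
// attachment's source is not a PersistentVolume (e.g. the alpha
// inlineVolumeSpec), in which case no metric is generated.
func persistentVolumeSourceLabels(va *storagev1.VolumeAttachment) (labels map[string]string, ok bool) {
	if va.Spec.Source.PersistentVolumeName == nil {
		return nil, false
	}
	return map[string]string{
		"volumeattachment": va.Name,
		"volumename":       *va.Spec.Source.PersistentVolumeName,
	}, true
}
```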
No end-to-end test manifest was added, as `VolumeAttachment`s are
automatically generated when mounting volumes.
Signed-off-by: Jens Erat <email@jenserat.de>
main_test.go: Add model-based test for sharding
To ensure a sharded system behaves the same as an unsharded system, a
model-based test has been introduced. It scrapes an unsharded setup and
compares its output with the union of the sharded setup's outputs,
thereby ensuring semantic equality.
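A minimal sketch of the property the model-based test checks: the union
of the shards' scraped metric lines must equal the unsharded scrape.
All names here are illustrative, not the actual test code:
```
package sharding

import "reflect"

// shardedEqualsUnsharded reports whether the union of all shards' metric
// lines is semantically equal to the unsharded scrape.
func shardedEqualsUnsharded(unsharded []string, shards [][]string) bool {
	want := make(map[string]struct{}, len(unsharded))
	for _, line := range unsharded {
		want[line] = struct{}{}
	}
	got := make(map[string]struct{})
	for _, shard := range shards {
		for _, line := range shard {
			got[line] = struct{}{}
		}
	}
	return reflect.DeepEqual(want, got)
}
```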
When kube-state-metrics does not have the correct roles to list or
watch a resource, it only logs the error but does not actually error
out. This is a problem: the pod never restarts, and it is hard to catch
the issue because metrics for the other resources continue to be
created correctly. We only see the error in the noisy
kube-state-metrics logs.
This registers two metrics, `kube_state_metrics_list_total` and
`kube_state_metrics_watch_total`, with the following labels:
The "result" label indicates whether the action succeeded ("success" or
"error").
The "resource" label contains the resource <apiVersion.Kind>.
This way we can define a rate-based alert for when the
kube-state-metrics error rate is too high.
Example of the metrics:
```
kube_state_metrics_list_total{resource="*v1.Namespace",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Node",result="error"} 52
kube_state_metrics_watch_total{resource="*v1beta1.Ingress",result="success"} 1
```
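A minimal sketch of how these two counters could be registered with
client_golang; the actual kube-state-metrics wiring may differ:
```
package telemetry

import "github.com/prometheus/client_golang/prometheus"

var (
	listTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "kube_state_metrics_list_total",
		Help: "Number of list operations, partitioned by result and resource.",
	}, []string{"result", "resource"})

	watchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "kube_state_metrics_watch_total",
		Help: "Number of watch operations, partitioned by result and resource.",
	}, []string{"result", "resource"})
)

func init() {
	prometheus.MustRegister(listTotal, watchTotal)
}

// Example: count a failed list of *v1.Node objects.
func recordNodeListError() {
	listTotal.WithLabelValues("error", "*v1.Node").Inc()
}
```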
This fixes an issue where it was impossible to specify a collector that
was available but not selected by default.
Instead of checking whether the chosen collectors are valid at
flag-parse time, this moves the check into the builder, where we can
check them against the builder's availableStores (sketched after the
example output below). As a bonus, the error message now prints a list
of available collectors:
```
kube-state-metrics --collectors non-existent-collector
I0618 15:23:34.517532 50719 main.go:88] Using collectors non-existent-collector
F0618 15:23:34.519132 50719 main.go:90] Error: collector non-existent-collector does not exist. Available collectors: persistentvolumeclaims,configmaps,limitranges,nodes,namespaces,persistentvolumes,pods,replicasets,services,cronjobs,deployments,ingresses,horizontalpodautoscalers,jobs,poddisruptionbudgets,secrets,certificatesigningrequests,daemonsets,endpoints,storageclasses,replicationcontrollers,resourcequotas,statefulsets
```
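A minimal sketch of the builder-side check; identifiers such as
`availableStores` and `WithEnabledCollectors` are illustrative, not
necessarily the exact kube-state-metrics names:
```
package store

import (
	"fmt"
	"sort"
	"strings"
)

// Builder is reduced to the parts needed for this sketch.
type Builder struct {
	enabledCollectors []string
}

// availableStores maps every supported collector name to its store
// constructor (elided here).
var availableStores = map[string]struct{}{
	"pods":  {},
	"nodes": {},
	// ... one entry per supported collector ...
}

// WithEnabledCollectors validates the requested collectors against
// availableStores and reports the full list on error.
func (b *Builder) WithEnabledCollectors(c []string) error {
	available := make([]string, 0, len(availableStores))
	for name := range availableStores {
		available = append(available, name)
	}
	sort.Strings(available)

	for _, col := range c {
		if _, ok := availableStores[col]; !ok {
			return fmt.Errorf("collector %s does not exist. Available collectors: %s",
				col, strings.Join(available, ","))
		}
	}
	b.enabledCollectors = c
	return nil
}
```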
Since the removal of collectors, this introduces the concepts of the
store and of resources in place of the collectors the user passes in.
The user-facing logs and flags were not changed, as changing them would
be a regression.