The "condition" and "status" labels for the hpa status conditions were
mapped to the incorrect values. This resulted in the status being in the
condition label, and the condition in the status label.
This changelist corrects the mapping, so that condition and status map
to their respective values.
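Before this change, the labels were emitted swapped, for example
(values illustrative, derived from the description above):

kube_hpa_status_condition{condition="true",hpa="hpa1",namespace="ns1",status="AbleToScale"} 1

After this change, the same condition is reported as: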
kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="false"} 0
kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="true"} 1
kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="unknown"} 0
Fixes: f9658ca ("Add hpa conditions")
Signed-off-by: Terin Stock <terin@cloudflare.com>
Deployments, like Nodes, have status conditions describing their
current state. While the state of the Available and Progressing
conditions can likely be inferred from other metrics, the state of
ReplicaFailure cannot.
This changelist adds a new metric `kube_deployment_status_condition`
that observes all the conditions on a deployment, one series per
condition status. This is analogous to the status conditions already
observed for nodes and horizontal pod autoscalers, and allows
kube-state-metrics to observe status conditions added by third parties.
As an example, for a deployment that has stalled, the following new
metrics would allow an operator to detect the condition:
kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="true"} 1
kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="false"} 0
kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="unknown"} 0
Bug: #886
Signed-off-by: Terin Stock <terin@cloudflare.com>
main_test.go: Add model-based test for sharding
In order to ensure a sharded system behaves identically to an unsharded
system, a model-based test has been introduced. It scrapes an unsharded
setup and compares its output with the union of the outputs of a
sharded setup, thereby ensuring semantic equality.
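The comparison can be sketched roughly as follows; the two-shard setup,
the fixed scrape endpoints, and the helper names are assumptions for
illustration, not the actual contents of main_test.go, which wires the
setups up itself:

package main

import (
    "io"
    "net/http"
    "reflect"
    "sort"
    "strings"
    "testing"
)

// scrape fetches the raw metrics exposition from url and fails the
// test on any error.
func scrape(t *testing.T, url string) string {
    t.Helper()
    resp, err := http.Get(url)
    if err != nil {
        t.Fatalf("scrape %s: %v", url, err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("read %s: %v", url, err)
    }
    return string(body)
}

// metricLines reduces exposition text to a sorted slice of sample
// lines, dropping comments and blanks so HELP/TYPE ordering cannot
// affect the comparison.
func metricLines(exposition string) []string {
    var lines []string
    for _, l := range strings.Split(exposition, "\n") {
        if l == "" || strings.HasPrefix(l, "#") {
            continue
        }
        lines = append(lines, l)
    }
    sort.Strings(lines)
    return lines
}

func TestShardedEqualsUnsharded(t *testing.T) {
    // Endpoints are hypothetical: one unsharded instance and two shards.
    unsharded := metricLines(scrape(t, "http://localhost:8080/metrics"))
    union := metricLines(
        scrape(t, "http://localhost:8081/metrics") + "\n" +
            scrape(t, "http://localhost:8082/metrics"))
    if !reflect.DeepEqual(unsharded, union) {
        t.Fatal("union of shard outputs differs from unsharded output")
    }
}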
Report pod_restart_policy{...,type="Always|OnFailure|Never"} 1 for the
pod.spec.restartPolicy field, which allows an admin to know how many
batch vs. service workload pods are running on the cluster.
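For example (pod and namespace names illustrative):

pod_restart_policy{namespace="default",pod="batch-job-abc",type="Never"} 1
pod_restart_policy{namespace="default",pod="web-59d8f-xyz",type="Always"} 1

Counting the series by the type label then yields the batch vs. service
split, e.g. with a PromQL query such as count by (type) (pod_restart_policy).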
Update the documentation for consistency (see e.g. kube_service_labels,
kube_deployment_labels, or kube_persistentvolume_labels):
label_RESOURCESTRINGINCAPITALLETTERS_LABEL=<RESOURCESTRINGINCAPITALLETTERS_LABEL>