Update 'Missing Metrics' help to match 1.1 changes (#3353)

* Update 'Missing Metrics' help to match 1.1 changes
Douglas Reid 2019-02-21 17:05:33 -08:00 committed by Martin Taillefer
parent 29b79d84ed
commit ed54669247
1 changed file with 48 additions and 40 deletions


@@ -28,16 +28,16 @@ Mixer generates metrics to monitor its own behavior. The first step is to check
1. Establish a connection to the Mixer self-monitoring endpoint for the `istio-telemetry` deployment. In Kubernetes environments, execute the following command:
{{< text bash >}}
-$ kubectl -n istio-system port-forward <istio-telemetry pod> 10514 &
+$ kubectl -n istio-system port-forward <istio-telemetry pod> 15014 &
{{< /text >}}
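Once the port-forward is running, you can load the self-monitoring page in a browser or pull it from the command line. The following is only a sketch; it assumes the endpoint serves Prometheus-formatted metrics at `/metrics` on the forwarded local port:
{{< text bash >}}
$ curl -s http://localhost:15014/metrics | less
{{< /text >}}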
-1. Verify successful report calls. On the Mixer self-monitoring endpoint, search for `grpc_server_handled_total`. You should see something like:
+1. Verify successful report calls. On the Mixer self-monitoring endpoint, search for `grpc_io_server_completed_rpcs`. You should see something like:
{{< text plain >}}
-grpc_server_handled_total{grpc_code="OK",grpc_method="Report",grpc_service="istio.mixer.v1.Mixer",grpc_type="unary"} 68
+grpc_io_server_completed_rpcs{grpc_server_method="istio.mixer.v1.Mixer/Report",grpc_server_status="OK"} 2532
{{< /text >}}
-If you do not see any data for `grpc_server_handled_total` with a `grpc_method="Report"`, then Envoy is not calling Mixer to report telemetry.
+If you do not see any data for `grpc_io_server_completed_rpcs` with a `grpc_server_method="istio.mixer.v1.Mixer/Report"`, then Envoy is not calling Mixer to report telemetry.
1. In this case, ensure you integrated the services properly into the mesh. You can achieve this task with either [automatic or manual sidecar injection](/docs/setup/kubernetes/sidecar-injection/).
@@ -47,17 +47,18 @@ In Kubernetes environments, issue the following command:
{{< text bash >}}
$ kubectl get rules --all-namespaces
-NAMESPACE NAME AGE
-istio-system kubeattrgenrulerule 13d
-istio-system promhttp 13d
-istio-system promtcp 13d
-istio-system stdio 13d
-istio-system tcpkubeattrgenrulerule 13d
+NAMESPACE NAME AGE
+istio-system kubeattrgenrulerule 4h
+istio-system promhttp 4h
+istio-system promtcp 4h
+istio-system promtcpconnectionclosed 4h
+istio-system promtcpconnectionopen 4h
+istio-system tcpkubeattrgenrulerule 4h
{{< /text >}}
If the output shows no rules named `promhttp` or `promtcp`, then the Mixer configuration for sending metric instances to the Prometheus adapter is missing. You must supply the configuration for rules connecting the Mixer metric instances to a Prometheus handler.
-For reference, please consult the [default rules for Prometheus]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
+For reference, please consult the [default rules for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
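If the rules are listed but metrics are still missing, it can also help to inspect one of them and confirm that it references the expected instances and handler. A sketch, using the `promhttp` rule from the listing above (substitute a name from your own output):
{{< text bash >}}
$ kubectl -n istio-system get rules promhttp -o yaml
{{< /text >}}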
## Verify the Prometheus handler configuration exists
@@ -69,9 +70,18 @@ For reference, please consult the [default rules for Prometheus]({{< github_file
istio-system handler 13d
{{< /text >}}
+Depending on whether your Istio deployment was a fresh install or an upgrade, you may also need to issue the following command:
+{{< text bash >}}
+$ kubectl get handlers.config.istio.io --all-namespaces
+NAMESPACE NAME AGE
+istio-system kubernetesenv 4h
+istio-system prometheus 4h
+{{< /text >}}
1. If the output shows no configured Prometheus handlers, you must reconfigure Mixer with the appropriate handler configuration.
-For reference, please consult the [default handler configuration for Prometheus]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
+For reference, please consult the [default handler configuration for Prometheus]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
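To compare an existing handler against the reference configuration, you can dump it in full. A sketch, assuming the handler is named `prometheus` as in the listing above:
{{< text bash >}}
$ kubectl -n istio-system get handlers.config.istio.io prometheus -o yaml
{{< /text >}}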
## Verify Mixer metric instances configuration exists
@@ -79,18 +89,20 @@ For reference, please consult the [default rules for Prometheus]({{< github_file
{{< text bash >}}
$ kubectl get metrics.config.istio.io --all-namespaces
-NAMESPACE NAME AGE
-istio-system requestcount 13d
-istio-system requestduration 13d
-istio-system requestsize 13d
-istio-system responsesize 13d
-istio-system tcpbytereceived 13d
-istio-system tcpbytesent 13d
+NAMESPACE NAME AGE
+istio-system requestcount 4h
+istio-system requestduration 4h
+istio-system requestsize 4h
+istio-system responsesize 4h
+istio-system tcpbytereceived 4h
+istio-system tcpbytesent 4h
+istio-system tcpconnectionsclosed 4h
+istio-system tcpconnectionsopened 4h
{{< /text >}}
1. If the output shows no configured metric instances, you must reconfigure Mixer with the appropriate instance configuration.
-For reference, please consult the [default instances configuration for metrics]({{< github_file >}}/install/kubernetes/helm/subcharts/mixer/templates/config.yaml).
+For reference, please consult the [default instances configuration for metrics]({{< github_file >}}/install/kubernetes/helm/istio/charts/mixer/templates/config.yaml).
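Because the `tcpconnectionsclosed` and `tcpconnectionsopened` instances are new additions, a quick way to confirm they were created is to filter the listing for them. A sketch:
{{< text bash >}}
$ kubectl -n istio-system get metrics.config.istio.io | grep tcpconnections
{{< /text >}}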
## Verify there are no known configuration errors
@@ -99,44 +111,40 @@ For reference, please consult the [default rules for Prometheus]({{< github_file
1. For each of the following metrics, verify that the most up-to-date value is 0:
-* `mixer_config_adapter_info_config_error_count`
+* `mixer_config_adapter_info_config_errors_total`
-* `mixer_config_handler_validation_error_count`
+* `mixer_config_template_config_errors_total`
-* `mixer_config_instance_config_error_count`
+* `mixer_config_instance_config_errors_total`
-* `mixer_config_rule_config_error_count`
+* `mixer_config_rule_config_errors_total`
-* `mixer_config_rule_config_match_error_count`
+* `mixer_config_rule_config_match_error_total`
-* `mixer_config_unsatisfied_action_handler_count`
+* `mixer_config_unsatisfied_action_handler_total`
-* `mixer_handler_handler_build_failure_count`
+* `mixer_config_handler_validation_error_total`
-On the page showing Mixer self-monitoring port, search for each of the metrics listed above. The matching text should look something like
-the following (using `mixer_config_instance_config_error_count`):
+* `mixer_handler_handler_build_failures_total`
-{{< text plain >}}
-mixer_config_rule_config_match_error_count{configID="-1"} 0
-mixer_config_rule_config_match_error_count{configID="0"} 0
-mixer_config_rule_config_match_error_count{configID="1"} 0
-{{< /text >}}
+On the page showing the Mixer self-monitoring port, search for each of the metrics listed above. You should not find any values for those metrics if everything is
+configured correctly.
-Confirm that the metric value with the largest configuration ID is 0. This will verify that Mixer has generated no errors in processing the configuration as supplied.
+If any of those metrics have a value, confirm that the metric value with the largest configuration ID is 0. This will verify that Mixer has generated no errors
+in processing the most recent configuration as supplied.
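If you prefer not to search the page by hand, you can pull the metrics over the port-forward from the earlier step and filter for the whole family of configuration counters, then check the ones listed above. This is a sketch, assuming the forwarded local port 15014:
{{< text bash >}}
$ curl -s http://localhost:15014/metrics | grep -E 'mixer_(config|handler)_.*_total'
{{< /text >}}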
## Verify Mixer is sending metric instances to the Prometheus adapter
1. Establish a connection to the `istio-telemetry` self-monitoring endpoint. Set up a port-forward to the `istio-telemetry` self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
-1. On the Mixer self-monitoring port, search for `mixer_runtime_dispatch_count`. The output should be similar to:
+1. On the Mixer self-monitoring port, search for `mixer_runtime_dispatches_total`. The output should be similar to:
{{< text plain >}}
-mixer_runtime_dispatch_count{adapter="prometheus",error="false",handler="handler.prometheus.istio-system",meshFunction="metric"} 916
-mixer_runtime_dispatch_count{adapter="prometheus",error="true",handler="handler.prometheus.istio-system",meshFunction="metric"} 0
+mixer_runtime_dispatches_total{adapter="prometheus",error="false",handler="prometheus.istio-system",meshFunction="metric"} 2532
{{< /text >}}
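The same counter can also be pulled directly over the port-forward rather than searched for on the page; a sketch, again assuming local port 15014:
{{< text bash >}}
$ curl -s http://localhost:15014/metrics | grep mixer_runtime_dispatches_total
{{< /text >}}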
-1. Confirm that `mixer_runtime_dispatch_count` is present with the values:
+1. Confirm that `mixer_runtime_dispatches_total` is present with the values:
{{< text plain >}}
adapter="prometheus"
@@ -146,7 +154,7 @@ Confirm that the metric value with the largest configuration ID is 0. This will
If you can't find recorded dispatches to the Prometheus adapter, there is likely a configuration issue. Please follow the steps above
to ensure everything is configured properly.
-If the dispatches to the Prometheus adapter report errors, check the Mixer logs to determine the source of the error. The most likely cause is a configuration issue for the handler listed in mixer_runtime_dispatch_count.
+If the dispatches to the Prometheus adapter report errors, check the Mixer logs to determine the source of the error. The most likely cause is a configuration issue for the handler listed in `mixer_runtime_dispatches_total`.
1. Check the Mixer logs in a Kubernetes environment with:
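A sketch, assuming the Mixer container in the `istio-telemetry` pod is named `mixer`:
{{< text bash >}}
$ kubectl -n istio-system logs <istio-telemetry pod> -c mixer
{{< /text >}}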