Remove deprecated mixer for observability (#8040)

* Remove deprecated mixer for observability

* Fix lint

* remove double #

* Fix lint
Shamsher Ansari 2020-09-02 20:35:46 +05:30 committed by GitHub
parent 28d61ab090
commit c32f55cbaf
15 changed files with 7 additions and 1202 deletions

View File

@ -1,7 +1,7 @@
<!-- WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. UPDATE THE OWNER ATTRIBUTE IN THE DOCUMENT FILES, INSTEAD -->
# Istio.io Document Owners
There are 158 owned istio.io docs.
There are 154 owned istio.io docs.
## istio/wg-docs-maintainers: 15 docs
@ -89,7 +89,7 @@ There are 158 owned istio.io docs.
- [docs/tasks/traffic-management/tcp-traffic-shifting/index.md](https://preliminary.istio.io/latest/docs/tasks/traffic-management/tcp-traffic-shifting)
- [docs/tasks/traffic-management/traffic-shifting/index.md](https://preliminary.istio.io/latest/docs/tasks/traffic-management/traffic-shifting)
## istio/wg-policies-and-telemetry-maintainers: 32 docs
## istio/wg-policies-and-telemetry-maintainers: 28 docs
- [docs/concepts/observability/index.md](https://preliminary.istio.io/latest/docs/concepts/observability)
- [docs/concepts/wasm/index.md](https://preliminary.istio.io/latest/docs/concepts/wasm)
@ -115,10 +115,6 @@ There are 158 owned istio.io docs.
- [docs/tasks/observability/metrics/querying-metrics/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/metrics/querying-metrics)
- [docs/tasks/observability/metrics/tcp-metrics/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/metrics/tcp-metrics)
- [docs/tasks/observability/metrics/using-istio-dashboard/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/metrics/using-istio-dashboard)
- [docs/tasks/observability/mixer/logs/collecting-logs/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/mixer/logs/collecting-logs)
- [docs/tasks/observability/mixer/logs/fluentd/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/mixer/logs/fluentd)
- [docs/tasks/observability/mixer/metrics/collecting-metrics/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/mixer/metrics/collecting-metrics)
- [docs/tasks/observability/mixer/metrics/tcp-metrics/index.md](https://preliminary.istio.io/latest/docs/tasks/observability/mixer/metrics/tcp-metrics)
- [docs/tasks/policy-enforcement/control-headers/index.md](https://preliminary.istio.io/latest/docs/tasks/policy-enforcement/control-headers)
- [docs/tasks/policy-enforcement/denial-and-list/index.md](https://preliminary.istio.io/latest/docs/tasks/policy-enforcement/denial-and-list)
- [docs/tasks/policy-enforcement/enabling-policy/index.md](https://preliminary.istio.io/latest/docs/tasks/policy-enforcement/enabling-policy)

View File

@ -54,7 +54,6 @@ Below is our list of existing features and their current phases. This information
| Feature | Phase
|-------------------|-------------------
| [Prometheus Integration](/docs/tasks/observability/metrics/querying-metrics/) | Stable
| [Local Logging (STDIO)](/docs/tasks/observability/mixer/logs/collecting-logs/) | Stable
| [Statsd Integration](/docs/reference/config/policy-and-telemetry/adapters/statsd/) | Stable
| [Client and Server Telemetry Reporting](/docs/reference/config/policy-and-telemetry/) | Stable
| [Service Dashboard in Grafana](/docs/tasks/observability/metrics/using-istio-dashboard/) | Stable
@ -62,7 +61,6 @@ Below is our list of existing features and their current phases. This information
| [Distributed Tracing](/docs/tasks/observability/distributed-tracing/) | Stable
| [Stackdriver Integration](/docs/reference/config/policy-and-telemetry/adapters/stackdriver/) | Beta
| [Distributed Tracing to Zipkin / Jaeger](/docs/tasks/observability/distributed-tracing/) | Beta
| [Logging with Fluentd](/docs/tasks/observability/mixer/logs/fluentd/) | Beta
| [Trace Sampling](/docs/tasks/observability/distributed-tracing/configurability/#trace-sampling) | Beta
### Security and policy enforcement

View File

@ -40,7 +40,6 @@ will prevent any possibility for a malicious application to access the forbidden
* The [Egress Gateway with TLS Origination](/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/) example
demonstrates how to allow applications to send HTTP requests to external servers that require HTTPS, while directing
traffic through an egress gateway.
* The [Collecting Metrics](/docs/tasks/observability/mixer/metrics/collecting-metrics/) task describes how to configure metrics for services in a mesh.
* The [Visualizing Metrics with Grafana](/docs/tasks/observability/metrics/using-istio-dashboard/)
task describes how to use the Istio Dashboard to monitor mesh traffic.
* The [Basic Access Control](/docs/tasks/policy-enforcement/denial-and-list/) task shows how to control access to

View File

@ -45,33 +45,7 @@ service. Some common time synchronization systems are NTP and Chrony. This is especially
problematic in engineering labs with firewalls. In these scenarios, NTP may not be configured
properly to point at the lab-based NTP services.
## Mixer Telemetry Issues
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}
### Expected metrics are not being collected
The following procedure helps you diagnose problems where metrics
you are expecting to see reported are not being collected.
The expected flow for metrics is:
1. Envoy reports attributes from requests asynchronously to Mixer in a batch.
1. Mixer translates the attributes into instances based on the operator-provided configuration.
1. Mixer hands the instances to Mixer adapters for processing and backend storage.
1. The backend storage systems record the metrics data.
The Mixer default installations include a Prometheus adapter and the configuration to generate a [default set of metric values](/docs/reference/config/policy-and-telemetry/metrics/) and send them to the Prometheus adapter. The Prometheus adapter configuration enables a Prometheus instance to scrape Mixer for metrics.
If the Istio Dashboard or the Prometheus queries don't show the expected metrics, any step of the flow above may present an issue. The following sections provide instructions to troubleshoot each step.
#### Verify Istio CNI pods are running (if used)
## Verify Istio CNI pods are running (if used)
The Istio CNI plugin performs the Istio mesh pod traffic redirection in the Kubernetes pod lifecycle's network setup phase, thereby removing the [requirement for the `NET_ADMIN` and `NET_RAW` capabilities](/docs/ops/deployment/requirements/) for users deploying pods into the Istio mesh. The Istio CNI plugin replaces the functionality provided by the `istio-init` container.
@ -81,171 +55,4 @@ The Istio CNI plugin performs the Istio mesh pod traffic redirection in the Kube
$ kubectl -n kube-system get pod -l k8s-app=istio-cni-node
{{< /text >}}
1. If `PodSecurityPolicy` is being enforced in your cluster, ensure the `istio-cni` service account can use a `PodSecurityPolicy` which [allows the `NET_ADMIN` and `NET_RAW` capabilities](/docs/ops/deployment/requirements/)
#### Verify Mixer is receiving report calls
Mixer generates metrics to monitor its own behavior. The first step is to check these metrics:
1. Establish a connection to the Mixer self-monitoring endpoint for the `istio-telemetry` deployment. In Kubernetes environments, execute the following command:
{{< text bash >}}
$ kubectl -n istio-system port-forward <istio-telemetry pod> 15014 &
{{< /text >}}
1. Verify successful report calls. On the Mixer self-monitoring endpoint (`http://localhost:15014/metrics`), search for `grpc_io_server_completed_rpcs`. You should see something like:
{{< text plain >}}
grpc_io_server_completed_rpcs{grpc_server_method="istio.mixer.v1.Mixer/Report",grpc_server_status="OK"} 2532
{{< /text >}}
If you do not see any data for `grpc_io_server_completed_rpcs` with a `grpc_server_method="istio.mixer.v1.Mixer/Report"`, then Envoy is not calling Mixer to report telemetry.
1. In this case, ensure you integrated the services properly into the mesh. You can achieve this task with either [automatic or manual sidecar injection](/docs/setup/additional-setup/sidecar-injection/).
#### Verify the Mixer rules exist
In Kubernetes environments, issue the following command:
{{< text bash >}}
$ kubectl get rules --all-namespaces
NAMESPACE NAME AGE
istio-system kubeattrgenrulerule 4h
istio-system promhttp 4h
istio-system promtcp 4h
istio-system promtcpconnectionclosed 4h
istio-system promtcpconnectionopen 4h
istio-system tcpkubeattrgenrulerule 4h
{{< /text >}}
If the output shows no rules named `promhttp` or `promtcp`, then the Mixer configuration for sending metric instances to the Prometheus adapter is missing. You must supply the configuration for rules connecting the Mixer metric instances to a Prometheus handler.
For reference, please consult the [default rules for Prometheus]({{< github_file >}}/manifests/istio-telemetry/mixer-telemetry/templates/config.yaml).
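For orientation, such a rule is declared roughly as follows; the rule name, match expression, and instance list here are illustrative rather than a verbatim copy of the shipped defaults:
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # Only dispatch HTTP/gRPC traffic to the Prometheus handler.
  match: 'context.protocol == "http" || context.protocol == "grpc"'
  actions:
  - handler: prometheus
    instances:
    - requestcount
    - requestduration
    - requestsize
    - responsesize
{{< /text >}}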
#### Verify the Prometheus handler configuration exists
1. In Kubernetes environments, issue the following command:
{{< text bash >}}
$ kubectl get handlers.config.istio.io --all-namespaces
NAMESPACE NAME AGE
istio-system kubernetesenv 4h
istio-system prometheus 4h
{{< /text >}}
If you're upgrading from Istio 1.1 or earlier, issue the following command instead:
{{< text bash >}}
$ kubectl get prometheuses.config.istio.io --all-namespaces
NAMESPACE NAME AGE
istio-system handler 13d
{{< /text >}}
1. If the output shows no configured Prometheus handlers, you must reconfigure Mixer with the appropriate handler configuration.
For reference, please consult the [default handler configuration for Prometheus]({{< github_file >}}/manifests/istio-telemetry/mixer-telemetry/templates/config.yaml).
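As a rough sketch (the metric and label names are abbreviated and illustrative, not the full shipped configuration), a Prometheus handler is declared along these lines:
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: prometheus
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: requests_total   # surfaced in Prometheus as istio_requests_total
      instance_name: requestcount.instance.istio-system
      kind: COUNTER
      label_names:
      - reporter
      - source_workload
      - destination_workload
      - response_code
{{< /text >}}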
#### Verify Mixer metric instances configuration exists
1. In Kubernetes environments, issue the following command:
{{< text bash >}}
$ kubectl get instances -o custom-columns=NAME:.metadata.name,TEMPLATE:.spec.compiledTemplate --all-namespaces
{{< /text >}}
If you're upgrading from Istio 1.1 or earlier, issue the following command instead:
{{< text bash >}}
$ kubectl get metrics.config.istio.io --all-namespaces
{{< /text >}}
1. If the output shows no configured metric instances, you must reconfigure Mixer with the appropriate instance configuration.
For reference, please consult the [default instances configuration for metrics]({{< github_file >}}/manifests/istio-telemetry/mixer-telemetry/templates/config.yaml).
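For orientation, a metric instance is declared roughly as follows (dimensions abbreviated; treat this as a sketch rather than the exact default instance):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcount
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    value: "1"   # one unit recorded per request
    dimensions:
      reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
      source_workload: source.workload.name | "unknown"
      destination_workload: destination.workload.name | "unknown"
      response_code: response.code | 200
    monitored_resource_type: '"UNSPECIFIED"'
{{< /text >}}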
#### Verify there are no known configuration errors
1. To establish a connection to the Istio-telemetry self-monitoring endpoint, set up a port-forward to the Istio-telemetry self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
1. For each of the following metrics, verify that the most up-to-date value is 0:
* `mixer_config_adapter_info_config_errors_total`
* `mixer_config_template_config_errors_total`
* `mixer_config_instance_config_errors_total`
* `mixer_config_rule_config_errors_total`
* `mixer_config_rule_config_match_error_total`
* `mixer_config_unsatisfied_action_handler_total`
* `mixer_config_handler_validation_error_total`
* `mixer_handler_handler_build_failures_total`
On the page showing the Mixer self-monitoring port, search for each of the metrics listed above. You should not find any values for those metrics if everything is
configured correctly.
If any of those metrics have a value, confirm that the metric value with the largest configuration ID is 0. This will verify that Mixer has generated no errors
in processing the most recent configuration as supplied.
#### Verify Mixer is sending metric instances to the Prometheus adapter
1. Establish a connection to the `istio-telemetry` self-monitoring endpoint. Set up a port-forward to the `istio-telemetry` self-monitoring port as described in
[Verify Mixer is receiving Report calls](#verify-mixer-is-receiving-report-calls).
1. On the Mixer self-monitoring port, search for `mixer_runtime_dispatches_total`. The output should be similar to:
{{< text plain >}}
mixer_runtime_dispatches_total{adapter="prometheus",error="false",handler="prometheus.istio-system",meshFunction="metric"} 2532
{{< /text >}}
1. Confirm that `mixer_runtime_dispatches_total` is present with the values:
{{< text plain >}}
adapter="prometheus"
error="false"
{{< /text >}}
If you can't find recorded dispatches to the Prometheus adapter, there is likely a configuration issue. Please follow the steps above
to ensure everything is configured properly.
If the dispatches to the Prometheus adapter report errors, check the Mixer logs to determine the source of the error. The most likely cause is a configuration issue for the handler listed in `mixer_runtime_dispatches_total`.
1. Check the Mixer logs in a Kubernetes environment with:
{{< text bash >}}
$ kubectl -n istio-system logs <istio-telemetry pod> -c mixer
{{< /text >}}
#### Verify Prometheus configuration
1. Connect to the Prometheus UI
1. Verify you can successfully scrape Mixer through the UI.
1. In Kubernetes environments, set up port-forwarding with:
{{< text bash >}}
$ istioctl dashboard prometheus
{{< /text >}}
1. In the Prometheus browser window, select **Status** then **Targets**.
1. Confirm the target `istio-mesh` has a status of UP.
1. In the Prometheus browser window, select **Status** then **Configuration**.
1. Confirm an entry exists similar to:
{{< text plain >}}
- job_name: 'istio-mesh'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['istio-mixer.istio-system:42422']
{{< /text >}}
1. If `PodSecurityPolicy` is being enforced in your cluster, ensure the `istio-cni` service account can use a `PodSecurityPolicy` which [allows the `NET_ADMIN` and `NET_RAW` capabilities](/docs/ops/deployment/requirements/).

View File

@ -75,12 +75,10 @@ The following ports and protocols are used by Istio.
| 15012 | GRPC | Istiod | XDS and CA services (TLS) |
| 8080 | HTTP | Istiod | Debug interface |
| 443 | HTTPS | Istiod | Webhooks |
| 15014 | HTTP | Mixer, Istiod | Control plane monitoring |
| 15014 | HTTP | Istiod | Control plane monitoring |
| 15443 | TLS | Ingress and Egress Gateways | SNI |
| 9090 | HTTP | Prometheus | Prometheus |
| 42422 | TCP | Mixer | Telemetry - Prometheus |
| 15004 | HTTP | Mixer, Pilot | Policy/Telemetry - `mTLS` |
| 9091 | HTTP | Mixer | Policy/Telemetry |
| 15004 | HTTP | Istiod | Policy/Telemetry - `mTLS` |
## Required pod capabilities

View File

@ -24,7 +24,7 @@ As an example, as of this writing, `istioctl` has 24 scopes, representing differ
- `ads`, `adsc`, `analysis`, `attributes`, `authn`, `authorization`, `cache`, `cli`, `default`, `grpcAdapter`, `installer`, `mcp`, `model`, `patch`, `processing`, `resource`, `secretfetcher`, `source`, `spiffe`, `tpath`, `translator`, `util`, `validation`, `validationController`
Pilot-Agent, Pilot-Discovery, Mixer, and the Istio Operator have their own scopes which you can discover by looking at their [reference documentation](/docs/reference/commands/).
Pilot-Agent, Pilot-Discovery, and the Istio Operator have their own scopes which you can discover by looking at their [reference documentation](/docs/reference/commands/).
Each scope has a unique output level which is one of:

View File

@ -1,14 +0,0 @@
---
title: Using Mixer for Telemetry (deprecated)
description: Demonstrates how to collect telemetry information from the mesh using Mixer.
weight: 99
aliases:
- /docs/examples/telemetry/
- /docs/tasks/telemetry/
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}

View File

@ -1,11 +0,0 @@
---
title: Logs
description: Demonstrates the configuration, collection, and processing of Istio mesh logs.
weight: 20
aliases:
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}

View File

@ -1,146 +0,0 @@
---
title: Collecting Logs with Mixer
description: This task shows you how to configure Istio's Mixer to collect and customize logs.
weight: 10
keywords: [telemetry,logs]
aliases:
- /docs/tasks/observability/logs/collecting-logs/
- /docs/tasks/telemetry/logs/collecting-logs/
owner: istio/wg-policies-and-telemetry-maintainers
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}
This task shows how to configure Istio to automatically gather telemetry for
services in a mesh. At the end of this task, a new log stream will be enabled
for calls to services within your mesh.
The [Bookinfo](/docs/examples/bookinfo/) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio](/docs/setup) in your cluster and deploy an
application. This task assumes that Mixer is set up in a default configuration
(`--configDefaultNamespace=istio-system`). If you use a different
value, update the configuration and commands in this task to match the value.
## Collecting new logs data
1. Apply a YAML file with configuration for the new log
stream that Istio will generate and collect automatically.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/log-entry.yaml@
{{< /text >}}
{{< warning >}}
If you use Istio 1.1.2 or prior, please use the following configuration instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/log-entry-crd.yaml@
{{< /text >}}
{{< /warning >}}
1. Send traffic to the sample application.
For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web
browser or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. Verify that the log stream has been created and is being populated for
requests.
In a Kubernetes environment, search through the logs for the `istio-telemetry` pods as
follows:
{{< text bash json >}}
$ kubectl logs -n istio-system -l istio-mixer-type=telemetry -c mixer | grep "newlog" | grep -v '"destination":"telemetry"' | grep -v '"destination":"pilot"' | grep -v '"destination":"policy"' | grep -v '"destination":"unknown"'
{"level":"warn","time":"2018-09-15T20:46:36.009801Z","instance":"newlog.xxxxx.istio-system","destination":"details","latency":"13.601485ms","responseCode":200,"responseSize":178,"source":"productpage","user":"unknown"}
{"level":"warn","time":"2018-09-15T20:46:36.026993Z","instance":"newlog.xxxxx.istio-system","destination":"reviews","latency":"919.482857ms","responseCode":200,"responseSize":295,"source":"productpage","user":"unknown"}
{"level":"warn","time":"2018-09-15T20:46:35.982761Z","instance":"newlog.xxxxx.istio-system","destination":"productpage","latency":"968.030256ms","responseCode":200,"responseSize":4415,"source":"istio-ingressgateway","user":"unknown"}
{{< /text >}}
## Understanding the logs configuration
In this task, you added Istio configuration that instructed Mixer to
automatically generate and report a new log stream for all
traffic within the mesh.
The added configuration controlled three pieces of Mixer functionality:
1. Generation of *instances* (in this example, log entries)
from Istio attributes
1. Creation of *handlers* (configured Mixer adapters) capable of processing
generated *instances*
1. Dispatch of *instances* to *handlers* according to a set of *rules*
The logs configuration directs Mixer to send log entries to stdout. It uses
three stanzas (or blocks) of configuration: *instance* configuration, *handler*
configuration, and *rule* configuration.
The `kind: instance` stanza of configuration defines a schema for generated log entries
(or *instances*) named `newlog`. This instance configuration tells Mixer _how_
to generate log entries for requests based on the attributes reported by Envoy.
The `severity` parameter is used to indicate the log level for any generated
`logentry`. In this example, a literal value of `"warning"` is used. This value will
be mapped to supported logging levels by a `logentry` *handler*.
The `timestamp` parameter provides time information for all log entries. In this
example, the time is provided by the attribute value of `request.time`, as
provided by Envoy.
The `variables` parameter allows operators to configure what values should be
included in each `logentry`. A set of expressions controls the mapping from Istio
attributes and literal values into the values that constitute a `logentry`.
In this example, each `logentry` instance has a field named `latency` populated
with the value from the attribute `response.duration`. If there is no known
value for `response.duration`, the `latency` field will be set to a duration of
`0ms`.
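For reference, a minimal sketch of this `kind: instance` stanza, loosely following the sample configuration (the exact expressions in the sample file may differ):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newlog
  namespace: istio-system
spec:
  compiledTemplate: logentry
  params:
    severity: '"warning"'     # literal value, mapped to a log level by the handler
    timestamp: request.time   # attribute reported by Envoy
    variables:
      source: source.labels["app"] | source.workload.name | "unknown"
      user: source.user | "unknown"
      destination: destination.labels["app"] | destination.workload.name | "unknown"
      responseCode: response.code | 0
      responseSize: response.size | 0
      latency: response.duration | "0ms"   # default used when response.duration is unknown
    monitored_resource_type: '"UNSPECIFIED"'
{{< /text >}}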
The `kind: handler` stanza of configuration defines a *handler* named `newloghandler`. The
handler `spec` configures how the `stdio` compiled adapter code processes received
`logentry` instances. The `severity_levels` parameter controls how `logentry`
values for the `severity` field are mapped to supported logging levels. Here,
the value of `"warning"` is mapped to the `WARNING` log level. The
`outputAsJson` parameter directs the adapter to generate JSON-formatted log
lines.
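A corresponding sketch of the `kind: handler` stanza (again illustrative rather than a verbatim copy of the sample):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: newloghandler
  namespace: istio-system
spec:
  compiledAdapter: stdio
  params:
    severity_levels:
      warning: 1       # map the literal "warning" severity to the WARNING log level
    outputAsJson: true # emit JSON-formatted log lines
{{< /text >}}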
The `kind: rule` stanza of configuration defines a new *rule* named `newlogstdio`. The
rule directs Mixer to send all `newlog` instances to the
`newloghandler` handler. Because the `match` parameter is set to `true`, the
rule is executed for all requests in the mesh.
A `match: true` expression in the rule specification is not required to
configure a rule to be executed for all requests. Omitting the entire `match`
parameter from the `spec` is equivalent to setting `match: true`. It is included
here to illustrate how to use `match` expressions to control rule execution.
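And a sketch of the `kind: rule` stanza tying the instance and handler together:
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: newlogstdio
  namespace: istio-system
spec:
  match: "true"   # execute for all requests; omitting match has the same effect
  actions:
  - handler: newloghandler
    instances:
    - newlog
{{< /text >}}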
## Cleanup
* Remove the new logs configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/log-entry.yaml@
{{< /text >}}
If you are using Istio 1.1.2 or prior:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/log-entry-crd.yaml@
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions
to shut down the application.

View File

@ -1,395 +0,0 @@
---
title: Logging with Mixer and Fluentd
description: This task shows you how to configure Istio's Mixer to log to a Fluentd daemon.
weight: 90
keywords: [telemetry,logging]
aliases:
- /docs/tasks/observability/logs/fluentd/
- /docs/tasks/telemetry/fluentd/
- /docs/tasks/telemetry/logs/fluentd/
owner: istio/wg-policies-and-telemetry-maintainers
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}
This task shows how to configure Istio to create custom log entries
and send them to a [Fluentd](https://www.fluentd.org/) daemon. Fluentd
is an open source log collector that supports many [data
outputs](https://www.fluentd.org/dataoutputs) and has a pluggable
architecture. One popular logging backend is
[Elasticsearch](https://www.elastic.co/products/elasticsearch), and
[Kibana](https://www.elastic.co/products/kibana) as a viewer. At the
end of this task, a new log stream will be enabled sending logs to an
example Fluentd / Elasticsearch / Kibana stack.
The [Bookinfo](/docs/examples/bookinfo/) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio](/docs/setup/) in your cluster and deploy an
application. This task assumes that Mixer is set up in a default configuration
(`--configDefaultNamespace=istio-system`). If you use a different
value, update the configuration and commands in this task to match the value.
## Setup Fluentd
In your cluster, you may already have a Fluentd daemon set running,
such as the add-on described
[here](https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)
and
[here](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch),
or something specific to your cluster provider. This is likely
configured to send logs to an Elasticsearch system or logging
provider.
You may use these Fluentd daemons, or any other Fluentd daemon you
have set up, as long as they are listening for forwarded logs, and
Istio's Mixer is able to connect to them. In order for Mixer to
connect to a running Fluentd daemon, you may need to add a
[service](https://kubernetes.io/docs/concepts/services-networking/service/)
for Fluentd. The Fluentd configuration to listen for forwarded logs
is:
{{< text xml >}}
<source>
type forward
</source>
{{< /text >}}
The full details of connecting Mixer to all possible Fluentd
configurations is beyond the scope of this task.
### Example Fluentd, Elasticsearch, Kibana Stack
For the purposes of this task, you may deploy the example stack
provided. This stack includes Fluentd, Elasticsearch, and Kibana in a
non-production-ready set of
[Services](https://kubernetes.io/docs/concepts/services-networking/service/)
and
[Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
all in a new
[Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
called `logging`.
Save the following as `logging-stack.yaml`.
{{< text yaml >}}
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
name: elasticsearch
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: discovery.type
value: single-node
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: elasticsearch
mountPath: /data
volumes:
- name: elasticsearch
emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
name: fluentd-es
namespace: logging
labels:
app: fluentd-es
spec:
ports:
- name: fluentd-tcp
port: 24224
protocol: TCP
targetPort: 24224
- name: fluentd-udp
port: 24224
protocol: UDP
targetPort: 24224
selector:
app: fluentd-es
---
# Fluentd Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: fluentd-es
namespace: logging
labels:
app: fluentd-es
spec:
replicas: 1
selector:
matchLabels:
app: fluentd-es
template:
metadata:
labels:
app: fluentd-es
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: fluentd-es
image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
env:
- name: FLUENTD_ARGS
value: --no-supervisor -q
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: config-volume
mountPath: /etc/fluent/config.d
terminationGracePeriodSeconds: 30
volumes:
- name: config-volume
configMap:
name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
forward.input.conf: |-
# Takes the messages sent over TCP
<source>
type forward
</source>
output.conf: |-
<match **>
type elasticsearch
log_level info
include_tag_key true
host elasticsearch
port 9200
logstash_format true
# Set the chunk limits.
buffer_chunk_limit 2M
buffer_queue_limit 8
flush_interval 5s
# Never wait longer than 5 minutes between retries.
max_retry_wait 30
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
# Use multiple threads for processing.
num_threads 2
</match>
metadata:
name: fluentd-es-config
namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
app: kibana
---
# Kibana Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana-oss:6.1.1
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
{{< /text >}}
Create the resources:
{{< text bash >}}
$ kubectl apply -f logging-stack.yaml
namespace "logging" created
service "elasticsearch" created
deployment "elasticsearch" created
service "fluentd-es" created
deployment "fluentd-es" created
configmap "fluentd-es-config" created
service "kibana" created
deployment "kibana" created
{{< /text >}}
## Configure Istio
Now that there is a running Fluentd daemon, configure Istio with a new
log type, and send those logs to the listening daemon. Apply a
YAML file with configuration for the log stream that
Istio will generate and collect automatically:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/fluentd-istio.yaml@
{{< /text >}}
{{< warning >}}
If you use Istio 1.1.2 or prior, please use the following configuration instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/fluentd-istio-crd.yaml@
{{< /text >}}
{{< /warning >}}
Notice that the `address: "fluentd-es.logging:24224"` line in the
handler configuration is pointing to the Fluentd daemon we set up in the
example stack.
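For illustration, the relevant part of that handler configuration looks roughly like the following (the handler name and surrounding fields are a sketch, not a verbatim copy of the sample):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: handler
  namespace: istio-system
spec:
  compiledAdapter: fluentd
  params:
    address: "fluentd-es.logging:24224"  # the Fluentd service created by logging-stack.yaml
{{< /text >}}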
## View the new logs
1. Send traffic to the sample application.
For the
[Bookinfo](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port)
sample, visit `http://$GATEWAY_URL/productpage` in your web browser
or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. In a Kubernetes environment, set up port-forwarding for Kibana by
executing the following command:
{{< text bash >}}
$ kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601 &
{{< /text >}}
Leave the command running. Press Ctrl-C to exit when done accessing the Kibana UI.
1. Navigate to the [Kibana UI](http://localhost:5601/) and click "Set up index patterns" in the top right.
1. Use `*` as the index pattern, and click "Next step".
1. Select `@timestamp` as the Time Filter field name, and click "Create index pattern".
1. Now click "Discover" in the left menu, and start exploring the generated logs.
## Cleanup
* Remove the new telemetry configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/fluentd-istio.yaml@
{{< /text >}}
If you are using Istio 1.1.2 or prior:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/fluentd-istio-crd.yaml@
{{< /text >}}
* Remove the example Fluentd, Elasticsearch, Kibana stack:
{{< text bash >}}
$ kubectl delete -f logging-stack.yaml
{{< /text >}}
* Remove any `kubectl port-forward` processes that may still be running:
{{< text bash >}}
$ killall kubectl
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions
to shut down the application.

View File

@ -1,11 +0,0 @@
---
title: Metrics
description: Demonstrates the configuration, collection, and processing of Istio mesh metrics using Mixer.
weight: 1
aliases:
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{</ warning>}}

View File

@ -1,196 +0,0 @@
---
title: Collecting Metrics With Mixer
description: This task shows you how to configure Istio's Mixer to collect and customize metrics.
weight: 10
keywords: [telemetry,metrics]
aliases:
- /docs/tasks/metrics-logs.html
- /docs/tasks/telemetry/metrics-logs/
- /docs/tasks/telemetry/metrics/collecting-metrics/
- /docs/tasks/observability/metrics/collecting-metrics/
owner: istio/wg-policies-and-telemetry-maintainers
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{< /warning>}}
This task shows how to configure Istio to automatically gather telemetry for
services in a mesh. At the end of this task, a new metric will be enabled for
calls to services within your mesh.
The [Bookinfo](/docs/examples/bookinfo/) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio](/docs/setup) with Mixer enabled in your cluster and deploy an
application.
The *custom* configuration needed to use Mixer for telemetry is:
{{< text yaml >}}
values:
prometheus:
enabled: true
telemetry:
v1:
enabled: true
v2:
enabled: false
components:
citadel:
enabled: true
telemetry:
enabled: true
{{< /text >}}
Please see the guide on [Customizing the configuration](/docs/setup/install/istioctl/#customizing-the-configuration)
for information on how to apply these settings.
Once the configuration has been applied, confirm a telemetry-focused instance of Mixer is running:
{{< text bash >}}
$ kubectl -n istio-system get service istio-telemetry
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-telemetry ClusterIP 10.4.31.226 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 80s
{{< /text >}}
## Collecting new metrics
1. Apply a YAML file with configuration for the new metric
that Istio will generate and collect automatically.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/metrics.yaml@
{{< /text >}}
{{< warning >}}
If you use Istio 1.1.2 or prior, please use the following configuration instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/metrics-crd.yaml@
{{< /text >}}
{{< /warning >}}
1. Send traffic to the sample application.
For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web
browser or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. Verify that the new metric values are being generated and collected.
In a Kubernetes environment, set up port-forwarding for Prometheus by
executing the following command:
{{< text bash >}}
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
{{< /text >}}
View values for the new metric in the Prometheus browser window. Select **Graph**.
Enter the `istio_double_request_count` metric and select **Execute**.
The table displayed in the
**Console** tab includes entries similar to:
{{< text plain >}}
istio_double_request_count{destination="details-v1",instance="172.17.0.12:42422",job="istio-mesh",message="twice the fun!",reporter="client",source="productpage-v1"} 8
istio_double_request_count{destination="details-v1",instance="172.17.0.12:42422",job="istio-mesh",message="twice the fun!",reporter="server",source="productpage-v1"} 8
istio_double_request_count{destination="istio-policy",instance="172.17.0.12:42422",job="istio-mesh",message="twice the fun!",reporter="server",source="details-v1"} 4
istio_double_request_count{destination="istio-policy",instance="172.17.0.12:42422",job="istio-mesh",message="twice the fun!",reporter="server",source="istio-ingressgateway"} 4
{{< /text >}}
For more on querying Prometheus for metric values, see the
[Querying Istio Metrics](/docs/tasks/observability/metrics/querying-metrics/) task.
## Understanding the metrics configuration
In this task, you added Istio configuration that instructed Mixer to
automatically generate and report a new metric for all
traffic within the mesh.
The added configuration controlled three pieces of Mixer functionality:
1. Generation of *instances* (in this example, metric values)
from Istio attributes
1. Creation of *handlers* (configured Mixer adapters) capable of processing
generated *instances*
1. Dispatch of *instances* to *handlers* according to a set of *rules*
The metrics configuration directs Mixer to send metric values to Prometheus. It
uses three stanzas (or blocks) of configuration: *instance* configuration,
*handler* configuration, and *rule* configuration.
The `kind: instance` stanza of configuration defines a schema for generated metric values
(or *instances*) for a new metric named `doublerequestcount`. This instance
configuration tells Mixer _how_ to generate metric values for any given request,
based on the attributes reported by Envoy (and generated by Mixer itself).
For each instance of `doublerequestcount`, the configuration directs Mixer to
supply a value of `2` for the instance. Because Istio generates an instance for
each request, this means that this metric records a value equal to twice the
total number of requests received.
A set of `dimensions` are specified for each `doublerequestcount`
instance. Dimensions provide a way to slice, aggregate, and analyze metric data
according to different needs and directions of inquiry. For instance, it may be
desirable to only consider requests for a certain destination service when
troubleshooting application behavior.
The configuration instructs Mixer to populate values for these dimensions based
on attribute values and literal values. For instance, for the `source`
dimension, the new configuration requests that the value be taken from the
`source.workload.name` attribute. If that attribute value is not populated, the rule
instructs Mixer to use a default value of `"unknown"`. For the `message`
dimension, a literal value of `"twice the fun!"` will be used for all instances.
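For reference, a sketch of this `kind: instance` stanza, loosely following the sample configuration (the exact dimension expressions in the sample file may differ):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    value: "2"   # count each request twice
    dimensions:
      reporter: conditional((context.reporter.kind | "inbound") == "outbound", "client", "server")
      source: source.workload.name | "unknown"
      destination: destination.workload.name | "unknown"
      message: '"twice the fun!"'
    monitored_resource_type: '"UNSPECIFIED"'
{{< /text >}}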
The `kind: handler` stanza of configuration defines a *handler* named
`doublehandler`. The handler `spec` configures how the Prometheus adapter code
translates received metric instances into Prometheus-formatted values that can
be processed by a Prometheus backend. This configuration specified a new
Prometheus metric named `double_request_count`. The Prometheus adapter prepends
the `istio_` namespace to all metric names, therefore this metric will show up
in Prometheus as `istio_double_request_count`. The metric has three labels
matching the dimensions configured for `doublerequestcount` instances.
Mixer instances are matched to Prometheus metrics via the `instance_name` parameter.
The `instance_name` values must be the fully-qualified name for Mixer instances (example:
`doublerequestcount.instance.istio-system`).
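A corresponding sketch of the `kind: handler` stanza (the label names mirror the dimensions above; details are illustrative):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: double_request_count   # surfaced in Prometheus as istio_double_request_count
      instance_name: doublerequestcount.instance.istio-system
      kind: COUNTER
      label_names:
      - reporter
      - source
      - destination
      - message
{{< /text >}}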
The `kind: rule` stanza of configuration defines a new *rule* named `doubleprom`. The
rule directs Mixer to send all `doublerequestcount` instances to the
`doublehandler` handler. Because there is no `match` clause in the
rule, and because the rule is in the configured default configuration namespace
(`istio-system`), the rule is executed for all requests in the mesh.
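And a sketch of the `kind: rule` stanza (illustrative):
{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler
    instances:
    - doublerequestcount
{{< /text >}}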
## Cleanup
* Remove the new metrics configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/metrics.yaml@
{{< /text >}}
If you are using Istio 1.1.2 or prior:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/metrics-crd.yaml@
{{< /text >}}
* Remove any `kubectl port-forward` processes that may still be running:
{{< text bash >}}
$ killall kubectl
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions
to shut down the application.

View File

@ -1,218 +0,0 @@
---
title: Collecting Metrics for TCP services with Mixer
description: This task shows you how to configure Istio's Mixer to collect metrics for TCP services.
weight: 20
keywords: [telemetry,metrics,tcp]
aliases:
- /docs/tasks/telemetry/tcp-metrics
- /docs/tasks/telemetry/metrics/tcp-metrics/
owner: istio/wg-policies-and-telemetry-maintainers
test: n/a
---
{{< warning >}}
Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies.
Use of Mixer with Istio will only be supported through the 1.7 release of Istio.
{{< /warning>}}
This task shows how to configure Istio to automatically gather telemetry for TCP
services in a mesh. At the end of this task, a new metric will be enabled for
calls to a TCP service within your mesh.
The [Bookinfo](/docs/examples/bookinfo/) sample application is used
as the example application throughout this task.
## Before you begin
* [Install Istio](/docs/setup) with Mixer enabled in your cluster and deploy an application.
The *custom* configuration needed to use Mixer for telemetry is:
{{< text yaml >}}
values:
prometheus:
enabled: true
telemetry:
v1:
enabled: true
v2:
enabled: false
components:
citadel:
enabled: true
telemetry:
enabled: true
{{< /text >}}
Please see the guide on [Customizing the configuration](/docs/setup/install/istioctl/#customizing-the-configuration)
for information on how to apply these settings.
Once the configuration has been applied, confirm a telemetry-focused instance of Mixer is running:
{{< text bash >}}
$ kubectl -n istio-system get service istio-telemetry
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-telemetry ClusterIP 10.4.31.226 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 80s
{{< /text >}}
* This task assumes that the Bookinfo sample will be deployed in the `default`
namespace. If you use a different namespace, you will need to update the
example configuration and commands.
## Collecting new telemetry data
1. Apply a YAML file with configuration for the new metrics that Istio
will generate and collect automatically.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
{{< /text >}}
{{< warning >}}
If you use Istio 1.1.2 or prior, please use the following configuration instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
{{< /text >}}
{{< /warning >}}
1. Set up Bookinfo to use MongoDB.
1. Install `v2` of the `ratings` service.
If you are using a cluster with automatic sidecar injection enabled,
simply deploy the services using `kubectl`:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
{{< /text >}}
If you are using manual sidecar injection, use the following command instead:
{{< text bash >}}
$ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@)
deployment "ratings-v2" configured
{{< /text >}}
1. Install the `mongodb` service:
If you are using a cluster with automatic sidecar injection enabled,
simply deploy the services using `kubectl`:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@
{{< /text >}}
If you are using manual sidecar injection, use the following command instead:
{{< text bash >}}
$ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@)
service "mongodb" configured
deployment "mongodb-v1" configured
{{< /text >}}
1. The Bookinfo sample deploys multiple versions of each microservice, so you will start by creating destination rules
that define the service subsets corresponding to each version, and the load balancing policy for each subset.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
{{< /text >}}
If you enabled mutual TLS, please run the following instead
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
{{< /text >}}
You can display the destination rules with the following command:
{{< text bash >}}
$ kubectl get destinationrules -o yaml
{{< /text >}}
Since the subset references in virtual services rely on the destination rules,
wait a few seconds for destination rules to propagate before adding virtual services that refer to these subsets.
1. Create `ratings` and `reviews` virtual services:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
Created config virtual-service/default/reviews at revision 3003
Created config virtual-service/default/ratings at revision 3004
{{< /text >}}
1. Send traffic to the sample application.
For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web
browser or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. Verify that the new metric values are being generated and collected.
In a Kubernetes environment, set up port-forwarding for Prometheus by
executing the following command:
{{< text bash >}}
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
{{< /text >}}
View values for the new metric in the Prometheus browser window. Select **Graph**.
Enter the `istio_mongo_received_bytes` metric and select **Execute**.
The table displayed in the
**Console** tab includes entries similar to:
{{< text plain >}}
istio_mongo_received_bytes{destination_version="v1",instance="172.17.0.18:42422",job="istio-mesh",source_service="ratings-v2",source_version="v2"}
{{< /text >}}
## Understanding TCP telemetry collection
In this task, you added Istio configuration that instructed Mixer to
automatically generate and report a new metric for all traffic to a TCP service
within the mesh.
Similar to the [Collecting Metrics](/docs/tasks/observability/mixer/metrics/collecting-metrics/) Task, the new
configuration consisted of _instances_, a _handler_, and a _rule_. Please see
that Task for a complete description of the components of metric collection.
Metrics collection for TCP services differs only in the limited set of
attributes that are available for use in _instances_.
### TCP attributes
Several TCP-specific attributes enable TCP policy and control within Istio.
These attributes are generated by server-side Envoy proxies. They are forwarded to Mixer at connection establishment, periodically while the connection is alive (periodic reports), and at connection close (final report). The default interval for periodic reports is 10 seconds, and it must be at least 1 second. Additionally, context attributes provide the ability to distinguish between `http` and `tcp`
protocols within policies.
{{< image link="./istio-tcp-attribute-flow.svg"
alt="Attribute Generation Flow for TCP Services in an Istio Mesh."
caption="TCP Attribute Flow"
>}}
## Cleanup
* Remove the new telemetry configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
{{< /text >}}
If you are using Istio 1.1.2 or prior:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
{{< /text >}}
* Remove the `port-forward` process:
{{< text bash >}}
$ killall kubectl
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions
to shut down the application.

File diff suppressed because one or more lines are too long

(deleted image, 58 KiB)

View File

@ -41,7 +41,6 @@ providing a flexible fine-grained access control mechanism. [Learn more](https:/
[Learn more](/docs/concepts/security/#authorization)
- **Fluentd**. Mixer now has an adapter for log collection through [Fluentd](https://www.fluentd.org).
[Learn more](/docs/tasks/observability/mixer/logs/fluentd/)
- **Stdio**. The stdio adapter now lets you log to files with support for log rotation & backup, along with a host
of controls.