zh-translation:/docs/tasks/observability/metrics/tcp-metrics/index.md (#5363)

* Translate content/zh/docs/tasks/observability/metrics/tcp-metrics/index.md

* Revise wording

* Fix lint problems
cocotyty 2019-11-06 16:26:13 +08:00 committed by Istio Automation
parent e8bd3b7566
commit 9b7309ae4f
1 changed file with 44 additions and 68 deletions


---
title: Collecting Metrics for TCP Services
description: This task shows you how to configure Istio to collect metrics for TCP services.
weight: 20
keywords: [telemetry,metrics,tcp]
aliases:
- /docs/tasks/telemetry/tcp-metrics
- /docs/tasks/telemetry/metrics/tcp-metrics/
- /zh/docs/tasks/telemetry/tcp-metrics
- /zh/docs/tasks/telemetry/metrics/tcp-metrics/
---
This task shows how to configure Istio to automatically gather telemetry for TCP services in a mesh. At the end of this task, a new metric will be enabled for calls to a TCP service within your mesh.
The [Bookinfo](/zh/docs/examples/bookinfo/) sample application is used as the example application throughout this task.
## Before you begin {#before-you-begin}
* [Install Istio](/zh/docs/setup/) in your cluster and deploy an application.
* This task assumes that the Bookinfo sample is deployed in the `default` namespace. If you use a different namespace, you will need to update the example configuration and commands.
## Collecting new telemetry data {#collecting-new-telemetry-data}
1. Apply a YAML file with configuration for the new metrics that Istio will generate and collect automatically.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
{{< /text >}}
{{< warning >}}
If you use Istio 1.1.2 or prior, please use the following configuration instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
{{< /warning >}}
1. Set up Bookinfo to use MongoDB.
1. Install the `v2` version of the `ratings` service.
If you are using a cluster with automatic sidecar injection enabled, simply deploy the services using `kubectl`:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
{{< /text >}}
If you are using manual sidecar injection, use the following command instead:
{{< text bash >}}
$ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@)
deployment "ratings-v2" configured
{{< /text >}}
1. Install the `mongodb` service:
If you are using a cluster with automatic sidecar injection enabled, simply deploy the services using `kubectl`:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@
{{< /text >}}
If you are using manual sidecar injection, use the following command instead:
{{< text bash >}}
$ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@)
deployment "mongodb-v1" configured
{{< /text >}}
1. The Bookinfo sample deploys multiple versions of each microservice, so begin by creating destination rules that define the service subsets corresponding to each version, and the load balancing policy for each subset.
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
{{< /text >}}
If you enabled mutual TLS, please run the following instead:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
{{< /text >}}
You can display the destination rules with the following command:
{{< text bash >}}
$ kubectl get destinationrules -o yaml
{{< /text >}}
Since the subset references in virtual services rely on the destination rules, wait a few seconds for the destination rules to propagate before adding virtual services that refer to these subsets.
1. Create the `ratings` and `reviews` virtual services:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
Created config virtual-service/default/ratings at revision 3004
{{< /text >}}
1. Send traffic to the sample application.
For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web browser or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. Verify that the new metric values are being generated and collected.
In a Kubernetes environment, set up port-forwarding for Prometheus by executing the following command:
{{< text bash >}}
$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
{{< /text >}}
View values for the new metric via the [Prometheus UI](http://localhost:9090/graph#%5B%7B%22range_input%22%3A%221h%22%2C%22expr%22%3A%22istio_mongo_received_bytes%22%2C%22tab%22%3A1%7D%5D).
The provided link opens the Prometheus UI and executes a query for values of the `istio_mongo_received_bytes` metric. The table displayed in the **Console** tab includes entries similar to:
{{< text plain >}}
istio_mongo_received_bytes{destination_version="v1",instance="172.17.0.18:42422",job="istio-mesh",source_service="ratings-v2",source_version="v2"}
{{< /text >}}
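Prometheus stores this metric as a counter, so you would typically query its rate of change rather than the raw value. As a sketch (the one-minute rate window and the `source_service` grouping label are arbitrary illustrative choices), a PromQL query for received bytes per second might look like:

{{< text plain >}}
sum(rate(istio_mongo_received_bytes[1m])) by (source_service)
{{< /text >}}

You can paste such a query directly into the expression box of the Prometheus UI opened above.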
## Understanding TCP telemetry collection {#understanding-tcp-telemetry-collection}
In this task, you added Istio configuration that instructed Mixer to automatically generate and report a new metric for all traffic to a TCP service within the mesh.
Similar to the [Collecting Metrics and Logs](/zh/docs/tasks/observability/metrics/collecting-metrics/) task, the new configuration consists of _instances_, a _handler_, and a _rule_. Please see that task for a complete description of the components of metric collection.
Metric collection for TCP services differs only in the limited set of attributes that are available for use in _instances_.
### TCP attributes {#tcp-attributes}
Several TCP-specific attributes enable TCP policy and control within Istio. These attributes are generated by server-side Envoy proxies. They are forwarded to Mixer at connection establishment, forwarded periodically while the connection is alive (periodic reports), and forwarded again at connection close (final report). The default interval for periodic reports is 10 seconds, and it must be at least 1 second. Additionally, context attributes make it possible to distinguish between the `http` and `tcp` protocols within policies.
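To make this concrete, here is a rough sketch of how a Mixer metric _instance_ can consume one of these TCP attributes. This is an illustration, not the exact contents of the sample file applied earlier; the instance name and dimension labels are hypothetical:

{{< text yaml >}}
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: mongoreceivedbytes
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    # connection.received.bytes is one of the TCP-specific attributes
    # reported by the server-side Envoy proxy; default to 0 if unset.
    value: connection.received.bytes | 0
    dimensions:
      source_service: source.workload.name | "unknown"
      destination_version: destination.labels["version"] | "unknown"
    monitored_resource_type: '"UNSPECIFIED"'
{{< /text >}}

A _rule_ would then match TCP traffic (for example, via the `context.protocol == "tcp"` context attribute) and dispatch this instance to a Prometheus _handler_.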
{{< image link="./istio-tcp-attribute-flow.svg"
    alt="Attribute generation flow for TCP services in an Istio mesh."
    caption="TCP Attribute Flow"
    >}}
## Cleanup {#cleanup}
* Remove the new telemetry configuration:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
{{< /text >}}
If you are using Istio 1.1.2 or prior:
{{< text bash >}}
$ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
{{< /text >}}
* Remove the `port-forward` process:
{{< text bash >}}
$ killall kubectl
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the [Bookinfo cleanup](/zh/docs/examples/bookinfo/#cleanup) instructions to shut down the application.