zh-translation: /blog/2018/export-logs-through-stackdriver/index.md (#5616)

* update export-logs-through-stackdriver/index.md

* update gke

* fix error
This commit is contained in:
liyan 2019-11-14 15:17:45 +08:00 committed by Istio Automation
parent 3dcaf51313
commit 853a5e9162
1 changed files with 60 additions and 97 deletions


---
title: Exporting Logs to BigQuery, GCS, Pub/Sub through Stackdriver
description: How to export Istio Access Logs to different sinks like BigQuery, GCS, Pub/Sub through Stackdriver.
publishdate: 2018-07-09
subtitle:
attribution: Nupur Garg and Douglas Reid
target_release: 0.8
---
This post shows how to direct Istio logs to [Stackdriver](https://cloud.google.com/stackdriver/)
and export those logs to various configured sinks such as
[BigQuery](https://cloud.google.com/bigquery/), [Google Cloud Storage](https://cloud.google.com/storage/)
or [Cloud Pub/Sub](https://cloud.google.com/pubsub/). At the end of this post you can perform
analytics on Istio data from your favorite places such as BigQuery, GCS or Cloud Pub/Sub.
The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example
application throughout this task.
## Before you begin
[Install Istio](/docs/setup/) in your cluster and deploy an application.
## Configuring Istio to export logs
Istio exports logs using the `logentry` [template](/docs/reference/config/policy-and-telemetry/templates/logentry).
This specifies all the variables that are available for analysis. It
contains information such as source service, destination service, and auth
metrics (coming soon), among others. Following is a diagram of the pipeline:
{{< image width="75%" link="./istio-analytics-using-stackdriver.png" caption="Exporting logs from Istio to Stackdriver for analysis" >}}
Istio supports exporting logs to Stackdriver, which can in turn be configured to export
logs to your favorite sink like BigQuery, Pub/Sub or GCS. Please follow the steps
below to first set up your favorite sink for exporting logs, and then configure Stackdriver
in Istio.
### Setting up various log sinks
Common setup for all sinks:
1. Enable the [Stackdriver Monitoring API](https://cloud.google.com/monitoring/api/enable-api) for the project.
1. Make sure the `principalEmail` that will set up the sink has write access to the project and the Logging Admin role.
1. Make sure the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set. Please follow the instructions [here](https://cloud.google.com/docs/authentication/getting-started) to set it up.
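The common setup above can be sketched from the command line. This is a minimal sketch only: the project ID and key file path below are placeholder values, and the `gcloud` call is shown commented out because it requires an authenticated Cloud SDK:

```shell
# Placeholder values -- substitute your own project ID and key file path.
PROJECT_ID="my-istio-project"
KEY_FILE="/tmp/keys/istio-logging-sa.json"

# Enable the Stackdriver Monitoring API (requires an authenticated Cloud SDK):
# gcloud services enable monitoring.googleapis.com --project "$PROJECT_ID"

# Point the Google client libraries at the service account key.
export GOOGLE_APPLICATION_CREDENTIALS="$KEY_FILE"
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```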
#### BigQuery
1. [Create a BigQuery dataset](https://cloud.google.com/bigquery/docs/datasets) as a destination for the logs export.
1. Record the ID of the dataset. It will be needed to configure the Stackdriver handler.
   It will be of the form `bigquery.googleapis.com/projects/[PROJECT_ID]/datasets/[DATASET_ID]`.
1. Give the [sink's writer identity](https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination), `cloud-logs@system.gserviceaccount.com`, the BigQuery Data Editor role in IAM.
1. If using [Google Kubernetes Engine](/docs/setup/platform-setup/gke/), make sure the `bigquery` [scope](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) is enabled on the cluster.
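The BigQuery steps above can be sketched as follows. The project and dataset IDs are placeholders; the `bq` and `gcloud` calls are commented out since they require an authenticated Cloud SDK:

```shell
# Placeholder IDs -- substitute your own.
PROJECT_ID="my-istio-project"
DATASET_ID="istio_logs"

# Create the dataset (requires the bq CLI from the Cloud SDK):
# bq mk --project_id "$PROJECT_ID" "$DATASET_ID"

# Grant the sink's writer identity the BigQuery Data Editor role:
# gcloud projects add-iam-policy-binding "$PROJECT_ID" \
#   --member "serviceAccount:cloud-logs@system.gserviceaccount.com" \
#   --role "roles/bigquery.dataEditor"

# The sink destination string the Stackdriver handler expects:
SINK_DESTINATION="bigquery.googleapis.com/projects/${PROJECT_ID}/datasets/${DATASET_ID}"
echo "$SINK_DESTINATION"
```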
#### Google Cloud Storage (GCS)
1. [Create a GCS bucket](https://cloud.google.com/storage/docs/creating-buckets) where you would like the logs to be exported in GCS.
1. Record the ID of the bucket. It will be needed to configure Stackdriver.
   It will be of the form `storage.googleapis.com/[BUCKET_ID]`.
1. Give the [sink's writer identity](https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination), `cloud-logs@system.gserviceaccount.com`, the Storage Object Creator role in IAM.
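A sketch of the GCS steps, with a placeholder bucket name and the `gsutil` calls commented out (they require an authenticated Cloud SDK):

```shell
# Placeholder bucket name -- substitute your own.
BUCKET_ID="my-istio-log-bucket"

# Create the bucket (requires gsutil from the Cloud SDK):
# gsutil mb "gs://${BUCKET_ID}"

# Grant the sink's writer identity the Storage Object Creator role:
# gsutil iam ch \
#   "serviceAccount:cloud-logs@system.gserviceaccount.com:roles/storage.objectCreator" \
#   "gs://${BUCKET_ID}"

# The sink destination string the Stackdriver handler expects:
SINK_DESTINATION="storage.googleapis.com/${BUCKET_ID}"
echo "$SINK_DESTINATION"
```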
#### Google Cloud Pub/Sub
1. [Create a topic](https://cloud.google.com/pubsub/docs/admin) where you would like the logs to be exported in Google Cloud Pub/Sub.
1. Record the ID of the topic. It will be needed to configure Stackdriver.
   It will be of the form `pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID]`.
1. Give the [sink's writer identity](https://cloud.google.com/logging/docs/api/tasks/exporting-logs#writing_to_the_destination), `cloud-logs@system.gserviceaccount.com`, the Pub/Sub Publisher role in IAM.
1. If using [Google Kubernetes Engine](/docs/setup/platform-setup/gke/), make sure the `pubsub` [scope](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) is enabled on the cluster.
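A sketch of the Pub/Sub steps, with placeholder project and topic IDs and the `gcloud` calls commented out (they require an authenticated Cloud SDK):

```shell
# Placeholder IDs -- substitute your own.
PROJECT_ID="my-istio-project"
TOPIC_ID="istio-access-logs"

# Create the topic (requires gcloud):
# gcloud pubsub topics create "$TOPIC_ID" --project "$PROJECT_ID"

# Grant the sink's writer identity the Pub/Sub Publisher role:
# gcloud projects add-iam-policy-binding "$PROJECT_ID" \
#   --member "serviceAccount:cloud-logs@system.gserviceaccount.com" \
#   --role "roles/pubsub.publisher"

# The sink destination string the Stackdriver handler expects:
SINK_DESTINATION="pubsub.googleapis.com/projects/${PROJECT_ID}/topics/${TOPIC_ID}"
echo "$SINK_DESTINATION"
```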
### Setting up Stackdriver
A Stackdriver handler must be created to export data to Stackdriver. The configuration for
a Stackdriver handler is described [here](/docs/reference/config/policy-and-telemetry/adapters/stackdriver/).
1. Save the following yaml file as `stackdriver.yaml`. Replace `<project_id>,
<sink_id>, <sink_destination>, <log_filter>` with their specific values.
{{< text yaml >}}
apiVersion: "config.istio.io/v1alpha2"
name: handler
namespace: istio-system
spec:
# We'll use the default value from the adapter, once per minute, so we don't need to supply a value.
# pushInterval: 1m
# Must be supplied for the Stackdriver adapter to work
project_id: "<project_id>"
# One of the following must be set; the preferred method is `appCredentials`, which corresponds to
# Google Application Default Credentials.
# If none is provided we default to app credentials.
# appCredentials:
# apiKey:
# serviceAccountPath:
# Describes how to map Istio logs into Stackdriver.
logInfo:
accesslog.logentry.istio-system:
payloadTemplate: '{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}} [{{or (.timestamp.Format "02/Jan/2006:15:04:05 -0700") "-"}}] "{{or (.method) "-"}} {{or (.url) "-"}} {{or (.protocol) "-"}}" {{or (.responseCode) "-"}} {{or (.responseSize) "-"}}'
name: stackdriver
namespace: istio-system
spec:
match: "true" # If omitted match is true.
actions:
- handler: handler.stackdriver
instances:
---
{{< /text >}}
1. Push the configuration
{{< text bash >}}
$ kubectl apply -f stackdriver.yaml
metric "stackdriverresponsesize" created
{{< /text >}}
1. Send traffic to the sample application.
For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web
browser or issue the following command:
{{< text bash >}}
$ curl http://$GATEWAY_URL/productpage
{{< /text >}}
1. Verify that logs are flowing through Stackdriver to the configured sink.
* Stackdriver: Navigate to the [Stackdriver Logs
Viewer](https://pantheon.corp.google.com/logs/viewer) for your project
and look under "GKE Container" -> "Cluster Name" -> "Namespace Id" for
Istio Access logs.
* BigQuery: Navigate to the [BigQuery
Interface](https://bigquery.cloud.google.com/) for your project and you
should find a table with prefix `accesslog_logentry_istio` in your sink
dataset.
* GCS: Navigate to the [Storage
Browser](https://pantheon.corp.google.com/storage/browser/) for your
project and you should find a bucket named
`accesslog.logentry.istio-system` in your sink bucket.
* Pub/Sub: Navigate to the [Pub/Sub
Topic List](https://pantheon.corp.google.com/cloudpubsub/topicList) for
your project and you should find a topic for `accesslog` in your sink
topic.
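If BigQuery is your sink, the check above can also be done from the command line. This is a sketch under assumptions: the dataset ID matches the placeholder used when setting up the sink, the export table carries the `accesslog_logentry_istio` prefix mentioned above (export tables are typically date-suffixed), and the `bq` calls are commented out because they require an authenticated Cloud SDK:

```shell
# Placeholder dataset ID matching the BigQuery sink set up earlier.
DATASET_ID="istio_logs"
TABLE_PREFIX="accesslog_logentry_istio"

# List tables in the sink dataset and look for the access-log table:
# bq ls "$DATASET_ID"

# Query recent entries once a table exists (date suffix will vary):
# bq query --nouse_legacy_sql \
#   "SELECT * FROM \`${DATASET_ID}.${TABLE_PREFIX}_20180709\` LIMIT 10"

echo "${DATASET_ID}.${TABLE_PREFIX}"
```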
## Understanding what happened
The `stackdriver.yaml` file above configured Istio to send access logs to
Stackdriver and then added a sink configuration where these logs could be
exported. In detail, it did the following:
1. Added a handler of kind `stackdriver`
{{< text yaml >}}
apiVersion: "config.istio.io/v1alpha2"
namespace: <your defined namespace>
{{< /text >}}
1. Added `logInfo` in spec
{{< text yaml >}}
spec:
filter: '<log_filter>'
{{< /text >}}
In the above configuration, `sinkInfo` contains information about the sink to which you want
the logs exported. For more information on how this is filled in for different sinks, please refer
[here](https://cloud.google.com/logging/docs/export/#sink-terms).

1. Added a rule for Stackdriver
{{< text yaml >}}
apiVersion: "config.istio.io/v1alpha2"
metadata:
name: stackdriver
namespace: istio-system
spec:
match: "true" # If omitted match is true
actions:
- handler: handler.stackdriver
instances:
- accesslog.logentry
{{< /text >}}
## Cleanup
* Remove the new Stackdriver configuration:
{{< text bash >}}
$ kubectl delete -f stackdriver.yaml
{{< /text >}}
* If you are not planning to explore any follow-on tasks, refer to the
[Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions to shutdown
the application.
## Availability of logs in export sinks
Export to BigQuery happens within minutes (we see it to be almost instant), GCS can
have a delay of 2 to 12 hours, and Pub/Sub is almost immediate.