Update installation instructions for logs, metrics and traces (#763)

* Update instructions to install logging and monitoring components.

* Update serving/installing-logging-metrics-traces.md

Co-Authored-By: mdemirhan <4033879+mdemirhan@users.noreply.github.com>
Mustafa Demirhan 2019-01-15 11:16:30 -08:00 committed by Knative Prow Robot
parent fb4bb9254e
commit 7766388b8f
1 changed file with 129 additions and 115 deletions


# Installing Logging, Metrics and Traces
If you installed one of the [Knative install bundles](../install/README.md#installing-knative),
some or all of the observability features are installed. For example, if you install the `release.yaml` package from the
`knative/serving` repo, then an ELK stack is installed by default and you can skip down to the
[Create Elasticsearch Indices](#create-elasticsearch-indices) section.
## Metrics
1. Run the following command to install Prometheus and Grafana:
```shell
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml
```
1. Ensure that the `grafana-*`, `kibana-logging-*`, `kube-state-metrics-*`, `node-exporter-*` and `prometheus-system-*`
pods all report a `Running` status:
```shell
kubectl get pods --namespace knative-monitoring --watch
```
For example:
```text
NAME READY STATUS RESTARTS AGE
grafana-798cf569ff-v4q74 1/1 Running 0 2d
kibana-logging-7d474fbb45-6qb8x 1/1 Running 0 2d
kube-state-metrics-75bd4f5b8b-8t2h2 4/4 Running 0 2d
node-exporter-cr6bh 2/2 Running 0 2d
node-exporter-mf6k7 2/2 Running 0 2d
node-exporter-rhzr7 2/2 Running 0 2d
prometheus-system-0 1/1 Running 0 2d
prometheus-system-1 1/1 Running 0 2d
```
Tip: Hit CTRL+C to exit watch mode.
See [Accessing Metrics](./accessing-metrics.md) for more information about metrics in Knative.
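As a quick way to look at the dashboards locally, you can port-forward to the Grafana pod. The label selector and port below are assumptions based on a default monitoring install; adjust them if your deployment differs:

```shell
# Forward local port 3000 to the Grafana pod
# (the app=grafana selector is assumed from the default install).
kubectl port-forward --namespace knative-monitoring \
  $(kubectl get pods --namespace knative-monitoring \
      --selector=app=grafana --output=jsonpath="{.items..metadata.name}") \
  3000
```

Then browse to `http://localhost:3000` to view the dashboards.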
## Logs
Knative offers three different setups for collecting logs. Choose one to install:
1. [Elasticsearch and Kibana](#elasticsearch-and-kibana)
1. [Stackdriver](#stackdriver)
1. [Custom logging plugin](setting-up-a-logging-plugin.md)
### Elasticsearch and Kibana
1. Run the following command to install an ELK stack:
```shell
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml
```
1. Ensure that the `elasticsearch-logging-*`, `fluentd-ds-*`, and `kibana-logging-*` pods all report a `Running` status:
```shell
kubectl get pods --namespace knative-monitoring --watch
```
For example:
```text
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-0 1/1 Running 0 2d
elasticsearch-logging-1 1/1 Running 0 2d
fluentd-ds-5kc85 1/1 Running 0 2d
fluentd-ds-vhrcq 1/1 Running 0 2d
fluentd-ds-xghk9 1/1 Running 0 2d
kibana-logging-7d474fbb45-6qb8x 1/1 Running 0 2d
```
Tip: Hit CTRL+C to exit watch mode.
1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:
```shell
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
```
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring --watch
```
Tip: Hit CTRL+C to exit watch mode.
1. When the installation is complete and all the resources are running, you can continue to the next section
and begin creating your Elasticsearch indices.
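Before moving on, you can optionally confirm the Fluentd DaemonSet has fully rolled out by comparing its desired and ready pod counts; this uses standard Kubernetes DaemonSet status fields:

```shell
# Prints "<desired> <ready>"; the two numbers should match once
# the Fluentd DaemonSet is fully rolled out.
kubectl get daemonset fluentd-ds --namespace knative-monitoring \
  --output jsonpath='{.status.desiredNumberScheduled} {.status.numberReady}'
```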
#### Create Elasticsearch Indices
To visualize logs with Kibana, you need to set which Elasticsearch indices to explore.
- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
you must start a local proxy by running the following command:
```shell
kubectl proxy
```
![Create logstash-* index](images/kibana-landing-page-configure-index.png)
See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative.
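With `kubectl proxy` still running, you can also query Elasticsearch directly to confirm that the `logstash-*` indices are being populated. The service name and proxy path below are assumptions based on the default install:

```shell
# List Elasticsearch indices through the Kubernetes API proxy
# (the `elasticsearch-logging` service name is assumed from the default install).
curl http://localhost:8001/api/v1/namespaces/knative-monitoring/services/elasticsearch-logging/proxy/_cat/indices
```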
### Stackdriver
To configure and set up monitoring:
1. Clone the Knative Serving repository:
```shell
git clone https://github.com/knative/serving knative-serving
cd knative-serving
git checkout v0.3.0
```
1. Choose a container image that meets the
[Fluentd image requirements](fluentd/README.md#requirements). For example, you can use a
public image. Or you can create a custom one and upload the image to a
container registry which your cluster has read access to.
You must configure and build your own Fluentd image if either of the following are true:
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
- You want to send logs to another GCP project.
1. Follow the instructions in
["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring)
to configure the Stackdriver component settings.
1. Install Knative Stackdriver components by running the following command from the root directory of
[knative/serving](https://github.com/knative/serving) repository:
```shell
kubectl apply --recursive --filename config/monitoring/100-namespace.yaml \
--filename third_party/config/monitoring/logging/stackdriver
```
1. Ensure that the `fluentd-ds-*` pods all report a `Running` status:
```shell
kubectl get pods --namespace knative-monitoring --watch
```
For example:
```text
NAME READY STATUS RESTARTS AGE
fluentd-ds-5kc85 1/1 Running 0 2d
fluentd-ds-vhrcq 1/1 Running 0 2d
fluentd-ds-xghk9 1/1 Running 0 2d
```
Tip: Hit CTRL+C to exit watch mode.
1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:
```shell
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
```
1. If you receive the `No Resources Found` response:
1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
```shell
kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
```
1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
```shell
kubectl get daemonset fluentd-ds --namespace knative-monitoring
```
See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative.
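If your cluster runs on GCP, one way to spot-check that logs are reaching Stackdriver is the `gcloud logging read` command. The filter below is an assumption and may need adjusting for your project and resource types:

```shell
# Read the most recent container log entries from the current GCP project
# (requires the Cloud SDK and appropriate IAM permissions).
gcloud logging read "resource.type=k8s_container" --limit 5
```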
## End to end traces
- If Elasticsearch is not installed or if you don't want to persist end to end traces, run:
```shell
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml
```
- If Elasticsearch is installed and you want to persist end to end traces, first run:
```shell
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml
```
Next, create an Elasticsearch index for end to end traces:
- Open Kibana UI as described in [Create Elasticsearch Indices](#create-elasticsearch-indices) section.
- Select the `Create Index Pattern` button at the top left of the page.
  Enter `zipkin*` in `Index pattern`, select `timestamp_millis`
  from `Time Filter field name`, and click the `Create` button.
Visit [Accessing Traces](./accessing-traces.md) for more information on end to end traces.
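As a quick smoke test, you can reach the Zipkin UI through `kubectl proxy`. The namespace and service path below are assumptions based on a default Istio-based install; adjust them if Zipkin runs elsewhere in your cluster:

```shell
# Start a local proxy, then open the Zipkin UI in a browser at:
#   http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/
kubectl proxy
```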
## Learn More