mirror of https://github.com/knative/docs.git
Update installation instructions for logs, metrics and traces (#763)
* Update instructions to install logging and monitoring components.
* Update serving/installing-logging-metrics-traces.md

Co-Authored-By: mdemirhan <4033879+mdemirhan@users.noreply.github.com>
Parent: fb4bb9254e · Commit: 7766388b8f
# Installing Logging, Metrics and Traces

If you installed one of the [Knative install bundles](../install/README.md#installing-knative),
some or all of the observability features are installed. For example, if you install the
`release.yaml` package from the `knative/serving` repo, then an ELK stack is installed by
default and you can skip down to the
[Create Elasticsearch Indices](#create-elasticsearch-indices) section.
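
If you are not sure which of these components your install bundle already included, a quick
way to check is to list the pods in the `knative-monitoring` namespace, using the same command
this guide uses for verification below:

```shell
kubectl get pods --namespace knative-monitoring
```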

## Metrics

1. Run the following command to install Prometheus and Grafana:

   ```shell
   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml
   ```

1. Ensure that the `grafana-*`, `kibana-logging-*`, `kube-state-metrics-*`, `node-exporter-*` and `prometheus-system-*`
   pods all report a `Running` status:

   ```shell
   kubectl get pods --namespace knative-monitoring --watch
   ```

   For example:

   ```text
   NAME                                  READY     STATUS    RESTARTS   AGE
   grafana-798cf569ff-v4q74              1/1       Running   0          2d
   kibana-logging-7d474fbb45-6qb8x       1/1       Running   0          2d
   kube-state-metrics-75bd4f5b8b-8t2h2   4/4       Running   0          2d
   node-exporter-cr6bh                   2/2       Running   0          2d
   node-exporter-mf6k7                   2/2       Running   0          2d
   node-exporter-rhzr7                   2/2       Running   0          2d
   prometheus-system-0                   1/1       Running   0          2d
   prometheus-system-1                   1/1       Running   0          2d
   ```

   Tip: Hit CTRL+C to exit watch mode.

See [Accessing Metrics](./accessing-metrics.md) for more information about metrics in Knative.
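
To quickly confirm that the dashboards come up, you can port-forward Grafana and open it locally.
This is a minimal sketch rather than the documented access path; it assumes the Grafana service is
named `grafana` (matching the pod names above), and the port is a placeholder you read from the
service itself. See [Accessing Metrics](./accessing-metrics.md) for the supported instructions.

```shell
# Inspect the assumed Grafana service to find the port it exposes.
kubectl get service grafana --namespace knative-monitoring

# Forward that port locally (replace <port>), then open http://localhost:<port> in a browser.
kubectl port-forward --namespace knative-monitoring service/grafana <port>:<port>
```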

## Logs

Knative offers three different setups for collecting logs. Choose one to install:

1. [Elasticsearch and Kibana](#elasticsearch-and-kibana)
1. [Stackdriver](#stackdriver)
1. [Custom logging plugin](setting-up-a-logging-plugin.md)

### Elasticsearch and Kibana

1. Run the following command to install an ELK stack:

   ```shell
   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml
   ```

1. Ensure that the `elasticsearch-logging-*`, `fluentd-ds-*`, and `kibana-logging-*` pods all report a `Running` status:

   ```shell
   kubectl get pods --namespace knative-monitoring --watch
   ```

   For example:

   ```text
   NAME                                  READY     STATUS    RESTARTS   AGE
   elasticsearch-logging-0               1/1       Running   0          2d
   elasticsearch-logging-1               1/1       Running   0          2d
   fluentd-ds-5kc85                      1/1       Running   0          2d
   fluentd-ds-vhrcq                      1/1       Running   0          2d
   fluentd-ds-xghk9                      1/1       Running   0          2d
   kibana-logging-7d474fbb45-6qb8x       1/1       Running   0          2d
   ```

   Tip: Hit CTRL+C to exit watch mode.

1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:

   ```shell
   kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
   ```

1. If you receive the `No Resources Found` response:

   1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

      ```shell
      kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
      ```

   1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:

      ```shell
      kubectl get daemonset fluentd-ds --namespace knative-monitoring --watch
      ```

      Tip: Hit CTRL+C to exit watch mode.

1. When the installation is complete and all the resources are running, you can continue to the
   next section and begin creating your Elasticsearch indices. An optional Elasticsearch health
   check is sketched below.
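
Before creating indices, you can optionally confirm that Elasticsearch itself is reachable. This
is a minimal sketch, not part of the documented steps; it assumes the Elasticsearch service is
named `elasticsearch-logging` (matching the pod names above) and serves its HTTP API on the
default port 9200.

```shell
# Forward the assumed Elasticsearch service to localhost (runs in the foreground).
kubectl port-forward --namespace knative-monitoring service/elasticsearch-logging 9200:9200

# In a second terminal, query cluster health; a "green" or "yellow" status is expected.
curl "http://localhost:9200/_cluster/health?pretty"
```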

#### Create Elasticsearch Indices

To visualize logs with Kibana, you need to set which Elasticsearch indices to explore.

- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
  you must start a local proxy by running the following command:

  ```shell
  kubectl proxy
  ```

  ![Create logstash index](images/kibana-landing-page-configure-index.png)

See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative.

### Stackdriver

To configure and set up monitoring:

1. Clone the Knative Serving repository:

   ```shell
   git clone https://github.com/knative/serving knative-serving
   cd knative-serving
   git checkout v0.3.0
   ```

1. Choose a container image that meets the
   [Fluentd image requirements](fluentd/README.md#requirements). For example, you can use a
   public image, or you can create a custom image and upload it to a container registry that
   your cluster has read access to.

   You must configure and build your own Fluentd image (a build-and-push sketch follows this list)
   if either of the following is true:

   - Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
   - You want to send logs to another GCP project.

1. Follow the instructions in
   ["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring)
   to configure the Stackdriver component settings.

1. Install Knative Stackdriver components by running the following command from the root
   directory of the [knative/serving](https://github.com/knative/serving) repository:

   ```shell
   kubectl apply --recursive --filename config/monitoring/100-namespace.yaml \
       --filename third_party/config/monitoring/logging/stackdriver
   ```

1. Ensure that the `fluentd-ds-*` pods all report a `Running` status:

   ```shell
   kubectl get pods --namespace knative-monitoring --watch
   ```

   For example:

   ```text
   NAME               READY     STATUS    RESTARTS   AGE
   fluentd-ds-5kc85   1/1       Running   0          2d
   fluentd-ds-vhrcq   1/1       Running   0          2d
   fluentd-ds-xghk9   1/1       Running   0          2d
   ```

   Tip: Hit CTRL+C to exit watch mode.

1. Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:

   ```shell
   kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
   ```

1. If you receive the `No Resources Found` response:

   1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

      ```shell
      kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
      ```

   1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:

      ```shell
      kubectl get daemonset fluentd-ds --namespace knative-monitoring
      ```
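
If you need the custom Fluentd image mentioned in the image-selection step above, the
build-and-push flow is roughly as follows. This is a minimal sketch rather than a documented
procedure: the Dockerfile directory, image name, and registry are placeholders to replace, and
the Dockerfile itself must satisfy the [Fluentd image requirements](fluentd/README.md#requirements).

```shell
# Build a custom Fluentd image from your own Dockerfile (placeholder path and tag).
docker build -t gcr.io/<your-gcp-project>/fluentd:knative-custom ./your-fluentd-image

# Push it to a container registry that your cluster has read access to.
docker push gcr.io/<your-gcp-project>/fluentd:knative-custom
```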

See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative.

## End to end traces

- If Elasticsearch is not installed or if you don't want to persist end to end traces, run:

  ```shell
  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml
  ```

- If Elasticsearch is installed and you want to persist end to end traces, first run:

  ```shell
  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml
  ```

  Next, create an Elasticsearch index for end to end traces:

  - Open the Kibana UI as described in the [Create Elasticsearch Indices](#create-elasticsearch-indices) section.
  - Select the `Create Index Pattern` button at the top left of the page. Enter `zipkin*` in the
    `Index pattern` field, select `timestamp_millis` from `Time Filter field name`, and click the
    `Create` button.

Visit [Accessing Traces](./accessing-traces.md) for more information on end to end traces.
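
To spot-check that traces are being collected, you can port-forward the Zipkin UI. This is a
hedged sketch rather than the documented access path: it assumes the manifests above create a
service named `zipkin` in the `istio-system` namespace listening on Zipkin's default port 9411.
See [Accessing Traces](./accessing-traces.md) for the supported instructions.

```shell
# Forward the assumed Zipkin service locally, then open http://localhost:9411 in a browser.
kubectl port-forward --namespace istio-system service/zipkin 9411:9411
```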

## Learn More