Monitoring, Logging and Tracing Installation
Knative Serving offers two different monitoring setups: Elasticsearch, Kibana, Prometheus, and Grafana; or Stackdriver, Prometheus, and Grafana. You can install only one of these two setups; side-by-side installation of both is not supported.
Before you begin
The following instructions assume that you cloned the Knative Serving repository. To clone the repository, run the following commands:
git clone https://github.com/knative/serving knative-serving
cd knative-serving
git checkout v0.2.1
Elasticsearch, Kibana, Prometheus & Grafana Setup
If you installed the full Knative release, the monitoring component is already installed and you can skip down to the Create Elasticsearch Indices section.
To configure and set up monitoring:
- Choose a container image that meets the Fluentd image requirements. For example, you can use the public image k8s.gcr.io/fluentd-elasticsearch:v2.0.4, or you can build a custom image and upload it to a container registry that your cluster has read access to (see the sketch after this list).
- Follow the instructions in "Setting up a logging plugin" to configure the settings for the Elasticsearch components.
- Install the Knative monitoring components by running the following command from the root directory of the knative/serving repository:

  kubectl apply --recursive --filename config/monitoring/100-common \
      --filename config/monitoring/150-elasticsearch \
      --filename third_party/config/monitoring/common \
      --filename third_party/config/monitoring/elasticsearch \
      --filename config/monitoring/200-common \
      --filename config/monitoring/200-common/100-istio.yaml

  The installation is complete when the logging and monitoring components are all reported Running or Completed:

  kubectl get pods --namespace monitoring --watch

  NAME                                  READY     STATUS    RESTARTS   AGE
  elasticsearch-logging-0               1/1       Running   0          2d
  elasticsearch-logging-1               1/1       Running   0          2d
  fluentd-ds-5kc85                      1/1       Running   0          2d
  fluentd-ds-vhrcq                      1/1       Running   0          2d
  fluentd-ds-xghk9                      1/1       Running   0          2d
  grafana-798cf569ff-v4q74              1/1       Running   0          2d
  kibana-logging-7d474fbb45-6qb8x       1/1       Running   0          2d
  kube-state-metrics-75bd4f5b8b-8t2h2   4/4       Running   0          2d
  node-exporter-cr6bh                   2/2       Running   0          2d
  node-exporter-mf6k7                   2/2       Running   0          2d
  node-exporter-rhzr7                   2/2       Running   0          2d
  prometheus-system-0                   1/1       Running   0          2d
  prometheus-system-1                   1/1       Running   0          2d

  Press CTRL+C to exit the watch.
- Verify that each of your nodes has the beta.kubernetes.io/fluentd-ds-ready=true label:

  kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
- If you receive the "No resources found" response:
  - Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

    kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"

  - Run the following command to ensure that the fluentd-ds DaemonSet is ready on at least one node:

    kubectl get daemonset fluentd-ds --namespace knative-monitoring
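If you build a custom Fluentd image (the first step above), the workflow is roughly the following. This is a minimal sketch rather than a prescribed procedure: the Dockerfile, image name, and registry host (my-registry.example.com) are placeholders, and it assumes Docker can push to a registry that your cluster is able to pull from.

  # Build a Fluentd image from a Dockerfile that satisfies the Fluentd image
  # requirements, then push it to a registry the cluster can read.
  # The image name and registry host below are hypothetical.
  docker build -t my-registry.example.com/knative/fluentd-elasticsearch:custom .
  docker push my-registry.example.com/knative/fluentd-elasticsearch:custom

The pushed image reference is then what you use when following the "Setting up a logging plugin" instructions.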
Create Elasticsearch Indices
To visualize logs with Kibana, you need to set which Elasticsearch indices to
explore. We will create two indices in Elasticsearch using Logstash for
application logs and Zipkin for request traces.
- To open the Kibana UI (the visualization tool for Elasticsearch), you must start a local proxy by running the following command:

  kubectl proxy

  This command starts a local proxy of Kibana on port 8001. For security reasons, the Kibana UI is exposed only within the cluster.
- Navigate to the Kibana UI. It might take a couple of minutes for the proxy to work (see the note on the proxied URL after this list).
- Within the "Configure an index pattern" page, enter logstash-* in the Index pattern field, select @timestamp from the Time Filter field name dropdown, and click the Create button.
- To create the second index, select the Create Index Pattern button at the top left of the page. Enter zipkin* in the Index pattern field, select timestamp_millis from the Time Filter field name dropdown, and click the Create button.
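As a rough guide to locating the UI behind kubectl proxy: the Kubernetes API server proxies in-cluster services at a predictable path. The namespace (monitoring) and service name (kibana-logging) below are assumptions based on the pod listing earlier in this guide; adjust them to match your cluster.

  # Hypothetical proxied Kibana URL; the general form is
  # http://localhost:8001/api/v1/namespaces/<namespace>/services/<service-name>/proxy/
  http://localhost:8001/api/v1/namespaces/monitoring/services/kibana-logging/proxy/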
Stackdriver, Prometheus & Grafana Setup
You must configure and build your own Fluentd image if either of the following is true:
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
- You want to send logs to another GCP project.
To configure and set up monitoring:
- Choose a container image that meets the Fluentd image requirements. For example, you can use a public image, or you can build a custom image and upload it to a container registry that your cluster has read access to.
- Follow the instructions in "Setting up a logging plugin" to configure the settings for the Stackdriver components.
- Install the Knative monitoring components by running the following command from the root directory of the knative/serving repository:

  kubectl apply --recursive --filename config/monitoring/100-common \
      --filename config/monitoring/150-stackdriver \
      --filename third_party/config/monitoring/common \
      --filename config/monitoring/200-common \
      --filename config/monitoring/200-common/100-istio.yaml

  The installation is complete when the logging and monitoring components are all reported Running or Completed:

  kubectl get pods --namespace monitoring --watch

  NAME                                  READY     STATUS    RESTARTS   AGE
  fluentd-ds-5kc85                      1/1       Running   0          2d
  fluentd-ds-vhrcq                      1/1       Running   0          2d
  fluentd-ds-xghk9                      1/1       Running   0          2d
  grafana-798cf569ff-v4q74              1/1       Running   0          2d
  kube-state-metrics-75bd4f5b8b-8t2h2   4/4       Running   0          2d
  node-exporter-cr6bh                   2/2       Running   0          2d
  node-exporter-mf6k7                   2/2       Running   0          2d
  node-exporter-rhzr7                   2/2       Running   0          2d
  prometheus-system-0                   1/1       Running   0          2d
  prometheus-system-1                   1/1       Running   0          2d

  Press CTRL+C to exit the watch.
- Verify that each of your nodes has the beta.kubernetes.io/fluentd-ds-ready=true label:

  kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
- If you receive the "No resources found" response (a combined check-and-label sketch follows this list):
  - Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

    kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"

  - Run the following command to ensure that the fluentd-ds DaemonSet is ready on at least one node:

    kubectl get daemonset fluentd-ds --namespace knative-monitoring
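The node-label and DaemonSet checks used in both setups can be combined into one small script. This is a minimal sketch built only from the commands shown above; it labels nodes only when no node carries the readiness label yet:

  # Label all nodes for the Fluentd DaemonSet only if none are labeled yet,
  # then confirm that the fluentd-ds DaemonSet is scheduled.
  if ! kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true --output name | grep -q .; then
    kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
  fi
  kubectl get daemonset fluentd-ds --namespace knative-monitoring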
Learn More
- Learn more about accessing logs, metrics, and traces in the Knative documentation (a minimal sketch for opening the bundled Grafana dashboards is below).
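One common way to start exploring the collected metrics is to port-forward the Grafana instance installed by the monitoring components. This is a sketch, not the documented procedure: the app=grafana label selector, the monitoring namespace, and port 3000 are assumptions about the bundled deployment, so verify them (for example with kubectl get pods --namespace monitoring --show-labels) before relying on the command.

  # Forward local port 3000 to the Grafana pod; label, namespace, and port are assumptions.
  kubectl port-forward --namespace monitoring \
    $(kubectl get pods --namespace monitoring --selector=app=grafana \
      --output=jsonpath="{.items..metadata.name}") 3000
  # Then open http://localhost:3000 in a browser.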
