Merge pull request #9614 from caesarxuchao/logging-demo-examples-v1

update examples/logging-demo to v1
Commit 1946f5acf2, merged by Abhi Shah, 2015-06-11 09:07:27 -07:00
3 changed files with 82 additions and 109 deletions


````diff
@@ -17,7 +17,7 @@ describes a pod that just emits a log message once every 4 seconds:
 # sleep 4
 # i=$[$i+1]
 # done
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   labels:
````
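Apart from the `apiVersion` bump, the pod is unchanged: it runs the bash loop shown in the comment above. A bounded sketch of that loop (three iterations instead of `while true`, and no sleep, so it terminates quickly) can be tried in any bash shell with GNU `date`:

```shell
# Bounded variant of the synthetic logger loop from the pod spec above;
# the real pod loops forever and sleeps 4s between messages.
i="0"
while [ "$i" -lt 3 ]; do
  echo -n "`hostname`: $i: "
  date --rfc-3339 ns
  i=$[$i+1]
done
```

Each iteration prints one line of the form `<hostname>: <counter>: <RFC 3339 timestamp>`, which is exactly the log format collected by fluentd in this demo.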
````diff
@@ -37,13 +37,10 @@ spec:
 The other YAML file [synthetic_10lps.yaml](synthetic_10lps.yaml) specifies a similar synthetic logger that emits 10 log messages every second. To run both synthetic loggers:
 ```
 $ make up
-../../../kubectl.sh create -f synthetic_0_25lps.yaml
-Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_0_25lps.yaml
-synthetic-logger-0.25lps-pod
-../../../kubectl.sh create -f synthetic_10lps.yaml
-Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_10lps.yaml
-synthetic-logger-10lps-pod
+../../cluster/kubectl.sh create -f synthetic_0_25lps.yaml
+pods/synthetic-logger-0.25lps-pod
+../../cluster/kubectl.sh create -f synthetic_10lps.yaml
+pods/synthetic-logger-10lps-pod
 ```
 Visiting the Kibana dashboard should make it clear that logs are being collected from the two synthetic loggers:
````
````diff
@@ -52,146 +49,121 @@ Visiting the Kibana dashboard should make it clear that logs are being collected
 You can report the running pods, [replication controllers](../../docs/replication-controller.md), and [services](../../docs/services.md) with another Makefile rule:
 ```
 $ make get
-../../../kubectl.sh get pods
-Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-elasticsearch-logging-f0smz 10.244.2.3 kubernetes-minion-ilqx/104.197.8.214 kubernetes.io/cluster-service=true,name=elasticsearch-logging Running 5 hours
-elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 Running 5 hours
-etcd-server-kubernetes-master kubernetes-master/ <none> Running 5 hours
-etcd-container gcr.io/google_containers/etcd:2.0.9 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-7s1y 10.244.0.2 kubernetes-minion-7s1y/23.236.54.97 <none> Running 5 hours
-fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-cfs6 10.244.1.2 kubernetes-minion-cfs6/104.154.61.231 <none> Running 5 hours
-fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-ilqx 10.244.2.2 kubernetes-minion-ilqx/104.197.8.214 <none> Running 5 hours
-fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-x8gx 10.244.3.2 kubernetes-minion-x8gx/104.154.47.83 <none> Running 5 hours
-fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-kibana-logging-cwe0b 10.244.1.3 kubernetes-minion-cfs6/104.154.61.231 kubernetes.io/cluster-service=true,name=kibana-logging Running 5 hours
-kibana-logging gcr.io/google_containers/kibana:1.2 Running 5 hours
-kube-apiserver-kubernetes-master kubernetes-master/ <none> Running 5 hours
-kube-apiserver gcr.io/google_containers/kube-apiserver:f0c332fc2582927ec27d24965572d4b0 Running 5 hours
-kube-controller-manager-kubernetes-master kubernetes-master/ <none> Running 5 hours
-kube-controller-manager gcr.io/google_containers/kube-controller-manager:6729154dfd4e2a19752bdf9ceff8464c Running 5 hours
-kube-dns-swd4n 10.244.3.5 kubernetes-minion-x8gx/104.154.47.83 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 5 hours
-kube2sky gcr.io/google_containers/kube2sky:1.2 Running 5 hours
-etcd quay.io/coreos/etcd:v2.0.3 Running 5 hours
-skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 5 hours
-kube-scheduler-kubernetes-master kubernetes-master/ <none> Running 5 hours
-kube-scheduler gcr.io/google_containers/kube-scheduler:ec9d2092f754211cc5ab3a5162c05fc1 Running 5 hours
-monitoring-heapster-controller-zpjj1 10.244.3.3 kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=heapster Running 5 hours
-heapster gcr.io/google_containers/heapster:v0.10.0 Running 5 hours
-monitoring-influx-grafana-controller-dqan4 10.244.3.4 kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=influxGrafana Running 5 hours
-grafana gcr.io/google_containers/heapster_grafana:v0.6 Running 5 hours
-influxdb gcr.io/google_containers/heapster_influxdb:v0.3 Running 5 hours
-synthetic-logger-0.25lps-pod 10.244.0.7 kubernetes-minion-7s1y/23.236.54.97 name=synth-logging-source Running 19 minutes
-synth-lgr ubuntu:14.04 Running 19 minutes
-synthetic-logger-10lps-pod 10.244.3.14 kubernetes-minion-x8gx/104.154.47.83 name=synth-logging-source Running 19 minutes
-synth-lgr ubuntu:14.04 Running 19 minutes
-../../_output/local/bin/linux/amd64/kubectl get replicationControllers
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-elasticsearch-logging elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 name=elasticsearch-logging 1
-kibana-logging kibana-logging gcr.io/google_containers/kibana:1.2 name=kibana-logging 1
-kube-dns etcd quay.io/coreos/etcd:v2.0.3 k8s-app=kube-dns 1
-kube2sky gcr.io/google_containers/kube2sky:1.2
-skydns gcr.io/google_containers/skydns:2015-03-11-001
-monitoring-heapster-controller heapster gcr.io/google_containers/heapster:v0.10.0 name=heapster 1
-monitoring-influx-grafana-controller influxdb gcr.io/google_containers/heapster_influxdb:v0.3 name=influxGrafana 1
-grafana gcr.io/google_containers/heapster_grafana:v0.6
-../../_output/local/bin/linux/amd64/kubectl get services
-NAME LABELS SELECTOR IP(S) PORT(S)
-elasticsearch-logging kubernetes.io/cluster-service=true,name=elasticsearch-logging name=elasticsearch-logging 10.0.251.221 9200/TCP
-kibana-logging kubernetes.io/cluster-service=true,name=kibana-logging name=kibana-logging 10.0.188.118 5601/TCP
-kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 10.0.0.10 53/UDP
-kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.2 443/TCP
-monitoring-grafana kubernetes.io/cluster-service=true,name=grafana name=influxGrafana 10.0.254.202 80/TCP
-monitoring-heapster kubernetes.io/cluster-service=true,name=heapster name=heapster 10.0.19.214 80/TCP
-monitoring-influxdb name=influxGrafana name=influxGrafana 10.0.198.71 80/TCP
-monitoring-influxdb-ui name=influxGrafana name=influxGrafana 10.0.109.66 80/TCP
+../../cluster/kubectl.sh get pods
+NAME READY REASON RESTARTS AGE
+elasticsearch-logging-v1-gzknt 1/1 Running 0 11m
+elasticsearch-logging-v1-swgzc 1/1 Running 0 11m
+fluentd-elasticsearch-kubernetes-minion-1rtv 1/1 Running 0 11m
+fluentd-elasticsearch-kubernetes-minion-6bob 1/1 Running 0 10m
+fluentd-elasticsearch-kubernetes-minion-98g3 1/1 Running 0 10m
+fluentd-elasticsearch-kubernetes-minion-qduc 1/1 Running 0 10m
+kibana-logging-v1-1w44h 1/1 Running 0 11m
+kube-dns-v3-i8u9s 3/3 Running 0 11m
+monitoring-heapster-v1-mles8 0/1 Running 11 11m
+synthetic-logger-0.25lps-pod 1/1 Running 0 42s
+synthetic-logger-10lps-pod 1/1 Running 0 41s
+../../cluster/kubectl.sh get replicationControllers
+CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
+elasticsearch-logging-v1 elasticsearch-logging gcr.io/google_containers/elasticsearch:1.4 k8s-app=elasticsearch-logging,version=v1 2
+kibana-logging-v1 kibana-logging gcr.io/google_containers/kibana:1.3 k8s-app=kibana-logging,version=v1 1
+kube-dns-v3 etcd gcr.io/google_containers/etcd:2.0.9 k8s-app=kube-dns,version=v3 1
+            kube2sky gcr.io/google_containers/kube2sky:1.9
+            skydns gcr.io/google_containers/skydns:2015-03-11-001
+monitoring-heapster-v1 heapster gcr.io/google_containers/heapster:v0.13.0 k8s-app=heapster,version=v1 1
+../../cluster/kubectl.sh get services
+NAME LABELS SELECTOR IP(S) PORT(S)
+elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.145.125 9200/TCP
+kibana-logging k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana k8s-app=kibana-logging 10.0.189.192 5601/TCP
+kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
+                                                                                                            53/TCP
+kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
 ```
````
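One detail worth noticing in the new `get pods` output: there is exactly one `fluentd-elasticsearch` collector pod per minion. A quick way to sanity-check that is to grep the pod listing; sketched here against a captured snippet of the output above (with a live cluster you would pipe `../../cluster/kubectl.sh get pods` instead):

```shell
# Snippet of the pod listing from `make get` above.
pods='elasticsearch-logging-v1-gzknt                 1/1   Running   0   11m
fluentd-elasticsearch-kubernetes-minion-1rtv   1/1   Running   0   11m
fluentd-elasticsearch-kubernetes-minion-6bob   1/1   Running   0   10m
synthetic-logger-0.25lps-pod                   1/1   Running   0   42s'

# Count the per-node log collectors (pod names start with fluentd-elasticsearch-).
collectors=$(echo "$pods" | grep -c '^fluentd-elasticsearch-')
echo "$collectors"
```

On a healthy cluster the count should equal the number of minions.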
````diff
 The `net` rule in the Makefile will report information about the Elasticsearch and Kibana services including the public IP addresses of each service.
 ```
 $ make net
-../../../kubectl.sh get services elasticsearch-logging -o json
-current-context: "lithe-cocoa-92103_kubernetes"
-Running: ../../_output/local/bin/linux/amd64/kubectl get services elasticsearch-logging -o json
+../../cluster/kubectl.sh get services elasticsearch-logging -o json
 {
     "kind": "Service",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
         "name": "elasticsearch-logging",
         "namespace": "default",
-        "selfLink": "/api/v1beta3/namespaces/default/services/elasticsearch-logging",
-        "uid": "9dc7290f-f358-11e4-a58e-42010af09a93",
-        "resourceVersion": "28",
-        "creationTimestamp": "2015-05-05T18:57:45Z",
+        "selfLink": "/api/v1/namespaces/default/services/elasticsearch-logging",
+        "uid": "e056e116-0fb4-11e5-9243-42010af0d13a",
+        "resourceVersion": "23",
+        "creationTimestamp": "2015-06-10T21:08:43Z",
         "labels": {
+            "k8s-app": "elasticsearch-logging",
             "kubernetes.io/cluster-service": "true",
-            "name": "elasticsearch-logging"
+            "kubernetes.io/name": "Elasticsearch"
         }
     },
     "spec": {
         "ports": [
             {
+                "name": "",
                 "protocol": "TCP",
                 "port": 9200,
-                "targetPort": "es-port"
+                "targetPort": "es-port",
+                "nodePort": 0
             }
         ],
         "selector": {
-            "name": "elasticsearch-logging"
+            "k8s-app": "elasticsearch-logging"
         },
-        "portalIP": "10.0.251.221",
+        "clusterIP": "10.0.145.125",
+        "type": "ClusterIP",
         "sessionAffinity": "None"
     },
-    "status": {}
+    "status": {
+        "loadBalancer": {}
+    }
 }
-current-context: "lithe-cocoa-92103_kubernetes"
-Running: ../../_output/local/bin/linux/amd64/kubectl get services kibana-logging -o json
+../../cluster/kubectl.sh get services kibana-logging -o json
 {
     "kind": "Service",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
         "name": "kibana-logging",
         "namespace": "default",
-        "selfLink": "/api/v1beta3/namespaces/default/services/kibana-logging",
-        "uid": "9dc6f856-f358-11e4-a58e-42010af09a93",
-        "resourceVersion": "31",
-        "creationTimestamp": "2015-05-05T18:57:45Z",
+        "selfLink": "/api/v1/namespaces/default/services/kibana-logging",
+        "uid": "e05c7dae-0fb4-11e5-9243-42010af0d13a",
+        "resourceVersion": "30",
+        "creationTimestamp": "2015-06-10T21:08:43Z",
         "labels": {
+            "k8s-app": "kibana-logging",
             "kubernetes.io/cluster-service": "true",
-            "name": "kibana-logging"
+            "kubernetes.io/name": "Kibana"
         }
     },
     "spec": {
         "ports": [
             {
+                "name": "",
                 "protocol": "TCP",
                 "port": 5601,
-                "targetPort": "kibana-port"
+                "targetPort": "kibana-port",
+                "nodePort": 0
             }
         ],
         "selector": {
-            "name": "kibana-logging"
+            "k8s-app": "kibana-logging"
         },
-        "portalIP": "10.0.188.118",
+        "clusterIP": "10.0.189.192",
+        "type": "ClusterIP",
         "sessionAffinity": "None"
     },
-    "status": {}
+    "status": {
+        "loadBalancer": {}
+    }
 }
 ```
````
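Rather than reading `clusterIP` out of the service JSON by eye, the field can be pulled out programmatically. A sketch using python's `json` module on a captured fragment (against a live cluster the input would come from `../../cluster/kubectl.sh get services elasticsearch-logging -o json`; `python3` on `PATH` is assumed):

```shell
# Minimal fragment of the elasticsearch-logging service JSON shown above.
svc='{"spec": {"clusterIP": "10.0.145.125", "ports": [{"port": 9200}]}}'

# Extract spec.clusterIP from the JSON on stdin.
ip=$(echo "$svc" | python3 -c 'import json,sys; print(json.load(sys.stdin)["spec"]["clusterIP"])')
echo "$ip"
```

The same pattern works for any field kubectl reports, which is handy in Makefile rules like `net`.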
````diff
 To find the URLs to access the Elasticsearch and Kibana viewer,
 ```
 $ kubectl cluster-info
-Kubernetes master is running at https://130.211.122.180
-elasticsearch-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging
-kibana-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging
-kube-dns is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kube-dns
-grafana is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-grafana
-heapster is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-heapster
+Kubernetes master is running at https://104.154.60.226
+Elasticsearch is running at https://104.154.60.226/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging
+Kibana is running at https://104.154.60.226/api/v1beta3/proxy/namespaces/default/services/kibana-logging
+KubeDNS is running at https://104.154.60.226/api/v1beta3/proxy/namespaces/default/services/kube-dns
 ```
````
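The service URLs printed by `cluster-info` all follow one pattern: the master address plus `/api/<version>/proxy/namespaces/<namespace>/services/<service>`. A sketch of assembling one by hand, using the sample master address from the output above:

```shell
# Build a proxied service URL the way cluster-info prints it.
master="https://104.154.60.226"   # sample master address from above
version="v1"
svc="elasticsearch-logging"
url="$master/api/$version/proxy/namespaces/default/services/$svc"
echo "$url"
```

The resulting URL can then be fetched with `curl -u admin:<password>`, using the basic-auth credentials found in the kubeconfig.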
To find the user name and password to access the URLs, To find the user name and password to access the URLs,
````diff
@@ -201,7 +173,7 @@ apiVersion: v1
 clusters:
 - cluster:
     certificate-authority-data: REDACTED
-    server: https://130.211.122.180
+    server: https://104.154.60.226
   name: lithe-cocoa-92103_kubernetes
 contexts:
 - context:
````
````diff
@@ -223,7 +195,7 @@ users:
     username: admin
 ```
-Access the Elasticsearch service at URL `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging`, use the user name 'admin' and password 'h5M0FtVXXflBSdI7',
+Access the Elasticsearch service at URL `https://104.154.60.226/api/v1/proxy/namespaces/default/services/elasticsearch-logging`, use the user name 'admin' and password 'h5M0FtVXXflBSdI7',
 ```
 {
   "status" : 200,
@@ -239,7 +211,7 @@ Access the Elasticsearch service at URL `https://130.211.122.180/api/v1beta3/pro
   "tagline" : "You Know, for Search"
 }
 ```
-Visiting the URL `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging` should show the Kibana viewer for the logging information stored in the Elasticsearch service.
+Visiting the URL `https://104.154.60.226/api/v1/proxy/namespaces/default/services/kibana-logging` should show the Kibana viewer for the logging information stored in the Elasticsearch service.
````
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/logging-demo/README.md?pixel)]() [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/logging-demo/README.md?pixel)]()


````diff
@@ -12,7 +12,7 @@
 # sleep 4
 # i=$[$i+1]
 # done
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   labels:
@@ -20,10 +20,11 @@ metadata:
   name: synthetic-logger-0.25lps-pod
 spec:
   containers:
-  - args:
+  - name: synth-lgr
+    image: ubuntu:14.04
+    args:
     - bash
     - -c
     - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
       4; i=$[$i+1]; done'
-    image: ubuntu:14.04
-    name: synth-lgr
````


````diff
@@ -12,19 +12,19 @@
 # sleep 4
 # i=$[$i+1]
 # done
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
-  creationTimestamp: null
   labels:
     name: synth-logging-source
   name: synthetic-logger-10lps-pod
 spec:
   containers:
-  - args:
+  - name: synth-lgr
+    image: ubuntu:14.04
+    args:
     - bash
     - -c
     - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
       0.1; i=$[$i+1]; done'
-    image: ubuntu:14.04
-    name: synth-lgr
````
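Both pod files also reorder the container keys so that `name` and `image` come before `args`. Key order in a YAML or JSON mapping carries no meaning, so this part of the diff is purely cosmetic; a quick illustration using JSON equivalents of the two orderings (`python3` on `PATH` is assumed):

```shell
# Same container spec, two key orders: they parse to equal objects.
old='{"args": ["bash", "-c"], "image": "ubuntu:14.04", "name": "synth-lgr"}'
new='{"name": "synth-lgr", "image": "ubuntu:14.04", "args": ["bash", "-c"]}'
same=$(python3 -c 'import json,sys; a,b=sys.argv[1:3]; print(json.loads(a) == json.loads(b))' "$old" "$new")
echo "$same"
```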