tldr for dns example; rename log to logs
This commit is contained in:
parent c0267586e6
commit 8cff557b4f
@@ -115,7 +115,7 @@ dns-frontend 10.244.2.9 kubernetes-min
Wait until the pod succeeds, then we can see the output from the client pod:
```shell
$ kubectl log dns-frontend
$ kubectl logs dns-frontend
2015-05-07T20:13:54.147664936Z 10.0.236.129
2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.cluster.local:8000
2015-05-07T20:13:54.147733438Z <Response [200]>
@@ -129,7 +129,7 @@ If we switch to prod namespace with the same pod config, we'll see the same resu
```shell
$ kubectl config use-context prod
$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
$ kubectl log dns-frontend
$ kubectl logs dns-frontend
2015-05-07T20:13:54.147664936Z 10.0.236.129
2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.cluster.local:8000
2015-05-07T20:13:54.147733438Z <Response [200]>
@@ -142,4 +142,39 @@ $ kubectl log dns-frontend
If you prefer not to use namespaces, then all your services can be addressed using the `default` namespace, e.g. `http://dns-backend.default.cluster.local:8000`, or the shorthand version `http://dns-backend:8000`.
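To see that both forms resolve, here is a minimal sketch of a check you could run from inside the cluster with a reasonably recent `kubectl`; it assumes a throwaway pod named `busybox` (running the `busybox` image) in the `default` namespace and a `dns-backend` service in that same namespace:

```shell
# Hypothetical check from a busybox pod in the default namespace;
# both names should resolve to the same service IP.
$ kubectl exec busybox -- nslookup dns-backend.default.cluster.local
$ kubectl exec busybox -- nslookup dns-backend
```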
### tl; dr;
For those of you who are impatient, here is a summary of the commands we ran in this tutorial. Remember to first set `$CLUSTER_NAME` and `$USER_NAME` to the values found in `~/.kube/config`.
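If you are not sure which values to use, one way to look them up is sketched below (the `export` values are placeholders; substitute the cluster and user entries shown in your own kubeconfig):

```sh
# Inspect your kubeconfig for the cluster and user entries.
kubectl config view
# Then export the values by hand (placeholders shown here).
export CLUSTER_NAME=your-cluster-name
export USER_NAME=your-user-name
```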
```sh
# create dev and prod namespaces
kubectl create -f examples/cluster-dns/namespace-dev.yaml
kubectl create -f examples/cluster-dns/namespace-prod.yaml

# create two contexts
kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}

# create two backend replication controllers
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-backend-rc.yaml

# create backend services
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-backend-service.yaml
kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-backend-service.yaml

# create a pod in each namespace and get its output
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
kubectl logs dns-frontend

kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
kubectl logs dns-frontend
```
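To double-check the result in each namespace afterwards, something along these lines should work (a sketch that assumes the `dev` and `prod` contexts created above):

```sh
# Sketch: confirm the frontend pod ran in each namespace.
kubectl config use-context dev
kubectl get pods
kubectl config use-context prod
kubectl get pods
```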
@@ -153,7 +153,7 @@ hazelcast-ulkws 10.244.66.2 e2e-test-
To prove that this all works, you can use the `logs` command to examine the logs of one pod, for example:
```sh
$ kubectl log hazelcast-ulkws hazelcast
$ kubectl logs hazelcast-ulkws hazelcast
2015-05-09 22:06:20.016 INFO 5 --- [ main] com.github.pires.hazelcast.Application : Starting Application v0.2-SNAPSHOT on hazelcast-enyli with PID 5 (/bootstrapper.jar started by root in /)
2015-05-09 22:06:20.071 INFO 5 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@5424f110: startup date [Sat May 09 22:06:20 GMT 2015]; root of context hierarchy
2015-05-09 22:06:21.511 INFO 5 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
@@ -116,7 +116,7 @@ $ kubectl get pods
You can take a look at the logs for a pod by using `kubectl.sh logs`. For example:
```shell
$ kubectl log mysql
$ kubectl logs mysql
```
If you want to do deeper troubleshooting, e.g. if it seems a container is not staying up, you can also SSH into the node that a pod is running on. There, you can run `sudo -s`, then `docker ps -a` to see all the containers. You can then inspect the logs of containers that have exited via `docker logs <container_id>`. (You can also find some relevant logs under `/var/log`, e.g. `docker.log` and `kubelet.log`.)
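As a rough sketch of that node-level flow (the node address and container id are placeholders; adapt them to your cluster):

```shell
# SSH to the node the pod was scheduled on (placeholder address).
ssh <node-address>
sudo -s
docker ps -a                                  # list all containers, including exited ones
docker logs <container_id>                    # inspect the logs of an exited container
ls /var/log/docker.log /var/log/kubelet.log   # node-level logs mentioned above
```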
@@ -131,7 +131,7 @@ You should now get a pod provisioned whose name begins with openshift.
```shell
$ cluster/kubectl.sh get pods | grep openshift
$ cluster/kubectl.sh log openshift-t7147 origin
Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/amd64/kubectl log openshift-t7t47 origin
Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/amd64/kubectl logs openshift-t7t47 origin
2015-04-30T15:26:00.454146869Z I0430 15:26:00.454005 1 start_master.go:296] Starting an OpenShift master, reachable at 0.0.0.0:8443 (etcd: [https://10.0.27.2:4001])
2015-04-30T15:26:00.454231211Z I0430 15:26:00.454223 1 start_master.go:297] OpenShift master public address is https://104.197.73.241:8443
```
@@ -99,7 +99,7 @@ CONTAINER ID IMAGE COMMAND CREATED
If you read the logs of the phabricator container, you will notice the following error message:
```bash
$ kubectl log phabricator-controller-02qp4
$ kubectl logs phabricator-controller-02qp4
[...]
Raw MySQL Error: Attempt to connect to root@173.194.252.142 failed with error
#2013: Lost connection to MySQL server at 'reading initial communication
@@ -41,7 +41,7 @@ This pod runs a binary that displays the content of one of the pieces of secret
volume:
```shell
$ kubectl log secret-test-pod
$ kubectl logs secret-test-pod
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
```