Collected markdown fixes around syntax.
This commit is contained in:
parent 2d1ab041f9
commit 2c5c45b47f
@@ -287,7 +287,6 @@ UN 10.244.3.3 51.28 KB 256 51.0% dafe3154-1d67-42e1-ac1d-78e
 For those of you who are impatient, here is the summary of the commands we ran in this tutorial.
 
 ```sh
 # create a service to track all cassandra nodes
 kubectl create -f examples/cassandra/cassandra-service.yaml
-
 
@@ -83,7 +83,7 @@ spec:
 
 To start the service, run:
 
-```shell
+```sh
 $ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
 ```
 
@@ -111,7 +111,6 @@ metadata:
 namespace: NAMESPACE
 data:
 token: "TOKEN"
-
 ```
 
 Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the base64 encoded
@@ -126,7 +125,6 @@ $ kubectl config view
 ...
 $ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
 eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=
-
 ```
 
 resulting in the file:
@@ -139,7 +137,6 @@ metadata:
 namespace: mytunes
 data:
 token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="
-
 ```
 
 which can be used to create the secret in your namespace:
@@ -147,7 +144,6 @@ which can be used to create the secret in your namespace:
 ```console
 kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
 secrets/apiserver-secret
-
 ```
 
 Now you are ready to create the replication controller which will then create the pods:
@@ -155,7 +151,6 @@ Now you are ready to create the replication controller which will then create th
 ```console
 $ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
 replicationcontrollers/music-db
-
 ```
 
 It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch
@@ -184,7 +179,6 @@ Let's create the service with an external load balancer:
 ```console
 $ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes
 services/music-server
-
 ```
 
 Let's see what we've got:
@@ -301,7 +295,6 @@ music-db-u1ru3 1/1 Running 0 38s
 music-db-wnss2 1/1 Running 0 1m
 music-db-x7j2w 1/1 Running 0 1m
 music-db-zjqyv 1/1 Running 0 1m
-
 ```
 
 Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
@@ -359,7 +352,6 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
 "name" : "mytunes-db"
 "vm_name" : "OpenJDK 64-Bit Server VM",
 "name" : "eth0",
-
 ```
 
@@ -46,7 +46,7 @@ Currently, you can look at:
 
 Example from command line (the DNS lookup looks better from a web browser):
 
-```
+```console
 $ kubectl create -f examples/explorer/pod.json
 $ kubectl proxy &
 Starting to serve on localhost:8001
@@ -63,13 +63,13 @@ The "IP" field should be filled with the address of a node in the Glusterfs serv
 
 Create the endpoints,
 
-```shell
+```sh
 $ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
 ```
 
 You can verify that the endpoints are successfully created by running
 
-```shell
+```sh
 $ kubectl get endpoints
 NAME ENDPOINTS
 glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
@@ -79,7 +79,7 @@ glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
 
 The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
 
-```js
+```json
 {
 "name": "glusterfsvol",
 "glusterfs": {
@@ -98,13 +98,13 @@ The parameters are explained as the followings.
 
 Create a pod that has a container using Glusterfs volume,
 
-```shell
+```sh
 $ kubectl create -f examples/glusterfs/glusterfs-pod.json
 ```
 
 You can verify that the pod is running:
 
-```shell
+```sh
 $ kubectl get pods
 NAME READY STATUS RESTARTS AGE
 glusterfs 1/1 Running 0 3m
@@ -115,7 +115,7 @@ $ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
 
 You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,
 
-```shell
+```sh
 $ mount | grep kube_vol
 10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
 ```
@@ -6,13 +6,13 @@
 
 Now start a local redis instance
 
-```
+```sh
 redis-server
 ```
 
 And run the app
 
-```
+```sh
 export GOPATH=~/Development/k8hacking/k8petstore/web-server/
 cd $GOPATH/src/main/
 ## Now, you're in the local dir to run the app. Go get its dependencies.
@@ -56,14 +56,14 @@ billing](https://developers.google.com/console/help/new/#billing).
 Authenticate with gcloud and set the gcloud default project name to
 point to the project you want to use for your Kubernetes cluster:
 
-```shell
+```sh
 gcloud auth login
 gcloud config set project <project-name>
 ```
 
 Next, start up a Kubernetes cluster:
 
-```shell
+```sh
 wget -q -O - https://get.k8s.io | bash
 ```
@@ -193,7 +193,7 @@ image is based on the Node.js official image. It then installs Meteor
 and copies in your apps' code. The last line specifies what happens
 when your app container is run.
 
-```
+```sh
 ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
 ```
@@ -216,7 +216,8 @@ As mentioned above, the mongo container uses a volume which is mapped
 to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
 section specifies the volume:
 
-```
+```json
+{
 "volumeMounts": [
 {
 "name": "mongo-disk",
@@ -227,7 +228,8 @@ section specifies the volume:
 The name `mongo-disk` refers to the volume specified outside the
 container section:
 
-```
+```json
+{
 "volumes": [
 {
 "name": "mongo-disk",
@@ -45,7 +45,7 @@ into another one.
 
 The nfs server pod creates a privileged container, so if you are using a Salt based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you have to enable the ability to create privileged containers by API.
 
-```shell
+```sh
 #At the root of Kubernetes source code
 $ vi cluster/saltbase/pillar/privilege.sls
|
||||||
|
|
@ -41,7 +41,7 @@ The example combines a web frontend and an external service that provides MySQL
|
||||||
|
|
||||||
This example assumes that you have a basic understanding of kubernetes [services](../../docs/user-guide/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/):
|
This example assumes that you have a basic understanding of kubernetes [services](../../docs/user-guide/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/):
|
||||||
|
|
||||||
```shell
|
```sh
|
||||||
$ cd kubernetes
|
$ cd kubernetes
|
||||||
$ hack/dev-build-and-up.sh
|
$ hack/dev-build-and-up.sh
|
||||||
```
|
```
|
||||||
|
|
@@ -56,7 +56,7 @@ In the remaining part of this example we will assume that your instance is named
 
 To start Phabricator server use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json) which describes a [replication controller](../../docs/user-guide/replication-controller.md) with a single [pod](../../docs/user-guide/pods.md) running an Apache server with Phabricator PHP source:
 
-```js
+```json
 {
 "kind": "ReplicationController",
 "apiVersion": "v1",
@@ -98,13 +98,13 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont
 
 Create the phabricator pod in your Kubernetes cluster by running:
 
-```shell
+```sh
 $ kubectl create -f examples/phabricator/phabricator-controller.json
 ```
 
 Once that's up you can list the pods in the cluster, to verify that it is running:
 
-```shell
+```sh
 kubectl get pods
 ```
@@ -117,7 +117,7 @@ phabricator-controller-9vy68 1/1 Running 0 1m
 
 If you ssh to that machine, you can run `docker ps` to see the actual pod:
 
-```shell
+```sh
 me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-2
 
 $ sudo docker ps
@@ -148,7 +148,7 @@ gcloud sql instances patch phabricator-db --authorized-networks 130.211.141.151
 
 To automate this process and make sure that a proper host is authorized even if pod is rescheduled to a new machine we need a separate pod that periodically lists pods and authorizes hosts. Use the file [`examples/phabricator/authenticator-controller.json`](authenticator-controller.json):
 
-```js
+```json
 {
 "kind": "ReplicationController",
 "apiVersion": "v1",
@@ -184,7 +184,7 @@ To automate this process and make sure that a proper host is authorized even if
 
 To create the pod run:
 
-```shell
+```sh
 $ kubectl create -f examples/phabricator/authenticator-controller.json
 ```
@@ -195,7 +195,7 @@ A Kubernetes 'service' is a named load balancer that proxies traffic to one or m
 
 The pod that you created in Step One has the label `name=phabricator`. The selector field of the service determines which pods will receive the traffic sent to the service. Since we are setting up a service for an external application we also need to request an external static IP address (otherwise it will be assigned dynamically):
 
-```shell
+```sh
 $ gcloud compute addresses create phabricator --region us-central1
 Created [https://www.googleapis.com/compute/v1/projects/myproject/regions/us-central1/addresses/phabricator].
 NAME REGION ADDRESS STATUS
@@ -204,7 +204,7 @@ phabricator us-central1 107.178.210.6 RESERVED
 
 Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json):
 
-```js
+```json
 {
 "kind": "Service",
 "apiVersion": "v1",
@@ -228,14 +228,14 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi
 
 To create the service run:
 
-```shell
+```sh
 $ kubectl create -f examples/phabricator/phabricator-service.json
 phabricator
 ```
 
 To play with the service itself, find the external IP of the load balancer:
 
-```shell
+```sh
 $ kubectl get services phabricator -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}{{"\n"}}'
 ```
@@ -243,7 +243,7 @@ and then visit port 80 of that IP address.
 
 **Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
 
-```shell
+```sh
 $ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion
 ```
@@ -251,7 +251,7 @@ $ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --targ
 
 To turn down a Kubernetes cluster:
 
-```shell
+```sh
 $ cluster/kube-down.sh
 ```
@@ -134,7 +134,7 @@ The external load balancer allows us to access the service from outside via an e
 
 Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
 
-```
+```console
 $ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
 ```
@@ -63,7 +63,7 @@ cluster.
 Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/user-guide/pods.md) running
 the Master service.
 
-```shell
+```sh
 $ kubectl create -f examples/spark/spark-master.json
 ```
@@ -71,13 +71,13 @@ Then, use the [`examples/spark/spark-master-service.json`](spark-master-service.
 create a logical service endpoint that Spark workers can use to access
 the Master pod.
 
-```shell
+```sh
 $ kubectl create -f examples/spark/spark-master-service.json
 ```
 
 ### Check to see if Master is running and accessible
 
-```shell
+```sh
 $ kubectl get pods
 NAME READY STATUS RESTARTS AGE
 [...]
@@ -87,7 +87,7 @@ spark-master 1/1 Running 0 25
 
 Check logs to see the status of the master.
 
-```shell
+```sh
 $ kubectl logs spark-master
 
 starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark--org.apache.spark.deploy.master.Master-1-spark-master.out
@@ -122,13 +122,13 @@ The Spark workers need the Master service to be running.
 Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a
 [replication controller](../../docs/user-guide/replication-controller.md) that manages the worker pods.
 
-```shell
+```sh
 $ kubectl create -f examples/spark/spark-worker-controller.json
 ```
 
 ### Check to see if the workers are running
 
-```shell
+```sh
 $ kubectl get pods
 NAME READY STATUS RESTARTS AGE
 [...]
@@ -148,7 +148,7 @@ $ kubectl logs spark-master
 
 Get the address and port of the Master service.
 
-```shell
+```sh
 $ kubectl get service spark-master
 NAME LABELS SELECTOR IP(S) PORT(S)
 spark-master name=spark-master name=spark-master 10.0.204.187 7077/TCP
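A normalization pass like this one is easy to re-check later. A rough sketch of the kind of audit that would tally fence info strings and surface any stray `shell`/`js` annotations; the `/tmp/fence-audit` scratch directory and sample file below are illustrative, in a real checkout you would point the `grep` at `docs/` or `examples/`:

```sh
# Build a scratch file with one old-style and one normalized fence.
mkdir -p /tmp/fence-audit
printf '```shell\nls\n```\n\n```sh\npwd\n```\n' > /tmp/fence-audit/a.md

# Tally opening code-fence info strings; leftover "shell" fences
# (the ones this commit rewrites to "sh") show up in the counts.
grep -rhoE '^`{3}[A-Za-z]+' /tmp/fence-audit | sort | uniq -c
```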