From 2bf196397c496ac10064bc4063838e2a69d23145 Mon Sep 17 00:00:00 2001
From: Doug Davis
Date: Thu, 5 May 2016 13:41:49 -0700
Subject: [PATCH] Change minion to node

Continuation of #1111

I tried to keep this PR down to just a simple search-n-replace to keep
things simple. I may have gone too far in some spots, but it's easy to
roll those back if needed.

I avoided renaming `contrib/mesos/pkg/minion` because there's already a
`contrib/mesos/pkg/node` dir and fixing that will require a bit of work
due to a circular import chain that pops up. So I'm saving that for a
follow-on PR.

I rolled back some of this from a previous commit because it just got
too big/messy. Will follow up with additional PRs.

Signed-off-by: Doug Davis
---
 guestbook-go/README.md        |  4 ++--
 guestbook/README.md           | 10 +++++-----
 javaee/README.md              |  6 +++---
 meteor/README.md              |  2 +-
 openshift-origin/README.md    |  2 +-
 phabricator/README.md         |  6 +++---
 phabricator/setup.sh          |  2 +-
 runtime-constraints/README.md |  6 +++---
 sharing-clusters/README.md    | 24 ++++++++++++------------
 9 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/guestbook-go/README.md b/guestbook-go/README.md
index db76ae27..5f3d2093 100644
--- a/guestbook-go/README.md
+++ b/guestbook-go/README.md
@@ -92,9 +92,9 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
 4. To verify what containers are running in the redis-master pod, you can SSH to that machine with `gcloud compute ssh --zone` *`zone_name`* *`host_name`* and then run `docker ps`:

    ```console
-   me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p
+   me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-bz1p

-   me@kubernetes-minion-3:~$ sudo docker ps
+   me@kubernetes-node-3:~$ sudo docker ps
    CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS
    d5c458dabe50        redis               "/entrypoint.sh redis"  5 minutes ago       Up 5 minutes
    ```
diff --git a/guestbook/README.md b/guestbook/README.md
index ae2a1fbe..a37b8bf0 100644
--- a/guestbook/README.md
+++ b/guestbook/README.md
@@ -322,7 +322,7 @@ You can get information about a pod, including the machine that it is running on
 ```console
 $ kubectl describe pods redis-master-2353460263-1ecey
 Name:       redis-master-2353460263-1ecey
-Node:       kubernetes-minion-m0k7/10.240.0.5
+Node:       kubernetes-node-m0k7/10.240.0.5
 ...
 Labels:     app=redis,pod-template-hash=2353460263,role=master,tier=backend
 Status:     Running
@@ -337,7 +337,7 @@ Containers:
 ...
 ```

-The `Node` is the name and IP of the machine, e.g. `kubernetes-minion-m0k7` in the example above. You can find more details about this node with `kubectl describe nodes kubernetes-minion-m0k7`.
+The `Node` is the name and IP of the machine, e.g. `kubernetes-node-m0k7` in the example above. You can find more details about this node with `kubectl describe nodes kubernetes-node-m0k7`.

 If you want to view the container logs for a given pod, you can run:

@@ -356,7 +356,7 @@ me@workstation$ gcloud compute ssh
 Then, you can look at the Docker containers on the remote machine. You should see something like this (the specifics of the IDs will be different):

 ```console
-me@kubernetes-minion-krxw:~$ sudo docker ps
+me@kubernetes-node-krxw:~$ sudo docker ps
 CONTAINER ID        IMAGE                      COMMAND                  CREATED              STATUS              PORTS               NAMES
 ...
 0ffef9649265        redis:latest               "/entrypoint.sh redi"    About a minute ago   Up About a minute                       k8s_master.869d22f3_redis-master-dz33o_default_1449a58a-5ead-11e5-a104-688f84ef8ef6_d74cb2b5
@@ -718,10 +718,10 @@ NAME      REGION       IP_ADDRESS       IP_PROTOCOL  TARGET
 frontend  us-central1  130.211.188.51   TCP          us-central1/targetPools/frontend
 ```

-In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
+In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node` (replace with your tags as appropriate):

 ```console
-$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80
+$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-node kubernetes-node-80
 ```

 For GCE Kubernetes startup details, see the [Getting started on Google Compute Engine](../../docs/getting-started-guides/gce.md)
diff --git a/javaee/README.md b/javaee/README.md
index 944cd9c2..4fe6ec4f 100644
--- a/javaee/README.md
+++ b/javaee/README.md
@@ -143,12 +143,12 @@ kubectl get -o template po wildfly-rc-w2kk5 --template={{.status.podIP}}
 10.246.1.23
 ```

-Log in to minion and access the application:
+Log in to node and access the application:

 ```sh
-vagrant ssh minion-1
+vagrant ssh node-1
 Last login: Thu Jul 16 00:24:36 2015 from 10.0.2.2

-[vagrant@kubernetes-minion-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
+[vagrant@kubernetes-node-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
 1Penny2Sheldon3Amy4Leonard5Bernadette6Raj7Howard8Priya
 ```
diff --git a/meteor/README.md b/meteor/README.md
index 66a3a896..94c393af 100644
--- a/meteor/README.md
+++ b/meteor/README.md
@@ -180,7 +180,7 @@ You will have to open up port 80 if it's not open yet in your environment. On
 Google Compute Engine, you may run the below command.

 ```
-gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
+gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-node
 ```

 What is going on?
diff --git a/openshift-origin/README.md b/openshift-origin/README.md
index 451e4d41..b352dd5c 100644
--- a/openshift-origin/README.md
+++ b/openshift-origin/README.md
@@ -59,7 +59,7 @@ $ vi cluster/saltbase/pillar/privilege.sls
 allow_privileged: true
 ```

-Now spin up a cluster using your preferred KUBERNETES_PROVIDER. Remember that `kube-up.sh` may start other pods on your minion nodes, so ensure that you have enough resources to run the five pods for this example.
+Now spin up a cluster using your preferred KUBERNETES_PROVIDER. Remember that `kube-up.sh` may start other pods on your nodes, so ensure that you have enough resources to run the five pods for this example.

 ```sh
diff --git a/phabricator/README.md b/phabricator/README.md
index ba04699d..dc3c408f 100644
--- a/phabricator/README.md
+++ b/phabricator/README.md
@@ -160,7 +160,7 @@ phabricator-controller-9vy68   1/1       Running   0          1m
 If you ssh to that machine, you can run `docker ps` to see the actual pod:

 ```sh
-me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-2
+me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-2

 $ sudo docker ps
 CONTAINER ID        IMAGE                             COMMAND                CREATED             STATUS              PORTS               NAMES
@@ -230,10 +230,10 @@ and then visit port 80 of that IP address.

 **Note**: Provisioning of the external IP address may take few minutes.

-**Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
+**Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node`:

 ```sh
-$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion
+$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node
 ```

 ### Step Six: Cleanup
diff --git a/phabricator/setup.sh b/phabricator/setup.sh
index 678973c8..588b1f5f 100755
--- a/phabricator/setup.sh
+++ b/phabricator/setup.sh
@@ -16,5 +16,5 @@
 echo "Create Phabricator replication controller" && kubectl create -f phabricator-controller.json
 echo "Create Phabricator service" && kubectl create -f phabricator-service.json

-echo "Create firewall rule" && gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion
+echo "Create firewall rule" && gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node
diff --git a/runtime-constraints/README.md b/runtime-constraints/README.md
index 7c7c7bd1..4dc4f627 100644
--- a/runtime-constraints/README.md
+++ b/runtime-constraints/README.md
@@ -79,13 +79,13 @@ $ cluster/kubectl.sh run cpuhog \
     -- md5sum /dev/urandom
 ```

-This will create a single pod on your minion that requests 1/10 of a CPU, but it has no limit on how much CPU it may actually consume
+This will create a single pod on your node that requests 1/10 of a CPU, but it has no limit on how much CPU it may actually consume
 on the node.

 To demonstrate this, if you SSH into your machine, you will see it is consuming as much CPU as possible on the node.

 ```
-$ vagrant ssh minion-1
+$ vagrant ssh node-1
 $ sudo docker stats $(sudo docker ps -q)
 CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
 6b593b1a9658        0.00%               1.425 MB/1.042 GB   0.14%               1.038 kB/738 B
@@ -150,7 +150,7 @@ $ cluster/kubectl.sh run cpuhog \
 Let's SSH into the node, and look at usage stats.

 ```
-$ vagrant ssh minion-1
+$ vagrant ssh node-1
 $ sudo su
 $ docker stats $(docker ps -q)
 CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
diff --git a/sharing-clusters/README.md b/sharing-clusters/README.md
index 90b529c5..a3af523d 100644
--- a/sharing-clusters/README.md
+++ b/sharing-clusters/README.md
@@ -88,18 +88,18 @@ And kubectl get nodes should agree:
 ```
 $ kubectl get nodes
 NAME                     LABELS                                           STATUS
-eu-minion-0n61           kubernetes.io/hostname=eu-minion-0n61            Ready
-eu-minion-79ua           kubernetes.io/hostname=eu-minion-79ua            Ready
-eu-minion-7wz7           kubernetes.io/hostname=eu-minion-7wz7            Ready
-eu-minion-loh2           kubernetes.io/hostname=eu-minion-loh2            Ready
+eu-node-0n61             kubernetes.io/hostname=eu-node-0n61              Ready
+eu-node-79ua             kubernetes.io/hostname=eu-node-79ua              Ready
+eu-node-7wz7             kubernetes.io/hostname=eu-node-7wz7              Ready
+eu-node-loh2             kubernetes.io/hostname=eu-node-loh2              Ready

 $ kubectl config use-context
 $ kubectl get nodes
 NAME                     LABELS                                                   STATUS
-kubernetes-minion-5jtd   kubernetes.io/hostname=kubernetes-minion-5jtd            Ready
-kubernetes-minion-lqfc   kubernetes.io/hostname=kubernetes-minion-lqfc            Ready
-kubernetes-minion-sjra   kubernetes.io/hostname=kubernetes-minion-sjra            Ready
-kubernetes-minion-wul8   kubernetes.io/hostname=kubernetes-minion-wul8            Ready
+kubernetes-node-5jtd     kubernetes.io/hostname=kubernetes-node-5jtd              Ready
+kubernetes-node-lqfc     kubernetes.io/hostname=kubernetes-node-lqfc              Ready
+kubernetes-node-sjra     kubernetes.io/hostname=kubernetes-node-sjra              Ready
+kubernetes-node-wul8     kubernetes.io/hostname=kubernetes-node-wul8              Ready
 ```

 ## Testing reachability
@@ -207,10 +207,10 @@ $ kubectl exec -it kubectl-tester bash

 kubectl-tester $ kubectl get nodes
 NAME                     LABELS                                           STATUS
-eu-minion-0n61           kubernetes.io/hostname=eu-minion-0n61            Ready
-eu-minion-79ua           kubernetes.io/hostname=eu-minion-79ua            Ready
-eu-minion-7wz7           kubernetes.io/hostname=eu-minion-7wz7            Ready
-eu-minion-loh2           kubernetes.io/hostname=eu-minion-loh2            Ready
+eu-node-0n61             kubernetes.io/hostname=eu-node-0n61              Ready
+eu-node-79ua             kubernetes.io/hostname=eu-node-79ua              Ready
+eu-node-7wz7             kubernetes.io/hostname=eu-node-7wz7              Ready
+eu-node-loh2             kubernetes.io/hostname=eu-node-loh2              Ready
 ```

 For a more advanced example of sharing clusters, see the [service-loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer/README.md)
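
The commit message describes this change as "a simple search-n-replace", but the patch itself does not record the command that was used. As a rough illustration only, a rename of this kind could be reproduced with something like the sketch below; the directory list and the GNU `sed -i` usage are assumptions, not part of the original PR.

```sh
# Illustrative sketch only -- not the command actually used in this PR.
# Bulk-replace "minion" with "node" across the example directories listed
# in the diffstat above, then review the resulting diff before committing.
grep -rl minion guestbook-go guestbook javaee meteor openshift-origin \
    phabricator runtime-constraints sharing-clusters \
  | xargs sed -i 's/minion/node/g'
```

As the commit message notes, `contrib/mesos/pkg/minion` is deliberately excluded from this pass and deferred to a follow-on PR.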