Replaced (or defined first instance of) GKE/GCE with Google Container Engine/Google Compute Engine

Fixes #10354
RichieEscarez 2015-06-26 12:13:43 -07:00
parent caaf220164
commit 0c988f55fd
7 changed files with 17 additions and 17 deletions

View File

@@ -159,7 +159,7 @@ music-server name=music-db name=music-db 10.0.138.61 9200/TCP
NAME TYPE DATA
apiserver-secret Opaque 2
```
This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for GCE) we can make queries via the service which will be fielded by the matching Elasticsearch pods.
This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for Google Compute Engine), we can make queries via the service, which will be fielded by the matching Elasticsearch pods.
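On Google Compute Engine, opening the port can be done with a firewall rule along the lines of this sketch (the `elasticsearch-9200` rule name and the `kubernetes-minion` target tag are assumptions; adjust them to however your cluster's nodes are tagged):
```
$ gcloud compute firewall-rules create elasticsearch-9200 --allow=tcp:9200 --target-tags=kubernetes-minion
```
With the port reachable, the query below goes straight through the service: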
```
$ curl 104.197.12.157:9200
{

View File

@@ -18,7 +18,7 @@ This example shows how to build a simple, multi-tier web application using Kuber
- [Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)](#using-type-loadbalancer-for-the-frontend-service-cloud-provider-specific)
- [Create the Frontend Service](#create-the-frontend-service)
- [Accessing the guestbook site externally](#accessing-the-guestbook-site-externally)
- [GCE External Load Balancer Specifics](#gce-external-load-balancer-specifics)
- [Google Compute Engine External Load Balancer Specifics](#gce-external-load-balancer-specifics)
- [Step Seven: Cleanup](#step-seven-cleanup)
- [Troubleshooting](#troubleshooting)
@@ -33,7 +33,7 @@ The web front end interacts with the redis master via javascript redis API calls
### Step Zero: Prerequisites
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started. As noted above, if you have a GKE cluster set up, go [here](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started. As noted above, if you have a Google Container Engine cluster set up, go [here](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.
### Step One: Start up the redis master
@@ -136,7 +136,7 @@ $ kubectl logs <pod_name>
These logs will usually give you enough information to troubleshoot.
However, if you should want to ssh to the listed host machine, you can inspect various logs there directly as well. For example, with GCE, using `gcloud`, you can ssh like this:
However, if you should want to SSH to the listed host machine, you can inspect various logs there directly as well. For example, with Google Compute Engine, using `gcloud`, you can SSH like this:
```shell
me@workstation$ gcloud compute ssh kubernetes-minion-krxw
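# Hedged sketch (not part of the original guide): once on the node, the Docker CLI
# can show the relevant container logs directly; pick the container ID for your pod
# from the `docker ps` output.
me@kubernetes-minion-krxw:~$ sudo docker ps
me@kubernetes-minion-krxw:~$ sudo docker logs <container-id>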
@@ -442,7 +442,7 @@ spec:
#### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)
For supported cloud providers, such as GCE/GKE, you can specify to use an external load balancer
For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can specify to use an external load balancer
in the service `spec`, to expose the service onto an external load balancer IP.
To do this, uncomment the `type: LoadBalancer` line in the `frontend-service.yaml` file before you start the service.
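A minimal sketch of that flow, assuming the guestbook files sit under `examples/guestbook/` in your checkout (the path is an assumption):
```shell
# After uncommenting "type: LoadBalancer" in frontend-service.yaml, create the service
$ kubectl create -f examples/guestbook/frontend-service.yaml
# The external IP appears on the service once the cloud provider has provisioned the load balancer
$ kubectl get services frontend
```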
@@ -495,9 +495,9 @@ You should see a web page that looks something like this (without the messages).
If you are more advanced in the ops arena, you can also manually get the service IP by looking at the output of `kubectl get pods,services`, and modify your firewall using standard tools and services (firewalld, iptables, selinux) which you are already familiar with.
##### GCE External Load Balancer Specifics
##### Google Compute Engine External Load Balancer Specifics
In GCE, `kubectl` automatically creates forwarding rule for services with `LoadBalancer`.
In Google Compute Engine, `kubectl` automatically creates a forwarding rule for services with `LoadBalancer`.
You can list the forwarding rules like this. The forwarding rule also indicates the external IP.
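A sketch of that listing (the guide's exact invocation may differ):
```shell
$ gcloud compute forwarding-rules list
```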
@@ -507,13 +507,13 @@ NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend
```
In GCE, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
In Google Compute Engine, you may also need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
```shell
$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80
```
For GCE details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs].
For Google Compute Engine details about limiting traffic to specific sources, see the [Google Compute Engine firewall documentation][gce-firewall-docs].
[cloud-console]: https://console.developer.google.com
[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls

View File

@@ -10,7 +10,7 @@ then edit */etc/iscsi/initiatorname.iscsi* and */etc/iscsi/iscsid.conf* to match
I mostly followed these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi) to setup iSCSI target. and these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2) to setup iSCSI initiator.
**Setup B.** On Unbuntu 12.04 and Debian 7 nodes on GCE
**Setup B.** On Ubuntu 12.04 and Debian 7 nodes on Google Compute Engine (GCE)
GCE does not provide a preconfigured Fedora 21 image, so I set up the iSCSI target on a preconfigured Ubuntu 12.04 image, mostly following these [instructions](http://www.server-world.info/en/note?os=Ubuntu_12.04&p=iscsi). My Kubernetes cluster on GCE was running Debian 7 images, so I followed these [instructions](http://www.server-world.info/en/note?os=Debian_7.0&p=iscsi&f=2) to set up the iSCSI initiator.
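For reference, the initiator side on Debian/Ubuntu boils down to something like the following sketch (the `<target-ip>` placeholder stands in for your target's portal address; the linked guides remain the authoritative steps):
```shell
# Install the Open-iSCSI initiator tools
$ sudo apt-get install open-iscsi
# Discover the LUNs exported by the target, then log in to them
$ sudo iscsiadm -m discovery -t sendtargets -p <target-ip>
$ sudo iscsiadm -m node --login
```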

View File

@@ -34,8 +34,8 @@ Next, start up a Kubernetes cluster:
wget -q -O - https://get.k8s.io | bash
```
Please see the [GCE getting started
guide](http://docs.k8s.io/getting-started-guides/gce.md) for full
Please see the [Google Compute Engine getting started
guide](../../docs/getting-started-guides/gce.md) for full
details and other options for starting a cluster.
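As a hedged illustration of one such option (assuming a release checkout that includes the cluster scripts and a configured `gce` provider), you can also bring a cluster up directly with the cluster scripts:
```
export KUBERNETES_PROVIDER=gce   # provider selection used by the cluster scripts
cluster/kube-up.sh
```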
Build a container for your Meteor app
@@ -139,7 +139,7 @@ kubectl get services/meteor --template="{{range .status.loadBalancer.ingress}} {
```
You will have to open up port 80 if it's not open yet in your
environment. On GCE, you may run the below command.
environment. On Google Compute Engine, you may run the command below.
```
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
```

View File

@@ -8,7 +8,7 @@ We'll create two Kubernetes [pods](http://docs.k8s.io/pods.md) to run mysql and
This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and how to use an external load balancer to expose the wordpress service externally and make it transparent to the user if the wordpress pod moves to a different cluster node.
## Get started on Google Compute Engine
## Get started on Google Compute Engine (GCE)
Because we're using the `GCEPersistentDisk` type of volume for persistent storage, this example is only applicable to [Google Compute Engine](https://cloud.google.com/compute/). Take a look at the [volumes documentation](/docs/volumes.md) for other options.
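Because the walkthrough relies on `GCEPersistentDisk` volumes, the underlying disks have to exist before the pods can mount them. A minimal sketch (the disk names, size, and zone here are assumptions; the guide spells out the exact names it expects):
```shell
$ gcloud compute disks create --size=20GB --zone=us-central1-b mysql-disk
$ gcloud compute disks create --size=20GB --zone=us-central1-b wordpress-disk
```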

View File

@@ -7,8 +7,8 @@ This guide assumes knowledge of Kubernetes fundamentals and that you have a clus
## Provisioning
A PersistentVolume in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their GCE disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. ```HostPath``` was included
for ease of development and testing. You'll create a local ```HostPath``` for this example.
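Creating the backing storage for a `HostPath` volume is nothing more than a directory on the node; a hedged sketch (the path and file contents are assumptions and should match whatever your PV definition points at):
```shell
# On the node that will back the HostPath volume
$ mkdir -p /tmp/data01
$ echo 'Hello from a HostPath volume' > /tmp/data01/index.html
```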

View File

@@ -105,7 +105,7 @@ type: LoadBalancer
The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.
Note that you may need to create a firewall rule to allow the traffic, assuming you are using GCE:
Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
```
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
```
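Once the rule is in place, a quick check from outside the cluster is to hit the external IP on that port (the IP below is the one reported for this particular cluster; yours will differ):
```
$ curl http://104.197.19.120:8080
```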