Markdown cleanup on the "Creating a K8s Cluster" doc (#195)

1. Change `-` unordered lists to use `*`
2. Change ordered lists to use lazy numbering (write `1.` inline instead of 2-9)
3. Fix line lengths
4. Fix indentation on shell code sample
This commit is contained in:
Drew Inglis 2018-02-20 14:51:49 -08:00 committed by GitHub
parent 078611aca3
commit 1757a5485f
1 changed file with 87 additions and 83 deletions


@@ -2,28 +2,28 @@
Two options:

* Setup a [GKE cluster](#gke)
* Run [minikube](#minikube) locally

## GKE

To use a k8s cluster running in GKE:

1. Install `gcloud` using [the instructions for your
   platform](https://cloud.google.com/sdk/downloads).
1. Create a GCP project (or use an existing project if you've already created
   one) at http://console.cloud.google.com/home/dashboard. Set the ID of the
   project in an environment variable (e.g. `PROJECT_ID`) along with the email
   of your GCP user (`GCP_USER`).
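   For example, a minimal sketch (the user email below is a placeholder;
   substitute your own values):

   ```shell
   # Placeholder values -- replace with your own project ID and GCP user email.
   export PROJECT_ID=elafros-demo-project
   export GCP_USER=you@example.com
   ```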
1. Enable the k8s API:
   ```shell
   gcloud --project=$PROJECT_ID services enable container.googleapis.com
   ```
1. Create a k8s cluster (version 1.9 or greater):
   ```shell
   gcloud --project=$PROJECT_ID container clusters create \
@@ -33,132 +33,136 @@ To use a k8s cluster running in GKE:
     --enable-autoscaling --min-nodes=1 --max-nodes=3 \
     elafros-demo
   ```
   * Version 1.9+ is required
   * Change this to whichever zone you choose
   * cloud-platform scope is required to access GCB
   * Autoscale from 1 to 3 nodes. Adjust this for your use case
   * Change this to your preferred cluster name

   You can see the list of supported cluster versions in a particular zone by
   running:
   ```shell
   # Get the list of valid versions in us-east1-d
   gcloud container get-server-config --zone us-east1-d
   ```
1. If you haven't installed `kubectl` yet, you can install it now with
   `gcloud`:
   ```shell
   gcloud components install kubectl
   ```
1. Give your gcloud user cluster-admin privileges:
   ```shell
   kubectl create clusterrolebinding gcloud-admin-binding \
     --clusterrole=cluster-admin \
     --user=$GCP_USER
   ```
## Minikube

1. [Install and configure
   minikube](https://github.com/kubernetes/minikube#minikube) with a [VM
   driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm` on
   Linux or `xhyve` on macOS.
1. [Create a cluster](https://github.com/kubernetes/minikube#quickstart) with
   version 1.9 or greater and your chosen VM driver:

   _Until minikube [enables it by
   default](https://github.com/kubernetes/minikube/pull/2547), the
   MutatingAdmissionWebhook plugin must be manually enabled._

   ```shell
   minikube start \
     --kubernetes-version=v1.9.0 \
     --vm-driver=kvm \
     --extra-config=apiserver.Admission.PluginNames=DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,SecurityContextDeny,MutatingAdmissionWebhook
   ```
### Minikube with GCR

You can use Google Container Registry as the registry for a Minikube cluster.

1. [Set up a GCR repo](setting-up-a-docker-registry.md). Export the environment
   variable `PROJECT_ID` as the name of your project. Also export `GCR_DOMAIN`
   as the domain name of your GCR repo. This will be either `gcr.io` or a
   region-specific variant like `us.gcr.io`.
   ```shell
   export PROJECT_ID=elafros-demo-project
   export GCR_DOMAIN=gcr.io
   ```

   To have Bazel builds push to GCR, set `DOCKER_REPO_OVERRIDE` to the GCR
   repo's URL:
   ```shell
   export DOCKER_REPO_OVERRIDE="${GCR_DOMAIN}/${PROJECT_ID}"
   ```
1. Create a GCP service account:
   ```shell
   gcloud iam service-accounts create minikube-gcr \
     --display-name "Minikube GCR Pull" \
     --project $PROJECT_ID
   ```
1. Give your service account the `storage.objectViewer` role:
   ```shell
   gcloud projects add-iam-policy-binding $PROJECT_ID \
     --member "serviceAccount:minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
     --role roles/storage.objectViewer
   ```
1. Create a key credential file for the service account:
   ```shell
   gcloud iam service-accounts keys create \
     --iam-account "minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
     minikube-gcr-key.json
   ```
Now you can use the `minikube-gcr-key.json` file to create image pull secrets
and link them to Kubernetes service accounts. _A secret must be created and
linked to a service account in each namespace that will pull images from GCR._

For example, use these steps to allow Minikube to pull Elafros and Build images
from GCR as built by Bazel (`bazel run :everything.create`). _This is only
necessary if you are not using public Elafros and Build images._

1. Create a Kubernetes secret in the `ela-system` and `build-system` namespaces:
   ```shell
   for prefix in ela build; do
     kubectl create secret docker-registry "gcr" \
       --docker-server=$GCR_DOMAIN \
       --docker-username=_json_key \
       --docker-password="$(cat minikube-gcr-key.json)" \
       --docker-email=your.email@here.com \
       -n "${prefix}-system"
   done
   ```

   _The secret must be created in the same namespace as the pod or service
   account._
1. Add the secret as an imagePullSecret to the `ela-controller` and
   `build-controller` service accounts:
   ```shell
   for prefix in ela build; do
     kubectl patch serviceaccount "${prefix}-controller" \
       -p '{"imagePullSecrets": [{"name": "gcr"}]}' \
       -n "${prefix}-system"
   done
   ```

Use the same procedure to add imagePullSecrets to service accounts in any
namespace. Use the `default` service account for pods that do not specify a
service account.
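The patch payload is a small JSON document naming the pull secret, and the
secret name is the only moving part. A sketch of parameterizing it in shell
(`SECRET_NAME` is an illustrative variable; `gcr` matches the secret created
above):

```shell
# Build the imagePullSecrets patch payload used by the kubectl commands above.
SECRET_NAME=gcr  # the docker-registry secret created earlier
PATCH="{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"
echo "$PATCH"
```

Pass the payload to `kubectl patch serviceaccount <name> -p "$PATCH" -n
<namespace>` as in the steps above.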