Markdown cleanup on the "Creating a K8s Cluster" doc (#195)

1. Change `-` unordered lists to use `*`
2. Change ordered lists to use lazy numbering (write `1.` inline instead of
   2-9)
3. Fix line lengths
4. Fix indentation on shell code sample
Drew Inglis 2018-02-20 14:51:49 -08:00 committed by GitHub
parent 078611aca3
commit 1757a5485f
1 changed file with 87 additions and 83 deletions


Two options:
* Setup a [GKE cluster](#gke)
* Run [minikube](#minikube) locally
## GKE
To use a k8s cluster running in GKE:
1. Install `gcloud` using [the instructions for your
platform](https://cloud.google.com/sdk/downloads).
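Once installed, you can confirm the CLI is working and authenticate (standard
`gcloud` commands, shown here as a quick sanity check):
```shell
gcloud version
gcloud auth login
```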
1. Create a GCP project (or use an existing project if you've already created
one) at http://console.cloud.google.com/home/dashboard. Set the ID of the
project in an environment variable (e.g. `PROJECT_ID`) along with the email
of your GCP user (`GCP_USER`).
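For example (the values below are placeholders; substitute your own project
ID and email):
```shell
export PROJECT_ID=elafros-demo-project
export GCP_USER=your.email@here.com
```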
1. Enable the k8s API:
```shell
gcloud --project=$PROJECT_ID services enable container.googleapis.com
```
1. Create a k8s cluster (version 1.9 or greater):
```shell
gcloud --project=$PROJECT_ID container clusters create \
  --enable-autoscaling --min-nodes=1 --max-nodes=3 \
  elafros-demo
```
* Version 1.9+ is required
* Change this to whichever zone you choose
* cloud-platform scope is required to access GCB
* Autoscale from 1 to 3 nodes. Adjust this for your use case
* Change this to your preferred cluster name
You can see the list of supported cluster versions in a particular zone by
running:
```shell
# Get the list of valid versions in us-east1-d
gcloud container get-server-config --zone us-east1-d
```
1. If you haven't installed `kubectl` yet, you can install it now with
`gcloud`:
```shell
gcloud components install kubectl
```
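`gcloud container clusters create` normally adds credentials for the new
cluster to your `kubectl` config automatically; if you ever need to re-fetch
them, a sketch (assuming the `elafros-demo` name above and a `us-east1-d`
zone):
```shell
gcloud --project=$PROJECT_ID container clusters get-credentials \
  --zone=us-east1-d \
  elafros-demo
```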
1. Give your gcloud user cluster-admin privileges:
```shell
kubectl create clusterrolebinding gcloud-admin-binding \
  --clusterrole=cluster-admin \
  --user=$GCP_USER
```
## Minikube
1. [Install and configure
minikube](https://github.com/kubernetes/minikube#minikube) with a [VM
driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm` on
Linux or `xhyve` on macOS.
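For example, on Linux x86_64 you might install the latest release binary like
this (a sketch based on the minikube README; adjust the URL for your platform
and pick whichever VM driver you prefer):
```shell
# Download the latest minikube release binary and put it on the PATH.
curl -Lo minikube \
  https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
```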
1. [Create a cluster](https://github.com/kubernetes/minikube#quickstart) with
version 1.9 or greater and your chosen VM driver:
_Until minikube [enables it by
default](https://github.com/kubernetes/minikube/pull/2547), the
MutatingAdmissionWebhook plugin must be manually enabled._
```shell
minikube start \
  --kubernetes-version=v1.9.0 \
  --vm-driver=kvm \
  --extra-config=apiserver.Admission.PluginNames=DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,SecurityContextDeny,MutatingAdmissionWebhook
```
### Minikube with GCR
You can use Google Container Registry as the registry for a Minikube cluster.
1. [Set up a GCR repo](setting-up-a-docker-registry.md). Export the environment
variable `PROJECT_ID` as the name of your project. Also export `GCR_DOMAIN`
as the domain name of your GCR repo. This will be either `gcr.io` or a
region-specific variant like `us.gcr.io`.
```shell
export PROJECT_ID=elafros-demo-project
export GCR_DOMAIN=gcr.io
```
To have Bazel builds push to GCR, set `DOCKER_REPO_OVERRIDE` to the GCR
repo's URL.
```shell
export DOCKER_REPO_OVERRIDE="${GCR_DOMAIN}/${PROJECT_ID}"
```
1. Create a GCP service account:
```shell
gcloud iam service-accounts create minikube-gcr \
  --display-name "Minikube GCR Pull" \
  --project $PROJECT_ID
```
1. Give your service account the `storage.objectViewer` role:
```shell
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/storage.objectViewer
```
1. Create a key credential file for the service account:
```shell
gcloud iam service-accounts keys create \
  --iam-account "minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
  minikube-gcr-key.json
```
Now you can use the `minikube-gcr-key.json` file to create image pull secrets
and link them to Kubernetes service accounts. _A secret must be created and
linked to a service account in each namespace that will pull images from GCR._
For example, use these steps to allow Minikube to pull Elafros and Build images
from GCR as built by Bazel (`bazel run :everything.create`). _This is only
necessary if you are not using public Elafros and Build images._
1. Create a Kubernetes secret in the `ela-system` and `build-system`
   namespaces:
```shell
for prefix in ela build; do
  kubectl create secret docker-registry "gcr" \
    --docker-server=$GCR_DOMAIN \
    --docker-username=_json_key \
    --docker-password="$(cat minikube-gcr-key.json)" \
    --docker-email=your.email@here.com \
    -n "${prefix}-system"
done
```
_The secret must be created in the same namespace as the pod or service
account._
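To confirm the secrets exist (a quick check; `gcr` is the secret name created
above):
```shell
kubectl get secret gcr -n ela-system
kubectl get secret gcr -n build-system
```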
1. Add the secret as an imagePullSecret to the `ela-controller` and
`build-controller` service accounts:
```shell
for prefix in ela build; do
  kubectl patch serviceaccount "${prefix}-controller" \
    -p '{"imagePullSecrets": [{"name": "gcr"}]}' \
    -n "${prefix}-system"
done
```
Use the same procedure to add imagePullSecrets to service accounts in any
namespace. Use the `default` service account for pods that do not specify a
service account.
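For example, a minimal sketch of attaching the same pull secret to the
`default` service account in a hypothetical `my-app` namespace (the `gcr`
secret must already exist in that namespace):
```shell
# Assumes the "gcr" secret has already been created in the my-app namespace.
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "gcr"}]}' \
  -n my-app
```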