Update minikube docs post rename (#1332)

The docs for setting up minikube were using the namespaces and
resource names from elafros instead of knative. The naming changed
slightly, e.g. a knative controller is now called `controller`
instead of `knative-serving-controller`, so one of the loops had
to be broken into 2 statements.

Added steps about redeploying pods after setting up GCR
secrets b/c there is a chicken-and-egg problem: the namespaces
must exist before you can set up the secrets, but the secrets must
exist before the images can be pulled.

The PR that enabled `MutatingAdmissionWebhook` by default
(https://github.com/kubernetes/minikube/pull/2547) was merged, but
the latest minikube (0.28.0) still did not enable this option
by default b/c providing any arguments overrides all of the defaults,
so we must still set it explicitly.

Made it clear in the Knative Serving setup docs that the cluster
admin binding is required in general, not just for istio.

Use a `NodePort` instead of a `LoadBalancer`
(see https://github.com/kubernetes/minikube/issues/384) - another
step along the road to #608.
Christie Wilson 2018-06-27 15:04:21 -07:00 committed by Google Prow Robot
parent c9b08fe9ca
commit ad4d987f60
1 changed file with 98 additions and 44 deletions


@@ -83,39 +83,94 @@ To use a k8s cluster running in GKE:
Linux or `hyperkit` on macOS.
1. [Create a cluster](https://github.com/kubernetes/minikube#quickstart) with
version 1.10 or greater and your chosen VM driver.
_Until minikube [enables it by
default](https://github.com/kubernetes/minikube/pull/2547), the
MutatingAdmissionWebhook plugin must be manually enabled._
The following commands will set up a cluster with `8192 MB` of memory and `4`
CPUs. If you want to [enable metric and log collection](./DEVELOPMENT.md#enable-log-and-metric-collection),
bump the memory to `12288 MB`.
_Providing any admission control plugins overrides the default set provided
by minikube so we must explicitly list all plugins we want enabled._
_Until minikube [makes this the
default](https://github.com/kubernetes/minikube/issues/1647), the
certificate controller must be told where to find the cluster CA certs on
the VM._
For Linux use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.4 \
--vm-driver=kvm2 \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
For macOS use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.4 \
--vm-driver=hyperkit \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
1. [Configure your shell environment](../DEVELOPMENT.md#environment-setup)
to use your minikube cluster:
```shell
export K8S_CLUSTER_OVERRIDE='minikube'
# When using Minikube, the K8s user is your local user.
export K8S_USER_OVERRIDE=$USER
```
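Before starting Knative Serving, it can help to confirm the cluster is up and that `kubectl` is pointed at it (a quick sanity check, not a step from the original docs):

```shell
# The VM, kubelet, and apiserver should all report Running.
minikube status
# kubectl should be using the minikube context.
kubectl config current-context
```

If `current-context` prints something other than `minikube`, switch with `kubectl config use-context minikube` before continuing.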
1. [Start Knative Serving](../DEVELOPMENT.md#starting-knative-serving):
You need to deploy Knative itself a bit differently if you would like to have
routable `Route` endpoints, and if you would like to set up secrets for
accessing an image registry from within your cluster, e.g.
[GCR](#minikube-with-gcr):
1. [Deploy istio](../DEVELOPMENT.md#deploy-istio):
By default istio uses a `LoadBalancer`, which is not available on Minikube
but is required for Knative to function properly (`Route` endpoints MUST
be reachable via a routable IP before they will be marked as ready),
so deploy istio with `LoadBalancer` replaced by `NodePort`:
```bash
sed 's/LoadBalancer/NodePort/' third_party/istio-0.8.0/istio.yaml | kubectl apply -f -
```
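To preview what the substitution does without touching the cluster, the same `sed` rewrite can be tried on a snippet (a sketch; the sample manifest below is made up for illustration, not taken from istio.yaml):

```shell
# Rewrite a sample Service spec from LoadBalancer to NodePort,
# exactly as the command above does for istio.yaml.
printf 'kind: Service\nspec:\n  type: LoadBalancer\n' \
  | sed 's/LoadBalancer/NodePort/'
```

This prints the same manifest with `type: NodePort` on the last line.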
(Then optionally [enable istio injection](../DEVELOPMENT.md#deploy-istio).)
1. [Deploy build](../DEVELOPMENT.md#deploy-build):
```shell
kubectl apply -f ./third_party/config/build/release.yaml
```
1. [Deploy Knative Serving](../DEVELOPMENT.md#deploy-knative-serving):
1. Create the namespaces and service accounts you'll be modifying:
```bash
ko apply -f config/100-namespace.yaml
ko apply -f config/200-serviceaccount.yaml
```
1. Create the secrets you need and patch the service accounts to use them, e.g.
[to setup access to GCR](#minikube-with-gcr).
1. Deploy the rest of Knative:
```bash
ko apply -f config/
```
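After applying, you can watch the Knative Serving pods come up (a hedged check; exact pod names vary by release):

```shell
# Wait for the controller and webhook pods to reach Running.
kubectl get pods -n knative-serving --watch
```

Pods stuck in `ImagePullBackOff` usually mean the image pull secrets from the previous step were skipped or created in the wrong namespace.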
1. [Enable log and metric collection](../DEVELOPMENT.md#enable-log-and-metric-collection)
(Note you should be using a cluster with 12 GB of memory, see commands above.)
### Minikube with GCR
@@ -174,14 +229,19 @@ _This is only necessary if you are not using public Knative Serving and Build im
1. Create Kubernetes secrets in the `knative-serving` and `build-system` namespaces:
```shell
export DOCKER_EMAIL=your.email@here.com
kubectl create secret docker-registry "knative-serving-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "knative-serving"
kubectl create secret docker-registry "build-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "build-system"
```
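To confirm both secrets landed in the right namespaces (a quick check, not part of the original steps):

```shell
kubectl get secret knative-serving-gcr -n knative-serving
kubectl get secret build-gcr -n build-system
```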
_The secret must be created in the same namespace as the pod or service
@@ -191,19 +251,13 @@ _This is only necessary if you are not using public Knative Serving and Build im
`build-controller` service accounts:
```shell
kubectl patch serviceaccount "build-controller" \
-p '{"imagePullSecrets": [{"name": "build-gcr"}]}' \
-n "build-system"
kubectl patch serviceaccount "controller" \
-p '{"imagePullSecrets": [{"name": "knative-serving-gcr"}]}' \
-n "knative-serving"
```
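You can verify the patches took effect by reading the `imagePullSecrets` back from each service account (a sketch using `kubectl`'s jsonpath output):

```shell
# Should print knative-serving-gcr
kubectl get serviceaccount controller -n knative-serving \
  -o jsonpath='{.imagePullSecrets[0].name}'
# Should print build-gcr
kubectl get serviceaccount build-controller -n build-system \
  -o jsonpath='{.imagePullSecrets[0].name}'
```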
Use the same procedure to add imagePullSecrets to service accounts in any
namespace. Use the `default` service account for pods that do not specify a
service account.
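As the commit message notes, pods created before the secrets existed cannot pull their images. One way to force a re-pull (an assumption about the workflow, not a step quoted from the doc) is to delete the affected pods so their controllers recreate them:

```shell
# The Deployments/ReplicaSets recreate these pods, and the new pods
# pick up the imagePullSecrets from the patched service accounts.
kubectl delete pods --all -n knative-serving
kubectl delete pods --all -n build-system
```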