docs: Kind cluster provisioning and TLS

Luke K 2020-08-06 22:14:16 +00:00
parent 07c633a707
commit 00669dca25
9 changed files with 252 additions and 275 deletions

docs/eks/users.yaml Normal file

@ -0,0 +1,8 @@
# Cluster users
# Used to patch configmap/aws-auth in the kube-system namespace.
data:
  mapUsers: |
    - userarn: $(USER_ARN)
      username: $(USERNAME)
      groups:
        - system:masters


@ -4,6 +4,8 @@ Functions can be deployed to any kubernetes cluster which has been configured to
This guide was developed using the dependency versions listed in their respective sections. Instructions may deviate slightly as these projects are generally under active development. It is recommended to use the links to the official documentation provided in each section.
## Provision a Cluster
Any Kubernetes-compatible API should work. Included herein are instructions for two popular variants: Kind and EKS.
[Provision using Kind](provision_kind.md)
@ -12,228 +14,19 @@ Any Kubernetes-compatible API should be capable. Included herein are instructio
[Provision using Amazon EKS](provision_eks.md)
## Provisioning a Kind (Kubernetes in Docker) Cluster
[kind](https://github.com/kubernetes-sigs/kind) is a lightweight tool for running local Kubernetes clusters using containers. It can be used as the underlying infrastructure for Functions, though it is intended for testing and development rather than production deployment.
This guide walks through the process of configuring a kind cluster to run Functions with the following versions:
* kind v0.8.1 - [Install Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
* Kubectl v1.17.3 - [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl)
Start a new cluster:
```
kind create cluster
```
List available clusters:
```
kind get clusters
```
Listing running containers will now show a kind container:
```
docker ps
```
### Connecting Remotely
Kind is intended to be a locally-running service, and exposing it externally is not recommended. However, a fully configured kubernetes cluster can often quickly outstrip the resources available on even a well-specced development workstation. Therefore, creating a Kind cluster network appliance of sorts can be helpful. One possible way to connect to your kind cluster remotely is to create a [wireguard](https://www.wireguard.com/) interface upon which to expose the API. Following is an example assuming linux hosts with systemd:
First [Install Wireguard](https://www.wireguard.com/install/)
Create keypairs for the host and the client.
```
wg genkey | tee host.key | wg pubkey > host.pub
wg genkey | tee client.key | wg pubkey > client.pub
chmod 600 host.key client.key
```
This example assumes IPv4 addresses, with the wireguard-protected network 10.10.10.0/24, the host at 10.10.10.1, and the client at 10.10.10.2.
On the host, create a Wireguard Network Device:
`/etc/systemd/network/99-wg0.netdev`
```
[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0
[WireGuard]
ListenPort=51111
PrivateKey=HOST_KEY
[WireGuardPeer]
PublicKey=HOST_PUB
AllowedIPs=10.10.10.0/24
PersistentKeepalive=25
```
(Replace HOST_KEY and HOST_PUB with the keypair created earlier.)
`/etc/systemd/network/99-wg0.network`
```
[Match]
Name=wg0
[Network]
Address=10.10.10.1/24
```
On the client, create the Wireguard Network Device and Network:
`/etc/systemd/network/99-wg0.netdev`
```
[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0
[WireGuard]
ListenPort=51871
PrivateKey=CLIENT_KEY
[WireGuardPeer]
PublicKey=CLIENT_PUB
AllowedIPs=10.10.10.0/24
Endpoint=HOST_ADDRESS:51111
PersistentKeepalive=25
```
(Replace HOST_KEY and HOST_PUB with the keypair created earlier.)
Replace HOST_ADDRESS with an IP address at which the host can be reached prior to the wireguard interface becoming available.
`/etc/systemd/network/99-wg0.network`
```
[Match]
Name=wg0
[Network]
Address=10.10.10.2/24
```
_On both systems_, restrict the permissions of the network device file as it contains sensitive keys, then restart systemd-networkd.
```
chown root:systemd-network /etc/systemd/network/99-*.netdev
chmod 0640 /etc/systemd/network/99-*.netdev
systemctl restart systemd-networkd
```
The hosts should now be able to ping each other using their wireguard-protected 10.10.10.0/24 addresses. Additionally, statistics about the connection can be obtained from the `wg` command:
```
wg show
```
Create a Kind configuration file which instructs the API server to listen on the Wireguard interface and a known port:
`kind-config.yaml`
```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "10.10.10.1" # default 127.0.0.1
  apiServerPort: 6443 # default random, must be different for each cluster
```
Delete the current cluster if necessary:
```
kind delete cluster --name kind
```
Start a new cluster using the config:
```
kind create cluster --config kind-config.yaml
```
Export a kubeconfig and move it to the client machine:
```
kind export kubeconfig --kubeconfig kind-kubeconfig.yaml
```
From the client, confirm that pods can be listed:
```
kubectl get po --all-namespaces --kubeconfig kind-kubeconfig.yaml
```
## Provisioning an EKS (Elastic Kubernetes Service) Cluster
Amazon EKS is a fully managed Kubernetes service suitable for production deployments. The below instructions were compiled using the following dependency versions:
* eksctl v1.15
* kubernetes v1.15
[Official EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
### AWS CLI tools
Install Python 3 via the system package manager, and the AWS CLI tools via pip:
```
pip install awscli --upgrade --user
```
### AWS Account
Install the AWS IAM Authenticator
https://github.com/kubernetes-sigs/aws-iam-authenticator
Create AWS account(s) via the AWS console:
https://console.aws.amazon.com/
Users _of_ the cluster require no permissions at this point, but the user _creating_ the cluster does. Once configured, set the local environment variables:
```
AWS_REGION=us-east-2
AWS_SECRET_ACCESS_KEY=[redacted]
AWS_ACCESS_KEY_ID=[redacted]
```
Alternatively, use an AWS credentials file. For instance, to configure the CLI to use stored credentials:
To `~/.aws/config` append:
```
[profile alice]
region = us-west-2
output = json
```
To `~/.aws/credentials` append:
```
[alice]
aws_access_key_id = [redacted]
aws_secret_access_key = [redacted]
```
The profile to use can then be configured using the environment variable:
`AWS_PROFILE=alice`
(note that [direnv](https://direnv.net/) can be handy here.)
### SSH key
Generate a cluster SSH key, saving it into `./keys/ssh`:
```
ssh-keygen -t rsa -b 4096
```
### Cluster Resources
Install `eksctl`
https://github.com/weaveworks/eksctl
Provision the cluster using `eksctl`. For example, using the configuration file `eks/config-cluster.yaml` will create a single-node cluster named "prod" in the "us-west-2" region:
```
eksctl create cluster -f eks/config-cluster.yaml
```
### Verify Cluster Provisioned
You should be able to retrieve pods from the cluster:
```
kubectl get po --all-namespaces
```
### Administration
See the [eksctl](https://eksctl.io) documentation for how to administer a cluster, such as [cluster upgrades](https://eksctl.io/usage/cluster-upgrade/), using this helper CLI.
## Configuring the Cluster
Once access to a kubernetes-compatible cluster has been established, it will need to be configured to handle Function workloads. This includes Knative Serving and Eventing, the Kourier networking layer, and CertManager with the LetsEncrypt certificate provider.
Create a namespace for your Functions:
```
kubectl create namespace faas
```
Optionally set the default namespace for kubectl commands:
```
kubectl config set-context --current --namespace=faas
```
### Serving
Docs: https://knative.dev/docs/install/any-kubernetes-cluster/
@ -253,7 +46,11 @@ Update the networking layer to
kubectl apply -f knative/config-network.yaml
```
Note: for environments where Load Balancers are not supported (such as local Kind clusters), the Kourier service should be updated to be of type IP instead of LoadBalancer.
Note: for environments where Load Balancers are not supported (such as local Kind or Minikube clusters), the Kourier service should be updated to be of type NodePort instead of LoadBalancer. The following patch changes the kourier service to type NodePort, exposing it on each node at ports 32080 (HTTP) and 30443 (HTTPS).
```
kubectl patch -n kourier-system services/kourier -p "$(cat knative/config-kourier-nodeport.yaml)"
```
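To sanity-check the patched service, the type can be inspected and the HTTP node port exercised directly. This is a sketch; `NODE_IP` and the function hostname are placeholders for your own values:
```
# Confirm the kourier service is now of type NodePort
kubectl get service kourier -n kourier-system

# Exercise the ingress through the HTTP node port, supplying the Host header
# a deployed Function would be routed by (placeholder hostname and node IP)
curl -H "Host: myfn.faas.example.com" http://NODE_IP:32080/
```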
For bare metal clusters, installing the [MetalLB LoadBalancer](https://metallb.universe.tf/) is also an option.
### Domains
@ -270,74 +67,88 @@ Register domain(s) to be used, configuring a CNAME to the DNS or IP returned fro
```
kubectl --namespace kourier-system get service kourier
```
May register a wildcard matching subdomain, for example.
### Users
Install users:
```
kubectl patch -n kube-system configmap/aws-auth --patch "$(cat users.yaml)"
```
A wildcard subdomain match may also be registered.
### TLS
Assumed Cert Manager configured to use Letsencrypt production and CloudFlare for DNS.
In order to provision HTTPS routes, optionally set up a certificate manager for the cluster. In this example it is configured to use LetsEncrypt as the certificate provider and CloudFlare as the DNS provider.
Install Cert-manager
#### Cert-Manager
Docs: https://cert-manager.io/docs/installation/kubernetes/
```
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.3/cert-manager.yaml
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.0/cert-manager.yaml
```
Create a Cluster Issuer by updating `tls/letsencrypt-issuer.yaml` with email addresses for the LetsEncrypt registration and for the associated CloudFlare account:
```
kubectl apply -f tls/letsencrypt-issuer.yaml
```
Generate a CloudFlare API token with the following settings:
* Permission: Zone - Zone - Read
* Permission: Zone - DNS - Edit
* Zone Resources: Include - All Zones
Base64 encode the token:
```
echo -n 'CLOUDFLARE_TOKEN' | base64
```
Update the `tls/cloudflare-secret.yaml` with the base64-encoded token value and create the secret:
```
kubectl apply -f tls/cloudflare-secret.yaml
```
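As a quick sanity check, the stored token can be read back and decoded; it should match the raw CloudFlare token:
```
kubectl get secret cloudflare -n cert-manager -o jsonpath='{.data.token}' | base64 --decode
```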
Create a Cluster Issuer, update cluster-issuer.yaml with email address, and create the associated cloudflare secret.
#### Knative Serving Cert-Manager Integration
Install the latest networking cert-manager integration (net-certmanager) packaged with Knative Serving:
Docs: https://knative.dev/docs/serving/using-auto-tls/
```
kubectl apply -f cluster-issuer.yaml
kubectl apply --filename https://github.com/knative/net-certmanager/releases/download/v0.16.0/release.yaml
```
The secret should be a CloudFlare Token with Zone Read and DNS Write permissions, in the UI as:
```
Zone -> Zone -> Read
Zone -> DNS -> Edit
```
```
kubectl apply -f secrets/cloudflare.yaml
```
Install the latest networking certmanager:
```
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-cert-manager.yaml
```
Edit config-certmanager to reference the letsencrypt issuer:
Edit config-certmanager to reference the letsencrypt issuer. There should be an issuerRef pointing to a ClusterIssuer of name `letsencrypt-issuer`:
```
kubectl edit configmap config-certmanager --namespace knative-serving
```
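As an alternative to editing interactively, the same change can be applied as a patch. This is a sketch assuming the ClusterIssuer created above is named `letsencrypt-issuer`:
```
# Point Knative Serving's certificate integration at the ClusterIssuer
kubectl patch configmap config-certmanager -n knative-serving \
  --type merge \
  -p '{"data":{"issuerRef":"kind: ClusterIssuer\nname: letsencrypt-issuer"}}'
```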
### Eventing
Install Eventing with in-memory channels and a channel broker, and enable the default broker in the faas namespace.
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.13.0/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.13.0/eventing-core.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.13.0/in-memory-channel.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.13.0/channel-broker.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.16.0/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.16.0/eventing-core.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.16.0/in-memory-channel.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.16.0/mt-channel-broker.yaml
```
Enable Broker for faas namespace and install GitHub source:
GitHub events source:
```
kubectl create namespace faas
kubectl label namespace faas knative-eventing-injection=enabled
kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.13.0/github.yaml
kubectl apply --filename https://github.com/knative/eventing-contrib/releases/download/v0.16.0/github.yaml
```
Learn more about the GitHub source at https://knative.dev/docs/eventing/samples/github-source/index.html
Enable the default Broker for the faas namespace:
```
kubectl label namespace faas knative-eventing-injection=enabled
```
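If injection is working, a Broker named `default` should appear in the namespace shortly and can be checked with:
```
kubectl get broker -n faas
```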
### Other
### Monitoring
Get serving version
Optionally, installing the [metrics-server](https://github.com/kubernetes-sigs/metrics-server) API allows running `kubectl top nodes` and `kubectl top pods`. It can be installed with:
```
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```
Note that on local clusters such as Kind, it is necessary to add the following arguments to the metrics-server deployment in kube-system:
```
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
```
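One way to add those arguments without editing the deployment by hand is a JSON patch. This is a sketch; it assumes metrics-server is the first container in the deployment and that the flags are not already present:
```
kubectl patch deployment metrics-server -n kube-system --type json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"}
]'
```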
### Troubleshooting
Get the installed Knative Serving and Eventing versions:
```
kubectl get namespace knative-serving -o 'go-template={{index .metadata.labels "serving.knative.dev/release"}}'
```
Get eventing version
```
kubectl get namespace knative-eventing -o 'go-template={{index .metadata.labels "eventing.knative.dev/release"}}'
```


@ -7,11 +7,11 @@ data:
# TODO: update this list automatically as Service Functions
# are added with differing domains. For now manually add
# one entry per TLD+1. Example:
  example.com: |
  boson-project.com: |
    selector:
      faas.domain: "example.com"
  example.org: |
      faas.domain: "boson-project.com"
  boson-project.org: |
    selector:
      faas.domain: "example.org"
      faas.domain: "boson-project.org"
  # Default is local only.
  svc.cluster.local: ""


@ -0,0 +1,15 @@
# Patch for changing kourier to a NodePort for installations where a
# LoadBalancer is not available (for example local Minikube or Kind clusters)
spec:
  ports:
    - name: http2
      nodePort: 32080
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 30443
      port: 443
      protocol: TCP
      targetPort: 8443
  type: NodePort

docs/provision_eks.md Normal file

@ -0,0 +1,91 @@
# Provisioning an Amazon EKS (Elastic Kubernetes Service) Cluster
Amazon EKS is a fully managed Kubernetes service suitable for production deployments. The below instructions were compiled using the following dependency versions:
* eksctl v1.15
* kubernetes v1.15
[Official EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
## AWS CLI tools
Install Python 3 via the system package manager, and the AWS CLI tools via pip:
```
pip install awscli --upgrade --user
```
## AWS Account
Install the AWS IAM Authenticator
https://github.com/kubernetes-sigs/aws-iam-authenticator
Create AWS account(s) via the AWS console:
https://console.aws.amazon.com/
Users _of_ the cluster require no permissions at this point, but the user _creating_ the cluster does. Once configured, set the local environment variables:
```
AWS_REGION=us-east-2
AWS_SECRET_ACCESS_KEY=[redacted]
AWS_ACCESS_KEY_ID=[redacted]
```
Alternatively, use an AWS credentials file. For instance, to configure the CLI to use stored credentials:
To `~/.aws/config` append:
```
[profile alice]
region = us-west-2
output = json
```
To `~/.aws/credentials` append:
```
[alice]
aws_access_key_id = [redacted]
aws_secret_access_key = [redacted]
```
The profile to use can then be configured using the environment variable:
`AWS_PROFILE=alice`
(note that [direnv](https://direnv.net/) can be handy here.)
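For example, a minimal `.envrc` for direnv in the project directory could contain only the profile selection:
```
# .envrc -- loaded automatically by direnv after approval with `direnv allow`
export AWS_PROFILE=alice
```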
## SSH key
Generate a cluster SSH key, saving it into `./keys/ssh`:
```
ssh-keygen -t rsa -b 4096
```
## Cluster Resources
Install `eksctl`
https://github.com/weaveworks/eksctl
Provision the cluster using `eksctl`. For example, using the configuration file `eks/config-cluster.yaml` will create a single-node cluster named "prod" in the "us-west-2" region:
```
eksctl create cluster -f eks/config-cluster.yaml
```
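The cluster configuration file itself is not reproduced here. As a rough sketch (all values illustrative; see the eksctl documentation for the full schema), it could be generated along these lines:
```
# Write an illustrative single-node ClusterConfig to a scratch path
# (hypothetical filename; the repository's eks/config-cluster.yaml is authoritative)
cat > eks/config-cluster.example.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: prod
  region: us-west-2
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 1
    ssh:
      allow: true
      publicKeyPath: keys/ssh.pub
EOF
```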
## Users
Install users by modifying the template to include the ARN and username of the IAM users that should have access to the cluster:
```
kubectl patch -n kube-system configmap/aws-auth --patch "$(cat eks/users.yaml)"
```
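The template uses `$(USER_ARN)` and `$(USERNAME)` placeholders, so fill in real values before patching. One way to do that (the ARN and username shown are placeholders):
```
# Fill in the placeholders, then patch the aws-auth ConfigMap with the result
sed -e 's|$(USER_ARN)|arn:aws:iam::123456789012:user/alice|' \
    -e 's|$(USERNAME)|alice|' \
    eks/users.yaml > /tmp/users.yaml
kubectl patch -n kube-system configmap/aws-auth --patch "$(cat /tmp/users.yaml)"
```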
## Verify Cluster Provisioned
You should be able to retrieve pods from the cluster, which should include coredns, kube-proxy, etc.
```
kubectl get po --all-namespaces
```
## Administration
See the [eksctl](https://eksctl.io) documentation for how to administer a cluster, such as [cluster upgrades](https://eksctl.io/usage/cluster-upgrade/), using this helper CLI.


@ -20,15 +20,22 @@ List running containers will now show a kind process:
```
docker ps
```
Confirm core services are running:
```
kubectl get po --all-namespaces
```
You should see pods such as coredns and kube-proxy in the kube-system namespace.
## Configure Remotely
## Configure With Remote Access
This section is optional.
Kind is intended to be a locally-running service, and exposing it externally is not recommended. However, a fully configured kubernetes cluster can often quickly outstrip the resources available on even a well-specced development workstation. Therefore, creating a Kind cluster network appliance of sorts can be helpful. One possible way to connect to your kind cluster remotely is to create a [wireguard](https://www.wireguard.com/) interface upon which to expose the API. Following is an example assuming linux hosts with systemd:
### Create a Secure Tunnel
First [Install Wireguard](https://www.wireguard.com/install/)
[Install Wireguard](https://www.wireguard.com/install/)
Create keypairs for the host and the client.
```
@ -38,7 +45,9 @@ chmod 600 host.key client.key
```
This example assumes IPv4 addresses, with the wireguard-protected network 10.10.10.0/24, the host at 10.10.10.1, and the client at 10.10.10.2.
On the host, create a Wireguard Network Device:
Create a Wireguard Network Device on both the Host and the Client using the following configuration files (replace HOST_KEY, HOST_PUB, CLIENT_KEY, and CLIENT_PUB with the keypairs created in the previous step):
On the Kind cluster host:
`/etc/systemd/network/99-wg0.netdev`
```
[NetDev]
@ -51,11 +60,10 @@ ListenPort=51111
PrivateKey=HOST_KEY
[WireGuardPeer]
PublicKey=HOST_PUB
PublicKey=CLIENT_PUB
AllowedIPs=10.10.10.0/24
PersistentKeepalive=25
```
(Replace HOST_KEY and HOST_PUB with the keypair created earlier.)
`/etc/systemd/network/99-wg0.network`
```
@ -66,7 +74,8 @@ Name=wg0
Address=10.10.10.1/24
```
On the client, create the Wireguard Network Device and Network:
On the client:
`/etc/systemd/network/99-wg0.netdev`
```
[NetDev]
@ -79,13 +88,11 @@ ListenPort=51871
PrivateKey=CLIENT_KEY
[WireGuardPeer]
PublicKey=CLIENT_PUB
PublicKey=HOST_PUB
AllowedIPs=10.10.10.0/24
Endpoint=HOST_ADDRESS:51111
PersistentKeepalive=25
```
(Replace HOST_KEY and HOST_PUB with the keypair created earlier.)
Replace HOST_ADDRESS with an IP address at which the host can be reached prior to the wireguard interface becoming available.
`/etc/systemd/network/99-wg0.network`
@ -96,25 +103,30 @@ Name=wg0
[Network]
Address=10.10.10.2/24
```
_On both systems_, restrict the permissions of the network device file as it contains sensitive keys, then restart systemd-networkd.
_On both systems_, restrict the permissions of the network device file as it contains a sensitive private key, then restart systemd-networkd.
```
chown root:systemd-network /etc/systemd/network/99-*.netdev
chmod 0640 /etc/systemd/network/99-*.netdev
systemctl restart systemd-networkd
```
The hosts should now be able to ping each other using their wireguard-protectd 10.10.10.0/24 addresses. Additionally, statistics about the connection can be obtaned from the `wg` command:
The nodes should now be able to ping each other using their wireguard-protected 10.10.10.0/24 addresses. Additionally, statistics about the connection can be obtained from the `wg` command:
```
wg show
```
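For example, from the client the host's tunnel address can be checked directly using the addresses configured above:
```
# Verify the tunnel from the client by pinging the host's wireguard address
ping -c 3 10.10.10.1
```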
### Provision the Cluster
Create a Kind configuration file which instructs the API server to listen on the Wireguard interface and a known port:
`kind-config.yaml`
```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "10.10.10.1" # default 127.0.0.1
  apiServerPort: 6443 # default random, must be different for each cluster
  apiServerAddress: "10.10.10.1" # default is 127.0.0.1 (local only)
  apiServerPort: 6443 # default is random. Note this must be unique per cluster.
```
Delete the current cluster if necessary:
```
kind delete cluster --name kind
@ -131,5 +143,10 @@ From the client, confirm that pods can be listed:
```
kubectl get po --all-namespaces --kubeconfig kind-kubeconfig.yaml
```
### Verify Cluster Provisioned
You should be able to retrieve pods from the cluster, which should include coredns, kube-proxy, etc.
```
kubectl get po --all-namespaces
```
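From the remote client, the exported kubeconfig can be used for the same check; a default kind cluster shows a single control-plane node:
```
kubectl get nodes --kubeconfig kind-kubeconfig.yaml
```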


@ -0,0 +1 @@
# Provision a Minikube Cluster


@ -0,0 +1,12 @@
# CloudFlare API key
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare
  namespace: cert-manager
type: Opaque
data:
  # Create token in CloudFlare UI, giving it zone read and dns write
  # permissions. Create the value for the token using:
  #   echo -n 'raw-token-here' | base64
  token: "Q3VfaHNrbVRRN0x0RU9VVlpGNTlqLXc1eWZuR05neFJyZkFCaWJvYw=="
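An equivalent way to create this secret without hand-encoding the token is `kubectl create secret`, which base64-encodes the value itself (the raw token value is a placeholder):
```
kubectl create secret generic cloudflare \
  --namespace cert-manager \
  --from-literal=token='raw-token-here'
```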


@ -0,0 +1,22 @@
apiVersion: cert-manager.io/v1alpha3
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    email: ACME_REGISTERED_EMAIL
    server: https://acme-v02.api.letsencrypt.org/directory
    # For testing use:
    # server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            email: CLOUDFLARE_REGISTERED_EMAIL
            apiTokenSecretRef:
              name: cloudflare
              key: token
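Once applied, the issuer's readiness can be checked; cert-manager sets a Ready condition on the ClusterIssuer after the ACME account is registered:
```
kubectl get clusterissuer letsencrypt-issuer
kubectl describe clusterissuer letsencrypt-issuer
```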