Mesh Expansion (was cluster extension) (#493)

* Starting point/nav

* Costin's version

From https://github.com/istio/istio.github.io/pull/492

* Restore some of the initial content and the prerequisite/nav

title update

* First pass updates/corrections

Added a bit of content too

* Super-draft version of the task

Mostly placeholders - need to copy the exact commands from the test scripts and double-check.

* Update to use the new filenames

* setting traffic management task to draft mode.

* Small updates

* More examples and info

* Rename the file

* Update the content with the new name

* Move into kubernetes and fix links and renames

* Fixes for copypastable blocks + more

* Minor tweaks

* Some more minor updates

* Minor cosmetic fixes

* review fixes

Thanks a lot Rachel

* Last review comment

* Review comment
Laurent Demailly 2017-09-26 18:32:18 -07:00 committed by GitHub
parent edfecd75c8
commit 1521a09248
3 changed files with 287 additions and 1 deletion

.gitignore (vendored, 2 lines changed)

@@ -2,7 +2,7 @@ _site
_static_site
.bundle
config_override.yml
.jekyll-metadata
# Eclipse artifacts
.project
.pydevproject


@@ -0,0 +1,207 @@
---
title: Mesh Expansion
overview: Instructions to add external machines and expand the Istio mesh.
order: 60
layout: docs
type: markdown
---
Instructions to configure Istio on a Kubernetes cluster so that it can be expanded with
services running on cloud VMs, on-premises VMs, or other external machines.
## Prerequisites
* Set up Istio on Kubernetes by following the instructions in the [Installation guide](quick-start.html).
* The machine must have IP connectivity to the endpoints in the mesh. This
typically requires a VPC or a VPN, as well as a container network that
provides direct routing (without NAT or firewall blocking) to the endpoints. The machine
does not need access to the cluster IP addresses assigned by Kubernetes.
* The Istio control plane services (Pilot, Mixer, CA) and the Kubernetes DNS server must be accessible
from the VMs. This is typically done using an [Internal Load
Balancer](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer).
You can also use NodePort, run Istio components on VMs, or use custom network configurations;
these advanced configurations will be covered in separate documents.
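A quick way to confirm the last prerequisite is to check that the internal load balancers have been assigned addresses. This is only a sketch; the `-ilb` service names, such as `istio-pilot-ilb` used in the examples later in this guide, depend on how the load balancers were created.
```bash
# List services in istio-system; each *-ilb service (e.g. istio-pilot-ilb) should
# show an address in the EXTERNAL-IP column once its internal load balancer is ready.
kubectl get svc -n istio-system
```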
## Installation steps
Setup consists of preparing the mesh for expansion and then installing and configuring each VM.
An example script to help with the Kubernetes setup is available in
[install/tools/setupMeshEx.sh](https://raw.githubusercontent.com/istio/istio/master/install/tools/setupMeshEx.sh),
and an example script to help configure a machine is available in [install/tools/setupIstioVM.sh](https://raw.githubusercontent.com/istio/istio/master/install/tools/setupIstioVM.sh).
You should customize these scripts based on your provisioning tools and DNS requirements.
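If you are not working from a full Istio release or source checkout, one way to fetch the helper scripts is sketched below; the local `install/tools/` destination simply mirrors the repository layout and is an assumption.
```bash
# Download the two helper scripts referenced in this guide and make them executable.
mkdir -p install/tools
curl -L https://raw.githubusercontent.com/istio/istio/master/install/tools/setupMeshEx.sh \
  -o install/tools/setupMeshEx.sh
curl -L https://raw.githubusercontent.com/istio/istio/master/install/tools/setupIstioVM.sh \
  -o install/tools/setupIstioVM.sh
chmod +x install/tools/setupMeshEx.sh install/tools/setupIstioVM.sh
```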
### Preparing the Kubernetes cluster for expansion
* Set up internal load balancers for Kube DNS, Pilot, Mixer, and CA. This step is specific to
each cluster; you may need to add annotations.
```bash
install/tools/setupMeshEx.sh initCluster
```
or
```bash
kubectl apply -f install/kubernetes/meshex.yaml
```
* Generate the Istio 'cluster.env' configuration to be deployed in the VMs. This file contains
the cluster IP address ranges to intercept.
```bash
install/tools/setupMeshEx.sh generateConfigs MY_CLUSTER_NAME
```
Example generated files:
```bash
cat /usr/local/istio/proxy/cluster.env
```
```
ISTIO_SERVICE_CIDR=10.23.240.0/20
```
* Generate the DNS configuration file to be used in the VMs. This allows apps on the VM to resolve
cluster service names; those requests are intercepted by the sidecar and forwarded.
```bash
install/tools/setupMeshEx.sh generateConfigs MY_CLUSTER_NAME
```
Example generated files:
```bash
cat /etc/dnsmasq.d/kubedns
```
```
server=/svc.cluster.local/10.128.0.6
address=/istio-mixer/10.128.0.7
address=/mixer-server/10.128.0.7
address=/istio-pilot/10.128.0.5
address=/istio-ca/10.128.0.8
```
### Setting up the machines
* Copy the configuration files and Istio Debian files to each machine joining the cluster.
Save the files as `/etc/dnsmasq.d/kubedns` and `/var/lib/istio/envoy/cluster.env`.
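As an illustration only, copying the generated files to a VM over SSH might look like the following; the VM name `MY_VM` and the local file names are placeholders for whatever your provisioning tooling uses.
```bash
# Copy the generated configs to the VM and move them to the paths listed above.
scp cluster.env kubedns MY_VM:
ssh MY_VM 'sudo cp kubedns /etc/dnsmasq.d/kubedns && \
  sudo cp cluster.env /var/lib/istio/envoy/cluster.env'
```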
* Configure and verify DNS settings. This may require installing `dnsmasq` and adding it to
`/etc/resolv.conf` directly or through DHCP scripts. To verify, check that the VM can resolve
names and connect to Pilot, for example:
On the VM/external host:
```bash
dig istio-pilot.istio-system
```
```
# This should be the same address shown as "EXTERNAL-IP" in 'kubectl get svc -n istio-system istio-pilot-ilb'
...
istio-pilot.istio-system. 0 IN A 10.128.0.5
...
```
```bash
# Check that we can resolve cluster IPs. The actual IN A will depend on cluster configuration.
dig istio-pilot.istio-system.svc.cluster.local.
```
```
...
istio-pilot.istio-system.svc.cluster.local. 30 IN A 10.23.251.121
```
```bash
dig istio-ingress.istio-system.svc.cluster.local.
```
```
...
istio-ingress.istio-system.svc.cluster.local. 30 IN A 10.23.245.11
```
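If the queries above fail because `dnsmasq` is not installed or is not used by the resolver, a minimal sketch for a Debian/Ubuntu VM follows; this is an assumption, so adapt it to your image and DHCP setup.
```bash
# On the VM: install dnsmasq so it picks up the generated /etc/dnsmasq.d/kubedns file.
sudo apt-get update && sudo apt-get install -y dnsmasq
sudo systemctl restart dnsmasq
# Then make 127.0.0.1 the first nameserver, either by editing /etc/resolv.conf
# directly or through the image's DHCP/resolvconf hooks.
```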
* Verify connectivity by checking whether the VM can connect to Pilot and to an endpoint.
```bash
curl -v 'http://istio-pilot.istio-system:8080/v1/registration/istio-pilot.istio-system.svc.cluster.local|http-discovery'
```
```
...
"ip_address": "10.20.1.18",
...
```
```bash
# On the VM: use the address above - it will connect directly to the pod running istio-pilot.
curl -v 'http://10.20.1.18:8080/v1/registration/istio-pilot.istio-system.svc.cluster.local|http-discovery'
```
* Extract the initial Istio authentication secrets and copy them to the machine. The default
installation of Istio includes the Istio CA and will generate Istio secrets even if the automatic 'mTLS'
setting is disabled. It is recommended that you perform this step to make it easy to
enable mTLS in the future and to upgrade to future versions that will have mTLS enabled by default.
```bash
# ACCOUNT defaults to 'istio.default', or SERVICE_ACCOUNT environment variable
# NAMESPACE defaults to current namespace, or SERVICE_NAMESPACE environment variable
install/tools/setupMeshEx.sh machineCerts ACCOUNT NAMESPACE
```
The generated files (`key.pem`, `root-cert.pem`, `cert-chain.pem`) must be copied to `/etc/certs` on each machine, readable by the istio-proxy user.
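A sketch of copying the secrets and making them readable by the `istio-proxy` user follows; the VM name `MY_VM` is a placeholder.
```bash
# Copy the generated secrets to the VM and restrict access to the istio-proxy user.
scp key.pem root-cert.pem cert-chain.pem MY_VM:
ssh MY_VM 'sudo mkdir -p /etc/certs && \
  sudo cp key.pem root-cert.pem cert-chain.pem /etc/certs/ && \
  sudo chown -R istio-proxy /etc/certs && sudo chmod 600 /etc/certs/key.pem'
```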
* Install the Istio Debian files and start the 'istio' and 'istio-auth-node-agent' services.
```bash
ISTIO_VERSION=0.2.4 # Update with the current istio version
DEBURL=http://gcsweb.istio.io/gcs/istio-release/releases/${ISTIO_VERSION}/deb
curl -L ${DEBURL}/istio-agent-release.deb > istio-agent-release.deb
curl -L ${DEBURL}/istio-auth-node-agent-release.deb > istio-auth-node-agent-release.deb
curl -L ${DEBURL}/istio-proxy-release.deb > istio-proxy-release.deb
dpkg -i istio-proxy-release.deb
dpkg -i istio-agent-release.deb
dpkg -i istio-auth-node-agent-release.deb
# TODO: This will be replaced with an 'apt-get' command once the repositories are set up.
systemctl start istio
systemctl start istio-auth-node-agent
```
After setup, the machine should be able to access services running in the Kubernetes cluster
or on other mesh expansion machines.
```bash
# Assuming you installed bookinfo in the 'bookinfo' namespace
curl productpage.bookinfo.svc.cluster.local:9080
```
```
... html content ...
```
## Running services on a mesh expansion machine
* Configure the sidecar to intercept the port. This is configured in `/var/lib/istio/envoy/sidecar.env`,
using the ISTIO_INBOUND_PORTS environment variable.
Example (on the VM running the service):
```bash
echo "ISTIO_INBOUND_PORTS=27017,3306,8080" > /var/lib/istio/envoy/sidecar.env
systemctl restart istio
```
* Manually configure a selector-less service and endpoints. The 'selector-less' service is used for
services that are not backed by Kubernetes pods.
For example, on a machine with permissions to modify Kubernetes services:
```bash
# istioctl register servicename machine-ip portname:port
istioctl -n onprem register mysql 1.2.3.4 3306
istioctl -n onprem register svc1 1.2.3.4 http:7000
```
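For reference, `istioctl register` is roughly equivalent to creating a selector-less `Service` plus a matching `Endpoints` object by hand. A minimal sketch, reusing the hypothetical `svc1` example above:
```bash
cat <<'EOF' | kubectl apply -n onprem -f -
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  ports:
  - name: http
    port: 7000
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: svc1
subsets:
- addresses:
  - ip: 1.2.3.4
  ports:
  - name: http
    port: 7000
    protocol: TCP
EOF
```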
After the setup, Kubernetes pods and other mesh expansion machines should be able to access the
services running on the machine.
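To spot-check this from inside the cluster, you can curl the registered service from any pod that has a shell and curl; the pod name and the `svc1` service below are placeholders from the example above.
```bash
kubectl exec -it SOME_POD -- curl -v http://svc1.onprem.svc.cluster.local:7000/
```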


@@ -0,0 +1,79 @@
---
title: Accessing Services in the Expanded Mesh
overview: This task shows you how to use services provided by a VM
order: 60
#draft: true
layout: docs
type: markdown
---
{% include home.html %}
This task shows you how to configure services running on a VM that has joined the cluster.
This task was tested on GCP. _WIP on adding specific info for other providers_
## Before you begin
* Set up Istio by following the instructions in the
[Installation guide]({{home}}/docs/setup/).
* Deploy the [BookInfo]({{home}}/docs/samples/bookinfo.html) sample application.
* Create a VM named 'db' in the same project as the Istio cluster, and [join the mesh]({{home}}/docs/setup/kubernetes/mesh-expansion.html).
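On GCP, one way to create the 'db' VM is sketched below; the zone and image are assumptions, and the VM must be on a network that satisfies the mesh expansion prerequisites.
```bash
# Create a VM named 'db' in the same project/network as the cluster (zone and image are examples).
gcloud compute instances create db \
  --zone us-central1-a \
  --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud
```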
## Running mysql on the VM
We will first install mysql on the VM and configure it as a backend for the ratings service.
On the VM:
```bash
sudo apt-get update && sudo apt-get install ...
# TODO copy or link the istio/istio test script
```
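Until the exact script is linked here, a rough sketch of a manual setup is shown below; the package name, database name, and credentials are assumptions rather than the Bookinfo defaults.
```bash
# Install a MySQL-compatible server and create a database the ratings service can use.
sudo apt-get update && sudo apt-get install -y mariadb-server
sudo systemctl start mariadb
sudo mysql -e "CREATE DATABASE IF NOT EXISTS test;"
sudo mysql -e "CREATE USER IF NOT EXISTS 'bookinfo'@'%' IDENTIFIED BY 'CHANGE_ME';"
sudo mysql -e "GRANT ALL ON test.* TO 'bookinfo'@'%';"
# Depending on the distribution, you may also need to set bind-address in the server
# config so the database listens on the VM's network interface, then restart it.
```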
## Registering the mysql service with the mesh
### Machine admin
The first step is to configure the VM sidecar by adding the service port and restarting the sidecar.
On the DB machine:
```bash
echo "ISTIO_INBOUND_PORTS=..." | sudo tee /var/lib/istio/envoy/sidecar.env
sudo chown istio-proxy /var/lib/istio/envoy/sidecar.env
sudo systemctl restart istio
# Or edit the file in place:
sudo vi /var/lib/istio/envoy/sidecar.env
# and add the mysql port to the "ISTIO_INBOUND_PORTS" setting, then restart istio
```
### Cluster admin
If you previously ran the mysql version of bookinfo on Kubernetes, you need to remove the Kubernetes mysql service:
```bash
kubectl delete service mysql
```
Run istioctl to configure the service (on your admin machine):
```bash
istioctl register mysql IP PORT
```
Note that the 'db' machine does not need, and should not have, special Kubernetes privileges.
## Registering the mongodb service with the Mesh
In progress...
## Using the mysql service
The ratings service in bookinfo will use the DB on the machine. To verify that it works, you can
modify the ratings values in the database.
```bash
# ...
```
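For example, a rating could be changed directly in the database; the database, table, and column names below are assumptions about the schema you loaded, so adjust them accordingly.
```bash
# On the VM: bump a rating, then reload the Bookinfo product page to see the change.
mysql -u bookinfo -p -e "UPDATE test.ratings SET rating=5 WHERE reviewid=1;"
```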