Updated mesh expansion guide (#3057)

* Updated mesh expansion guide

* Fixed broken link
LisaFC 2019-01-09 22:01:46 +00:00 committed by istio-bot
parent 323487b857
commit 652fd950a7

weight: 95
keywords: [kubernetes,vms]
---
This guide provides instructions for integrating VMs and bare metal hosts into an Istio mesh
deployed on Kubernetes.
## Prerequisites
* You have already set up Istio on Kubernetes. If you haven't done so, you can find out how in the [Installation guide](/docs/setup/kubernetes/quick-start/).
* Mesh expansion machines must have IP connectivity to the endpoints in the mesh. This
typically requires a VPC or a VPN, as well as a container network that
provides direct (without NAT or firewall deny) routing to the endpoints. The machine
is not required to have access to the cluster IP addresses assigned by Kubernetes.
* Mesh expansion VMs must have access to a DNS server that resolves names to cluster IP addresses. Options
include exposing the Kubernetes DNS server through an internal load balancer, using a Core DNS
server, or configuring the IPs in any other DNS server accessible from the VM. A minimal sketch of the
internal load balancer option follows this list.
* If you haven't already enabled mesh expansion [at install time with Helm](/docs/setup/kubernetes/helm-install/), you have
installed the [Helm client](https://docs.helm.sh/using_helm/). You'll need it to enable mesh expansion for the cluster.
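As a rough illustration of the DNS option above, the following manifest exposes the cluster's `kube-dns` service
through an internal load balancer. This is only a sketch: the `cloud.google.com/load-balancer-type` annotation is
GKE-specific (other providers use their own annotations), and the `kube-dns-ilb` name is just an example.
{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  # Example name only; choose any name you like.
  name: kube-dns-ilb
  namespace: kube-system
  annotations:
    # GKE-specific annotation; replace with your provider's equivalent.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
{{< /text >}}
The load balancer's address is what you would then configure as the DNS server on the mesh expansion VMs.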
## Installation steps
Setup consists of preparing the mesh for expansion and installing and configuring each VM.
### Preparing the Kubernetes cluster for expansion
The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and
generate the configuration files that let mesh expansion VMs connect to the mesh. To prepare the
cluster for mesh expansion, run the following commands on a machine with cluster admin privileges:
1. Ensure that mesh expansion is enabled for the cluster. If you did not specify `--set global.meshExpansion=true` at
install time with Helm, there are two options for enabling mesh expansion, depending on how you originally installed
Istio on the cluster:
* If you installed Istio with Helm and Tiller, run `helm upgrade` with the new option:
{{< text bash >}}
$ cd install/kubernetes/helm/istio
$ helm upgrade --set global.meshExpansion=true istio-system .
$ cd -
{{< /text >}}
* If you installed Istio without Helm and Tiller, use `helm template` to update your configuration with the option and
reapply with `kubectl`:
{{< text bash >}}
$ cd install/kubernetes/helm/istio
$ helm template --set global.meshExpansion=true --namespace istio-system . > istio.yaml
$ kubectl apply -f istio.yaml
$ cd -
{{< /text >}}
> When updating configuration with Helm, you can either set the option on the command line, as in our examples, or add
it to a `.yaml` values file and pass it to
the command with `--values`, which is the recommended approach when managing configurations with multiple options. You
can see some sample values files in your Istio installation's `install/kubernetes/helm/istio` directory and find out
more about customizing Helm charts in the [Helm documentation](https://docs.helm.sh/using_helm/#using-helm).
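For example, the single `--set` flag used above maps to a small values file. The file name below is arbitrary, and you
should confirm the exact key against your chart's `values.yaml`:
{{< text yaml >}}
# values-mesh-expansion.yaml (example file name)
global:
  meshExpansion: true
{{< /text >}}
{{< text bash >}}
$ helm template --values values-mesh-expansion.yaml --namespace istio-system install/kubernetes/helm/istio > istio.yaml
$ kubectl apply -f istio.yaml
{{< /text >}}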
1. Find the IP address of the Istio ingress gateway, as this is how the mesh expansion machines will access [Citadel](/docs/concepts/security/) and [Pilot](/docs/concepts/traffic-management/#pilot-and-envoy).
{{< text bash >}}
$ GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $GWIP
35.232.112.158
{{< /text >}}
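Some platforms publish a host name for the load balancer rather than an IP address, in which case the command above
returns an empty string. If that applies to your environment, read the host name field instead and resolve it to an
address yourself:
{{< text bash >}}
$ kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
{{< /text >}}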
1. Generate a `cluster.env` configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges
to intercept and redirect via Envoy. You specify the CIDR range when you install Kubernetes as `servicesIpv4Cidr`.
Replace `$K8S_CLUSTER`, `$MY_ZONE`, and `$MY_PROJECT` in the following example commands with your cluster name, zone, and project to obtain the CIDR
after installation:
{{< text bash >}}
$ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
$ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
{{< /text >}}
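The `gcloud` command above assumes a GKE cluster. On other installations, one way to find the service CIDR, assuming
the API server flags appear in the cluster dump (this varies by installer), is shown below; the `10.96.0.0/12` range is
just an example value:
{{< text bash >}}
$ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
"--service-cluster-ip-range=10.96.0.0/12",
$ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=10.96.0.0/12\n" > cluster.env
{{< /text >}}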
1. Check the contents of the generated `cluster.env` file. It should be similar to the following example:
{{< text bash >}}
$ cat cluster.env
ISTIO_CP_AUTH=MUTUAL_TLS
ISTIO_SERVICE_CIDR=10.55.240.0/20
{{< /text >}}
1. (Optional) If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes
to the `cluster.env` file with the following command. You can change the ports later if necessary.
{{< text bash >}}
$ echo "ISTIO_INBOUND_PORTS=3306,8080" >> cluster.env
{{< /text >}}
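With the ports added, the `cluster.env` file for this example would contain all three settings:
{{< text bash >}}
$ cat cluster.env
ISTIO_CP_AUTH=MUTUAL_TLS
ISTIO_SERVICE_CIDR=10.55.240.0/20
ISTIO_INBOUND_PORTS=3306,8080
{{< /text >}}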
1. Extract the initial keys for the service account to use on the VMs. `$SERVICE_NAMESPACE` below is the namespace that will host the services the VMs provide (for example, `vm`):
{{< text bash >}}
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
-o jsonpath='{.data.root-cert\.pem}' |base64 --decode > root-cert.pem
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
-o jsonpath='{.data.key\.pem}' |base64 --decode > key.pem
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
-o jsonpath='{.data.cert-chain\.pem}' |base64 --decode > cert-chain.pem
{{< /text >}}
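Optionally, you can sanity check the extracted credentials before copying them to the VMs. This is not required by the
guide, and it assumes the default RSA keys issued by Citadel:
{{< text bash >}}
$ openssl x509 -in cert-chain.pem -noout -subject -dates
$ openssl rsa -in key.pem -noout -check
{{< /text >}}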
### Setting up the machines
Next, run the following commands on each machine that you want to add to the mesh:
1. Install the Debian package with the Envoy sidecar:
{{< text bash >}}
$ curl -L https://storage.googleapis.com/istio-release/releases/1.0.0/deb/istio-sidecar.deb > istio-sidecar.deb
$ sudo dpkg -i istio-sidecar.deb
{{< /text >}}
1. Copy the `cluster.env` and `*.pem` files that you created in the previous section to the VM.
1. Add the IP address of the Istio gateway (which we found in the [previous section](#preparing-the-kubernetes-cluster-for-expansion))
to `/etc/hosts` or to
the DNS server. In our example we'll use `/etc/hosts` as it is the easiest way to get things working. The following is
an example of updating an `/etc/hosts` file with the Istio gateway address:
{{< text bash >}}
$ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" >> /etc/hosts
{{< /text >}}
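To confirm the names now resolve on the VM, you can query the local resolver, for example with `getent`; it should
print the gateway address you just added:
{{< text bash >}}
$ getent hosts istio-pilot.istio-system
{{< /text >}}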
1. Install `root-cert.pem`, `key.pem` and `cert-chain.pem` under `/etc/certs/`.
{{< text bash >}}
$ sudo mkdir /etc/certs
$ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
{{< /text >}}
1. Install `cluster.env` under `/var/lib/istio/envoy/`.
{{< text bash >}}
$ sudo cp cluster.env /var/lib/istio/envoy
{{< /text >}}
1. Transfer ownership of the files in `/etc/certs/` and `/var/lib/istio/envoy/` to the Istio proxy.
{{< text bash >}}
$ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
{{< /text >}}
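A quick listing confirms the ownership change; every file in both directories should now be owned by `istio-proxy`:
{{< text bash >}}
$ ls -l /etc/certs /var/lib/istio/envoy
{{< /text >}}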
1. Verify the node agent works:
{{< text bash >}}
$ sudo node_agent
....
CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
{{< /text >}}
1. Start Istio using `systemctl`.
{{< text bash >}}
$ sudo systemctl start istio-auth-node-agent
$ sudo systemctl start istio
{{< /text >}}
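If you also want both services to start automatically after a reboot, you can enable them as well. This assumes the
Debian package installs standard systemd units, which the `systemctl start` commands above imply:
{{< text bash >}}
$ sudo systemctl enable istio-auth-node-agent
$ sudo systemctl enable istio
{{< /text >}}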
After setup, the machine can access services running in the Kubernetes cluster
or on other mesh expansion machines.
The following example shows accessing a service running in the Kubernetes cluster from a mesh expansion VM using
`/etc/hosts`, in this case using a service from the [Bookinfo example](/docs/examples/bookinfo/).
1. First, on the cluster admin machine get the virtual IP address (`clusterIP`) for the service:
{{< text bash >}}
$ kubectl -n bookinfo get svc productpage -o jsonpath='{.spec.clusterIP}'
10.55.246.247
{{< /text >}}
1. Then on the mesh expansion machine, add the service name and address to its `/etc/hosts` file. You can then connect to
the cluster service from the VM, as in the example below:
{{< text bash >}}
$ echo "10.55.246.247 productpage.bookinfo.svc.cluster.local" | sudo tee -a /etc/hosts
$ curl productpage.bookinfo.svc.cluster.local:9080
... html content ...
{{< /text >}}
## Running services on a mesh expansion machine
You add VM services to the mesh by configuring a
[`ServiceEntry`](/docs/reference/config/istio.networking.v1alpha3/#ServiceEntry). A service entry lets you manually add
additional services to Istio's model of the mesh so that other services can find and direct traffic to them. Each
`ServiceEntry` configuration contains the IP addresses, ports, and labels (where appropriate) of all VMs exposing a
particular service, as in the following example. Run the command from a machine with `kubectl` access to the cluster:
{{< text bash yaml >}}
$ kubectl -n test apply -f - << EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm1
spec:
  hosts:
  - vm1.test.svc.cluster.local
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.17
    ports:
      http: 8080
    labels:
      app: vm1
      version: 1
EOF
{{< /text >}}
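If several VMs expose the same service, each one becomes an additional entry in the `endpoints` list. The following is
a purely hypothetical variant of the example above, with a second VM at `10.128.0.18` labeled as a second version; the
addresses and labels are illustrative only:
{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm1
spec:
  hosts:
  - vm1.test.svc.cluster.local
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.17
    ports:
      http: 8080
    labels:
      app: vm1
      version: 1
  - address: 10.128.0.18
    ports:
      http: 8080
    labels:
      app: vm1
      version: 2
{{< /text >}}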
## Troubleshooting
The following are some basic troubleshooting steps for common mesh expansion issues.
* When making requests from a VM to the cluster, ensure you don't run the requests as `root` or
`istio-proxy` user. By default, Istio excludes both users from interception.
* Verify the machine can reach the IPs of the workloads running in the cluster. For example:
{{< text bash >}}
$ echo "ISTIO_INBOUND_PORTS=3306,8080" > /var/lib/istio/envoy/sidecar.env
$ systemctl restart istio
$ kubectl get endpoints -n bookinfo productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
10.52.39.13
{{< /text >}}
{{< text bash >}}
$ curl 10.52.39.13:9080
html output
{{< /text >}}
* Check the status of the node agent and sidecar:
{{< text bash >}}
$ sudo systemctl status istio-auth-node-agent
$ sudo systemctl status istio
{{< /text >}}
* Check that the processes are running. The following is an example of the processes you should see on the VM if you run
`ps`, filtered for `istio`:
{{< text bash >}}
$ ps aux | grep istio
root 6941 0.0 0.2 75392 16820 ? Ssl 21:32 0:00 /usr/local/istio/bin/node_agent --logtostderr
root 6955 0.0 0.0 49344 3048 ? Ss 21:32 0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=default exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
istio-p+ 7016 0.0 0.1 215172 12096 ? Ssl 21:32 0:00 /usr/local/bin/pilot-agent proxy
istio-p+ 7094 4.0 0.3 69540 24800 ? Sl 21:32 0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.default~default.svc.cluster.local
{{< /text >}}
* Check the Envoy access and error logs:
{{< text bash >}}
$ tail /var/log/istio/istio.log
$ tail /var/log/istio/istio.err.log
{{< /text >}}
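* If traffic from the VM does not appear to be intercepted at all, check that the Istio redirection rules exist in the
`nat` table. The chain names below are the ones Istio's iptables setup normally creates; if they are missing, restart
the `istio` service. Your output will vary:
{{< text bash >}}
$ sudo iptables -t nat -S | grep ISTIO
-N ISTIO_INBOUND
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
...
{{< /text >}}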