mirror of https://github.com/istio/istio.io.git
Single network virtual machine example (#8203)
* Single network virtual machine instructions. This is a copy and paste from the multi-network instructions with anything "multi-network" removed.
* Address @frankbu's comments. Cannot run the linter at the moment.
* Fix linting error
* Unindent a section
This commit is contained in:
parent
afb78d0b1d
commit
c4d66e5413

@@ -1,7 +1,7 @@
---
title: Example Application using Virtual Machines in a Single Network Mesh
description: Learn how to add a service running on a virtual machine to your single-network
  Istio mesh.
weight: 20
keywords:
- kubernetes
@@ -15,251 +15,102 @@ owner: istio/wg-environments-maintainers
test: no
---

This example provides instructions to integrate a virtual machine or a bare metal host into a
single-network Istio mesh deployed on Kubernetes. This approach requires L3 connectivity
between the virtual machine and the Kubernetes cluster.

## Prerequisites

- One or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.

- Virtual machines must have L3 IP connectivity to the endpoints in the mesh. This typically
  requires a VPC or a VPN, as well as a container network that provides direct (without NAT or
  firewall deny) routing to the endpoints. The machine is not required to have access to the
  cluster IP addresses assigned by Kubernetes. A quick way to verify this connectivity is
  sketched after this list.

- Virtual machines must have access to a DNS server that resolves names to cluster IP addresses.
  Options include exposing the Kubernetes DNS server through an internal load balancer, using a
  [CoreDNS](https://coredns.io/) server, or configuring the IPs in any other DNS server
  accessible from the virtual machine.

- Installation must be completed using the [virtual machine installation](/docs/setup/install/virtual-machine) instructions.
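
To sanity-check this connectivity before going further, you can try to reach a pod IP directly
from the virtual machine. This is only a sketch: the `httpbin` workload and its container port
(80 for the standard `httpbin` sample) are assumptions based on the examples below.

{{< text bash >}}
$ kubectl get pod -l app=httpbin -o jsonpath='{.items[0].status.podIP}'
10.48.2.15
$ # then, from the virtual machine (the pod IP above is illustrative):
$ curl -s 10.48.2.15/headers
{{< /text >}}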

## Verify installation

After installation, the virtual machine can access services running in the Kubernetes cluster or in
other virtual machines. To verify the virtual machine connectivity, run the following command on the
virtual machine (assuming you have a service named `httpbin` on the Kubernetes cluster):

{{< text bash >}}
$ curl -v localhost:15000/clusters | grep httpbin
{{< /text >}}

Port 15000 is the Envoy admin interface on the virtual machine. The output shows the endpoints for `httpbin`:

{{< text text >}}
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_active::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_connect_fail::0
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::cx_total::1
outbound|8000||httpbin.default.svc.cluster.local::34.72.46.113:443::rq_active::0
{{< /text >}}

The IP `34.72.46.113` in this case is the pod IP address of the `httpbin` endpoint.
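
If you want to confirm which pod that address belongs to, you can cross-check it from the cluster
side; this assumes the standard `httpbin` sample labels:

{{< text bash >}}
$ kubectl get pods -l app=httpbin -o wide
{{< /text >}}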

### Send requests from virtual machine workloads to Kubernetes services

You can send traffic to `httpbin.default.svc.cluster.local` and get a response from the server.
You must configure DNS in `/etc/hosts` to map the `httpbin.default.svc.cluster.local` domain name
to an IP address, or the name will not resolve. The IP address should be one that is routed over
the single network using L3 connectivity; use the IP of the `httpbin` service in the Kubernetes cluster.
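
For example, a minimal way to set up that mapping (the service name and namespace come from this
example; the IP address will be whatever your cluster returns):

{{< text bash >}}
$ kubectl -n default get svc httpbin -o jsonpath='{.spec.clusterIP}'
10.55.240.87
$ # then, on the virtual machine:
$ echo "10.55.240.87 httpbin.default.svc.cluster.local" | sudo tee -a /etc/hosts
{{< /text >}}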

{{< text bash >}}
$ curl -v httpbin.default.svc.cluster.local:8000/headers
{{< /text >}}

### Running services on the virtual machine

1. Set up an HTTP server on the virtual machine to serve HTTP traffic on port 8080:

    {{< text bash >}}
    $ gcloud compute ssh ${GCE_NAME}
    $ python -m SimpleHTTPServer 8080
    {{< /text >}}
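
    `SimpleHTTPServer` is a Python 2 module. If your virtual machine only has Python 3, the
    equivalent command is:

    {{< text bash >}}
    $ python3 -m http.server 8080
    {{< /text >}}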

1. Determine the virtual machine instance's IP address. For example, find the IP address
   of the GCE instance with the following commands:

    {{< text bash >}}
    $ export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_NAME})
    $ echo ${GCE_IP}
    {{< /text >}}

    {{< warning >}}
    You may have to open firewalls to be able to access port 8080 on your virtual machine.
    {{< /warning >}}

1. Add virtual machine services to the mesh

    Add a service to the Kubernetes cluster into a namespace (in this example, `<vm-namespace>`)
    where you prefer to keep the resources (like `Service`, `ServiceEntry`, `WorkloadEntry`, and
    `ServiceAccount`) associated with the virtual machine services:

    {{< text bash >}}
    $ cat <<EOF | kubectl -n <vm-namespace> apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: cloud-vm
      labels:
        app: cloud-vm
    spec:
      ports:
      - port: 8080
        name: http-vm
        targetPort: 8080
      selector:
        app: cloud-vm
    EOF
    {{< /text >}}

    Create a workload entry with the external IP of the virtual machine. Substitute `VM_IP`
    with the IP of your virtual machine:

    {{< text bash >}}
    $ cat <<EOF | kubectl -n <vm-namespace> apply -f -
    apiVersion: networking.istio.io/v1beta1
    kind: WorkloadEntry
    metadata:
      name: "cloud-vm"
      namespace: "<vm-namespace>"
    spec:
      address: "${VM_IP}"
      labels:
        app: cloud-vm
      serviceAccount: "<service-account>"
    EOF
    {{< /text >}}
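
    To confirm that the mesh has picked up the new workload, you can list the workload entry and
    the matching service; the resource names below are the ones used in this example:

    {{< text bash >}}
    $ kubectl -n <vm-namespace> get workloadentry
    $ kubectl -n <vm-namespace> get service cloud-vm
    {{< /text >}}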

1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:

    {{< text bash >}}
@@ -270,13 +121,13 @@ The `server: envoy` header indicates that the sidecar intercepted the traffic.
    ...
    {{< /text >}}

1. Send a request from the `sleep` service on the pod to the virtual machine HTTP service:

    {{< text bash >}}
    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl cloud-vm.${VM_NAMESPACE}.svc.cluster.local:8080
    {{< /text >}}

    You will see output similar to this:

    {{< text html >}}
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>

@@ -297,23 +148,16 @@ the configuration worked.

## Cleanup

At this point, you can remove the virtual machine resources from the Kubernetes cluster in the
`<vm-namespace>` namespace.
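
For example, deleting the resources created in this example might look like this; the names match
the `cloud-vm` service and workload entry created above:

{{< text bash >}}
$ kubectl -n <vm-namespace> delete workloadentry cloud-vm
$ kubectl -n <vm-namespace> delete service cloud-vm
{{< /text >}}
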
## Troubleshooting

The following are some basic troubleshooting steps for common VM-related issues.

- When making requests from a VM to the cluster, ensure you don't run the requests as the `root`
  or `istio-proxy` user. By default, Istio excludes both users from interception.
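
    A quick way to see the difference, assuming the `httpbin` service is reachable as set up above:

    {{< text bash >}}
    $ curl httpbin.default.svc.cluster.local:8000/headers                      # intercepted by the sidecar
    $ sudo -u istio-proxy curl httpbin.default.svc.cluster.local:8000/headers  # bypasses interception
    {{< /text >}}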

- Verify that the machine can reach the IPs of all workloads running in the cluster. For example:

    {{< text bash >}}
    $ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
@@ -325,14 +169,14 @@ The following are some basic troubleshooting steps for common VM-related issues.
    html output
    {{< /text >}}

- Check the status of the Istio Agent and sidecar:

    {{< text bash >}}
    $ sudo systemctl status istio
    {{< /text >}}

- Check that the processes are running. The following is an example of the processes you should
  see on the VM if you run `ps`, filtered for `istio`:

    {{< text bash >}}
    $ ps aux | grep istio
@@ -341,7 +185,7 @@
    istio-p+ 7094 4.0 0.3 69540 24800 ? Sl 21:32 0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.vm-vm.svc.cluster.local
    {{< /text >}}

- Check the Envoy access and error logs for failures:

    {{< text bash >}}
    $ tail /var/log/istio/istio.log
    {{< /text >}}